Mouhssine Rifaki

I am an incoming PhD student at Imperial College London, in the Department of Electrical and Electronic Engineering, starting Fall 2026.

My research aims to show that exploiting latent structure in Markov decision processes can reduce the sample complexity of reinforcement learning by orders of magnitude, enabling agents that learn effectively from very few interactions. My broader interests lie at the intersection of reinforcement learning, control theory, and multi-agent systems.

I hold a BSc in Mathematics from Sorbonne University and am completing the MVA master's at ENS Paris-Saclay.


Research

I work on sample complexity reduction in reinforcement learning by exploiting latent structure in Markov decision processes, including low-rank Q-functions, spectral gaps, and manifold geometry. I am interested in:

- low-rank matrix completion for recovering Q-functions from sparse samples via leveraged CUR decompositions;
- two-to-infinity subspace recovery for the row-wise error control needed in policy extraction;
- score matching in learned latent spaces for diffusion world models, with planning guarantees that scale with intrinsic dimension;
- kernel spectral methods for continuous block MDP decoding;
- reducing the demonstration complexity of imitation learning when the expert's Q-function has low-rank structure.
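As a toy illustration of the first of these directions, here is a minimal numpy sketch of recovering an exactly low-rank Q-matrix from a leverage-sampled subset of its rows and columns via a CUR decomposition. All dimensions and names are hypothetical, and the leverage scores are computed from the full matrix purely for illustration; in a learning setting they would have to be estimated from samples.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, r = 200, 50, 5  # number of states, number of actions, rank (toy values)

# Synthetic exactly rank-r Q-function, Q[s, a] = value of action a in state s.
Q = rng.standard_normal((S, r)) @ rng.standard_normal((r, A))

def leverage_scores(M, k, axis):
    """Rank-k leverage scores along rows (axis=0) or columns (axis=1)."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    B = U[:, :k] if axis == 0 else Vt[:k].T
    return (B ** 2).sum(axis=1) / k  # sums to 1, usable as sampling weights

# Sample O(r) rows (states) and columns (actions) by leverage score.
rows = rng.choice(S, size=3 * r, replace=False, p=leverage_scores(Q, r, 0))
cols = rng.choice(A, size=3 * r, replace=False, p=leverage_scores(Q, r, 1))

C = Q[:, cols]                                  # observed columns
R = Q[rows, :]                                  # observed rows
W_pinv = np.linalg.pinv(Q[np.ix_(rows, cols)])  # intersection block, pseudoinverted

# CUR reconstruction from only O((S + A) r) observed entries of Q.
Q_hat = C @ W_pinv @ R
err = np.linalg.norm(Q - Q_hat) / np.linalg.norm(Q)
```

Because the intersection block has the same rank as Q here, the CUR reconstruction is exact up to floating-point error; with noisy or approximately low-rank Q-matrices the interesting question becomes how the row-wise error degrades, which is where the two-to-infinity analysis above comes in.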

News

Publications