Structured Classification for Inverse Reinforcement Learning
Abstract
This paper addresses the Inverse Reinforcement Learning (IRL) problem, a particular case of learning from demonstrations. The IRL framework assumes that an expert demonstrating a task acts optimally with respect to an unknown reward function that is to be discovered. Unlike most existing IRL algorithms, the proposed approach requires none of the following: complete trajectories from the expert, a generative model of the environment, knowledge of the transition probabilities, the ability to repeatedly solve the forward Reinforcement Learning (RL) problem, or access to the expert's policy everywhere in the state space. Using a classification approach in which the structure of the underlying Markov Decision Process (MDP) is implicitly injected, we end up with an efficient subgradient-descent-based algorithm. In addition, only a small amount of expert demonstrations is required, and not even in the form of trajectories: simple transitions suffice.

Keywords: inverse reinforcement learning, structured multi-class classification
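To convey the flavor of such an algorithm, here is a minimal sketch under assumptions of our own rather than the paper's exact formulation: suppose we observe expert transitions $(s_i, a_i)_{1 \le i \le N}$, and let $\mu(s, a)$ be a feature map through which the MDP structure enters (for instance, an estimate of the expert's feature expectations), with a linear score function $q_\theta(s, a) = \theta^\top \mu(s, a)$. A large-margin multi-class classification criterion then reads
\[
\mathcal{L}(\theta) \;=\; \frac{1}{N} \sum_{i=1}^{N} \Big[ \max_{a} \big( \theta^\top \mu(s_i, a) + \ell(s_i, a) \big) \;-\; \theta^\top \mu(s_i, a_i) \Big],
\]
where $\ell(s_i, a)$ is a margin term, e.g. $\ell(s_i, a) = \mathbf{1}[a \neq a_i]$. This objective is convex but non-smooth in $\theta$, which is why a subgradient method applies: writing $a_i^\star = \operatorname{argmax}_a \big( \theta^\top \mu(s_i, a) + \ell(s_i, a) \big)$, one subgradient of $\mathcal{L}$ is $\frac{1}{N} \sum_{i=1}^{N} \big( \mu(s_i, a_i^\star) - \mu(s_i, a_i) \big)$, so each descent step needs only the observed transitions and the feature map, never a model of the dynamics.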