# Sample-efficient Adversarial Imitation Learning from Observation

Faraz Torabi, Sean Geiger, Garrett Warnell, Peter Stone

## Abstract

Imitation from observation is the problem of learning to perform tasks by observing demonstrated state-only trajectories.

Recently, adversarial approaches have achieved significant performance improvements over other methods for imitating complex behaviors.

However, these adversarial imitation algorithms often require many demonstration examples and learning iterations to produce a policy that is successful at imitating a demonstrator’s behavior.

This high sample complexity often prohibits these algorithms from being deployed on physical robots.
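
For context beyond the abstract: adversarial imitation from observation methods such as GAIfO (by several of the same authors) train a discriminator to separate the expert's state transitions (s, s') from the imitator's, and feed the discriminator's output back to the policy as a reward. Below is a minimal numpy sketch of that idea; the logistic-regression discriminator, array shapes, and toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_discriminator(expert, policy, lr=0.1, steps=200):
    # Logistic-regression discriminator over state-transition features:
    # label expert transitions 1, imitator transitions 0 (illustrative stand-in
    # for the neural discriminator used in adversarial imitation).
    dim = expert.shape[1]
    W, b = np.zeros(dim), 0.0
    X = np.vstack([expert, policy])
    y = np.concatenate([np.ones(len(expert)), np.zeros(len(policy))])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid of logits
        grad_logit = p - y                      # d(binary cross-entropy)/d(logit)
        W -= lr * X.T @ grad_logit / len(y)
        b -= lr * grad_logit.mean()
    return W, b

def imitation_reward(W, b, transitions):
    # One common choice of adversarial reward: r = -log(1 - D(s, s')),
    # so the policy is rewarded for transitions the discriminator
    # mistakes for expert behavior.
    p = 1.0 / (1.0 + np.exp(-(transitions @ W + b)))
    return -np.log(1.0 - p + 1e-8)

# Toy demo: "expert" transitions cluster away from random "policy" transitions.
expert = rng.normal(loc=1.0, size=(256, 4))  # concatenated (s, s') features
policy = rng.normal(loc=0.0, size=(256, 4))
W, b = train_discriminator(expert, policy)
print(imitation_reward(W, b, policy[:5]))
```

The key point this sketch illustrates is that the reward is derived entirely from states, with no access to the expert's actions, which is what makes the setting "imitation from observation".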

In this paper, we propose an algorithm that addresses the sample inefficiency problem by utilizing ideas from trajectory-centric reinforcement learning algorithms.
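
A common building block of trajectory-centric methods is a finite-horizon LQR solve along the current trajectory. As a rough illustration of that ingredient only, here is a minimal sketch assuming known linear dynamics x_{t+1} = A x_t + B u_t and quadratic costs; the double-integrator matrices below are a hypothetical stand-in for fitted local dynamics, not the paper's setup.

```python
import numpy as np

def lqr_gains(A, B, Q, R, horizon):
    # Finite-horizon discrete LQR via the backward Riccati recursion.
    # Returns gains K_t so that u_t = -K_t x_t minimizes
    # sum_t (x_t' Q x_t + u_t' R u_t) subject to x_{t+1} = A x_t + B u_t.
    P = Q.copy()
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # gains ordered t = 0 .. horizon-1

# Toy double-integrator: position/velocity state, force input.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)
R = 0.1 * np.eye(1)

Ks = lqr_gains(A, B, Q, R, horizon=50)
x = np.array([1.0, 0.0])  # start displaced from the origin
for K in Ks:
    u = -K @ x
    x = A @ x + B @ u
print(x)  # state driven toward zero
```

Because each LQR solve produces a whole time-varying feedback controller from a single model fit, this style of update tends to need far fewer environment samples than model-free policy gradients, which is the sample-efficiency lever the abstract alludes to.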

We test our algorithm on an imitation task with a physical robot arm and its simulated counterpart in Gazebo, and show the resulting improvements in learning rate and sample efficiency.