Official Tensorflow implementation of 'Goal-Conditioned End-to-End Visuomotor Control for Versatile Skill Primitives'
Updated Jul 21, 2021 - Python
Movement Primitive Diffusion (MPD) is a diffusion-based imitation learning method for high-quality robotic motion generation that focuses on gentle manipulation of deformable objects.
Using visual data alone, we learn a control policy for a robotic arm from expert video demonstrations. We implement and test several models, achieving an 85% success rate on a pick-and-place task.
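A goal-conditioned visuomotor policy of the kind these repositories describe can be sketched as a network that maps a current camera image plus a goal image to a low-dimensional arm action. The sketch below is illustrative only, assuming TensorFlow/Keras; the layer sizes, the 4-dimensional action space, and the `build_policy` helper are hypothetical and not taken from any of the listed repositories.

```python
import numpy as np
import tensorflow as tf

def build_policy(image_shape=(64, 64, 3), action_dim=4):
    """Hypothetical goal-conditioned policy: (current image, goal image) -> action."""
    current = tf.keras.Input(shape=image_shape, name="current_image")
    goal = tf.keras.Input(shape=image_shape, name="goal_image")
    # Share one small CNN encoder between the current view and the goal view.
    encoder = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 5, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
    ])
    # Concatenate the two embeddings and regress a continuous arm action.
    features = tf.keras.layers.Concatenate()([encoder(current), encoder(goal)])
    hidden = tf.keras.layers.Dense(64, activation="relu")(features)
    action = tf.keras.layers.Dense(action_dim, name="action")(hidden)
    return tf.keras.Model(inputs=[current, goal], outputs=action)

policy = build_policy()
batch = np.zeros((2, 64, 64, 3), dtype=np.float32)  # two dummy image pairs
actions = policy.predict([batch, batch], verbose=0)
print(actions.shape)  # one action vector per image pair
```

In an imitation-learning setup such a network would be trained with a regression loss against the expert's actions from the video demonstrations; the goal image conditions the same network to perform different skills.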