Landmark Detection and Robot Tracking using SLAM

I completed this project as part of Udacity's Computer Vision Nanodegree program.

The goal of this project is to implement SLAM (Simultaneous Localization and Mapping) for a 2-dimensional world.

We combine knowledge of robot sensor measurements and movement to create a map of an environment from only sensor and motion data gathered by a robot, over time. SLAM provides a way to track the location of a robot in the world in real-time and identify the locations of landmarks such as buildings, trees, rocks, and other world features.
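The core idea behind Graph SLAM can be sketched with a simplified 1D example: motion and measurement constraints are accumulated into an information matrix `omega` and vector `xi`, and the best estimate of all poses and landmark positions is recovered by solving the resulting linear system. The function below is an illustrative sketch under those simplifying assumptions, not the project's actual 2D implementation.

```python
import numpy as np

def graph_slam_1d(motions, measurements, num_landmarks, initial_pos=0.0):
    """Solve a tiny 1D Graph SLAM problem (illustrative sketch).

    motions: list of robot displacements, one per time step
    measurements: per pose, a list of (landmark_index, distance) pairs
    Returns (estimated poses, estimated landmark positions).
    """
    n_poses = len(motions) + 1
    dim = n_poses + num_landmarks
    omega = np.zeros((dim, dim))  # information matrix
    xi = np.zeros(dim)            # information vector

    # Anchor the initial pose so the system has a unique solution.
    omega[0, 0] += 1.0
    xi[0] += initial_pos

    # Motion constraints: x_{t+1} - x_t = motion
    for t, m in enumerate(motions):
        omega[t, t] += 1.0
        omega[t + 1, t + 1] += 1.0
        omega[t, t + 1] -= 1.0
        omega[t + 1, t] -= 1.0
        xi[t] -= m
        xi[t + 1] += m

    # Measurement constraints: L_j - x_t = measured distance
    for t, obs in enumerate(measurements):
        for j, dist in obs:
            l = n_poses + j
            omega[t, t] += 1.0
            omega[l, l] += 1.0
            omega[t, l] -= 1.0
            omega[l, t] -= 1.0
            xi[t] -= dist
            xi[l] += dist

    # mu = omega^{-1} * xi gives the most likely poses and landmarks.
    mu = np.linalg.solve(omega, xi)
    return mu[:n_poses], mu[n_poses:]
```

For example, a robot starting at 0 that moves +5 and measures a single landmark at distances 10 and then 5 should recover poses near 0 and 5 and a landmark near 10. The 2D version used in the project extends the same matrix construction to separate x and y components.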

Below is an example of a 2D robot world with landmarks (purple x's) and the robot (a red 'o'), both located using only sensor and motion data collected by that robot.

Project Files

The project consists of three Python notebooks and one Python file.

The first two notebooks are for exploration of the code, and include a review of SLAM architectures. Notebook 3 and the robot_class.py file contain the project code that implements landmark detection and tracking using SLAM.
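To give a feel for what the robot class does, here is a minimal sketch of a 2D robot whose `sense()` method returns noisy relative measurements to landmarks within sensor range. The class name, attributes, and default values here are illustrative assumptions, not the exact API of the project's `robot_class.py`.

```python
import random

class Robot:
    """Minimal 2D robot sketch; names and defaults are illustrative."""

    def __init__(self, world_size=100.0, measurement_range=30.0,
                 motion_noise=0.2, measurement_noise=0.2):
        # Start the robot at the center of the square world.
        self.x = world_size / 2.0
        self.y = world_size / 2.0
        self.world_size = world_size
        self.measurement_range = measurement_range
        self.motion_noise = motion_noise
        self.measurement_noise = measurement_noise
        self.landmarks = []  # list of (x, y) landmark positions

    def sense(self):
        """Return a noisy (index, dx, dy) tuple for each landmark in range."""
        measurements = []
        for i, (lx, ly) in enumerate(self.landmarks):
            dx = lx - self.x + random.uniform(-1, 1) * self.measurement_noise
            dy = ly - self.y + random.uniform(-1, 1) * self.measurement_noise
            # Only landmarks within the sensor's range are observed.
            if abs(dx) <= self.measurement_range and abs(dy) <= self.measurement_range:
                measurements.append((i, dx, dy))
        return measurements
```

With noise set to zero, a robot at (50, 50) with a landmark at (55, 60) would sense `(0, 5.0, 10.0)`, while a landmark farther than `measurement_range` in either axis would be skipped. These measurements, together with the motion commands, are the only inputs the SLAM routine receives.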