SLAM
Last updated
In this article, we attempt to give a high-level presentation of SLAM (Simultaneous Localization and Mapping), based on the book Probabilistic Robotics. For simplicity, the mathematical derivations are omitted: they are technical details and are not strictly necessary for understanding the main idea of SLAM.
Let's start with the Bayes Filter algorithm.
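Since the listing itself is not reproduced here, a minimal sketch of one Bayes Filter update over a discrete state space may help. All names below (bayes_filter, motion_model, measurement_model) are my own, not the book's pseudocode:

```python
def bayes_filter(belief, u, z, states, motion_model, measurement_model):
    """One update step of the Bayes Filter over a finite state space.

    belief: dict mapping state -> probability at time t-1
    u, z:   control and measurement at time t
    motion_model(x, u, x_prev):  p(x_t | u_t, x_{t-1})
    measurement_model(z, x):     p(z_t | x_t)
    """
    # Prediction step: push the prior belief through the motion model.
    predicted = {
        x: sum(motion_model(x, u, x_prev) * belief[x_prev] for x_prev in states)
        for x in states
    }
    # Correction step: weight by the measurement model, then normalize.
    unnormalized = {x: measurement_model(z, x) * predicted[x] for x in states}
    eta = sum(unnormalized.values())
    return {x: p / eta for x, p in unnormalized.items()}
```

The continuous version replaces the sum in the prediction step with an integral; concrete filters (Kalman, particle) are different ways of making that integral tractable.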
Here x_t, u_t, and z_t represent the state, control, and measurement respectively. The term p(x_t | u_t, x_{t-1}) in the prediction step is called the motion model, and it describes the state transition given a specific control. The term p(z_t | x_t) in the correction step is called the measurement model (or perception model), and it describes the (expected) distribution of sensor data given the (current) state of the robot. Obviously, the motion model depends on how the robot moves and the measurement model depends on the sensors. The details of these two parts are not super important in our discussion of the SLAM algorithm.
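As a concrete (made-up) instance of these two models, consider a robot on a 1D line whose control is a commanded displacement and whose sensor measures position directly. The Gaussian noise parameters below are illustrative assumptions, not values from the book:

```python
import math

def gaussian(x, mean, sigma):
    # Density of a 1D normal distribution at x.
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def motion_model(x, u, x_prev, sigma_motion=0.5):
    # p(x_t | u_t, x_{t-1}): the new pose is the old pose plus the
    # commanded displacement u, corrupted by Gaussian noise.
    return gaussian(x, x_prev + u, sigma_motion)

def measurement_model(z, x, sigma_sensor=0.2):
    # p(z_t | x_t): the sensor reads the pose directly with Gaussian noise.
    return gaussian(z, x, sigma_sensor)
```

Swapping in a wheeled robot's odometry model or a range-bearing sensor changes only these two functions; the filter itself stays the same.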
The Bayes Filter algorithm listed above is such a general framework that three out of four parts of the book Probabilistic Robotics discuss its applications. It can be used to solve both localization problems and mapping problems. One of the reasons why it's so powerful is that the state is a general concept. If the state consists of the 2D robot pose, the Bayes Filter solves the localization problem; if the state consists of both the robot pose and the map, the Bayes Filter solves the localization and mapping problems simultaneously (SLAM).
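To make the generality of the state concrete, here is a hedged sketch of the two state choices; the type names are mine, and the map here is feature-based (a list of landmark positions):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Pose:
    # 2D robot pose: position plus heading.
    x: float
    y: float
    theta: float

@dataclass
class LocalizationState:
    # Localization only: the state is just the robot pose;
    # the map is assumed to be known and fixed.
    pose: Pose

@dataclass
class SlamState:
    # SLAM: the state carries both the pose and the map, so the
    # same Bayes Filter machinery estimates both at once.
    pose: Pose
    landmarks: List[Tuple[float, float]] = field(default_factory=list)
```

The price of the richer state is dimensionality: the SLAM posterior lives over the pose and every landmark jointly, which is what makes SLAM harder than pure localization.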
TODO
A SLAM algorithm has the following dimensions:
Motion models
Measurement models
Online SLAM vs full SLAM
Feature-based map vs grid
The first two dimensions are not specific to SLAM; as we've seen earlier, they are part of the general Bayes Filter algorithm. Online SLAM means our target is the snapshot at time t, the posterior p(x_t, m | z_{1:t}, u_{1:t}), while in full SLAM the target is the full history, p(x_{1:t}, m | z_{1:t}, u_{1:t}). As a practical consideration, a full SLAM algorithm requires more memory because it needs to track the full trajectory, so special attention is required in the implementation. The last dimension is about map representation. Most of the algorithms described in the book use a feature-based map, and with a feature-based map we can further divide the problem into two categories: (1) problems with known correspondence, and (2) problems with unknown correspondence. Known correspondence means that when we receive sensor data, we know it is a measurement of a specific feature in the environment. This correspondence information is not always available. For example, if the robot is put in a completely unknown environment, or the environment has many symmetric structures, it's not an easy task to establish the correspondence between the sensor data and a location in the environment. When the correspondence is unknown, the SLAM algorithm needs to take care of two more tasks:
how to identify the correspondence
how to decide whether a measurement comes from a new feature that we haven't seen before
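A common way to handle both tasks (one option among several, not necessarily the book's derivation) is maximum-likelihood data association with a threshold for declaring a new feature. Everything below, including the threshold, is an illustrative assumption:

```python
def associate(z, landmarks, measure_likelihood, new_feature_threshold):
    """Pick the landmark index that best explains measurement z,
    or return None to signal a previously unseen feature.

    measure_likelihood(z, landmark) returns p(z | landmark); the
    threshold value is application-specific.
    """
    best_idx, best_lik = None, 0.0
    for i, landmark in enumerate(landmarks):
        lik = measure_likelihood(z, landmark)
        if lik > best_lik:
            best_idx, best_lik = i, lik
    if best_lik < new_feature_threshold:
        # No known feature explains z well enough: treat it as new
        # and let the caller extend the map.
        return None
    return best_idx
```

Greedy maximum-likelihood association can still commit to wrong matches in symmetric environments, which is exactly why unknown correspondence makes SLAM substantially harder.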
In the next section, we follow the book Probabilistic Robotics and provide a brief discussion of three SLAM algorithms.
TODO
TODO
TODO