Navigation Messaging System
The Navigation Messaging System (NMS) is built into the ARC framework. It allows skills to push data (location and obstacle detection) into the messaging system, and mapping/navigation (SLAM) skills can then subscribe to the navigation events. The NMS was designed to make navigation transparent to the user in ARC: you do not have to write any complicated code, because the robot skills do the work of navigation and obstacle avoidance.
The NMS stack consists of sensors that feed data into the messaging system, which are received by mapping skills.
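Conceptually, this stack is a publish/subscribe pipeline: sensor skills publish events, and mapping skills subscribe to them. The sketch below is not the ARC API; the class and method names are hypothetical, and it only illustrates the flow of messages from sensors to subscribers:

```python
# Hypothetical sketch of a publish/subscribe flow like the NMS -- the class
# and method names here are illustrative, NOT the actual ARC API.
from collections import defaultdict

class MessagingSystem:
    """Routes sensor events to any skill subscribed to that event type."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, callback):
        # e.g. a SLAM mapping skill subscribes to "pose" and "scan" events
        self._subscribers[event_type].append(callback)

    def publish(self, event_type, payload):
        # e.g. a localization skill pushes the robot's pose into the system
        for callback in self._subscribers[event_type]:
            callback(payload)

# A mapping skill subscribes, then a sensor skill publishes:
nms = MessagingSystem()
received = []
nms.subscribe("pose", received.append)
nms.publish("pose", {"x_cm": 120, "y_cm": -40})
print(received)  # [{'x_cm': 120, 'y_cm': -40}]
```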
Level #1 - Map & Navigation
These skills view the sensor data and organize it into a map for navigating. A few skills do this: they subscribe to the sensor event data and assemble a map, then let you click on the map and instruct the robot where to navigate (either to a way-point or to any destination, depending on your sensor data). A few navigation skills are...
Level #3 - Sensor Groups
The skills in this group are sensors that feed data into the messaging system, from which the maps are built. The navigation messaging system collects data from two sensor input groups...
- Group #1 - Lidar/obstacle scanners (optional): any kind of sensor that can detect obstacles (laser, 360-degree lidar, ultrasonic, IR, etc.). The sensor detects objects and marks on the map where the robot cannot go (i.e., walls, sofas, chairs, etc.). Supported skills are...
- Group #2 - Localization sensors: special sensors that track the robot's position (pose) in the real world (IPS, wheel encoders, Roomba movement panel, Intel RealSense T265, etc.). *Note: only one Group #2 sensor can be active at a time.
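The two groups can be pictured as two message shapes: Group #1 reports obstacle readings relative to the robot, while Group #2 reports the robot's pose relative to its start. The field names below are assumptions for illustration, not the actual NMS payload format:

```python
# Illustrative message shapes for the two sensor groups -- the field names
# are assumptions for explanation, NOT the actual NMS payload format.
from dataclasses import dataclass
from typing import List

@dataclass
class ObstacleScan:            # Group #1: lidar/ultrasonic/IR obstacle scanner
    angles_deg: List[float]    # direction of each reading, relative to the robot
    distances_cm: List[float]  # measured distance to the nearest obstacle

@dataclass
class Pose:                    # Group #2: localization sensor (only one active)
    x_cm: float                # position relative to the 0,0 starting position
    y_cm: float
    heading_deg: float

# Example messages a mapping skill might receive:
scan = ObstacleScan(angles_deg=[0.0, 90.0], distances_cm=[150.0, 42.5])
pose = Pose(x_cm=120.0, y_cm=-40.0, heading_deg=90.0)
```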
What Sensors Do You Need?
Generally, a Level #1 navigator requires a Level #3 Group #2 sensor for navigating, because that is the sensor that knows where the robot is. Sensor group #1 is optional because it is used to create a no-go map of places the robot cannot go. However, your robot can still navigate pre-recorded paths with only sensor group #2. It is up to you to decide how important autonomous navigation is versus pre-recorded paths.
For Pre-recorded Paths
This option uses only sensor group #2. The robot can only drive pre-recorded paths to pre-defined way-points within the map.
For Autonomous Navigation
This option uses both sensor groups #1 & #2. The robot can drive any path to get to any destination within the map.
Cartesian Coordinate System
This robot skill uses cartesian coordinates to reference the robot's position from the starting position. The starting position is always 0,0 and is defined at startup. As the robot navigates, the skill measures the distance from the starting position. The unit of measurement is in CM (centimeters). Read more about the cartesian coordinate system on Wikipedia.
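For example, the robot's straight-line distance from the 0,0 starting position follows directly from its cartesian coordinates. This is a generic Euclidean-distance calculation, not an ARC function:

```python
# Generic cartesian distance calculation -- not an ARC function.
import math

def distance_from_start(x_cm: float, y_cm: float) -> float:
    """Euclidean distance (in cm) from the 0,0 starting position."""
    return math.hypot(x_cm, y_cm)

# A robot at (300, 400) cm is 500 cm (5 m) from where it started.
print(distance_from_start(300, 400))  # 500.0
```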
Navigation, localization, and known-pose positioning of robots have been researched and documented by many experts. While the Intel T265 has demonstrated the greatest success at inside-out localization, many other methods have been attempted over the years. The topic of research into robot navigation is far too vast to be covered in this manual. However, we'll leave you with a few papers that have greatly influenced the robotics industry in this field.
- Where Am I? - Systems and Methods for Mobile Robot Positioning by J. Borenstein, H. R. Everett, and L. Feng
- Carleton Winter 2012 Position Estimation presentation