The Better Navigator

by Synthiam

Improved version of The Navigator based on Hector SLAM, with more features and path finding.

How to add The Better Navigator robot skill

  1. Load the most recent release of ARC (Get ARC).
  2. Press the Project tab from the top menu bar in ARC.
  3. Press Add Robot Skill from the button ribbon bar in ARC.
  4. Choose the Navigation category tab.
  5. Press The Better Navigator icon to add the robot skill to your project.

Don't have a robot yet?

Follow the Getting Started Guide to build a robot and use The Better Navigator robot skill.

How to use The Better Navigator robot skill

A better navigation skill based on Hector SLAM, using ARC's NMS location/positioning and obstacle data. This skill works in combination with other skills that contribute navigation data to ARC's Navigation Messaging System (NMS). The lidar or depth camera data will create a map of the room(s) as the robot drives. You can then add waypoints that are saved with the project. You can have the robot automatically navigate by clicking on a waypoint (i.e., kitchen, sofa, or dining room). The robot will figure out a path to get there and avoid obstacles.


Sensor Requirements

This robot skill uses data submitted to the NMS. It requires a positioning source (Layer 3 Group 2) and a depth/lidar sensor (Layer 3 Group 1); pick one sensor from each group to use with this skill. Check the NMS manual for the list of compatible sensors.

Positioning Sensor (NMS L3G2)
This robot skill requires data for the SLAM pose hint. This is a suggested position where the SLAM algorithm should start looking in its map for the robot. Depending on the depth sensor you are using, the internal Hector SLAM can be used as the pose hint instead of a pose sensor.

If you wish to use a pose sensor, the best sensor is the Intel RealSense T265. This robot skill's algorithm will fuse the positioning sensor's data with the SLAM pose data, providing highly accurate pose telemetry. You may also get good pose prediction from a wheel encoder NMS sensor, such as the iRobot Roomba's. The NMS Faux Odometry will most likely not provide accurate pose data.

If you wish to use the internal Hector SLAM to provide its own pose hint, that can be done with supporting sensors. For example, the Hitachi and RPI Lidar robot skills both have an option to fake the pose hint event. In this case, set this robot skill's pose hint configuration to HECTOR and use only those lidar sensors.

Depth/Lidar Sensor (NMS L3G1)
This robot skill requires the NMS to have depth avoidance sensors providing multiple data points, such as a 360-degree lidar, Intel RealSense depth camera, or Microsoft Kinect. Ultrasonic distance sensors do not provide enough scan points for this robot skill on their own, but they can be added for additional scan information.


This screenshot uses an Intel RealSense T265 with a 360-degree lidar sensor. The robot was instructed to drive around the waypoints at various speeds.

User-inserted image

This screenshot uses only an RPI Lidar. The RPI Lidar robot skill is set to fake the pose hint event, and The Better Navigator is configured to use HECTOR as the pose hint.
User-inserted image

ARC Navigation Messaging System

This skill is part of the ARC Navigation Messaging System (NMS). We encourage you to read more about the Navigation Messaging System and its compatible skills. The Better Navigator operates on Level #1 of the NMS overview and requires a Level #3 Group #2 location/position sensor for operation. The location/positioning system feeds position data into the NMS, which this skill uses for navigation. See the NMS manual for compatible skills that provide location/position data.

User-inserted image


While your robot is driving around and navigating, this skill will log the trajectory. You define waypoints and path points by manually driving your robot to various locations (waypoints). Once multiple path points are defined for a waypoint, you can instruct your robot to autonomously navigate to that exact waypoint (or back again) at any time.

Map Size
The map is currently hardcoded to 20x20 meters.

Main Screen

User-inserted image

1) Map control buttons for clearing trajectory and clearing the map.

2) The robot's current cartesian coordinates as reported by an NMS Level #3 Group #2 sensor (i.e., Intel T265, wheel encoders).

3) Saved waypoints. Here you can add, remove and select waypoints.

4) The path points within a waypoint. A waypoint will consist of many path points for navigating throughout the environment. You may right-click on path points to edit the coordinate for fine-tuning. You may also re-order the path points by right-clicking and selecting Move Up or Move Down.

5) Current heading of the robot relative to the cartesian starting position as reported by an NMS Level #3 Group #2 sensor.

6) The yellow dot marks the robot's current cartesian position as reported by an NMS Level #3 Group #2 position/location sensor.

7) Path points are connected with a straight line demonstrating where the robot drives. Right-click on the map view and select Add Path Point to add path points. It is best to drive the robot first, which creates a trajectory, and then right-click on points of the trajectory to add new path points to the selected waypoint.

8) Log messages are displayed about navigation and sensor activity.

Main Screen - Navigation Controls
User-inserted image

This button manually starts navigating to the selected waypoint. You may also begin navigating by using ControlCommands from other skills, as shown below. When the robot is navigating, this button's behavior changes to stop navigating.
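
For example, a script in any other robot skill can start navigation with a ControlCommand. A minimal sketch ("Kitchen" is a hypothetical waypoint name; use one saved in your project):

// Start navigating to a saved waypoint from any script.
ControlCommand("The Better Navigator", "GoToWayPoint", "Kitchen");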


Config - Scripts
User-inserted image

1) Script that will execute when the navigation to a waypoint is started. Navigation can begin by manually pressing the Start button or using a ControlCommand().

2) Script that will execute when the navigation is canceled or successfully ended.

3) Script that will execute if navigation is paused, either by a JavaScript/Python command from the Navigation namespace or by the NMS Level #3 Group #1 distance sensor returning a value less than the specified range (configured in the Settings tab). A sample pause script follows below.
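
A minimal pause script might look like the sketch below. This is a hedged example: Movement.reverse(), Movement.stop(), and sleep() are assumed to be available in your ARC JavaScript environment, and Navigation.setNavigationStatusToNavigating() is the resume command documented in the Config - Navigation section.

// Hypothetical pause script: back away from the obstacle, then resume.
// Adjust the movement calls and timing for your robot.
Movement.reverse();
sleep(1000);
Movement.stop();
// Tell The Better Navigator to continue to the way-point.
Navigation.setNavigationStatusToNavigating();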

Config - Variables
User-inserted image

Many global variables are set by The Better Navigator. Press the question mark next to each variable for a detailed explanation. The variable contents can be viewed using the Variable Watcher skill found in the Scripts category. Unchecking Set Realtime Variables will save on performance if the variables are not used in your custom scripts; the same data is available in the NMS scripting engine namespace anyway.

Config - Navigation
User-inserted image

1) Disregard values lower than
Ignore distance values less than this specified distance in CM. The distance values are provided by any NMS Level #3 Group #1 sensor. If wires or a camera partially block the sensor, this setting ignores those readings.

2) Disregard values higher than
Ignore distance values further than this specified distance in CM. The distance values are provided by any NMS Level #3 Group #1 sensor. Many sensors are inaccurate at far distances, so you can ignore those values.

3) Pause Navigation Distance
If the NMS distance sensor provides a value greater than the "disregard lower than" setting but lower than this value, any navigation will be paused. This will also execute the PAUSE script from the Scripts tab. Your program may use this opportunity to navigate around the obstacle and continue navigating again. Use the JavaScript or Python command Navigation.setNavigationStatusToNavigating() from the Navigation namespace to continue navigating (see the pause script example in the Config - Scripts section above).

4) Pause Navigation Degrees
This value complements the pause navigation distance value. It determines the degree range in which an obstacle will pause navigation. If you wish for the entire range to be paused, enter 360 degrees. If you only want objects in front of the robot to pause navigation, enter 90. The degree number entered is divided by two and applied to the left and right of the robot's center:
- If 90 degrees is entered, then 45 degrees to the left and 45 degrees to the right of the robot's center are detected.
- If 180 degrees is entered, then 90 degrees to the left and 90 degrees to the right of the robot's center are detected.
- If 360 degrees is entered, the full range is detected.

5) Trajectory history count
Like a snake trail, a trail is left behind as the robot navigates. This is the number of history positions to keep; without a limit, the trail would grow forever and clutter the map.

6) Pose Frame Update Path Planning
To save CPU usage, the path planning will only update every X frames received from the L3G2 pose telemetry sensor.

7) Way-point Font Size
The size of the font for the way-point titles. Depending on how far you are zoomed in on the map, you may wish to change the font size.

8) Path planning resolution
A path consists of many micro way-points. This is the resolution at which those way-points are created. A value of 2 means a new way-point every 2 CM, and a value of 20 means a new way-point every 20 CM. The higher the number, the fewer the way-points and the less correcting the robot needs to make. However, if the value is too high, corners will be cut too close, and the robot may come in contact with them. You will recognize a lower resolution by fewer turns in the drawn path. (See the illustrative sketch after this section.)

Here is an example of a resolution of 2...
User-inserted image

Here is the same example of a resolution of 20...
User-inserted image

You can see how the lower resolution (higher value) caused the robot to drive into the corner. While having many micro way-points causes the robot to correct more often, it also prevents the robot from hitting corners. Finding a balance for your environment requires testing. 

9) Personal space size
This is the robot's personal space bubble used to keep away from walls and objects when path planning. A value of 50 would be a 50 CM square. If this value is too large, the robot may not have enough room to navigate and reach destinations. If the value is too small, the robot may touch walls or objects.
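
To illustrate what the path planning resolution does, here is a hypothetical sketch in plain JavaScript (illustrative only, not an ARC API) that resamples a path into micro way-points spaced resolutionCm apart:

// Illustrative only: resample a path into micro way-points spaced
// resolutionCm apart. A 100 CM straight segment yields 50 points at
// resolution 2, but only 5 points at resolution 20.
function resamplePath(points, resolutionCm) {
  var result = [points[0]];
  for (var i = 1; i < points.length; i++) {
    var dx = points[i].x - points[i - 1].x;
    var dy = points[i].y - points[i - 1].y;
    var dist = Math.sqrt(dx * dx + dy * dy);
    var steps = Math.max(1, Math.floor(dist / resolutionCm));
    for (var s = 1; s <= steps; s++) {
      result.push({
        x: points[i - 1].x + (dx * s) / steps,
        y: points[i - 1].y + (dy * s) / steps
      });
    }
  }
  return result;
}

The fewer points the robot has to line up with, the fewer corrections it makes, which is why a higher resolution value produces smoother but riskier paths.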

Configuration - Movement
User-inserted image

1) Forward speed
When navigating to a way-point, this is the speed for the forward movement. You do not want the robot to move too quickly when navigating; moving slowly increases pose telemetry accuracy. If the robot moves too quickly, it will lose its position. Have the robot move as slowly as you can to improve accuracy.

2) Turn speed
Similar to the Forward speed, this is the speed used for turning when navigating. 

3) Degrees of forgiveness
When navigating to way-points, a path is calculated. The path consists of many smaller way-points, and the robot must turn toward the next way-point before moving forward. This is the number of degrees of forgiveness for how accurately the robot must face the next way-point. Many robots lack turning accuracy, especially if they turn too quickly, so you may want this number to be higher. If the robot bounces back and forth attempting to line up with the next way-point, increase this value. (See the sketch after this section.)

4) Enable Dynamic Turning
This will allow the robot to turn in an arc toward the target rather than rotating on the spot. This requires the Movement Panel to support individual wheel speed control, such as the continuous rotation servo, hbridge PWM, sabertooth, dynamixel wheel mode, etc.

5) Dynamic Min & Max Speed
The minimum (slowest) speed for turning. For example, if turning hard left, the left wheel would spin at this speed (slowest), and the right wheel would spin at the Max (fastest) speed. Values between the min and max are used to dynamically calculate how much speed each wheel needs to turn in an arc.

6) Dynamic Turn Degrees
The robot will use dynamic turning if the next waypoint is fewer degrees away than this value. If the turn difference is higher than this value, the robot will use the standard rotate-on-the-spot turning. If the waypoint is 180 degrees behind the robot, it is more efficient to rotate on the spot toward the waypoint; if the waypoint is only 30 degrees to the right, the robot drives toward it on a slight radial path.
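
As a rough illustration of how degrees of forgiveness, dynamic turning, and the min/max speeds interact, here is a hypothetical sketch (plain JavaScript, illustrative only, not the skill's actual implementation):

// Illustrative decision logic for one navigation step.
// headingError: degrees between the robot's heading and the next micro way-point.
function chooseMovement(headingError, cfg) {
  var err = Math.abs(headingError);
  if (err <= cfg.degreesForgiveness) {
    // Close enough: drive straight at the configured forward speed.
    return { action: "forward", speed: cfg.forwardSpeed };
  }
  if (cfg.dynamicTurning && err <= cfg.dynamicTurnDegrees) {
    // Arc toward the way-point: the inside wheel slows toward the
    // dynamic min speed as the heading error grows.
    var t = err / cfg.dynamicTurnDegrees;
    var inner = cfg.dynamicMax - t * (cfg.dynamicMax - cfg.dynamicMin);
    return { action: "arc", innerWheel: inner, outerWheel: cfg.dynamicMax };
  }
  // Large error: rotate on the spot at the configured turn speed.
  return { action: "rotate", speed: cfg.turnSpeed };
}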

Configuration - Video
User-inserted image

1) Video Stream
The output of the map can be sent to a camera device. The camera device must be started and in CUSTOM mode; Custom can be selected in the camera device dropdown. This is useful if the map is displayed in a custom interface screen or a PiP in Exosphere.

Configuration - Advanced
User-inserted image

1) Navigation debugging
Outputs a noisy log while navigating, showing the distances and degrees needed to turn. Do not use this if you're trying to save on performance.

2) Pose data debugging
Outputs information about pose data received from the NMS. This is a very noisy log and not recommended if you're trying to save on performance.

3) Pose Hint Source
The Hector SLAM algorithm accepts a parameter suggesting the robot's position on the map. Because the NMS also accepts a sensor for pose data (i.e., wheel encoder, Intel RealSense T265, etc.), that data can be fused with the Hector calculation. You can use the external NMS sensor only, the Hector calculation only, an average of the two, or the difference of the external sensor added to the Hector value.

*Note: the Hector SLAM algorithm used in this robot skill requires many data points for accurate pose estimation. Many depth cameras may not provide enough scan data to rely on the Hector pose estimation calculation. In that case, use the External option to rely on an external sensor, or choose a depth sensor with more data points, such as a 360-degree lidar.

- Hector Only (Recommended with 360-degree Lidar only)
This relies on the Hector SLAM to calculate its own pose hint. It can be a reliable mapping option if your depth/scan/lidar sensor has enough data points for the Hector SLAM to accurately predict the robot's pose. 360-degree lidars, such as the RPI or Hitachi, provide enough data points for this option. To fake the pose hint event, enable that option in the depth/scan/lidar sensor robot skill's configuration.

- Differential (Requires external L3G2 pose sensor & 360-degree Lidar)
This adds the external sensor's change since the last pose update to the Hector pose hint. Essentially, only the difference since the external sensor's last update is used, and that delta is added to the Hector pose hint. Even if the external sensor has a high chance of error, the error is reduced because only small snapshots are used.

For example, a wheel encoder may drift out of sync within 60 cm of travel, but its value can be trusted within 5-10 cm. This mode keeps a history of the last pose update, subtracts it from the current pose value, and adds the resulting delta to the Hector pose. By doing so, the external sensor's pose error is reduced. (See the sketch after these options.)

- External Only
The pose hint source is the external NMS sensor, such as a wheel encoder or Intel RealSense. The Intel RealSense T265 may be the most accurate external positioning sensor available if you rely solely on the External option.

If you use a very noisy or unreliable sensor, such as the Faux Odometer, you may wish to use Hector only. That way, you are not giving the algorithm bad data to work with. Just make sure the depth sensor has enough data points, such as a 360-degree lidar.

- Average
This will average the Hector and External sensor positioning. Essentially, it's a combination of the two. This is not very accurate because it merely divides the error between both sensors by two; the error is less noticeable over time but still grows as either sensor's error increases.
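
The four options roughly correspond to the following arithmetic (an illustrative single-axis sketch, not the skill's actual code):

// Illustrative pose-hint arithmetic for a single axis.
// hector: pose estimated by Hector SLAM; ext: current external sensor pose;
// extLast: the external sensor pose at the previous update.
function poseHint(mode, hector, ext, extLast) {
  switch (mode) {
    case "Hector":       return hector;
    case "External":     return ext;
    case "Average":      return (hector + ext) / 2;
    case "Differential": return hector + (ext - extLast); // only the recent delta is trusted
  }
}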

Pose Hint Suggestions

We have a few suggestions for the pose hint value based on your robot's sensor configuration.

360 Degree Lidar Only (recommended)
- The Better Navigator Pose Hint should be set for Hector
- The 360-degree lidar configuration should be set to Fake Pose Hint Event (checked)

360 Degree Lidar with NMS L3G2 Pose Sensor (i.e., Wheel encoder, T265, etc.)
- The Better Navigator pose hint should be set for Differential
- The 360-degree lidar configuration should _not_ set a fake pose hint event (unchecked)
- The downfall to this sensor configuration is that the pose sensor can still result in mapping errors. This is noticeable when the map begins to shift.

Depth Camera (i.e., Kinect, Realsense, etc.) with NMS L3G2 Pose Sensor (i.e., Wheel encoder, T265, etc.)
- The Better Navigator pose hint should be set for External
- The downfall to this sensor configuration is that the depth camera does not provide enough data points for the SLAM to produce a pose hint. That means you will rely solely on the external NMS L3G2 pose sensor, which will increase errors over time. The solution is to combine the depth camera with a 360-degree lidar.

360 Degree Lidar, Depth Camera (i.e., Kinect, Realsense, etc.) with NMS L3G2 Pose Sensor (i.e., Wheel encoder, T265, etc.)
- The Better Navigator pose hint should be set for Differential
- The 360-degree lidar configuration should _not_ set a fake pose hint event (unchecked)

Starting Position

This navigation skill uses cartesian coordinates in CM from the starting position (0, 0). Any saved maps will be referenced from the same starting position and heading angle. When you re-load a project to have the robot navigate the same course, the robot must be positioned in the same starting position and heading angle. We recommend using painter/masking tape as the starting reference point for the robot. If your robot has an auto dock for charging, secure the charger to a fixed position on the floor, which can be used as a reference point. 

User-inserted image

In the photo above, we're using an iRobot Roomba with an Intel T265 positioning sensor. The painter's tape on the floor marks the robot's starting position. The outline allows us to position the robot in the square, and the marking on the front of the robot aligns with the specified heading.

Cartesian Coordinate System
This robot skill uses cartesian coordinates to reference the robot's starting position. The starting position is always 0,0 and is defined at startup. As the robot navigates, the skill measures the distance from the starting position. The unit of measurement is in CM (centimeters). Read more about the cartesian coordinate system on Wikipedia.

User-inserted image
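
As a quick illustration (a hypothetical helper, assuming heading 0 points along the +Y axis and angles increase clockwise), converting a distance driven at a heading into cartesian coordinates looks like this:

// Hypothetical conversion from (distance driven, heading) to cartesian CM.
// The axis convention here is an assumption for illustration.
function offsetFrom(x, y, distanceCm, headingDeg) {
  var rad = headingDeg * Math.PI / 180;
  return {
    x: x + distanceCm * Math.sin(rad),
    y: y + distanceCm * Math.cos(rad)
  };
}
// Starting at (0, 0) and driving 100 cm at heading 90 ends at roughly (100, 0).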

Example #1 (360 degree lidar only)

We'll use only a 360-degree lidar with this skill for navigation and mapping. Here's a screenshot of this type of setup. The Movement Panel is a continuous rotation panel using Robotis Dynamixel servos; however, any Movement Panel will do. The 360-degree lidar is an RPI Lidar A1.

User-inserted image

* Note: This example assumes you already have a Movement Panel and the robot can move.

1) Connect your lidar to the PC via the USB cable

2) Add the respective lidar robot skill to the project (in this case, we're using an RPI Lidar A1)

3) Configure the lidar robot skill and select the Fake Pose Hint Event option. (read the manual for the lidar robot skill for more information on that option)

4) Add The Better Navigator robot skill to your project

5) Configure The Better Navigator and select the pose hint to be HECTOR

6) Start the lidar robot skill

7) The map will begin to fill in. You can now slowly drive the robot around and watch the map grow.

8) Right-click on the map and select areas to navigate to.

Example #2 (Intel Realsense T265 & 360 degree lidar)

To get sensor data for mapping, other compatible skills must be loaded. In this quick example, we'll use the Intel RealSense T265 and a 360-degree lidar in combination with this skill. Here's a screenshot of this type of setup. The Movement Panel is a continuous rotation panel using Robotis Dynamixel servos; however, any Movement Panel will do. The 360-degree lidar is the Hitachi.
User-inserted image

* Note: This example assumes you already have a Movement Panel and the robot can move.

1) Connect your Intel RealSense T265 to the computer's USB port

2) Connect the distance sensor of choice (i.e., 360-degree lidar)

3) Load ARC 

4) Add the Intel RealSense skill to your workspace, select the port, and press Start

5) Add the 360-degree lidar robot skill to your workspace. Select the port and press Start

6) Now, add this skill (The Better Navigator) to your workspace

7) Configure The Better Navigator and specify the Pose Hint to be Differential

8) You will now see localization path and distance data from the Intel RealSense sensor displayed in The Better Navigator window, which renders the data.

Example #3 (Interact With Speech Recognition)

The list of waypoints is added to an array. That array can be used with the WaitForSpeech() command in EZ-Script, JavaScript, or Python. This example shows how to use it with JavaScript. Add this code snippet to a speech recognition phrase script. In this example, the phrase we'll add to the speech recognition robot skill is "Robot go somewhere."


// Ask where to go, then wait up to 5 seconds for one of the saved waypoints.
Audio.sayWait("Where would you like me to go?");

var dest = Audio.waitForSpeech(5000, getVar("$TheNavSavedWayPoints"));

if (dest != "timeout") {
  Audio.say("Navigating to " + dest);
  ControlCommand("The Better Navigator", "GoToWayPoint", dest);
}

User-inserted image

With the code inserted, speak the phrase "Robot go somewhere." The robot will ask, "Where would you like me to go?" and display the list of stored waypoints. Speak one of the waypoints, and the robot will begin navigating.
User-inserted image



This initial release does not have the path planning implemented yet. There are also several silly issues with navigating. 

The neat thing is that if a lidar or depth camera is used, the odometry can be assisted with either the Faux Odometry robot skill or wheel encoders. It has much better pose estimation than The Navigator robot skill.
Not yet - that will be the next skill to be incorporated.
I'd like to input a map into The Better Navigator. What type of file does it accept?
I've never used this skill or any kind of navigational programs. However, after reading through a lot of the above instructions and looking at the links provided, I've come to the conclusion that you can't load your own maps made outside of these skills. It sounds like you can only load maps that have been made and saved by other compatible skills?


The lidar or depth camera data will create a map of the room(s) as the robot drives. 


To get sensor data for mapping, other skills must be loaded that are compatible.
Thanks for the info, Dave. Maybe this would be an area where an update would be appropriate. I'm thinking of using the camera overlay or an actual CAD drawing overlay and having them work together. I have it navigating properly now but am always looking for the next step. I'm working on Camera Pose, but there are different issues to work through: multiple WiFi networks or WiFi extenders, and how to use wireless cameras without using their app, because an unscrupulous app can have access to all of your data, which can be very bad. Have a Merry Christmas!
You can’t draw a map. That would be impossible considering how this works. If you research the SLAM algorithm, you’ll understand its complexities. It is doing very advanced real-time analysis of the environment. If you drew a map, it would be torn apart by the algorithm immediately.
Ok, understood. When you say it's very involved behind the scenes, I believe it. Have a Merry Christmas!
Thanks! You as well - wish your family a great holiday.
Dear friends and robot builders,
Honest greetings and respect.

I need to implement a Hector SLAM Navigation Messaging System (NMS) for indoor navigation on a 2-wheel robot platform. I'm in Africa and running low on resources; I can't get Intel depth cameras. I'm working on a medical robot project (nonprofit). Here is my available hardware:
1 - Hitachi-LG LDS Lidar
2 - Kinect Xbox 360
3 - 2-Wheel Encoder Counter
4 - Win10 companion computer (LattePanda)
I am afraid of working with the Kinect Xbox 360 system; I have seen some comments related to its inaccuracy, but I am forced to use it because I have no other alternative.
Based on these components, can I build the Hector SLAM navigation (Level 3) system?
@DJ Sures
In the beginning, just keep it simple until you get it working. You don't need the encoders or the Kinect for now. You will need the lidar working properly. DJ made a good video for The Better Navigator that you should study closely. In the video, you will see how to check Hector, which bypasses the encoder and Kinect. I have it working, and it's fine with just the lidar. I will be adding more sensors as needed. Good luck!
I have been trying to set the acceleration to a more moderate level for when it rotates on the spot and then goes toward its position, but nothing seems to work. Just wondering if the Movement Panel even allows for accel/decel, because it would interfere with the forward navigation and slight changes of direction. It would help if it would work at least when it rotates on the spot and then accelerates, as my bot jumps a bit after it rotates. This kind of throws the lidar out of whack as it bounces. The speed is not even set all that high.
What sensor are you using when you say "nothing seems to work"? Can you please expand on what sensor you use with this robot skill? Also, the acceleration (if supported by your movement panel) would not affect the navigation. The movement panels are integrated with Synthiam ARC, and this robot skill does not need to know how the robot is moving; it gets its information from sensors, not the movement panel.
I'm using a 360 lidar with The Better Navigator. Can you have a continuous rotation servo with the same D0 designation and then change the acceleration, or does the Movement Panel have a higher hierarchy that doesn't let anything else affect it?
I don’t understand the question. The only robot skill that should move the servo is the movement panel. There’s no hierarchy, there’s movement panels.
OK, I understand, but in the Movement Panel there is no means for acceleration/deceleration. So it is full user-set speed whenever it stops and starts.
Yes, I understand there's a speed setting.
Ok, so I hear you saying that you want the servos to "ramp" up to speed slowly and then "ramp" back down slowly when the move is stopping. It also sounds like you want to be able to control how fast or slow the ramping is. You don't want the servos to jump to full speed or suddenly stop when the move is starting or is complete?
Here is the scenario: the bot has 10" wheels, it goes to a waypoint and has to do a 170-degree turnaround. It then rotates on itself (one wheel turns one way, the other turns the other way), so coming out of, say, a 70-speed rotation it then goes 100 speed forward. Well, one of those wheels is going backwards, so at one point you actually add the two together (170 for a split second), and it makes the front two wheels come off the ground. If acceleration/decel is not an option, a possible simple fix is to add a 1-2 second delay between the rotation and the forward movement, but that would have to be done behind the scenes.
Movement panels for servos do not control acceleration - the acceleration value is a parameter for the servo. It can be assigned with the Servo.SetAcceleration and Servo.SetVelocity JavaScript commands.

Acceleration for servos would need to be managed by the EZB or servo controller. Servos use high-speed PWM that is far too fast for a PC to generate, which is why they need a microcontroller. The acceleration is included with that PWM, so you'd need to use a controller that supports acceleration.

ARC has an acceleration parameter for controllers that support it. This is documented on the servo Control page.

For servo controllers that support acceleration, I am pretty sure the Dynamixel, LewanSoul, Lynxmotion, and Pololu Maestro do, and maybe the Kondo. Of the ones driving PWM servos, I think the Pololu is the only one with built-in acceleration. You'd have to closely examine what's available in the robot skill section and EZB section.
Also, watch this tutorial video I made to see if there are settings you're missing.
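
If your controller supports it, a minimal sketch using those commands might look like this (the port and values are hypothetical placeholders; check the servo Control page for what your EZB supports):

// Hypothetical example: soften the ramp on the D0 servo before moving.
// Servo.SetAcceleration/Servo.SetVelocity are the commands named above;
// the port constant and value ranges depend on your controller.
Servo.SetAcceleration(d0, 10);
Servo.SetVelocity(d0, 50);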

Ahh, that's what I was looking for - thanks for the explanation. It's right there in Blockly. Thanks!
Ya DJ, that really is a mind-blowing video you did showing how the robot can change direction and find a new pathway home using only the lidar. Every time I lose my interest in robotics for a while, all I need to do is watch one of your videos on how easy it is to make a robot seem super intelligent! I start catching the robot addiction bug again, LOL!