The Better Navigator

by Synthiam

Improved version of The Navigator based on Hector SLAM, with more features and path finding.

How to add The Better Navigator robot skill

  1. Load the most recent release of ARC (Get ARC).
  2. Press the Project tab from the top menu bar in ARC.
  3. Press Add Robot Skill from the button ribbon bar in ARC.
  4. Choose the Navigation category tab.
  5. Press The Better Navigator icon to add the robot skill to your project.

Don't have a robot yet?

Follow the Getting Started Guide to build a robot and use The Better Navigator robot skill.

How to use The Better Navigator robot skill

A better navigation skill based on Hector SLAM, using ARC's NMS location/positioning and obstacle data. This skill works alongside other skills that contribute navigation data to ARC's Navigation Messaging System (NMS). The lidar or depth camera data builds a map of the room(s) as the robot drives. You can then add way-points that are saved with the project, and have the robot navigate automatically by clicking on a way-point (e.g., kitchen, sofa, or dining room). The robot will figure out a path to get there and avoid obstacles.



Sensor Requirements


This robot skill uses data submitted to the NMS. It requires a positioning source (Layer 3 Group 2) and a depth/lidar sensor (Layer 3 Group 1); pick one sensor from each group to use with this skill. Check the NMS manual for a list of compatible sensors: https://synthiam.com/Support/ARC-Overview/robot-navigation-messaging-system


Positioning Sensor (NMS L3G2)
This robot skill requires data for the SLAM pose hint. This is a suggested position where the SLAM should start looking in its map for where the robot might be. Depending on the depth sensor you are using, the internal Hector SLAM can be used as the pose hint instead of a pose sensor.

If you wish to use a pose sensor, the best sensor is the Intel RealSense T265. This robot skill's algorithm will fuse the positioning sensor's data with the SLAM pose data, providing highly accurate pose telemetry. You may also get good pose prediction from a wheel-encoder NMS sensor, such as the iRobot Roomba. The NMS Faux Odometry will most likely not provide accurate pose data.

If you wish to have the internal Hector SLAM provide its own pose hint, that can be done with supporting sensors. For example, the Hitachi and RPI Lidar robot skills both have an option to fake the pose hint event. In this case, set this robot skill's pose hint configuration to HECTOR and use only those lidar sensors.

Depth/Lidar Sensor (NMS L3G1)
This robot skill requires the NMS to have depth avoidance sensors providing multiple data points, such as a 360-degree lidar, Intel RealSense depth camera, or Microsoft Kinect. Ultrasonic distance sensor data does not provide enough scan points on its own for this robot skill, but it can be added for additional scan information.


Example


This screenshot uses an Intel RealSense T265 with a 360-degree lidar sensor. The robot was instructed to drive around the waypoints at various speeds.

User-inserted image


This screenshot uses only an RPI Lidar. The RPI Lidar robot skill is set to fake the pose hint event, and The Better Navigator is configured to use HECTOR as the pose hint.
User-inserted image




ARC Navigation Messaging System


This skill is part of the ARC Navigation Messaging System (NMS). You are encouraged to read more about the Navigation Messaging System and learn about compatible skills. The Better Navigator operates on Level #1 of the NMS overview and requires a Level #3 Group #2 location/position sensor for operation. The location/positioning system feeds position data into the NMS, which this skill uses for navigation. See the NMS manual for compatible skills that provide location/position data.

User-inserted image



Mapping


While your robot is driving around and navigating, this skill will log the trajectory. You define waypoints and their path points by manually driving your robot to various locations (waypoints). Once multiple path points are defined for a waypoint, you can instruct your robot to autonomously navigate to that exact waypoint (or back again) at any time.

Map Size
The map size is currently hardcoded at 20x20 meters.


Main Screen


User-inserted image

1) Map control buttons for clearing trajectory and clearing the map.

2) The robot's current cartesian coordinates as reported by an NMS Level #3 Group #2 sensor (i.e., Intel T265, wheel encoders).

3) Saved waypoints. Here you can add, remove and select waypoints.

4) The path points within a waypoint. A waypoint will consist of many path points for navigating throughout the environment. You may right-click on path points to edit the coordinate for fine-tuning. You may also re-order the path points by right-clicking and selecting Move Up or Move Down.

5) Current heading of the robot relative to the cartesian starting position as reported by an NMS Level #3 Group #2 sensor.

6) The yellow dot marks the robot's current cartesian position as reported by an NMS Level #3 Group #2 position/location sensor.

7) Path points are connected with a straight line demonstrating where the robot drives. Right-click on the map view and select Add Path Point to add path points. It is best to drive the robot first, which creates a trajectory, and then right-click on points of that trajectory to add new path points to the selected waypoint.

8) Log messages are displayed about navigation and sensor activity.

Main Screen - Navigation Controls
User-inserted image



This button manually starts navigating to the selected waypoint. You may also begin navigating by using ControlCommands from other skills. When the robot is navigating, this button's behavior changes to stop navigating.
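For example, this minimal JavaScript sketch starts navigation to a saved waypoint from any script using the GoToWayPoint ControlCommand (the waypoint name "kitchen" is hypothetical; substitute a waypoint saved in your project):

Code:


// Start navigating to a saved waypoint by name ("kitchen" is a hypothetical waypoint)
ControlCommand("The Better Navigator", "GoToWayPoint", "kitchen");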


Configuration


Config - Scripts
User-inserted image


1) Script that will execute when the navigation to a waypoint is started. Navigation can begin by manually pressing the Start button or using a ControlCommand().

2) Script that will execute when the navigation is canceled or successfully ended.

3) Script that will execute if the navigation is paused, either by a JavaScript/Python command from the Navigation namespace or when the NMS Level #3 Group #1 distance sensor returns a value less than the range specified in the Settings tab. A minimal pause-script sketch follows.
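As a rough illustration, a pause script could announce the pause, give your own obstacle-handling logic time to run, and then resume navigation. This is only a sketch: sleep() is assumed to be available in ARC's JavaScript engine, while the resume command is documented in the Navigation namespace (see the Navigation tab below).

Code:


// Minimal pause-script sketch: announce the pause, wait briefly, then resume navigating.
Audio.say("Obstacle detected, pausing navigation");

// Give your own avoidance logic time to clear the obstacle (sleep() assumed available)
sleep(3000);

// Resume navigating to the current waypoint (Navigation namespace command from this manual)
Navigation.setNavigationStatusToNavigating();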

Config - Variables
User-inserted image


Many global variables are set by The Better Navigator. A question mark next to each variable explains it in greater detail. The variable contents can be viewed using the Variable Watcher skill found in the Scripts category. Unchecking Set Realtime Variables will save on performance if the variables are not used in your custom scripts; this data is still available in the NMS scripting engine namespace.
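For example, a short JavaScript snippet can read the array of saved waypoint names (the $TheNavSavedWayPoints variable is also used in Example #3 below; print() is assumed to write to the script console):

Code:


// Read the array of saved waypoint names set by The Better Navigator and print it to the console
var wayPoints = getVar("$TheNavSavedWayPoints");
print(wayPoints);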

Config - Navigation
User-inserted image



1) Disregard values lower than
Ignore distance values less than this specified distance in CM. The distance values are provided by any NMS Level #3 Group #1 sensor. If wires or a camera block the sensor, this will ignore those values.

2) Disregard values higher than
Ignore distance values further than this specified distance in CM. The distance values are provided by any NMS Level #3 Group #1 sensor. Many sensors are inaccurate at far distances, so you can ignore those values.

3) Pause Navigation Distance
If the NMS distance sensor reports a value greater than the "disregard lower than" setting but lower than this value, any navigation will be paused. This will also execute the PAUSE script from the Scripts tab. Your program may use this opportunity to navigate around the obstacle and then continue navigating. Use the JavaScript or Python command in the Navigation namespace to continue navigating. That command is Navigation.setNavigationStatusToNavigating();

4) Pause Navigation Degrees
This value complements the pause navigation distance value. It determines the degree range within which to pause navigation. If you wish for the entire range to be paused, enter 360 degrees. If you only want objects in front of the robot to pause navigation, enter 90. The degree number entered is divided by two and used to the left and right of the center of the robot.
- If 90 degrees is entered, then 45 degrees to the left of the center of the robot and 45 degrees to the right of the center of the robot are detected.
- If 180 degrees is entered, then 90 degrees to the left of the center of the robot and 90 degrees to the right of the center of the robot are detected.
- If 360 degrees is entered, the full range will be detected.

5) Trajectory history count
Like a snake trail, a trail is left behind as the robot navigates. This is the number of history positions to keep; otherwise, the trail would grow forever and clutter the map.

6) Pose Frame Update Path Planning
The path planning will only update every X frames from the L3G2 pose telemetry sensor, to save CPU usage.

7) Way-point Font Size
The size of the font for the way-point titles. Depending on how far you are zoomed in on the map, you may wish to change the font size.

8) Path planning resolution
A path consists of many micro way-points. This is the resolution at which those way-points are created. A value of 2 means a new way-point every 2 CM, and a value of 20 means a new way-point every 20 CM. The higher the number, the fewer the way-points and the less correcting the robot needs to make. However, if the value is too high, corners will be cut too close, and the robot may make contact. You will recognize a lower resolution by fewer turns in the drawn path; the risk with lower resolution is cutting corners too close (see the short sketch after this list).

Here is an example of a resolution of 2...
User-inserted image


Here is the same example of a resolution of 20...
User-inserted image


You can see how the lower resolution (higher value) caused the robot to drive into the corner. While having many micro way-points causes the robot to correct more often, it also prevents the robot from hitting corners. Finding a balance for your environment requires testing. 


9) Personal space size
This is the size of the robot's personal-space bubble, used to keep it away from walls and objects when path planning. A value of 50 would be a 50 CM square. If this value is too large, the robot may not have enough room to navigate and reach destinations. If the value is too small, the robot may touch walls or objects.
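To put some rough arithmetic behind the resolution setting from item 8 (a sketch only, not the skill's internal code; print() assumed available), the number of micro way-points along a straight segment is roughly the segment length divided by the resolution:

Code:


// Illustrative sketch only: approximate micro way-point count for a straight path segment
function microWayPointCount(segmentLengthCm, resolutionCm) {
  return Math.ceil(segmentLengthCm / resolutionCm);
}

print(microWayPointCount(300, 2));   // 150 way-points at a 2 CM resolution (many small corrections)
print(microWayPointCount(300, 20));  // 15 way-points at a 20 CM resolution (risk of cutting corners)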


Configuration - Movement
User-inserted image



1) Forward speed
When navigating to a way-point, this is the speed for forward movement. You do not want the robot to move too quickly when navigating, because slower movement increases pose telemetry accuracy; by moving too quickly, the robot will lose its position. Have the robot move as slowly as you can to improve accuracy.

2) Turn speed
Similar to the Forward speed, this is the speed used for turning when navigating. 

3) Degrees of forgiveness
When navigating to way-points, a path is calculated. The path consists of many smaller way-points, and the robot must turn toward the next way-point before moving forward. This is the number of degrees of forgiveness for how accurately the robot must be facing the next way-point. Many robots do not have high accuracy when turning, especially if they turn too quickly, so you may want this number to be higher. If the robot bounces back and forth attempting to line up with the next way-point, this value must be increased.

4) Enable Dynamic Turning
This will allow the robot to turn along a radial (arc) path toward the next waypoint rather than rotating on the spot. This requires the Movement Panel to support individual wheel speed control, such as continuous rotation servo, H-Bridge PWM, Sabertooth, Dynamixel wheel mode, etc.

5) Dynamic Min & Max Speed
The minimum (slowest) speed for turning. For example, if turning hard left, the left wheel would spin at this speed (slowest), and the right wheel would spin at the Max (fastest) speed. Values between the min and max are used to dynamically calculate how much speed each wheel needs to turn in an arc.

6) Dynamic Turn Degrees
The robot will use dynamic turning if the next waypoint is less than this many degrees away. Otherwise, if the turn difference is higher than this value, the robot will use the standard rotate-on-the-spot turning. For example, if the waypoint is 180 degrees behind the robot, it is more efficient to rotate on the spot toward it; if the waypoint is only 30 degrees to the right, the robot drives toward it on a slight radial path.
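Below is a rough JavaScript sketch of how dynamic turning could interpolate wheel speeds between the Dynamic Min & Max Speed settings based on the heading error. It is an illustration of the idea only, not this skill's actual implementation.

Code:


// Illustrative sketch: blend wheel speeds between the dynamic Min and Max based on heading error.
// headingErrorDegrees: how far the next waypoint is off-center (positive = to the right)
// minSpeed / maxSpeed: the Dynamic Min & Max Speed settings
// dynamicTurnDegrees: the Dynamic Turn Degrees setting
function arcTurnSpeeds(headingErrorDegrees, minSpeed, maxSpeed, dynamicTurnDegrees) {

  // Outside the dynamic range, the robot rotates on the spot instead
  if (Math.abs(headingErrorDegrees) > dynamicTurnDegrees)
    return null;

  // 0 = facing the waypoint, 1 = at the edge of the dynamic turn range
  var ratio = Math.abs(headingErrorDegrees) / dynamicTurnDegrees;

  // The inner wheel slows toward minSpeed as the heading error grows
  var innerWheel = maxSpeed - (maxSpeed - minSpeed) * ratio;
  var outerWheel = maxSpeed;

  // Turning right slows the right wheel; turning left slows the left wheel
  if (headingErrorDegrees > 0)
    return { left: outerWheel, right: innerWheel };

  return { left: innerWheel, right: outerWheel };
}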


Configuration - Video
User-inserted image


1) Video Stream
The output of the map can be sent to a camera device. The camera device must be started and in CUSTOM mode; Custom can be selected in the camera device's dropdown. This is useful if the map is displayed in a custom interface screen or as a PiP in Exosphere.


Configuration - Advanced
User-inserted image


1) Navigation debugging
Outputs a noisy log while navigating, showing the distances and degrees needed to turn. Do not use this if you're trying to save on performance.

2) Pose data debugging
Outputs information about pose data received from the NMS. This is a very noisy log and is not recommended if you're trying to save on performance.

3) Pose Hint Source
The Hector SLAM algorithm accepts a parameter for calculating the robot's position on the map. Because the NMS also accepts a sensor for pose data (i.e., wheel encoder, Intel RealSense T265, etc.), that data can be fused with the Hector calculation. You can use the external NMS sensor only, the Hector calculation only, an average of the two, or the difference of the external sensor added to the Hector value.

*Note: the Hector SLAM algorithm used in this robot skill requires many data points for accurate pose estimation. Many depth cameras or lidar sensors may not provide enough scan data to rely on the Hector pose estimation calculation. If this scenario happens, use the External option to rely on an external sensor, or choose a depth sensor with more data points, such as a 360-degree lidar.

- Hector Only (Recommended with 360-degree Lidar only)
This relies on the Hector SLAM to calculate its own pose hint. This can be a reliable mapping option if your depth/scan/lidar sensor has enough data points for the Hector SLAM to accurately predict the robot's pose. If you use a 360-degree lidar, such as the RPI or Hitachi, it provides enough data points for this option. You will have to configure the depth/scan/lidar robot skill and enable its fake pose hint event option.

- Differential (Requires external L3G2 pose sensor & 360-degree Lidar)
This adds the external sensor's change since its last pose update to the Hector pose hint. Essentially, only the difference in the external sensor's pose since its last update is used, and that value is added to the Hector pose hint. If the external sensor has a high chance of error, this decreases the error because it works with smaller snapshots.

For example, a wheel encoder may go out of sync within 60 cm of travel, but its value can be trusted within 5-10 cm of travel. So this skill keeps the last pose update, subtracts it from the current pose value, and adds that difference to the Hector pose. By doing so, the external sensor's pose error is reduced (a small sketch of this calculation follows the list of options below).

- External Only
The pose hint source is the external NMS sensor, such as a wheel encoder or Intel RealSense. The Intel RealSense T265 may be the most accurate external positioning sensor available if you rely solely on the external option.

If you use a very noisy or unreliable sensor, such as the Faux Odometer, you may wish to use Hector only. That way, you are not giving the algorithm bad data to work with. Just make sure the depth sensor has enough data points, such as a 360-degree lidar.

- Average
This will average the Hector and External sensor positioning. Essentially, it's a combination of the two. This is not very accurate because it merely divides the error between both sensors by two; the error isn't as noticeable over time, but it grows as either sensor's error increases.
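Conceptually, the Differential option described above could be sketched like this in JavaScript. This is only an illustration of the math, not this skill's internal code.

Code:


// Illustrative sketch of the Differential pose hint: only the external sensor's change since
// its last update is added to the Hector pose, so long-term external drift is not accumulated.
var lastExternal = { x: 0, y: 0 };

function differentialPoseHint(hectorPose, externalPose) {

  // Change in the external pose since the previous update (small, so its error is small)
  var deltaX = externalPose.x - lastExternal.x;
  var deltaY = externalPose.y - lastExternal.y;

  // Remember the current external pose for the next update
  lastExternal = { x: externalPose.x, y: externalPose.y };

  // Add the small delta to the Hector pose to produce the fused pose hint
  return { x: hectorPose.x + deltaX, y: hectorPose.y + deltaY };
}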


Pose Hint Suggestions


We have a few suggestions for the pose hint value based on your robot's sensor configuration.

360 Degree Lidar Only (recommended)
- The Better Navigator Pose Hint should be set for Hector
- The 360-degree lidar configuration should be set to Fake Pose Hint Event (checked)

360 Degree Lidar with NMS L3G2 Pose Sensor (i.e., Wheel encoder, T265, etc.)
- The Better Navigator pose hint should be set for Differential
- The 360-degree lidar configuration should _not_ set a fake pose hint event (unchecked)
- The downfall to this sensor configuration is that the pose sensor can still result in mapping errors. This is noticeable when the map begins to shift.

Depth Camera (i.e., Kinect, Realsense, etc.) with NMS L3G2 Pose Sensor (i.e., Wheel encoder, T265, etc.)
- The Better Navigator pose hint should be set for External
- The downfall to this sensor configuration is that the depth camera does not provide enough data points for the SLAM to produce a pose hint. That means you will rely solely on the external NMS L3G2 pose sensor, which will increase errors over time. The solution is to combine the depth camera with a 360-degree lidar.

360 Degree Lidar, Depth Camera (i.e., Kinect, Realsense, etc.) with NMS L3G2 Pose Sensor (i.e., Wheel encoder, T265, etc.)
- The Better Navigator pose hint should be set for Differential
- The 360-degree lidar configuration should _not_ set a fake pose hint event (unchecked)


Starting Position


This navigation skill uses cartesian coordinates in CM from the starting position (0, 0). Any saved maps will be referenced from the same starting position and heading angle. When you re-load a project to have the robot navigate the same course, the robot must be positioned in the same starting position and heading angle. We recommend using painter/masking tape as the starting reference point for the robot. If your robot has an auto dock for charging, secure the charger to a fixed position on the floor, which can be used as a reference point. 

User-inserted image


In the photo above, we're using an iRobot Roomba with an Intel T265 positioning sensor. The painter's tape on the floor marks the robot's starting position. The outline allows us to position the robot in the square, and the marking on the front of the robot aligns with the specified heading.

Cartesian Coordinate System
This robot skill uses cartesian coordinates to reference the robot's starting position. The starting position is always 0,0 and is defined at startup. As the robot navigates, the skill measures the distance from the starting position. The unit of measurement is in CM (centimeters). Read more about the cartesian coordinate system on Wikipedia.

User-inserted image
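For example, the straight-line distance (in CM) from the starting position to any map coordinate follows directly from the cartesian system. This is a simple illustration, not part of the skill; print() is assumed to write to the script console.

Code:


// Straight-line distance in CM from the starting position (0, 0) to a map coordinate
function distanceFromStartCm(x, y) {
  return Math.sqrt(x * x + y * y);
}

print(distanceFromStartCm(300, 400)); // 500 CM (5 meters) from the starting position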


Example #1 (360 degree lidar only)


We'll use only a 360-degree lidar with this skill for navigation and mapping. Here's a screenshot of this type of setup. The Movement Panel is a continuous rotation servo panel using Robotis Dynamixel servos; however, any Movement Panel will do. The 360-degree lidar is an RPI Lidar A1.

User-inserted image


* Note: This example assumes you already have a movement panel, and the robot can move.

1) Connect your lidar to the PC via the USB cable

2) Add the respective lidar robot skill to the project (in this case, we're using an RPI Lidar A1)

3) Configure the lidar robot skill and select the Fake Pose Hint Event option. (read the manual for the lidar robot skill for more information on that option)

4) Add The Better Navigator robot skill to your project

5) Configure The Better Navigator and select the pose hint to be HECTOR

6) Start the lidar robot skill

7) The map will begin to fill. You can now slowly drive the robot around and watch the map continually fill. 

8) Right-click on the map and select areas to navigate to.


Example #2 (Intel Realsense T265 & 360 degree lidar)


To get sensor data for mapping, other compatible skills must be loaded. In this quick example, we'll use the Intel RealSense T265 and a 360-degree lidar in combination with this skill. Here's a screenshot of this type of setup. The Movement Panel is a continuous rotation servo panel using Robotis Dynamixel servos; however, any Movement Panel will do. The 360-degree lidar is the Hitachi.
User-inserted image


* Note: This example assumes you already have a movement panel, and the robot can move.

1) Connect your Intel RealSense T265 to the computer's USB port

2) Connect the distance sensor of choice (i.e., 360-degree lidar)

3) Load ARC 

4) Add the Intel RealSense skill to your workspace, select the port, and press Start

5) Add the 360-degree lidar robot skill to your workspace. Select the port and press Start

6) Now, add this skill (The Better Navigator) to your workspace

7) Configure The Better Navigator and specify the Pose Hint to be Differential

8) You will now see localization path and distance data from the Intel RealSense sensor displayed in The Better Navigator window; this robot skill renders that data.


Example #3 (Interact With Speech Recognition)


The list of waypoints is added to an array. That array can be used with the WaitForSpeech() command in EZ-Script, JavaScript, or Python. This example shows how to use it with JavaScript. Add this code snippet to a speech recognition phrase script. In this example, the phrase we'll add to the speech recognition robot skill is "Robot go somewhere."

Code:


Audio.sayWait("Where would you like me to go?");

// Prompt for one of the saved waypoint names (stored in the $TheNavSavedWayPoints array)
var dest = Audio.waitForSpeech(5000, getVar("$TheNavSavedWayPoints"));

if (dest != "timeout") {

  Audio.say("Navigating to " + dest);

  // Instruct The Better Navigator to navigate to the spoken waypoint
  ControlCommand("The Better Navigator", "GoToWayPoint", dest);
}
User-inserted image


With the code inserted, speak the phrase "Robot go somewhere." The robot will ask, "Where would you like me to go?" and display the list of stored waypoints. Speak one of the waypoints, and the robot will begin navigating.
User-inserted image


PRO
Synthiam
#1  
This initial release does not have the path planning implemented yet. There are also several silly issues with navigating. 

The neat thing is that if a lidar or depth camera is used, the odometry can be assisted with either the Faux Odometry robot skill or wheel encoders. It has much better pose estimation than The Navigator robot skill.
PRO
Synthiam
#104  
I'm using the Intel depth camera fine with my Rock Pi. The trouble is it does not provide enough data to use the Hector pose hint.

Maybe your resolution on the depth sensor is set too high? Or too high of a framerate?
Portugal
#105  
I don't have a depth camera; I had the T265 but sold it. Planning to use lidar and encoders.
PRO
Colombia
#106  
Hi DJ, sorry for the delay in answering the questions (#94). It seems it is now better not to use the T265; anyway, my answers:

1) Are you sure the pose hint source is set to external? yes
  
2) You still haven't told me what your stat values are from post #86: Sensor: 0ms, Path: 11ms to 18ms or more depending on the waypoint distance.

3) Do you ever see the message "Busy... Skipping location scan event"? Yes, sometimes I saw this, but not so frequently; when it happened, I restarted the lidar to test again.

4) The "center" would be the center between the treads, not the robot: ok. 


So, would it be possible to use the Intel depth camera (i.e., 435) for other purposes and stop feeding the NMS? Or use it only for obstacle avoidance? I am also using a Rock Pi X.
PRO
Synthiam
#107  
You can use a depth sensor and lidar together for the NMS. You can have as many sensors as you want for the NMS; that's the point of the NMS. The NMS manual explains how it works.
PRO
Synthiam
#108  
v34 

- allow resetting the position without clearing the map

- removed ekf pose hint

- added differential pose hint (adds the difference of external pose hint updates to the hector pose hint - read manual above)

- average pose hint improved

- added new option to clear map & reset pose position to 0

- new ControlCommand() that allows specifying the robot pose for custom re-alignment
PRO
Colombia
#109  
Hi DJ, 
For me, the mapping is working better with the Lidar + T265 option (with differential pose hint as indicated).
I made paths that I never reached before, and the map is much more stable.
I also made some physical changes, relocating the T265 a little bit higher and tuning the offset.
Something I noticed is that if I activate the RS D435, the map starts to shift instead of improving the navigation or object avoidance.

So, is it possible to include an option in the RS435 skill to use it as a camera source only, without feeding the NMS?

I also tried the Lidar-only option with the Hector pose hint, but I got map shifts when I made turns.

Thanks, good progress!
PRO
Colombia
#111  
Ok thanks, I am going there.
PRO
Colombia
#112  
Hi, I had to stop testing this skill for some days, but now I have started again and it is working very nicely so far. I also made some mechanical adjustments. I will continue with the other skills. Thanks, DJ.
User-inserted image
Portugal
#113  
Nice work pardilav, I have something similar in mind. Does it have IR sensors too? I would love to see a video of your robot navigating and moving that arm.
PRO
Synthiam
#115  
Oh, that's a great video!!! I'd enjoy seeing more of your robot - it's very impressive. Thanks for sharing! I hope you make a robot showcase for it one day
PRO
Colombia
#116  
Thanks, DJ! Yes, I will publish more videos with the different skills I am including in the robot.
#117   — Edited
Very cool! It's exciting to watch your robot arm move around like that. I'm looking forward to seeing more. 

Just a question: it looks like the rail the arm is mounted on wobbles a little when the arm moves. Does this interfere with its accuracy? Maybe a couple of braces near the bottom would make it more rigid? Just an idea. Either way, this is fantastic.

I absolutely love the embedded tablet in the base and the animated eyes it shows. Can you share what you are using down there? It looks like some brand of tablet running the ARC mobile app?
PRO
Colombia
#118  
Thanks for your comments, Dave.
The idea of the arm mount is also to move it vertically, so I already installed a NEMA motor at the bottom to do that, but I am not using it yet.

About the tablet question: it is not a tablet; it is a 7" HDMI touch screen connected to the Single Board Computer (Rock Pi X) that is running Windows and ARC.
#119  

Quote:

7" HDMI touch screen connected to the Single Board Computer (Rock Pi X )


Awesome idea.
PRO
USA
#120   — Edited
I don't think this can be done, but it would be super cool if DJ were able to make a screen layout using CM grid squares on The Better Navigator screen. This way, you would know how far you need to travel without having to measure your home to calculate how far things are with the camera.

DJ, I have faith you could accomplish this task.

Cheers!
PRO
Synthiam
#121  
You don't have to measure anything to use this robot skill; the lidar performs the measurements. You might need to scroll up and read what this robot skill is - it'll help you understand how to use it as well.
PRO
Colombia
#122  
Hi DJ, I am trying to load a saved map, but nothing happens. It generates the file when saving, but the load feature is not working. Thanks in advance for your support.
PRO
Colombia
#123  
Hi DJ. Just to comment that I replaced the Rock Pi X with a Beelink U59, and the performance of the skill is much better now. Loading the map is still not working, but as I understand it, the fix is in progress according to the last message I received from the support team. I hope I can share more videos soon.