A navigation skill for path planning based on receiving external location and obstacle data
How to add The Navigator robot skill
- Load the most recent release of ARC (Get ARC).
- Press the Project tab from the top menu bar in ARC.
- Press Add Robot Skill from the button ribbon bar in ARC.
- Choose the Navigation category tab.
- Press The Navigator icon to add the robot skill to your project.
Don't have a robot yet?
Follow the Getting Started Guide to build a robot and use The Navigator robot skill.
How to use The Navigator robot skill
A navigation skill for path planning based on receiving NMS location/positioning and obstacle data. This skill works in combination with other skills that contribute navigation data to ARC's Navigation Messaging System (NMS).
Updated Skill
This robot skill has been superseded by a newer version called The Better Navigator.
Example
This screenshot uses an Intel RealSense T265 with three low-cost HC-SR04 ultrasonic distance sensors. The robot was instructed to drive around the waypoints at various speeds. Ideally, you would want to use this robot skill with a 360-degree lidar or other high-resolution obstacle detection.

Here is a video of The Navigator working in conjunction with the Intel RealSense T265 tracking camera skill, which operates at NMS Level #3 Group #2 by providing position/location data into the NMS.
ARC Navigation Messaging System
This skill is part of the ARC Navigation Messaging System (NMS). We encourage you to read more about the Navigation Messaging System and learn about compatible skills. This particular skill (The Navigator) operates on Level #1 of the NMS overview and requires a Level #3 Group #2 location/position sensor for operation. The location/positioning sensor will feed position data into the NMS, which this skill uses for navigation. See the NMS documentation for compatible skills that provide location/position data.
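As a rough illustration of how sensor data reaches this skill, a script can push distance readings into the NMS, which The Navigator then consumes. This is only a sketch: the variable names are placeholders, and Navigation.updateScan() and its signature should be verified against ARC's JavaScript Navigation namespace reference.

// Hypothetical sketch: feed three fixed-angle ultrasonic readings into the NMS.
// The $Ultrasonic* variables are placeholders; verify Navigation.updateScan() in the Navigation namespace docs.
var degrees = [315, 0, 45];                  // sensor angles relative to the robot's heading
var distances = [                            // distances in CM reported by each HC-SR04
  getVar("$UltrasonicLeft"),
  getVar("$UltrasonicCenter"),
  getVar("$UltrasonicRight")
];
Navigation.updateScan(degrees, distances);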

Mapping
While your robot is driving around and navigating, this skill will log the trajectory. You define the waypoint and path points by manually driving your robot to various locations (waypoints). Once multiple path points are defined for a waypoint, you can instruct your robot to autonomously navigate to that exact waypoint (or back again) at any time.
Map Size
The map is currently hardcoded for 20x20 meters.
Main Screen

1) Map control buttons for clearing trajectory and clearing the map.
2) The robot's current cartesian coordinates as reported by an NMS Level #3 Group #2 sensor (i.e., Intel T265, wheel encoders).
3) Saved waypoints. Here you can add, remove and select waypoints.
4) The path points within a waypoint. A waypoint will consist of many path points for navigating throughout the environment. You may right-click on path points to edit the coordinates for fine-tuning. You may also re-order the path points by right-clicking and selecting Move Up or Move Down.
5) Current heading of the robot relative to the cartesian starting position as reported by an NMS Level #3 Group #2 sensor.
6) The yellow dot marks the robot's current cartesian position as reported by an NMS Level #3 Group #2 position/location sensor.
7) Path points are connected with a straight line demonstrating where the robot would drive. Right-click on the map view and select Add Path Point to add path points. It is best to drive the robot first, which creates a trajectory. Then, right-click on some points of the trajectory to add new path points to the selected waypoint.
8) Log messages are displayed about navigation and sensor activity.
Main Screen - Navigation Controls

There are three buttons for manually starting navigation. You may also start navigating by using ControlCommands from other skills, as shown in the sketch below. The three buttons each have different behavior.
Start (Forward Order) - This instructs the robot to start following the specified path points within the waypoint from first to last, in that order. If your robot is close to the starting point (the first path point), this is convenient because the robot will head straight there. However, if your robot is far away from the first path point, you may wish to use the Reverse or Auto button.
Start (Reverse Order) - This instructs the robot to follow the specified path points within the waypoint in reverse order, from last to first. If your robot is closest to the last path point, this is a convenient way to get your robot back to the first path point in the waypoint.
Start (Auto) - This automatically decides which order to follow the path points based on the robot's distance from the first or last path point. This is the most convenient option because it allows a waypoint to be navigated both to and from. For example, if your robot drives to a location, use this button to get your robot back to the starting location automatically.
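If you prefer to start navigation from another skill or from a script, a ControlCommand() call does the same job as the buttons. The command names below are placeholder assumptions; check the exact ControlCommand() options that The Navigator exposes in ARC's Cheat Sheet before using them.

// Hypothetical sketch: start navigating the currently selected waypoint from a script.
// The window name and command names are assumptions; confirm them in the Cheat Sheet.
ControlCommand("The Navigator", "StartNavigation");

// Or run the waypoint's path points in reverse order:
// ControlCommand("The Navigator", "StartNavigationReversed");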
Config - Scripts

1) Script that will execute when the navigation to a waypoint is started. Navigation can start by manually pressing the Start button or using a ControlCommand().
2) Script that will execute when the navigation is canceled or successfully ended.
3) Script that will execute when navigation is paused, either by a JavaScript/Python command from the Navigation namespace or because an NMS Level #3 Group #1 distance sensor returned a value less than the specified range. This range is configured in the Settings tab.
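For example, a pause script might back away from the obstacle and then tell The Navigator to resume. Navigation.setNavigationStatusToNavigating() is the resume command referenced in the Settings section below; the Movement calls and timing here are assumptions to adapt for your robot.

// Hypothetical pause script sketch: back up briefly, then resume navigation.
Movement.reverse();                              // assumed Movement namespace call; adjust for your robot
sleep(1000);                                     // reverse for roughly one second
Movement.stop();
Navigation.setNavigationStatusToNavigating();    // resume the paused navigation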
Config - Variables

Many global variables are set by The Navigator. The question mark next to each variable explains it in greater detail. The variable contents can be viewed using the Variable Watcher skill, found in the Scripts category.
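As a quick illustration, a script in another skill can read one of these variables with getVar(). The variable name below is a placeholder; use the Variable Watcher to see the exact names The Navigator sets in your project.

// Hypothetical sketch: read one of The Navigator's global variables from a script.
// "$NavigatorStatus" is a placeholder name; check the Variable Watcher for the real names.
var status = getVar("$NavigatorStatus");
print("Current navigation status: " + status);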
Config - Settings

1) Ignore distance values less than this specified distance in CM. The distance values are provided by any NMS Level #3 Group #1 sensor.
2) Ignore distance values further than this specified distance in CM. The distance values are provided by any NMS Level #3 Group #1 sensor.
3) Filter false positives by removing obstacles that were detected once but were not detected again within the same area.
4) If the NMS distance sensor reports a value greater than the minimum ignore distance (setting #1) but lower than this value, any navigation will be paused. This will also execute the PAUSE script from the Scripts tab. Your program may use this opportunity to navigate around the obstacle and then continue navigating. Use the JavaScript or Python command in the Navigation namespace to continue navigating. That command is Navigation.setNavigationStatusToNavigating();
5) This value complements the pause navigation distance. It determines the degree range within which a detected obstacle will pause navigation (see the sketch after this list). If you wish for the entire range to pause navigation, enter 360 degrees. If you only want objects in front of the robot to pause navigation, enter 90. The degree number entered is divided by two and applied to the left and right of the center of the robot.
- If 90 degrees is entered, then 45 degrees to the left of the center of the robot and 45 degrees to the right of the center of the robot are detected.
- If 180 degrees is entered, then 90 degrees to the left of the center of the robot and 90 degrees to the right of the center of the robot are detected.
- If 360 degrees is entered, the full range will be detected.
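The sketch below illustrates the degree-range check described above: an obstacle only pauses navigation when its angle falls within half of the configured range on either side of the robot's center. This is a worked illustration of the setting's math, not the skill's internal code.

// Illustration of the pause degree-range logic (not the skill's actual source).
function obstaclePausesNavigation(obstacleDegrees, pauseRangeDegrees) {
  // Normalize the obstacle angle to -180..180 relative to the robot's center (0 = straight ahead).
  var relative = ((obstacleDegrees + 180) % 360 + 360) % 360 - 180;
  // With a 90 degree setting, anything within 45 degrees of center triggers a pause.
  return Math.abs(relative) <= pauseRangeDegrees / 2;
}

print(obstaclePausesNavigation(30, 90));   // true: 30 degrees is within +/- 45 of center
print(obstaclePausesNavigation(120, 90));  // false: outside the forward 90 degree arc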
Starting Position
This navigation skill uses cartesian coordinates in CM from the starting position (0, 0). This means that any saved maps will be referenced from the same starting position and heading angle. When you re-load a project to have the robot navigate the same course, the robot must be positioned in the same starting position and heading angle. We recommend using painter/masking tape as the starting reference point for the robot. If your robot has an auto dock for charging, secure the charger to a fixed position on the floor, which can be used as a reference point.

In the photo above, we're using an iRobot Roomba with an Intel T265 positioning sensor. The painter's tape on the floor marks the robot's starting position. The outline allows us to position the robot into the square, and the marking on the front of the robot aligns with the specified heading.
Cartesian Coordinate System
This robot skill uses cartesian coordinates to reference the robot's starting position. The starting position is always 0,0 and is defined at startup. As the robot navigates, the skill measures the distance from the starting position. The unit of measurement is in CM (centimeters). Read more about the cartesian coordinate system on Wikipedia.
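As a small worked example of this coordinate system, the snippet below converts a heading and a distance traveled into X/Y offsets from the starting position. The axis convention (Y growing along the starting heading, X to the right) is an assumption made for illustration; the skill reports its own coordinates through the NMS.

// Worked example: where a robot ends up, in CM, after driving in a straight line.
// Assumes 0 degrees is the starting heading, with Y growing in that direction and X to the right.
function offsetFromStart(headingDegrees, distanceCm) {
  var radians = headingDegrees * Math.PI / 180;
  var x = distanceCm * Math.sin(radians);   // left/right of the starting heading
  var y = distanceCm * Math.cos(radians);   // along the starting heading
  return { x: Math.round(x), y: Math.round(y) };
}

print(JSON.stringify(offsetFromStart(0, 100)));   // {"x":0,"y":100}  one meter straight ahead
print(JSON.stringify(offsetFromStart(90, 100)));  // {"x":100,"y":0}  one meter to the right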

How Do You Use This
This skill is a navigation renderer and movement planner. This means it can build a map from participating sensor data and control the robot by navigating it to specified locations. To get sensor data, other skills must be loaded that are compatible. In this quick example, we'll use the Intel Realsense T265 in combination with this skill...
1) Connect your Intel RealSense T265 to the computer's USB port
2) Load ARC (version must be >= 2020.12.25.00)
3) Add the Intel RealSense skill to your workspace
4) Now, add this skill (The Navigator) to your workspace
5) Press START on the Intel RealSense skill
6) You should now begin to see localization path data from the Intel RealSense sensor displayed in The Navigator window as this robot skill renders the data.
*Optionally, you may add a supporting NMS Level #3 Group #1 sensor to detect the distance of obstacles in the environment.
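Once data is flowing, drive the robot around to record path points for a waypoint, and you can then trigger navigation from a script or a speech command. As before, the window and command names in this sketch are assumptions; ARC's Cheat Sheet lists the exact ControlCommand() options for each skill in your project.

// Hypothetical sketch: start the T265 skill, then send the robot to a previously saved waypoint.
// Skill window names, command names, and the waypoint name are placeholders.
ControlCommand("Intel Realsense T265", "Start");
sleep(2000);                                        // give the sensor a moment to begin tracking
ControlCommand("The Navigator", "GotoWayPoint", "Kitchen");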
Related Hack Events

A Little Of This, A Little Of That
We'll take it as it goes tonight in this last-minute live hack event. My plan is to do a few things with the ESP32 Cam...

DJ's K8 Intel Realsense & Navigation
We're going to play with the new robot that I have been re-building. Added the Rock Pi/x, Intel Realsense T265 & D435i,...
Related Questions
Level 3 group 2 sensors!
I almost bought this camera for 69.00:
ZasLuke Webcam, 1080P HD USB QuickCam Intelligent Motion Tracking Camera, Built-in Dual Noise Canceling Microphone, 270° Rotation Smart Tracking Streaming Webcam for Zoom Skype Facetime, PC Mac Laptop
Question about the TODO items:
- Automatically navigate obstacle avoidance
- Load and save maps
Do you have an estimated timeframe for the first one? My experience now that I have my Lidar working (mostly) is that the Navigator is not comparing the direction of movement with the direction of the nearby obstacle, so if it passes close to a wall but nothing is obstructing the direction of travel, it is still pausing. If you are a week or two away from building in the functionality, I will wait. If it is further down your list, or you don't know, I can write some scripts to handle the situation, but they will be pretty complex since I'll want to compare distances to each side in a range of degrees to determine which way to turn before attempting to continue.
For the second item, I just wanted to say I am glad you are planning it. For now, my plan is to just save different versions of the project for different maps (or really, starting points), but as I make changes in projects, that could become a real hassle. I would rather just pull up the map for a particular starting point in the same project.
Alan
I'll start with some scripts and see how fancy I can get using the navigator variables available. (I seriously need to learn Javascript though. I think it would be way less complex to script this in Javascript than EZ-Script).
Alan
- allow specifying a limit to the number of degrees that will trigger the NMS pause event.
- nearest & furthest variables are only set when there is scan data (no longer displaying strange numbers)
- a performance improvement that lowered CPU usage by 3% on my test computer
I can't test tonight, but I will try it out tomorrow.
- Performance improvement
- Saves map with project
- new option in config for how many trajectory points to remember (Avoid clutter of UI)
Particularly, the new mapping system is neat. As objects are detected, they increase their persistence. So when objects stop being detected, they slowly disappear because their persistence is decreased. This means if you move a chair or object around the room, the map slowly learns that the object has moved. It's cool to see the prototype, which makes the maps self-learning so they adjust over time.
I'm waiting on some hardware to test more with because we want to make a Raspberry Pi lidar driver as well. We're a few weeks away, but I did want to share :D
Alan
The new navigator (internally called The Better Navigator) is based on Hector SLAM, a popular ROS package. It only works with 360-degree lidars, though. I'm trying to hack the algorithm to see if I can get it to work with depth cameras.
The EZ-SLAM skill uses CoreSLAM; we might modify that one to work with The Better Navigator.
The outcome I want from The Better Navigator is to have a table that I can instruct to move around my house to anywhere I want. I can ask the table to move to the kitchen counter. Or the living room small sofa. Or the dining room table, etc... I want to define many different house areas in waypoints and have the table move to them with speech commands.
The use case is to put stuff on the table, like dinner dishes, tools, or whatever... And say, "hey magic table bring this into the kitchen by the fridge," haha