An improved version of The Navigator based on Hector SLAM, with more features and pathfinding.
How to add the The Better Navigator robot skill
- Load the most recent release of ARC (Get ARC).
- Press the Project tab from the top menu bar in ARC.
- Press Add Robot Skill from the button ribbon bar in ARC.
- Choose the Navigation category tab.
- Press the The Better Navigator icon to add the robot skill to your project.
Don't have a robot yet?
Follow the Getting Started Guide to build a robot and use the The Better Navigator robot skill.
How to use the The Better Navigator robot skill
A better navigation skill based on Hector SLAM using ARC's NMS location/positioning and obstacle data. This skill is combined with other skills that contribute navigation data to ARC's Navigation Messaging System (NMS). The lidar or depth camera data creates a map of the room(s) as the robot drives. You can then add waypoints that are saved with the project and have the robot automatically navigate by clicking on a waypoint (i.e., kitchen, sofa, or dining room). The robot will figure out a path to get there and avoid obstacles.
Tutorial
Sensor Requirements
This robot skill uses data submitted to the NMS. It requires a positioning source (Layer 3 Group 2) and a depth/lidar sensor (Layer 3 Group 1). Check the NMS manual for a list of sensors you can use with this skill, and pick one sensor from each group. Here's the NMS manual: https://synthiam.com/Support/ARC-Overview/robot-navigation-messaging-system
Positioning Sensor (NMS L3G2)
This robot skill requires data for the SLAM pose hint. This is a suggested position where the SLAM should start looking in its map for where the robot might be. Depending on the depth sensor you are using, the internal Hector SLAM can be used as the pose hint instead of a pose sensor.
If you wish to use a pose sensor, the best sensor is the Intel RealSense T265. This robot skill's algorithm fuses the positioning sensor's data with the SLAM pose data, providing highly accurate pose telemetry. You may also have good pose prediction with a wheel encoder NMS, such as the iRobot Roomba. The NMS Faux Odometry will most likely not provide accurate pose data.
If you wish to use the internal Hector SLAM to provide its own pose hint, that can be done with supporting sensors. For example, the Hitachi and RPI Lidar both have an option to fake the pose hint event. In this case, you can set this robot skill's pose hint configuration to HECTOR and use only those lidar sensors.
Depth/Lidar Sensor (NMS L3G1)
This robot skill requires the NMS to have depth avoidance sensors providing multiple data points, such as a 360-degree lidar, Intel RealSense Depth Camera, or Microsoft Kinect. This means ultrasonic distance sensor data does not provide enough scan points for this robot skill, but it can be added for additional scan information.
Example
This screenshot uses an Intel RealSense T265 with a 360-degree lidar sensor. The robot was instructed to drive around the waypoints at various speeds.

This screenshot uses only an RPI Lidar. The RPI Lidar robot skill is set to fake the pose hint event. And The Better Navigator is configured to use the HECTOR as the pose hint.

ARC Navigation Messaging System
This skill is part of the ARC Navigation Messaging System. We encourage you to read more about the NMS and learn about compatible skills. The Better Navigator operates on Level #1 of the NMS overview and requires a Level #3 Group #2 location/position sensor for operation. The location/positioning system feeds position data into the NMS, which this skill uses for navigation. See the NMS manual for compatible skills that provide location/position data.

Mapping
While your robot is driving around and navigating, this skill logs the trajectory. You define waypoints and path points by manually driving your robot to various locations. Once multiple path points are defined for a waypoint, you can instruct your robot to autonomously navigate to that exact waypoint (or back again) at any time.
Map Size
The map is currently hardcoded for 20x20 meters.
Main Screen

1) Map control buttons for clearing trajectory and clearing the map.
2) The robot's current cartesian coordinates as reported by an NMS Level #3 Group #2 sensor (i.e., Intel T265, wheel encoders).
3) Saved waypoints. Here you can add, remove and select waypoints.
4) The path points within a waypoint. A waypoint will consist of many path points for navigating throughout the environment. You may right-click on path points to edit the coordinate for fine-tuning. You may also re-order the path points by right-clicking and selecting Move Up or Move Down.
5) Current heading of the robot relative to the cartesian starting position as reported by an NMS Level #3 Group #2 sensor.
6) The yellow dot marks the robot's current cartesian position as reported by an NMS Level #3 Group #2 position/location sensor.
7) Path points are connected with a straight line demonstrating where the robot drives. Right-click on the map view and select Add Path Point to add path points. It is best to drive the robot first, which creates a trajectory. Then, right-click on some points of the trajectory to add new path points to the selected waypoint.
8) Log messages are displayed about navigation and sensor activity.
Main Screen - Navigation Controls

This button manually starts navigating to the selected waypoint. You may also begin navigating by using ControlCommands from other skills, as shown below. When the robot is navigating, this button changes to stop navigating.
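For example, a minimal script using the GoToWayPoint ControlCommand (shown in Example #3 below) can start navigation from any other robot skill. The waypoint name "Kitchen" is only a placeholder for a waypoint saved in your project:
Code:
// Ask The Better Navigator to navigate to a saved waypoint by name
ControlCommand("The Better Navigator", "GoToWayPoint", "Kitchen");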
Configuration
Config - Scripts

1) Script that will execute when the navigation to a waypoint is started. Navigation can begin by manually pressing the Start button or using a ControlCommand().
2) Script that will execute when the navigation is canceled or successfully completed.
3) Script that will execute if the navigation is paused, either by a JavaScript/Python command from the Navigation namespace or when the NMS Level #3 Group #1 distance sensor returns a value less than the specified range (configured in the Settings tab).
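As a minimal sketch of a pause script (assuming your project has a configured Movement Panel; Movement and sleep() are standard ARC JavaScript commands), the robot could back away from the obstacle and resume navigating:
Code:
// The NMS distance sensor reported an obstacle, so navigation paused.
// Back up briefly to clear the obstacle.
Movement.reverse(50);
sleep(1000);
Movement.stop();

// Tell The Better Navigator to resume navigating to the waypoint
Navigation.setNavigationStatusToNavigating();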
Config - Variables

Many global variables are set by The Better Navigator. A question mark next to each variable explains it in greater detail. The variable contents can be viewed using the Variable Watcher skill found in the Scripts category. Unchecking the Set Realtime Variables option will save on performance if the variables are not used in your custom scripts; this data is available in the NMS scripting engine namespace anyway.
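For example, a short script can read the saved waypoint array (the same $TheNavSavedWayPoints variable used in Example #3 below) and print each waypoint name:
Code:
// Read the array of saved waypoint names set by The Better Navigator
var waypoints = getVar("$TheNavSavedWayPoints");

// Print each saved waypoint name to the console
for (var i = 0; i < waypoints.length; i++)
  print(waypoints[i]);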
Config - Navigation

1) Disregard values lower than
Ignore distance values less than this specified distance in CM. The distance values are provided by any NMS Level #3 Group #1 sensor. If wires or a camera block the sensor, this will ignore those values.
2) Disregard values higher than
Ignore distance values further than this specified distance in CM. The distance values are provided by any NMS Level #3 Group #1 sensor. Many sensors are inaccurate at far distances, so you can ignore those values.
3) Pause Navigation Distance
If the NMS distance sensor provides a value greater than the "lower than" setting but lower than this value, any navigation will be paused. This also executes the PAUSE script from the Scripts tab. Your program may use this opportunity to navigate around the obstacle and continue navigating again. Use the JavaScript or Python command in the Navigation namespace to continue navigating: Navigation.setNavigationStatusToNavigating(); (see the pause-script sketch in the Config - Scripts section above).
4) Pause Navigation Degrees
This value complements the pause navigation distance value. It determines the degree range in which to pause navigation. If you wish for the entire range to be paused, enter 360 degrees. If you only want objects in front of the robot to trigger a pause, enter 90. The degree number entered is divided by two and used from the left and right of the center of the robot (a sketch of this arithmetic follows this settings list).
- If 90 degrees is entered, then 45 degrees to the left of the center of the robot and 45 degrees to the right of the center of the robot are detected.
- If 180 degrees is entered, then 90 degrees to the left of the center of the robot and 90 degrees to the right of the center of the robot are detected.
- If 360 degrees is entered, the full range is detected.
5) Trajectory history count
Like a snake trail, a trail is left behind as the robot navigates. This is the number of history positions to keep; without a limit, the trail would grow forever and clutter the map.
6) Pose Frame Update Path Planning
To save CPU usage, the path planning only updates every X frames from the L3G2 pose telemetry sensor.
7) Way-point Font Size
The size of the font for the waypoint titles. Depending on how far you are zoomed in on the map, you may wish to change the font size.
8) Path planning resolution
A path consists of many micro waypoints. This is the resolution of how many waypoints to create. A value of 2 means a new waypoint every 2 CM, and a value of 20 means a new waypoint every 20 CM. The higher the number, the fewer waypoints and the less correcting the robot needs to make. However, if the value is too high, corners will be cut too close and the robot may make contact. You will recognize a lower resolution by fewer turns in the drawn path.
Here is an example of a resolution of 2...

Here is the same example of a resolution of 20...

You can see how the lower resolution (higher value) caused the robot to drive into the corner. While having many micro way-points causes the robot to correct more often, it also prevents the robot from hitting corners. Finding a balance for your environment requires testing.
9) Personal space size
This is the size of the robot's personal space bubble, used to keep distance from walls and objects when path planning. A value of 50 would be a 50 CM square. If this value is too large, the robot may not have enough room to navigate and reach destinations. If the value is too small, the robot may touch walls or objects.
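To illustrate the Pause Navigation Degrees arithmetic from item 4 above, here is a hypothetical check showing how the configured range splits evenly to the left and right of the robot's center (0 degrees = straight ahead). It is an illustration only, not the skill's internal code:
Code:
// Pause Navigation Degrees setting (example value)
var pauseDegrees = 90;

// Obstacle bearing relative to the robot's center (negative = left, positive = right)
var obstacleBearing = -30;

// The configured range is split in half to either side of center
if (Math.abs(obstacleBearing) <= pauseDegrees / 2)
  print("Obstacle is inside the pause range");
else
  print("Obstacle is outside the pause range");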
Configuration - Movement

1) Forward speed
When navigating to a waypoint, this is the speed for forward movement. You do not want the robot to move too quickly when navigating, because slower movement increases pose telemetry accuracy. By moving too quickly, the robot will lose its position. Have the robot move as slowly as you can to improve accuracy.
2) Turn speed
Similar to the Forward speed, this is the speed used for turning when navigating.
3) Degrees of forgiveness
When navigating to waypoints, a path is calculated. The path consists of many smaller waypoints. The robot must turn toward the next waypoint before moving forward. This is the number of degrees of forgiveness for how accurately the robot must face the waypoint. Many robots do not turn accurately, especially if they turn too quickly, so you may want this number to be higher. If the robot bounces back and forth attempting to line up with the next waypoint, this value must be increased.
4) Enable Dynamic Turning
This allows the robot to turn in an arc (radial path) toward the waypoint rather than rotate on the spot. This requires the Movement Panel to support individual wheel speed control, such as continuous rotation servos, H-Bridge PWM, Sabertooth, Dynamixel wheel mode, etc.
5) Dynamic Min & Max Speed
The minimum (slowest) speed for turning. For example, if turning hard left, the left wheel spins at this speed (slowest), and the right wheel spins at the Max (fastest) speed. Values between the min and max are used to dynamically calculate how much speed each wheel needs to turn in an arc.
6) Dynamic Turn Degrees
The robot will use dynamic turning if the next waypoint is less than this number of degrees away. Otherwise, if the turn difference is higher than this value, the robot will use the standard rotate-on-the-spot turning. If the waypoint is 180 degrees behind the robot, it is more efficient to rotate on the spot toward the waypoint. If the waypoint is 30 degrees to the right, the robot drives toward it on a slight radial path, as sketched below.
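Here is a rough sketch of the dynamic turning decision and wheel-speed interpolation described above. It is illustrative only, not the skill's actual implementation, and all values are examples:
Code:
// Example configuration values only
var dynamicTurnDegrees = 40; // arc-turn when the heading error is below this
var minSpeed = 40;           // Dynamic Min: slowest (inner) wheel speed
var maxSpeed = 255;          // Dynamic Max: fastest (outer) wheel speed

// headingError: degrees between the robot's heading and the next waypoint
function innerWheelSpeed(headingError) {
  // A larger heading error slows the inner wheel, tightening the arc
  var t = Math.abs(headingError) / dynamicTurnDegrees;
  return Math.round(maxSpeed - (maxSpeed - minSpeed) * t);
}

var headingError = 30; // example: waypoint is 30 degrees to the right
if (Math.abs(headingError) >= dynamicTurnDegrees)
  print("Rotate on the spot toward the waypoint");
else
  print("Arc turn: outer wheel " + maxSpeed + ", inner wheel " + innerWheelSpeed(headingError));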
Configuration - Video

1) Video Stream
The output of the map can be sent to a camera device. The camera device must be started and in CUSTOM mode, which can be selected in the device dropdown. This is useful if the map is displayed in a custom interface screen or as PiP in Exosphere.
Configuration - Advanced

1) Navigation debugging
Outputs a noisy log of the distances and degrees needed to turn while navigating. Do not use this if you're trying to save on performance.
2) Pose data debugging
Outputs information about pose data received from the NMS. This is a very noisy log and not recommended if you're trying to save on performance.
3) Pose Hint Source
The Hector SLAM algorithm accepts a parameter for calculating the robot's position on a map. Because the NMS also accepts a sensor for pose data (i.e., wheel encoder, Intel RealSense T265, etc.), that data can be fused with the Hector calculation. You can use the external NMS sensor only, the Hector calculation only, an average of the two, or the difference of the external sensor added to the Hector value.
*Note: the Hector SLAM algorithm used in this robot skill requires many data points for accurate pose estimation. Many depth cameras or lidar sensors may not provide enough scan data to rely on the Hector pose estimation calculation. If this scenario happens, use the External option to rely on an external sensor or choose a depth sensor with more data points, such as a 360-degree lidar.
- Hector Only (Recommended with 360-degree Lidar only)
This relies on using the Hector SLAM to calculate its pose hint. This can be a reliable mapping option if your depth/scan/lidar sensor has enough data points for the Hector SLAM to accurately predict the robot's pose. If you use a 360-degree lidar, such as the RPI or Hitachi, they provide enough data points for this option. To fake the pose hint event, you will have to configure the depth/scan/lidar sensor and enable the option.
- Differential (Requires external L3G2 pose sensor & 360-degree Lidar)
This adds the external sensor's difference since the last pose update to the hector pose hint. Essentially, the external sensor pose hint is only used as a difference between the last time it was updated. That value is added to the hector's pose hint. If the external sensor has a high chance of error, it will decrease the error because it uses smaller snapshots.
For example, a wheel encoder may go out of sync within 60cm of travel, but its value can be trusted within 5-10cm of travel. So this keeps a history of the last pose update and subtracts that from the current pose value. That difference is then added to the Hector pose. By doing so, the external sensor pose error is reduced (a minimal sketch appears after these options).
- External Only
The pose hint source is the external NMS sensor, such as a wheel encoder or Intel RealSense. The Intel RealSense may be the most accurate external positioning sensor available if you rely solely on external data.
If you use a very noisy or unreliable sensor, such as the Faux Odometer, you may wish to use Hector only. That way, you are not giving the algorithm bad data to work with. Just make sure the depth sensor has enough data points, such as a 360-degree lidar.
- Average
This averages the Hector and External sensor positioning; essentially, it's a combination of the two. This is not very accurate because it merely divides the error between both sensors by two. The error isn't as noticeable over time, but it grows as either sensor's errors increase.
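A minimal sketch of the Differential idea described above (illustrative only, not the skill's internal code): only the change in the external pose since its last update is applied to the Hector pose, so the external sensor's accumulated error never enters the map.
Code:
// Last external pose reading (x, y in cm)
var lastExternal = { x: 0, y: 0 };

// Apply only the external sensor's movement since its last update to the Hector pose
function differentialFuse(hectorPose, externalPose) {
  var dx = externalPose.x - lastExternal.x;
  var dy = externalPose.y - lastExternal.y;
  lastExternal = externalPose;

  // Small deltas carry far less accumulated error than the absolute external pose
  return { x: hectorPose.x + dx, y: hectorPose.y + dy };
}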
Pose Hint Suggestions
We have a few suggestions for the pose hint value based on your robot's sensor configuration.
360 Degree Lidar Only (recommended)
- The Better Navigator Pose Hint should be set for Hector
- The 360-degree lidar configuration should be set to Fake Pose Hint Event (checked)
360 Degree Lidar with NMS L3G2 Pose Sensor (i.e., Wheel encoder, T265, etc.)
- The Better Navigator pose hint should be set for Differential
- The 360-degree lidar configuration should _not_ set a fake pose hint event (unchecked)
- The downfall to this sensor configuration is that the pose sensor can still result in mapping errors. This is noticeable when the map begins to shift.
Depth Camera (i.e., Kinect, Realsense, etc.) with NMS L3G2 Pose Sensor (i.e., Wheel encoder, T265, etc.)
- The Better Navigator pose hint should be set for External
- The downfall to this sensor configuration is that the depth camera does not provide enough data points for the SLAM to produce a pose hint. That means you will rely solely on the external NMS L3G2 pose sensor, which will increase errors over time. The solution is to combine the depth camera with a 360-degree lidar.
360 Degree Lidar, Depth Camera (i.e., Kinect, Realsense, etc.) with NMS L3G2 Pose Sensor (i.e., Wheel encoder, T265, etc.)
- The Better Navigator pose hint should be set for Differential
- The 360-degree lidar configuration should _not_ set a fake pose hint event (unchecked)
Starting Position
This navigation skill uses cartesian coordinates in CM from the starting position (0, 0). Any saved maps will be referenced from the same starting position and heading angle. When you re-load a project to have the robot navigate the same course, the robot must be positioned in the same starting position and heading angle. We recommend using painter/masking tape as the starting reference point for the robot. If your robot has an auto dock for charging, secure the charger to a fixed position on the floor, which can be used as a reference point.

We're using an iRobot Roomba as the robot with an Intel T265 positioning sensor in the photo above. The painter's tape on the floor marks the robot's starting position. The outline allows us to position the robot in the square, and the marking on the front of the robot aligns with the specified heading.
Cartesian Coordinate System
This robot skill uses cartesian coordinates to reference the robot's starting position. The starting position is always 0,0 and is defined at startup. As the robot navigates, the skill measures the distance from the starting position. The unit of measurement is in CM (centimeters). Read more about the cartesian coordinate system on Wikipedia.
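As an example, a script could compute the robot's straight-line distance from the starting position. The $BetterNavigatorPositionX/Y variable names below are hypothetical placeholders; check the question marks in the Config - Variables tab for the actual names your version sets:
Code:
// NOTE: the variable names below are hypothetical - confirm them in the Variables tab
var x = getVar("$BetterNavigatorPositionX"); // cm from the starting position
var y = getVar("$BetterNavigatorPositionY"); // cm from the starting position

// Straight-line (euclidean) distance from the starting position (0, 0)
var distance = Math.sqrt(x * x + y * y);

print("Robot is " + Math.round(distance) + " cm from the starting position");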

Example #1 (360 degree lidar only)
We'll use only a 360-degree lidar with this skill for navigation and mapping. Here's a screenshot of this type of setup. The Movement Panel is a continuous rotation panel using Robotis Dynamixel servos; however, any Movement Panel will do. The 360-degree lidar is an RPI Lidar A1.

* Note: This example assumes you already have a movement panel, and the robot can move.
1) Connect your lidar to the PC via the USB cable
2) Add the respective lidar robot skill to the project (in this case, we're using an RPI Lidar A1)
3) Configure the lidar robot skill and select the Fake Pose Hint Event option. (read the manual for the lidar robot skill for more information on that option)
4) Add The Better Navigator robot skill to your project
5) Configure The Better Navigator and select the pose hint to be HECTOR
6) Start the lidar robot skill
7) The map will begin to fill. You can now slowly drive the robot around and watch the map continually fill.
8) Right-click on the map and select areas to navigate to.
Example #2 (Intel Realsense T265 & 360 degree lidar)
To get sensor data for mapping, other compatible skills must be loaded. In this quick example, we'll use the Intel RealSense T265 and a 360-degree lidar in combination with this skill. Here's a screenshot of this type of setup. The Movement Panel is a continuous rotation panel using Robotis Dynamixel servos, and the 360-degree lidar is the Hitachi. However, any Movement Panel will do.

* Note: This example assumes you already have a movement panel, and the robot can move.
1) Connect your Intel RealSense T265 to the computer's USB port
2) Connect the distance sensor of choice (i.e., 360-degree lidar)
3) Load ARC
4) Add the Intel RealSense skill to your workspace, select the port, and press Start
5) Add the 360-degree lidar robot skill to your workspace. Select the port and press Start
6) Now, add this skill (The Better Navigator) to your workspace
7) Configure The Better Navigator and specify the Pose Hint to be Differential
8) You will now see localization path and distance data from the Intel RealSense sensor displayed in The Better Navigator window, which renders the data.
Example #3 (Interact With Speech Recognition)
The list of waypoints is added to an array. That array can be used with the WaitForSpeech() command in EZ-Script, JavaScript, or Python. This example shows how to use it with JavaScript. Add this code snippet to a speech recognition phrase script. In this example, the phrase we'll add to the speech recognition robot skill is "Robot go somewhere."
Code:
// Ask where to go, then wait up to 5 seconds for one of the saved waypoint names
Audio.sayWait("Where would you like me to go?");

var dest = Audio.waitForSpeech(5000, getVar("$TheNavSavedWayPoints"));

if (dest != "timeout") {
  Audio.say("Navigating to " + dest);
  ControlCommand("The Better Navigator", "GoToWayPoint", dest);
}

With the code inserted, speak the phrase "Robot go somewhere." The robot will ask, "Where would you like me to go?" and display the list of stored waypoints. Speak one of the waypoints, and the robot will begin navigating.

The neat thing is that if a lidar or depth camera is used, the odometry can be assisted with either the Faux Odometry robot skill or wheel encoders. It has much better pose estimation than The Navigator robot skill.
Will there be a "Best Navigator" or "Better Better Navigator" skill?
Back on topic, I'm really looking forward to using this skill!
@DJ Question for you: I have a LIDAR sensor that I'd like to see supported in ARC. It is a different brand than all the rest and costs even less than the Hitachi. How should I go about getting it supported in ARC? Should it have its own skill (as the protocol is likely unique)? Should I look for a developer here in the community to make a skill, or contract Synthiam? What do you recommend?
- Display the cartesian coordinates from the Hector SLAM estimated pose. The NMS odometry is fused with the Hector telemetry pose.
In this video, the robot is not moving, and the starting position is the yellow dot in the bottom left. As I click around the scanned map, the algorithm finds the best path to the destination.
Thanks.
Pablo
Question 1:
Would it be possible to have one path broken down into, say, more than one waypoint?
"Robot go to kitchen" - would it then travel through many different waypoints to get to the kitchen?
Question 2:
Say you wanted the robot to travel to the bedroom from the living room.
I can see traveling around one room, but will you be able to go to one point (say, the doorway of another room) and then have the program change to that room's waypoints?
Hope I'm making this clear (wife says I don't make my questions clear to people... lol).
Herr Ball
2) It automatically figures out how to get to the waypoint. It doesn't matter if there are doors or anything. The whole house gets mapped and saved. It knows where it is. It knows how to get somewhere.
Your questions make sense :). Both questions are answered with Yes lol. You'll just have to try it. It's really awesome. I've been playing with it all night!
Which brings me to a question:
What processor/ram are you using in your tests?
Pablo
The path is displayed in a darker green when not navigating. And brighter and thicker when navigating.
Navigating button changes between green and red based on navigation status.
Renders the estimated path when a waypoint is selected
Updated this skill to
- have more space next to walls during path planning
- select font size
- few other tweaks, such as the default values have been optimized a bit
- buttons for clearing map and trajectory moved to the top menu to save real estate
- new Tools menu for clearing all waypoints or realigning waypoints
The re-align waypoint menu allows horizontal, vertical, or degree rotation, altering the waypoints. The value you enter is a negative or positive number that will change the location of all the waypoints by that amount. View the changes in real-time. There is an Undo button to remove changes. This menu is helpful if your robot's starting position is slightly off.
- robot's personal space bubble size
- navigation path resolution
Both are documented above in the Config section of the manual.
Let me look into a save and load option for you.
- Fix for renaming waypoints where the name didn't update in the list
- Added variable array that stores all waypoints
- fixed a bug where the robot would stop navigating before reaching the waypoint
- optimized path planning to be much faster and use less CPU
- path planning iteration option allows tweaking the path planning to give up earlier and save CPU
This release seems to be top-notch. It's working well for me.
It would be great if the reference map could be included as a layer I can make visible/invisible; that is how I see it being useful. :) The other improvements you mentioned look very good. I also noticed some of those issues, but I thought they were related to my hardware capacity.
2) You will now need an NMS Level 3 Group 2 sensor of some sort.
I don't know what other skills you're referring to. What do you mean by "no way to try it"? You can't try it without an NMS Level 3 Group 1 sensor AND an NMS Level 3 Group 2 sensor. It needs both of those types of sensors. There's no way you can try it without them because it won't work.
Look at the NMS support document and see what sensors you have or which ones you want to get: https://synthiam.com/Support/ARC-Overview/robot-navigation-messaging-system
The L3G1 and L3G2 sensors are listed on that page.
Look at that page, pick one sensor from Group 1, and pick another sensor from Group 2. Once you have one sensor picked from each of those groups, you can use this robot skill.
If you have a Roomba, do not use the NMS Faux. The NMS Faux and the Roomba are each sensors. Choose only ONE from each group. Check the link I provided above and choose one sensor from each group.
- realtime update of the SLAM map rather than buffering every 500ms. This should be fine for lower-power SBCs. If you receive a Busy message in the log, let me know
- map viewer performance improvements
- config option to disable realtime variables to improve performance
- advanced experimental option for a pose fuse v2
- dynamic turning. This allows the robot to move toward waypoints in a slight turning arc rather than rotating on the spot to correct
- There are also a few changes to the navigation engine that provide less "spinning on the spot" when turning toward a waypoint.
- changed a few default values
- stops spinning on the spot when navigating with a smarter algorithm
- dynamic turning has a minor improvement
The one thing I noticed is my lidar doesn't detect chair legs very well. I'm wondering if it makes sense to allow sketching off areas to avoid, because there's always thin stuff that isn't detected.
I am having a problem now with my lidar's serial adapter hardware or cable, so I need to fix that before continuing with the tests, but I did check out the changes and I really liked them.
The new parameters allow good fine-tuning! Thanks!
Just to comment that sometimes I need to uninstall and reinstall the T265 skill and reconnect the T265 for it to be recognized. I'm not sure if this is an Intel RealSense problem, something that can be fixed in ARC or the skill, or whether a workaround is possible. The 435 is more stable in this respect.
Is it possible to auto-connect both RS cameras on startup to begin navigation instead of doing it manually? Thanks
- minor performance improvement
- new map renderer that shows scanned areas and un-scanned areas
- new improved SLAM algorithm that uses multiple maps of varying resolutions for pose matching
- new option in Advanced tab for configuring the source of pose hint data to the Hector slam algorithm
- new mapping model uses larger detected items
- faster mapping because there's a reduced resolution on the mapping bitmap
*Needs testing... anyone?
Oh, and the destinations and robot view are the same size as the robot. These look big, but that's necessary to properly guess where the robot is going to end up.
Any possibility in the future of defining the final direction of the robot when arriving?
One question, just to confirm: what is the best location for the lidar? Is it the center of the robot, or aligned with the T265? I am experiencing some "drifts" in the map when turning the robot left or right.
Any comment on the possibility of restricting some areas in the map? Thanks!
If your T265 is facing down, it won't be able to track anything because it only sees the floor. It has a manual from Intel, and the best placement is where it can see the most landmarks.
- robot direction is displayed on the robot rather than a compass (compass is removed)
- robot displays the relative size that has been specified for its "personal space"
- path waypoints include the relative size of the robot so you can see if it's going to reach the destination and what is in the way
- path highlights in orange if it "can't make it" to the destination
- path planning performance improved
- hector SLAM performance improvement
- the detected distances for the SLAM are displayed more visibly with larger "pixels"
- number of gui and rendering performance improvements
- improved graphic render style
- history trajectory is robot size
- shows stats in the bottom left for the path planning time and how often pose sensor data is received from the NMS. This is useful when debugging performance issues.
BTW, your path planning resolution is really high! How come? I use a value of 2 or 3; it prevents hitting stuff.
This is the same destination but with a resolution of 2
- updated to prevent history trajectory from jumping around
By the way, I am recently having some challenges getting a stable map, especially during rotation of the robot. I tried modifying the available parameters, but I always end up in a similar situation after the changes.
I do not remember having this problem in previous versions, before the addition of some of the new parameters. I am also checking, for example, the RPLidar position on the robot vs. the T265 and trying different speeds (very low speeds help), but no major changes. My robot base is a TANK configuration using a dual H-Bridge w/PWM. I'm not sure if I'm measuring the right "center" of the robot, for example. Any suggestion will be appreciated!
Could the RPLidar A1 skill have an offset in order to position it in a different place? Thanks.
Version: 2022.03.26.00
System.InvalidOperationException: stop() cannot be called before start() ---> System.Runtime.InteropServices.ExternalException: rs2_pipeline_stop(pipe:07B47600)
--- End of inner exception stack trace ---
at Intel.RealSense.ErrorMarshaler.MarshalNativeToManaged(IntPtr pNativeData) in C:\Documents\SVN\Developer - Controls\In Production\Intel Realsense T265\MY_PROJECT_NAME\Intel.RealSense\Helpers\ErrorMarshaler.cs:line 66
at System.StubHelpers.MngdRefCustomMarshaler.ConvertContentsToManaged(IntPtr pMarshalState, Object& pManagedHome, IntPtr pNativeHome)
at Intel.RealSense.NativeMethods.rs2_pipeline_stop(IntPtr pipe, Object& error)
at Intel_Realsense_T265.MainForm.stop() in C:\Documents\SVN\Developer - Controls\In Production\Intel Realsense T265\MY_PROJECT_NAME\MainForm.cs:line 266
at Intel_Realsense_T265.MainForm._ts_OnEventError(EZTaskScheduler sender, Int32 taskId, Object o, Exception ex) in C:\Documents\SVN\Developer - Controls\In Production\Intel Realsense T265\MY_PROJECT_NAME\MainForm.cs:line 291
at EZ_B.EZTaskScheduler.oIwgr3tQwvyZg7jII5a(Object , Object , Int32 taskId, Object , Object )
at EZ_B.EZTaskScheduler.nwF8IHFyPN()
at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()
1) Are you sure the pose hint source is set to external?
2) You still haven’t told me what your stat values are from post #86
3) do you ever see the message "Busy... Skipping location scan event"
4) The "center" would be the center between the treads, not the robot
- major update with dynamic turning
- default settings updated (probably a good idea to try the default settings if you're already using this skill)
- major update to navigating
I'm no longer using the T265 at all. I'm using:
- the Faux Odometry with the update set for 100ms
- RPI A1 Lidar
- the better navigator set to HECTOR for Pose Hint
And that's it... I'm getting way better results without the t265
Just try version 33 without the T265 and use the settings I said above.
Maybe your resolution on the depth sensor is set too high? Or too high of a framerate?
1) Are you sure the pose hint source is set to external? Yes.
2) You still haven't told me what your stat values are from post #86. Sensor: 0ms. Path: 11ms to 18ms or more, depending on the waypoint distance.
3) Do you ever see the message "Busy... Skipping location scan event"? Yes, I saw it sometimes, but not frequently; when it happened, I restarted the lidar to test again.
4) The "center" would be the center between the treads, not the robot: OK.
So, would it be possible to use the Intel depth camera (i.e., the 435) for other purposes and stop it from feeding the NMS? Or use it only for obstacle avoidance? I am also using a Rock Pi X.
- allow resetting the position without clearing the map
- removed ekf pose hint
- added differential pose hint (adds the difference of external pose hint updates to the Hector pose hint - read the manual above)
- average pose hint improved
- added new option to clear map & reset pose position to 0
- new ControlCommand() that allows specifying the robot pose for custom re-alignment
For me, the mapping is working better with the Lidar + T265 option (with the differential pose hint, as indicated).
I made paths that I never reached before, and the map is much more stable.
I also made some physical changes, relocating the T265 a little bit higher and tuning the offset.
Something I noticed is that if I activate the RS D435, the map starts to shift instead of improving the navigation or obstacle avoidance.
So, is it possible to include an option in the RS435 skill to use it as a camera source only, without feeding the NMS?
I also tried the Lidar-only option with the Hector pose hint, but I got map shifts when making turns.
Thanks, good progress!
A short video with the arm (basic test). I will try to include more videos soon.
https://youtube.com/shorts/VB_h5CYe4Sk?feature=share
Just a question: it looks like the rail the arm is mounted on wobbles a little when the arm moves. Does this interfere with its accuracy? Maybe a couple of braces near the bottom would make it more rigid? Just an idea. Either way, this is fantastic.
I absolutely love the embedded tablet in the base and the animated eyes it shows. Can you share what you are using down there? It looks like some brand of tablet running the ARC mobile app?
The idea of the arm mount is also to move it vertically, so I already installed a NEMA motor at the bottom to do that, but I am not using it yet.
About the tablet question: it is not a tablet, it is a 7" HDMI touch screen connected to the single-board computer (Rock Pi X) that is running Windows and ARC.
Awesome idea.
DJ, I have faith you could accomplish this task.
Cheers!
Honest greetings and respect
I need to implement Hector SLAM navigation with the Navigation Messaging System (NMS) for indoor navigation on a 2-wheel robot platform.
I'm in Africa and I'm running low on resources; I can't get Intel depth cameras.
I'm working on a medical robot project (nonprofit).
Here is my available hardware:
1 - Hitachi-LG LDS Lidar
2 - Kinect Xbox 360
3 - 2-Wheel Encoder Counter
4 - Win10 companion computer (LattePanda)
I am wary of working with the Kinect Xbox 360 system; I have seen some posts and comments about its inaccuracy. I am forced to use it because I have no other alternative.
Based on these components, can I build the (Hector SLAM navigation) Level 3 system?
@DJ Sures
Yes, I understand there’s a speed setting
Acceleration for servos would need to be managed by the EZB or servo controller. Servos use high-speed PWM, which is far too fast for a PC to generate, which is why they need a microcontroller. The acceleration is included with that PWM, so you'd need to use a controller that supports acceleration.
ARC has an acceleration parameter for controllers that support it. This is documented in the servo Control page: https://synthiam.com/Support/ARC-Overview/Servo-Controls.
For a servo controller that supports acceleration, I am pretty sure Dynamixel, LewanSoul, Lynxmotion, Pololu Maestro, and maybe the Kondo qualify; of those that use PWM servos, I think the Pololu is the only one with built-in acceleration. You'd have to closely examine what's available in the robot skill section and EZB section.
Question refers to enclosed pic.
Have a question about the mounting of the Lidar.
As you can see by the pic, I plan to mount the lidar in the center of my base (not attached yet).
I also plan to place a shelf, over top of the lidar, the same size as the black bottom shelf.
The top shelf will be connected to the base by the three screws, with the shelf only as high off the lidar as needed.
Hope I explained that right?
Lidar:
Will the three screws between the lidar base and the top shelf interfere with the correct operation of the lidar or Better Navigator?
Thanks
Nothing will block the lidar sensor except the three screws.
Thanks for the info ..."ignore distance values less than " ... that's the ticket!
I was worried that the lidar would always see the screws as objects and wasn't sure how they would affect the program.
Again Thanks!