
The Better Navigator

by Synthiam

Improved version of The Navigator based on Hector SLAM, with more features and path finding.

Requires ARC v34 (Updated 4/1/2022)

How to add The Better Navigator robot skill

  1. Load the most recent release of ARC (Get ARC).
  2. Press the Project tab from the top menu bar in ARC.
  3. Press Add Robot Skill from the button ribbon bar in ARC.
  4. Choose the Navigation category tab.
  5. Press The Better Navigator icon to add the robot skill to your project.

Don't have a robot yet?

Follow the Getting Started Guide to build a robot and use The Better Navigator robot skill.

How to use The Better Navigator robot skill

A better navigation skill based on Hector SLAM, using ARC's NMS location/positioning and obstacle data. This skill works in combination with other skills that contribute navigation data to ARC's Navigation Messaging System (NMS). The lidar or depth camera data will create a map of the room(s) as the robot drives. You can then add waypoints that are saved with the project. You can have the robot automatically navigate by clicking on a waypoint (i.e., kitchen, sofa, or dining room). The robot will figure out a path to get there and avoid obstacles.

Tutorial

Sensor Requirements

This robot skill uses data submitted to the NMS. It requires a positioning source (Level #3 Group #2) and a depth/lidar sensor (Level #3 Group #1). Check the NMS manual for a list of sensors you can use with this skill. You need one Level #3 Group #1 sensor and one Level #3 Group #2 sensor; pick one from each group to use with this skill. Here's the NMS manual: https://synthiam.com/Support/ARC-Overview/robot-navigation-messaging-system

Positioning Sensor (NMS L3G2) - This robot skill requires data for the SLAM pose hint. This is a suggested position where the SLAM should start looking in its map for where the robot might be. Depending on the depth sensor you are using, the internal Hector SLAM can provide the pose hint instead of a pose sensor.

If you wish to use a pose sensor, the best sensor is the Intel RealSense T265. This robot skill's algorithm will fuse the positioning sensor's data with the SLAM pose data, providing highly accurate pose telemetry. You may also get good pose prediction from a wheel encoder NMS source, such as the iRobot Roomba. The NMS Faux Odometry will most likely not provide accurate pose data.

If you wish to use the internal Hector SLAM to provide its own pose hint, that can be done with supporting sensors. For example, the Hitachi and RPI Lidar both have an option to fake the pose hint event. In this case, set this robot skill's Pose Hint configuration to HECTOR and use only those lidar sensors.

Depth/Lidar Sensor (NMS L3G1) - This robot skill requires the NMS to have depth/avoidance sensors providing multiple data points, such as a 360-degree lidar, Intel RealSense depth camera, or Microsoft Kinect. Ultrasonic distance sensor data does not provide enough scan points for this robot skill on its own, but it can be added for additional scan information.

Example

This screenshot uses an Intel RealSense T265 with a 360-degree lidar sensor. The robot was instructed to drive around the waypoints at various speeds.

User-inserted image

This screenshot uses only an RPI Lidar. The RPI Lidar robot skill is set to fake the pose hint event. And The Better Navigator is configured to use the HECTOR as the pose hint.

User-inserted image

ARC Navigation Messaging System

This skill is part of the ARC Navigation Messaging System. You are encouraged to read more about the Navigation Messaging System and learn about compatible skills. This particular skill (The Better Navigator) operates on Level #1 of the NMS overview. This skill requires a Level #3 Group #2 location/position sensor for operation. The location/positioning system feeds position data into the NMS, which this skill uses for navigation. See the NMS for compatible skills that provide location/position data.

User-inserted image

Mapping

While your robot is driving around and navigating, this skill will log the trajectory. You define waypoints and path points by manually driving your robot to various locations (waypoints). Once multiple path points are defined for a waypoint, you can instruct your robot to autonomously navigate to that exact waypoint (or back again) at any time.

Map Size - The map is currently hardcoded to 20x20 meters.

Main Screen

User-inserted image

1) Map control buttons for clearing the trajectory and clearing the map.

2) The robot's current cartesian coordinates as reported by an NMS Level #3 Group #2 sensor (i.e., Intel T265, wheel encoders).

3) Saved waypoints. Here you can add, remove, and select waypoints.

4) The path points within a waypoint. A waypoint will consist of many path points for navigating throughout the environment. You may right-click on path points to edit their coordinates for fine-tuning. You may also re-order the path points by right-clicking and selecting Move Up or Move Down.

5) Current heading of the robot relative to the cartesian starting position as reported by an NMS Level #3 Group #2 sensor.

6) The yellow dot marks the robot's current cartesian position as reported by an NMS Level #3 Group #2 position/location sensor.

7) Path points are connected with a straight line demonstrating where the robot drives. Right-click on the map view and select Add Path Point to add path points. It is best to drive the robot first, which creates a trajectory. Then, right-click on points of the trajectory to add new path points to the selected waypoint.

8) Log messages are displayed about navigation and sensor activity.

Main Screen - Navigation Controls

User-inserted image

This button manually starts navigating to the selected waypoint. You may also begin navigating by using ControlCommands from other skills. While the robot is navigating, this button's behavior changes to Stop Navigating.
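
For example, navigation to a saved waypoint can be started from any other robot skill's script. A minimal sketch in JavaScript, where "Kitchen" is a hypothetical waypoint name you have saved in this skill:

// Start navigating to a previously saved waypoint ("Kitchen" is a placeholder name).
ControlCommand("The Better Navigator", "GoToWayPoint", "Kitchen");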

Configuration

Config - Scripts

User-inserted image

1) Script that will execute when navigation to a waypoint is started. Navigation can begin by manually pressing the Start button or by using a ControlCommand().

2) Script that will execute when the navigation is canceled or successfully ended.

3) Script that will execute if navigation is paused, either by a JavaScript/Python command from the Navigation namespace or by the NMS Level #3 Group #1 distance sensor returning a value less than the specified range (configured in the Settings tab). A sketch of such a script follows this list.
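
As a rough illustration, a pause script could react to the obstacle and then resume the route. This is only a sketch: the Movement and sleep() calls are assumed to be available in ARC's JavaScript scripting, the timing is a placeholder, and only Navigation.setNavigationStatusToNavigating() is taken from this manual.

// Sketch of a PAUSE script: announce, back away from the obstacle, then resume.
Audio.say("Obstacle detected, pausing navigation");

// Back away briefly (Movement.reverse/Movement.stop and sleep() are assumed ARC scripting calls).
Movement.reverse();
sleep(1000);
Movement.stop();

// Resume the paused navigation (command referenced in the Navigation settings below).
Navigation.setNavigationStatusToNavigating();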

Config - Variables

User-inserted image

Many global variables are set by The Better Navigator. A question mark next to each variable explains it in greater detail. The variable contents can be viewed using the Variable Watcher skill found in the Scripts category. Unchecking Set Realtime Variables will save on performance if the variables are not used in your custom scripts; this data is still available in the NMS scripting engine namespace.
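
For example, a script can read these globals with getVar(). A minimal sketch, assuming the $TheNavSavedWayPoints array described later in this manual holds the waypoint names:

// Read the array of saved waypoint names and announce how many exist.
var wayPoints = getVar("$TheNavSavedWayPoints");
Audio.say("There are " + wayPoints.length + " saved waypoints");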

Config - Navigation

User-inserted image

1) Disregard values lower than - Ignore distance values less than this specified distance in CM. The distance values are provided by any NMS Level #3 Group #1 sensor. If wires or a camera block the sensor, this will ignore those values.

2) Disregard values higher than - Ignore distance values further than this specified distance in CM. The distance values are provided by any NMS Level #3 Group #1 sensor. Many sensors are inaccurate at far distances, so you can ignore those values.

3) Pause Navigation Distance - If the NMS distance sensor provides a value greater than the "lower than" value but lower than this, any navigation will be paused. This also executes the PAUSE script from the Scripts tab (see the pause-script sketch in the Config - Scripts section above). Your program may use this opportunity to navigate around the obstacle and continue navigating again. Use the JavaScript or Python command in the Navigation namespace to continue navigating. That command is Navigation.setNavigationStatusToNavigating();

4) Pause Navigation Degrees - This value complements the pause navigation distance value. It determines the degree range within which navigation is paused. If you wish for the entire range to be paused, enter 360 degrees. If you only want objects in front of the robot to pause navigation, enter 90. The degree number entered is divided by two and applied to the left and right of the center of the robot.

  - If 90 degrees is entered, then 45 degrees to the left and 45 degrees to the right of the center of the robot are detected.
  - If 180 degrees is entered, then 90 degrees to the left and 90 degrees to the right of the center of the robot are detected.
  - If 360 degrees is entered, the full range is detected.

5) Trajectory history count - Like a snake trail, a trail is left behind the robot as it navigates. This is the number of history positions that are kept; otherwise, the trail would go on forever and clutter the map.

6) Pose Frame Update Path Planning - The path planning will only update every X frames from the L3G2 pose telemetry sensor to save CPU usage.

7) Way-point Font Size - The size of the font for the way-point titles. Depending on how zoomed you are on the map, you may want to change the font size.

8) Path planning resolution - A path consists of many micro way-points. This is the resolution of how many way-points to create. A value of 2 means a new way-point every 2 CM, and a value of 20 means a new way-point every 20 CM. The higher the number, the fewer waypoints and the less correcting the robot needs to make. However, if the value is too high, corners will be cut too close, and the robot may make contact. You will recognize a lower resolution by the fewer turns made in the drawn path. The risk with lower resolution is cutting corners too close.

Here is an example of a resolution of 2...

User-inserted image

Here is the same example of a resolution of 20...

User-inserted image

You can see how the lower resolution (higher value) caused the robot to drive into the corner. While having many micro way-points causes the robot to correct more often, it also prevents the robot from hitting corners. Finding a balance for your environment requires testing.

9) Personal space size - This is the size of the robot's personal space bubble used to keep away from walls and objects when path planning. A value of 50 would be a 50 CM square. If this value is too large, the robot may not have enough room to navigate and reach destinations. If the value is too small, the robot may touch walls or objects.

Configuration - Movement

User-inserted image

  1. Forward speed - When navigating to a way-point, this is the speed used for forward movement. You do not want the robot to move too quickly when navigating; moving slowly increases pose telemetry accuracy. If it moves too quickly, the robot will lose its position. Have the robot move as slowly as you can to improve accuracy.

  2. Turn speed - Similar to the Forward speed, this is the speed used for turning when navigating.

  3. Degrees of forgiveness - When navigating to way-points, a path is calculated. The path consists of many smaller way-points. The robot must turn toward the next way-point before moving forward. This is the number of degrees of forgiveness for how accurately the robot must face the next way-point. Many robots do not have great accuracy when turning, especially if they turn too quickly, so you may want this number to be higher. If the robot bounces back and forth attempting to line up the next way-point, this value must be increased.

  4. Enable Dynamic Turning - This allows the robot to turn in a gentle arc toward the next way-point rather than rotating on the spot. This requires the Movement Panel to support individual wheel speed control, such as continuous rotation servos, HBridge PWM, Sabertooth, or Dynamixel wheel mode.

  5. Dynamic Min & Max Speed - The minimum (slowest) speed for turning. For example, if turning hard left, the left wheel would spin at this speed (slowest), and the right wheel would spin at the Max (fastest) speed. The values between the min and max are used to dynamically calculate how much speed each wheel needs to turn in an arc (a small sketch follows this list).

  6. Dynamic Turn Degrees - The robot will use dynamic turning if the turn to the next waypoint is less than this many degrees. Otherwise, if the turn difference is higher than this value, the robot will use the standard rotate-on-the-spot turning. If the waypoint is 180 degrees behind the robot, it is more efficient to rotate on the spot toward the waypoint. If the waypoint is 30 degrees to the right, the robot drives toward it on a slight radial path.
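
To illustrate how the Dynamic Min & Max Speed and Dynamic Turn Degrees settings relate, here is a conceptual sketch (not the skill's actual code) of blending wheel speeds by heading error. All values are examples:

// Conceptual sketch of dynamic turning: blend the inner wheel speed between the
// configured Min and Max based on how far the robot must turn.
var minSpeed = 80;            // Dynamic Min Speed (example)
var maxSpeed = 160;           // Dynamic Max Speed (example)
var dynamicTurnDegrees = 40;  // Dynamic Turn Degrees (example)
var headingError = 25;        // degrees to the next micro way-point (example)

if (Math.abs(headingError) >= dynamicTurnDegrees) {
  // Large correction: rotate on the spot instead of arcing.
} else {
  var ratio = Math.abs(headingError) / dynamicTurnDegrees;             // 0..1
  var innerSpeed = Math.round(maxSpeed - ratio * (maxSpeed - minSpeed));
  var outerSpeed = maxSpeed;
  // innerSpeed drives the wheel on the side the robot is turning toward,
  // outerSpeed drives the opposite wheel, producing a gentle arc.
}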

Configuration - Video

User-inserted image

  1. Video Stream - The output of the map can be sent to a camera device. The camera device must be started and in CUSTOM mode; Custom can be selected in the device dropdown. This is useful if the map is displayed in a custom interface screen or as PiP in Exosphere.

Configuration - Advanced

User-inserted image

1) Navigation debugging - Outputs a noisy log of the distances and degrees needed to turn while navigating. Do not use this if you're trying to conserve performance.

2) Pose data debugging - Outputs information about pose data received from the NMS. This is a very noisy log and not recommended if you're trying to conserve performance.

3) Pose Hint Source - The Hector SLAM algorithm accepts a parameter suggesting the robot's position on the map. Because the NMS also accepts a sensor for pose data (i.e., wheel encoder, Intel RealSense T265, etc.), that data can be fused with the Hector calculation. You can use the external NMS sensor only, the Hector calculation only, an average of the two, or the difference of the external sensor added to the Hector value.

*Note: the Hector SLAM algorithm used in this robot skill requires many data points for accurate pose estimation. Many depth cameras or lidar sensors may not provide enough scan data to rely on the Hector pose estimation calculation. In that scenario, use the External option to rely on an external sensor, or choose a depth sensor with more data points, such as a 360-degree lidar.

  • Hector Only (Recommended with 360-degree Lidar only) - This relies on the Hector SLAM to calculate its own pose hint. This can be a reliable mapping option if your depth/scan/lidar sensor has enough data points for the Hector SLAM to accurately predict the robot's pose. If you use a 360-degree lidar, such as the RPI or Hitachi, it provides enough data points for this option. You will have to configure the depth/scan/lidar sensor robot skill and enable its Fake Pose Hint Event option.

  • Differential (Requires external L3G2 pose sensor & 360-degree Lidar) - This adds the external sensor's difference since the last pose update to the Hector pose hint. Essentially, the external sensor's pose is only used as a difference from the last time it was updated, and that delta is added to the Hector pose hint. If the external sensor has a high chance of error, this decreases the error because it uses smaller snapshots (see the sketch after this list).

For example, a wheel encoder may go out of sync within 60 cm of travel, but its value can be trusted within 5-10 cm of travel. So this keeps a history of the last pose update and subtracts it from the current pose value. It then adds that delta to the Hector pose. By doing so, the external sensor's pose error is reduced.

  • External Only - The pose hint source is the external NMS sensor, such as a wheel encoder or Intel RealSense. The Intel RealSense T265 may be the most accurate external positioning sensor available if you rely solely on an external source.

If you use a very noisy or unreliable sensor, such as the Faux Odometer, you may wish to use Hector only. That way, you are not giving the algorithm bad data to work with. Just make sure the depth sensor has enough data points, such as a 360-degree lidar.

  • Average - This averages the Hector and External sensor positioning. Essentially, it is a combination of the two. This is not very accurate because it merely divides the error between both sensors by two. The error isn't as noticeable over time, but it grows as either sensor's error increases.
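
Below is a conceptual sketch (not the skill's actual code) of the Differential option described above; the coordinate values are examples only:

// Differential pose hint: only the external sensor's movement since its last
// update is added to the current Hector pose estimate.
var lastExternalX = 100, lastExternalY = 40;  // external pose at the previous update (cm)
var externalX = 104, externalY = 41;          // external pose now (cm)
var hectorX = 250, hectorY = 90;              // current Hector SLAM pose estimate (cm)

var hintX = hectorX + (externalX - lastExternalX);
var hintY = hectorY + (externalY - lastExternalY);
// hintX/hintY become the pose hint for the next scan match; long-term drift in
// the external sensor is discarded because only small deltas are used.

lastExternalX = externalX;
lastExternalY = externalY;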

Pose Hint Suggestions

We have a few suggestions for the pose hint value based on your robot's sensor configuration.

360 Degree Lidar Only (recommended)

  • The Better Navigator Pose Hint should be set for Hector
  • The 360-degree lidar configuration should be set to Fake Pose Hint Event (checked)

360 Degree Lidar with NMS L3G2 Pose Sensor (i.e., Wheel encoder, T265, etc.)

  • The Better Navigator pose hint should be set for Differential
  • The 360-degree lidar configuration should not set a fake pose hint event (unchecked)
  • The downfall to this sensor configuration is that the pose sensor can still result in mapping errors. This is noticeable when the map begins to shift.

Depth Camera (i.e., Kinect, Realsense, etc.) with NMS L3G2 Pose Sensor (i.e., Wheel encoder, T265, etc.)

  • The Better Navigator pose hint should be set for External
  • The downfall to this sensor configuration is that the depth camera does not provide enough data points for the SLAM to produce a pose hint. That means you will rely solely on the external NMS L3G2 pose sensor, which will increase errors over time. The solution is to combine the depth camera with a 360-degree lidar.

360 Degree Lidar, Depth Camera (i.e., Kinect, Realsense, etc.) with NMS L3G2 Pose Sensor (i.e., Wheel encoder, T265, etc.)

  • The Better Navigator pose hint should be set for Differential
  • The 360-degree lidar configuration should not set a fake pose hint event (unchecked)

Starting Position

This navigation skill uses cartesian coordinates in CM from the starting position (0, 0). Any saved maps will be referenced from the same starting position and heading angle. When you re-load a project to have the robot navigate the same course, the robot must be positioned in the same starting position and heading angle. We recommend using painter/masking tape as the starting reference point for the robot. If your robot has an auto dock for charging, secure the charger to a fixed position on the floor, which can be used as a reference point.

User-inserted image

We're using an iRobot Roomba as the robot with an Intel T265 positioning sensor in the photo above. The painter's tape on the floor marks the robot's starting position. The outline allows us to position the robot in the square, and the marking on the front of the robot aligns with the specified heading.

Cartesian Coordinate System This robot skill uses cartesian coordinates to reference the robot's starting position. The starting position is always 0,0 and is defined at startup. As the robot navigates, the skill measures the distance from the starting position. The unit of measurement is in CM (centimeters). Read more about the cartesian coordinate system on Wikipedia.
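
As a small illustration of the coordinate system, the straight-line distance from the starting position (0, 0) to any point on the map is the Euclidean distance in centimeters. The coordinates below are example values, not read from the skill:

// Distance from the starting position (0, 0) to an example map coordinate, in CM.
var targetX = 250;   // 2.5 m along one axis (example)
var targetY = -120;  // 1.2 m along the other axis (example)
var distanceCm = Math.sqrt(targetX * targetX + targetY * targetY);
Audio.say("That point is about " + Math.round(distanceCm) + " centimeters from the start");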

User-inserted image

Example #1 (360 degree lidar only)

We'll use only a 360-degree lidar with this skill for navigation and mapping. Here's a screenshot of this type of setup. The Movement Panel is a continuous rotation servo panel using Robotis Dynamixel servos; however, any Movement Panel will do. The 360-degree lidar is an RPI Lidar A1.

User-inserted image

  • Note: This example assumes you already have a movement panel, and the robot can move.
  1. Connect your lidar to the PC via the USB cable

  2. Add the respective lidar robot skill to the project (in this case, we're using an RPI Lidar A1)

  3. Configure the lidar robot skill and select the Fake Pose Hint Event option. (read the manual for the lidar robot skill for more information on that option)

  4. Add The Better Navigator robot skill to your project

  5. Configure The Better Navigator and select the pose hint to be HECTOR

  6. Start the lidar robot skill

  7. The map will begin to fill. You can now slowly drive the robot around and watch the map continually fill.

  8. Right-click on the map and select areas to navigate to.

Example #2 (Intel Realsense T265 & 360 degree lidar)

To get sensor data for mapping, other compatible skills must be loaded. In this quick example, we'll use the Intel RealSense T265 and a 360-degree lidar in combination with this skill. Here's a screenshot of this type of setup. The Movement Panel is a continuous rotation servo panel using Robotis Dynamixel servos; however, any Movement Panel will do. The 360-degree lidar is the Hitachi.

User-inserted image

  • Note: This example assumes you already have a movement panel, and the robot can move.
  1. Connect your Intel RealSense T265 to the computer's USB port

  2. Connect the distance sensor of choice (i.e., 360-degree lidar)

  3. Load ARC

  4. Add the Intel RealSense skill to your workspace, select the port, and press Start

  5. Add the 360-degree lidar robot skill to your workspace. Select the port and press Start

  6. Now, add this skill (The Better Navigator) to your workspace

  7. Configure The Better Navigator and specify the Pose Hint to be Differential

  8. You will now see localization path and distance data from the Intel RealSense sensor displayed in The Better Navigator window. This robot skill will display and render the data.

Example #3 (Interact With Speech Recognition)

The list of waypoints is added to an array. That array can be used with the WaitForSpeech() command in EZ-Script, JavaScript, or Python. This example shows how to use it with JavaScript. Add this code snippet to a speech recognition phrase script. In this example, the phrase we'll add to the speech recognition robot skill is "Robot go somewhere."


Audio.sayWait("Where would you like me to go?");

dest = Audio.waitForSpeech(5000, getVar("$TheNavSavedWayPoints"));

if (dest != "timeout") {
  Audio.say("Navigating to " + dest);
  ControlCommand("The Better Navigator", "GoToWayPoint", dest);
}

User-inserted image

With the code inserted, speak the phrase "Robot go somewhere." The robot will ask, "Where would you like me to go?" and display the list of stored waypoints. Speak one of the waypoints, and the robot will begin navigating.

User-inserted image



PRO
Synthiam
#1  

This initial release does not have the path planning implemented yet. There are also several silly issues with navigating.

The neat thing is if a lidar or depth camera is used, the odometry can be assisted with either Faux Odometry robot skill or wheel encoders. It has much better pose estimation than The Navigator robot skill.

PRO
Canada
#2  

I laughed when I saw the name of this. It reminds me of file naming structures in our past:D File - New - New.

Will there be a "Best Navigator" or "Better Better Navigator" skill?xD Kidding!

Back on topic, I'm really looking forward to using this skill!

@DJ Question for you, I have a LIDAR sensor that I'd like to see supported in ARC. It is a different brand than all the rest and is even a lower cost than the Hitachi. How should I go about getting it supported in ARC? Should it have its own skill (as the protocol is likely unique)? Should I look for a developer here in the community to make a skill or contract Synthiam? What do you recommend?

PRO
Synthiam
#3  

Ya, you make a robot skill for it. You can use the Hitachi lidar source code as an example to build from. Get the source code for the Hitachi lidar from the robot skill manual page.

PRO
Synthiam
#4  
  • Updated to use the traditional red color.

  • Display the cartesian coordinates from the Hector slam estimated pose. The NMS odometry is fused with the hector telemetry pose.

PRO
Synthiam
#5   — Edited

Here's a video of the SLAM path planning for The Better Navigator. This should be included in the next update within two weeks. Rather than defining waypoints, such as The Navigator, you now provide destinations only. The robot will navigate to the goals by automatically determining the most appropriate path.

In this video, the robot is not moving, and the starting position is Yellow in the bottom left. As I click around the scanned map, the algorithm will find the best path to the destination.

PRO
Synthiam
#6   — Edited

Updated with ability to save way points and path planning prediction. The way points can't execute the path yet - that's the next step. Right now you can select points on the map (right-click) and add way points or select areas to navigate to

PRO
Colombia
#7  

Hi DJ, I started to test and I see nice improvements over the past navigator; the map that it now creates is much better and more precise (using Intel RealSense technology in my case). I really see a good future for this skill. Thanks.

PRO
Synthiam
#8  

Did you try right-click and add way points to see the auto generated path planning? Pretty wild

PRO
Colombia
#9  

Yes , I tried. That’s great.

Pablo

PRO
Synthiam
#10  

Wowsers - another teaser, but combining the Hector SLAM of The Better Navigator with the Intel T265 and the Lidar produces amazing results. I'm blown away! Hopefully I'll be comfortable with these changes to push an update in the next few days. We're blasting through bug checks now because it's a big update to the NMS

User-inserted image

#11  

@DJ

Question 1: Would it be possible to have 1 path broken down into say, more than one way point? Robot go to kitchen, then it would travel to many different way points to get to the kitchen?

Question 2: Say you wanted the robot to travel to the bedroom from the living room. I can see traveling around one room but will you be able to go to one point (say the door way of another room) and then have the program change to that rooms way points?

Hope I'm making this clear ( wife says I don't make my questions clear to people ... lol).

Herr Ball

PRO
Synthiam
#12  
  1. you can create as many way points as you want. And if you want it to stop at specific points, just write a script to do it. Once it gets to one point, wait for a command and then move to the next

  2. It automatically figures out how to get to the way point. It doesn't matter if there's doors or anything. The whole house gets mapped and saved. It knows where it is. It knows how to get somewhere

Your questions make sense:). Both questions are answered with Yes lol. You'll just have to try it. It's really awesome. I've been playing with it all night!

PRO
Colombia
#13   — Edited

Hi DJ, great news, I will play with this as soon it will  be available and I will let you know my comments. I hope also the update for the Lidar A1 will be available soon. Good progress!

#14  

This is really cool progress.

PRO
Colombia
#15  

I started to test and it looks very  promising, for now no need to include the Realsense 435. Only the RS T265 and The Lidar is required. Also seems that it  would be too much for the processor I have. Great progress!  Thanks DJ.

#16  

@DJ

Which brings me to a question? What processor/ram are you using in your tests?

PRO
Synthiam
#17  

I’m using a rock pi with a hitachi lidar and t265 realsense

PRO
Synthiam
#18  

Updated with performance improvement

PRO
Colombia
#19  

Hi D.J, I am trying to run events like :  ControlCommand("The Better Navigator", "GoToWayPoint", "start"), but nothing happens.  When I select manually the waypoint it works.

Pablo

PRO
Synthiam
#20  

I’ll look into it! Stay tuned

PRO
Synthiam
#21  

Updated with a fix for receiving control commands. Check the cheat sheet.

The path is displayed in a darker green when not navigating. And brighter and thicker when navigating.

Navigating button changes between green and red based on navigation status.

Renders the estimated path when a waypoint is selected

#22  

I would like to try this great progress but I do not believe the free version supports it.

PRO
Colombia
#24  

Thanks DJ. Now it is working, and the new render features are very useful. Just noticing that sometimes the auto path is too close to the wall or corner; do you think it's possible to include a parameter to control this? :)

PRO
Synthiam
#25  

Yah let me see if I can do something about that.

PRO
Synthiam
#26   — Edited

Good mapping tonight!

Updated this skill to

  • have more space next to walls during path planning
  • select font size
  • few other tweaks, such as the default values have been optimized a bit

User-inserted image

PRO
Synthiam
#27  

Here's a real good map today with the new tweaks

User-inserted image

PRO
Synthiam
#28   — Edited

Updated v10

  • buttons for clearing map and trajectory moved to the top menu to save real estate

  • new Tools menu for clearing all waypoints or realigning waypoints

The re-align waypoint menu allows horizontal shifting, vertical shifting, or rotation by degrees, altering the waypoints. The value you enter is a negative or positive number that will change the location of all the waypoints by that amount. View the changes in real-time. There is an Undo button to remove changes. This menu is helpful if your robot's starting position is slightly off.

User-inserted image

PRO
Synthiam
#29  

It is updated to output the map to a camera video device. Read the manual above for more information.

User-inserted image

PRO
Synthiam
#30  

I updated the re-alignment tool to use directional buttons that move the waypoints around the map. This version of the re-alignment tool is easier to use than entering distances in the previous version.

User-inserted image

PRO
Synthiam
#31   — Edited

Added options for...

  • robot's personal space bubble size

  • navigation path resolution

Both are documented above in the Config section part of the manual

User-inserted image

PRO
Colombia
#32  

Great updates, I will try those  tonight!. Thanks!

PRO
Colombia
#33  

After some testing, I see that new options are working  very well and are improving a lot the navigation.  Thanks  DJ!.  I am thinking now on future desirable features as save/load map in tools menu:)

PRO
Synthiam
#34  

I can add a load/save map, but i don't think that's useful. The map will never really be the same, so you can't expect it to resume. It seems to make more sense (for my use anyway) to let the robot re-map the room as it navigates.

Let me look into a save and load option for you.

PRO
Synthiam
#35   — Edited

This is neat...

The list of waypoints are added to an array. That array can be used with the WaitForSpeech() command that is available in EZ-Script, JavaScript or Python. This example shows how to use it with JavaScript. Add this to a speech recognition phrase script.


Audio.sayWait("Where would you like me to go?");

dest = Audio.waitForSpeech(5000, getVar("$TheNavSavedWayPoints"));

if (dest != "timeout") {
  Audio.say("Navigating to " + dest);
  ControlCommand("The Better Navigator", "GoToWayPoint", dest);
}

User-inserted image

With the code inserted, speak the phrase "Robot go somewhere". The robot will speak "Where do you want me to go?", then it will display the list of stored waypoints. Speak one of the waypoints and the robot will begin navigating.

User-inserted image

PRO
Synthiam
#36  

Updated v14

  • Fix for renaming waypoints where the name didn't update in the list

  • Added variable array that stores all waypoints

PRO
Synthiam
#37  

It seems that the robot personal space size has a huge impact on performance. On my I7 it takes 4 seconds to calculate the path using the optimized AStar. I'll have to see if there's anything I can do to fix that

PRO
Synthiam
#38  

Okay, version 15 has significant performance improvements and better mapping.

  • fixed a bug where the robot would stop navigating before reaching the waypoint

  • optimized path planning to be much faster and less CPU

  • path planning iteration option allows tweaking the path planning to give up earlier and save CPU

This release seems to be top-notch. It's working well for me.

PRO
Colombia
#39  

Thanks D.j about save/load map , in theory if I use the exact start position the saved map should be almost the same. This will be the reference also to align waypoints on large maps. So this will be a fixed reference complemented with the real-time information provided by the LiDAR.

So would be great if the reference map can be included as a layer I can make visible/not visible. This is how I see it useful.:). Other improvements you commented looks very good I also noticed some of those issues but I thought was related to my hardware capacity.

PRO
Synthiam
#40  

v16 has map saving and loading. It takes a long time to load or save a map - the compressed file size is 170mb (over 1gb uncompressed)

PRO
Synthiam
#41  

v17 has a smaller map filesize but still takes some time to load and save

PRO
Colombia
#42  

This is fantastic, I will try this version.  New versions are available so Fast!:) Thanks.

#43  

I am wanting to try this out using my Roomba base . Will the skill only work if I buy both Real sense cam and the Hitachi 360 Lidar, or can I buy just 1 of them to get it working? I do have 3 ultrasonic sensors already and can use the Roomba encoders.

PRO
Synthiam
#44  

You need an NMS level 3 group 1 and level 3 group 2 sensor. You can read the manual which is easier than repeating it:)

#45  

Okay ya I have read much of the info but say if I get a 360 Lidar only ,is it going to be able to map out the floor and obstacles with any of the ARC skills or it must be the Better Navigator only? i would hate to buy the Lidar and then no way to try it. But I will continue looking at all the info here first thanks.

PRO
Synthiam
#46   — Edited
  1. The 360 lidar is a NMS Level 3 Group 1 sensor

  2. You will now need a NMS Level Group 2 sensor of some sort.

I don't know what other skills you're referring to? What do you mean by "no way to try it"? You can't try it without a NMS Level 3 Group 1 sensor AND a NMS Level 3 Group 2 sensor. It needs both of those types of sensors. There's no way you can try it without those because it wont work.

Look at the NMS support document and see what sensors you have or which ones you want to get: https://synthiam.com/Support/ARC-Overview/robot-navigation-messaging-system

The L3G1 and L3G2 sensors are listed on that page.

#47  

I was thinking of trying out Lidar with the original EZ slam since it is still available in Arc. Or I also saw the NMS Faux skill to try possibly with my Roomba encoders.Then I saw the update on using Real sense D435i and damn that really looks like it works great!

PRO
Synthiam
#48   — Edited

Here's the NMS manual page again: https://synthiam.com/Support/ARC-Overview/robot-navigation-messaging-system

Look at that page, pick one sensor from the Group 1, and pick another sensor from the Group 2. Once you have one sensor picked from each of those groups, you can now use this robot skill.

If you have a Roomba, do not use the NMS Faux. The NMS Faux and the roomba are each sensors. Choose ONE only from each group. check the link i provided above and choose one sensor from each group.

#49  

Okay good to know Faux not required as already on Roomba.Also I did not understand only needing 1 sensor from each group, thought I needed all in the group.Much better ,got it!

PRO
Synthiam
#50  

Updated with a number of changes, some experimental

  • realtime update slam map rather than buffer every 500ms. This should be fine for lower power SBCs. If you receive a Busy message in the log, let me know

  • map viewer performance improvements

  • config option to disable realtime variables to improve performance

  • advanced experimental option for a pose fuse v2

PRO
Synthiam
#51   — Edited

V20 is updated with a significant change...

  • dynamic turning. This allows the robot to move toward waypoints in a slight turn arc. Rather than rotating on the spot to correct

User-inserted image

  • There are also a few changes to the navigation engine that provides less "spinning on the spot" when turning toward a waypoint.
PRO
Synthiam
#52   — Edited

V21 is really awesome:). Oh boy you're going to really enjoy this update! It's nearly flawless wow

  • changed a few default values

  • stops spinning on the spot when navigating with a smarter algorithm

  • dynamic turning has a minor improvement

Portugal
#53  

Which encoders are supported?

PRO
Synthiam
#54  

This is a navigator robot skill. It uses compatible NMS robot skills that contribute data. Take a look at the NMS.

PRO
Colombia
#55  

Hi DJ, I will try this right now.  It seems the changes are really good!

PRO
Synthiam
#56  

I know, right?!? It’s super awesome.

the one thing I noticed is my lidar doesn’t get chair legs very well. I’m wondering if it makes sense to allow sketching off areas to avoid. Because there’s always stuff that isn’t detected that are thin

PRO
Colombia
#57  

That could be a good option, also to have flexibility to limit some areas with mats etc. I am having a problem now with my lidar’s serial adapter hardware or cable, so I need to fix that before continue with the tests but I could check the changes and I really liked. New parameters allow a good fine tuning! Thanks!

PRO
Synthiam
#58  

v23 updated with directional buttons when editing a waypoint to move it around rather than entering coordinates

PRO
Colombia
#59  

Hi DJ, I am testing  the skill with the RS 435 and  T265 meanwhile  I solve the hardware problem with my Lidar.

Just to comment that sometimes I need to erase and  install again the T265 skill and reconnect the T265 to be recognized not sure if this is a RS Intel problem or can be fixed in ARC/ skill or is possible a workaround.  For the 435 this part is more stable.

Is it possible  to auto-connect  both RS  cameras on startup to begin the navigation  instead of doing it manually?  Thanks

PRO
Synthiam
#60  

It’s the worst ever - I know. So frustrating eh? Wish Intel would actually test their code!

PRO
Colombia
#61  

Right. I ordered the spare part for the LiDAR and I hope I can continue  playing soon with this skill.

PRO
Synthiam
#62   — Edited

Updated V25

  • minor performance improvement

  • new map renderer that shows scanned areas and un-scanned areas

  • new improved SLAM algorithm that uses multiple maps of varying resolutions for pose matching

  • new option in Advanced tab for configuring the source of pose hint data to the Hector slam algorithm

User-inserted image

PRO
Synthiam
#63  

V26 adds an Extended Kalman Filter option under the Advanced tab to fuse the pose estimation with the Hector slam pose prediction and the NMS pose sensor data.

PRO
Colombia
#64  

I really liked the new map render! I am having some crashes but I am testing with the rs435, I am  still waiting the LiDAR spare part. I played with the pose source option but for now the one that  is working better for me is the external.(t265).

PRO
Synthiam
#65  

Do you have any messages about the error crash?

PRO
Colombia
#66  

I will recreate the situation and I will send it as soon it appears.

PRO
Colombia
#67  

Now testing again with RPLidar and no crashes. Starting to fine tune with the parameters!. Really fun.

PRO
Synthiam
#68  

I got a new version that I’m gonna test this Friday. Might do a live hack with it. So far it seems real good but I need some more testing.

PRO
Synthiam
#69  

Updated v28

  • new mapping model uses larger detected items

  • faster mapping because there's a reduced resolution on the mapping bitmap

*Needs testing... anyone?:)

PRO
Colombia
#70  

Great, I will start testing:)

PRO
Synthiam
#71  

I have another update for you this evening. I had some ideas on my last flight so I made a bunch of changes. The width of the path reflects the size of the robot so you can get a better idea of the path. Also the dot will show the direction the robot is facing. And the path will glow orange if it can’t get to the destination

oh and the destinations and robot view are the same size as the robot. These look big but that’s necessary to properly guess where the robot is going to end up

PRO
Colombia
#72  

Those are really good ideas; I was thinking the same about the robot's direction. You read my mind :). Any possibility in the future to define the final direction of the robot when arriving? One question just to confirm: what is the best location for the lidar? Is it the center of the robot, or aligned with the T265? I am experiencing some "drifts" in the map when turning the robot left or right. Any comment about the possibility of restricting some areas in the map? Thanks!

PRO
Synthiam
#73  

The exact center of the robot. Make sure you set the t265 offset in its settings.

If your t265 is facing down, it won’t be able to track anything because it only sees the floor. It has a manual from Intel and the best placement is where it can see the most landmarks.

PRO
Synthiam
#74  

Oh and make sure you have External selected for the pose hint if you’re using the t265. manual above provides more info on that

PRO
Synthiam
#76  

yah how it works is the "pose hint" is where the slam algorithm starts looking for where it thinks the robot is based on existing data. So by providing a hint, that's where it starts saying "okay look around me and see if that's where i think i am". And if not, it moves over a little and tries again.... keeps doing that until it finds dimensions that match where it thinks the robot is.

PRO
Synthiam
#77  

V29 updated....

  • robot direction is displayed on the robot rather than a compass (compass is removed)

  • robot displays the relative size that has been specified for it's "personal space"

  • path waypoints include the relative size of the robot so you can see if it's going to reach the destination and what is in the way

  • path highlights in orange if it "can't make it" to the destination

  • path planning performance improved

  • hector SLAM performance improvement

  • the detected distances for the SLAM are displayed more visibly with larger "pixels"

  • number of gui and rendering performance improvements

PRO
Synthiam
#78  

v30 updated...

  • improved graphic render style

  • history trajectory is robot size

PRO
Colombia
#79  

Hi, just to comment that each time I try to clear the map it takes more and more time. Also when trying to clear waypoints.

PRO
Synthiam
#80  

I don't know what you mean. Can you explain more detail

PRO
Colombia
#81  

Yes, when I start ARC and I make a map and try to clear it, it is very fast but when I continue doing it without restarting ARC, each attempt takes more time, adding a couple of seconds each time before I receive the message box to clear the map. I hope this helps.

PRO
Synthiam
#82  

What’s your setting for path planning iterations?

Unknown Country
#84  

Do you ever see this message in the better navigator's log window?

Quote:

Busy... Skipping location scan event

PRO
Synthiam
#85  

v31

  • shows stats in the bottom left for the path planning time and how often pose sensor data is received from the NMS. This is useful when debugging performance issues.

User-inserted image

PRO
Synthiam
#86   — Edited

@Pardilav look at the stats on build v31 and what are your numbers?

BTW your path planning resolution is really high! how come? I use a value of 2 or 3. It prevents hitting stuff

PRO
Colombia
#87  

Thanks DJ, I will try it.

PRO
Synthiam
#88  

Here's an example. If you have a value of 20, you'll only get 1 micro way-point every 20 CM. In this screenshot, you can see how the corner is missed.

User-inserted image

This is the same destination but with a resolution of 2

User-inserted image

PRO
Colombia
#89  

Understood , yes I was starting to have that issue when trying more complex curves. I thought that could be solved increasing the Robots personal space.

PRO
Synthiam
#90  

Does your robot have variable speed control? So you can control the speed of each wheel? If so, use the Dynamic Turning option because it'll be smoother with many micro way-points.

PRO
Colombia
#91   — Edited

Yes, I am using the dynamic  turning option, works good!.  I will try a bigger map with longer paths and multiple rooms now to see how it works. Still I need to fine tune some mechanical details in the robot.

PRO
Synthiam
#92  

v32

  • updated to prevent history trajectory from jumping around
PRO
Colombia
#93  

Hi DJ, I was testing and I got this error a couple of times, and then ARC closed. I didn't note any special condition when it occurred.

By the way, I have recently been having some challenges getting a "stable map," especially during rotation of the robot. I tried modifying the available parameters, but I always end up with a similar situation after the changes.

I do not remember having this problem on previous versions, before the addition of some of the new parameters. I am also checking, for example, the RPLidar position on the robot vs. the T265 and trying different speeds (very low speeds help), but no major changes. My robot base is a TANK configuration using a Dual HBridge w/PWM. I'm not sure if I am measuring the right "center" of the robot, for example. Any suggestion will be appreciated! Could the RPLidar A1 skill have an offset in order to position it in a different place? Thanks.

Version: 2022.03.26.00

System.InvalidOperationException: stop() cannot be called before start() ---> System.Runtime.InteropServices.ExternalException: rs2_pipeline_stop(pipe:07B47600) --- End of inner exception stack trace --- at Intel.RealSense.ErrorMarshaler.MarshalNativeToManaged(IntPtr pNativeData) in C:\Documents\SVN\Developer - Controls\In Production\Intel Realsense T265\MY_PROJECT_NAME\Intel.RealSense\Helpers\ErrorMarshaler.cs:line 66 at System.StubHelpers.MngdRefCustomMarshaler.ConvertContentsToManaged(IntPtr pMarshalState, Object& pManagedHome, IntPtr pNativeHome) at Intel.RealSense.NativeMethods.rs2_pipeline_stop(IntPtr pipe, Object& error) at Intel_Realsense_T265.MainForm.stop() in C:\Documents\SVN\Developer - Controls\In Production\Intel Realsense T265\MY_PROJECT_NAME\MainForm.cs:line 266 at Intel_Realsense_T265.MainForm._ts_OnEventError(EZTaskScheduler sender, Int32 taskId, Object o, Exception ex) in C:\Documents\SVN\Developer - Controls\In Production\Intel Realsense T265\MY_PROJECT_NAME\MainForm.cs:line 291 at EZ_B.EZTaskScheduler.oIwgr3tQwvyZg7jII5a(Object , Object , Int32 taskId, Object , Object ) at EZ_B.EZTaskScheduler.nwF8IHFyPN() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart()

PRO
Synthiam
#94   — Edited

That stop start error is from Intel realsense. It’s an annoying error that we’ll have to get used to unless they fix their code someday. Highly unlikely:)

  1. Are you sure the pose hint source is set to external?

  2. You still haven’t told me what your stat values are from post #86

  3. do you ever see the message "Busy... Skipping location scan event"

  4. The "center" would be the center between the treads, not the robot

PRO
Synthiam
#95  

v33

  • major update with dynamic turning

  • default settings updated (probably a good idea to try the default settings if you're already using this skill)

  • major update to navigating

Portugal
#96   — Edited

Dj, i see from the video that the BN is constantly calculating the path to destination waypoint while moving. Will it recalculate if it finds an obstacle in the way?

PRO
Synthiam
#97  

Yes, it will.

I'm no longer using the T265 at all. I'm using:

  • the Faux Odometry with the update set for 100ms

  • RPI A1 Lidar

  • the better navigator set to HECTOR for Pose Hint

And that's it... I'm getting way better results without the t265

User-inserted image

Portugal
#98  

Awesome, if that is the case, imagine using real encoders...

PRO
Synthiam
#99   — Edited

I also use encoders, and they don't work nearly as well because they go out of sync. You can use the wheel encoder robot skill if you want to try it. Read the NMS manual to see what sensors you can use.

Just try version 33 without the T265 and use the settings i said above

PRO
Synthiam
#100   — Edited

If you use the RPI Lidar with The Better Navigator, you no longer need the Faux NMS Odometry. Take a look at the latest (v14) RPI Lidar robot skill. There's an option to enable a fake pose hint event if the better navigator uses Hector as the pose hint.

Portugal
#101  

I dont have the rplidar, i am waiting for the ydlidar skill:)

PRO
Synthiam
#102  

What are you using for distance sensing? The intel realsense depth camera?

Portugal
#103   — Edited

I have 2 YDlidars, the X2 and the X4. The depth cameras are very resource hungry to use with an sbc.

PRO
Synthiam
#104  

I'm using the intel depth camera fine with my rock pi. The trouble is it does not provide enough data to use the hector pose hint.

Maybe your resolution on the depth sensor is set too high? Or too high of a framerate?

Portugal
#105  

I don't have a depth camera, i had the t265 but sold it. Planning to use lidar and encoders.

PRO
Colombia
#106  

Hi DJ, sorry for the delay to answer the questions (#94) .  It seems now it is better not to use the T265, anyway my answers:

  1. Are you sure the pose hint source is set to external? yes

  2. You still haven’t told me what your stat values are from post #86 . sensor: 0ms Path: 11ms  to 18ms or more depending on the waypoint distance.  

  3. do you ever see the message "Busy... Skipping location scan event": yes some times I saw this but not so frequently , when it happens  I re-started the lidar to test again. 

  4. The "center" would be the center between the treads, not the robot: ok.

So, would it be possible to use the Intel depth camera (i.e., 435) for other purposes and stop feeding the NMS? Or use it only for obstacle avoidance? I am also using a Rock Pi X.

PRO
Synthiam
#107  

You can use a depth sensor and lidar together for the NMS. You can have as many sensors as you want for the NMS; that's the point of the NMS. The NMS manual explains how it works.

PRO
Synthiam
#108  

v34

  • allow resetting the position without clearing the map

  • removed ekf pose hint

  • added differential pose hint (adds the difference of external pose hint updates to the hector pose hint - read manual above)

  • average pose hint improved

  • added new option to clear map & reset pose position to 0

  • new ControlCommand() that allows specifying the robot pose for custom re-alignment

PRO
Colombia
#109  

Hi DJ, for me the mapping is working better with the Lidar + T265 option (with differential pose hint as indicated). I made paths that I never reached before, and the map is much more stable. I also made some physical changes, relocating the T265 a little bit higher and tuning the offset. Something I noticed is that if I activate the RS D435, the map starts to shift instead of improving the navigation or object avoidance.

So, is it possible to also include an option in the RS435 skill to use it as a camera source only, without feeding the NMS?

I tried also the option only Lidar and Hector pose hint but I got map shifts when I made turns.

Thanks, good advance! .

PRO
Colombia
#111  

Ok thanks, I am going there.

PRO
Colombia
#112  

Hi , I had to stop testing this skill for some days but now I started again and is working very nice until now. I made also some mechanical adjustments. I will continue with the other skills. Thanks DJ.

User-inserted image

Portugal
#113  

Nice work pardilav, i have something similar in mind. Does it have IR sensors too? Would love to see a video of your robot navigating and move that arm.

PRO
Synthiam
#115  

Oh, that's a great video!!! I'd enjoy seeing more of your robot - it's very impressive. Thanks for sharing! I hope you make a robot showcase for it one day

PRO
Colombia
#116  

Thanks DJ!, yes I will publish more videos with the different skills I am including in the robot.

#117   — Edited

Very cool! It's exciting to watch your robot arm move around like that. I'm looking forward to seeing more.

Just a question; it looks like the rail the arm is mounted on wobbles a little when the arm moves. Does this interfere with its accuracy? Maybe a couple of braces near the bottom to make it more rigid? Just an idea. Either way, this is fantastic.

I absolutely love the embedded tablet in the base and the animated eyes it shows. Can you share what you are using down there? Looks like some brand of tablet running the ARC mobile app?

PRO
Colombia
#118  

Thanks for your comments Dave,  The idea of the arm mount is also to move it in a vertical way,  so I installed already a Nema motor  at the bottom to do that but still I am not using it.

About the tablet question  it is not a tablet it is a  7" HDMI touch screen connected  to the Single Board Computer (Rock Pi X ) that is running windows and ARC.

#119  

Quote:

7" HDMI touch screen connected to the Single Board Computer (Rock Pi X )

Awesome idea.

#120   — Edited

I don't think this can be done, but it would be super cool if DJ was able to make a screen layout using CM grid squares on the Better Navigator screen. This way you would know how far you need to travel without having to measure your home to calculate how far things are by the camera.

DJ, I have faith you could accomplish this task.

Cheers!

PRO
Synthiam
#121  

You don’t have to measure anything to use this robot skill. The lidar performs the measurements. You might need to scroll up and read what this robot skill is - it’ll help understand how to use it as well.

PRO
Colombia
#122  

Hi Dj, I am trying to load a saved map but nothing happens. It generates the file when saving but the load feature is not working. Thanks in advance for your support.

PRO
Colombia
#123  

Hi DJ. Just to comment that I replaced the Rock Pi X with a Beelink U59, and the performance of the skill is much better now. The map loading is still not working, but as I understood it, the correction is ongoing according to the last message I received from the support team. I hope I can share more videos soon.

#124   — Edited

I like this skill, the cartesian coordinate system is the way my brain works. You guys are making some very cool bots! Will be experimenting soon.

PRO
Synthiam
#127  

Oh really?! That's awesome!! Are you using it with the Camera NMS?

#128   — Edited

Not yet that will be the next skill to be incorporated.

#129  

I'd like to input a map into The Better Navigator, what type of file does it accept?

#130  

I've never used this skill or any kind of navigational programs. However after reading through a lot of the above instructions and looking at links provided I've come to the conclusion that you can't load your own maps that were made outside of these skills. Sounds like you can only load maps that have been made and saved by other compatible skills? 

Quote:

The lidar or depth camera data will create a map of the room(s) as the robot drives. 

Quote:

To get sensor data for mapping, other skills must be loaded that are compatible.

#131  

Thanks for the info Dave. Maybe this would be an area where an update would be appropriate. I'm thinking of using the camera overlay or an actual cad drawing overlay and having them work together. Have it navigating properly now but always looking for the next step. Working on Camera Pose but there are different issues to work through, multiple wifi or wifi extenders, how to use wireless cameras without using their app because when you use someones unscrupulous app they can have access to all of your data which can be very bad. Have a Merry Christmas!

PRO
Synthiam
#132  

You can’t draw a map. That would be impossible considering how this works. If you research slam algorithm, you’ll understand the complexities of it. It is doing very advanced real-time analysis of the environment. If you drew a map, it would be torn apart by the algorithm immediately

#133  

Ok understand. When you say it's very involved behind the scenes, I'm believing it. Have a Merry Christmas!

PRO
Synthiam
#134  

Thanks! You as well - wish your family a great holiday.

PRO
Kuwait
#135  

Dear friends and robot builders,

Honest greetings and respect. I need to implement a Hector SLAM navigation / Navigation Messaging System (NMS) for indoor navigation on a 2-wheel robot platform. I'm in Africa and I'm running low on resources; I can't get Intel depth cameras. I am working on a medical robot project (non-profit). Here is my available hardware:

  1. Hitachi-LG LDS Lidar
  2. Kinect Xbox 360
  3. 2-wheel encoder counter
  4. Win10 companion computer (LattePanda)

I am afraid of working with the Kinect Xbox 360 system. I have seen some interventions and comments related to its inaccuracy. I am forced to use it because I have no other alternative. Based on these components, can I build the (Hector SLAM navigation) Level 3 system? @DJ Sures

#136  

In the beginning, just keep it simple until you get it working. You don't need the encoders or the Kinect for now. You will need the lidar working properly. DJ made a good video for The Better Navigator that you will need to study closely. In the video you will see how to check the HECTOR option, which bypasses the encoder and Kinect. I have it working, and it's fine with just the lidar. I will be adding more sensors as needed. Good luck!

#137  

I have been trying to set the acceleration to a more moderate level for when it rotates on the spot and then moves toward its position, but nothing seems to work. I'm just wondering if the Movement Panel even allows for accel/decel, because it would interfere with the forward navigation and slight changes of direction. It would help if it worked at least when it rotates on the spot and then accelerates, as my bot jumps a bit after it rotates. This kind of throws the lidar out of whack as it bounces. The speed is not even set all that high.

#138  

What sensor are you using when you say "nothing seems to work"? Can you please expand on which sensor you use with this robot skill? Also, the acceleration (if supported by your movement panel) would not affect the navigation. Movement panels are integrated with Synthiam ARC, and the robot skill does not need to know how the robot is being moved. This navigation robot skill gets its information from sensors, not a movement panel.

#139  

I'm using a 360 lidar with The Better Navigator. Can you have a continuous servo with the same D0 designation and then change the acceleration, or does the Movement Panel have a higher hierarchy and not let anything else affect it?

PRO
Synthiam
#140  

I don’t understand the question. The only robot skill that should move the servo is the movement panel. There’s no hierarchy; there are only movement panels.

#141  

OK, I understand, but in the Movement Panel there is no means for acceleration/deceleration. So it is full user-set speed whenever it stops and starts. Yes, I understand there’s a speed setting.

#142   — Edited

Ok, so I hear you saying that you want the servos to "ramp" up to speed slowly and then "ramp" back down slowly when the move is stopping. It also sounds like you want to be able to control how fast or slow the ramping is. You don't want the servos to jump to full speed or suddenly stop when the move is starting or is complete?

#143  

Here is the scenario. The bot has 10" wheels. It goes to a waypoint and has to do a 170-degree turnaround. It then rotates on itself (one wheel turns one way, the other turns the other way), so coming out of, say, a 70-speed rotation it then goes to 100-speed forward. Well, one of those wheels is going backwards, so at one point the two effectively add together (170 for a split second), and that makes the front two wheels come off the ground. If acceleration/deceleration is not an option, a possible simple fix is to add a 1-2 second delay between the rotation and the forward movement, but that would have to be done behind the scenes.

PRO
Synthiam
#144   — Edited

Movement panels for servos do not control acceleration - the acceleration value is a parameter of the servo. It can be assigned with the Servo.SetAcceleration and Servo.SetVelocity JavaScript commands.

Acceleration for servos would need to be managed by the EZB or servo controller. Servos use high-speed PWM that is far too fast for a PC to generate, which is why they need a microcontroller. The acceleration is included with that PWM, so you'd need to use a controller that supports acceleration.

ARC has an acceleration parameter for controllers that support it. This is documented in the Servo Controls page: https://synthiam.com/Support/ARC-Overview/Servo-Controls.

For servo controllers that support acceleration, I am pretty sure Dynamixel, LewanSoul, LynxMotion, Pololu Maestro, and maybe Kondo are options; of the ones that use PWM servos, I think the Pololu is the only one with built-in acceleration. You'd have to closely examine what's available in the robot skill section and EZB section.
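As a rough sketch only (the d0/d1 port constants and numeric values are assumptions for illustration - substitute the ports your movement panel actually drives, and confirm the exact command names and ranges in ARC's JavaScript manual), applying those servo parameters could look like:

    // Sketch only: assumes an EZB/servo controller that supports acceleration.
    // d0 and d1 are placeholder ports for the left/right drive servos.
    var accel = 10;   // acceleration value (range is controller-specific)
    var vel   = 50;   // velocity limit

    Servo.SetAcceleration(d0, accel);   // ramp rate for the left drive servo
    Servo.SetVelocity(d0, vel);
    Servo.SetAcceleration(d1, accel);   // ramp rate for the right drive servo
    Servo.SetVelocity(d1, vel);

If the controller honors these values, the drive servos ramp up to speed instead of jumping, which should reduce the front-wheel lift described above.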

PRO
Synthiam
#145  

Also, watch this tutorial video I made to see if there are settings you're missing.

#146  

Ahh, that's what I was looking for - thanks for the explanation. It's right there in Blockly.

#147  

Ya DJ, that really is a mind-blowing video you did, showing how the robot can change direction and find a new pathway home using only the lidar. Every time I lose my interest in robotics for a while, all I need to do is watch one of your videos on how easy it is to make a robot seem super intelligent! I start catching the robot addiction bug again, LOL!

#148  

User-inserted image

Question refers to enclosed pic.

I have a question about the mounting of the lidar. As you can see in the pic, I plan to mount the lidar in the center of my base (not attached yet). I also plan to place a shelf over top of the lidar, the same size as the black bottom shelf. The top shelf will be connected to the base by the three screws, with the shelf only as high off the lidar as needed.

Hope I explained that right?

Lidar: Will the three screws between the lidar base and the top shelf interfere with the correct operation of the lidar or Better Navigator?

Thanks

#149  

With something in the way, the lidar will not be able to see through the material. That means you will not have the 90, 180, and 270-degree views, but you will have the rest. So it will still operate fine, just without those angles. Also, you need to read the manual above and ignore distance values less than the distance to the screws; otherwise, it’ll always detect them as objects.
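Purely as a conceptual illustration of that advice (the threshold and sample readings below are made-up numbers, and the real filtering is configured per the manual referenced above rather than scripted), ignoring anything closer than the screws amounts to:

    // Conceptual sketch only - not the skill's actual configuration.
    // minDistance is roughly the distance to the mounting screws (hypothetical value).
    var minDistance = 8;                       // cm
    var readings = [6, 7, 95, 120, 240, 60];   // example lidar distances in cm

    // Keep only readings farther away than the screws; anything closer is ignored.
    var filtered = readings.filter(function(d) {
        return d > minDistance;
    });

    print(filtered);   // 95, 120, 240, 60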

#150  

Thanks for your response. Nothing will block the lidar sensor except the three screws. Thanks for the info ..."ignore distance values less than " ... that's the ticket! I was worried that the lidar would always see the screws as objects and wasn't sure how they would affect the program.

Again Thanks!

#151   — Edited

DJ, you mentioned that it will work with any Movement Panel, but I primarily use the Auto Position Movement Panel, which has many different servos being used. It seems like the user would need to designate which two servos to use when using The Better Navigator and Auto Position. Is there something in the configuration that has already addressed this scenario? I already have it working nicely with the typical Movement Panel, but since it only allows one panel per project, I would have to choose the Auto Position panel because I use it for so many other scenarios.

#152  

The variables list TheNavWaypoints but do not give the locations of the waypoints. How do I get to those numbers for, say, Waypoint 2? Is there an array they are being held in, since each is an x and y value?

PRO
Synthiam
#153  
#154   — Edited

I see that Auto Position is considered a movement panel. If I am using 20 different steppers in that panel, how will it know which stepper is controlling the lower wheels compared to the arms, wrists, end effector, etc.? Ahh, here it is, under "Two Versions": There are two versions of the Auto Position robot skill (Movement Panel and non-Movement Panel). The only difference between the two is the inclusion of the Movement Panel functionality.

#155  

I see what you guys did, and I understand it. It's all a process that sometimes takes some deep concentration to absorb.

#156  

The Better Navigator works great so far. I have mapped my entire house, set waypoints, and navigated to them. However, when I try to load a saved map, there is nothing, and I am not able to navigate to my waypoints without first mapping again. It would also be great if we were able to block out hazards such as stairwells and chair legs that do not get mapped when scanning. Is there a problem with the map loading?

#157  

So, everyone is using the "RPI Lidar A1". Is this the only one compatible with this skill? Where can I get one?

#158  

The Better Navigator uses the ARC NMS (Navigation Messaging System), which can use a variety of input devices. The sensors that you will need depend on your application. For general SLAM waypoint navigation, the A1 seems to be popular.

We recommend familiarizing yourself with the NMS and the features of this skill and other NMS-compatible skills to determine what works best for your application.