Resolved by DJ Sures!

Wheel Slip Impact On Navigation

I am experimenting with navigation using my WheelBot and am considering modifying it into a telepresence robot. Currently, I'm utilizing a combination of the following ARC skills: Continuous servo Movement Panel, Better Navigator, Joystick, Camera Device, and RPLIDAR (Model C1). In the RPLIDAR skill, I've enabled the "use NMS" and "fake NMS pose hint event" options, and I'm using Hector as the pose hint source in the Better Navigator skill.

In one of my tests, the robot successfully navigated a 1.5-meter course and returned to its "home" position. However, during a second test, I noticed only half of the return path was visible. Normally, the entire return path should be displayed in green (refer to the LIDAR image below).

User-inserted image

It appears that some information was lost during navigation. In another test, the yellow dot representing the robot was correctly positioned but then suddenly shifted to another location, as shown in the second LIDAR image.

The original position was similar to the photo above (LIDAR Image 1).

User-inserted image

I'm not entirely sure if my observations are helpful in explaining this behavior, but I wanted to share them. My floor is tiled, and I've noticed the robot’s wheels sometimes slip, slightly spinning in place. I do not have encoders on the wheels. Could this affect navigation accuracy?

If wheel slippage is influencing navigation, could someone explain how wheels without encoders affect navigation?

If it's not related to the wheels, could you please provide some guidance on how to address this issue? I'm considering adding spikes to the wheels if this might help. Thank you!


Related Hardware: AdventureBot
Related Controls: The Better Navigator, RPLidar


PRO
Canada
#1  

Below is an explanation and some suggestions regarding wheel slip and how it can affect navigation accuracy in your ARC project.


How Wheel Slip Affects Navigation

  1. Lack of Accurate Odometry Feedback
    Without wheel encoders, the robot cannot accurately measure how much each wheel has rotated. In many navigation systems, the odometry (dead-reckoning data) is used either directly or as a pose hint to refine the robot’s position. When your wheels occasionally slip on the tiled floor, the expected movement does not match the actual travel distance, which results in incorrect pose estimates (the sketch after this list illustrates the effect).

  2. Impact on Pose Estimation and SLAM
    In your setup, you are using the RPLIDAR with the "fake NMS pose hint event" and are relying on the Hector SLAM data as the source for the pose hint in The Better Navigator. When there is wheel slippage, the expected movement of the robot may not be clearly represented in the LiDAR scan matching. This can manifest as:

    • Partial Return Path: The loss of part of the return path (observed as only half the path in green) may be due to inconsistent or lost pose data when the wheels slip.
    • Sudden Position Shifts: The yellow dot (representing the robot) suddenly shifting positions can indicate that the fusion of the LiDAR data (Hector SLAM) and your (fake) pose hint is being thrown off by unaccounted-for wheel movement.
  3. Using Fake Pose Hint with Hector SLAM
    Enabling the "fake NMS pose hint event" allows Hector SLAM to generate pose hints in environments where there is no external odometry. However, if your robot is experiencing wheel slippage, even Hector SLAM may struggle to create a continuous or reliable map because the actual movement isn’t being reported accurately.


Suggestions to Improve Navigation Accuracy

  1. Improve Traction

    • Consider Adding Spikes or Grippy Surfaces: Increasing traction on your wheels can reduce slippage on the tiled floor. Better grip should mean that the wheels rotate more in line with the actual movement, even if you still don’t have encoder feedback.
    • Rubberized Wheel Covers: In addition, using wheels with a rubberized surface or adding grip tape could help improve traction further.
  2. Enhance Odometry

    • Add Wheel Encoders: If possible, consider retrofitting your AdventureBot (or the modified telepresence robot) with wheel encoders. Encoder feedback can provide a true measure of distance traveled, which greatly enhances the performance of pose estimation in navigation.
    • Exploit Other Sensors: If retrofitting the wheels with encoders isn’t feasible, consider complementary sensors (such as an IMU or other external pose sensors) that can help refine your position estimates. The Better Navigator can work with various sensor inputs to enhance accuracy.
  3. Tune Your Navigation Parameters

    • Revisit the Pose Hint Source Settings: Since you are using Hector for the pose hint, you might experiment with the settings in The Better Navigator to see if a different mode (or tweaking parameters like expected movement thresholds) can compensate for occasional wheel slip.
    • Adjust the Thresholds: Some navigation skills allow you to adjust how aggressively they respond to pose changes. Increasing the "Degrees of Forgiveness" or modifying dynamic turning settings might help the system better cope with unexpected shifts.

Next Steps

  • Trial Traction Enhancements: Consider first trying out traction enhancement methods (spikes, rubber covers) to see if the slip issue diminishes.
  • Monitor Map Consistency: Re-run your navigation tests and watch if the return path is consistently mapped and the yellow dot remains stable.
  • Experiment with Settings: If you continue to see discrepancies, try adjusting the parameters within The Better Navigator skill to better handle minor discrepancies in movement.

Do these suggestions align with what you were thinking, or would you like further details on any of the adjustments? Feel free to ask if you need more clarification or step-by-step guidance on configuring any of these features.

For more details on The Better Navigator or general navigation tips, please visit:
The Better Navigator Manual

And for hardware support on AdventureBot, see:
AdventureBot Hardware Support


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
Synthiam
#2  

The image looks like the robot is moving too quickly. The lidar spins and grabs frames. The SLAM requires several frames to build confidence in its map. The robot must move very slowly; it's not a race:). Try again moving super slow, and you'll see it work flawlessly.

Wheel slippage has nothing to do with this if you're not using wheel encoders. Using Fake Hint, the SLAM system uses the map to identify its position.

If the robot is skipping to a different location on the map, it's because the map is incomplete. If the map is incomplete, it's because the slam did not get enough frames. If the slam did not get enough frames, it's because the robot is moving too quickly.
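As rough arithmetic on why speed matters (illustrative only; the RPLIDAR C1's scan rate is nominally around 10 revolutions per second, and your numbers may differ):

```python
SCAN_RATE_HZ = 10.0  # RPLIDAR C1 nominal scan rate (approximate)

# How far the robot travels between two consecutive lidar frames:
for speed_cm_s in (5, 20, 50):
    shift = speed_cm_s / SCAN_RATE_HZ
    print(f"at {speed_cm_s} cm/s the world shifts {shift:.1f} cm per scan")
```

The smaller that per-scan shift, the more each new frame overlaps the previous ones, and the easier it is for the scan matcher to lock consecutive frames together.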

PRO
Canada
#3  

Below is an explanation that expands on DJ_Sures’ observations and some additional suggestions for your setup:


Key Points Raised by DJ_Sures:

  • Robot Speed and Frame Capture:
    The behavior you’re observing, such as the incomplete return path and sudden jumps in position, appears to be caused by the robot moving too quickly. The RPLidar needs to capture several frames in order for the SLAM system (using Hector via the fake pose hint) to build a reliable map. When the robot moves too fast, the LIDAR doesn’t capture enough data, resulting in an incomplete map or abrupt shifts in the robot’s perceived position.

  • Wheel Encoders and Fake Pose Hint:
    When using fake pose hints, the system relies solely on the SLAM scans to update the map. Without wheel encoders or another form of external odometry, any discrepancy is due to frame capture issues (i.e., the robot’s speed) rather than wheel slip.


Additional Suggestions to Address the Issue:

  1. Reduce Your Robot's Speed:

    • As DJ_Sures mentioned, try navigating at a much lower speed. This gives the LIDAR enough time to capture consecutive frames and allows Hector SLAM to continuously update its mapping.
    • Temporarily adjust your Movement Panel settings (e.g., in the Continuous servo Movement Panel or Joystick control) to limit the maximum speed during testing.
  2. Monitor LIDAR Scanning Performance:

    • Ensure that the LIDAR is spinning at its optimal speed and producing consistent scan data. If the scanning rate drops, the SLAM system might not receive a continuous stream of data, leading to mapping errors.
  3. Review and Tune SLAM Settings:

    • In The Better Navigator configuration, check any parameters related to how many frames are required to build confidence in the map.
    • While the fake pose hint event simplifies setup, you might also experiment with alternative pose hint sources if available, or adjust the sampling or filtering parameters to better handle slower movements.
  4. Test Incrementally:

    • Perform tests on a shorter course where you can control the movement speed very precisely. Monitor the LIDAR's output (if you’re using the debug or log output in ARC) to verify that enough frames are being captured for each segment of the journey.

Conclusion:

Based on DJ_Sures’ input and your own observations, the inconsistent mapping is most likely due to the robot moving too quickly for the SLAM algorithm to gather sufficient frame data. Working at a slower pace should help produce a more continuous and reliable map. Wheel slippage isn’t a contributing factor here if you're not using wheel encoders.

Please let me know if you’d like assistance with adjusting any specific ARC settings or if you have any further questions regarding your navigation setup.

Feel free to ask if you need further clarification!


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
Germany
#4  

Great, and thanks for the explanations. Even if the wheel slip is not the problem, I have to improve the traction, maybe bigger wheels or another material. I will lower the speed as much as possible and check the parameters.

PRO
Germany
#5  

I modified my Wheelbot by placing the drive wheels at the front instead of the back. I thought this would improve traction; TurtleBot uses this type of drive. There is a bit less slipping, but it's still not optimal. I really need to change something, maybe a lighter battery or different wheel materials. I'll have to see.

I noticed something: if the robot moves normally, without jerks due to slipping, the return-to-home tracking in Better Navigator is correct. However, when the robot moves, slips, moves again, and slips again, the tracking is incomplete: part of the green tracking line is missing. The movement speed is very slow, about 20 cm in 3.5 seconds.

Another observation: two or three times, when I requested a return to the home position, the robot started zigzagging, moving very little (left, right) before stopping. I was testing with small movements, around 1.5 meters.

I’m using my Wheelbot as a prototype to develop a telepresence robot. I want to make sure everything works correctly before creating a finalized product, and since navigation is quite important for a telepresence robot, I need it to be reliable.

I have the same setup as DJ in his video "Add Lidar Navigation to a Robot," except that I'm using the RPLIDAR C1 model and an SBC running Windows 11.

PRO
Synthiam
#6  

Is your lidar at the direct center of the robot chassis? Also, the wheel position must rotate the robot exactly on the spot, meaning the wheels are centered between the front and back of the robot chassis.

If you look at a TurtleBot or a robot vacuum, you’ll see what I mean. The wheel position and lidar centering are really important.

PRO
Germany
#7  

Oh dear, no! The wheels are at the front of the robot, and the lidar is 7 cm behind, not at the center of the chassis. So, everything is wrong in my design. I got a TurtleBot chassis, so I’m going to use it.

PRO
Synthiam
#8  

Ah yes - that'll do it. There's a bunch of math involved in why the robot's rotation must maintain the lidar to be centered. That is why you commonly see that configuration on robot platforms with lidars, specifically robot vacuums. Robot vacuums are probably the leading use-case for lidar on robots because they're the most common mass-consumer product that uses a lidar. @Athena would better explain why having the lidar located in the center and the wheel centered between the front/back is essential for SLAM.

PRO
Canada
#9  

Here's why having the lidar mounted in the center and having your drive wheels aligned symmetrically is so important for SLAM and overall navigation:

  1. Consistent Rotation Center:
    When the lidar is exactly at the robot’s center of rotation, every time the robot turns, the sensor pivots around its own position. This makes the math behind the SLAM algorithms much simpler because the relative positions of obstacles are measured around a fixed, known point. If the sensor is off-center, the measured points can shift in unpredictable ways as the robot rotates, resulting in errors in the produced map (the sketch after this list makes the geometry concrete).

  2. Accurate Pose Estimation:
    SLAM algorithms rely heavily on accurate pose estimation (i.e., knowing exactly where the robot is and how it's oriented). When your wheels are perfectly centered front-to-back, the robot rotates on the spot without any translational drift. This means that the lidar data accurately reflects the environment. With an off-center mounting, rotational motion inadvertently introduces translation, causing the map to become distorted or shifted: the kind of errors you saw as incomplete tracking.

  3. Simplified Data Processing:
    With a centered lidar, you eliminate the need for compensating math in your SLAM algorithm. When the sensor is not centered, the algorithm must incorporate complex calculations to adjust the scan data based on the offset, which can lead to inaccuracies if not precisely calibrated. This is why many robot vacuums and TurtleBot-style platforms position their sensors at the center.

  4. Reliable Return-to-Home and Navigation:
    When the geometric relationships between the robot’s chassis, drive wheels, and lidar are accurately maintained, navigation functions such as return-to-home become much more reliable. Any misalignment can manifest as unexpected zigzagging or erratic movement because the mapping of real-world coordinates to your robot’s SLAM-derived positions is off.

In summary, centering the lidar and ensuring that your wheels allow for pure rotation benefits the SLAM process by keeping scan data consistent and minimizing the transformation errors the algorithms must correct. This leads to better maps, more accurate navigation, and more reliable autonomous behavior.
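To make the geometry concrete, here is a small sketch (plain Python; the 7 cm figure comes from the offset mentioned earlier in this thread, everything else is illustrative):

```python
import math

def sensor_pose(robot_xy, robot_heading, lidar_offset):
    """World position of a lidar mounted lidar_offset meters ahead of the
    robot's center of rotation (negative means behind it)."""
    x, y = robot_xy
    return (x + lidar_offset * math.cos(robot_heading),
            y + lidar_offset * math.sin(robot_heading))

OFFSET = -0.07  # lidar mounted 7 cm behind the center of rotation

# The robot turns on the spot; its center never moves, but the lidar does:
for deg in (0, 90, 180, 270):
    sx, sy = sensor_pose((0.0, 0.0), math.radians(deg), OFFSET)
    print(f"heading {deg:3d} deg: lidar at ({sx:+.3f}, {sy:+.3f}) m")
```

With the offset, a "pure" rotation drags the sensor around a 14 cm diameter circle, so every scan is taken from a slightly different location than the rotation model expects; with OFFSET = 0.0 the sensor stays put at (0, 0).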

Does this clarify the importance of sensor and wheel alignment for your platform? Let me know if you’d like more details or have additional questions on optimizing your robot’s configuration!


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
Germany
#10  

Thanks for the explanation. I learn something every day.

PRO
Synthiam
#11  

I like hearing that! Means Synthiam's working!

I just got back from a school with grade 6 students. Boy, they surprised me so much. Before I even started talking and the students saw the robots, they had their hands up. They asked questions like "Do those robots have speech recognition?" and "What do you program in?" All sorts of technical questions! I was blown away. The stuff that adults are learning today, kids already know.

PRO
Germany
#12  

Yes, and they learn fast. There is a huge amount to learn with Synthiam. We're not stuck in a daily routine.

PRO
Germany
#13  

I modified the design; it looks like the TurtleBot 3 now. It can turn on the spot, almost perfectly. The T265 is placed above the LIDAR. What’s strange is that the RPLIDAR skill doesn’t display anything, and the real-time values are all at 4000. I remember seeing in previous tests that these values would normally change. I must have forgotten something again, but I can’t figure out what. The navigator displays the room. Here's a picture. I’ll keep running tests, but maybe I have a problem with the RPLIDAR!!!

User-inserted image

#14  

Version 25 of the RP Lidar robot skill has a fix for the UI render. The UI render can be disabled to save CPU time. The previous version of the RP Lidar robot skill appeared to have a bug with enabling the UI render. Mostly, the UI render is left disabled and only used for debugging purposes.

PRO
Germany
#15  

Hi Synthiam Team,

Unfortunately, I still have no good news - I'm continuing to experience issues with navigation.

My first design was completely off, so I rebuilt the system using a TurtleBot 3 chassis with two 360-degree servos, an RPLIDAR placed in the middle, and a T265 camera mounted 20 cm higher and 5 cm behind the center. The robot moves well mechanically, but the navigation is very poor.

I tried many combinations:

RPLIDAR + T265

RPLIDAR only

Navigator set to Hector, External, Fused, Dynamic

Disabling the Navigator and reinstalling it

With short, straight movements, the robot can return to its starting point. But with longer paths (e.g. 3 meters), it fails. It might turn left or right randomly, stop after 10-30 cm, or display a return path that’s completely outside the room.

Since I couldn’t get it working, I decided to switch to a Roomba 600 chassis. I bought a used one in good condition and modified it using the SCI cable and H/W UART. I followed the Synthiam video on connecting Roomba to EZB via H/W UART 1.

Using the iRobot Roomba Movement Panel skill, I can control the robot fine: sending init, powering on/off, and receiving sensor streaming data. I also installed a camera, and color/object tracking works well. I decided to try using the Better Navigator this time.

I followed the procedure described in the Roomba Movement Panel skill exactly:

Enable Send position to NMS in the skill config

Connect the Roomba and enable sensor streaming (data is received)

Press reset/power off (to reset wheel encoders and set home position)

Place the Roomba on the ground (this is position 0,0)

Press the clean button (light comes on)

Press init and enable sensor streaming again

Up to this point, everything looks good.

Then I started the T265 and the RPLIDAR. Here’s where things get strange:

If I use pose hints like External, Average, Fused, or Dynamic, the yellow dot (representing the robot position) becomes unstable: it either spins on the spot or jumps to random positions. It only looks somewhat better when I select Hector and Fake NMS in the RPLIDAR skill.

I tried a 1.5-meter run and then asked the robot to return home. It couldn’t; it stayed in place without showing any error. Sometimes I see the message: "Busy... Skipping location scan event (if unsure, ask about this on the forum)"

I also tested with just the RPLIDAR (without T265), and it wasn’t any better. Manual control works fine. Mapping looks good, and obstacles appear in the right places, so the RPLIDAR seems to be working. I tested the T265 using Intel RealSense Viewer, and it looks fine as well.

This is disappointing for my telepresence robot project. I was quite confident that the Roomba setup would finally work.

Maybe I’m missing something, but after many hours of testing and trying different configurations, I’m out of ideas.

Any help or suggestions would be greatly appreciated. Here are 3 pictures.

User-inserted image

User-inserted image

User-inserted image

PRO
Synthiam
#16  

Do you have a photo of the robot so I can see the configuration?

Also, turn on the lidar UI render to ensure the lidar is facing the correct way relative to the robot's heading. The first photo looks great, but I can't quite figure out what's happening with the others, specifically the last.

Is the robot moving very slow with these settings...

  • hector pose hint in better navigator
  • fake NMS enabled in lidar robot skill
  • t265 robot skill entirely removed or not started
  • push to NMS disabled in robot movement panel

and verify by enabling UI render that the lidar render is facing the correct direction.

PRO
Germany
#17  

User-inserted image

User-inserted image

I'll give it a try ASAP with your settings.

PRO
Synthiam
#18  

Oh, the robot has no rear view. A little less than 50% of the scan is missing. That's going to be a problem for how SLAM works. SLAM tries to determine where the robot is on the map by looking for distance measurements that position it within the previous map.

When a huge part of the data is missing, there’s no easy way to identify the location, specifically because there isn’t enough stored map data to compare against.
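A toy sketch of that idea (plain Python with a contrived one-dimensional "corridor"; this only illustrates the principle, not how Hector actually works):

```python
# Toy corridor along the x-axis. The robot has already mapped the wall
# behind it at x = 0; the wall ahead at x = 5 is NOT in the stored map yet.
MAPPED_WALL_X = 0.0
FRONT_WALL_X = 5.0
TRUE_X = 2.5

def scan(robot_x, include_rear):
    """Simulated lidar ranges along the corridor axis."""
    beams = {"front": FRONT_WALL_X - robot_x}
    if include_rear:
        beams["rear"] = robot_x - MAPPED_WALL_X
    return beams

def match_score(candidate_x, beams):
    """Count beam endpoints that land on a wall the stored map knows about."""
    score = 0
    for direction, rng in beams.items():
        endpoint = candidate_x + rng if direction == "front" else candidate_x - rng
        if abs(endpoint - MAPPED_WALL_X) < 0.05:
            score += 1
    return score

for include_rear in (True, False):
    beams = scan(TRUE_X, include_rear)
    scores = {x / 10: match_score(x / 10, beams) for x in range(0, 51)}
    best = max(scores.values())
    ties = [x for x, s in scores.items() if s == best]
    label = "full 360 scan    " if include_rear else "rear half blocked"
    print(f"{label}: {len(ties)} candidate position(s) tie for best match")
```

With the full scan, exactly one position explains the data; with the rear half blocked, every candidate ties and the matcher has nothing to lock onto, which is when the robot skips around the map.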

Use a Dremel and cut a hole in your holder behind the lidar.

When I get home, I’ll take pics of my setup on one of my robots that has a camera and sensors above the lidar. That will give you ideas.

PRO
Germany
#19  

OK. I will see your design and modify mine.

PRO
Germany
#20  

I’ve modified my robot’s design based on the example shown in the video Add LIDAR Navigation to a Robot. I’ve attached some photos for your reference.

At the moment, I’m unable to test the navigation because of a power issue: my battery shut off unexpectedly, and now Windows won’t boot. I’ll need to resolve this problem first before I can begin testing the navigation features.

I really hope the new design is correct, but please feel free to point out anything that still needs improvement.

User-inserted image

User-inserted image

User-inserted image

User-inserted image

#21  

That does look closer to what DJ might have been referring to. It is a holiday weekend, so no one is really around.

I do have two concerns about your change.

  1. Make sure your RealSense offset reflects its location relative to the center of the lidar. Every single fraction of a mm matters, and there’s no internal mathematical interpolation that can resolve it. The dimensions are measured and used for the SLAM navigation, which requires incredibly high precision. Your sensor is offset from the lidar.

User-inserted image

  2. The opening will help, but the angle of the lidar laser against the leg will cut off a significant amount of measurement data. It would be best if the leg were centered exactly left-right relative to the lidar; that way, it’s only cutting off 2-5 degrees, based on thickness. The configuration you have now is still blocking significant measurement.

User-inserted image

PRO
Germany
#22   — Edited

Many thanks for your help. I will modify it according to your recommendations. I always thought the position of the T265 was not that critical, as long as it faced the front.

PRO
Synthiam
#23  

Oh yeah, that’s a good suggestion from support. It’s crazy how important the measurements are, until you realize that a small error in the math persists and grows over time.

The T265 is okay, but I don’t use it often; I find it really starts to drift. Sometimes I’ve seen a drift of 20-30 cm over a short period of time. The problem with the drift can usually be resolved by using the last option (I think) in the pose hint type in the Better Navigator. I think it’s the last; I can’t recall off the top of my head.

But the last option uses the difference between the SLAM and the external (T265) pose and keeps that difference for the next time.
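Here's a minimal sketch of that bookkeeping (illustrative Python only; the class and method names are made up, and this is not the Better Navigator's actual internals):

```python
class DriftCorrector:
    """Remember the difference between the SLAM pose and a drifting
    external pose (e.g., T265), and apply it to later external readings."""

    def __init__(self):
        self.offset = (0.0, 0.0)  # slam minus external, kept between updates

    def on_slam_fix(self, slam_xy, external_xy):
        # SLAM produced a confident pose: record how far the external
        # sensor has drifted away from it.
        self.offset = (slam_xy[0] - external_xy[0],
                       slam_xy[1] - external_xy[1])

    def corrected(self, external_xy):
        # Between SLAM fixes, shift the drifting external pose by the
        # last known difference.
        return (external_xy[0] + self.offset[0],
                external_xy[1] + self.offset[1])

corr = DriftCorrector()
corr.on_slam_fix(slam_xy=(1.00, 2.00), external_xy=(1.25, 2.10))
print(tuple(round(v, 2) for v in corr.corrected((1.50, 2.40))))  # (1.25, 2.3)
```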

PRO
Synthiam
#24  

Yeah, Dynamic is what it's called. This is from the Better Navigator robot skill manual page.

User-inserted image

However, I'd first ensure the lidar works correctly by using "Hector" as the pose hint type because that will depend entirely on the lidar's positioning and scanning. The lidar's map will be the best option for determining where the robot is on the map, without using external sensor data. Once you get that dialed in correctly, you can add additional sensors. For example, only add one sensor at a time to ensure they complement each other.

If you do use Hector as the pose hint type, don't have any other odometry pose data being sent into the NMS, or it will confuse it.

PRO
Germany
#25  

Great news! My robot was able to move correctly using only the RPLIDAR C1 with Hector as the pose hint. Here’s the video (2:27 long): https://www.youtube.com/shorts/UyGPficqqkI At 2:22, the robot bumps into a door - that was the only time it got off track. It moves with a bit of a zigzag pattern, but it stays on course overall. I’ll continue doing a few more tests, and I’ll also try the T265 camera later with the Dynamic pose hint.

PRO
Synthiam
#26  

That's awesome! Got it sorted out now. Excellent. It's surprising how much data is required for SLAM to work. I should do a video someday on how SLAM works. There aren't many good sources out there.

PRO
Germany
#27  

Oh yes, a video could be a good idea. It seems to me that the description of the RPLIDAR skill has been updated with better explanations, especially regarding the position of the LIDAR on a chassis. If it continues to perform well, I might remove the T265 entirely; then the rear view of the LIDAR will be 100% free and unobstructed. That should simplify things and improve mapping. I will do a short trial with the T265 to compare the results. Appreciate the help!

PRO
Synthiam
#28  

I'm curious about your experience with the T265... I think I mentioned before to put it in Dynamic pose hint mode. You can try them all to find the one that works best for you, but I feel that Dynamic might be the most reliable since the T265 drifts so much.

PRO
Germany
#29  

I ran a few tests using the Intel T265 together with the RPLIDAR C1. I couldn’t continue for too long because my battery drains rather quickly. Here are two videos: in the first one, the T265 is in dynamic mode, and in the second one, it's in differential mode.

Dynamic mode: https://youtube.com/shorts/f-qAfETMvwI

Differential mode: https://youtube.com/shorts/OM1KD6hecqk

In dynamic mode, the results aren’t great: the yellow dot keeps spinning in place, and it’s impossible to return to the starting point. It just zigzags on the spot. However, in differential mode, it works much better. It behaves similarly to using the RPLIDAR alone. It might just be my impression, but it seems to make fewer small zigzags in differential mode.

In the description of the Better Navigator skill, it suggests using Dynamic as the pose hint. But in Example #2 (Intel RealSense T265 & 360-degree LIDAR), it recommends using Differential instead. That's why I decided to try that mode.