Asked
Resolved by DJ Sures!
Hi @DJ, probably off topic, but if you were to purchase a LiDAR for ARC, would you buy the Hitachi-LG LDS or the RPLiDAR A1? The hardware-spec link at the top of this page appears to be outdated, but the Robotis page lists a 120 mm to 3,500 mm range for the Hitachi, with 5% error, 300 rpm, and 1 degree resolution. https://www.robotis.us/360-laser-distance-sensor-lds-01-lidar/
The RPLiDAR appears to have a much larger range of 12 meters, and they seem to be easy to find, with a lot available for sale. https://www.slamtec.com/en/Lidar/A1Spec
I don't own either, so I am just trying to work out which would be the best one to purchase to work with ARC. Both seem to be about the same price, $130 to $140 Canadian, although the Hitachi is hard to find.
The RPI is my favorite. The Hitachi is okay but difficult to come by, and I find it doesn't produce as many scans per rotation. It seems the RPI has a fuller scan if that makes sense. We're doing a ydLidar robot skill for a client right now - that'll probably be up in a week or so. But bang for the buck is RPI for sure. I use it on my test robot.
Do you also have a T265? Because they work better together than with wheel encoders
PS, I made a new thread for you
Oh, and don't ask about hacking the Neato XV-11. It's such a piece, ugh... I wasted a few days on it a while ago because a school had asked about it. I did have a robot skill working for it, but I won't publish it because it was so bad it would reflect poorly on our software. There aren't words for how bad it is. haha
Thanks for the thread, @DJ. Yes, I have a T265, and my Roomba had a lot of fun running around the house annoying the family with it.
OK, RPI it is!!!
Yeah, the RPI combined with the T265 is going to be wild for you to experience. I've been making a thing today - it's 3D printing the 4th modification now. I'll show it tomorrow when it's done because it'll be another few hours. But it's a 3D printed holder for the RPI and T265 together. It might be helpful for your new build.
I'd like to see an IMU used for pose telemetry. I know it'll suffer from drift, but I feel that if the IMU were combined with the wheel encoder, it would be very accurate. It could be whipped up quickly with an Arduino and a low-cost IMU. But I'm not sure about the algorithm to get distance from acceleration or how to fuse it with the wheel encoder. I'm guessing an extended Kalman filter would be the best choice, but the advanced math is beyond my concentration.
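For what it's worth, here's a minimal 1-D sketch of the fusion idea above: integrate the IMU acceleration in the predict step of a plain Kalman filter, then correct with the wheel-encoder position. This is a toy illustration, not Synthiam code, and a real robot would need a full EKF tracking heading as well; all names and noise values here are made up.

```python
import numpy as np

def kalman_step(x, P, accel, encoder_pos, dt,
                accel_var=0.5, encoder_var=0.01):
    # --- predict: integrate IMU acceleration over dt ---
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    B = np.array([0.5 * dt * dt, dt])       # how accel enters the state
    Q = np.outer(B, B) * accel_var          # process noise from noisy accel
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    # --- update: wheel encoder measures position directly ---
    H = np.array([[1.0, 0.0]])
    S = H @ P @ H.T + encoder_var           # innovation covariance
    K = (P @ H.T) / S                       # Kalman gain
    x = x + (K * (encoder_pos - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# toy run: constant 0.2 m/s^2 acceleration, clean encoder readings
x, P = np.zeros(2), np.eye(2)
t, dt = 0.0, 0.05
for _ in range(100):
    t += dt
    x, P = kalman_step(x, P, 0.2, 0.5 * 0.2 * t * t, dt)
print(round(x[0], 2))  # estimated position after 5 s (true value 2.5 m)
```

The encoder keeps the drift-prone accelerometer integration pinned to ground truth, which is exactly the complementary role described above.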
The SLAM in The Better Navigator requires a pose hint from some other odometry source, such as a wheel encoder or the T265. The concept behind a pose hint is to give the SLAM a place to start looking for where it thinks the robot is in the saved map. Without a pose hint, the SLAM algorithm has no idea where to begin looking for the robot within the saved map. The map will have millions of data points, and some might be similar to others. So having some secondary, albeit less reliable, source is better than nothing. The T265 works well, but it's discontinued, and we need a replacement solution.
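To illustrate why the hint matters, here is a toy 1-D matcher (not Synthiam's actual algorithm): without a hint it must correlate the scan against the whole saved map, while the hint lets it search only a small window around where the robot probably is. The map, scan, and window sizes are made-up numbers.

```python
import numpy as np

def localize(saved_map, scan, hint=None, window=5):
    n = len(saved_map) - len(scan) + 1
    if hint is None:
        candidates = range(n)              # no hint: search the whole map
    else:
        lo = max(0, hint - window)
        hi = min(n, hint + window + 1)
        candidates = range(lo, hi)         # hint: search a small window
    # best offset = map slice with the least squared error vs the scan
    return min(candidates,
               key=lambda i: np.sum((saved_map[i:i+len(scan)] - scan) ** 2))

rng = np.random.default_rng(0)
world = rng.normal(size=1000)              # stand-in for the saved map
true_pose = 700
scan = world[true_pose:true_pose + 20] + rng.normal(scale=0.01, size=20)

print(localize(world, scan))               # exhaustive search finds 700
print(localize(world, scan, hint=698))     # hinted search: ~100x fewer checks
```

Both calls land on the same pose, but the hinted one checks about a dozen candidates instead of nearly a thousand, and it can't be fooled by a similar-looking region on the far side of the map.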
Looking forward to seeing your design and 3D print for the Lidar and the T265. I have been meaning to design a 3D model for a head for my Dog that will hold a T265 for eyes, a microphone for a nose, and stereo speakers for ears, but I need to work on my design skills.
I thought you added glyph support for your pose hint. That appears to be the way everyone else does it: triangulate based on size and location with cheap stereo cameras. Yeah, not as nice as the T265, but that sensor also drifts. My Roomba docking station was the way I recalibrated (go back to start before the next mission).
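As a rough sketch of the "size" part of that idea: with the pinhole camera model, a glyph of known physical width appears smaller in pixels the farther away it is, so one camera already gives you range. The numbers below are invented for illustration, not from any real camera.

```python
def glyph_distance(real_width_m, pixel_width, focal_length_px):
    """Pinhole model: distance = f * W / w."""
    return focal_length_px * real_width_m / pixel_width

# a 0.2 m glyph that appears 100 px wide, camera focal length ~600 px
print(glyph_distance(0.2, 100, 600))  # about 1.2 m away
```

Combine that range with the glyph's bearing in the image (or a second camera for true triangulation) and you get a position fix, which is essentially what the realignment approach does.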
There are always the indoor positioning systems that use electromagnetic fields, WiFi, Bluetooth, and other RF signals to locate position, like IndoorAtlas. I wonder if there is an open-source project that can use an IMU gyroscope/accelerometer off an Arduino.
Looks like a bunch of open-source projects started, but I can't find anything that was really finished or usable.
Why am I now reading papers at 11 pm on Kalman filters for establishing location? I see what you mean about the mathematics :)
https://www.mdpi.com/1424-8220/22/2/428/htm
Re the Instructables link: that's not a pose hint, it's a complete realignment of the pose. With a glyph, it can only be done when the glyph is in camera view. You could have the robot in view all the time, as a factory would. Depends on your setup. The Synthiam version is called Camera NMS Pose or something. Look at the NMS manual for more info.
The pose hint is what I described above. It needs to happen in real time, a dozen times a second. You'll have to read what I wrote about the pose hint. To summarize, at the risk of repeating myself: it's a hint, given every time the SLAM routine runs, of where to search in the memory-array map.
OK, I understand the problem a little better now, and I have 2 wheel encoders from another build. The problem with my build is the mecanum wheels: it goes sideways (well, it's supposed to, if I get it working properly), so I'm not sure wheel encoders would work very well. An old mouse-ball solution might be the trick here, but it will stop working when you lift the mouse off the table, and you don't have any way of knowing where the physical mouse actually started from.
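For what it's worth, encoders can still work on a mecanum base if all four wheels have them: the standard mecanum kinematics can be inverted to recover forward, sideways, and rotational motion from the four wheel speeds. This is a generic textbook sketch with made-up wheel radius and chassis dimensions, not ARC code.

```python
# Wheel order assumed: front-left, front-right, rear-left, rear-right.
# r = wheel radius (m); lx, ly = half the wheelbase / track width (m).
def mecanum_body_velocity(w_fl, w_fr, w_rl, w_rr, r=0.05, lx=0.15, ly=0.15):
    """Wheel angular velocities (rad/s) -> body velocity (vx, vy, wz)."""
    vx = (r / 4) * ( w_fl + w_fr + w_rl + w_rr)                # forward m/s
    vy = (r / 4) * (-w_fl + w_fr + w_rl - w_rr)                # sideways m/s
    wz = (r / (4 * (lx + ly))) * (-w_fl + w_fr - w_rl + w_rr)  # yaw rad/s
    return vx, vy, wz

# pure strafe: wheels spin in the classic -,+,+,- pattern
print(mecanum_body_velocity(-10, 10, 10, -10))  # vx = 0, vy > 0, wz = 0
```

So sideways motion isn't invisible to encoders on a mecanum robot; the catch is wheel slip, which mecanum rollers produce more of than ordinary wheels, so the estimate drifts faster.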
@DJ, for which YDLidar are you doing the robot skill, the X2 or X4? I have both lidars and can help with testing if you need.
I believe the YDLidar SDK supports their entire family of lidars, so it won't be model-specific, from what I understand. I haven't seen the robot skill yet, so I'm not 100% positive about that.
@DJ: "So the T265 works well, but it's discontinued, and we need a replacement solution."
Has anyone used the Luxonis OAK-D? Looks like a really useful piece of kit for robotics.
I don't think the T265 is worth replacing. I get way better results using the Faux NMS Odometry with only a 360 lidar. Make sure you set The Better Navigator to use the HECTOR pose hint.
The T265 is a bit wonky in some conditions and goes out of sync over time - doubly so in sunny conditions.
The pose hint set to HECTOR works way better if there's a lot of stuff in the room to produce a distinctive map, so the robot can figure out where it is.
True, I sold my T265 because of the drift and relocation issues. I am planning on using the Roomba's encoder counts, or better yet adapting some quadrature encoders and making a skill for them. Just waiting for the YDLidar skill to see the light of day. :p
I've been navigating between 3 points for most of the day, on the same map I've been generating. This is with only the HECTOR pose hint and the NMS Faux Odometry.
The latest RPLidar A1 robot skill has an option to enable a fake pose hint event. Using The Better Navigator, you can set the pose hint to Hector and run entirely off the 360 lidar SLAM sensor. This way you do not need a T265 or the NMS Faux Odometry.
The Better Navigator can use the Hector SLAM pose hint, which will not require a pose sensor. If you have The Better Navigator configured to use Hector as the pose hint, you can enable this option. This will send a "fake" pose hint of 0,0 to the NMS so that the pose event will run after every lidar scan.