The boards have been pretty quiet so I thought I would share what I am working on now.
***This is for robots using an on-board computer***
I have started making a plugin that uses the XV-11 sensor, the wheel radius, and the encoder counts per revolution of the wheels to make a map of the environment. It also uses the compass portion of the 4-in-1 sensor sold on this site.
The map is currently stored in a SQLite3 database housed in the plugin directory. I don't know if this will be the final design or not, but it is what I have working right now. There is a map table, created by the user through the plugin, containing X coord, Y coord, Tile, and Counter fields. This will be used to store the map as it is being updated. A tile is marked when the LIDAR recognizes something of a certain intensity in that location.
Quote:
The tile size is determined by the wheel diameter: small wheels give small tiles, large wheels give large tiles. The environment I am currently testing in has tiles that are about 4.19 inches by 4.19 inches. This is because my wheels are 4 inches in diameter, and wheel diameter * pi / 3 = 4.188790266..., which I round to 2 decimal places. Wheels 2 inches in diameter would give 2.09 inch tiles; wheels 12 inches in diameter would give 12.57 inch tiles. The logic is that wheels tend to be much smaller on robots in smaller environments and much larger on robots in larger environments. Larger wheels also mean faster-moving robots, so the updating of the environment has to account for that. The number of tiles in the map is set on the configuration screen by specifying how big you want your map to be. In the test, the map is 50 feet x 50 feet. Using a robot with 12 inch diameter wheels indoors in a 50x50 foot house could become problematic. These are all subject to change depending on testing.
Well, the information quoted above has changed. I am in the US and as such am more comfortable using inches and feet, so I am making 1 inch tiles for everything. The wheel diameter is still important, but not as important in laying out the grid. I am converting the mm readings from the LIDAR to inches and marking the squares. We will see how this works out and go from there. This, along with everything else, is subject to change as I go through it all.
The map on the screen is loaded from the SQLite3 database initially. As things are seen by the LIDAR, the map table is updated and the display is updated by marking the corresponding tile on the map.
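To make the storage idea concrete, here is a minimal sketch of that flow in Python (the table and column names are my assumptions based on the description above, not the plugin's actual schema):

```python
import sqlite3

def open_map(db_path, table="map"):
    # Open (or create) the map table for one environment. Using the table
    # name as a parameter is what would allow multiple maps per database.
    con = sqlite3.connect(db_path)
    con.execute(
        f"CREATE TABLE IF NOT EXISTS {table} "
        "(X INTEGER, Y INTEGER, Tile INTEGER, Counter INTEGER, "
        "PRIMARY KEY (X, Y))"
    )
    return con

def mark_tile(con, x, y, table="map"):
    # Mark a tile the LIDAR saw something in, bumping its hit counter.
    con.execute(
        f"INSERT INTO {table} (X, Y, Tile, Counter) VALUES (?, ?, 1, 1) "
        "ON CONFLICT(X, Y) DO UPDATE SET Tile = 1, Counter = Counter + 1",
        (x, y),
    )
    con.commit()

def load_map(con, table="map"):
    # Load every marked tile, e.g. to repaint the on-screen map at startup.
    return dict(((x, y), counter) for x, y, _, counter
                in con.execute(f"SELECT X, Y, Tile, Counter FROM {table}"))
```

A usage note: `mark_tile` uses SQLite's upsert (`ON CONFLICT ... DO UPDATE`), so repeated detections of the same tile just increment the counter instead of inserting duplicate rows.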
Eventually my goal is to take this logic and use it in SLAM. I plan on starting with some simple SLAM using the RANSAC algorithm, which is best suited to indoor environments because it estimates and creates landmarks based on straight lines. From there I will use the Extended Kalman Filter for data association. This allows the robot to recognize landmarks and then adjust its current position on the map based on those landmarks.
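As a rough illustration of the RANSAC idea (illustrative Python, not the plugin's actual code), a line-landmark extractor can be sketched like this:

```python
import math
import random

def fit_line(pts):
    # Least-squares fit of y = m*x + b (assumes non-vertical lines for brevity).
    n = len(pts)
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sxx = sum(p[0] * p[0] for p in pts); sxy = sum(p[0] * p[1] for p in pts)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return m, (sy - m * sx) / n

def dist_to_line(p, m, b):
    # Perpendicular distance from point p to the line y = m*x + b.
    return abs(m * p[0] - p[1] + b) / math.sqrt(m * m + 1)

def extract_line_landmarks(points, trials=200, tol=1.0, min_inliers=8):
    # RANSAC: sample two points, propose the line through them, and accept
    # it as a landmark if enough other points lie within tol of it.
    points, lines = list(points), []
    for _ in range(trials):
        if len(points) < min_inliers:
            break
        a, c = random.sample(points, 2)
        if a[0] == c[0]:
            continue  # ignore vertical candidate lines in this sketch
        m = (c[1] - a[1]) / (c[0] - a[0])
        b = a[1] - m * a[0]
        inliers = [p for p in points if dist_to_line(p, m, b) <= tol]
        if len(inliers) >= min_inliers:
            lines.append(fit_line(inliers))  # refit using all inliers
            points = [p for p in points if p not in inliers]
    return lines
```

Each accepted line is a candidate landmark; the EKF step would then associate new observations with these stored lines.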
One of the reasons that I want to store this information in a SQLite3 database is that it would allow me to have multiple maps housed in different tables. The configuration screen could be modified to allow the user to specify which environment the robot is in (Office 1, Office 2, home, Mom's house, for example). These maps would be stored in different tables, and the user would just switch to the map that pertains to the current environment. Multiple maps could also be used to handle different floors of an office building, one per floor.
The test map is about 13 MB in size. This isn't too large, but it is only based on a 50x50 foot house and a robot with 4 inch diameter wheels. If you were in a warehouse or large office building with a small-wheeled robot, I would imagine the database could get really large. The goal is to get this working in a smaller environment, and then see what needs to be done to handle larger environments.
Eventually, I plan on incorporating a path finding algorithm. This shouldn't be too hard to do because it is done in video games like crazy. There is plenty of sample code to build from.
Anyway, that is what I am working on currently. I suspect it will take some time before I have something to share. This is a pretty ambitious project and I will post updates as I accomplish different things with it.
I am not sure if I will sell this plugin or make it freely available. This is something that I will decide after I know how it works in multiple environments. If it turns out to be simply amazing, I might sell it. If it just works, I will give it away for free and continue working on a final solution.
Have a wonderful New Year everyone.
Cheers
Chris
As far as the schedule, this new direction sets me back a bit, but I think it helps out a lot in the long run. Gains will be had by not having to maintain an interface to ARC using the SDK.
Writing plugins takes a bit more time than just writing the code, but it works out in the long run and I believe in the product (EZ-Builder). I have been impressed with how DJ has done a lot of things. It isn't always easy to understand initially, but once you grasp how it works you realize the genius behind it. It also lets Rafiki use other people's plugins over time. I wouldn't feel right about using other people's plugins if I didn't share mine, so as I complete the others, I will share them too.
The others that I am working on currently are for ground height sensors, car bumper sensors, the Volvo motors (that don't have onboard controllers) that I am using, and eventually the Omron B5T HVC. It just takes time to get things right, and SLAM with path finding is what has my attention at the moment.
I had an issue a while back that blew all of the 5V devices on my prototype. I have discovered more damage as time has gone on, and just found that a Kangaroo and one of my motor encoders also got taken out. I just ordered a replacement encoder and another Kangaroo. A lot of my devices could handle 12 volts, so most weren't damaged, but unfortunately what was damaged was very difficult to get to. This forced me to disassemble a lot of the robot that I didn't want to disassemble, but it also helped me decide on a couple of design changes that will allow easier access to the parts inside the robot. Hard lessons to learn, but I just keep pushing forward with it.
Anyway, a lot more information than you asked, but I hope that this gives you an idea of all of the things going on. Focusing on SLAM is nice for me. It is a fun project for sure.
Also, the compass will determine when a turn is complete and will be used to determine the heading and thus the location of what the robot is seeing.
Documenting more for my own benefit, but also sharing so that others can understand what is happening when they use this.
Have you utilized the magnetometer (compass) in 'real life' over an extended run yet?
I ask because these compasses are very sensitive to all magnetic sources, not just magnetic north. In addition, the magnetic lines of flux can be disturbed by many outside influences such as metal structure and/or fasteners near the sensor, and by the effect of motors or wires carrying large currents.
My experience with these sensors is from several years developing multi rotor UAVs (drones). In those applications the compass was a critical part of the autonomous flight control. To assure proper operation, the sensors are typically placed away from the motors, power leads, and any metallic objects, often on stalks atop the airframe in nonmetallic cases.
For accurate directional indications there is usually some sort of calibration process. The drone is rotated 360 degrees about each axis. The sensor readings are stored and compared to null out any static disturbances. Then an offset is introduced to compensate for the local magnetic declination. After that the directional data is good as long as you don't run across any buried metallic objects (like rebar) or nearby flux distorting objects like structural metal.
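To sketch that calibration step in code (a simplified 2-D hard-iron version; the names and structure are illustrative, and real flight controllers do considerably more):

```python
import math

def hard_iron_offsets(samples):
    # samples: (x, y) magnetometer readings collected while rotating 360 degrees.
    # The midpoint of the min/max on each axis estimates the static bias
    # caused by nearby metal, wiring, etc.
    xs = [s[0] for s in samples]
    ys = [s[1] for s in samples]
    return ((max(xs) + min(xs)) / 2, (max(ys) + min(ys)) / 2)

def heading_degrees(x, y, offsets, declination=0.0):
    # Null out the stored offsets, convert to a heading, then apply the
    # local magnetic declination to get true (rather than magnetic) north.
    hx, hy = x - offsets[0], y - offsets[1]
    return (math.degrees(math.atan2(hy, hx)) + declination) % 360
```

The offsets are computed once during the rotate-and-record pass and then applied to every subsequent reading; the declination is a fixed constant for the robot's locale.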
Your application may not need absolute directional accuracy, but it will need static repeatability.
I hope this information is helpful.
The starting point for the compass when turning in a location should have a similar level of error as the stopping point of the turn.
Also, there will be external devices available to validate the location of the robot, using cameras doing object recognition in black and white mode to limit distortions caused by lighting conditions. These will be stationary devices.
Really, the compass is the smallest and least used component of the system.
completed:
- The user can specify the size of the environment, which will build a blank map.
- Map units are in inches; one pixel = 1 inch.
- The user can specify a point on a map as a location that will be used when the path finding is complete to allow the user to say "go to the kitchen" and the robot will know what point on the map represents the kitchen. These labels are visible on the map.
- Math has been programmed to know the angle that the robot should turn and how far it should travel.
- LIDAR is updating the map correctly when objects are detected.
- The user can specify where the robot currently is by double clicking the map. This would be used when a map is initialized the first time or if the robot is moved.
- Made Landmark class
* RemoveBadLandmarks
* UpdateAndAddLineLandmarks
* UpdateAndAddLandmarksUsingEKFResults
* UpdateLandmark
* UpdateLineLandmark
* ExtractLineLandmarks
* LeastSquaresLineEstimate
* DistanceToLine
* ExtractSpikeLandmarks
* GetLandmark
* GetLineLandmark
* GetLine
* GetOrigin
* GetClosestAssociation
* GetAssociation
* RemoveDoubles
* AlignLandmarkData
* AddToDB
* GetDBSize
* Distance
* Distance (between landmarks)
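As an illustration of what the association methods above do conceptually (a simplified sketch, not the actual class), nearest-neighbour data association with a validation gate looks like:

```python
import math

def get_closest_association(observation, landmarks, gate=1.0):
    # Return the known landmark closest to the observation, but only if it
    # falls inside the validation gate. Returning None means "treat this as
    # a new landmark" (a candidate for something like AddToDB above).
    best, best_d = None, gate
    for lm in landmarks:
        d = math.hypot(observation[0] - lm[0], observation[1] - lm[1])
        if d < best_d:
            best, best_d = lm, d
    return best
```

The gate is what keeps a noisy reading from being matched to a distant landmark; tuning it trades false associations against duplicate landmarks.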
working on:
- Add A* type path finding
* Functions in place for path finding
* Use the MapArray which is set when the map is reloaded to set the squares that are either passable or not passable.
* Make sure the path calculations can handle larger maps.
* Draw decided route on the map
* Get new route if the path is blocked.
* Get route based on destination locations ("go to bedroom" ).
- Use the Landmark class
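Since the A* item above is the heart of the path finding work, here is a minimal grid A* sketch in Python. It assumes, as described, a map array where each square is either passable or not; this is illustrative, not the plugin's code:

```python
import heapq

def astar(grid, start, goal):
    # grid[y][x] == 0 means passable (like the MapArray described above).
    # Cells are (x, y) tuples. Returns the path start..goal, or None.
    h = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan heuristic
    open_set = [(h(start, goal), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue                     # already finalized with a better cost
        came_from[cell] = parent
        if cell == goal:
            path = []
            while cell is not None:      # walk parents back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        x, y = cell
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                ng = g + 1
                if ng < g_cost.get((nx, ny), float("inf")):
                    g_cost[(nx, ny)] = ng
                    heapq.heappush(open_set,
                                   (ng + h((nx, ny), goal), ng, (nx, ny), cell))
    return None                          # no route: destination is blocked off
```

Re-running this when an obstacle appears on the current route covers the "get new route if the path is blocked" item.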
todo:
- Adjust the points returned from the LIDAR based on the direction that the robot is looking. (compass or calculation)
- Put back in the code that will account for robots that do not have a full 360 degree view from the LIDAR.
- Put in code to move the robot, configurable via ConfigDictionary values. I am thinking of using scripts for this, which would let the user write Move Forward x distance, Move Backward x distance, Turn Right x degrees, and Turn Left x degrees scripts. This should allow the plugin to be used with pretty much any robot, regardless of motor configuration or controller. The config screen would have four variables, one for each script name (one per direction). There would also be a variable that the script updates when the movement has completed; the plugin would wait for this variable to change before doing the next step.
* move forward 12 inches
* move backward 12 inches
* turn right 1 degree
* turn left 1 degree
- Add method to move the robot to a location on the map from some sort of a click event. (move robot button and then single click event I think).
- Add method to turn the robot to face a specific direction based on user input.
- Rewrite subsystem controller to work with Kangaroo.
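To make the movement contract from the todo list concrete, here is a sketch of the pattern: fire the user's direction script, then block until a completion flag flips. All names are illustrative, and the real plugin would use ARC scripts and a script-set variable rather than Python threads:

```python
import threading

class MovementBridge:
    # Sketch of the configurable-movement idea: one user-supplied script per
    # direction, plus a "movement complete" flag the script must set.
    def __init__(self, scripts):
        self.scripts = scripts          # e.g. {"forward": my_forward_script}
        self.move_complete = threading.Event()

    def move(self, direction, amount, timeout=30.0):
        self.move_complete.clear()
        # Fire the user's script; it is responsible for signalling completion.
        worker = threading.Thread(target=self.scripts[direction],
                                  args=(amount, self.move_complete))
        worker.start()
        # Block until the flag flips (the "wait for the variable to change"
        # step described above), or give up after the timeout.
        return self.move_complete.wait(timeout)
```

Because the plugin only waits on the flag, it never needs to know how the user's scripts actually drive the motors, which is what makes the design controller-agnostic.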
Waiting on:
Replacement encoder - Received and installed...
Replacement Kangaroo - Should receive 1/7/2016
Brain is getting worn out. Taking a break. Also have laundry to do and Christmas stuff to take down.
I could continue to write my own SLAM module in either RoboRealm or ARC or both. There are some things in RoboRealm that could be helpful, like path finding, floor finding for the camera, object recognition, and a lot of video processing features that could help in using a camera feed and the LIDAR together to accomplish SLAM. I just don't know until I dig into it pretty far.
I am trying to decide if I continue just making a SLAM module for ARC as a plugin or if I stop this and try to use RoboRealm for this. I may end up doing both, IDK at this point. I guess I could also use the RoboRealm API in an ARC skill plugin but that limits how many people would be able to use the plugin.
To give a non-selfish answer: if using RoboRealm makes it easier for you to achieve your goals, and lets you move on to other functions that you have unique capabilities to deliver, then having two different applications which provide the exact same capability seems a bit silly.
Alan
I am going to be focusing on EZ-AI this weekend, and then on this after the plugin for EZ-AI is complete. I do like the idea of having everything work as plugins, and I also like the thought of promoting EZ-Robot. I like RoboRealm too, but I am not nearly as comfortable with VBScript and how RoboRealm works, so I don't know that that path is the one I want to go down first.
@OldBotBuilder:
Regarding the compass, I agree; I have mixed feelings. I had success with small robots in a single room, but for indoor navigation and/or big robots (more metal, more current) it is difficult to obtain reliable results. Some IMU devices have a proprietary fusion algorithm combining the gyro and compass, and the results are more stable; I have a few IMUs to try but no time yet. I use ROS for SLAM and navigation, and there is a node which combines different IMUs plus the odometry and provides a corrected/combined output.
@David:
Regarding encoders, I agree too; it is very important to have high resolution. I have a robot with 6" wheels and 144 CPR encoders, and that is not enough for PID control at slow driving speeds.
What is your encoder resolution? Is it built into the motor?
RoboRealm: do they have SLAM/mapping functionality built in?
If you can identify the missing functionality, you can challenge DJ Sures to build it. If the EZ-B is not powerful enough or does not provide the building blocks, it is time to check the EZ-Robot roadmap; otherwise you can end up duplicating effort.
I have been spoiled by Android where the apps are based on the user id, not the device. I have the same apps on two tablets, two phones, and an emulator under Windows all with one purchase, but if someone else wants to use them, they need my Google credentials, which is NOT happening...
Alan
ROS SLAM is good, but the goal here is to add SLAM to ARC. DJ allows plugins, and I am working on one that uses specific sensors and allows customization without programming. The plugin lets me leverage what is already in ARC. Basically, the gist is that the user builds scripts to move their bot a specific distance. They can specify which script to use to move the robot in any direction. This allows the plugin to work with a wide range of robots.
I'll write more when I'm not on a phone
No, it is external to the motor driven off of the back shaft of the motor.
RoboRealm has the ability to generate an image from an array of data, and then analyse that image using a lot of different filters. You can also update the image. It also has path finding built in.
There isn't a SLAM module in there yet.
I am just not comfortable in their environment at all compared to C# and ARC. I understand what is going on in ARC. I get lost in RoboRealm but it is probably because I haven't spent the time trying to understand it like I have ARC.
There are some cool features in RoboRealm that could be leveraged, but their pricing model isn't free like ARC's.
I also just can't bring myself to write in VBScript. It is way too unstructured for me, and I always have to go back and correct code to remove semicolons or other silly things. Old habits die hard. There are other scripting languages available in RoboRealm, but I am far more comfortable with C#.
My problem right now is that the encoder produces too many pulses for the Kangaroo. I have to figure out how to make a flip-flop circuit now.
https://www.ez-robot.com/Community/Forum/Thread?threadId=6225&page=3
Parts have been ordered to make this flip-flop circuit. I will post the results when I have them.
I could live off of what others have done, but then I wouldn't be a programmer.
I could leverage what is done and build onto it, then I would be a smart programmer.
I could build something open ended enough to allow others to use it in different configurations, which would make me a good programmer.
I could bang my head up against something that will never work, which would make me a dumb programmer.
I had started to go down the ROS path. It is a good path if that is what the rest of your stuff is built on. I decided that I didn't want everything built on ROS, and DJ opened up ARC to allow plugins to be developed to do anything you want. ROS is the wild west of robot programming. There is a lot of half-working code that only works in a specific configuration, without ever considering that someone else might use it. It is good in that there are a lot of things being developed there, but nothing that I have found is a working solution outside of the specific config it was designed for.
EZ-Builder's road map became whatever we want to make it as programmers when Plugins were added. This allows you to use the controls that someone else wrote, along with the controls that are in ARC by default. It allows you to write your own controls for things that are not there, and share them if you choose to with others. The real difference that I now see with ARC and ROS is ARC is more controlled and runs on Windows. ROS is more wild west and runs mainly on Linux. Either is a good option in my opinion, but I like to leverage controls that work for a wider audience than those that work only for a specific configuration.
Cheers, and thanks for your efforts in this...
Richard
The key to SLAM is accurate movement. I can't control whether someone has accurate movement in their robot, nor do I want to. If I place the movement equation on the user of the plugin, and assume that their movement is accurate, I can then focus on the mapping, landmark, and path finding portions of the plugin. I would love to take this to the point that it could work with multiple sensor configurations at some point, but I have to start somewhere.
There is no reason that SLAM can't work with ping sensors, PIR sensors, IR sensors, or really anything that can detect distance to an object. The difficulty with these sensors is their very limited field of view, so figuring out whether the robot is seeing a straight line or a point takes much more movement by the robot. The LIDAR can make this determination very quickly from the data it returns, which drastically reduces the movements required. By converting the measurements from mm to inches, I reduce the map size by a factor of 25.4 and reduce the LIDAR's level of error by collapsing everything within the same inch onto a single pixel on the map.
By using a scale of 1 inch = 1 pixel as a standard, the calculations for how far to move become pretty easy to do. The concern that I have with this is the requirement to use Inches which is not the universal standard. I could go with 1 pixel = 1 cm but the issue there is that the map becomes too large to display reasonably inside of ARC, or on any application for that matter. This may need to be revisited after everything is working but I don't have a better unit of measure that isn't too large or too small at this time.
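The conversion itself is simple enough to show. A sketch of turning one LIDAR reading into a 1-inch map tile (the function and variable names are mine, for illustration):

```python
import math

MM_PER_INCH = 25.4

def lidar_to_tile(distance_mm, bearing_deg, robot_x_in, robot_y_in):
    # Convert one LIDAR reading (a distance in mm at a bearing) into a
    # 1-inch map tile, given the robot's position in inches. Rounding to
    # the nearest inch is what collapses the mm-level LIDAR noise into a
    # single tile, as described above.
    d_in = distance_mm / MM_PER_INCH
    x = robot_x_in + d_in * math.cos(math.radians(bearing_deg))
    y = robot_y_in + d_in * math.sin(math.radians(bearing_deg))
    return (round(x), round(y))
```

Swapping `MM_PER_INCH` for 10 would give the 1 cm = 1 pixel variant, at the cost of the much larger map discussed above.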
The compass would only be used to account for obvious wheel slippage. It has been pointed out that the compass may suffer interference and thus not produce accurate results. I still feel it is a valuable tool to have, but I may also incorporate a camera to account for movement. For example, I can take a picture prior to a turn and one after the turn, and compare them to see how much variance there is between the pictures. If a threshold has been met, I could assume that there was no wheel slippage. This would have to be combined with the compass, I would think, as it would be possible to miscalculate this with walls; two walls in a house could look very similar in a corner if the environment's decorations were not done by my lovely wife.
The EZ-Robot 4-in-1 sensor also provides a gyroscope. This could be used I suppose but I would have to understand its readings more to make a call on if this should be used or not.
Another option is to use another sensor, like a ping sensor or car bumper sensor, along with the known points on the map, to know what you expect these sensors to see change as the robot turns. The same could be done with the LIDAR, I suppose, but I think I want to complete a move and then get the LIDAR data, as the timing between these two actions (and the resulting odometry or grid estimation) matters for getting accurate results. The coarsening of the data from mm to inches might make this less of an issue, though. Also, this will become a non-issue once the map is built and the landmarks are populated, as the position and direction of the robot will then be adjusted based on previously seen landmarks...
Okay, done thinking about this right now. Work calls.
I think a small list of recommended components would be the best way to go. If anyone is serious about adding this function to a robot, they will have to use the components that best suit the project. I know you want the option of using many different sensors, but trying to work them all into the project means a lot of work. List the optimal components and one or two options if they exist, and that's it. The cost of this system will not be cheap, and everyone is aware of that.
I look forward to hearing more and seeing the end result of your project.
I just wanted to present my opinion!
Ron R
What I do normally is spew ideas as they come into my head. They are quickly forgotten if I don't write them down, due to other things getting in the way and confusing matters. It's funny how age, and my office moving from the spare bedroom to the living room, cause this to happen. My son needed a place to stay while he was in college, so I gave up my office. I have a lot of distractions now that I didn't have before, so when it is quiet and I can think, I like to document my thoughts. It helps me to go back and read them before I start the next day of working on things.
I also go back and update posts with newer information on these thoughts quite a lot. What is funny is that I could do this in notepad or winword or a pen and paper, but find myself much more likely to go read these if they are posted here. I guess this is my second home or something now.
I also think that seeing what other people are doing or think about my ideas is important to me and I get more motivated to figure out the next issue.
Thanks for your thoughts, they are appreciated.
Ron R
I am initially going to set this up with the components you have ordered or already have. The spewing of ideas is only that. It is me thinking "out loud" and most of it is just ideas for further improving what I will build.
The ramblings that I do for Rafiki behind the scenes are quite long, and it is interesting to go back and read some of my early thoughts. For some of them I think, "What was I even talking about?" and for others, "Wow, I had completely forgotten about that. That's a good idea I had." My son got me started on this. It's a form of storyboarding, kind of, I guess.
Those young ones have some good ideas sometimes.
This is the thread. It started as a somewhat off-topic post in another thread and just continued there, so not obvious if browsing, but pops up if you search for 4in1 compass.
http://www.ez-robot.com/Community/Forum/Thread?threadId=8604
Alan
I hope Jeremy isn't too busy with other things to address it right now.
The thread I linked to has information on how to get a 3rd party sensor working correctly with EZ-B. You might want to go that route if you are in a hurry.
Alan
I am going to focus first on the encoder issue (too many counts for the Kangaroo with the 2 motors of my config), I think. This is where I am with SLAM right now anyway, so it is important to get this resolved. It may be possible to skip the compass entirely, which will probably make me code the other parts better anyway.
If I come to a time that I need it, I will either make a subsystem for it or get the 4-in-1 v2 when it is available.
Ron R
My brain was on circuit board design, and I found it really hard to get off of that. The cost of the first 64 of these little logic divider boards would be really high for me if all I did was produce the 2 that I needed for my prototype, so I toyed with the idea of selling the others to recover the cost. I gave it some thought and decided that I needed to build a subsystem controller board anyway, so I could just include these two divide-by-2 logic circuits on that same board, making their cost go away. It was much easier to justify the cost of the subsystem controller boards, since there are a lot of benefits to having them in a robot. The good news is that there was enough room on the board to include 13 additional digital divide-by-2 circuits per board, meaning I will still be able to recover the production cost of the subsystem controller boards if I sell these. There will be a total of 52 of these divide-by-2 circuits up for sale with the first order.
I mention this because it changes, quite drastically, how the robot is wired. It will take some time to get the boards, build them, and then rewire the robot to use them. This is all good, as in the long run it will be a much cleaner robot inside. It does mean that I lost my test platform for working on the navigation part. I estimate it will be a couple of weeks before I have everything assembled again to the point that I can work on navigation.
This guy does not seem to reply publicly, but maybe you could ping him offline to see if you can get any tips to complete this awesome app.
Thanks
I made a script to use your Lidar plugin and was hoping there is a way to shorten it. Example:
[code]
$LIDARclose =800 # Change for minimum distance
#Lidardistance[] Base Right
$LIDARDistance[0]
$LIDARDistance[1]
$LIDARDistance[2]
thru
$LIDARDistance[80]
#Lidardistance[] Base Left
$LIDARDistance[354]
$LIDARDistance[353]
Thru
#$LIDARDistance[269]
#Right side lidar
if($LIDARDistance[0] < $LIDARclose or $LIDARDistance[1] < $LIDARclose or ..... and $LIDARDistance[325]> $LIDARclose and $LIDARDistance[324]> $LIDARclose and $LIDARDistance[323]> $LIDARclose and $LIDARDistance[322]> $LIDARclose and $LIDARDistance[321]> $LIDARclose)
goto(Left)
sleep(3000)
[/code]
My current script sets every variable individually: $LIDARDistance[0], $LIDARDistance[1], etc. Is there a way to define a range, e.g., $LIDARDistance[0] through $LIDARDistance[359]?
Hope this post makes sense.
Thanks
Sorry, I just saw this. I have been really busy lately. I won't have time until maybe tomorrow night to look at this. I have to go to Dallas this evening to make sure that a data center move for this weekend is planned well. This weekend will be busy with this move. I should have time tomorrow night to look at it depending on some other things, but will do my best to look into how you could make the script shorter.
Thanks for understanding.
David
Thanks again, Mike
Without writing everything up for you, I wanted to give you the direction that I would go...
The gist is that you would use a variable that would be updated as you cycle through the array variable to get and evaluate the values from the LIDAR. When you get to 360, you drop out. You could also evaluate the other values returned from the LIDAR to judge how reflective the item is at those degrees.
Code:
This isn't tested code as I am on a linux laptop right now, but it should be close to working. Remove the comments and you should be close to a solution.
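The same logic sketched in Python, since I can't test EZ-Script on this laptop (the names mirror the variables above, and the real script would of course be EZ-Script):

```python
LIDAR_CLOSE = 800  # minimum allowed distance in mm, like $LIDARclose above

def min_in_span(distances, start, end):
    # Walk a span of degrees instead of naming each $LIDARDistance[n]
    # individually; indexing modulo 360 wraps past 359 back to 0.
    return min(distances[d % 360] for d in range(start, end + 1))

def obstacle_on_right(distances):
    # Degrees 0-80 cover the right side, as in the original script.
    return min_in_span(distances, 0, 80) < LIDAR_CLOSE
```

The loop replaces the long chain of `or` conditions: checking whether the minimum reading in a span is below the threshold is equivalent to checking each degree separately.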
I installed vpython (with some help from the vpython forum as it is not 100% easy) and pyserial (in fact already installed), ran the visual python 3D script from GetSurreal, and hey presto I have portable lidar.
I might get the Raspberry to talk to my EZB at some point which would be a neat additional thing to do.
You need a Raspberry Pi 3 to do this and earlier Raspberry models will not work.
I think this is a good alternative to the impressive and really easy to use ez-robot plugin which is described in this thread as there is no need for a pc.
Cheers
Chris
Just to finish off my comment: I found a way to put TeamViewer on the Raspberry Pi, so now I have remote access from my master PC to the 3D Neato lidar output on my Raspberry.
As noted this could perhaps more easily be done with a compact PC but my way is probably a bit cheaper ...
This is what you do:
1) use a Raspberry Pi 3 Model B (or better) or vpython will not run
2) use a large SD card in your raspberry as the software is large in size. I used a 32GB card after smaller ones were not big enough.
3) use raspi-config in terminal to enable the experimental GL Driver graphics
4) install vpython (sudo apt-get install python-visual)
5) install exagear (which costs money) and teamviewer following these instructions:
https://eltechs.com/run-teamviewer-on-raspberry-pi/
6) Download lidar.py from:
http://www.getsurreal.com/products/xv-lidar-controller/xv-lidar-controller-visual-test
7) Identify the correct com port for the Teensy and edit lidar.py
8) run python lidar.py from terminal
9) use teamviewer and enjoy lidar graphics on your master pc
Let me know if you have any problems but once you know the pitfalls it really is quite easy (ha!)
Cheers
Chris
Are you using the information for anything other than the display from lidar.py?
Thanks
David
If I had time I would want to do more but progress for me is slow due to other less fun commitments like work.
I think there is a real market possibility for someone to write mapping and pathfinding software for the lidar if it were written in a flexible way to:
a) support the multiple lidars which exist
b) run on multiple machines (raspberry, pc etc)
Perhaps in python which seems quite efficient.
In the short term, at the very least I intend to make a serial connection between the EZB and the raspberry to do basic things like turn it on and off but this is so weak compared to the possibilities.
Cheers
Chris
Anyway, for me now, I am going to progress my plans to put a whole bunch of Arduino-driven modular additions onto the Roli, including a Geiger counter (just for fun). After that, I plan to find a second-hand Wild Thumper or build my own and work out how to auto-charge it. Then get it to climb up and down stairs, which would be a fabulous challenge.
Once all that is done, then I will return to the lidar software issue. Probably several years at my rate of progress!