The boards have been pretty quiet so I thought I would share what I am working on now.
This is for robots using an on-board computer. I have started making a plugin that uses the XV-11 LIDAR sensor, the wheel radius, and the encoder counts per revolution of the wheels to build a map of the environment. It also uses the compass portion of the 4-in-1 sensor sold on this site.
The map is currently stored in a SQLite3 database housed in the plugin directory. I don't know if this will be the final design or not, but it is what I have working right now. There is a map table that is created by the user through the plugin. It contains X coord, Y coord, Tile, and Counter fields, and it stores the map as it is being updated. A tile is marked when the LIDAR detects something of a certain intensity at that location.
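Roughly, the map table and its update logic look something like this. This is a sketch in Python for illustration only; the actual plugin is not written in Python, and the table and column names here are my guesses from the description above, not the plugin's real schema:

```python
import sqlite3

# In-memory database for illustration; the plugin keeps a file in its directory.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS map (
        X       INTEGER NOT NULL,            -- tile column
        Y       INTEGER NOT NULL,            -- tile row
        Tile    INTEGER NOT NULL DEFAULT 0,  -- 1 = obstacle seen here
        Counter INTEGER NOT NULL DEFAULT 0,  -- how many times it was seen
        PRIMARY KEY (X, Y)
    )
""")

def mark_tile(x, y):
    """Mark a tile as occupied and bump its hit counter."""
    conn.execute("""
        INSERT INTO map (X, Y, Tile, Counter) VALUES (?, ?, 1, 1)
        ON CONFLICT(X, Y) DO UPDATE SET Tile = 1, Counter = Counter + 1
    """, (x, y))

mark_tile(10, 12)
mark_tile(10, 12)
row = conn.execute("SELECT Tile, Counter FROM map WHERE X = 10 AND Y = 12").fetchone()
print(row)  # (1, 2)
```

The `Counter` column lets repeated LIDAR hits accumulate, so a later pass could distinguish a solid wall from a one-off noisy reading.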
Quote:
The tile size is determined by the wheel diameter. If you have small wheels, you have small tiles. If you have large wheels, you have large tiles. The current environment that I am testing in has tiles that are about 4.19 inches by 4.19 inches. This is because I have wheels that are 4 inches in diameter, and if you take the wheel diameter * pi / 3, you come up with 4.18879..., which I round to 2 decimal places. If you had wheels that were 2 inches in diameter, you would have tiles that are 2.09 inches. If you had wheels that were 12 inches in diameter, the tiles would be 12.57 inches. The logic is that the wheels would be much smaller for robots in smaller environments and much larger for robots in larger environments. Larger wheels mean faster-moving robots, and thus the updating of the environment would have to account for that. The number of tiles in the map is determined on the configuration screen by setting the size you want your map to be. In the test, the map is 50 feet x 50 feet. Using a robot with 12 inch diameter wheels indoors in a 50x50 foot house could become problematic. These are all subject to change depending on testing.
Well, the information quoted above has changed. I am in the US and am more comfortable using inches and feet, so I am making 1 inch tiles for everything. The wheel diameter is still important, but not as important in laying out the grid. I am converting the mm readings from the LIDAR to inches and marking the squares. We will see how this works out and go from there. This, along with everything else, is subject to change as I work through it all.
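As a sketch, converting a LIDAR reading into a 1-inch grid tile looks something like this (the angle convention, function name, and coordinate frame here are just for illustration, not the plugin's actual code):

```python
import math

MM_PER_INCH = 25.4

def lidar_point_to_tile(distance_mm, angle_deg, robot_x_in, robot_y_in):
    """Convert a LIDAR range reading (mm, degrees) into a 1-inch grid tile,
    offset from the robot's current position in inches."""
    distance_in = distance_mm / MM_PER_INCH
    x = robot_x_in + distance_in * math.cos(math.radians(angle_deg))
    y = robot_y_in + distance_in * math.sin(math.radians(angle_deg))
    # With 1-inch tiles the tile index is just the floor of the coordinate.
    return (math.floor(x), math.floor(y))

print(lidar_point_to_tile(1000, 0, 5.0, 5.0))  # 1000 mm ≈ 39.37 in → (44, 5)
```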
The map on the screen is initially loaded from the SQLite3 database. As things are seen by the LIDAR, the map table is updated and the display is refreshed by marking the corresponding tile on the map.
Eventually my goal is to take this logic and use it in SLAM. I plan on starting with some simple SLAM using the RANSAC algorithm, which is best suited to indoor environments because it estimates and creates landmarks based on straight lines (walls, for example). From there I will use an Extended Kalman Filter for data association. This allows the robot to recognize landmarks and then adjust its current position on the map based on them.
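To give an idea of the RANSAC step, here is a toy line-extraction sketch: repeatedly sample two points, fit a line, and keep it as a landmark if enough points lie close to it. The parameter values and function names are illustrative only, not the plugin's code:

```python
import math
import random

def fit_line(p1, p2):
    """Line through two points in normal form a*x + b*y = c, with a^2 + b^2 = 1."""
    (x1, y1), (x2, y2) = p1, p2
    a, b = y2 - y1, x1 - x2          # normal vector of the segment
    norm = math.hypot(a, b)
    a, b = a / norm, b / norm
    return a, b, a * x1 + b * y1

def ransac_lines(points, iterations=200, threshold=0.5, min_inliers=10, seed=1):
    """Extract straight-line landmarks from a 2D point cloud with RANSAC."""
    random.seed(seed)
    remaining = list(points)
    landmarks = []
    while len(remaining) >= min_inliers:
        best = None
        for _ in range(iterations):
            p1, p2 = random.sample(remaining, 2)
            if p1 == p2:             # duplicate coordinates would give a zero norm
                continue
            a, b, c = fit_line(p1, p2)
            inliers = [p for p in remaining
                       if abs(a * p[0] + b * p[1] - c) < threshold]
            if len(inliers) >= min_inliers and (best is None or len(inliers) > len(best[1])):
                best = ((a, b, c), inliers)
        if best is None:
            break                    # no line with enough support remains
        line, inliers = best
        landmarks.append(line)
        remaining = [p for p in remaining if p not in inliers]
    return landmarks
```

Feeding it points along two perpendicular walls should yield two line landmarks; a real implementation would also record each line's endpoints so the EKF can associate them between scans.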
One of the reasons that I want to store this information in a SQLite3 database is that it would allow me to have multiple maps housed in different tables. The configuration screen could be modified to allow the user to specify which environment the robot is in (Office 1, Office 2, home, or Mom's house, for example). These maps would be stored in different tables, and the user would just switch to the map that pertains to the current environment. Multiple maps could also be used to handle different floors of an office building, one per floor.
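A sketch of how switching between per-environment tables could work (the table names are hypothetical; note that SQLite cannot bind table names as `?` parameters, hence the whitelist):

```python
import sqlite3

# Hypothetical environment names, one table per map.
KNOWN_MAPS = {"office1", "office2", "home", "moms_house"}

def load_map(conn, map_name):
    """Load all tiles for the selected environment's table."""
    if map_name not in KNOWN_MAPS:
        raise ValueError(f"unknown map: {map_name}")
    # Safe to splice the name in only because it was checked against the whitelist.
    return conn.execute(f"SELECT X, Y, Tile FROM {map_name}").fetchall()
```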
The test map is about 13 MB in size. This isn't too large, but it is only based on a 50x50 foot house and a robot with 4 inch diameter wheels. If you were in a warehouse or large office building with a small-wheeled robot, I would imagine the database could get really large. The goal is to get this working in a smaller environment, and then see what needs to be done to handle larger ones.
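A quick back-of-the-envelope check of that size, assuming the 1-inch tiles described above and a row pre-created for every tile:

```python
# 50 ft x 50 ft map at 1-inch tile resolution.
map_side_in = 50 * 12            # 50 feet = 600 inches per side
tiles = map_side_in ** 2         # 600 x 600 = 360,000 one-inch tiles
db_bytes = 13 * 1024 * 1024      # the roughly 13 MB observed
print(tiles)                     # 360000
print(db_bytes / tiles)          # roughly 38 bytes per row, plausible for SQLite
```

At that rate, a 200x200 foot warehouse at the same resolution would be around 16x larger, which is why larger environments may need a sparser scheme (for example, only storing occupied tiles).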
Eventually, I plan on incorporating a path-finding algorithm. This shouldn't be too hard to do because it is done constantly in video games, so there is plenty of sample code to build from.
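For reference, the classic grid-based version of this is A*. A minimal sketch over the kind of occupancy grid described above (not the final implementation, just the standard textbook algorithm):

```python
import heapq

def astar(grid, start, goal):
    """A* over a 2D occupancy grid (0 = free, 1 = blocked), 4-connected,
    with a Manhattan-distance heuristic. Returns the path or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]   # (f = g + h, g, node)
    came_from = {start: None}
    cost = {start: 0}
    while open_heap:
        f, g, cur = heapq.heappop(open_heap)
        if cur == goal:                  # reconstruct the path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        if g > cost[cur]:
            continue                     # stale heap entry, skip it
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < rows and 0 <= ny < cols and grid[nx][ny] == 0:
                ng = g + 1
                if nxt not in cost or ng < cost[nxt]:
                    cost[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None                          # goal unreachable
```

Each grid cell maps directly onto a map tile, so the occupied tiles from the LIDAR become the blocked cells here.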
Anyway, that is what I am working on currently. I suspect it will take some time before I have something to share. This is a pretty ambitious project and I will post updates as I accomplish different things with it.
I am not sure if I will sell this plugin or make it freely available. This is something that I will decide after I know how it works in multiple environments. If it turns out to be simply amazing, I might sell it. If it just works, I will give it away for free and continue working on a final solution.
The beauty of SLAM is that it tolerates a certain degree of error. Right now, the code above is more for the calculations. Good high-definition encoders should prevent the compass from being needed; it is going to be used more for checking for wheel slippage.
The compass reading at the starting point of a turn should have a similar level of error as at the stopping point of the turn.
Also, there will be external devices available to validate the location of the robot using cameras doing object recognition in black and white mode to limit distortions from lighting conditions. These will be stationary devices.
Really, the compass is the smallest and least used component of the system.
Thanks for the advice though. I do plan on having some sort of initialization process to calibrate the robot.
More notes completed -
working on:
todo:
Waiting on: Replacement encoder - Received and installed... Replacement Kangaroo - Should receive 1/7/2016
Brain is getting worn out. Taking a break. Also have laundry to do and Christmas stuff to take down.
With the addition of the LIDAR that I am working with to RoboRealm, and some of the other features that are already in RoboRealm, I am now torn...
I could continue to write my own SLAM module in RoboRealm, ARC, or both. There are some things in RoboRealm that could be helpful, like path finding, floor finding for the camera, object recognition, and a lot of video-processing features that could help in using a camera feed and the LIDAR together to accomplish SLAM. I just don't know until I dig into it pretty far.
I am trying to decide if I continue making a SLAM module for ARC as a plugin, or if I stop and try to use RoboRealm instead. I may end up doing both, IDK at this point. I guess I could also use the RoboRealm API in an ARC skill plugin, but that limits how many people would be able to use the plugin.
I'll answer in a purely selfish way... RoboRealm charges per computer (they used to allow a license to be used on 2, but not anymore). If you were going to release your plugin for free, or use a multi-computer, single-cost model (maybe based on EZ-Robot login ID to prevent abuse), then I would encourage you to continue in your efforts, because I have several computers that I want to be able to use depending on where I am and how many robots I currently have running.
On a less selfish note, if using RoboRealm makes it easier for you to achieve your goals and lets you move on to other functions that you have unique capabilities to deliver, then having two different applications provide the exact same capability seems a bit silly.
Alan
I am leaning toward just building the ARC skill plugin. It will teach me a lot along the way and I am all for that.
I am going to be focusing on EZ-AI this weekend, and then on this after the plugin for EZ-AI is complete. I do like the idea of having everything work in plugins, and I also like the thought of promoting EZ-Robot. I also like RoboRealm, but I am not nearly as comfortable with VBScript and how RoboRealm works, so I don't know that that is the path I want to go down first.
Interesting discussions
@OldBotBuilder: Regarding the compass, I agree, I have mixed feelings. I had success with small robots in a single room, but with indoor navigation and/or big robots (more metal, more current) it is difficult to obtain reliable results. Some IMU devices have a proprietary fusion algorithm combining the gyro and compass, and the results are more stable; I have a few IMUs to try but no time yet. I use ROS for SLAM and navigation; there is a node which combines different IMUs plus the odometry and provides a corrected/combined output.
@David: Regarding encoders, I agree too, it is very important to have high resolution. I have a robot with 6" wheels and 144 CPR encoders, and that is not enough for PID control at slow speeds.
What is your encoder resolution? Is it built into the motor?
Does RoboRealm have SLAM/mapping functionality built in? If you can identify the missing functionality, you can challenge DJ Sures to build it. If the EZ-B is not powerful enough or does not provide the building blocks, it is time to check the EZ-Robot roadmap; otherwise you can end up with overlapping efforts.
I'm also running several different PCs using EZ-B. The RoboRealm license I purchased will not allow me to upgrade to use the new LIDAR module unless I shell out another $29.95, and as Alan mentioned, I can only use RoboRealm on a single computer.