
Andy Roid
USA
Any updates on the indoor navigation system, camera and beacon system? There was discussion it was in the works. Just curious. Ron
There are a couple of keys to what I am going to be trying to do in January.
My son is very interested in coming up with a navigation system for the InMoov. He also wants to build an InMoov, so he has some incentive to get this feature working well with mine. Right now his time is being spent programming at work, programming for some contracts he has on the side, and school. His time should free up in January.
Here is my logic on using 3 cameras. If you know the distance from 3 set points, you know exactly where something is. If you know the distance from 2 points, you have a pretty good idea of where it is. But if one of those 2 cameras is blocked by something, you have no idea where the object is; with 3 cameras and one blocked, you still have a pretty good idea. The object being tracked on the robot would need to be a ball, so that its shape doesn't change when viewed from different angles by the cameras. If the size of the ball at a known distance is calibrated, the distance can be calculated from the ball's apparent size on each camera. The ball's color is also a concern: it would shift as lighting conditions change, and you wouldn't want something that is a common color in the environment. A glowing ball might be the best option, but I haven't looked into how the camera would pick up a glowing ball at different ambient light levels. Filters could be used to give you a range of acceptable colors.
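The "distance from 3 set points" idea above is classic 2D trilateration. Here is a minimal sketch of both steps, under my own assumptions (not from the post): simple pinhole cameras with a known focal length in pixels, a ball of known true diameter, and known camera positions on a 2D floor plane.

```python
# Sketch only: assumes calibrated pinhole cameras and a ball of known size.
import math

def distance_from_ball_size(apparent_px, true_diameter_m, focal_px):
    """Estimate distance to the ball from its apparent diameter in pixels
    (pinhole model: apparent size shrinks linearly with distance)."""
    return focal_px * true_diameter_m / apparent_px

def trilaterate_2d(p1, r1, p2, r2, p3, r3):
    """Solve for (x, y) given distances r1..r3 from known points p1..p3.
    Subtracting pairs of circle equations gives two linear equations."""
    x1, y1 = p1
    x2, y2 = p2
    x3, y3 = p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1  # zero if the cameras are collinear
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y
```

With only 2 cameras the same math gives two circle intersections (two candidate positions), which matches the "pretty good idea" point above; the third camera breaks the tie and provides the blocked-camera fallback.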
I am still a ways away from this. One of the other things I need to come up with is a reliable data storage layout to map a floor to data points. I plan on using a grid layout. Each pass of the robot through the house would add 1 to any grid square that has an object detected in it, and subtract 1 from any square with a positive value (meaning something has been detected there before) that doesn't have an object in it now. This would allow me to distinguish hard squares (always occupied) from soft ones (occupied only sometimes).

When telling the robot to go from the study to the kitchen, the robot would calculate the shortest path using this information: first find the shortest path, then see how it needs to be modified around hard targets, and then adjust for soft targets based on their values. The external cameras would tell it where it is right now, along with the log kept as it moves around the house; the encoders would give the speed at which it is moving so it knows which grid square it expects to be in, and the cameras would confirm which square it is actually in as it moves. On top of all this, there is software that uses a camera to record the gate images it expects to see while navigating from one location to another (RoboRealm AVM). These three systems working in conjunction should provide a pretty accurate way to navigate. The key is that they all have a way to communicate with each other: the SDK for ARC, the SDK for RoboRealm, database technology, and ARC itself will all work together from multiple machines to accomplish this. That's the plan at this point, anyway.
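The grid update and path planning described above can be sketched in a few lines. This is my own illustrative version, not the author's implementation: I assume a small 2D grid of counts, treat squares whose count reaches a chosen threshold as impassable "hard" targets, and use a Dijkstra search that adds each soft square's count as a penalty, so the path naturally prefers reliably clear squares.

```python
# Sketch only: grid-confidence map with hard/soft obstacles, per the post.
import heapq

def update_grid(counts, observed_occupied):
    """One pass of the robot: +1 where an object is seen now, -1 (down to 0)
    where a previously positive square is seen empty."""
    for r in range(len(counts)):
        for c in range(len(counts[0])):
            if (r, c) in observed_occupied:
                counts[r][c] += 1
            elif counts[r][c] > 0:
                counts[r][c] -= 1

def shortest_path(counts, start, goal, hard_threshold=5):
    """Dijkstra search: squares with count >= hard_threshold are impassable;
    soft squares add their count as extra cost."""
    rows, cols = len(counts), len(counts[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            path = [cell]
            while cell in prev:
                cell = prev[cell]
                path.append(cell)
            return list(reversed(path))
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and counts[nr][nc] < hard_threshold:
                nd = d + 1 + counts[nr][nc]  # step cost plus soft penalty
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None  # goal unreachable through non-hard squares
```

For example, with a vertical wall of hard squares between the start and the goal, the returned path routes around the wall rather than through it, and a square where a chair sometimes sits only diverts the path when a clear detour is cheap enough.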
The only concern that I have is people's reaction to having cameras in every room of a house. This is also a concern in an office building scenario but not as much.
On specific items where you want the robot to validate its recognition, you could use object recognition plus barcode-type labels so the robot has multiple forms of verification. The navigation systems would get you close to an object, but you wouldn't be able to know that you were aligned with it. The stickers could be used on those items that you wanted to make sure your robot was aligned to.
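The two-stage check described above (navigation gets close, the label confirms alignment) might look like this. Everything here is a hypothetical sketch: `detect_barcode` is an assumed placeholder for whatever barcode reader is used, not a real ARC or RoboRealm call, and the positions and tolerance are made up for illustration.

```python
# Sketch only: detect_barcode is a hypothetical callback, not a real API.
def verify_target(nav_position, target_position, detect_barcode, expected_code,
                  tolerance=0.5):
    """Return True only if the navigation estimate puts us near the target
    AND the label the camera reads matches the expected one."""
    dx = nav_position[0] - target_position[0]
    dy = nav_position[1] - target_position[1]
    near = (dx * dx + dy * dy) ** 0.5 <= tolerance
    return near and detect_barcode() == expected_code
```

The point of the second stage is exactly what the post says: a grid map and external cameras can say "you are roughly at the shelf," but only the sticker can say "this is the right shelf and you are squared up to it."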
I can't wait to see what EZ-Robot has up its sleeve for this. I don't know yet whether I will use it, based on what it turns out to be. It might be a great start or a great overall solution. I have until January to wait and see, because my son and I won't have time to work on it until then.
As I thought, your system is way beyond the very basic concept I have, but I can see how what you are proposing works. This gives me direction and ideas to play with. My goal is very simple set-point-to-set-point navigation, triggered by a voice command. If I can get it to work, I will have met my simple goal.
The development of your system is the future of robotics and AI. I look forward to your and your son's projects come January.
Thanks for direction in my project. Ron
David, I developed "Volume Occupancy Mapping" in the nineties, and it may be similar to what you are thinking. It is detailed in this thread:
synthiam.com/Community/Questions/3389&page=2
Hope this is of some use.
Tony
I had read that, which is what really got me thinking. Your advice is always spot on and I always enjoy it. A large majority of it is taken and used.
BTW, check your email. I had shot over a question to you about those LiFePo4 batteries you were looking to use a few months back.