
Indoor Navigation?

Many members are waiting for indoor navigation. There have been discussions about other software that can do this and even interface with the EZB, but it comes with a heavy learning curve.

DJ has been developing a concept to take on this difficult task, but time has gone by and no true navigation system has emerged.

I am looking for a simple point-to-point system. I want to use a colored, somewhat focused light beam and combine this "light tracking" with an object-avoidance IR sensor that would adjust for variations or objects in the path. I know I am simplifying the idea and this is not a true navigation system. I just want my Adventure Bot to go to a place when I tell it to, not be a glorified bump-and-go toy.
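To make the idea concrete, here is a rough sketch of the kind of loop I have in mind. This is illustrative Python only, not ARC script; get_light_offset(), get_ir_distance_cm() and drive() are hypothetical placeholders for whatever camera-tracking and sensor calls are actually available.

```python
import time

# Hypothetical placeholders -- swap in whatever your camera tracking and
# IR sensor actually provide (ARC variables, serial reads, etc.).
def get_light_offset():
    """Horizontal offset of the tracked light, -1.0 (far left) to +1.0
    (far right), or None if the light is not in view."""
    return 0.0

def get_ir_distance_cm():
    """Distance to the nearest obstacle ahead, in cm."""
    return 100.0

def drive(left, right):
    """Set left/right motor speeds in the range -1.0 .. 1.0."""
    pass

OBSTACLE_CM = 25   # start avoiding below this distance
TURN_GAIN = 0.6    # how hard to steer toward the light

def follow_light(seconds=30):
    stop_at = time.time() + seconds
    while time.time() < stop_at:
        if get_ir_distance_cm() < OBSTACLE_CM:
            drive(0.4, -0.4)        # something in the way: pivot around it
            time.sleep(0.5)
            continue
        offset = get_light_offset()
        if offset is None:
            drive(0.3, -0.3)        # lost the light: spin slowly to reacquire
        else:
            # offset > 0 means the light is to the right, so speed up the
            # left wheel and slow the right wheel to turn toward it
            drive(0.5 + TURN_GAIN * offset, 0.5 - TURN_GAIN * offset)
        time.sleep(0.05)
    drive(0.0, 0.0)
```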

Is this a valid idea, or am I wasting my time? Rich, Richard R, dcochcran, thetechgruru, Steve, WBS, jsterne, and the many other members who have experience with the EZB, software, and hardware, please let me know your thoughts. A simple system is better than nothing for now.

Thanks,

Ron R

After Halloween project.



#1  

For now you can use RoboRealm with your EZB. It's not perfect, but the AVM Navigator module does a decent job of navigating a path using your robot's Movement Panel and the EZB4 camera. It takes a bit to set up, but the software is easy to use afterwards. D.Cochran has a video on how to set up RoboRealm to pass variables back to EZ-Builder in order to control your robot's movement and navigation. Until DJ comes up with something integrated with EZ-Builder, there are only limited solutions for indoor navigation, easy solutions at least. ROS is the benchmark for indoor nav but, as pointed out, the learning curve isn't for the faint of heart.
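As a rough idea of the glue involved, something like this polls navigator variables and turns them into motor commands. The variable names and both helper functions below are assumptions, not actual RoboRealm or EZ-Builder API calls; see D.Cochran's video for the real setup.

```python
import time

# Placeholders -- the real versions would talk to RoboRealm's socket API on
# one side and your Movement Panel / EZB on the other.
def read_roborealm_variable(name):
    return 0

def set_motors(left, right):
    pass

def bridge_loop(poll_hz=10):
    """Poll two navigator variables and convert them to motor speeds.
    The variable names and the -100..100 range are assumptions."""
    while True:
        left = float(read_roborealm_variable("NAV_LEFT_MOTOR"))
        right = float(read_roborealm_variable("NAV_RIGHT_MOTOR"))
        set_motors(left / 100.0, right / 100.0)
        time.sleep(1.0 / poll_hz)
```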

#2  

In order for indoor navigation to work as I see it, you need the following (a rough sketch of how these pieces might hang together in code follows the list):

  1. A map of the environment.
  2. The ability to know where you are in the environment.
  3. The ability to plan a path through the environment.
  4. The ability to recognize objects at the different levels of your robot.
  5. The ability to know which direction on the map you are facing.
  6. An accurate means of measuring movement distance.
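Here is one way those pieces might be organized in code. This is a sketch in Python; the names and fields are purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Pose:
    x: float = 0.0        # metres on the map
    y: float = 0.0
    heading: float = 0.0  # radians, 0 = map "east" (item 5)

@dataclass
class NavState:
    grid: list = field(default_factory=list)        # occupancy map, 0 free / 1 blocked (item 1)
    pose: Pose = field(default_factory=Pose)        # where we think we are (item 2)
    path: list = field(default_factory=list)        # planned waypoints (item 3)
    detections: list = field(default_factory=list)  # recognized objects (item 4)
    encoder_ticks: tuple = (0, 0)                   # last left/right counts (item 6)
```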

Toymaker is accomplishing most of this by using sensors to pick up where objects are, and by knowing where the robot is on the map thanks to very accurate encoders on the drive wheels. He is building his own map and doing his own path planning, along with some other things.
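For the encoder piece, the usual dead-reckoning math for a differential drive looks something like the sketch below. The constants are examples you would measure for your own base; this is not Toymaker's actual code.

```python
import math

TICKS_PER_METER = 4000.0   # example encoder resolution
WHEEL_BASE = 0.30          # example distance between wheel centres, metres

def update_pose(x, y, heading, d_left_ticks, d_right_ticks):
    """Advance (x, y, heading) given encoder tick deltas since the last update."""
    d_left = d_left_ticks / TICKS_PER_METER
    d_right = d_right_ticks / TICKS_PER_METER
    d_center = (d_left + d_right) / 2.0          # distance the robot moved
    d_theta = (d_right - d_left) / WHEEL_BASE    # how much it turned
    x += d_center * math.cos(heading + d_theta / 2.0)
    y += d_center * math.sin(heading + d_theta / 2.0)
    heading = (heading + d_theta) % (2.0 * math.pi)
    return x, y, heading
```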

Some are using LIDARs, like the Neato LIDAR, to detect objects. This gives you part of the equation; without encoders it becomes unreliable, but the LIDAR is pretty good at picking up the current environment in 360 degrees. Kinect-type sensors can also be used and may be the best option, because they capture 3D images of the environment while also providing distance to objects. That would allow you to tell chairs from people, for example.
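As a sketch of how a 360-degree scan gets used, each (angle, range) reading is projected into map coordinates using the robot's current pose, which is exactly why the pose estimate from the encoders matters so much. Illustrative Python only:

```python
import math

def scan_to_points(scan, robot_x, robot_y, robot_heading):
    """scan: iterable of (angle_deg, range_m) readings from one rotation.
    Returns a list of (x, y) obstacle points in map coordinates."""
    points = []
    for angle_deg, rng in scan:
        if rng <= 0.0:   # many LIDARs report 0 for invalid returns
            continue
        a = robot_heading + math.radians(angle_deg)
        points.append((robot_x + rng * math.cos(a),
                       robot_y + rng * math.sin(a)))
    return points
```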

There is a technology called SLAM (Simultaneous Localization and Mapping). SLAM uses sensors to build and refine the map on the fly, and it also identifies where the robot is on the map based on its understanding of the current environment. It doesn't handle path planning. For that piece, people are using path-planning techniques that have existed in the video game industry for a while.
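A* is the classic example of that game-style path planning. A minimal grid version looks like this; it is a generic sketch, not tied to any particular robot or product.

```python
import heapq

def astar(grid, start, goal):
    """grid[row][col] == 0 means free, 1 means blocked.
    start and goal are (row, col) tuples. Returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])

    def h(a, b):  # Manhattan distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_set = [(h(start, goal), 0, start, None)]   # (f, g, node, parent)
    came_from = {}
    best_cost = {start: 0}

    while open_set:
        _, cost, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue                    # already expanded via a cheaper route
        came_from[node] = parent
        if node == goal:
            path = []
            while node is not None:     # walk parents back to the start
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_cost = cost + 1
                if new_cost < best_cost.get((nr, nc), float("inf")):
                    best_cost[(nr, nc)] = new_cost
                    heapq.heappush(open_set,
                                   (new_cost + h((nr, nc), goal), new_cost, (nr, nc), node))
    return None  # no path exists
```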

Now, with all of this, you still have to know where the living room is versus the kitchen or a bedroom. Being able to say "BOTNAME, go to the living room" without standing at an interface where you can click on the map requires that the robot know the coordinates of the living room. These would either have to be programmed into the robot, or the robot would need an external method of knowing that it is in the living room. Glyphs have been used for this, and there are some other ideas around as well, but for the robot to know that it has actually arrived, I am of the opinion that it needs some external means of verification.
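The "programmed in" option can be as simple as a lookup from room name to map coordinates that gets handed to whatever planner you use. This is just a sketch; the names and coordinates below are made up.

```python
ROOMS = {
    "living room": (12, 4),   # (row, col) on the occupancy grid -- example values
    "kitchen": (3, 18),
    "bedroom": (20, 9),
}

def go_to_room(name, grid, current_cell, planner):
    """planner: any function(grid, start, goal) -> path, e.g. the astar() sketch above."""
    goal = ROOMS.get(name.lower())
    if goal is None:
        return None   # unknown room -- ask the user instead of guessing
    return planner(grid, current_cell, goal)
```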

As you can see, this is a complex problem. As Richard mentioned, RoboRealm does a decent job of navigating with cameras using a process called AVM. The issue is that if objects are moved, the visual cues the robot recorded while learning a path are no longer valid. My wife repairs furniture and the furniture in my house changes almost weekly, so this type of setup doesn't work in my environment at all; even navigating to different markers can be disrupted frequently. Because of this, I need a system that is self-learning and adaptable to the environment. Your environment might be far more stable, so this might work well for you.

This is really the next frontier, IMHO. The goal of placing a robot in any environment and having it map that environment and know where it is within it (living room, kitchen, ...) is getting a huge push right now from many robotics researchers. SLAM has been around for a long time, but the processing power and cheap sensors needed to perform it are only now becoming available. That will push the technology forward, but there are issues with how it is being developed.

Each developer is developing for the specific sensors that match their robot platform. This means that unless you use the same motors and encoders, the same sensors, and the same base (for example, the distance between the wheels affects things), you really can't just take someone else's work and apply it to your own robot. There isn't a concerted effort to come up with a self-contained solution at this point, and there isn't a product you can buy, add to your robot, choose a few settings on, and be off and running. The closest thing to that right now is the RoboRealm path.