Rich mentioned today that it would be great to see a robot autonomously find an object, pick it up and move it. I agree! How the heck do you do that? I've been looking around at old posts but can't find anything to get me started.
I would like to do something like this with the robotic arm I just completed and eventually with my InMoov.
I suppose a place to start would be to have it find an object (camera), then navigate to the object; moving it would be the easiest part. Anyway, just blue-skying here.
This, of course, would be a great feature for a robot to have to assist a person with limited mobility. Anyone have any thoughts on this?
Edit: Sorry about the title of this post. Why can't I edit that?
I think all three of us are looking for something on the floor, not a table, so that isn't the issue. I think we can also all come up with a way to wander around until we see the "thing"; Rich's Ping Roam script is a good place to start if we can't. The challenge comes once the "thing" is spotted: getting the robot close enough to it and properly oriented. Like @kamaroman68 said, once we get close enough that we can treat the robot like a fixed arm on a table, the earlier example in this thread solves getting the arm in place and picking up. It is the coordination of a pan/tilt camera and a mobile robot to get to something once seen that seems to be the harder part.
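Here's roughly how I picture that coordination loop (just a sketch; camera.find_object, the drive calls, and the thresholds are all placeholders for whatever your controller actually exposes, not real commands):

```python
# Sketch of a "spot it, then drive to it" loop: steer so the object stays
# centered in the camera frame, and stop when it looks big enough (close
# enough) to hand off to the fixed-arm grab routine.

def approach_object(camera, drive, center_tolerance=0.1, close_size=0.4):
    while True:
        obj = camera.find_object("red ball")   # hypothetical tracker; None until spotted
        if obj is None:
            drive.stop()
            return False                       # lost sight; caller resumes searching

        # obj.x is the horizontal position, normalized -1.0 (left) .. 1.0 (right)
        if obj.x < -center_tolerance:
            drive.turn_left(speed=0.3)         # rotate until the object is centered
        elif obj.x > center_tolerance:
            drive.turn_right(speed=0.3)
        elif obj.size >= close_size:           # fraction of the frame it fills
            drive.stop()
            return True                        # close enough; treat like a fixed arm on a table
        else:
            drive.forward(speed=0.4)           # centered but still far: creep forward
```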
For simplicity, let's use a red ball as the item we all want to find and pick up. (Mine will actually be an assortment of cat toys, but each will be a previously trained object, and I'll only be looking for one at a time. i.e. "Roli, fetch the green bug toy").
Other parameters we can agree on to start are that the item will be within 10 feet or so and of a size that can be picked up by an EZ-Robot claw (kamaroman68 may be going for something bigger, but that should be easier, not harder, since a larger object is easier to identify from a distance). Let's also assume for now that the object is easily trainable and distinct from anything else in the room: a bright red ball in a room with no other red objects to confuse it. Once we get the basics working, more complex recognition can be an exercise for the students.
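For the recognition step itself, here's roughly what a color-based "find the red ball" looks like with OpenCV (a sketch; the HSV thresholds are guesses you would tune for your ball and lighting):

```python
import cv2

def find_red_ball(frame):
    """Return (x, y, radius) of the largest red blob in a BGR frame, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so threshold two bands and combine them.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(largest)
    return (int(x), int(y), int(radius))
```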
@kamaroman68 and @RoboHappy, feel free to correct my assumptions or provide additional guidance to help explain what you are looking for or to help simplify the task.
Alan
Hmmm, wait, this is fun! Could it be a game of hide and seek with the robot?
We can use Roli as a good base to get started. Have the robot run around until it finds the red ball! I like it.
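Something like this, maybe (a sketch; the camera, drive, and sonar objects are placeholders, and the back-off logic is basically Rich's Ping Roam idea):

```python
import random
import time

def search_for_ball(camera, drive, sonar, obstacle_cm=30):
    """Wander with simple obstacle avoidance until the camera spots the ball."""
    while camera.find_object("red ball") is None:
        if sonar.distance_cm() < obstacle_cm:
            drive.stop()
            drive.reverse(speed=0.3)             # back off the obstacle
            time.sleep(0.5)
            if random.random() < 0.5:            # pick a new random heading
                drive.turn_left(speed=0.4)
            else:
                drive.turn_right(speed=0.4)
            time.sleep(random.uniform(0.5, 1.5))
        drive.forward(speed=0.4)
    drive.stop()   # spotted it; hand off to the approach loop
```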
Also, how many of you would be interested in using the lidar scanner thingy that Dave Cochrane and Richard were talking about, if I added a control for it?
I just haven't figured out the importance of it yet... or the purpose. Guess this is a good place to start!
If you know the room layout, you know the boundaries. This helps in the equation. I just haven't had time to get back to the LIDAR lately; I have soooo much going on. Help on the LIDAR would be awesome.
I am interested in the Neato LIDAR if for nothing other than really good collision avoidance. The ping and IR sensors are OK, but the LIDAR is much more precise.
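For the collision avoidance piece, the logic is simple enough (a sketch assuming the scan arrives as (angle, distance) pairs the way the Neato reports them, one reading per degree in millimeters):

```python
def forward_clear(scan, arc_deg=30, min_mm=400):
    """scan: list of (angle_deg, distance_mm) pairs, 0 degrees = straight ahead.
    Returns True if nothing in the forward arc is closer than min_mm."""
    for angle, dist in scan:
        # Keep only the wedge straight ahead (e.g. -30..+30 degrees).
        if (angle <= arc_deg or angle >= 360 - arc_deg) and 0 < dist < min_mm:
            return False   # a 0 distance usually means "no return", so ignore those
    return True
```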
And, yes, (not to be selfish, just working with what I know you have) a Roli with the arm either mounted to the front or extended (see the project MyRoliMKii for an example) is probably a good small scale model of what the other guys are doing.
Alan
LIDAR is second on the add-ons list.
First, it is important to have a "dead reckoning" system, which implies wheel encoders. Roli does not have encoders, so the next question is how to add them.
When choosing an encoder, resolution is important; anything less than 1000 ticks per revolution can come up short.
That's my opinion, but I could be wrong. Can someone validate what should come first?
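For reference, this is what the dead reckoning math looks like for a differential-drive base once you have encoder ticks (a textbook sketch; the wheel numbers are placeholders):

```python
import math

TICKS_PER_REV = 1000          # encoder resolution (the number discussed above)
WHEEL_DIAMETER_M = 0.065      # placeholder wheel size
WHEEL_BASE_M = 0.20           # placeholder distance between the wheels
M_PER_TICK = math.pi * WHEEL_DIAMETER_M / TICKS_PER_REV

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Classic differential-drive dead reckoning: integrate one encoder sample."""
    d_left = left_ticks * M_PER_TICK
    d_right = right_ticks * M_PER_TICK
    d_center = (d_left + d_right) / 2.0          # distance the midpoint moved
    d_theta = (d_right - d_left) / WHEEL_BASE_M  # heading change in radians
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta
```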
The thought I have is that even without encoders, the LIDAR can be used to tell you whether you are going relatively straight or not, but used together you have a really accurate system of measurement.
If you know that an object is 7 feet away and you move forward a distance, the object would then be 7 minus the distance moved. So if the object is now 5 feet away and you moved toward it, you know that you moved 2 feet. If you can't tell whether anything is in front of you and you move forward, the encoders are what would tell you the distance.
Without the LIDAR, the encoders would work great to tell you the distance traveled, but you lose the rest of the room.
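That 7-feet-minus-distance idea as a sketch, blending the LIDAR range change with the encoder reading (the weighting here is made up for illustration, not a real filter):

```python
def blended_distance(range_before_m, range_after_m, encoder_m, lidar_weight=0.7):
    """Estimate forward travel by blending the LIDAR range change with encoders.
    e.g. object at 7 ft, then 5 ft -> the LIDAR says we moved 2 ft, slip or not."""
    lidar_m = range_before_m - range_after_m     # positive when closing the gap
    if lidar_m <= 0:
        return encoder_m                         # nothing trackable ahead: trust encoders
    return lidar_weight * lidar_m + (1.0 - lidar_weight) * encoder_m
```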
The ultimate solution is a SLAM-based approach where you build a map grid and know where you are and what has changed in the environment. The camera is then used to detect the object and to verify which direction you want to move in to get to it.
This is all just my opinion. I can't wait to be able to get back to working on it.
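The map-grid half of that, boiled down to its simplest form, is an occupancy grid where every LIDAR return bumps a cell's hit count (real SLAM also corrects the robot's pose estimate, which this sketch skips entirely):

```python
import math

CELL_M = 0.05                     # 5 cm grid cells

def mark_hits(grid, robot_x, robot_y, robot_theta, scan):
    """grid: dict mapping (col, row) -> hit count. scan: (angle_deg, distance_mm) pairs.
    Converts each LIDAR return into world coordinates and bumps that cell."""
    for angle_deg, dist_mm in scan:
        if dist_mm <= 0:
            continue                               # no return at this angle
        bearing = robot_theta + math.radians(angle_deg)
        hx = robot_x + (dist_mm / 1000.0) * math.cos(bearing)
        hy = robot_y + (dist_mm / 1000.0) * math.sin(bearing)
        cell = (int(hx // CELL_M), int(hy // CELL_M))
        grid[cell] = grid.get(cell, 0) + 1         # more hits = more confidently occupied
```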
But the question is: can a robot perform navigation/localization without encoders?
If you have only LIDAR data, you will need to perform a lot of calculations to understand whether you are moving straight, or whether you are approaching a wall and an obstacle at the same time.
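To give a feel for those calculations: even the crudest LIDAR-only motion estimate means matching consecutive scans, e.g. brute-forcing the rotation that lines them up best (real scan matching is far more involved than this sketch):

```python
def estimate_rotation(prev_scan, curr_scan):
    """prev_scan/curr_scan: 360-element lists of distances in mm, one per degree.
    Brute force: try every 1-degree shift and keep the one where the scans agree
    best on average. Only meaningful if the robot mostly rotated in place."""
    best_shift, best_error = 0, float("inf")
    for shift in range(360):
        total, count = 0.0, 0
        for i in range(360):
            a, b = prev_scan[(i + shift) % 360], curr_scan[i]
            if a > 0 and b > 0:                 # skip angles with no return
                total += abs(a - b)
                count += 1
        if count and total / count < best_error:
            best_shift, best_error = shift, total / count
    return best_shift                           # estimated rotation in degrees
```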
Yeah - the robot works better without wheel encoders. There is far too much slip when turning, specifically with tracks, for wheel encoders to be reliable.
I refuse to use wheel encoders after terrible past experiences. In theory, it's great... wheels turn, the controller counts and knows the distances. But that's not the case when turning or driving over rough terrain.
If it's okay with everyone, I would prefer to focus on localized navigation with a LIDAR or similar approach rather than continue the wheel encoder discussion. I've had this discussion on this forum at length in the past and would rather not revisit it if possible.