
Rich mentioned today that it would be great to see a robot autonomously find an object, pick it up and move it. I agree! How the heck do you do that? I've been looking around at old posts but can't find anything to get me started.
I would like to do something like this with the robotic arm I just completed and eventually with my InMoov.
I suppose that a place to start would be to have it find an object (camera), then navigate to the object; moving it would be the easiest part. Anyway, just blue-skying here.
This, of course, would be a great feature for a robot to have to assist a person with limited mobility.
Anyone have any thoughts on this?
Edit: Sorry about the title of this post. Why can't I edit that?
Cy weighs in @ 25 lbs, so I too want to make sure he doesn't crash into walls or run over someone's foot (don't ask, heheh).
Challenge accepted!
DJ, great to have your help on the project. I am sure the solution will be awesome now. If it would help to have a 4-in-1 sensor to tell the robot to move a specific number of degrees, rather than a set amount of time or until the object reaches the edge of its vision, I have a few extras that I got in the Brookstone BoGo sale as spares and would be happy to share with @kamaroman68 and @RoboHappy.
Alan
I still haven't been able to identify a goal. Obviously a robot can't just drive around a room and pick up random stuff; there are philosophical conversations regarding motivation and free will to understand first.
However, navigating across a room to identify a table, searching for a specific object on that table, and picking it up? Now that's more like it.
Firstly, even a human can't just wander around a room "looking" for something without an idea of where that something should be "looked for". As an example, if I told you to get my keys from my house, would you start looking in the sofa cushions first? Would you look behind the television and pick up the stove to look under it? Obviously some rules need to be defined.
Let's say it's
1) find table
2) look for object on table
3) pickup object
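Just to make those three steps concrete, here's how they might chain together as a tiny state machine. Every method on this Robot class is a hypothetical placeholder, not a real EZ-Robot call; each would wrap camera tracking and movement commands in a real build:

```python
# Sketch of the three-step plan as a tiny state machine.
# The Robot class is a hypothetical stand-in; none of these methods
# are real EZ-Robot APIs.

class Robot:
    def find_table(self) -> bool:
        # e.g. rotate in place until the camera reports a table-like shape
        return True  # stubbed for illustration

    def find_object_on_table(self, name: str) -> bool:
        # e.g. pan/tilt the camera across the table surface
        return True  # stubbed

    def pickup(self) -> bool:
        # e.g. center the object, drive in, close the claw
        return True  # stubbed

def run(robot: Robot) -> None:
    state = "FIND_TABLE"
    while state != "DONE":
        if state == "FIND_TABLE" and robot.find_table():
            state = "FIND_OBJECT"
        elif state == "FIND_OBJECT" and robot.find_object_on_table("red ball"):
            state = "PICKUP"
        elif state == "PICKUP" and robot.pickup():
            state = "DONE"

run(Robot())
```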
Sure, that's easy. Of course, what ezrobot is still missing is a method to identify its location and navigate to waypoints within a home. Lidar is far too expensive for ezrobot to invest in as an ezbit. But the vacuum thing that everyone seems to care about is affordable... just not easily integrated, yet. I have one, and should really spend some time with it.
However, if you saw the dev schedule that Jeremie and I are undertaking right now to get these ezbits shipping (8x8 RGB, inverted pendulum, RGB serial LED, line follower, EZ-B mini, etc.)... geez, we have been working every day, and I think we're keeping our local PCB manufacturer very wealthy with daily prototype redesigns!
Anyway, that aside... identifying the table location is waypoint navigation. Can be done.
Identifying how high the table is... Well, I can try to use the InMoov... But the InMoov kinda sucks for any coordinated activity. Sure, it looks great, but it's slow as molasses and its hands are absolutely useless. I will say that InMoov's hands are great for burning out servos... that's about all they can do. Picking something up? Ha, good luck!
Guess I'll have to put more thought into what we are trying to achieve here...
For simplicity, let's use a red ball as the item we all want to find and pick up. (Mine will actually be an assortment of cat toys, but each will be a previously trained object, and I'll only be looking for one at a time, e.g. "Roli, fetch the green bug toy.")
Other parameters we can agree on to start are that the item will be within 10 feet or so, and a size that can be picked up by an EZ-Robot claw (kamaroman68 may be going for something bigger, but that should be easier, not harder, since a larger object is easier to identify from a distance). Let's also assume for now that the object is easily trainable and distinct from anything else in the room: a bright red ball in a room with no other red objects to confuse it. Once we get the basics working, more complex recognition can be an exercise for the students.
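The EZ-Builder camera control has color tracking built in, but for anyone who wants to experiment outside of it, here's a minimal "find the red ball" sketch in Python with OpenCV. The camera index, pixel thresholds, and HSV bounds are all assumed values for illustration, not tuned numbers:

```python
# Minimal red-ball finder: threshold red in HSV, find the blob centroid,
# and decide whether to steer left, right, or drive forward.
import cv2

cap = cv2.VideoCapture(0)  # assumes a generic USB camera at index 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Red hue wraps around 0 in HSV, so threshold two bands and combine.
    mask = cv2.bitwise_or(
        cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)),
        cv2.inRange(hsv, (170, 120, 70), (180, 255, 255)),
    )
    m = cv2.moments(mask)
    if m["m00"] > 5000:  # enough red mass to call it "the ball"
        cx = m["m10"] / m["m00"]      # blob centroid, x pixel
        center = frame.shape[1] / 2
        if cx < center - 40:
            print("turn left")        # replace prints with movement commands
        elif cx > center + 40:
            print("turn right")
        else:
            print("drive forward")
    else:
        print("searching...")
```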
@kamaroman68 and @RoboHappy, feel free to correct my assumptions or provide additional guidance to help explain what you are looking for or to help simplify the task.
Alan
We can use Rolis as a good base to get started. Have the robot run around until it finds the red ball! I like it.
Also, how many of you would be interested in using the lidar scanner thingy that Dave Cochrane and Richard were talking about, if I added a control for it?
I just haven't figured out the importance of it yet... or the purpose. Guess this is a good place to start!
And, yes (not to be selfish, just working with what I know you have), a Roli with the arm either mounted to the front or extended (see the project MyRoliMKii for an example) is probably a good small-scale model of what the other guys are doing.
Alan
First, it's important to have a "dead reckoning" system; this implies wheel encoders.
Roli does not have encoders, so the next question is how to add them.
When choosing an encoder, resolution is important; anything less than 1000 ticks per revolution can come up short.
That's my opinion, but I could be wrong. Can someone validate what should come first?
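To put numbers on the resolution point, here's the arithmetic. The 3-inch wheel diameter is just an assumed example, not a Roli spec:

```python
import math

# Linear distance represented by one encoder tick, for a few tick counts.
wheel_diameter_in = 3.0  # assumed wheel size, for illustration only
circumference = math.pi * wheel_diameter_in  # ~9.42 inches per revolution

for ticks_per_rev in (100, 500, 1000, 4000):
    per_tick = circumference / ticks_per_rev
    print(f"{ticks_per_rev:>5} ticks/rev -> {per_tick:.4f} in per tick")

# 100 ticks/rev  -> ~0.094 in per tick (almost a tenth of an inch of slop)
# 1000 ticks/rev -> ~0.009 in per tick
```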
If you know that an object is 7 feet away and you move forward a distance, the object would then be at 7 minus the distance moved. So if the object is now 5 feet away and you moved toward it, you know that you moved 2 feet. If you can't tell whether anything is in front of you and you move forward, the encoders are what would tell you the distance.
Without the LIDAR, the encoders would work great to tell you the distance traveled, but you lose the rest of the room.
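A quick sketch of that bookkeeping (the feet-per-revolution figure assumes a roughly 3-inch wheel and a 1000-tick encoder, purely for illustration):

```python
# Track remaining range to an object from encoder counts:
# range_remaining = starting range - distance covered since the sighting.
TICKS_PER_REV = 1000   # assumed encoder resolution
FEET_PER_REV = 0.785   # ~9.42 in circumference / 12, assumed wheel size

def range_remaining(start_range_ft: float, tick_count: int) -> float:
    distance_moved = (tick_count / TICKS_PER_REV) * FEET_PER_REV
    return start_range_ft - distance_moved

# Object seen at 7 ft; after 2548 ticks we've covered ~2 ft:
print(range_remaining(7.0, 2548))   # -> ~5.0
```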
The ultimate solution is a SLAM-based approach where you build a map grid and know where you are and what has changed in the environment. The camera is then used to detect the object and to verify which direction you want to move in to get to it.
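The heart of that approach is an occupancy grid. Here's a bare-bones sketch of the map-update half, assuming the robot's pose is already known; real SLAM has to estimate that pose at the same time, which is the hard part. Grid size, cell size, and the probability increments are all assumed values:

```python
import math

# Minimal occupancy-grid update: mark cells along each lidar ray as
# likely free and the cell at the hit point as likely occupied.
SIZE, CELL = 100, 0.05                        # 100x100 grid, 5 cm cells
grid = [[0.5] * SIZE for _ in range(SIZE)]    # 0.5 = unknown

def update(x, y, theta, scan):
    """x, y in meters, theta in radians (pose assumed known here).
    scan: list of (bearing_rad, range_m) pairs from the lidar."""
    for bearing, rng in scan:
        a = theta + bearing
        for i in range(int(rng / CELL)):      # cells along the ray
            cx = int((x + i * CELL * math.cos(a)) / CELL)
            cy = int((y + i * CELL * math.sin(a)) / CELL)
            if 0 <= cx < SIZE and 0 <= cy < SIZE:
                grid[cy][cx] = max(0.0, grid[cy][cx] - 0.1)   # likely free
        hx = int((x + rng * math.cos(a)) / CELL)              # hit point
        hy = int((y + rng * math.sin(a)) / CELL)
        if 0 <= hx < SIZE and 0 <= hy < SIZE:
            grid[hy][hx] = min(1.0, grid[hy][hx] + 0.3)       # likely occupied

# One fake reading: robot at the grid center, wall 1 m dead ahead.
update(2.5, 2.5, 0.0, [(0.0, 1.0)])
```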
This is all just my opinion. I can't wait to be able to get back to working on it.
If you have only lidar data, you will need to perform a lot of calculations to understand whether you're moving straight, or whether you are reaching a wall and an obstacle at the same time.
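To give a taste of those calculations, here's about the simplest useful one: a check for anything inside a forward cone of the scan. The cone width and range limit are assumed values, and telling a wall apart from an obstacle (or detecting heading drift) takes considerably more than this:

```python
import math

def obstacle_ahead(scan, cone_deg=20, limit_m=0.5):
    """scan: list of (bearing_rad, range_m) pairs.
    True if any return inside a +/- cone_deg cone is closer than
    limit_m. Both thresholds are assumed, untuned values."""
    half = math.radians(cone_deg)
    return any(r < limit_m for b, r in scan if -half <= b <= half)

# Example: clear ahead, but something 0.3 m away off to the left
# (outside the 20-degree cone), so this prints False.
print(obstacle_ahead([(0.0, 2.0), (math.radians(30), 0.3)]))
```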
I refuse to use wheel encoders after terrible past experiences. In theory, it's great... wheels turn, the controller counts, and it knows how far it has gone. But that's not the case when turning or driving over rough terrain.
If it's okay with everyone, I would prefer to focus on localized navigation with a lidar or similar approach rather than continue the wheel encoder discussion. I've had this discussion on this forum at length in the past and do not wish to revisit it if possible.
One other thing: in a previous conversation with me you were thinking about adding "servo trim" for my Dynamixel MX-64T servos, because they don't have the full range that the AX-12A has. Thanks for all the help everyone!
So no redesign is necessary for you.
Just to be clear, is the camera another USB camera, or is it possible to buy a USB adapter for the EZ-B camera?