My robot would be sitting at a table, and it would look down at the objects on that table. It would be able to identify the objects on the table: salt, pepper, a can of beer, a knife, a plate, a spoon, and carrots on the plate. It would also know the 3D coordinates of each object. Knowing that, it would reach out and pick up the beer. I think TensorFlow can do some of this already. Microsoft Cognitive Services vision gives you the objects it sees, but not their locations.
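The missing piece, turning a detected bounding box into a 3D grasp target, can be sketched with the pinhole camera model, assuming a depth camera supplies the distance at the box centre. The intrinsic values (`fx`, `fy`, `cx`, `cy`) and the pixel/depth numbers below are hypothetical, for illustration only:

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with a measured depth (in metres)
    into camera-frame 3D coordinates using the pinhole model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Example (assumed values): the detector reports a "beer can" box centred
# at pixel (400, 260), the depth sensor reads 0.75 m there, and we assume
# typical intrinsics for a 640x480 depth camera.
fx = fy = 525.0
cx, cy = 320.0, 240.0
x, y, z = pixel_to_3d(400, 260, 0.75, fx, fy, cx, cy)
```

The resulting (x, y, z) point is in the camera's frame; a real arm would still need a camera-to-arm calibration transform before reaching for it.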