My robot would be sitting at a table, looking down at the objects on it. It would be able to identify each object: salt, pepper, a can of beer, a knife, a plate, a spoon, and carrots on the plate. It would also know the coordinates of each object in 3D space. Knowing that, it would reach out and pick up the beer. I think TensorFlow can do some of this already. Microsoft Cognitive Services vision gives you the objects it sees, but not their locations.
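The gap between "what object" and "where in 3D" can be bridged if the robot also has a depth reading for each detection. As a minimal sketch (not any specific library's API), here is how a detected object's pixel position plus its depth could be back-projected to a 3D point using the standard pinhole camera model. The camera intrinsics below (fx, fy, cx, cy) are made-up example values, not from a real camera.

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) at the given depth (meters)
    to an (x, y, z) point in the camera's coordinate frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Hypothetical example: a beer can detected at pixel (400, 260),
# 0.6 m away, seen by a 640x480 camera with assumed intrinsics
# fx = fy = 525, cx = 320, cy = 240.
point = pixel_to_3d(400, 260, 0.6, 525.0, 525.0, 320.0, 240.0)
```

The returned point is in the camera's frame; a real arm would still need a camera-to-arm transform before reaching for it.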