My robot would be sitting at a table, and it would look down at the objects on that table. It would be able to identify the objects it sees: salt, pepper, a can of beer, a knife, a plate, a spoon, and carrots on the plate. It would also know the coordinates of each object in 3D space. Knowing that, it would reach out and pick up the beer. I think TensorFlow can do some of this already. Microsoft Cognitive Services vision gives you the objects it sees, but not their locations.
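The missing piece is going from a 2D detection to a 3D position. One common approach is to pair the detector with a depth camera and back-project the detection's pixel location through the pinhole camera model. Here is a minimal sketch of that idea; the function name, the detector output format, and all the intrinsics (`fx`, `fy`, `cx`, `cy`) are illustrative assumptions, not real calibration values or any particular library's API:

```python
# A minimal sketch, assuming a depth camera and known pinhole intrinsics.
# All numeric values below are made-up examples, not real calibration data.

def pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) at depth_m meters into camera-space (X, Y, Z)."""
    x = (u - cx) * depth_m / fx   # horizontal offset from the optical center
    y = (v - cy) * depth_m / fy   # vertical offset from the optical center
    return (x, y, depth_m)

# Hypothetical detector output: label plus bounding-box center in pixels.
detections = [("beer can", 380, 240), ("salt", 150, 300)]

fx = fy = 600.0          # focal lengths in pixels (example values)
cx, cy = 320.0, 240.0    # optical center for a 640x480 image

for label, u, v in detections:
    depth_m = 1.2        # stand-in for a per-pixel depth-camera reading
    print(label, pixel_to_3d(u, v, depth_m, fx, fy, cx, cy))
```

With something like this, each detected object gets an (X, Y, Z) in the camera frame, which an arm controller could then transform into its own coordinate frame to plan the reach.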