My robot would be sitting at a table, looking down at the objects on it. It would be able to identify those objects: salt, pepper, a can of beer, a knife, a plate, a spoon, and carrots on the plate. It would also know the 3D coordinates of each object. Knowing that, it would reach out and pick up the beer. I think TensorFlow can do some of this already. Microsoft Cognitive Services vision gives you the objects it sees, but not their locations.
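To illustrate the missing "location" piece: if the detector (e.g. a TensorFlow object-detection model) gives a bounding-box center in pixels and a depth camera gives the distance at that pixel, a standard pinhole back-projection turns the pair into a camera-frame 3D point the arm could reach toward. This is only a sketch; the intrinsics (fx, fy, cx, cy) and the example pixel/depth values below are made-up numbers for illustration, not from any specific camera.

```python
def pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) at a given depth (meters) through a
    pinhole camera model to camera-frame (X, Y, Z) coordinates.

    fx, fy: focal lengths in pixels; cx, cy: principal point.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Hypothetical example: the detector reports the beer can's box center
# at pixel (400, 260), and the depth sensor reads 0.55 m there.
point = pixel_to_3d(400, 260, 0.55, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(point)
```

The result is a point in the camera's own frame; a real robot would still need a calibrated camera-to-arm transform before commanding the gripper.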