My robot would be sitting at a table, and it would look down at the objects on that table. It would be able to identify the objects on the table: salt, pepper, a can of beer, a knife, a plate, a spoon, and carrots on the plate. It would also know the 3D coordinates of each object. Knowing that, it would reach out and pick up the beer. I think TensorFlow can do some of this already. Microsoft Cognitive Services vision gives you the objects it sees, but not their locations.
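The gap between "what the object is" and "where it is in 3D" can be bridged with a depth sensor and a pinhole-camera back-projection. This is a minimal sketch, assuming you already have a bounding-box centre from a detector and a depth reading (e.g. from a depth camera); the function name, pixel coordinates, and camera intrinsics below are illustrative, not from any particular library.

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with known depth into camera-space 3D.

    Assumes a pinhole camera model: fx/fy are focal lengths in pixels,
    (cx, cy) is the principal point (roughly the image centre).
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Hypothetical detection: a beer can whose bounding box is centred at
# pixel (400, 300), with a depth reading of 0.6 m.
# Intrinsics are example values for a 640x480 camera.
pos = pixel_to_3d(400, 300, 0.6, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(pos)  # camera-space (x, y, z) in metres
```

With a position like this for each detected object, the arm controller has a reach target; the hard remaining work is calibrating the camera-to-arm transform.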
