My robot would be sitting at a table, and it would look down at the objects on that table. It would be able to identify the objects on the table: salt, pepper, a can of beer, a knife, a plate, a spoon, and carrots on the plate. It would also know the coordinates of each object in 3D space. Knowing that, it would reach out and pick up the beer. I think TensorFlow can do some of this already. Microsoft Cognitive Services vision gives you the objects it sees, but not their locations.
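One way to bridge that gap is to combine a 2D detector with a depth camera: the detector gives you a label and a pixel-space bounding box, a depth sensor gives you the distance at that pixel, and standard pinhole-camera math converts the pair into a 3D point the arm can reach for. Here is a minimal sketch of that last step. The intrinsics (fx, fy, cx, cy) and the bounding box are made-up example values, not from any specific camera or detector:

```python
from dataclasses import dataclass

@dataclass
class CameraIntrinsics:
    """Pinhole camera parameters (hypothetical example values below)."""
    fx: float  # focal length in pixels, x axis
    fy: float  # focal length in pixels, y axis
    cx: float  # principal point x, in pixels
    cy: float  # principal point y, in pixels

def box_center(box):
    """Center pixel of a detector's (xmin, ymin, xmax, ymax) bounding box."""
    xmin, ymin, xmax, ymax = box
    return (xmin + xmax) / 2.0, (ymin + ymax) / 2.0

def deproject(u, v, depth_m, k):
    """Pinhole deprojection: pixel (u, v) at depth_m metres -> camera-frame XYZ."""
    x = (u - k.cx) * depth_m / k.fx
    y = (v - k.cy) * depth_m / k.fy
    return (x, y, depth_m)

# Hypothetical calibration; use your own camera's calibrated values in practice.
k = CameraIntrinsics(fx=600.0, fy=600.0, cx=320.0, cy=240.0)

# Suppose the detector drew this box around the beer can,
# and the depth camera reads 0.5 m at its center.
u, v = box_center((300, 200, 340, 280))
print(deproject(u, v, 0.5, k))  # -> (0.0, 0.0, 0.5), a 3D target for the arm
```

The detection itself could come from a TensorFlow object detection model; the point of the sketch is only that once you have a box plus a depth reading, the 3D coordinates the post asks for are a few lines of arithmetic away.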