My robot would be sitting at a table, looking down at the objects on it. It would be able to identify each object on the table: salt, pepper, a can of beer, a knife, a plate, a spoon, and carrots on the plate. It would also know the coordinates of each object in 3D space. Knowing that, it would reach out and pick up the beer. I think TensorFlow can do some of this already. Microsoft Cognitive Services vision gives you the objects it sees, but not their locations.
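One possible way to bridge that gap: take the 2D pixel position of a detected object (from TensorFlow or a similar detector) plus a reading from a depth camera, and back-project it into 3D with the standard pinhole camera model. This is just a sketch; the detection, the depth value, and the camera intrinsics (fx, fy, cx, cy) here are all made-up example numbers, not real calibration data.

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with a depth reading (metres)
    into camera-frame 3D coordinates using the pinhole model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Hypothetical detection: the beer can's bounding-box centre lands at
# pixel (320, 240), and the depth camera reads 0.6 m there.
# The intrinsics below are typical-looking placeholders, not calibrated values.
beer_xyz = pixel_to_3d(320, 240, 0.6, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(beer_xyz)  # (x, y, z) in metres, in the camera's frame
```

The resulting point is in the camera's coordinate frame, so the robot would still need a hand-eye transform to convert it into arm coordinates before reaching for the can.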