My robot would be sitting at a table, and it would look down at the objects on that table. It would be able to identify the objects on the table: salt, pepper, a can of beer, a knife, a plate, a spoon, and carrots on the plate. It would also know the 3D coordinates of each object. Knowing that, it would reach out and pick up the beer. I think TensorFlow can do some of this already. Microsoft Cognitive Services vision gives you the objects it sees, but not their locations.
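To show what the missing "location" piece could look like, here is a minimal sketch that turns a 2D detection into a 3D point, assuming a depth camera aligned with the colour camera. The detection label, bounding box, depth value, and camera intrinsics below are all hypothetical examples; the detection itself could come from TensorFlow object detection or a cloud vision API.

```python
import numpy as np

# Hypothetical camera intrinsics (focal lengths and principal point, in pixels).
FX, FY = 600.0, 600.0
CX, CY = 320.0, 240.0

def pixel_to_camera_xyz(u, v, depth_m):
    """Back-project a pixel (u, v) with depth in metres into camera coordinates."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

# Hypothetical detection: label plus bounding box in pixel coordinates (x0, y0, x1, y1).
detection = {"label": "beer can", "box": (300, 180, 360, 300)}

# Sample the depth image at the centre of the bounding box.
x0, y0, x1, y1 = detection["box"]
u, v = (x0 + x1) / 2, (y0 + y1) / 2

# Depth at that pixel, read from the aligned depth frame (stubbed here as 0.55 m).
depth_m = 0.55

xyz = pixel_to_camera_xyz(u, v, depth_m)
print(f"{detection['label']} is at {xyz} in camera coordinates")
```

With something like this, the robot would know not just that there is a beer can on the table, but roughly where to reach for it.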