My robot would be sitting at a table, looking down at the objects on it. It would be able to identify each object on the table: salt, pepper, a can of beer, a knife, a plate, a spoon, and carrots on the plate. It would also know the 3D coordinates of each object. Knowing that, it would reach out and pick up the beer. I think TensorFlow can do some of this already. Microsoft Cognitive Services vision tells you which objects it sees, but not where they are.
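The missing piece, going from a detected object to its 3D position, can be sketched with the pinhole camera model: take the bounding-box center a detector gives you, read the depth at that pixel from a depth camera, and back-project. Everything below is a hypothetical illustration: the function name `pixel_to_3d`, the camera intrinsics (fx, fy, cx, cy), and the example depth reading are made-up placeholder values, not from any particular robot or API.

```python
# Sketch: turn a detected object's bounding-box center plus a depth
# reading into a 3D point in the camera frame (pinhole model).
# The intrinsics below are hypothetical 640x480 example values.

def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (meters) into camera space."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Hypothetical detection: a "beer can" box centered at pixel (320, 240),
# with the depth camera reporting 0.75 m at that pixel.
box_center = (320, 240)
point = pixel_to_3d(*box_center, depth=0.75,
                    fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(point)  # (x, y, z) in meters, camera frame
```

With a point like this in hand, the remaining step for the robot would be transforming it from the camera frame into the arm's frame before planning the reach.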