My robot would be sitting at a table, looking down at the objects on it. It would be able to identify each object: salt, pepper, a can of beer, a knife, a plate, a spoon, and carrots on the plate. It would also know the 3D coordinates of each object. Knowing that, it would reach out and pick up the beer. I think TensorFlow can do some of this already. Microsoft Cognitive vision gives you the objects it sees, but not the locations.
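The missing piece is turning a 2D detection into a 3D position. If the camera also gives you depth (e.g. from a RealSense or Kinect), you can back-project a detected object's pixel into 3D camera coordinates with the standard pinhole model. Here's a minimal sketch; the intrinsics values and the `detect` dictionary are made-up placeholders standing in for whatever your detector (TensorFlow, etc.) actually returns:

```python
def deproject(u, v, depth, fx, fy, cx, cy):
    """Convert a pixel (u, v) with depth (meters) into 3D camera
    coordinates using the pinhole camera model.
    fx, fy: focal lengths in pixels; cx, cy: principal point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = depth
    return (x, y, z)

# Hypothetical detector output: label plus bounding-box center and
# the depth reading at that pixel (values are illustrative only).
detect = {"label": "beer can", "u": 400, "v": 260, "depth": 0.55}

# Example intrinsics for a 640x480 camera (placeholder numbers --
# use your camera's actual calibration).
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0

x, y, z = deproject(detect["u"], detect["v"], detect["depth"], fx, fy, cx, cy)
print(f"{detect['label']} is at ({x:.3f}, {y:.3f}, {z:.3f}) m in camera frame")
```

From there you'd transform the camera-frame point into the arm's base frame with the camera's mounting pose, and hand it to your inverse kinematics to reach for the can.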