My robot would be sitting at a table, and it would look down at the objects on that table. It would be able to identify each object: salt, pepper, a can of beer, a knife, a plate, a spoon, and carrots on the plate. It would also know the 3D coordinates of each object. Knowing that, it would reach out and pick up the beer. I think TensorFlow can do some of this already. Microsoft Cognitive vision gives you the objects it sees, but not their locations.
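To sketch how the "locations" half could work: this is not any specific product's API, just a hedged illustration assuming a detector (such as a TensorFlow object-detection model) has already returned a pixel position for each object, and a depth camera reports the distance at that pixel. The standard pinhole-camera model then back-projects the pixel into 3D coordinates. The detections list and the camera intrinsics (`fx`, `fy`, `cx`, `cy`) below are hypothetical values, not measurements from real hardware.

```python
# Sketch, not a working robot: assumes per-object pixel positions from
# an object detector plus per-pixel depth from a depth camera.

def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) at the given depth (meters) into
    camera-frame 3D coordinates using pinhole intrinsics."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Hypothetical detections: (label, pixel center, depth in meters).
detections = [
    ("beer can", (820, 240), 0.50),
    ("salt",     (320, 240), 0.45),
]

# Hypothetical intrinsics for a 640x480 depth camera.
fx = fy = 500.0
cx, cy = 320.0, 240.0

for label, (u, v), z in detections:
    print(label, pixel_to_3d(u, v, z, fx, fy, cx, cy))
```

Once each label has an (x, y, z) in the camera frame, a transform from the camera to the arm base is all that is left before the robot can reach for the beer.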