My robot would be sitting at a table, and it would look down at the objects on that table. It would be able to identify the objects on the table: salt, pepper, a can of beer, a knife, a plate, a spoon, and carrots on the plate. It would also know the 3D coordinates of each object. Knowing that, it would reach out and pick up the beer. I think TensorFlow can do some of this already. Microsoft Cognitive Services vision gives you the objects it sees, but not their locations.
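To illustrate the missing piece, here is a minimal Python sketch of how a 2D detection could be turned into a 3D position, assuming a calibrated depth camera and the standard pinhole camera model. The intrinsics, the bounding box, and the depth reading below are made-up placeholders, not values from any particular detector or camera:

```python
import numpy as np

# Hypothetical camera intrinsics (focal lengths and principal point,
# in pixels) -- real values come from calibrating your depth camera.
FX, FY = 615.0, 615.0
CX, CY = 320.0, 240.0

def pixel_to_3d(u, v, depth_m):
    """Deproject pixel (u, v) with a depth reading (metres) into a
    3D point in the camera frame via the pinhole camera model."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

# Example: suppose an object detector (e.g. a TensorFlow model) reports
# a bounding box for the beer can; take its centre pixel and look up
# the depth there in the aligned depth frame.
box = {"label": "beer can", "xmin": 280, "ymin": 180,
       "xmax": 360, "ymax": 300}
u = (box["xmin"] + box["xmax"]) / 2
v = (box["ymin"] + box["ymax"]) / 2
depth_at_pixel = 0.45  # metres, placeholder depth-frame sample

print(box["label"], "at", pixel_to_3d(u, v, depth_at_pixel),
      "(camera frame, metres)")
```

Note the result is in the camera's frame; moving the arm to it would need one more calibrated transform from camera frame to arm frame, which is a separate step.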
