
Discussing TOF Cameras For Robot Vision And Navigation

I have some scanning software for my PrimeSense camera called Skanect. The company that produced that software (Occipital) also created hardware for the iPad: a clip-on scanner called the Structure Sensor.

Recently they've created another device called the Structure Core Mark II. It's a time-of-flight camera like the others, but it's an all-in-one, self-contained unit. They list many use cases for the vision camera, like robotics, VR, AR, etc.

My thought is: could this somehow be used with ARC for navigation? What are the pros and cons of time of flight vs. SLAM? What advantages would one have over the other?


Related Hardware EZ-B v4


#1  

Amazing stuff. I'm not sure, however, that this technology is viable and affordable for picking fruit out in the strawberry fields. But I understand that a self-guided, light, and agile touch would be a huge advantage for a robot to have.

#2  

It's a really nice depth camera, accurate to within a couple of millimeters. The device and software are only $399. That price is about the same as the PrimeSense camera from several years ago; PrimeSense was bought by Apple, and its technology went into the iPhone X for the face scanning that unlocks the phone.

Maybe this is obsolete since you can use a color camera and SLAM. Do you really need a depth camera on a robot?

#3  

"Do you really need a depth camera on a robot?"

Maybe I'm misunderstanding, but wouldn't depth come in handy for navigation and for reaching out to hold something?

#4  

Exactly. SLAM with a planar lidar gives you slices of what's around you. A depth camera gives you something close to the stereo vision we have, with the added benefit of seeing in darkness. I'm not sure how well it would navigate outside, though, with infrared interference from the sun.
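
To make the slices-versus-full-depth comparison concrete, here's a rough Python sketch. It is not Structure Core SDK or ARC code, and the camera intrinsics and depth values are made-up example numbers; it just shows how a single depth frame back-projects into a full 3D point cloud, while a planar lidar scan (the kind 2D SLAM consumes) is essentially one row of that cloud at the sensor's mounting height.

```python
# Rough sketch only: hypothetical pinhole intrinsics, fake depth data,
# plain NumPy -- not the Structure Core SDK and not an ARC skill.
import numpy as np

FX, FY = 500.0, 500.0   # focal lengths in pixels (made-up values)
CX, CY = 320.0, 240.0   # principal point (made-up values)

def depth_to_points(depth_m):
    """Back-project an HxW depth image (meters) into an (H*W)x3 point cloud.
    Real frames contain invalid zero-depth pixels you would mask out."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)

# Fake 480x640 depth frame: a flat wall 2 m in front of the camera.
depth = np.full((480, 640), 2.0)

cloud = depth_to_points(depth)                 # every pixel becomes a 3D point
print(cloud.shape)                             # (307200, 3) -- the whole scene

# A planar lidar scan is roughly one horizontal slice of that same scene:
# the row at the sensor's mounting height.
lidar_like_slice = cloud.reshape(480, 640, 3)[240]
print(lidar_like_slice.shape)                  # (640, 3) -- a single scan line
```

That difference is basically the answer to #3: the single slice is plenty for mapping out a floor plan, but reaching for an object, or noticing an obstacle above or below the scan plane, needs the full depth image.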