
Support For Orbbec Astra Embedded S


#1  

I use a USB camera, so I think this is the most relevant ARC skill to use as a starting point.  My cam also returns a depth stream, though, which I am making heavy use of.  I see some other skills related to Kinect and such, so maybe there is a better skill for me to look at.

I am interested in collaborating with anyone else here on ideas for using the regular 2D video that comes from a USB cam in concert with a 3D depth stream.  There is a huge number of use cases for using the 3D data alone or together with the 2D RGB data.  My own starting point was to use the depth stream as a really good front-facing obstacle avoidance sensor.  It's very much like a lidar, but in many planes all at the same time.  Obviously, this relates to the mapping, obstacle avoidance, and navigation skills already in ARC.  There are many other uses.  For example, I wrote an algorithm for detecting walls and room corners from a depth stream as another way to localize in a room.  Anyone who is interested in having some discussions around using depth cams, let me know; I'm all in.
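To make the lidar comparison concrete, here is a minimal sketch in Python/NumPy of collapsing a depth frame into a lidar-like proximity profile.  The frame shape, millimeter units, the zero-means-no-reading convention, and the row band are all assumptions about a typical structured-light sensor, not specifics of any one camera.

```python
import numpy as np

def depth_to_profile(depth_mm, band=slice(200, 280)):
    """Reduce an HxW depth frame (mm) to the closest reading per column,
    i.e. a 1D lidar-like sweep across the horizontal field of view."""
    strip = depth_mm[band, :].astype(float)   # rows around the horizon
    strip[strip == 0] = np.nan                # 0 typically means "no return"
    return np.nanmin(strip, axis=0)

def obstacle_ahead(profile_mm, stop_mm=500.0):
    """True if anything in the profile is inside the stop distance."""
    return bool(np.nanmin(profile_mm) < stop_mm)

# Fake 480x640 frame standing in for a real one from the camera SDK.
frame = np.full((480, 640), 3000, dtype=np.uint16)
frame[230:250, 300:340] = 400                   # simulated obstacle at 40 cm
print(obstacle_ahead(depth_to_profile(frame)))  # True
```

Sweeping `band` over different row ranges is what gives the "many planes all at the same time" behavior described above.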

#2  

I think a good starting point would be to post the specs of your camera... :)

#3  

There are many depth cams out there.  To be clear, I am interested in collaborating on the 3D problem in a generic way, independent of any given camera.

To your question, I use the Orbbec Astra Embedded S.  The specs are at the bottom of the following link.  I like the Orbbec, but I am not trying to advocate for it.  Given the T265 thread, a case could be made for the newer Intel cams.  The big reason I got it is that it is very small (about the size of an AA battery), and you can use several of them at the same time without interference.  They say they will work outside in sunlight too, but I haven't tried that. Specs: https://orbbec3d.com/astraembeddeds-2/

I think a skill (or skills) would need to be able to take in depth info from any depth cam, so some code would have to be written to make the handoff.  I have code for my specific sensor to work with the image and depth frames.  So far I am trying to simplify and summarize this massive amount of data (300K points) down to something easier to deal with.  To do that, there's also a lot of trig/geometry/matrix stuff involved, which I am not very good at.
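Most of that trig/matrix work reduces to the standard pinhole back-projection.  Here is a sketch; the intrinsics below are placeholders, since the real fx, fy, cx, cy values come from the camera SDK or a calibration step.

```python
import numpy as np

# Placeholder intrinsics for a VGA-ish sensor; real values come from
# the camera SDK or calibration.
FX, FY, CX, CY = 570.0, 570.0, 320.0, 240.0

def depth_to_points(depth_mm):
    """Back-project an HxW depth frame (mm) into an Nx3 cloud (meters)
    via the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(float) / 1000.0       # mm -> m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                 # drop "no reading" pixels
```

A VGA frame is where the ~300K figure comes from (640 x 480 = 307,200 points), which is why summarizing a slice of the cloud rather than the whole frame is usually the first step.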

I guess what I am talking about is additional ARC "middleware" skills on top of the cams, unless someone thinks the existing skill should do both 2D and 3D.  Kinect, OpenNI, and some notable others have middleware, but with a few exceptions, most of it is focused on gaming use cases rather than the use cases of mobile robots.

Synthiam
#4

Moved this conversation into its own thread because it is not relevant to the camera device thread where it was originally started.

There's no functionality of the camera device that would be compatible with a 3D depth camera. The extracted 2D RGB image could be fired into a camera device for viewing and the existing tracking features. The depth data would need its own skill, which could either publish its 2D plane to the NMS or implement its own tracking algorithms. A new skill would need to be created. To create a skill, there's a great tutorial here: https://synthiam.com/Support/Create-Robot-Skill/Overview
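To illustrate the "2D plane" idea, here is a sketch that slices a point cloud near the sensor's horizontal plane and reduces it to (angle, range) pairs, the scan-like form a navigation messaging system typically consumes.  The coordinate convention is an assumption, and since real ARC skills are written in C# against the skill SDK, the publish step below is a hypothetical stand-in, not the actual NMS API.

```python
import numpy as np

def points_to_scan(pts, y_min=-0.05, y_max=0.05):
    """Slice an Nx3 cloud (x right, y down, z forward, meters) into a
    thin horizontal band and return sorted (angle_rad, range_m) pairs."""
    band = pts[(pts[:, 1] > y_min) & (pts[:, 1] < y_max)]
    angles = np.arctan2(band[:, 0], band[:, 2])   # bearing off the forward axis
    ranges = np.hypot(band[:, 0], band[:, 2])     # planar distance
    order = np.argsort(angles)
    return np.column_stack([angles[order], ranges[order]])

def publish_scan(scan):
    """Hypothetical stand-in for the NMS publish call that a real C#
    skill would make through the ARC skill SDK linked above."""
    if len(scan):
        print(f"{len(scan)} returns, nearest {scan[:, 1].min():.2f} m")
```

Reducing everything to (angle, range) pairs keeps the handoff camera-agnostic, which matches the "middleware" idea in post #3.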

It's as simple as installing Visual Studio and pressing the "Create New Robot Skill" button in ARC. Once you do that, type in your code, compile, run, test, publish. :)