Connect the D435i Intel Realsense depth cameras to the ARC navigation messaging system (NMS) for distance detection and mapping.
How to add the Intel Realsense D435i robot skill
- Load the most recent release of ARC (Get ARC).
- Press the Project tab from the top menu bar in ARC.
- Press Add Robot Skill from the button ribbon bar in ARC.
- Choose the Navigation category tab.
- Press the Intel Realsense D435i icon to add the robot skill to your project.
Don't have a robot yet?
Follow the Getting Started Guide to build a robot and use the Intel Realsense D435i robot skill.
How to use the Intel Realsense D435i robot skill
Connect the D435i Intel Realsense camera to the ARC Navigation Messaging System (NMS) for distance detection and mapping. This should also work with other Intel RealSense depth camera models because the FOV and other parameters are read from the device.
Main Screen

1) Select the device by the serial number. This allows multiple Intel Realsense devices to be added to a project.
2) START/STOP button for connecting to or disconnecting from the RealSense device.
3) Log window for status and errors.
Image Align
The Image Align tab allows selecting the area of the image that you wish to detect for the point cloud. For example, setting the detection area too low will cause the floor to be detected at a close distance. Ideally, set the bar high enough to avoid detecting the floor.

1) Adjust the top of the detected image range.
2) Adjust the bottom of the detected image range.
3) The detected image range is highlighted in purple.
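The band selected on the Image Align tab can be thought of as a range of rows in the depth image. The sketch below is a hypothetical illustration of that mapping, assuming the two sliders are fractions of the image height (0.0 = top of the frame, 1.0 = bottom); ARC's actual internal units may differ.

```python
# Hypothetical sketch of how the Image Align band maps to depth-image rows.
# The slider positions are assumed to be fractions of the image height;
# ARC's real implementation may use different units.

def rows_in_detection_band(image_height, top_fraction, bottom_fraction):
    """Return the (first_row, last_row) of depth rows inside the band."""
    if not 0.0 <= top_fraction < bottom_fraction <= 1.0:
        raise ValueError("top must be above bottom, both within [0, 1]")
    first_row = int(image_height * top_fraction)
    last_row = int(image_height * bottom_fraction) - 1
    return first_row, last_row

# Example: a 720-row image with the band set to the middle third,
# high enough that the floor near the robot is excluded.
print(rows_in_detection_band(720, 0.33, 0.66))
```

Raising `top_fraction` narrows the band from the top; raising `bottom_fraction` toward 1.0 brings the floor back into view at close range.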
Configuration

1) Offset of the sensor from the center of the robot. A positive number means the sensor is toward the rear; a negative number means the sensor is toward the front. The measurement is taken from the center of the robot. The sensor must be aligned with the robot's centerline, not offset to the left or right, and must face the 0-degree heading.
2) Sensor resolution during initialization. If the values are changed from the default, obtain them from the Intel RealSense viewer. Changing these values may be necessary if using a different sensor than the D435i. Set both width & height to 0 for auto-size, which isn't the most efficient for performance.
3) The number of depth pixels to skip when processing the depth data. This is useful for lower CPU PCs or when high-resolution depth is not required. Since there is a limited field of view (FOV) on the sensor, it may not be necessary to process every depth pixel. If the sensor resolution is 1280x720, the CPU would be processing 921,600 depth points for every scan of the specified framerate. At 1280x720 @ 15 FPS, that's 13,824,000 depth points per second. If the Skip Scan Points for X & Y are set to 10 for that resolution/fps, the CPU would only need to process 9,216 depth points per scan (or 138,240 per second).
The editor will recommend a value for the best balance between performance and detection based on the sensor FOV.
4) The camera video from the RealSense can be pushed into a selected Camera Device. In the selected camera device, choose CUSTOM as the device type. This will accept the video feed from this robot skill. Ensure START is pressed on the selected camera device as well.
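The depth-point arithmetic from item 3 above can be reproduced with a few lines of Python. This sketch assumes "skip" means every skip-th pixel is sampled in each axis, which matches the figures given in the text.

```python
# Reproduces the Skip Scan Points arithmetic from the configuration notes:
# how many depth points the CPU must process per scan and per second,
# with and without skipping.

def depth_points(width, height, fps, skip_x=1, skip_y=1):
    """Points per scan and per second when every skip-th pixel is sampled."""
    per_scan = (width // skip_x) * (height // skip_y)
    per_second = per_scan * fps
    return per_scan, per_second

# 1280x720 @ 15 FPS, processing every pixel:
print(depth_points(1280, 720, 15))            # (921600, 13824000)
# Same stream with Skip Scan Points X and Y set to 10:
print(depth_points(1280, 720, 15, 10, 10))    # (9216, 138240)
```

Skipping 10 points in each axis reduces the workload by a factor of 100, which is why the setting matters so much on low-power PCs.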
Troubleshooting
If you experience any issues, first ensure a USB 3 port is used on the computer. Second, load the Intel RealSense Viewer. There may be a prompt to "Enable USB Meta Data," which is a system-wide change; ensure you answer ENABLE to that option. Lastly, update the firmware if there is a prompt to do so.
If the sensor works in the Intel RealSense Viewer, it will work with this robot skill. Depending on the version of the sensor, you may need to configure the capture width/height/framerate. This can be done in the robot skill configuration screen. The RGB and depth camera resolution values must be the same. You can use the Intel RealSense Viewer to see what values work for your camera. If the RGB camera is 640x480 @ 15 FPS, the depth camera must support the same resolution and framerate. This is because this robot skill parses the depth and RGB data together.
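The constraint above — RGB and depth streams must share the same resolution and framerate, while their pixel formats differ by design (the forum below mentions z16 for depth and rgb24 for color) — can be sketched as a simple check. The profile values here are illustrative, taken from the examples in the text; confirm the real modes your camera supports in the RealSense Viewer.

```python
# Minimal sketch of the stream-compatibility rule described in the
# troubleshooting notes. Profiles are dictionaries; only width, height,
# and fps must match between RGB and depth (formats are expected to differ).

def streams_compatible(rgb, depth):
    """True if the RGB and depth profiles share resolution and framerate."""
    return (rgb["width"], rgb["height"], rgb["fps"]) == \
           (depth["width"], depth["height"], depth["fps"])

rgb_profile = {"width": 640, "height": 480, "fps": 15, "format": "rgb24"}
depth_profile = {"width": 640, "height": 480, "fps": 15, "format": "z16"}
print(streams_compatible(rgb_profile, depth_profile))  # True

# A mismatched framerate would be rejected:
bad_depth = {"width": 640, "height": 480, "fps": 30, "format": "z16"}
print(streams_compatible(rgb_profile, bad_depth))  # False
```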
Variables
This skill will create five global variables that scripts can reference.

$D435IsRunning - whether the robot skill is currently connected to the sensor.
$D435FurthestDistanceCM - the furthest distance detected, in centimeters.
$D435FurthestDistanceDegree - the degree (heading) at which the furthest distance was detected.
$D435NearestDistanceCM - the nearest distance detected, in centimeters.
$D435NearestDistanceDegree - the degree (heading) at which the nearest distance was detected.
ARC Navigation Messaging System
This skill is part of the ARC navigation messaging system. We encourage you to read more about the messaging system to understand the available skills HERE. This skill is in Level #3, Group #1 in the diagram below. It contributes telemetry positioning to the cartesian positioning channel of the NMS. Combine this skill with Level #3, Group #2 skills for obstacle avoidance. For Level #1, The Navigator robot skill works well.

There is also a small performance improvement, which will be noticeable on SBCs.
Also, this only displays depth sensors in the drop-down.
Here you can see the wall, corner of the wall, and the edge of the monitor.
And here's a low scan of just the table in front - where it's flat and no longer curved.
You should experience a 10x improvement. I went from 27% CPU to 2% CPU.
I am sure we'll get this figured out.
Should I run the calibration app that came with the driver download too? I didn't bother since you didn't mention it in the documentation. Also, there are several profiles in the viewer app for what you are expecting to see (i.e., hand signals, far distances, close range, etc.). Is there a preferred one, and does the device remember it from when it is used in the app to when the skill accesses it?
Alan
or 1280 x 720, which works but only when set to 6 frames per second, and it actually only updates the distance about every 20 seconds, so the robot drives into walls.
All other resolutions give some variation of:
Here are screen shots of the supported resolutions:
I also tried removing the Intel driver and letting Windows find its own driver in case it was a driver issue, with the same results. If this is not what you see in the viewer, I will return the device and get a 435i. Otherwise, I am going to go back to Lidar for a while so I can work on navigation scripts that work until we have a 435 skill that actually works. (that sounded more pissy than I meant. My mom is in the hospital with Pneumonia, Thank G-D it is not Covid, but I am stressed).
As always, if you want to provide a debug version with high or highest logging level set to figure out what is causing the issue, I am glad to install and test, although depending on my Mom's condition and if they send her home, I may be offline for a week or so... They won't let me visit the hospital despite the fact I am fully vaccinated, so at least for the next few days I should be available to test.
Did you set width and height to 0? And the same error?
I know you keep asking about a debug version - but you have it already :). The error message that you see is the most verbose there is.
This is why I dislike working with any Intel products... discontinued products with discontinued support and an incomplete SDK.
I noticed your screenshots do not have the RGB sensor or depth sensor active. Can your future screenshots include a real-world test scenario? Sometimes things can be missed if liberties are taken.
I’d like to know these things when you get an ideal resolution working with the RealSense Viewer. Make sure the RGB and depth resolutions & framerate are identical and both sensors are active. Once that’s done, tell me these things...
- the common resolution
- the common frame rate
- the depth format (ie z16)
- the rgb format (ie rgb24)
There are only 2 common resolutions available. 640x480 and 1280x720. 1280x720 is the only one that kind of works in ARC, but not in a usable way.
*Note: Read about this setting in the manual at the top of this page.
Alan
- default resolution values are 640x480x6fps to match the latest firmware settings
- fix for the offset value that wasn't being calculated correctly
One idea to take advantage of the depth information (outside the NMS) is to use it, for example, in the arm to have better precision when manipulating and recognizing objects. The data-point volume could be manipulated in a similar way as in the current skill with the image-gap selection. Thanks