Use the Intel Realsense T265 tracking camera for way-point robot navigation
Manual
With its small form factor and low power consumption, the Intel RealSense Tracking Camera T265 is designed to deliver the tracking performance your robot needs. This user-friendly ARC robot skill provides an easy way to use the T265 for way-point navigation.
*Note: this skill requires ARC 2020.12.12.00 or greater
The T265 combined with this robot skill provides your robot with a SLAM (Simultaneous Localization and Mapping) solution. It allows your robot to construct a map of an unknown environment while simultaneously keeping track of its own location within that environment. Before the days of GPS, sailors navigated by the stars, using their movements and positions to find their way across oceans. VSLAM uses a combination of cameras and an Inertial Measurement Unit (IMU) to navigate in a similar way, using visual features in the environment to track its way around unknown spaces with accuracy. All of these complicated features are taken care of for you in this ARC robot skill.
Robot Skill Window
The skill has a very minimal interface because it pushes data into the NMS and is generally used by other robot skills (such as The Navigator).
1) START/STOP the Intel T265 connection. Due to a bug in Intel's SDK, the sensor cannot be stopped and restarted in the same session. The only way to restart the T265 connection is to close and reload ARC. We understand this is unconventional and would normally be fixed. However, Intel has stated they "stopped development and any additional bug fixes on the T265".
2) The confidence of the tracking status between 0 (low) and 3 (highest). In a brightly lit room with many points of interest (not just white walls), the tracking status will be high. Tracking will be low if the room does not have enough light and/or detail for the sensor to track.
3) Log text display for errors and statuses.
Config Menu
1) Radius Offset (mm) is the distance in mm of the T265 from the center of the robot. A negative number is toward the front of the robot, and a positive number is toward the rear. The sensor must be facing 0 degrees toward the front of the robot. The sensor must not be offset to the left or right of the robot.
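To illustrate the geometry this setting corrects for, here is a minimal sketch (a hypothetical helper for illustration, not part of the ARC skill) that shifts the sensor's reported position back to the robot's center using the sign convention above:

```python
import math

def sensor_to_robot_center(x_mm, y_mm, heading_rad, offset_mm):
    """Translate a T265-reported position to the robot's center.

    offset_mm follows this skill's convention: negative means the sensor
    sits toward the front of the robot, positive toward the rear.
    heading_rad is 0 when the robot faces along the +x axis.
    Hypothetical helper, not part of the ARC skill itself.
    """
    # The sensor sits (-offset_mm) ahead of the center along the heading,
    # so shift the reported position back by that amount.
    cx = x_mm + offset_mm * math.cos(heading_rad)
    cy = y_mm + offset_mm * math.sin(heading_rad)
    return cx, cy
```

For example, a sensor mounted 100 mm in front of the center (offset of -100) that reports a position of (100, 0) while facing along +x resolves to a robot center of (0, 0).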
Video Demonstration
Here's a video of the Intel RealSense T265 feeding The Navigator skill for way-point navigation
ARC Navigation Messaging System
This skill is part of the ARC Navigation Messaging System (NMS). We encourage you to read more about the messaging system and the available skills HERE. This skill is in Group #2 of Level #3 in the diagram below, and it contributes telemetry positioning to the cartesian positioning channel of the NMS. Combine this skill with Level #3 Group #1 skills for obstacle avoidance; for Level #1, The Navigator works well.

Environments
The T265 will work both indoors and outdoors. However, bright direct light (sunlight) and darkness will affect performance. Much like our eyes, the camera is susceptible to glare and to a lack of detail in the dark. Because the camera's visual data is fused with the IMU, the camera must have reliable visible light. If the camera cannot detect the environment, the algorithm is biased toward the IMU and will experience drift, which greatly degrades the sensor's accuracy.
Screenshot
Here is a screenshot of this skill combined with The Navigator in ARC while navigating through a room between two way points.

Starting Location
The T265 does not include a GPS or compass, nor any ability to recognize where it is when initialized. This means your robot will have to start from a known location and direction to reuse saved maps. Mark the spot on the ground where the robot starts with masking tape.
How To Use This
1) Connect your Intel RealSense T265 camera to the computer's USB port
2) Load ARC (version must be >= 2020.12.12.00)
3) Add this skill to your project
4) Now we'll need a Navigation skill. Add The Navigator to your project
5) Press START on the Intel RealSense skill and data will begin mapping your robot's position
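Outside of ARC, you can sanity-check the camera with Intel's pyrealsense2 Python bindings. This is a minimal sketch that assumes a T265 is plugged in; the helper names are our own, not part of the ARC skill:

```python
# Per Intel's docs, tracker_confidence ranges from 0 (failed) to 3 (high)
CONFIDENCE_LABELS = {0: "Failed", 1: "Low", 2: "Medium", 3: "High"}

def confidence_label(level):
    """Map the T265 tracker_confidence value (0-3) to a readable label."""
    return CONFIDENCE_LABELS.get(level, "Unknown")

def read_poses(n=50):
    """Print n pose samples from a connected T265 (requires hardware)."""
    import pyrealsense2 as rs  # Intel RealSense SDK Python bindings

    pipe = rs.pipeline()
    cfg = rs.config()
    cfg.enable_stream(rs.stream.pose)  # the T265's 6-DoF pose stream
    pipe.start(cfg)
    try:
        for _ in range(n):
            frames = pipe.wait_for_frames()
            pose = frames.get_pose_frame()
            if pose:
                d = pose.get_pose_data()
                # translation is in meters, relative to the start position
                print(d.translation, confidence_label(d.tracker_confidence))
    finally:
        pipe.stop()

if __name__ == "__main__":
    read_poses()
```

If the printed confidence stays at "Low" or "Failed", improve the lighting or add visual detail to the room before expecting good navigation in ARC.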
How Does It Work?
Well, magic! Actually, the camera is quite interesting: it breaks the world down into a point cloud of features. It remembers the visual features so it can re-align itself on its internal map. The processing runs on what Intel calls a VPU (Vision Processing Unit). Here's a video of what the camera sees.
Examples
We'll be creating tutorials and examples in the near future
@DJ have you seen this on the T265 Intel page?
What is wheel odometry and how does it help T265 navigate?
Wheel odometry is the use of sensors to measure how much a wheel turns; it can be used to estimate changes in a wheeled robot's position. The T265 has wheel odometry support built in, allowing it to use data from these sensors to refine its position estimate. Providing robotic wheel odometer or velocimeter data over USB to TM2 will make the tracking much more robust on wheeled robots, which otherwise can experience many tracking failures. We consider odometer input to be a requirement for robust tracking on wheeled robots.
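The fusion itself happens inside the sensor, but the wheel odometry Intel describes amounts to a per-step pose estimate from encoder travel. A standard differential-drive update (a generic sketch, not Intel's implementation) looks like:

```python
import math

def odometry_update(x, y, theta, d_left, d_right, wheel_base):
    """Advance a differential-drive pose estimate from wheel encoder travel.

    d_left/d_right are the distances each wheel rolled since the last
    update (same units as wheel_base); theta is the heading in radians.
    Generic textbook formulation, not Intel's code.
    """
    d_center = (d_left + d_right) / 2.0        # forward travel of the body
    d_theta = (d_right - d_left) / wheel_base  # change in heading
    # Integrate along the average heading over the step (midpoint rule)
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta
```

Encoders drift as wheels slip, which is exactly why fusing this estimate with the T265's visual tracking makes the combined pose more robust.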
Once you get the calibration right on a specific surface or in a specific environment, you can remove the sensor.
I envision you don’t want to put a $250 sensor on every robot doing minor tasks like vacuuming but you could calibrate it with the T265 and then send it on its way.
On the NMS page are some additional reading links that discuss the challenges of navigation and pose. I highly encourage you to read them prior to this discussion.
Intel is very clear in the paragraph you copied. It clearly says the T265 is compatible with wheeled robots. It does NOT state that it works with wheel encoders. It can correct wheel odometry, but it cannot accept input from a wheel encoder. It can be used to correct a wheel-encoded robot's pose because wheel-encoder robots have terrible accuracy - which is why the T265 exists.
I really recommend reading the additional links on the NMS page to catch up on the difference between a wheeled robot and a wheel encoder, as well as the challenges of each.
*edit: the API seems to have an older interface that allows wheel odometry input. The early release of the T265 does mention you can push wheel odometry data into the API and it gets fused with the T265. So that must be an early attempt at converting the old-school wheeled ROS crowd to VSLAM.
Oh and Happy New Year Everyone!!
I would love it if we had D435 support. Watch this video.
The long-term goal would be: map the room with the T265 and D435. Use object recognition to identify and find the object. Use T265 data to go to the object, and use data from the D435 to calculate the exact location and orientation of the object. Then use inverse kinematics to calculate how to pick it up and run a bunch of simulations, and finally use the robot arm and gripper to pick up the object. Mounting the T265 on the robot arm would verify our calculations as we pick up the object, and it could also be used to train the robot with ML to improve the IK calculations.
So: GPU/TPU support (Nvidia Jetson?) for accurate object recognition and IK calculations, D435i support for a 3D point cloud, and the T265 for the location, orientation, and movement of the robot arm in 3D space.
I had the cam on a pole, but it creates some vibration, and having an offset parameter lets you place the camera where you want.
How are you using your T265?
The Jetson is ARM based, so not ARC compatible.
Right now I'm just playing with the T265 (not enough hours in the day), but my goal is a robot that can do some simple tasks around the house: pick up shoes and put them away, vacuum without smashing up the wife's furniture or getting stuck under the coffee table, and most important, GET ME A BEER.
What are your plans @proteusy?
Since we are only getting telemetry data from the T265, I am wondering if it would be better to just pull the data off the T265 with a Pi or stick PC and send it to The Navigator on a remote desktop. I mounted a NUC with two 3-cell LiPos in series for ~22 V to run everything off; it works fine, but battery life is short.
I am currently working on obstacle avoidance when using The Navigator's way points.
The chrome skull is fairly heavy, about 8 pounds, so I bet without it I'd get at least 1 hour 45 minutes.
Where is the best place to position the RealSense? I did play with the new offset a bit.
I have a to-do to add speed adjustment settings for moving and turning. It'll come in time.