
smiller29
The Birth Of The XR-1 DIY Robot
Introducing the Updated XR1 and the New XJ-5: A New Class of Hobby Robots
The XJ-5 robot platform, available at Robots Builder, stands out for its user-friendly design, emphasizing accessibility and ease of assembly. Built to make robotics more approachable, the XJ-5 is meticulously engineered for easy 3D printing and assembly with standard parts like servos and bearings.
Our goal is to simplify the building process so that enthusiasts and robot hobbyists of all skill levels can engage in constructing their own robots. We've invested time in perfecting the design, ensuring that users can quickly move on to the exciting tasks of programming and teaching their robots using Synthiam ARC. The XJ-5 is highly customizable, featuring a variety of hands and heads to allow personalization of each creation. To encourage a collaborative and open community, we're proud to announce that the XJ-5 is entirely open source, inviting users to contribute, modify, and share their innovations globally. Get ready to embark on a journey of creativity and exploration with the XJ-5!
For more information, check out Robots Builder.
As a creator of the XJ-5 and our other XR series, I will be leveraging the Synthiam ARC platform for robot control, and I hope others here will join me in developing an ARC project to support the XJ-5 robot. As of January 9th, 2024, we have started posting files for 3D printing. Meanwhile, we are developing documentation and the above website, so please check back regularly.
My goal is to have a walking, talking, listening, and seeing robot the size of a small child using ARC and its various skills. As I progress, I will be sharing more about my efforts here.
Now, let's dive into the project, focusing on the XJ-5, the new and improved successor to the XR1, which I have been designing and building over the past year.
As a young enthusiast, I loved the movie Short Circuit and fell in love with the Johnny 5 robot, which inspired me to create a similar machine, though it was never intended to replicate the original Johnny 5. About a year ago, DJ suggested preferring a robot with wheels or tracks over a bipedal one, since making robots walk is notoriously complex, a challenge I know well from our XR2 project. But hey, that's another story.
Now, in May 2025, I've almost completed the assembly and will start implementing ARC and the AI brain I've been working on to support ARC. More updates will follow.
I have started building the torso from the hips up, as shown in the pictures below. The shoulders are not fully installed because I'm waiting for the correct screws to arrive. Please excuse the messy workbench.
To support the robot, I have designed a set of boards with ARC firmware that will connect to the onboard PC, which will be situated in the middle of the chest.
The above board will be mounted on the backplate between the two shoulder sockets and connected to a USB hub that feeds the PC. Two more USB cables, along with servo power, run down to the boards in each hand. The USB camera and head board will occupy the PC's other two USB ports. ARC will run headless on the Windows 11 Pro PC, with RDP for remote connectivity. Additionally, an MPU is connected to the main board, which I plan to leverage with ARC.
I have added the shoulders and arms down to the wrists, and they appear to be strong and capable.
I decided to create a Johnny 5-style head for my build, featuring two cameras; the completed head is shown here. I have added six more servos for the eye flaps, bringing the total count to 50. The head also includes a directional mic array and an ultrasonic radar for enhanced audio and sensing capabilities. For sound, two speakers are mounted in the head with an audio amplifier.
I have also decided to add an option to the XR-1 to support a 5-inch touch display, as shown below.
The display will provide more options for operator interaction using ARC remote controls.
I have created both two and three-finger claw grippers and human-like hands, providing versatility in interaction for the XR-1.
To achieve reliable SLAM (Simultaneous Localization and Mapping) pose estimation, it's generally recommended that the LIDAR sensor has a clear line of sight for as close to 360 degrees as possible. This ensures that the sensor can detect obstacles and map the environment accurately from all directions.
LIDAR Coverage
Current Setup
Adding a Second LIDAR
If you add a second LIDAR sensor to cover the remaining blind spots, ARC does not automatically merge the data streams from multiple LIDAR sensors. However, you can achieve this by using the Navigation Messaging System (NMS) in ARC.
Steps to Merge Data Streams
Example Configuration
Primary LIDAR (Front):
Secondary LIDAR (Back):
Navigation Skill:
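For illustration, here is one way the two units might be laid out; the ports, mounting angles, coverage values, and the navigation skill named below are placeholders to adapt to your own hardware and ARC project:

```python
# Hypothetical two-LIDAR layout for the NMS (illustrative only; every
# value below is an assumption, not taken from an actual ARC project).
LIDARS = {
    "front": {
        "port": "COM3",          # serial/USB port of the front unit
        "mount_angle_deg": 0,    # facing forward
        "coverage_deg": 270,     # usable field of view
    },
    "back": {
        "port": "COM4",          # serial/USB port of the rear unit
        "mount_angle_deg": 180,  # facing backward
        "coverage_deg": 270,     # covers the front unit's blind spot
    },
}

NAVIGATION_SKILL = "The Better Navigator"  # NMS consumer (assumed choice)
```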
Custom Script Example
Here’s a basic example of how you might set up a script to merge data from two LIDAR sensors:
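This sketch is written in plain Python to show the merge logic itself; the 360-bin array format and the final Navigation.updateScan() hand-off mentioned in the comments are assumptions to verify against the NMS scripting documentation.

```python
def merge_scans(front_scan, back_scan, back_offset_deg=180):
    """Merge two 360-bin LIDAR scans into one.

    Each scan is a list of 360 distances in cm, indexed by degree;
    0 means 'no reading' (blind spot). The rear unit is mounted
    rotated by back_offset_deg, so its readings are shifted into the
    robot's frame before merging. Where both units report a reading,
    the shorter (safer) distance wins.
    """
    merged = [0.0] * 360
    for deg in range(360):
        d_front = front_scan[deg]
        # Rotate the rear scan into the robot's frame of reference.
        d_back = back_scan[(deg - back_offset_deg) % 360]
        candidates = [d for d in (d_front, d_back) if d > 0]
        merged[deg] = min(candidates) if candidates else 0.0
    return merged

# Example: the front unit is blind behind the robot; the rear unit
# fills the gap.
front = [100.0] * 360
for deg in range(140, 220):      # ~80-degree blind spot at the back
    front[deg] = 0.0
back = [95.0] * 360

merged = merge_scans(front, back)
assert all(d > 0 for d in merged)  # no blind spots remain

# In ARC, the merged arrays would then be pushed into the NMS from a
# script, e.g. via Navigation.updateScan(degrees, distances) -- verify
# the exact function name and signature in the NMS docs.
```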
Conclusion
For more detailed guidance, refer to the Synthiam documentation on the Navigation Messaging System (NMS) and the specific LIDAR skills you are using.
Feel free to ask if you need further assistance!
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
Your latest diagram is much clearer regarding placement. Images like that would speed up conversations like this.
270 degrees is 75% of a full circle, which leaves a 25% gap.
Hopefully the SLAM will compensate for the 25% gap. If your robot moves very slowly, I suspect it'll be OK. One way to find out is by testing, and you don't need to 3D print a robot to do it. Just put a lidar on a small two-wheeled robot and block off 25% of its view to see how it works.
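You can even run that experiment in software before touching hardware by masking a sector of an existing full scan. A rough sketch (the 360-bin scan format is just an assumption for illustration):

```python
def block_sector(scan, start_deg, width_deg):
    """Simulate an occluded LIDAR sector by zeroing out readings.

    scan: list of 360 distances (one per degree); 0 = no reading.
    Returns a copy with width_deg degrees blanked starting at start_deg.
    """
    masked = list(scan)
    for i in range(width_deg):
        masked[(start_deg + i) % 360] = 0.0
    return masked

# Blank 25% (90 degrees) of a synthetic scan, centered behind the robot.
full_scan = [120.0] * 360
test_scan = block_sector(full_scan, start_deg=135, width_deg=90)
print(sum(1 for d in test_scan if d == 0) / 360)  # 0.25 -> 25% blocked
```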
I would probably mount a D435 on the front for distance tracking and a T265 for navigation location tracking on this type of robot.
Ok @Athena, I was able to get 280 degrees of scan area, so thank you and DJ for your help and guidance. I am looking forward to setting this up on the XR-1.
That's great to hear! Achieving 280 degrees of scan area should significantly improve the accuracy of your SLAM and overall navigation capabilities. Here are some additional tips and steps to help you set up your XR-1 robot with the improved LIDAR coverage:
Setting Up Your XR-1 with Improved LIDAR Coverage
Mounting the LIDAR:
Configuring the LIDAR in ARC:
Integrating with Navigation Skills:
Adding Additional Sensors:
Testing and Calibration:
Fine-Tuning:
Field Testing:
Example Configuration for LIDAR and RealSense Sensors
LIDAR Configuration:
RealSense D435 Configuration:
RealSense T265 Configuration:
Navigation Skill Configuration:
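As a quick illustration of how the pieces fit together, here is a hypothetical summary of which NMS role each sensor would play; the skill names and role descriptions below are paraphrased assumptions, so verify them against your own ARC project and the NMS documentation:

```python
# Hypothetical sensor-to-NMS mapping (assumed roles, for illustration).
NMS_ROLES = {
    "LIDAR (front + back, merged)": "Level 3 - scan/obstacle data",
    "Intel RealSense D435":         "Level 3 - depth-derived obstacle data",
    "Intel RealSense T265":         "Level 2 - pose/location tracking",
    "The Better Navigator":         "consumer - SLAM and path planning",
}

for sensor, role in NMS_ROLES.items():
    print(f"{sensor:32s} -> {role}")
```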
Conclusion
With 280 degrees of LIDAR coverage and the addition of RealSense sensors, your XR-1 robot should have a robust navigation system. Continue testing and fine-tuning to achieve the best performance. If you encounter any issues or need further assistance, feel free to reach out.
Good luck with your XR-1 project, and happy building!
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
I find the T265 goes out of sync really badly and can't be fixed. Because its map is internal, it starts to drift, and there's no way to correct that without resetting it.
The D435 or other depth cameras aren't necessary if the lidar is being used. The depth does add more detail, but in many cases that extra detail hurts the SLAM pose estimation. I believe it's because too many data points increase the false positives to the point where the error is too high.
Depth cameras (in my opinion, today) are most useful for interacting with a robot arm: identify an object, determine its Cartesian coordinates, and use kinematics to pick it up or interact with it.
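To make that concrete, here's a minimal sketch of the standard pinhole back-projection used to turn a detected object's depth pixel into Cartesian coordinates for a kinematics solver; the intrinsics below are made-up example values, not values from any particular camera:

```python
def depth_pixel_to_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a depth-image pixel to camera-frame Cartesian coords.

    (u, v): pixel column/row of the detected object
    depth_m: depth reading at that pixel, in meters
    fx, fy, cx, cy: camera intrinsics (focal lengths, principal point)
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    z = depth_m
    return x, y, z

# Example with made-up intrinsics, roughly like a 640x480 depth stream.
x, y, z = depth_pixel_to_xyz(u=400, v=260, depth_m=0.75,
                             fx=615.0, fy=615.0, cx=320.0, cy=240.0)
print(f"object at ({x:.3f}, {y:.3f}, {z:.3f}) m in the camera frame")
```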
I'd stick with lidar for navigation.
Another option is putting a ToF (time-of-flight) sensor on the rear of the robot, pointing exactly 180 degrees from the lidar. That might be useful for the SLAM; I find that having distances from behind aids pose estimation. But I'd only consider it if you have too much trouble with the lidar alone.
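If you go that route, the single rear reading can be folded into the same 360-bin scan the lidar produces; a rough sketch, assuming the same array format as the merge example above:

```python
def add_rear_tof(scan, tof_distance, rear_deg=180, spread_deg=2):
    """Inject a single rear ToF reading into a 360-bin LIDAR scan.

    The ToF sensor points opposite the LIDAR (rear_deg), so its one
    distance is written into a few bins around that bearing.
    """
    patched = list(scan)
    for offset in range(-spread_deg, spread_deg + 1):
        patched[(rear_deg + offset) % 360] = tof_distance
    return patched

# Example: fill part of a rear blind spot with a single 80 cm reading.
scan = [150.0] * 360
for deg in range(170, 191):
    scan[deg] = 0.0              # LIDAR blind directly behind
scan = add_rear_tof(scan, tof_distance=80.0)
```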
DJ, do you think I would be better served by adding the second lidar to the rear of the unit? I ask because this would be the time to add it to the design. The front has 280 degrees of scan and the rear has 330.
Yeah - both Athena and I have said that above.
You could add 100 of them if you wanted - it'll just require a bit more processing since it's a lot more data... but not that much more, probably not noticeable.