
The Birth Of The XR-1 DIY Robot

Introducing the Updated XR-1 and the New XJ-5: A New Class of Hobby Robots

User-inserted image

The XJ-5 robot platform, available at Robots Builder, stands out for its user-friendly design and ease of assembly. Built to make robotics more approachable, it is engineered for straightforward 3D printing and assembly with standard parts such as servos and bearings.

Our goal is to simplify the building process so that enthusiasts and robot hobbyists of all skill levels can engage in constructing their own robots. We've invested time in perfecting the design, ensuring that users can quickly move on to the exciting tasks of programming and teaching their robots using Synthiam ARC. The XJ-5 is highly customizable, featuring a variety of hands and heads to allow personalization of each creation. To encourage a collaborative and open community, we're proud to announce that the XJ-5 is entirely open source, inviting users to contribute, modify, and share their innovations globally. Get ready to embark on a journey of creativity and exploration with the XJ-5!

For more information, check out Robots Builder.

As the creator of the XJ-5 and the rest of our XR series, I will be leveraging the Synthiam ARC platform for robot control, and I hope others here will join me in developing an ARC project to support the XJ-5. As of January 9th, 2024, we have started posting files for 3D printing. Meanwhile, we are developing documentation and the website mentioned above, so please check back regularly.

My goal is to have a walking, talking, listening, and seeing robot the size of a small child using ARC and its various skills. As I progress, I will be sharing more about my efforts here.

User-inserted image

Now, let's dive into the project, focusing on the XJ-5, the new and improved XR-1, which I have been designing and building over the past year.

User-inserted image

As a young enthusiast, I loved the movie Short Circuit and fell in love with the Johnny 5 robot, which inspired me to create something similar, though this build is not meant to replicate the original Johnny 5. About a year ago, DJ suggested choosing wheels or tracks over a bipedal design because of the complexity of making a robot walk, a challenge I know well from our XR2 project. But hey, that's another story.

Now, in May 2025, I've almost completed the assembly and will start implementing ARC and the AI brain I've been working on to support ARC. More updates will follow.

I have started building the torso from the hips up, as shown in the pictures below. The shoulders are not fully installed because I'm waiting for the correct screws to arrive. Please excuse the messy workbench.

User-inserted image

To support the robot, I have designed a set of boards with ARC firmware that will connect to the onboard PC, which will be situated in the middle of the chest.

User-inserted image

The above board will be mounted on the backplate between the two shoulder sockets and connected to a USB hub that feeds the onboard PC. Two more USB cables, along with servo power, run down to the boards in each hand. The USB camera and headboard will be connected to the PC's other two USB ports. ARC will run headless on the Windows 11 Pro PC, with RDP used for remote connectivity. Additionally, there is an MPU connected to the main board that I plan to leverage with ARC.

I have added the shoulders and arms down to the wrists, and they appear to be strong and capable.

I decided to create a Johnny 5-style head for my build, featuring two cameras. Here is the completed head. I have added six more servos for the eye flaps, bringing the total count to 50 servos. The head also includes a directional mic array and an ultrasonic sensor to enhance hearing and obstacle detection. For sound, two speakers with an audio amplifier are mounted in the head.

User-inserted image

I have also decided to add an option to the XR-1 to support a 5-inch touch display, as shown below.

User-inserted image

The display will provide more options for operator interaction using ARC remote controls.

I have created two- and three-finger claw grippers as well as human-like hands, giving the XR-1 versatile ways to interact.

User-inserted image

User-inserted image

User-inserted image



PRO
Canada
#41  

When placing a 360-degree LiDAR sensor like the RPLIDAR A1 on your XR-1 robot, there are several factors to consider to ensure optimal performance:

Placement Tips:

  1. Height from the Floor:

    • General Recommendation: Typically, placing the LiDAR sensor at a height of around 20-30 cm (8-12 inches) from the floor is a good starting point. This height allows the sensor to detect obstacles at various heights, including low-lying objects and furniture.
    • Environment-Specific: If your environment has many low obstacles (e.g., toys, cables), you might want to place it slightly lower. Conversely, if you need to detect higher obstacles (e.g., tables, countertops), you might place it higher.
  2. Field of View:

    • Unobstructed View: Ensure that the LiDAR has an unobstructed 360-degree view as much as possible. While some minor obstructions behind the sensor are acceptable, the front and sides should be clear to maximize the sensor's effectiveness.
    • Mounting Position: Mount the LiDAR on top of the robot or on a raised platform to minimize obstructions. If you need to mount it lower, ensure that the surrounding structure does not block the sensor's line of sight.
  3. Protection:

    • Physical Protection: Consider adding a protective cover or housing around the LiDAR to shield it from dust, debris, and accidental impacts. Ensure that the cover does not obstruct the sensor's field of view.
    • Environmental Factors: If the robot operates in a dusty or dirty environment, regular cleaning and maintenance of the LiDAR sensor will be necessary to maintain accuracy.
  4. Integration with Other Sensors:

    • Complementary Sensors: If you have other sensors (e.g., ultrasonic, infrared), ensure that their placement does not interfere with the LiDAR's operation. Position them in a way that they complement each other, providing a comprehensive view of the surroundings.
  5. Testing and Adjustment:

    • Initial Testing: After mounting the LiDAR, perform initial tests to check for blind spots and ensure that the sensor can accurately detect obstacles in the intended area.
    • Adjustments: Based on the test results, make any necessary adjustments to the height, angle, or position of the LiDAR to optimize its performance.

Example Placement:

  • Top of the Robot: Mounting the LiDAR on the top of the robot's head or a raised platform on the torso can provide a clear 360-degree view.
  • Front and Center: If top mounting is not feasible, placing the LiDAR at the front center of the robot, slightly above the main body, can also work well, provided the sides and front are clear.

Conclusion:

The key is to ensure that the LiDAR has a clear and unobstructed view of the environment. Regular testing and adjustments will help you find the optimal placement for your specific use case. If you have any specific constraints or additional questions, feel free to ask!


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
Synthiam
#42  

@athena thanks. Can you also explain how a lidar like the RPLIDAR A1 works?

Also explain how SLAM works when used with a lidar. The best SLAM for Synthiam is The Better Navigator.

The system for merging sensors and navigation in Synthiam ARC is called the NMS (Navigation Messaging System).

PRO
Canada
#43  

How LiDAR Works:

LiDAR (Light Detection and Ranging) is a remote sensing method that uses light in the form of a pulsed laser to measure distances. Here's a breakdown of how a LiDAR sensor like the RPLIDAR A1 works:

  1. Emission of Laser Pulses:

    • The LiDAR sensor emits rapid laser pulses in a 360-degree sweep around the sensor.
    • Each pulse travels outward until it hits an object and reflects back to the sensor.
  2. Detection of Reflected Pulses:

    • The sensor detects the reflected laser pulses.
    • The time it takes for the pulse to return to the sensor is measured.
  3. Distance Calculation:

    • The distance to the object is calculated using the time-of-flight (ToF) principle: Distance = (Speed of Light * Time of Flight) / 2 (a short worked example follows this list).
    • This calculation is done for each pulse, resulting in a set of distance measurements.
  4. Data Collection:

    • The sensor collects distance data for multiple points around it, creating a 360-degree map of the surroundings.
    • This data is typically represented as a point cloud, where each point corresponds to a detected object at a specific distance and angle.
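
To make step 3 concrete, here is a minimal Python sketch of the time-of-flight calculation quoted above. The 20-nanosecond return time is an arbitrary example value, not an RPLIDAR A1 specification.

```python
# A small worked example of the time-of-flight formula quoted above.
# The 20-nanosecond return time is an arbitrary illustration, not an
# RPLIDAR A1 specification.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(time_of_flight_s: float) -> float:
    """One-way distance to an object, given the round-trip time of a pulse."""
    return (SPEED_OF_LIGHT * time_of_flight_s) / 2.0

# A pulse that returns after 20 nanoseconds corresponds to an
# object roughly 3 meters away.
print(f"{tof_distance(20e-9):.2f} m")  # -> 3.00 m
```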

How SLAM Works with LiDAR:

SLAM (Simultaneous Localization and Mapping) is a computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. Here's how SLAM works when used with a LiDAR sensor:

  1. Data Acquisition:

    • The LiDAR sensor continuously scans the environment, collecting distance measurements and creating point clouds.
  2. Feature Extraction:

    • The SLAM algorithm identifies and extracts features from the point cloud data, such as walls, corners, and objects.
  3. Pose Estimation:

    • The algorithm estimates the robot's position (pose) based on the detected features and the known movement of the robot.
    • This involves using odometry data (e.g., wheel encoders, IMU) to predict the robot's movement and correct it using the LiDAR data.
  4. Map Building:

    • The algorithm incrementally builds a map of the environment by adding new features and updating existing ones (a simplified sketch of this step follows the list).
    • The map is typically represented as a grid or a graph, with nodes corresponding to detected features and edges representing the distances between them.
  5. Localization:

    • The algorithm continuously updates the robot's position on the map using the latest LiDAR data and odometry information.
    • This involves matching the current point cloud with the existing map to determine the robot's location.
  6. Loop Closure:

    • When the robot revisits a previously mapped area, the algorithm detects this and corrects any accumulated errors in the map and the robot's position.
    • This process, known as loop closure, helps maintain the accuracy of the map over time.
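
For readers who want something concrete, below is a minimal, heavily simplified Python sketch of the map-building step only, assuming an occupancy grid with a 5 cm cell size and a known pose. It is not how The Better Navigator is implemented; real SLAM also traces free space, keeps occupancy probabilities, performs scan matching against the existing map, and handles loop closure.

```python
import math

# Simplified map-building step: given one scan of (angle, range) pairs taken
# at a known pose, mark the grid cells where obstacles were detected.

CELL_SIZE = 0.05  # assumed grid resolution in meters per cell

def mark_hits(grid: dict, pose: tuple, scan: list) -> None:
    """Mark occupied cells from one scan taken at pose = (x, y, heading_rad)."""
    x, y, heading = pose
    for angle_rad, range_m in scan:
        if range_m <= 0:  # skip invalid returns
            continue
        # Convert the polar measurement into world coordinates.
        wx = x + range_m * math.cos(heading + angle_rad)
        wy = y + range_m * math.sin(heading + angle_rad)
        cell = (int(wx // CELL_SIZE), int(wy // CELL_SIZE))
        grid[cell] = grid.get(cell, 0) + 1  # simple hit counter per cell

# Example: a fake three-point scan with the robot at the origin facing +x.
occupancy = {}
mark_hits(occupancy, (0.0, 0.0, 0.0), [(0.0, 1.0), (math.pi / 2, 0.5), (math.pi, 2.0)])
print(occupancy)
```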

The Better Navigator and NMS in Synthiam ARC:

Synthiam's ARC platform provides advanced tools for integrating sensors and navigation using the Navigation Messaging System (NMS) and The Better Navigator SLAM skill.

The Better Navigator:

  • Overview: The Better Navigator is a SLAM-based navigation skill that uses LiDAR or depth sensors to create a map of the environment and navigate to waypoints.
  • Features:
    • Mapping: Creates a detailed map of the environment using LiDAR data.
    • Waypoints: Allows users to define waypoints and navigate to them.
    • Obstacle Avoidance: Detects and avoids obstacles in real-time.
    • Pose Estimation: Uses external sensors (e.g., Intel RealSense T265) for accurate pose estimation.

Navigation Messaging System (NMS):

  • Overview: NMS is a framework within ARC that allows different sensors and navigation skills to communicate and share data.
  • Levels:
    • Level 1: Map Navigation - Skills that create and use maps for navigation.
    • Level 3: Sensor Groups - Sensors that provide data to the NMS.
      • Group 1: Lidar/Depth Scanners - Sensors that detect obstacles.
      • Group 2: Localization Telemetry - Sensors that track the robot's position.
  • Integration:
    • Sensor Data: Sensors like LiDAR push data to the NMS.
    • Navigation Skills: Skills like The Better Navigator subscribe to the NMS and use the sensor data for mapping and navigation (see the illustrative sketch below).
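
Purely as an illustration of that push/subscribe flow, here is a toy Python sketch. Every class and method name in it is hypothetical; it is not Synthiam ARC's API, only a stand-in showing sensors publishing data that a navigation skill consumes.

```python
# Purely illustrative sketch of the publish/subscribe flow described above.
# Every name here is hypothetical; this is NOT Synthiam ARC's actual API.

from collections import defaultdict
from typing import Any, Callable

class ToyMessageBus:
    """Toy stand-in for a navigation messaging service."""

    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, channel: str, handler: Callable[[Any], None]) -> None:
        self._handlers[channel].append(handler)

    def push(self, channel: str, data: Any) -> None:
        for handler in self._handlers[channel]:
            handler(data)

bus = ToyMessageBus()

# A navigation skill subscribes to both sensor groups...
bus.subscribe("scan", lambda scan: print(f"navigator received {len(scan)} scan points"))
bus.subscribe("pose", lambda pose: print(f"navigator received pose {pose}"))

# ...while the sensor drivers push their data into the bus.
bus.push("scan", [(0.0, 1.2), (1.57, 0.8)])  # Group 1: lidar/depth scan data
bus.push("pose", (0.0, 0.0, 0.0))            # Group 2: localization telemetry
```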

Example Workflow:

  1. Setup:

    • Connect the LiDAR sensor to the robot and configure it in ARC.
    • Add The Better Navigator skill to the ARC project.
    • Configure the NMS to use the LiDAR sensor for obstacle detection and a localization sensor for pose estimation.
  2. Mapping:

    • Start the LiDAR sensor and The Better Navigator skill.
    • Drive the robot around the environment to create a map.
  3. Navigation:

    • Define waypoints on the map.
    • Use The Better Navigator to navigate to the waypoints, avoiding obstacles and updating the map in real-time.

By leveraging The Better Navigator and NMS in Synthiam ARC, you can create a robust and accurate navigation system for your XR-1 robot, enabling it to explore and interact with its environment effectively.


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
#44  

So what are your thoughts about this mounting location?

User-inserted image

PRO
Synthiam
#45  

Your diagram has the lidar only seeing 180 degrees.

@athena explain how many degrees the lidar should be able to see for a reliable SLAM pose estimation. Explain that a small pipe or something in the way behind it can be okay.

Athena can't see images. It's best to describe the image in degrees visible to the lidar. She'll be able to assist you more than I can. All I can do is ask Athena to help you.

#46   — Edited

DJ, it sees a little over 255 degrees in this location.

PRO
Synthiam
#47   — Edited

No, it's not. It's 180, maybe 190. Draw lines directly out from the sensor. Anything past 90 degrees CW or -90 degrees CCW is just hitting the body.

Pretend the lidar is a spinning flashlight, but with a very small laser, so it only looks directly out at each angle. You'd have to move your lidar out roughly 30 cm in front of the body to see 270 degrees (just guessing numbers here, but Athena can explain more).

@athena can explain the math to you. The width of the body and how far the lidar sits out in front of it determine the total angle the lidar can see, assuming the lidar is placed at the center of the front of the body.
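
For reference, here is a rough Python sketch of that math. It assumes the lidar is centered on the front face of the body and treats the body as a flat wall of the given width directly behind the sensor, so the numbers are only estimates.

```python
import math

# Rough field-of-view estimate for a 360-degree lidar mounted at the center of
# the front face of a body. Assumptions: the body is a flat wall of width
# body_width_m sitting offset_m behind the sensor; real robot bodies differ.

def visible_fov_degrees(body_width_m: float, offset_m: float) -> float:
    """Unobstructed field of view for a front-center-mounted 360-degree lidar."""
    if offset_m <= 0:
        return 180.0  # flush with the front face: only the front half is clear
    # Each side gains atan(offset / (half body width)) beyond the 90-degree line.
    extra_per_side = math.degrees(math.atan(2.0 * offset_m / body_width_m))
    return 180.0 + 2.0 * extra_per_side

# Example: a 30 cm wide torso with the lidar 15 cm ahead of it clears roughly
# 270 degrees; mounted flush with the front face, it clears only 180.
print(round(visible_fov_degrees(0.30, 0.15)))  # -> 270
print(round(visible_fov_degrees(0.30, 0.00)))  # -> 180
```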

#48   — Edited

User-inserted image

This shows 260 degrees of clear line of sight.

If I added a second one in the back, does ARC merge the two data streams together automatically?