
The Birth Of The XR-1 DIY Robot

A NEW CLASS OF HOBBY ROBOT.

The XR-1 robot platform at Robots Builder stands out for its user-friendly design, emphasizing accessibility and ease of assembly. Crafted with the idea of making robotics more approachable, the XR-1 has been meticulously engineered to be easily 3D printed and assembled using generic parts like servos and bearings.

User-inserted image

Our commitment to simplifying the building process means that enthusiasts and robot hobbyists of all skill levels can engage in constructing their own robot. We've invested time in perfecting the design, ensuring that users can swiftly move on to the exciting aspects of teaching and programming their robots. The XR-1 is highly customizable, featuring a variety of hands and heads to choose from, allowing users to personalize their creations. To foster a collaborative and open community, we're proud to announce that everything about the XR-1 is open source, inviting users to contribute, modify, and share their innovations with the global robotics community. Get ready to embark on a journey of creativity and exploration with the XR-1!

For more information, check out the following link: https://www.robotsbuilder.com

As one of the creators of the XR-1, I will be leveraging the SYNTHIAM ARC platform for robot control, and I hope others here will also decide to join me in developing an ARC project to support an XR-1 robot. As of today, Jan 9th, 2024, we have started to post files for the 3D printing, but we are still developing the documentation and the above website, so please check back.

My goal is to have a walking, talking, listening, and seeing robot the size of a small child using ARC and its many skills.

As I go down this path I will be posting more about my efforts here.

User-inserted image

I have started building the torso from the hips up, as shown in the pictures below. The shoulders are not fully installed because I am waiting on the correct screws to come in. Please excuse the messy workbench. :p

User-inserted image

To support the robot, I have designed a set of boards that will have ARC firmware loaded on them; connected to the onboard PC, they will sit in the middle of the chest.

User-inserted image

The above board will be mounted on the backplate between the two shoulder sockets and connected to the USB hub that in turn connects to the PC. Two more USB cables, along with servo power, will run down to the boards in each hand. The USB camera and head board will be connected to the PC's other two USB ports. ARC will be running headless on the Windows 11 Pro PC, using RDP to connect to it from other devices. There is also an MPU connected to the main board that I hope I will be able to leverage with ARC.

I have added the shoulders and arms down to the wrists, and the arms seem to be very strong and capable.

I decided I wanted to make a J5-type head for my build with two cameras in it. The following is the completed head. I have added 6 more servos to my build for the eye flaps, so now we are at 50 servos total. I have added a directional mic array along with an ultrasonic radar. For sound, there are two speakers mounted in the head with an audio amp.

User-inserted image

I have also decided to add an option to the XR1 to support a 5" touch display, as seen below.

User-inserted image

The display will provide more options for the operator when using the ARC remote.

I have created two- and three-finger claw grippers, as well as human-type hands, that can be used on the XR1.



PRO
Canada
#1  

Pretty cool robot! How tall is it?

#2  

@Jeremie, he is about 105 cm (42 inches) tall. I think he is currently at 44 DOF; depending on what you do in the head, it could be more.

The current design will have a very small but powerful Windows 11 Pro PC on board, with a USB hub connecting an Arduino Nano in the head, a Nano in each hand, and two Mega Pros on the back for everything else. It will also have other sensors.

PRO
Canada
#3  

Very cool. Great work. I took a look at your BOM; I am glad you used hobby servos and gave links to AliExpress. Every time I go to build a project, it's just "buy a dozen of these $1500 Dynamixel servos."

PRO
USA
#4  

Nice work smiller29

#5  

Anyway, he is being uploaded section by section, part by part; we should be at 80% by the end of the week, with only the head and hands remaining.

User-inserted image

The head is very close to being completed. The hands are going to take a bit longer, as we are working to make a strong, functional set of hands, and at child size they are smaller, which creates a challenge. We have been working on many concepts but have not decided which one to use at this point.

#6  

We have added a link to SYNTHIAM on the website for a software solution for the XR-1 so I hope you get some added traffic to your site.

#7  

I am hopeful that the Arduino Mega firmware is going to work on the Mega Pro Mini without having to change it.

My testing starts today. If I find issues, I hope I can get some help from DJ to fix things to support these boards, because our project is depending on them.

#8  

Looks really exciting. Thanks for sharing. Good luck with the roll out.

#9   — Edited

User-inserted image

We have now posted all of the above XR1 parts on the site, and the next section to be released is the lower arm and wrist, once we have completed testing them.

PRO
Synthiam
#10  

Very cool! Looks like a fun project:D

#11  

Well, DJ, we need to get your team to build one so you can help make stuff for it.

PRO
Synthiam
#12  

We’d love to! Resources are thin right now as top priority is ARCx. It wouldn’t be until after March that there will be time to do it. By then you’ll be busy adding ARCx functionality to it. What a great robot though! Can’t wait to try it and promote it for you

#13  

Well, I am looking forward to ARCx. I think next month I need to renew my subscription.

It would be nice if you could create firmware IDs for the Nano. I am currently using the UNO firmware, and it works, but within ARC the image shows the UNO; it would be nice if it could show a Nano.

Based on my testing, the Mega firmware seems to be working on the Mega Pro Mini, so it would also be nice if that had its own ID and picture within ARC.

Maybe this could be something for ARCx.

PRO
Canada
#14  

I see you have a telegram channel.  A lot of robotics folks hang out on Discord. Do you have a discord channel?  The photos and design docs are very professional and look great. Do you have a video of the XR1?

#15  

Not fully built. I had a prototype version, but that was not the same as this new design. We have made many improvements based on testing. We currently have about 10 people building this new design, including myself.

So far, people have said things are going well, and they are very happy with the design and fitment of the parts. Until I get the rest of my parts to complete the control pack, get ARC set up, and start developing scripts, we are not going to see any real animated videos.

We are making some videos of some of the joints and sections of the build.  I am sure we will end up posting them on the website at some point.

As far as Discord goes, we don't have one at this point, but we have created a Facebook group. We can't spread ourselves too thin right now, but I am sure Discord will happen at some point.

#16  

This is an update to let people know we have completed more parts of the XR1 for release.

User-inserted image

#17  

Thanks for sharing! I really appreciate a new open source bot out there.

PRO
Belgium
#18  

Hi everyone,

smiller29, very cool design.

Nink,

remember that the cheap servos don't have a full range of 1 to 180 degrees; it's more like 1-5 to 175-180 degrees.

#19   — Edited

We are in the process of releasing a new set of legs that will not require leg covers and will have better support for cable management.

Draft of the design:

User-inserted image

PRO
Synthiam
#20  

Ooooooh, that's awesome - I'm glad you're still working on it! Take your time getting the physical components and build to your satisfaction, and it'll be ready by the time ARCx is ready, fingers crossed. I keep getting in trouble every time I mention ARCx, haha. I know it's a million+ lines of code being re-written and it's a lot of work, but I'm eager to get you all using it!

#21  

DJ, I am depending on ARCx, so my fingers are crossed that your team continues to have success with its development.

We have made a lot of improvements on the XR1. We have also created a feedback main board for using four wire servos for position feedback.

I currently have 51 servos and a bunch of sensors in my build so I have a lot to control and I need ARCx for the job. xD

#22   — Edited

As another option, I have been making a track-based base option for the XR1, based on the J5 lower section. It is not really the same at all, other than the general concept.

This is going to be for builders that do not want to deal with all the challenges of making the XR1 walk on two legs.

User-inserted image

I am currently printing the parts whose design has been completed. I can't wait to get this to the point where I can set up my ARC project to control this unit.

PRO
Synthiam
#23  

That’s a great idea. I’d build that before a walking robot. I don’t have much interest in maintaining gaits and the challenges that come along with a walking robot.

I like that your robot has tracks like WALL-E. Will it have arms and grippers? Or a gripper?

PRO
Synthiam
#24  

This is a robot design I like, with the arm and gripper in the middle.

User-inserted image

#25  

@DJ, while this is just a guess, I'd assume that smiller29 is going to have this new base available to replace the lower body of his first robot design. The arms would stay on the main torso of the upper body. Just my uninformed impression, though. If I'm correct, then I think it would end up looking kind of like a centaur with tracks instead of horse legs.

Following your train of thought, an additional center gripper on the lower tread section like you show, in addition to the upper body arms, would be really neat and handy.

@Athena, are you capable of producing an AI-generated composite picture of smiller29's upper body attached at the waist to his concept of a lower body with a tread section, then adding a front center gripper arm like DJ shows in his last post, all using the illustrations in this thread? If you can do this, please do. If not, where can I go to have this done? I know very little about how to use AI or where to find it.

PRO
Canada
#26  

I'm glad to see the enthusiasm and creativity in your project! While I can't generate images directly, I can guide you on how to create a composite image using AI tools or software.

How to Create a Composite Image

  1. Gather Images: Collect images of the XR-1 upper body, the track-based lower body, and a front center gripper arm.

  2. Use Image Editing Software:

    • Photoshop/GIMP: These are powerful tools for creating composite images. You can layer the images and blend them together.
    • Canva: A more user-friendly option that allows for basic image manipulation.
  3. AI Tools:

    • DALL-E: An AI model by OpenAI that can generate images from textual descriptions. You can describe the composite image you want, and it will generate it for you.
    • MidJourney: Another AI tool that can create images based on textual prompts.

Steps to Create the Composite Image

  1. Using Photoshop/GIMP:

    • Open the images of the XR-1 upper body, track-based lower body, and gripper arm.
    • Use the selection tool to cut out the parts you need.
    • Paste the upper body onto the lower body and adjust the size and position.
    • Add the gripper arm in the desired position.
    • Use blending tools to make the composite look seamless.
  2. Using Canva:

    • Upload the images to Canva.
    • Use the drag-and-drop feature to position the upper body on the lower body.
    • Add the gripper arm and adjust its position.
    • Use the transparency and layering options to blend the images.
  3. Using AI Tools:

    • Go to the DALL-E or MidJourney website.
    • Enter a detailed description of the composite image you want (e.g., "A robot with an upper body similar to the XR-1, a track-based lower body, and a front center gripper arm").
    • Generate the image and refine the prompt if needed.

Example Prompt for AI Tools

"Create an image of a robot with the upper body of the XR-1, a track-based lower body inspired by J5, and a front center gripper arm. The robot should look cohesive and functional."

If you need further assistance or have specific questions about using these tools, feel free to ask!


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
#27  

The picture at the top of the post will be the upper body. I currently have three hand options that can be switched out: 2-finger claws, 3-finger claws, and 5-finger hands.

All of these build files will be posted as they are verified on our website for anyone to use for their own projects.

PRO
Synthiam
#28  

Ah I see! That’ll be great then. I’m looking forward to this addition.

Walking robots don't make sense to me. They're significantly less efficient and unstable. If more companies focused on navigation, vision systems, and manipulation, we would be further ahead. I feel they implement walking as a feature because it's trendy and different.

I’d rather know my battery energy is being used productively rather than holding a robot upright haha

#29  

I agree with you, DJ. When we have the new leg version posted above ready for release, I will build those to explore making them walk and balance, just as a learning activity for myself.

PRO
Belgium
#30  

@dj

I have one of those like in your #24 :) - the Mebo robot.

#31  

DJ, or anyone else: if you have any input on navigation skills and hardware sensors I should add to the track design, please let me know.

#33   — Edited

So, DJ, if I am understanding you, you are recommending the RPLidar (A1) out of all the ARC-supported options?

PRO
Synthiam
#34   — Edited

I'm recommending any of the supported lidars combined with The Better Navigator, as I said above.

*Edit: just make sure it works with the NMS. The RPLidar A1 works with NMS and is good. It's what I use. You can learn all of this faster by reading the page I provided and following the video tutorials. All I'd be doing is repeating what's already there. :)

#35  

Thanks, I read all that and the related links. :) Your confirmation helps with the product choice, DJ. I just need to figure out where to mount it on the above track design. I would think it should be located on the lower section of the robot, toward the front, but I can't see a way to give it a 360-degree view. I need to think about this, because this is a really nice solution and I would love to have it built into this robot. :)

#36  

So what do you think about this location? I need to make space for it and add some protection for it.

User-inserted image

PRO
Synthiam
#37  

It's 360 degrees, so you can block part of it, but not over half of it. The more data points, the more accurate. Mounted in that position, it will not perform at all.

it spins and gets distances in 360 degrees.

you can block less behind it by mounting it higher. Or have one in front and one in back.

#38  

So if it has at least a 180-degree scan area above the tracks on each side, would that work? Also, how high off the floor should it be scanning?

PRO
Synthiam
#39  

Height from floor depends on your environment.

As for blocking - that's hard to say. I wouldn't block directly in front, but it's okay to block a few degrees behind.

PRO
Synthiam
#40  

@athena give @smiller tips on where to place a 360 degree lidar rplidar a1

PRO
Canada
#41  

When placing a 360-degree LiDAR sensor like the RPLIDAR A1 on your XR-1 robot, there are several factors to consider to ensure optimal performance:

Placement Tips:

  1. Height from the Floor:

    • General Recommendation: Typically, placing the LiDAR sensor at a height of around 20-30 cm (8-12 inches) from the floor is a good starting point. This height allows the sensor to detect obstacles at various heights, including low-lying objects and furniture.
    • Environment-Specific: If your environment has many low obstacles (e.g., toys, cables), you might want to place it slightly lower. Conversely, if you need to detect higher obstacles (e.g., tables, countertops), you might place it higher.
  2. Field of View:

    • Unobstructed View: Ensure that the LiDAR has an unobstructed 360-degree view as much as possible. While some minor obstructions behind the sensor are acceptable, the front and sides should be clear to maximize the sensor's effectiveness.
    • Mounting Position: Mount the LiDAR on top of the robot or on a raised platform to minimize obstructions. If you need to mount it lower, ensure that the surrounding structure does not block the sensor's line of sight.
  3. Protection:

    • Physical Protection: Consider adding a protective cover or housing around the LiDAR to shield it from dust, debris, and accidental impacts. Ensure that the cover does not obstruct the sensor's field of view.
    • Environmental Factors: If the robot operates in a dusty or dirty environment, regular cleaning and maintenance of the LiDAR sensor will be necessary to maintain accuracy.
  4. Integration with Other Sensors:

    • Complementary Sensors: If you have other sensors (e.g., ultrasonic, infrared), ensure that their placement does not interfere with the LiDAR's operation. Position them in a way that they complement each other, providing a comprehensive view of the surroundings.
  5. Testing and Adjustment:

    • Initial Testing: After mounting the LiDAR, perform initial tests to check for blind spots and ensure that the sensor can accurately detect obstacles in the intended area.
    • Adjustments: Based on the test results, make any necessary adjustments to the height, angle, or position of the LiDAR to optimize its performance.

Example Placement:

  • Top of the Robot: Mounting the LiDAR on the top of the robot's head or a raised platform on the torso can provide a clear 360-degree view.
  • Front and Center: If top mounting is not feasible, placing the LiDAR at the front center of the robot, slightly above the main body, can also work well, provided the sides and front are clear.

Conclusion:

The key is to ensure that the LiDAR has a clear and unobstructed view of the environment. Regular testing and adjustments will help you find the optimal placement for your specific use case. If you have any specific constraints or additional questions, feel free to ask!


PRO
Synthiam
#42  

@athena thanks. Can you also explain how a lidar like the rplidar a1 works?

also explain how SLAM works when used with a lidar. The best SLAM for Synthiam is The Better Navigator.

the system for merging sensors and navigation in Synthiam ARC is called NMS (navigation messaging service)

PRO
Canada
#43  

How LiDAR Works:

LiDAR (Light Detection and Ranging) is a remote sensing method that uses light in the form of a pulsed laser to measure distances. Here's a breakdown of how a LiDAR sensor like the RPLIDAR A1 works:

  1. Emission of Laser Pulses:

    • The LiDAR sensor emits rapid laser pulses in a 360-degree sweep around the sensor.
    • Each pulse travels outward until it hits an object and reflects back to the sensor.
  2. Detection of Reflected Pulses:

    • The sensor detects the reflected laser pulses.
    • The time it takes for the pulse to return to the sensor is measured.
  3. Distance Calculation:

    • The distance to the object is calculated using the time-of-flight (ToF) principle: Distance = (Speed of Light * Time of Flight) / 2.
    • This calculation is done for each pulse, resulting in a set of distance measurements.
  4. Data Collection:

    • The sensor collects distance data for multiple points around it, creating a 360-degree map of the surroundings.
    • This data is typically represented as a point cloud, where each point corresponds to a detected object at a specific distance and angle.
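As a quick illustration of the time-of-flight formula in step 3 (plain JavaScript, not an ARC API; the 40-nanosecond figure is just an example):

```javascript
// Time-of-flight distance calculation, as described above.
const SPEED_OF_LIGHT = 299792458; // meters per second

function distanceFromTimeOfFlight(seconds) {
  // Divide by 2 because the pulse travels out to the object and back.
  return (SPEED_OF_LIGHT * seconds) / 2;
}

// A pulse returning after 40 nanoseconds means the object is about 6 m away.
console.log(distanceFromTimeOfFlight(40e-9).toFixed(3) + " m"); // 5.996 m
```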

How SLAM Works with LiDAR:

SLAM (Simultaneous Localization and Mapping) is a computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. Here's how SLAM works when used with a LiDAR sensor:

  1. Data Acquisition:

    • The LiDAR sensor continuously scans the environment, collecting distance measurements and creating point clouds.
  2. Feature Extraction:

    • The SLAM algorithm identifies and extracts features from the point cloud data, such as walls, corners, and objects.
  3. Pose Estimation:

    • The algorithm estimates the robot's position (pose) based on the detected features and the known movement of the robot.
    • This involves using odometry data (e.g., wheel encoders, IMU) to predict the robot's movement and correct it using the LiDAR data.
  4. Map Building:

    • The algorithm incrementally builds a map of the environment by adding new features and updating existing ones.
    • The map is typically represented as a grid or a graph, with nodes corresponding to detected features and edges representing the distances between them.
  5. Localization:

    • The algorithm continuously updates the robot's position on the map using the latest LiDAR data and odometry information.
    • This involves matching the current point cloud with the existing map to determine the robot's location.
  6. Loop Closure:

    • When the robot revisits a previously mapped area, the algorithm detects this and corrects any accumulated errors in the map and the robot's position.
    • This process, known as loop closure, helps maintain the accuracy of the map over time.
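The predict/correct idea in steps 3-5 can be shown with a deliberately tiny one-dimensional sketch (illustrative only, not ARC's actual SLAM; the wall position, gain, and all readings are made-up numbers):

```javascript
// Toy 1-D predict/correct cycle: odometry predicts the pose, a range
// reading to a known wall corrects it. Real SLAM does this in 2-D/3-D
// with full point clouds and probabilistic filters.
const WALL_POSITION = 10.0; // known landmark at x = 10 m (assumed)

function predictAndCorrect(pose, odometryDelta, rangeToWall, gain) {
  const predicted = pose + odometryDelta;          // dead reckoning
  const implied = WALL_POSITION - rangeToWall;     // where the lidar says we are
  return predicted + gain * (implied - predicted); // blend the two estimates
}

// Believed at 2.0 m, odometry says +1.0 m, the lidar measures the wall
// 6.8 m away (implying x = 3.2 m); equal weighting lands the estimate at 3.1.
console.log(predictAndCorrect(2.0, 1.0, 6.8, 0.5)); // 3.1
```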

The Better Navigator and NMS in Synthiam ARC:

Synthiam's ARC platform provides advanced tools for integrating sensors and navigation using the Navigation Messaging System (NMS) and The Better Navigator SLAM skill.

The Better Navigator:

  • Overview: The Better Navigator is a SLAM-based navigation skill that uses LiDAR or depth sensors to create a map of the environment and navigate to waypoints.
  • Features:
    • Mapping: Creates a detailed map of the environment using LiDAR data.
    • Waypoints: Allows users to define waypoints and navigate to them.
    • Obstacle Avoidance: Detects and avoids obstacles in real-time.
    • Pose Estimation: Uses external sensors (e.g., Intel RealSense T265) for accurate pose estimation.

Navigation Messaging System (NMS):

  • Overview: NMS is a framework within ARC that allows different sensors and navigation skills to communicate and share data.
  • Levels:
    • Level 1: Map Navigation - Skills that create and use maps for navigation.
    • Level 3: Sensor Groups - Sensors that provide data to the NMS.
      • Group 1: Lidar/Depth Scanners - Sensors that detect obstacles.
      • Group 2: Localization Telemetry - Sensors that track the robot's position.
  • Integration:
    • Sensor Data: Sensors like LiDAR push data to the NMS.
    • Navigation Skills: Skills like The Better Navigator subscribe to the NMS and use the sensor data for mapping and navigation.

Example Workflow:

  1. Setup:

    • Connect the LiDAR sensor to the robot and configure it in ARC.
    • Add The Better Navigator skill to the ARC project.
    • Configure the NMS to use the LiDAR sensor for obstacle detection and a localization sensor for pose estimation.
  2. Mapping:

    • Start the LiDAR sensor and The Better Navigator skill.
    • Drive the robot around the environment to create a map.
  3. Navigation:

    • Define waypoints on the map.
    • Use The Better Navigator to navigate to the waypoints, avoiding obstacles and updating the map in real-time.

By leveraging The Better Navigator and NMS in Synthiam ARC, you can create a robust and accurate navigation system for your XR-1 robot, enabling it to explore and interact with its environment effectively.


#44  

So what are your thoughts about this mounting location?

User-inserted image

PRO
Synthiam
#45  

Your diagram has the lidar only seeing 180 degrees.

@athena explain how many degrees the lidar should be able to see for reliable SLAM pose estimation. Explain that a small pipe or something in the way behind it can be okay.

Athena can't see images, so it's best to describe the image in degrees visible to the lidar. She'll be able to assist you more than me; all I can do is ask Athena to help you.

#46   — Edited

DJ, it sees a little over 255 degrees in this location.

PRO
Synthiam
#47   — Edited

No, it's not. It's 180, maybe 190. Draw lines directly out from the sensor; anything past 90 degrees CW or -90 degrees CCW is just hitting the body.

Pretend the lidar is a spinning flashlight with a very small laser, so it only sees directly out at its current angle. You'd have to move your lidar out 30 cm in front of the body to see around 270 degrees (just guessing numbers here, but Athena can explain more).

@athena can explain the math to you. But the width of the body and how far the lidar sits out in front of it determine the total angle the lidar can see, assuming the lidar is placed in the center of the front of the body.
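That geometry can be sketched in a couple of lines of math (illustrative numbers only; assumes the lidar sits centered in front of a flat-fronted body whose two front corners do the blocking):

```javascript
// A lidar mounted "offset" cm in front of a body "bodyWidth" cm wide has its
// rear arc blocked by the two front corners of the body.
function visibleDegrees(bodyWidth, offset) {
  // Angle from straight-ahead to each body corner, in degrees.
  const blockedHalf = Math.atan2(bodyWidth / 2, offset) * (180 / Math.PI);
  // Everything except the wedge behind the two corners is visible.
  return 360 - 2 * blockedHalf;
}

console.log(visibleDegrees(30, 0));  // flush with a 30 cm body: 180 degrees
console.log(visibleDegrees(30, 15)); // 15 cm out in front: 270 degrees
```

Moving the sensor further forward (or narrowing what sits behind it) is what grows the visible arc past 180 degrees.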

#48   — Edited

User-inserted image

This shows 260 degrees of clear line of sight.

If I added a second one in the back does ARC merge the two data streams together automatically?

PRO
Canada
#49  

To achieve reliable SLAM (Simultaneous Localization and Mapping) pose estimation, it's generally recommended that the LIDAR sensor has a clear line of sight for as close to 360 degrees as possible. This ensures that the sensor can detect obstacles and map the environment accurately from all directions.

LIDAR Coverage

  • Ideal Coverage: 360 degrees
  • Minimum Reliable Coverage: At least 270 degrees, but more is better.

Current Setup

  • Your Coverage: 260 degrees
  • Potential Issue: The remaining 100 degrees could lead to blind spots, which might affect the accuracy of the SLAM algorithm.
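As a quick sanity check on these coverage numbers (plain arithmetic, not an ARC API):

```javascript
// Percentage of a full rotation covered by a given visible arc.
function coveragePercent(visibleDegrees) {
  return (visibleDegrees / 360) * 100;
}

console.log(coveragePercent(260).toFixed(1) + "% visible"); // 72.2% visible
console.log(coveragePercent(270).toFixed(1) + "% visible"); // 75.0% visible
```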

Adding a Second LIDAR

If you add a second LIDAR sensor to cover the remaining blind spots, ARC does not automatically merge the data streams from multiple LIDAR sensors. However, you can achieve this by using the Navigation Messaging System (NMS) in ARC.

Steps to Merge Data Streams

  1. Configure Each LIDAR: Set up each LIDAR sensor in ARC to push its data into the NMS.
  2. Use a Navigation Skill: Utilize a navigation skill that can subscribe to multiple LIDAR data streams and merge them. For example, "The Better Navigator" skill can handle multiple data inputs.
  3. Custom Scripting: If needed, you can write custom scripts in JavaScript or Python to process and merge the data streams manually.

Example Configuration

  1. Primary LIDAR (Front):

    • Skill: Hitachi-LG LDS Lidar or RPLidar(A1)
    • Coverage: 260 degrees
  2. Secondary LIDAR (Back):

    • Skill: Another instance of Hitachi-LG LDS Lidar or RPLidar(A1)
    • Coverage: Remaining 100 degrees
  3. Navigation Skill:

    • Skill: EZ-SLAM or The Better Navigator
    • Configuration: Subscribe to both LIDAR data streams

Custom Script Example

Here’s a basic example of how you might set up a script to merge data from two LIDAR sensors:

// Assuming LIDAR1 and LIDAR2 are the names of your LIDAR skills, and that
// each exposes a 360-element array of distances indexed by degree. Verify
// the actual variable names and data format in each skill's documentation.
var lidar1Data = getVar("$LIDAR1_Data");
var lidar2Data = getVar("$LIDAR2_Data");

// Merge the two scans into a single 360-degree scan
var mergedData = mergeLidarData(lidar1Data, lidar2Data);

// The rear lidar faces 180 degrees from the front one, so its readings are
// rotated before being used to fill the front lidar's blind spots.
function mergeLidarData(front, rear) {
    var merged = [];
    for (var deg = 0; deg < 360; deg++) {
        var rearDeg = (deg + 180) % 360;
        // Prefer a valid front reading; otherwise use the rotated rear reading.
        merged[deg] = front[deg] > 0 ? front[deg] : rear[rearDeg];
    }
    return merged;
}

// Push merged data to the NMS
Navigation.updateScan(mergedData);

Conclusion

  • Current Setup: 260 degrees is good but not ideal.
  • Adding a Second LIDAR: Can cover blind spots but requires configuration.
  • Merging Data: Use NMS and possibly custom scripts to merge data streams.

For more detailed guidance, you can refer to the Synthiam documentation on the Navigation Messaging System (NMS) and the specific LIDAR skills you are using.

Feel free to ask if you need further assistance!


PRO
Synthiam
#50  

Your latest diagram is much clearer regarding placement. Images like that would speed conversations like this up.

270 degrees is 75% of a full rotation, which leaves an error of 25%.

Hopefully the SLAM will compensate for the 25% error. If your robot moves very slowly, I suspect it'll be OK. One way to find out is by testing: you don't need to 3D print a robot, just put a lidar on a small 2-wheeled robot and block off 25% to see how it works.

PRO
Canada
#51  

I would probably mount a D435 on the front for distance tracking and a T265 for navigation location tracking on this type of robot.

#52  

OK @Athena, I was able to get 280 degrees of scan area, so I thank you and DJ for your help and guidance on this. I am looking forward to setting this up on the XR-1.

PRO
Canada
#53   — Edited

That's great to hear! Achieving 280 degrees of scan area should significantly improve the accuracy of your SLAM and overall navigation capabilities. Here are some additional tips and steps to help you set up your XR-1 robot with the improved LIDAR coverage:

Setting Up Your XR-1 with Improved LIDAR Coverage

  1. Mounting the LIDAR:

    • Ensure that the LIDAR sensor is securely mounted on the XR-1 to cover the 280 degrees of scan area.
    • Make sure there are no obstructions in the LIDAR's field of view.
  2. Configuring the LIDAR in ARC:

    • Add the LIDAR robot skill to your ARC project.
    • Configure the LIDAR settings to ensure it is correctly calibrated and providing accurate data.
  3. Integrating with Navigation Skills:

    • Use navigation skills like EZ-SLAM or The Better Navigator to process the LIDAR data.
    • Ensure these skills are configured to use the LIDAR data for mapping and navigation.
  4. Adding Additional Sensors:

    • As suggested by @Nink, consider adding an Intel RealSense D435 for distance tracking and a T265 for navigation location tracking.
    • These sensors can provide additional data to improve the accuracy of your robot's navigation.
  5. Testing and Calibration:

    • Perform initial tests in a controlled environment to ensure the LIDAR and other sensors are working correctly.
    • Calibrate the sensors as needed to ensure accurate data.
  6. Fine-Tuning:

    • Use the data from initial tests to fine-tune the settings in ARC.
    • Adjust the parameters in the navigation skills to optimize performance.
  7. Field Testing:

    • Conduct field tests in various environments to ensure the robot can navigate accurately.
    • Make any necessary adjustments based on the results of these tests.

Example Configuration for LIDAR and RealSense Sensors

  1. LIDAR Configuration:

    • Add the LIDAR skill (e.g., RPLidar A1) to your ARC project.
    • Configure the LIDAR settings to cover the 280 degrees scan area.
  2. RealSense D435 Configuration:

    • Add the Intel RealSense D435 skill to your ARC project.
    • Configure the D435 for distance tracking.
  3. RealSense T265 Configuration:

    • Add the Intel RealSense T265 skill to your ARC project.
    • Configure the T265 for navigation location tracking.
  4. Navigation Skill Configuration:

    • Add The Better Navigator skill to your ARC project.
    • Configure it to use data from the LIDAR, D435, and T265 sensors.

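The configuration above can also be driven from an ARC script via ControlCommand(), which is ARC's standard mechanism for sending commands between robot skills. The skill names and command strings below are assumptions for illustration -- check each skill's Cheat Sheet in your own project for the commands it actually exposes. The stub function exists only so the sketch runs outside ARC; inside ARC, ControlCommand() is built in and the stub should be removed.

```javascript
// Stand-in for ARC's built-in ControlCommand(); remove this stub inside ARC.
const sent = [];
function ControlCommand(skill, command) {
  sent.push(`${skill}: ${command}`);
}

// Hypothetical startup sequence: begin pose tracking, then mapping.
// Command names are assumptions -- verify them in each skill's Cheat Sheet.
ControlCommand("Intel Realsense T265", "StartTracking");
ControlCommand("The Better Navigator", "StartMapping");

sent.forEach((s) => console.log(s));
```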
Conclusion

With 280 degrees of LIDAR coverage and the addition of RealSense sensors, your XR-1 robot should have a robust navigation system. Continue testing and fine-tuning to achieve the best performance. If you encounter any issues or need further assistance, feel free to reach out.

Good luck with your XR-1 project, and happy building!


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
Synthiam
#54  

I find the T265 goes out of sync real bad and can't be fixed. Because its map is internal, it starts to drift and there's no way to fix that without resetting it.

The D435 or other depth cameras aren't necessary if the lidar is being used. The depth does add more detail, but in many cases that extra detail affects the SLAM pose estimation. I believe it's because too many data points increase the false positives to the point where the error is too high.
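
One common general-purpose mitigation for that data-point overload (a sketch of the idea, not a built-in ARC feature) is to thin the depth point cloud with grid downsampling before it reaches the SLAM stage, keeping one representative point per cell:

```javascript
// Grid (voxel) downsampling: keep one point per grid cell to thin a dense
// depth-camera point set. General technique, not a specific ARC feature.
function downsample(points, cellSize) {
  const cells = new Map();
  for (const p of points) {
    const key = `${Math.floor(p.x / cellSize)},${Math.floor(p.y / cellSize)}`;
    if (!cells.has(key)) cells.set(key, p); // first point wins per cell
  }
  return [...cells.values()];
}

// The first three points share a 0.1 m cell; the fourth is distinct.
const dense = [
  { x: 0.01, y: 0.02 }, { x: 0.02, y: 0.01 },
  { x: 0.03, y: 0.03 }, { x: 0.51, y: 0.02 },
];
console.log(downsample(dense, 0.1).length); // 2
```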

Depth cameras (in my opinion, today) are most useful for interacting with a robot arm: identify an object, know its Cartesian coordinates, and use kinematics to pick up or interact with it.

I'd stick with lidar for navigation.

Another option is putting a TOF (time of flight) sensor on the rear of the robot pointing exactly 180 degrees from the lidar. That might be useful for the SLAM; I find having distances from behind aids pose estimation. But I'd only consider it if you have too much trouble with the lidar alone.
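
For anyone wiring that up, folding the rear reading into the lidar's coordinate frame is basic polar-to-Cartesian math. This sketch assumes the TOF sensor sits at the lidar's origin and is aimed exactly 180 degrees from the lidar's forward (0-degree) direction:

```javascript
// Convert a rear-facing TOF distance reading into a point in the lidar's
// coordinate frame (x forward, y left). Assumes the sensor is co-located
// with the lidar and aimed exactly 180 degrees from its forward direction.
function tofToLidarFrame(distanceMeters) {
  const angleRad = Math.PI; // 180 degrees: directly behind the robot
  return {
    x: distanceMeters * Math.cos(angleRad), // negative: behind the robot
    y: distanceMeters * Math.sin(angleRad), // ~0: directly aft
  };
}

const p = tofToLidarFrame(1.5);
console.log(p.x.toFixed(2), p.y.toFixed(2)); // -1.50 0.00
```

If the TOF sensor is mounted offset from the lidar's center, its mounting offset would need to be added to x and y before handing the point to SLAM.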

#55   — Edited

DJ, do you think I would be better served by adding the second lidar to the rear of the unit? The reason I ask is that this would be the time to add it to the design. The front has a 280-degree scan and the rear would have a 330-degree scan.
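
As a sanity check on those numbers, a quick sketch (assuming the two lidars are centered exactly 180 degrees apart with no blind spots inside their rated arcs) shows the 280-degree front and 330-degree rear scans combine to cover the full circle, with a wide band seen by both:

```javascript
// Coverage check for two lidars facing opposite directions:
// a 280-degree front scan and a 330-degree rear scan, centers 180 deg apart.
function covered(angle, center, fov) {
  // Smallest angular difference between `angle` and the scan center.
  let d = ((angle - center) % 360 + 360) % 360;
  if (d > 180) d = 360 - d;
  return d <= fov / 2;
}

let both = 0, either = 0;
for (let a = 0; a < 360; a++) {
  const front = covered(a, 0, 280);   // front lidar centered at 0 deg
  const rear  = covered(a, 180, 330); // rear lidar centered at 180 deg
  if (front || rear) either++;
  if (front && rear) both++;
}
console.log(`covered: ${either}/360, seen by both: ${both} degrees`);
```

Sampled at 1-degree steps the union covers all 360 degrees, and roughly 250 degrees (280 + 330 - 360) are scanned by both sensors, which should help the SLAM pose estimation.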

User-inserted image

PRO
Synthiam
#56  

Yeah - both Athena and I have said that above :)

You could add 100 of them if you wanted - it'll just require a bit more processing as it's a lot more data... but not that much more, probably not noticeable.

#57  

Thanks DJ, I will be ordering two of them today and making the required changes to the design of the track system. I am getting very excited to build this and use ARCx on this build. I have not started to script too much in ARC because I am waiting for your new product.

PRO
Synthiam
#58  

The scripts will be pretty much the same in ARCx, but when the parts show up, you can always test them with ARC because the robot skills will be similar. I don't think much has changed with the NMS because it works well. I know there's an NMS3D which will use 3D depth, but I haven't seen it in use yet, and I know it's mostly for integration with ROS packages. I'd avoid integrating ROS with ARCx because ROS is a PITA. I'd rather switch careers than have to program in ROS and deal with the inconsistencies, deprecated packages, broken/missing dependencies, etc.

Anyway, that was a slight tangent. What I'm saying is that you can script, and it'll be the same with ARCx. However, you don't need to script much with the NMS because it does it on its own. If you watch the tutorial video, no scripting is necessary.

#59   — Edited

I just hope my PC can deal with all the inputs.

I have two cameras in the head, a mic array, an audio amp, two speakers, six Arduinos with EZB firmware, one MPU6050, two ultrasonic sensors, ten ADC touch sensors in the hands along with an IR distance sensor in each palm, one 5" display, and a ton of servos. Two H-bridges and two DC geared motors drive the tracks.

PRO
Synthiam
#60  

Oh yeah, that'll be fine. ARC might run a bit sluggish because the UI is its biggest drawback, but once you get to ARCx it should be better. I wouldn't worry about ARC, though; as long as you have a decent graphics card it isn't a problem. The heaviest lifting in ARC is the UI, since it's a graphics-intensive program.