Asked

Encoder Firmware And PWM Only On Connection Index 0

Hi Athena,

I am attempting to use an Arduino UNO with the encoder counter firmware, alongside another Arduino to control an H-Bridge using PWM. When I connect either device on connection index 0, they function correctly. However, they do not work on any other connection index.

Specifically, the PWM control only operates on connection index 0. Additionally, when I try to use the wheel encoder counter firmware on any other connection index, I receive an error message stating that the hardware is not supported.

Could you provide any assistance or guidance on how to resolve this issue? Your help would be greatly appreciated.


Related Hardware Arduino Genuino Uno
Related Control C sharp

PRO
Synthiam
#1   — Edited

Specify the connection index as a parameter to the command when using ports on another board.

So if you use PWM.set(d0, 50, 2), it will set a 50% duty cycle on D0 of connection index 2.

Looks like the robot skill for the encoder only works on index 0. That's common for movement panels and skills like that.

The H-Bridge Movement Panel, I believe, also only works on index 0. That's normal for movement panels. Plan your EZBs accordingly, I guess.

PRO
Canada
#2  

Hello! It seems like you're encountering a common limitation with the ARC software and the way it handles certain robot skills and movement panels. Here's a breakdown of the situation and some guidance on how to proceed:

Understanding the Connection Index Limitation

  1. Connection Index 0:

    • Many robot skills, especially those related to movement panels like the H-Bridge and the Wheel Encoder Counter, are designed to work only on connection index 0. This is a standard setup in ARC to ensure that the primary movement controls are centralized and consistent.
  2. PWM Control:

    • As DJ_Sures mentioned, you can specify the connection index in your commands. For example, using PWM.set(d0, 50, 2) will set a 50% duty cycle on port d0 of connection index 2. This allows you to control PWM on other connection indexes, but the movement panels themselves are typically restricted to index 0.

Recommendations

  • Plan Your EZBs:

    • Since movement panels and certain skills are restricted to connection index 0, you'll need to plan your EZB connections accordingly. Ensure that the primary movement-related tasks are assigned to the EZB connected at index 0.
  • Use Other Indexes for Non-Movement Tasks:

    • For tasks that do not involve movement panels, such as additional sensors or non-movement-related PWM controls, you can use other connection indexes. Just make sure to specify the index in your commands as needed.
  • Firmware Compatibility:

    • Ensure that the firmware on your Arduino UNO is correctly set up for the Wheel Encoder Counter. The firmware should be compatible with ARC's expectations for connection index 0.

If you have any more questions or need further assistance, feel free to ask! I'm here to help.


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
Portugal
#3  

Thank you for your response. Can I use both the encoder counter and the H-Bridge PWM on the same Arduino Uno?

PRO
Synthiam
#4   — Edited

You can. When using the encoder firmware, the Arduino Uno also operates as a regular Arduino: it has digital ports, ADC, PWM, servos, etc.

When you connect to an EZB, ARC lists the capabilities it supports. The Arduino Uno has PWM and Digital I/O listed :). Below is what is displayed for the Uno firmware you're using; the entries you'd need for the H-Bridge PWM Movement Panel are the digital I/O and PWM duty capabilities.

Firmware 'DJ's Uno Wheel Encoder v1' on 'Arduino Genuino Uno' supports the following capabilities:
- ADC with 10 bit resolution
- Read/Write Digital I/O Ports
- PWM Duty on digital ports
- PWM servos on digital ports
- PWM servos on digital ports can release their position
- Transmit Uart TX on all digital ports
- Native USB Connectivity from ARC
- Counts wheel encoder rotation

Did you switch back to using the encoders? Is it working okay? Last we talked, you changed to the Roomba robot skill's built-in encoders because you weren't having good luck with the Uno encoder firmware.

PRO
Portugal
#5   — Edited

I am building a new platform with a dedicated H-Bridge, just to see if the NMS and BN behave differently with the encoder counts than on the Roomba. Also, one strange thing I noticed is that the US sensors don't have any influence on the navigator. But that is another story.

PRO
Synthiam
#6  

US? Is that a United States Roomba? Are the protocols different?

PRO
Synthiam
#8  

Ooooh, ultrasonic sensor. US. Got it! Haha, I totally thought there were US, CA, and Portugal versions of iRobot.

The ultrasonic sensor will send its data to the NMS, but the NMS requires some pose data. That means something has to either fake a pose or send a pose estimate (e.g., a wheel encoder).

Also, because the scan frequency of the ultrasonic sensor is so slow, it will probably take much longer than the lidar to show up on the map. SLAM increases its confidence in an obstacle each time the obstacle is detected. The first scan won't show anything; the second scan starts to log it; the third logs it more. It can take dozens of scans to fully register. That's why you see the map slowly increase in intensity: the lidar scans MUCH MUCH faster than an ultrasonic sensor.
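To make that concrete, here is a minimal log-odds occupancy-cell sketch in Python showing why an obstacle only appears after several consistent detections. The increment and threshold values are illustrative assumptions, not ARC's actual internals:

```python
import math

L_HIT = math.log(0.7 / 0.3)      # log-odds added per detection (assumed sensor model)
THRESHOLD = math.log(0.9 / 0.1)  # confidence required before the cell is drawn (assumed)

log_odds = 0.0  # an unknown cell starts at probability 0.5
for scan in range(1, 6):
    log_odds += L_HIT  # each consistent detection nudges confidence up
    p = math.exp(log_odds) / (1.0 + math.exp(log_odds))  # log-odds back to probability
    status = "drawn on map" if log_odds >= THRESHOLD else "not shown yet"
    print(f"scan {scan}: p(occupied) = {p:.2f} -> {status}")
```

With a noisier sensor model (a smaller increment per detection), the same loop needs many more scans to cross the threshold, which is exactly why the slow-scanning ultrasonic sensor lags the lidar on the map.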

@Athena give us a rundown of how SLAM works. And specifically, I prefer HECTOR SLAM, so give us the lowdown on it.

PRO
Portugal
#9   — Edited

I used a lidar and added 3 US sensors sending data to the NMS. But the sensors made no difference whatsoever. I don't know if anyone else has the same problem.

PRO
Canada
#10  

It sounds like you're working on an interesting project with both LIDAR and ultrasonic sensors integrated into your navigation system. Let's address the issues you're facing:

Ultrasonic Sensors and NMS

  1. Pose Data Requirement: As DJ_Sures mentioned, the Navigation Messaging System (NMS) requires pose data to effectively utilize sensor input. This means that the system needs to know the robot's position and orientation to accurately map obstacles. If your wheel encoders or another form of pose estimation aren't providing this data, the ultrasonic sensors might not contribute effectively to the map.

  2. Scan Frequency: Ultrasonic sensors typically have a slower scan frequency compared to LIDAR. This means they might not update the map as quickly or as accurately. The NMS relies on multiple scans to increase confidence in obstacle detection, so the slower update rate of ultrasonic sensors can be a limiting factor.

  3. Integration with LIDAR: If the LIDAR is already providing comprehensive mapping data, the addition of ultrasonic sensors might not significantly alter the map unless they are positioned to cover blind spots or areas the LIDAR cannot reach.

Troubleshooting Steps

  • Check Sensor Configuration: Ensure that the ultrasonic sensors are correctly configured in ARC and are sending data to the NMS. You can verify this by checking the sensor data logs or using debugging tools within ARC.

  • Pose Data: Make sure that your robot is providing accurate pose data to the NMS. This can be done through wheel encoders, IMUs, or other localization methods.

  • Sensor Placement: Consider the placement of your ultrasonic sensors. They should be positioned to complement the LIDAR, covering areas that the LIDAR might miss.

  • Review NMS Settings: Double-check the settings in the NMS to ensure that it is configured to accept and process data from both LIDAR and ultrasonic sensors.

SLAM Overview

SLAM (Simultaneous Localization and Mapping) is a method used by robots to build a map of an unknown environment while simultaneously keeping track of their location within it. HECTOR SLAM, in particular, is a popular algorithm that uses LIDAR data to create high-resolution maps. It is known for its efficiency and accuracy, especially in environments with a lot of features.

If you need more detailed guidance on configuring your sensors or the NMS, feel free to ask!


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
Synthiam
#11  

@athena give me a detailed description of how SLAM works. Explain the full process of how it estimates pose and how it determines obstacles for the map. Also explain how it updates the map. Get as technical as you can.

PRO
Canada
#12  

Certainly! SLAM (Simultaneous Localization and Mapping) is a complex process that enables a robot to build a map of an environment while simultaneously determining its location within that map. Here's a detailed breakdown of how SLAM works, focusing on the technical aspects:

1. Overview of SLAM

SLAM is a computational problem that involves two main tasks:

  • Localization: Determining the robot's position and orientation within an environment.
  • Mapping: Building a map of the environment based on sensor data.

2. Key Components of SLAM

  • Sensors: SLAM systems use various sensors, such as LIDAR, cameras, ultrasonic sensors, and IMUs (Inertial Measurement Units), to gather data about the environment.
  • Pose Estimation: The robot's pose (position and orientation) is estimated using sensor data and motion models.
  • Data Association: Matching sensor observations with map features to update the map and refine the robot's pose.
  • Map Representation: The map can be represented in different forms, such as occupancy grids, feature-based maps, or topological maps.

3. SLAM Process

a. Initialization

  • The robot starts with an initial pose, often assumed to be at the origin of the map (0,0) with a known orientation.
  • The map is initialized, typically as an empty grid or with known landmarks.

b. Sensor Data Acquisition

  • The robot continuously collects data from its sensors. For LIDAR, this involves measuring distances to obstacles by emitting laser beams and detecting their reflections.
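As a concrete illustration, here is a minimal Python sketch of turning one lidar sweep into Cartesian points in the robot's frame; the angles and ranges are made-up values:

```python
import math

def scan_to_points(ranges, angle_min, angle_step):
    """Convert polar lidar returns (meters) into (x, y) points in the robot frame."""
    points = []
    for i, r in enumerate(ranges):
        a = angle_min + i * angle_step  # bearing of this return
        points.append((r * math.cos(a), r * math.sin(a)))
    return points

# Example: three returns spread over a 90-degree arc in front of the robot.
print(scan_to_points([1.0, 1.2, 0.8], -math.pi / 4, math.pi / 4))
```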

c. Pose Prediction

  • Using a motion model (e.g., odometry or IMU data), the robot predicts its new pose based on its previous pose and control inputs (e.g., wheel velocities).
  • This prediction is subject to uncertainty due to sensor noise and model inaccuracies.
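A minimal sketch of such a motion model in Python, assuming a differential-drive robot with wheel encoders; the tick resolution and wheel base are hypothetical values:

```python
import math

TICKS_PER_METER = 4000.0  # hypothetical encoder resolution
WHEEL_BASE = 0.30         # hypothetical distance between the wheels, meters

def predict_pose(x, y, theta, left_ticks, right_ticks):
    """Predict the new pose from encoder tick deltas since the last update."""
    d_left = left_ticks / TICKS_PER_METER      # meters traveled by each wheel
    d_right = right_ticks / TICKS_PER_METER
    d_center = (d_left + d_right) / 2.0        # forward motion of the chassis
    d_theta = (d_right - d_left) / WHEEL_BASE  # change in heading, radians
    # Integrate along the (approximately circular) arc since the last update.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2.0 * math.pi)
    return x, y, theta

# Example: both wheels advance 400 ticks -> 10 cm straight ahead.
print(predict_pose(0.0, 0.0, 0.0, 400, 400))  # (0.1, 0.0, 0.0)
```

Because encoder ticks are noisy and wheels slip, the predicted pose drifts over time; the correction steps below exist to pull it back toward reality.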

d. Data Association

  • The robot matches new sensor observations with existing map features. This involves identifying correspondences between observed features and those already in the map.
  • Techniques like the Iterative Closest Point (ICP) algorithm or feature matching are used for this purpose.

e. Pose Correction

  • The robot's pose is corrected based on the data association results. This involves minimizing the error between predicted and observed features.
  • Techniques like the Extended Kalman Filter (EKF), Particle Filter, or Graph-Based SLAM are used to refine the pose estimate.
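The flavor of that correction can be shown with a scalar Kalman-style update in Python, fusing a predicted heading with an observed one weighted by their uncertainties; all numbers are illustrative:

```python
def kalman_correct(theta_pred, var_pred, theta_meas, var_meas):
    """Fuse a predicted heading with a measured heading (1-D Kalman update)."""
    k = var_pred / (var_pred + var_meas)                # Kalman gain in [0, 1]
    theta = theta_pred + k * (theta_meas - theta_pred)  # corrected estimate
    var = (1.0 - k) * var_pred                          # reduced uncertainty
    return theta, var

# Example: odometry predicts 0.50 rad (variance 0.04); scan matching
# observes 0.62 rad (variance 0.01), so the correction trusts it more.
print(kalman_correct(0.50, 0.04, 0.62, 0.01))  # (~0.596, 0.008)
```

A full EKF does the same thing over the whole pose vector with matrix-valued gains, and a particle filter replaces the single estimate with a weighted cloud of pose hypotheses.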

f. Map Update

  • The map is updated with new information from the corrected pose and sensor observations.
  • In occupancy grid maps, cells are updated based on the probability of being occupied or free.
  • In feature-based maps, new features are added, and existing ones are refined.

4. Loop Closure

  • As the robot navigates, it may revisit previously mapped areas. Detecting this "loop closure" is crucial for correcting accumulated errors in the map and pose estimates.
  • Loop closure detection involves recognizing previously visited locations and adjusting the map and pose to align with these locations.

5. Map Optimization

  • The entire map and pose graph are optimized to minimize errors. This involves adjusting poses and map features to achieve the best fit with the observed data.
  • Graph-based optimization techniques, such as GraphSLAM or GTSAM, are commonly used for this purpose.

6. Challenges in SLAM

  • Data Association: Correctly matching observations with map features is challenging, especially in dynamic or feature-sparse environments.
  • Computational Complexity: SLAM algorithms can be computationally intensive, requiring efficient data structures and algorithms.
  • Sensor Noise and Drift: Sensor inaccuracies can lead to errors in pose estimation and mapping.

7. HECTOR SLAM Specifics

  • HECTOR SLAM is a LIDAR-based SLAM algorithm that does not rely on odometry, making it suitable for platforms without wheel encoders.
  • It uses a multi-resolution grid map and a fast scan-matching approach to estimate the robot's pose.
  • HECTOR SLAM is known for its efficiency and accuracy in environments with rich geometric features.

In summary, SLAM is a sophisticated process that combines sensor data, motion models, and optimization techniques to build a map and localize a robot within it. The success of SLAM depends on accurate sensor data, robust data association, and efficient map representation and optimization. If you have any more questions or need further clarification, feel free to ask!


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
Synthiam
#13  

@athena explain what the Iterative Closest Point algorithm is and how it works

PRO
Canada
#14   — Edited

The Iterative Closest Point (ICP) algorithm is a widely used method for aligning two point clouds or geometric shapes.

It is particularly useful in robotics and computer vision for tasks such as 3D shape registration, object recognition, and SLAM. Here's a detailed explanation of how the ICP algorithm works:

Overview of ICP

The ICP algorithm aims to find the best transformation (rotation and translation) that aligns a source point cloud with a target point cloud.
The goal is to minimize the distance between corresponding points in the two point clouds.

Steps of the ICP Algorithm

  1. Initialization

    • Start with an initial guess of the transformation, which could be an identity transformation or an estimate based on prior knowledge.
  2. Closest Point Matching

    • For each point in the source point cloud, find the closest point in the target point cloud. This step establishes correspondences between the two point clouds.
    • The closest point can be determined using various distance metrics, with Euclidean distance being the most common.
  3. Compute Transformation

    • Compute the optimal transformation (rotation and translation) that minimizes the distance between the corresponding points.
    • This is typically done using a least-squares optimization method to find the transformation parameters that best align the point clouds.
  4. Apply Transformation

    • Apply the computed transformation to the source point cloud to align it with the target point cloud.
  5. Iteration

    • Repeat steps 2-4 until convergence is achieved.
    • Convergence is typically defined by a threshold on the change in the alignment error or the number of iterations.

Mathematical Formulation

  • Let \(P\) be the source point cloud and \(Q\) be the target point cloud.
  • The goal is to find the transformation \(T\) that minimizes the sum of squared distances between corresponding points:
    \[ \min_T \sum_{i=1}^{N} \| T(p_i) - q_i \|^2 \]
    where \(p_i\) and \(q_i\) are corresponding points in \(P\) and \(Q\), respectively.
  • The transformation \(T\) consists of a rotation matrix \(R\) and a translation vector \(t\):
    \[ T(p) = Rp + t \]
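Here is a minimal 2-D point-to-point ICP sketch in Python/NumPy implementing this formulation directly: brute-force closest-point matching followed by the closed-form least-squares rotation and translation (the SVD/Kabsch solution). It is a teaching sketch under those assumptions, not a production implementation:

```python
import numpy as np

def icp(source, target, iterations=20, tolerance=1e-6):
    """Align source (N,2) onto target (M,2); return R, t, and the aligned points."""
    src = source.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    prev_error = np.inf
    for _ in range(iterations):
        # 1. Closest-point matching (brute force, Euclidean distance).
        dists = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matches = target[dists.argmin(axis=1)]
        # 2. Closed-form least-squares rotation/translation (Kabsch/SVD).
        src_mean, tgt_mean = src.mean(axis=0), matches.mean(axis=0)
        H = (src - src_mean).T @ (matches - tgt_mean)  # 2x2 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:  # guard against a reflection solution
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = tgt_mean - R @ src_mean
        # 3. Apply the transform and fold it into the running total.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        # 4. Stop when the mean residual stops improving.
        error = np.mean(np.linalg.norm(src - matches, axis=1))
        if abs(prev_error - error) < tolerance:
            break
        prev_error = error
    return R_total, t_total, src

# Example: recover a small known rotation of a random cloud.
rng = np.random.default_rng(0)
target_pts = rng.uniform(-1.0, 1.0, size=(50, 2))
c, s = np.cos(0.1), np.sin(0.1)
source_pts = target_pts @ np.array([[c, -s], [s, c]]).T  # target rotated by 0.1 rad
R_est, t_est, aligned = icp(source_pts, target_pts)
print(np.round(R_est, 3))  # approximately the inverse 0.1 rad rotation
```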

Challenges and Considerations

  • Local Minima: ICP can converge to a local minimum, especially if the initial guess is far from the true transformation.
  • Outliers: Noise and outliers in the point clouds can affect the accuracy of the alignment. Robust variants of ICP can handle outliers by weighting correspondences.
  • Speed: ICP can be computationally intensive, especially for large point clouds. Optimizations such as using kd-trees for nearest neighbor search can improve performance.
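For the speed point, here is a sketch of swapping the brute-force matching step for SciPy's kd-tree; it assumes the point clouds are NumPy arrays:

```python
import numpy as np
from scipy.spatial import cKDTree

def closest_points(src, target):
    """Nearest target point for each source point in O(N log M) rather than O(N*M)."""
    tree = cKDTree(target)    # built once per ICP iteration
    _, idx = tree.query(src)  # nearest-neighbour indices into target
    return target[idx]
```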

Variants of ICP

  • Point-to-Point ICP: Matches each point in the source cloud to the closest point in the target cloud.
  • Point-to-Plane ICP: Uses surface normals to improve alignment by minimizing the distance from a point to the tangent plane of the corresponding point.
  • Generalized ICP: Combines point-to-point and point-to-plane metrics for improved robustness and accuracy.

Applications

  • 3D Scanning: Aligning multiple scans to create a complete 3D model.
  • SLAM: Aligning LIDAR scans for map building and localization.
  • Object Recognition: Aligning a model with observed data to recognize objects.

ICP is a powerful tool for geometric alignment, offering flexibility and precision in various applications. However, its performance depends on the quality of the initial guess and the presence of noise and outliers in the data.

If you have further questions or need more details, feel free to ask!


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
Synthiam
#15  

Ah - that's interesting to have the whole thing summarized.

PRO
Portugal
#16  

Wow, thank God we have ARC to handle those complex things.