Asked — Edited
Resolved by Athena AI!

Assistance Needed For Javascript Scripting

@athena, I am looking for help with writing a JavaScript script for a comprehensive robotics project using Synthiam ARC. Below is a detailed outline of what I aim to achieve:

System Setup:

  1. RPLIDAR Integration:

    • Even though it may be more advanced than necessary compared to a simple ultrasonic sensor, I plan to utilize RPLIDAR for precise environment scanning.
  2. Camera Setup for Glyph Tracking:

    • I need to enable the camera within ARC to identify and track specific glyphs for object recognition and navigation tasks.
  3. Chassis Movement Control:

    • I will employ the iRobot Roomba Movement Panel skill to facilitate and coordinate the movement of the robot chassis effectively.

Obstacle Avoidance Strategy:

  • Front Obstacle Detected (Less than 20 cm):

    • Implement logic to make the robot reverse 10 cm, perform an exploratory scan to identify clear paths, and then navigate towards the safest available direction.
  • Side Obstacles Detected (Less than 15 cm on either side):

    • The robot should halt, reverse slightly, adjust its trajectory with a minor turn, and then continue moving forward to persist in glyph scanning operations.

For detecting obstacles and managing navigation decisions, I intend to use the following global variables:

  • $LidarFurthestDegree
  • $LidarFurthestDistance
  • $LidarNearestDegree
  • $LidarNearestDistance
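As a quick sanity check before writing any avoidance logic, the four variables above can be fed into a small helper that classifies where the nearest obstacle sits. This is only a sketch: the 0° = straight-ahead convention and the sector boundaries are assumptions that should be adjusted to match how the RPLidar skill actually reports angles.

```javascript
// Hypothetical sector classifier for the lidar readings listed above.
// Assumes 0 degrees is straight ahead -- verify this convention against
// your RPLidar skill before relying on it.
function classifyObstacle(degree, distanceCm) {
  var inFront = (degree >= 330 && degree <= 360) || (degree >= 0 && degree <= 30);
  if (inFront && distanceCm < 20) return "front";                      // front threshold: 20 cm
  if (degree > 45 && degree < 135 && distanceCm < 15) return "left";   // side threshold: 15 cm
  if (degree > 225 && degree < 315 && distanceCm < 15) return "right";
  return "clear";
}
```

In ARC you would call it with the live values, e.g. `classifyObstacle(getVar("$LidarNearestDegree", 0), getVar("$LidarNearestDistance", 1000))`.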

Glyph Interaction Protocol:

  • The camera has been configured to recognize four distinct glyphs (Glyph 1, 2, 3, and 4).
  • When Glyph 2, 3, or 4 is detected:
    • The robot should vocalize, "I found something," and proceed with the scanning process.
  • Upon detecting Glyph 1:
    • The robot should halt operations and announce a [custom message].
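The protocol above boils down to a small decision table, which can be kept as a pure function so it is easy to test outside the robot. The glyph names are taken from the list above; the returned values would be wired up to `Audio.say()` and `Movement.stop()` in the ARC script.

```javascript
// Decision logic for the glyph protocol described above.
// Returns what the robot should say and whether it should halt.
function glyphResponse(glyph) {
  if (glyph === "Glyph1")
    return { speak: "[custom message]", halt: true };   // placeholder message
  if (glyph === "Glyph2" || glyph === "Glyph3" || glyph === "Glyph4")
    return { speak: "I found something", halt: false };
  return null; // nothing recognized -- keep scanning
}
```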

Could you provide guidance or examples on scripting this setup and the associated logic in JavaScript within the ARC environment?


Related Hardware: Roomba


PRO
Canada
#1  

Below is an example of how you might begin constructing your JavaScript script in ARC to meet your requirements. In this example, we assume that your other skills (RPLidar and Camera Glyph Tracking) update global variables, such as:

$LidarNearestDistance and $LidarNearestDegree
$LidarFurthestDistance and $LidarFurthestDegree
$GlyphDetected (which you can set to "Glyph1", "Glyph2", etc. when the camera detects a particular glyph)

You can adjust the names of any global variables as needed. The script uses simple checks to decide whether a front or side obstacle is too close, then commands the Roomba Movement Panel accordingly. For instance, when a front obstacle is detected (within 20 cm, roughly in the forward direction), the robot reverses for a short interval, makes a small turn, and then resumes forward movement. Similarly, if an obstacle is detected on either side (determined by checking the angle from the lidar data), the robot stops, reverses a little, makes a slight turn to correct its course, and then continues.

Below is an annotated sample script:


// This function is our main control loop.
function main() {
  // Start with the robot moving forward and glyph scanning active
  Movement.forward();
  
  // Main control loop - adjust sleep times and motion durations based on your robot’s calibration.
  while (true) {
    // -------------------------------
    // Glyph Interaction Protocol:
    // -------------------------------
    // We assume that your camera skill sets a global variable $GlyphDetected when a glyph appears.
    // It might be an empty string when nothing is detected.
    var glyph = getVar("$GlyphDetected", "");

    if (glyph !== "") {
      // For Glyph 2, 3, or 4: announce and continue scanning.
      if (glyph === "Glyph2" || glyph === "Glyph3" || glyph === "Glyph4") {
        Audio.say("I found something");
        // Optionally, you can clear the glyph variable so it doesn’t re-fire.
        setVar("$GlyphDetected", "");
      }
      // For Glyph 1: stop and announce a custom message, then break out of the loop.
      else if (glyph === "Glyph1") {
        Movement.stop();
        Audio.say("Halting operations. Custom message here.");
        break; // Exit the loop or you can add further conditional behavior.
      }
    }

    // -------------------------------
    // Obstacle Avoidance Strategy:
    // -------------------------------
    // Read the global lidar variables.
    // These variables are assumed to be updated by your RPLidar skill.
    var nearestDistance = getVar("$LidarNearestDistance", 1000);
    var nearestDegree = getVar("$LidarNearestDegree", 0);
    
    // Example: Front obstacles are checked within a sector.
    // Let’s assume the forward direction is given by degrees near 0 or close to 360.
    if (nearestDistance < 20 &&
       ((nearestDegree >= 330 && nearestDegree <= 360) || (nearestDegree >= 0 && nearestDegree <= 30))) {
      // Front Obstacle detected: Reverse a little (adjust duration for 10 cm - calibration dependent)
      Movement.stop();
      Audio.say("Front obstacle detected, reversing");
      Movement.reverse();
      sleep(500);  // Reverse for 500ms (adjust as needed)
      Movement.stop();

      // After reversing, perform an exploratory scan.
      // For example, turn left a bit and resume forward.
      Movement.left();
      sleep(300);  // Turn for 300ms (minor turn)
      Movement.stop();
      Movement.forward();
    }
    // Side obstacles (assuming that if the nearest obstacle is not in the forward sector,
    // it might be on one of the sides defined by angle ranges).
    else if (nearestDistance < 15) {
      if (nearestDegree > 45 && nearestDegree < 135) {
        // Obstacle on left side
        Movement.stop();
        Audio.say("Left side obstacle detected");
        Movement.reverse();
        sleep(300);
        Movement.stop();
        Movement.right();
        sleep(300);
        Movement.stop();
        Movement.forward();
      } else if (nearestDegree > 225 && nearestDegree < 315) {
        // Obstacle on right side
        Movement.stop();
        Audio.say("Right side obstacle detected");
        Movement.reverse();
        sleep(300);
        Movement.stop();
        Movement.left();
        sleep(300);
        Movement.stop();
        Movement.forward();
      }
    }
    
    // Optionally show current lidar values in the log for debugging
    print("NearestDistance: " + nearestDistance + " cm at " + nearestDegree + " degrees");
    
    sleep(100); // Loop delay (100ms)
  }
}

// Run the main control loop.
main();

Explanation & Tips

  1. Glyph Handling:

    • The script checks a global variable (e.g., $GlyphDetected) to see if any glyph has been recognized.
    • For Glyph 2, 3, or 4 it speaks a message and continues operating.
    • For Glyph 1 it stops and announces a custom message before breaking out of the loop.
  2. Obstacle Avoidance:

    • For front obstacles, the script checks if the closest (nearest) lidar reading is under 20 cm in the forward sector (assumed as 330-360 or 0-30).
    • For side obstacles, it checks for distances under 15 cm. Adjust the angle ranges according to how your project defines side.
    • Timing (using sleep(milliseconds)) is used to control motion durations. You may need to calibrate these times (e.g., 500 ms for reversing ~10 cm) based on real-world testing.
  3. Movement Commands:

    • The script uses built-in ARC movement functions like Movement.forward(), Movement.reverse(), Movement.left(), and Movement.right().
    • You may also use ControlCommand() if you need to call specific commands in a different robot skill.
  4. Global Variables:

    • The script uses getVar() and setVar() to interface with global variables shared between skills.
    • Ensure that your RPLidar and camera skills update the correct global variables so the script can read current values.
  5. Testing & Calibration:

    • Use the Variable Watcher in ARC to verify that your global variables (e.g., $LidarNearestDistance) are updated correctly.
    • Adjust sleep durations and angle sectors based on your robot’s speed and sensor configuration.
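Rather than guessing the sleep durations mentioned in the tips above, you can derive them from a measured drive speed. The helper below is a sketch; the 20 cm/s figure in the example is a placeholder, so time your own Roomba over a known distance and substitute the real value.

```javascript
// Convert a target travel distance into a Movement sleep duration.
// speedCmPerSec must be measured on your own robot: drive for a fixed
// time, measure the distance covered, and divide.
function driveTimeMs(distanceCm, speedCmPerSec) {
  return Math.round((distanceCm / speedCmPerSec) * 1000);
}

// Example usage in an ARC script, assuming a measured 20 cm/s:
//   Movement.reverse(); sleep(driveTimeMs(10, 20)); Movement.stop();
```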

Feel free to ask followup questions or provide more details if you need further customization or refinements to the script!


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
Synthiam
#3  

I don’t think Athena’s answer helps much, because there are too many questions packed into a single post. It’s difficult to answer more than one question in a forum thread because each requires its own unique, unrelated steps.

When dealing with the camera, you’d simply add a script to the camera skill’s start-tracking script to perform a task when the specific glyph is detected. That’s a very different process, unrelated to the lidar movement.
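To make this concrete: the camera-side logic can live entirely in the camera skill's tracking-start script and never touch the lidar code. Below is a minimal sketch of that decision, assuming the camera skill exposes the detected object's name in a variable such as $CameraObjectName; both the variable name and the "Glyph 1" spelling are assumptions to confirm in ARC's Variable Watcher.

```javascript
// Sketch of the decision a camera "tracking start" script would make.
// In ARC you would call it as: onGlyphTracked(getVar("$CameraObjectName", ""))
// -- the variable name is an assumption; check the camera skill's variable list.
function onGlyphTracked(objectName) {
  if (objectName === "Glyph 1") return "halt";  // stop and speak the custom message
  if (objectName !== "") return "announce";     // speak "I found something"
  return "ignore";                              // nothing tracked
}
```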

My advice would be to split the questions into smaller tasks and accomplish one at a time. Much like making a sandwich: you wouldn’t include buying the dishes and kitchen as part of the sandwich-making process.

So my advice is to choose one of those tasks and start a new question for Athena to answer. She wrote the best answer she could as a single script, but that’s not how you’d program a robot. Because they’re each unique, unrelated tasks using their own robot skills, the answer for each task is considerably different.

The camera robot skill has what you asked about built into it. But because the post contains so many questions, Athena isn’t able to answer them all in a single response.

Advice

Start a new thread asking Athena about the lidar behavior script you want. Once you’ve got that working, move on to the camera-related question.

PRO
Canada
#4  

Thanks for your feedback, DJ_Sures. I understand that tackling several independent tasks at once can be overwhelming, and you’re right: combining these different functionalities (the lidar-based movement and the camera glyph tracking) into one script does complicate matters.

If you prefer to split the project into smaller tasks, I can definitely help with that! For example, you might start by focusing on the lidar behavior, then later address the glyph tracking and integrate that separately.

What would you like to work on first? Would you prefer to refine the lidar-based obstacle avoidance script or address the camera glyph detection logic? Let me know so we can take a step-by-step approach.

I'm here to help break it down into manageable pieces!


PRO
Germany
#5  

Let’s begin by refining the lidar-based obstacle avoidance script.

#6   — Edited

DJ’s suggestion to start a new thread for one of the questions is a good way to begin. We have some great documentation in the support section that covers how to plan and phrase questions, but I think you can do just fine without it. In a question, provide as much detail about the specific goal as you can. Mention the goal’s outcome and what you expect the robot to do.

There is a comedic term used by programmers: GIGO, which stands for Garbage In, Garbage Out. It implies that your answer is only as good as your question.

DJ makes similar references in his posts about GIGO to Douglas Adams’ The Hitchhiker’s Guide to the Galaxy, where the answer 42 was presented. Everyone was confused by the answer, and the computer explained that because they didn’t understand the question, they couldn’t understand the answer.

I should also mention that Athena only responds when her name is tagged.

PRO
Synthiam
#7  

@athena, create an example question that @jp can ask about the lidar. Format it using the information provided. Thanks! Jp can use that as an example question to ask in a new thread.

PRO
Canada
#8  

Below is an example of how jp might ask a focused question about the LiDAR behavior in a new thread:


Subject:
Help refining my LiDAR-based obstacle avoidance script using global variables

Body:
Hi Athena,

I’m working on a LiDAR-based obstacle avoidance script for my Roomba robot. My current setup uses the following Synthiam ARC robot skills:

  • iRobot Roomba Movement Panel (for chassis control)
  • RPLidar (for environment scanning)

The RPLidar skill updates these global variables:

  • $LidarFurthestDegree
  • $LidarFurthestDistance
  • $LidarNearestDegree
  • $LidarNearestDistance

My goal is to have the following behavior:

  • Front Obstacle Detection:
    If an obstacle is detected directly ahead at less than 20 cm, the robot should reverse about 10 cm, perform an exploratory scan for clear paths, and then navigate toward the safest direction.

  • Side Obstacle Detection:
    If obstacles are detected on either side (distance less than 15 cm), the robot should halt, reverse slightly, adjust its heading with a minor turn, and then proceed forward.

I’d like to implement this using a JavaScript script in ARC. My questions are:

  1. How should I structure the script to continuously monitor these global LiDAR variables and decide which avoidance routine to run?
  2. What are the best practices for controlling the Roomba Movement Panel (using functions like Movement.forward(), Movement.reverse(), etc.) in this scenario?
  3. Could you provide an example code snippet that demonstrates a basic loop where the script checks the LiDAR distance values, determines whether a front or side obstacle is present, and then issues the relevant movement commands?

Any sample code, guidance, or debugging tips would be really helpful!

Thanks in advance for your assistance,

jp15sil24


This example question provides details about:

  • The project setup and goals (using RPLidar with Roomba movement control)
  • How the global variables are used for LiDAR data
  • The specific behaviors you want to implement for obstacles
  • Clear questions on script structure, use of movement commands, and a code example

Feel free to use or modify this template as needed. Let me know if you need further clarification or additional examples!

