#1  

Below is a practical plan, step-by-step configuration, and a ready-to-adapt ARC JavaScript script you can use to have your Roli Rover locate a red Coke can, approach it, and grab it.

High-level approach

  • Teach the camera to recognize the can (either train the object with Train Vision Object By Script or use Color/MultiColor tracking).
  • Configure the Camera Device skill to drive your Movement Panel (Camera Movement tracking) so the robot follows the detected object.
  • Configure/test the gripper/arm servos (set safe min/max positions).
  • Run a script that enables tracking, follows the can, uses a distance/size threshold to stop, then commands the gripper servos to grasp.

Before you start - questions for you

  1. Do you have a physical gripper/arm installed on Roli? If so, which servo ports are the arm and gripper on (e.g. d2 for wrist, d3 for gripper)?
  2. Do you have a distance sensor (ultrasonic / ping) mounted on Roli? If yes, which digital/ADC ports?
  3. Do you prefer using Train Vision Object (object-based) or simple Color/MultiColor tracking for the red can?

Step-by-step setup (non-script)

  1. Add/verify the Camera Device skill in your project (Project -> Add robot skill -> Camera -> Camera Device).
  2. Configure the camera:
    • Select your camera and a low resolution (320x240 or 160x120 recommended for speed).
    • In the Camera settings (three-lines/kebab menu -> Settings), set the Tracking type to either:
      • Object Tracking (best when you train the can with Train Vision Object by Script), or
      • Color Tracking / MultiColor (simpler: define a "red" color that matches the can).
  3. If using Train Vision Object By Script:
    • Add that skill and attach it to the Camera (open the Train Vision skill settings, or send the Attach ControlCommand from a script).
    • Use the skill’s Train UI or the script to Learn the object (present the can and move it while training).
  4. Configure Movement:
    • Add or configure a Movement Panel robot skill that controls Roli’s motors (H-Bridge, AutoPosition, etc.).
    • In Camera Settings -> Servo/Movement Tracking, enable Movement tracking (and test with CameraMovementTrackEnable). Tune the grid lines and the forward/turn thresholds so Roli centers the can and moves forward.
  5. Configure gripper/arm servos:
    • In the skill settings or via Servo.setMinPositionLimit() / Servo.setMaxPositionLimit(), set safe position limits for the gripper and arm.
    • Manually test opening/closing the gripper with Servo.setPosition(dX, pos).
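As a minimal sketch of step 5 (assuming a gripper on d2 and illustrative safe limits of 10-150 degrees; both are hypothetical values to replace with your own measurements), a limit-and-test routine could look like the following. The ARC calls are guarded with a typeof check purely so the clamping helper can also be exercised outside ARC; in a normal ARC script you can drop the guard:

```javascript
// Hypothetical safe range for the gripper servo - measure on your hardware.
var GRIPPER_MIN = 10;   // safe mechanical minimum (degrees)
var GRIPPER_MAX = 150;  // safe mechanical maximum (degrees)

// Pure helper: clamp a requested position into the safe range so a typo
// can never command the servo past its mechanical limits.
function clampToSafeRange(pos, min, max) {
  return Math.min(Math.max(pos, min), max);
}

// ARC-only section (guarded so the helper above runs anywhere).
if (typeof Servo !== "undefined") {
  Servo.setMinPositionLimit(d2, GRIPPER_MIN); // d2 assumed gripper port
  Servo.setMaxPositionLimit(d2, GRIPPER_MAX);
  // Slowly test open, then close, staying inside the limits.
  Servo.setPosition(d2, clampToSafeRange(30, GRIPPER_MIN, GRIPPER_MAX));
  sleep(1000);
  Servo.setPosition(d2, clampToSafeRange(110, GRIPPER_MIN, GRIPPER_MAX));
}
```

Clamping in script is a second line of defense on top of the skill-level limits, which is useful while you are still tuning positions.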

Where to configure robot skills (kebab menu)

  • Click the three lines (hamburger) in the top-right of a robot skill window, then choose Settings, to change tracking, movement, and scripts. Use the Script fields for start/stop scripts if desired.

Example ARC JavaScript script (template)

  • Edit the variables at the top (objectName, gripper/arm servo ports and positions, widthThreshold or distanceThreshold).
  • This script:
    • Attaches the Train skill, optionally triggers learning (commented).
    • Ensures camera/object tracking and movement tracking are enabled.
    • Waits until the camera sees the object and then loops while following the object.
    • Stops when the object appears large enough (uses $CameraObjectWidth variables as a sizing heuristic).
    • Runs the gripper servo to grasp.

Copy this into a JavaScript Script robot skill and adapt the servo ports and thresholds:

// === CONFIGURE THESE FOR YOUR ROBOT ===
var objectName = "COKE";            // name if you trained via Train Vision Object
var gripperPort = d2;               // change to your gripper servo port (no quotes)
var armPort = d3;                   // optional arm lift port (set to null if you have no arm)
var gripperOpenPos = 30;            // servo angle to keep gripper open
var gripperClosePos = 110;          // servo angle to close gripper
var armLiftPos = 80;                // lift position after grasp (adjust)
var approachWidthThreshold = 140;   // camera object width pixels threshold for "close enough"
var checkDelay = 200;               // loop sleep ms

// === HELPERS ===
function printStatus(msg) {
  // ARC JavaScript uses print() to log in Script Console
  print(msg);
}

// Attach Train Vision skill (optional - safe to call even if already attached)
ControlCommand("Train Vision Object By Script", "Attach");

// If you want to trigger an automatic learn step from script (uncomment to use).
// NOTE: You must present the object to the camera while learning runs.
// ControlCommand("Train Vision Object By Script", "Learn", objectName);
// // wait for learning to finish (polling the learning variable)
// while (getVar("$CameraIsLearning", "0") !== "0") {
//   sleep(200);
// }

// Ensure camera is started (use CameraStart if needed)
ControlCommand("Camera", "CameraStart");

// Enable object tracking and movement following
ControlCommand("Camera", "CameraObjectTrackingEnable");
ControlCommand("Camera", "CameraMovementTrackEnable");
ControlCommand("Camera", "CameraUseTrackingScripts", true);

printStatus("Waiting for object to be detected...");

// Wait until the camera sees the object
while (true) {
  var isTracking = getVar("$CameraIsTracking", "false");
  if (isTracking === "true") {
    // If you used Train Vision Object, verify object name matches (optional)
    var curName = getVar("$CameraObjectName", "");
    // If multiple objects have suffixes, check one of them:
    if (curName !== "" && curName.toLowerCase().indexOf(objectName.toLowerCase()) === -1) {
      // Not our trained object; wait a bit and continue
      sleep(checkDelay);
      continue;
    }
    printStatus("Object detected: " + curName);
    break;
  }
  sleep(checkDelay);
}

// MAIN APPROACH LOOP
printStatus("Approaching object...");
while (true) {
  // Camera gives width/height per detected object variables.
  // Try multiple possible variable names. Use Variable Watch to see exact names in your project.
  var w = parseInt(getVar("$CameraObjectWidth", "0"), 10);
  if (isNaN(w) || w === 0) {
    // try common suffixed name (first object) - many projects use _1 suffix
    w = parseInt(getVar("$CameraObjectWidth_1", "0"), 10);
  }
  // If you have a Ping sensor, prefer using Ping.get(trigger, echo) instead of width
  printStatus("Object width (px): " + w);

  if (w >= approachWidthThreshold) {
    // close enough
    printStatus("Reached close threshold. Stopping movement.");
    Movement.stop();
    break;
  }

  // otherwise, let camera movement tracking handle driving
  // (If you prefer to manually drive, use Movement.forward/left/right based on $CameraObjectCenterX)
  sleep(checkDelay);
}

// GRASP
printStatus("Positioning gripper for grasp...");
// ensure gripper is open
Servo.setPosition(gripperPort, gripperOpenPos);
// small delay for servo movement
sleep(750);

// lower arm if you have an arm (optional)
if (armPort != null) {
  Servo.setPosition(armPort, armLiftPos - 30); // lower slightly - adjust to your hardware
  sleep(700);
}

// close gripper
printStatus("Closing gripper...");
Servo.setPosition(gripperPort, gripperClosePos);
sleep(1000);

// lift arm after grasp (optional)
if (armPort != null) {
  Servo.setPosition(armPort, armLiftPos);
  sleep(800);
}

printStatus("Grab sequence completed.");
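If you would rather steer Roli yourself instead of relying on the Camera skill's built-in movement tracking, a sketch of the decision logic could look like this (assuming a 320-pixel-wide frame and a 30-pixel deadband; both are illustrative values to tune). The ARC calls are guarded with a typeof check only so the steering helper can be verified outside ARC:

```javascript
// Pure helper: decide a drive command from the object's horizontal position.
// centerX is the object's x coordinate in pixels, frameWidth the image width,
// deadband the tolerance (px) around the middle before we bother turning.
function steerCommand(centerX, frameWidth, deadband) {
  var mid = frameWidth / 2;
  if (centerX < mid - deadband) return "left";
  if (centerX > mid + deadband) return "right";
  return "forward";
}

// ARC-only section: read the camera variable and drive accordingly.
if (typeof Movement !== "undefined") {
  var cx = parseInt(getVar("$CameraObjectCenterX", "0"), 10);
  var cmd = steerCommand(cx, 320, 30); // 320 = camera width, 30 px deadband (assumed)
  if (cmd === "left") Movement.left();
  else if (cmd === "right") Movement.right();
  else Movement.forward();
}
```

You would call this inside the approach loop (in place of letting movement tracking drive), and still use the width or distance threshold to decide when to Movement.stop().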

Notes, tuning and testing

  • Variable names: check the exact camera variables in your project with the Variable Watch skill. Depending on the project, the object width may be $CameraObjectWidth, $CameraObjectWidth_1, etc.; confirm which name your camera instance uses and adjust the script accordingly.
  • If the Camera’s builtin movement tracking isn’t tuned, the robot may not center or may overshoot. Tune the Camera Servo/Movement Tracking settings (grid lines and thresholds) before relying on the script.
  • If you have an ultrasonic sensor, use Ping.get(triggerPort, echoPort) or the Ping.wait* helpers to detect distance more reliably than image size.
  • Always run servo movements slowly and within safe min/max limits first to avoid mechanical binding.
  • Reduce camera resolution to improve framerate and response time (320x240 or 160x120 recommended).
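If you do have an ultrasonic sensor, the width-based stop condition in the main loop can be swapped for a distance-based one. A minimal sketch, assuming the trigger and echo are on d23 and d24 (hypothetical ports - change to match your wiring) and an 8 cm stop distance to tune for your gripper reach; the ARC calls are guarded so the threshold helper can be checked outside ARC:

```javascript
// Pure helper: true when a valid reading is at or inside the threshold.
// A reading of 0 is treated as "no echo" and ignored.
function closeEnough(distanceCm, thresholdCm) {
  return distanceCm > 0 && distanceCm <= thresholdCm;
}

// ARC-only section: poll the ping sensor and stop when close enough.
if (typeof Ping !== "undefined") {
  while (true) {
    var dist = Ping.get(d23, d24); // d23/d24 assumed trigger/echo ports
    print("Distance: " + dist);
    if (closeEnough(dist, 8)) {    // stop ~8 cm from the can (tune this)
      Movement.stop();
      break;
    }
    sleep(200);
  }
}
```

Distance readings are usually more stable than pixel width, which varies with lighting and detection jitter, so this makes the "close enough" decision more repeatable.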

Next steps I can help with

  • Provide a tuned script for your exact gripper/arm servo ports and their safe min/max/open/close positions if you tell me the ports (e.g., gripper = d4, arm = d5).
  • Help convert a trained object workflow into an autonomous routine that returns to a home position after pickup.
  • Walk through how to train the object via Train Vision Object By Script step-by-step and show the exact ControlCommand sequence to call from JavaScript.

Tell me:

  • Which servo ports your gripper and arm use, and whether you have a distance sensor (and its port). I’ll update the script to match your hardware and include safe limits and recommended values.

I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.