PRO
Canada
#9  

Nice find - thanks Nomad_6R. That larger metal-coupled HDD-style gripper plus higher-torque servos is exactly the right direction if you want to reliably pick up a full soda can. Below is a concise checklist, torque math you can use, recommended servo specs, wiring/power notes, ARC integration steps, and a ready-to-adapt JavaScript test + grasp sequence you can paste into an ARC Script robot skill.

Please edit the CONFIG values in the script to match your ports and measured values (I’ll help tune it after you reply with specifics).

  1. Quick reality checks (do these first)
  • Measure max jaw opening after you print and mount the Thingiverse adapter. A 12 oz can is ~66 mm diameter - jaws must open >66 mm with pads fitted.
  • Measure arm length (distance from gripper pivot to center-of-grip). This is needed for torque calc.
  • Note servo model(s) and supply voltage. If using HDD-style servos, identify the stall torque rating (kg·cm or oz·in).
  • Use a dedicated servo power supply (not the EZB 5V). For high torque servos, use 6-8.4V BEC / battery rated for peak stall current.
  2. Torque estimate (simple)
  • Weight of a full 12 oz can ~0.37 kg (3.63 N).
  • Torque (Nm) = weight_force_N * arm_length_m.
  • Convert to kg·cm: torque_kgcm = torque_Nm / 0.0980665

Example:

  • arm = 0.10 m (10 cm)
  • torque_Nm = 3.63 N * 0.10 m = 0.363 Nm
  • torque_kgcm = 0.363 / 0.09807 ≈ 3.7 kg·cm

That number is the static torque at that arm length. Because of dynamics, friction, leverage, and safety margin, pick servos that exceed the required torque by at least 2-3×. So for the example above, choose a gripper servo >= 8-12 kg·cm (and preferably 15+ kg·cm). If the arm is longer (0.15-0.20 m) or you plan to lift the can well off the ground, choose 20-50+ kg·cm for the lift servo.
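
If you want to sanity-check those numbers inside ARC itself, here is a minimal JavaScript sketch of the same calculation (the mass and arm length are the example values above - substitute your own measurements):

// Torque sanity check - example values, edit to match your robot
var canMassKg = 0.37;                  // full 12 oz can
var armLengthM = 0.10;                 // gripper pivot to grip center (m)
var safetyFactor = 3;                  // 2-3x margin recommended

var forceN = canMassKg * 9.80665;      // weight in newtons
var torqueNm = forceN * armLengthM;    // static torque at the pivot
var torqueKgCm = torqueNm / 0.0980665; // convert N·m to kg·cm
print("Static torque: " + torqueKgCm.toFixed(1) + " kg·cm");
print("Choose a servo rated >= " + (torqueKgCm * safetyFactor).toFixed(1) + " kg·cm");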

  3. Recommended hardware
  • Gripper servo: metal-geared high-torque servo, 10-20+ kg·cm (or higher if jaw geometry or leverage is poor).
  • Arm lift servo: heavier - 20-50+ kg·cm depending on arm length and desired lift height (use higher torque here).
  • Use metal spline / square coupling and strong hardware as you described.
  • Use rubber pads on jaws for friction.
  • Consider a small cradle/tray so you can capture the can instead of fully lifting if torque is tight.
  4. Power & wiring safety
  • Dedicated servo battery/UBEC sized for peak current. Heavy servos can draw many amps under stall.
  • Add a large electrolytic capacitor near the servos to smooth current peaks (e.g., 1000 µF+ at servo voltage).
  • Tie servo ground to EZB ground.
  • Do not use EZB internal 5V to power heavy servos.
  • Test servo sweep slowly and check for heating or stall.
  5. ARC integration notes (what to set in ARC UI)
  • Camera Device + Train Vision Object By Script - train the can or use MultiColor tuned for the can color.
  • In Camera skill Settings (kebab menu -> Servo/Movement Tracking) enable Movement tracking or servo tracking depending on approach.
  • Configure Camera grid lines and movement thresholds before running scripts.
  • Add Ping ultrasonic as a stopping safety (recommended) or rely on $CameraObjectWidth as a heuristic if no Ping available.
  • Use Servo.setMinPositionLimit / Servo.setMaxPositionLimit and Servo.setSpeed to protect hardware.
  6. ARC JavaScript template (adapt)
  • Change ports and positions in the CONFIG area to match your hardware.
  • This uses print(), servo APIs, Ping.get() if you have a Ping sensor, Camera control commands, and Movement.stop().
// ARC JavaScript - Can pickup template
// === CONFIG === edit these to match your robot ===
var trainedObjectName = "COKE";        // leave "" if using color tracking instead
var gripperPort = d2;                 // gripper servo port (d0-d23 or v0-v99)
var armPort = d3;                     // arm lift servo port (or undefined)
var gripperOpenPos = 30;              // degrees - adjust after mechanical test
var gripperClosePos = 110;            // degrees - adjust after mechanical test
var gripperSafeMin = 5;               // software min limit (deg)
var gripperSafeMax = 170;             // software max limit (deg)
var gripperSpeed = 6;                 // 0 fastest - 10 slowest (slower = safer under load)
var armPreLowerPos = 60;              // arm pos for grasp
var armLiftPos = 90;                  // lift pos post-grasp
var armSafeMin = 5;
var armSafeMax = 170;
var armSpeed = 5;
var pingTriggerPort = d7;             // optional Ping trigger (set to undefined if not used)
var pingEchoPort = d8;                // optional Ping echo
var approachWidthThreshold = 140;     // fallback camera width pixels that mean "close enough"
var checkDelay = 200;                 // ms loop delay

// === HELPERS ===
function printStatus(msg){
  print(msg); // ARC uses print() in JS
}

// === SAFETY: set soft limits and speeds ===
try{
  Servo.setMinPositionLimit(gripperPort, gripperSafeMin);
  Servo.setMaxPositionLimit(gripperPort, gripperSafeMax);
  Servo.setSpeed(gripperPort, gripperSpeed);
}catch(e){ printStatus("Warning: could not set gripper limits/speed: " + e); }

if(typeof armPort !== "undefined"){
  try{
    Servo.setMinPositionLimit(armPort, armSafeMin);
    Servo.setMaxPositionLimit(armPort, armSafeMax);
    Servo.setSpeed(armPort, armSpeed);
  }catch(e){ printStatus("Warning: could not set arm limits/speed: " + e); }
}

// === Manual gripper sweep test (safe) ===
printStatus("Starting gripper sweep test. Stop script if servo stalls.");
Servo.setPosition(gripperPort, gripperOpenPos);
sleep(500);
Servo.setPosition(gripperPort, gripperClosePos);
sleep(800);
Servo.setPosition(gripperPort, gripperOpenPos);
sleep(400);

// === Start camera & attach the Train Vision Object By Script plugin ===
ControlCommand("Train Vision Object By Script", "Attach");
ControlCommand("Camera", "CameraStart");
ControlCommand("Camera", "CameraObjectTrackingEnable");
ControlCommand("Camera", "CameraMovementTrackEnable");
ControlCommand("Camera", "CameraUseTrackingScripts", true);

printStatus("Waiting for object detection...");
while(true){
  var isTracking = getVar("$CameraIsTracking", "0");
  if(isTracking == "1" || String(isTracking).toLowerCase() == "true"){
    if(trainedObjectName && trainedObjectName.length > 0){
      var curName = getVar("$CameraObjectName", "");
      if(curName && curName.toLowerCase().indexOf(trainedObjectName.toLowerCase()) == -1){
        // Not our object yet
        sleep(checkDelay);
        continue;
      }
    }
    break;
  }
  sleep(checkDelay);
}

printStatus("Object detected. Approaching...");

// === Approach / stop loop ===
while(true){
  var closeEnough = false;

  // Prefer Ping if configured
  if(typeof pingTriggerPort !== "undefined" && typeof pingEchoPort !== "undefined"){
    try{
      var dist = Ping.get(pingTriggerPort, pingEchoPort); // 0-255 cm
      printStatus("Ping distance: " + dist + " cm");
      if(dist > 0 && dist <= 20){ // 20 cm example stopping distance, tune this
        closeEnough = true;
      }
    }catch(e){
      printStatus("Ping error: " + e);
    }
  } else {
    // fallback to camera width
    var w = parseInt(getVar("$CameraObjectWidth", "0"), 10);
    if(isNaN(w) || w <= 0){
      w = parseInt(getVar("$CameraObjectWidth_1", "0"), 10); // common suffix
    }
    printStatus("Object width(px): " + w);
    if(w >= approachWidthThreshold) closeEnough = true;
  }

  if(closeEnough){
    Movement.stop();
    break;
  }
  sleep(checkDelay);
}

// === Grasp sequence ===
printStatus("Opening gripper for grasp");
Servo.setPosition(gripperPort, gripperOpenPos);
sleep(700);

if(typeof armPort !== "undefined"){
  printStatus("Lowering arm to pre-grasp pos");
  Servo.setPosition(armPort, armPreLowerPos);
  sleep(800);
}

printStatus("Closing gripper");
Servo.setPosition(gripperPort, gripperClosePos);
sleep(900);

if(typeof armPort !== "undefined"){
  printStatus("Lifting arm");
  Servo.setPosition(armPort, armLiftPos);
  sleep(900);
}

printStatus("Pickup complete.");

// Optionally disable camera movement tracking
ControlCommand("Camera", "CameraMovementTrackDisable");
  7. What I need from you to tune this exactly
  • Which EZB ports for gripper, arm, (and wrist if present)? (e.g., gripper = d2, arm = d3)
  • Measured jaw maximum opening in mm after you print & mount.
  • Arm length from pivot to grasp center (mm or cm).
  • Servo model(s) and stall torque rating (kg·cm or oz·in) and operating voltage.
  • Do you have a Ping ultrasonic (which ports)? Or will you rely on camera width?
  8. Extra tips & next steps
  • If you plan to lift the can fully and carry it: size the arm servo for worst-case torque (long arm + can).
  • If torque is tight, use cradle capture instead of lift (push into tray).
  • Add retries and failure detection to the script (release and reattempt if the grasp can't be verified - see the sketch after this list).
  • When you give me the ports and servo specs I’ll:
    • Insert exact port constants into the script.
    • Compute required torque and recommend specific servo models (links).
    • Set conservative Servo.setMaxPositionLimit / setMinPositionLimit and supply-specific timing.
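
As a starting point for the retry idea above, here is a hedged sketch. verifyGrasp() is a hypothetical placeholder you would implement yourself (e.g., re-check $CameraIsTracking, or compare a feedback-capable servo's reported position against the commanded one):

// Retry sketch - verifyGrasp() is a hypothetical placeholder; implement it
// for your hardware (camera re-check, feedback servo position, etc.).
var maxAttempts = 3;
for(var attempt = 1; attempt <= maxAttempts; attempt++){
  Servo.setPosition(gripperPort, gripperClosePos);
  sleep(900);
  if(verifyGrasp()){
    print("Grasp verified on attempt " + attempt);
    break;
  }
  print("Grasp not verified - releasing and retrying");
  Servo.setPosition(gripperPort, gripperOpenPos);
  sleep(700);
}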

Ready to finalize - paste your servo ports, measured jaw opening, arm length and servo model/voltage and I’ll produce a fully tuned script and specific servo/power recommendations.


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
Belgium
#11  

Hi Nink,

Why did you stop? I remember a Wall-E looking for the red ball. Very cool.

PRO
Canada
#12  

Great references - thanks, Nink. DJ's camera-to-XYZ idea is exactly what you'll want if you plan to use vision to position the arm/wrist precisely for a pickup. Below I'll summarize two practical approaches (quick & robust vs. precise XYZ), give the math for converting camera pixels to real-world coordinates, and provide ARC JavaScript you can run to calibrate and compute XYZ. I'll also ask a few follow-ups so I can tailor the script to your Roli hardware.

High-level options

  • Simpler & robust (recommended for first runs)

    • Use Camera Device + Train Vision Object (or Multi-Color) to detect the can.
    • Enable Camera Movement tracking so the robot auto-centers and drives to the object.
    • Use a Ping ultrasonic to stop at safe grasp distance.
    • Use a short arm/gripper open-close script to grasp.
    • Pros: easy to configure, less camera calibration, good reliability.
    • Cons: less precise arm positioning (but usually fine if gripper geometry/cradle is forgiving).
  • Precise XYZ (useful if you want the camera to guide the arm/wrist)

    • Compute distance (Z) from object pixel width and known object real width using a focal-length calibration.
    • Compute X & Y offsets (cm) relative to camera center using pinhole camera math:
      • X_cm = (px - cx) * Z / fx
      • Y_cm = (py - cy) * Z / fy
    • Use those coordinates to compute arm servo target positions (requires arm geometry mapping).
    • Pros: accurate placing for complicated grippers, you get "where the object is in 3D".
    • Cons: requires calibration and a conversion mapping (arm geometry -> servo angles).

Required camera math (pinhole model)

  • Known object real width W_cm (e.g., 6.6 cm for a 12 oz can).
  • Measured pixel width at known distance: pxWidthAtKnownDist (pixels) at distance D_cm (cm).
  • From a calibration measurement:
    • focal_px = pxWidthAtKnownDist * D_cm / W_cm
  • Then for a detection with pixel width pxWidth:
    • distance_cm = W_cm * focal_px / pxWidth
  • For center offsets:
    • fx = fy = focal_px (assuming square pixels). cx = imageWidth/2 ; cy = imageHeight/2
    • X_cm = (centerX - cx) * distance_cm / focal_px
    • Y_cm = (centerY - cy) * distance_cm / focal_px
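
Worked example with illustrative numbers: if the can (W_cm = 6.6) measures 90 px wide at D_cm = 30, then focal_px = 90 * 30 / 6.6 ≈ 409. A later detection of pxWidth = 60 gives distance_cm = 6.6 * 409 / 60 ≈ 45 cm, and an object centered 40 px right of cx sits at X_cm = 40 * 45 / 409 ≈ 4.4 cm.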

What to measure / prepare

  • Is the camera mounted on the claw (moves with the arm) or fixed on the rover body?
    • Camera-on-claw makes relative alignment simpler (the camera -> gripper transform is small). A fixed camera needs a coordinate transform between camera and arm base.
  • Real object width W_cm (e.g., Coke can diameter ~ 6.6 cm).
  • A calibration measurement: place the object at a known distance D_cm from the camera (e.g., 30 cm), note $CameraObjectWidth in pixels (or run the calibration helper below to capture it).
  • Camera resolution used (e.g., 320x240) - code reads these vars automatically.
  • Do you have a Ping (ultrasonic) sensor? If yes, which trigger/echo ports?
  • Gripper/arm servo ports and whether the arm is using HDD servos you mentioned.

Quick calibration + compute XYZ script

  • Paste this into an ARC JavaScript Script skill. It:
    1. Attaches Train Vision Object By Script and Camera.
    2. Allows you to run a calibration step using a known distance and known object width (you can run this once).
    3. Computes focal_px and stores as a global variable.
    4. Provides a function that returns X,Y,Z (cm) for the current detected object.

Edit only the CONFIG section (knownWidthCm and the optional ping ports). The script will prompt you to place the object for calibration.

// Camera calibration and pixel -> XYZ helper (ARC JavaScript)
// CONFIG - edit these
var knownWidthCm = 6.6;            // real width of the object (cm). Coke can ~6.6cm
var calibrationDistanceCm = 30;    // distance from camera during calibration (cm)
var pingTriggerPort = undefined;   // e.g., d7 (or undefined)
var pingEchoPort = undefined;      // e.g., d8 (or undefined)
var checkDelay = 200;

// Helpers
function printStatus(s){ print(s); }
function setGlobal(name, val){ setVar(name, String(val)); }
function getGlobal(name, def){ return getVar(name, def); }

// Ensure the camera and Train Vision Object By Script plugin are attached
ControlCommand("Train Vision Object By Script", "Attach");
ControlCommand("Camera", "CameraStart");
ControlCommand("Camera", "CameraObjectTrackingEnable");

// Wait for camera resolution variables to be set
var camW = parseInt(getVar("$CameraResolutionWidth", "320"));
var camH = parseInt(getVar("$CameraResolutionHeight", "240"));
printStatus("Camera resolution reported: " + camW + " x " + camH);

// Calibration step: put the known object at calibrationDistanceCm from camera
printStatus("Place the object centered in the camera at ~ " + calibrationDistanceCm + " cm and press OK in the console.");
// Wait for detection and get $CameraObjectWidth in pixels
function waitForObjectPixels(timeoutMs){
  var start = Date.now();
  while(Date.now() - start < timeoutMs){
    var isTracking = getVar("$CameraIsTracking", "0");
    if(isTracking == "1" || String(isTracking).toLowerCase() == "true"){
      var pxW = parseFloat(getVar("$CameraObjectWidth", "0"));
      var pxH = parseFloat(getVar("$CameraObjectHeight", "0"));
      var cX = parseFloat(getVar("$CameraObjectCenterX", "0"));
      var cY = parseFloat(getVar("$CameraObjectCenterY", "0"));
      if(pxW > 2){ // valid reading
        return {pxWidth: pxW, pxHeight: pxH, centerX: cX, centerY: cY};
      }
    }
    sleep(checkDelay);
  }
  return null;
}

// Calibration routine
printStatus("Waiting to detect object for calibration...");
var calib = waitForObjectPixels(20000);
if(!calib){
  printStatus("Calibration failed: object not detected. Re-run after starting camera and training object.");
  halt();
}
printStatus("Calibration detection: pixel width = " + calib.pxWidth + " px at ~ " + calibrationDistanceCm + " cm.");

// Compute focal length in pixel units: focal_px = pxWidthAtKnownDist * D_cm / W_cm
var focal_px = calib.pxWidth * calibrationDistanceCm / knownWidthCm;
printStatus("Calculated focal length (pixels) = " + focal_px.toFixed(2));

// Save global values
setGlobal("$CAM_FOCAL_PX", focal_px.toFixed(3));
setGlobal("$CAM_KNOWN_W_CM", knownWidthCm.toFixed(3));
setGlobal("$CAM_CX", (camW/2).toFixed(1));
setGlobal("$CAM_CY", (camH/2).toFixed(1));

printStatus("Calibration saved. You can now run the getObjectXYZ() routine.");

// Function to get current object XYZ (cm) relative to camera center
function getObjectXYZ(){
  var isTracking = getVar("$CameraIsTracking", "0");
  if(!(isTracking == "1" || String(isTracking).toLowerCase() == "true")){
    return null;
  }
  var pxWidth = parseFloat(getVar("$CameraObjectWidth", "0"));
  var cx = parseFloat(getGlobal("$CAM_CX", String(camW/2)));
  var cy = parseFloat(getGlobal("$CAM_CY", String(camH/2)));
  var centerX = parseFloat(getVar("$CameraObjectCenterX", "0"));
  var centerY = parseFloat(getVar("$CameraObjectCenterY", "0"));
  var focal = parseFloat(getGlobal("$CAM_FOCAL_PX", "0"));
  var Wcm = parseFloat(getGlobal("$CAM_KNOWN_W_CM", String(knownWidthCm)));
  if(pxWidth <= 0 || focal <= 0){ return null; }

  var Z = Wcm * focal / pxWidth; // distance in cm
  var X = (centerX - cx) * Z / focal; // lateral offset (cm), + right, - left
  var Y = (centerY - cy) * Z / focal; // vertical offset (cm), + down, - up

  return {X_cm: X, Y_cm: Y, Z_cm: Z, pxWidth: pxWidth, centerX: centerX, centerY: centerY};
}

// Example usage: print live XYZ for 10 seconds
printStatus("Printing XYZ (cm) for 10 seconds...");
var start = Date.now();
while(Date.now() - start < 10000){
  var xyz = getObjectXYZ();
  if(xyz){
    printStatus("X: " + xyz.X_cm.toFixed(2) + " cm, Y: " + xyz.Y_cm.toFixed(2) + " cm, Z: " + xyz.Z_cm.toFixed(2) + " cm");
  } else {
    printStatus("object not tracked currently");
  }
  sleep(400);
}
printStatus("Done.");

How to use that output

  • If camera is mounted on the claw: X,Y,Z are camera-relative and can be directly used to compute small servo corrections (hand-eye simplified).
  • If camera is fixed on the body: X,Y,Z are relative to camera - you’ll need the transform to the arm base (a simple offset and rotation if the camera position is fixed and known).
  • You will almost always pair this with a final Ping sensor check (stop within ~5-8 cm) so you don’t smack the robot into the can.
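
For the fixed-camera case, here is a minimal sketch of that transform. It assumes the camera axes are aligned with the arm-base axes (no rotation - if your camera is tilted you would add a rotation step), and the offset values are placeholders to replace with measurements from your robot:

// Camera -> arm-base offset sketch. Assumes aligned axes (no rotation).
// The offsets are placeholders - measure the camera position on your robot.
var camToArmX_cm = 0;   // lateral offset of camera from arm base
var camToArmY_cm = 12;  // vertical offset of camera from arm base
var camToArmZ_cm = -5;  // forward/back offset of camera from arm base

function cameraToArmBase(xyz){
  return {
    X_cm: xyz.X_cm + camToArmX_cm,
    Y_cm: xyz.Y_cm + camToArmY_cm,
    Z_cm: xyz.Z_cm + camToArmZ_cm
  };
}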

Mapping XYZ -> servo angles

  • This depends on arm kinematics (link lengths, pivot centers).
  • For a simple single-joint lift + parallel jaw:
    • Z controls arm extension / lift servo angle.
    • X maps to wrist lateral offset (rotate base or translate with small turning moves).
  • You’ll usually derive a linear mapping experimentally:
    • Move the gripper to known X offsets, record the servo angles, and compute degPerCm for that axis.
    • Use the degPerCm for conversions in the script.
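
Here is a minimal sketch of that experimentally derived linear mapping. The neutral angle and degPerCm value below are placeholders - measure them as described above:

// Linear X-offset -> wrist-angle mapping sketch. Placeholders: command two
// known offsets, record the servo angles, and compute the slope yourself.
var wristNeutralDeg = 90; // servo angle when the object is dead-center
var degPerCmX = 3.5;      // measured degrees of wrist travel per cm of X offset

function xOffsetToWristAngle(xCm){
  var angle = wristNeutralDeg + xCm * degPerCmX;
  return Math.min(170, Math.max(5, angle)); // clamp to a safe range
}

// usage (wristPort is hypothetical - substitute your actual port):
// Servo.setPosition(wristPort, Math.round(xOffsetToWristAngle(xyz.X_cm)));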

Practical workflow I recommend

  1. Mount gripper + camera (camera-on-claw easier while tuning).
  2. Train the object with Train Vision Object By Script or use color/multicolor.
  3. Run the calibration script above with the can at a known distance and save focal_px.
  4. Test getObjectXYZ() outputs while moving object to verify values.
  5. If using camera-driven driving: enable Camera movement tracking and use Ping for stopping.
  6. For arm: experimentally map X/Z -> servo angles and implement a pick routine (skeleton after this list):
    • approach (movement tracking or move forward)
    • stop with Ping
    • optionally use camera XYZ to micro-adjust arm/wrist
    • open gripper, lower arm, close gripper, lift.
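
Here is a hedged skeleton of that step-6 pick routine. It reuses getObjectXYZ() from the calibration script, the CONFIG names from the earlier template (gripperPort, armPort, ping ports - all set to your values), and the hypothetical xOffsetToWristAngle()/wristPort mapping sketched above:

// Pick routine skeleton - assumes getObjectXYZ(), xOffsetToWristAngle(), and
// the earlier CONFIG variables are all defined and tuned for your robot.
Movement.forward();
while(Ping.get(pingTriggerPort, pingEchoPort) > 8){ // ~8 cm stop distance - tune
  sleep(100);
}
Movement.stop();

var adjust = getObjectXYZ();
if(adjust){
  Servo.setPosition(wristPort, Math.round(xOffsetToWristAngle(adjust.X_cm))); // micro-adjust
  sleep(400);
}

Servo.setPosition(gripperPort, gripperOpenPos);  // open
sleep(500);
Servo.setPosition(armPort, armPreLowerPos);      // lower
sleep(700);
Servo.setPosition(gripperPort, gripperClosePos); // close
sleep(900);
Servo.setPosition(armPort, armLiftPos);          // lift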

Follow-up questions so I can make a tailored script

  • Is the camera mounted on the claw/arm or fixed on the Roli body?
  • Will you do the calibration (place can at a measured distance D and collect $CameraObjectWidth)? If not, tell me a good D (cm) and I’ll provide a compact script to prompt you and save the result.
  • Do you have a Ping ultrasonic sensor (ports)? If yes, give the trigger/echo ports.
  • Which EZB ports are gripper & arm servos connected to? (e.g., gripper = d2, arm = d3)
  • Do you want me to produce a full pick script that:
    • uses Camera movement tracking to approach,
    • uses Ping to stop,
    • uses camera XYZ to adjust the arm/wrist before grasp,
    • then closes the gripper and lifts? If yes, I'll need the servo ports and a simple mapping (or I'll include a short calibration routine for the servo angle <-> cm mapping).

Next steps

Tell me camera mount (claw or fixed), gripper/arm servo ports, and whether you want the full pick script. If you want, run the above calibration snippet and paste the printed focal_px (or paste the $CameraObjectWidth reading at a known distance) and I’ll compute focal_px and prepare the next-level pick routine.


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.