Asked

Map Loading And Wavefront Inquiry


Do we currently need to manually load an occupancy grid or map into the Synthiam ARC software, and if so, what is the procedure for doing this? I have noticed that the robot skill does not have a settings user interface, as the 3-dot menu doesn’t open any configuration options.

Additionally, is the Wavefront robot skill fully developed and functional at this point, or are there still features related to map loading and configuration that are in development?


Related Hardware AdventureBot
Related Controls Wavefront EZ-SLAM


#1  

Short answer

  • Yes - Wavefront currently expects you to load the occupancy grid into ARC yourself (the skill reads an ARC global variable called $MyMap).
  • Wavefront is an experimental/simple planner and does not provide a full map-loading GUI. It expects an 80×80 integer array in $MyMap where: 255 = wall, 0 = empty, 1 = source, 254 = destination.
  • The Wavefront skill itself is fairly basic (see docs) and has no kebab/settings UI because there are no options to show. If you need a nicer workflow, you can either (A) script the map load yourself or (B) use an NMS-enabled SLAM skill (EZ-SLAM / a LIDAR NMS chain) to produce maps automatically and then translate its output into $MyMap.
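If your mapping skill outputs a generic occupancy grid (many SLAM tools emit per-cell occupancy probabilities), translating it for Wavefront is mostly a value-remapping step. Here is a minimal sketch in plain, runnable JavaScript; the 0-100 occupancy-percentage input format, the threshold, and the function name are illustrative assumptions, not part of ARC:

```javascript
// Sketch: remap a generic occupancy grid (assumed 0-100 occupancy
// percentages, a common SLAM output convention) into Wavefront's value
// scheme: 255 = wall, 0 = free. Adjust the threshold to suit your mapper;
// you would then still place the 1 (source) and 254 (destination) cells.
function toWavefrontValues(occupancy, wallThreshold = 50) {
  return occupancy.map(v => (v >= wallThreshold ? 255 : 0));
}
```

After remapping, write each element into $MyMap with the same setVar("$MyMap[index]", value) pattern used in the script below.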

What Wavefront expects (important)

  • $MyMap must exist and be an 80×80 array (total 6400 elements).
  • Values:
    • 255 = obstacle / wall
    • 0 = free space
    • 1 = start (source)
    • 254 = destination (goal)
  • Trigger command (replace "Wavefront" with your skill window name if different): ControlCommand("Wavefront", "StartWavefront", "Start")
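The layout above can be made concrete with a small, runnable sketch in plain JavaScript (outside ARC). The helper names are illustrative; the idx = x * height + y indexing matches the example script in this answer:

```javascript
// Sketch: the flat-array layout Wavefront expects in $MyMap.
// 80x80 = 6400 elements, indexed as idx = x * height + y.
const WIDTH = 80, HEIGHT = 80;
const WALL = 255, FREE = 0, SOURCE = 1, DEST = 254;

function toIndex(x, y) { return x * HEIGHT + y; }
function fromIndex(idx) { return { x: Math.floor(idx / HEIGHT), y: idx % HEIGHT }; }

// Check that a candidate map matches the documented conventions:
// correct size, only legal cell values, exactly one source and one destination.
function validateMap(map) {
  if (map.length !== WIDTH * HEIGHT) return "map must have 6400 elements";
  let sources = 0, dests = 0;
  for (const v of map) {
    if (v === SOURCE) sources++;
    else if (v === DEST) dests++;
    else if (v !== WALL && v !== FREE) return "invalid cell value: " + v;
  }
  if (sources !== 1) return "expected exactly one source cell";
  if (dests !== 1) return "expected exactly one destination cell";
  return null; // map is valid
}
```

Running a check like this before triggering the skill saves a silent failure if a cell was set to an unexpected value.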

How to load a map into ARC (three recommended ways)

  1. Quick JavaScript script (create/populate $MyMap programmatically)
  • This creates an 80×80 array, fills it with zeros, places a start and destination and some walls, then starts Wavefront and reads $MyRoute.
// Example JavaScript for ARC
// Creates $MyMap as 80x80, populates a simple scene and triggers Wavefront.

var width = 80;
var height = 80;
var total = width * height;
var skillName = "Wavefront"; // change if your wavefront skill window name is different

// 1) create array and initialize to 0
Utility.defineGlobalVariable("$MyMap", total, "0"); // fills with "0" strings

// 2) helper to set a cell (x: 0..79, y: 0..79)
function setCell(x, y, val) {
  var idx = x * height + y; // same indexing used by Wavefront skill
  setVar("$MyMap[" + idx + "]", val.toString());
}

// 3) basic test map: surround edges with walls, start at (1,1), dest at (78,78)
for (var x = 0; x < width; x++) {
  for (var y = 0; y < height; y++) {
    if (x === 0 || y === 0 || x === width-1 || y === height-1) {
      setCell(x, y, 255); // walls on border
    } else {
      setCell(x, y, 0); // free space
    }
  }
}
setCell(1, 1, 1);      // source
setCell(78, 78, 254);  // destination
// add a sample obstacle block
for (var bx = 20; bx < 30; bx++) {
  for (var by = 20; by < 30; by++) {
    setCell(bx, by, 255);
  }
}

// 4) trigger the Wavefront skill
ControlCommand(skillName, "StartWavefront", "Start");

// small pause while skill processes (adjust if needed)
sleep(500);

// 5) read $MyRoute (if skill wrote it)
var route = getVar("$MyRoute", null);
if (!route) {
  print("No $MyRoute returned. Wavefront may not have run or did not find a path.");
} else {
  print("Received $MyRoute length: " + route.length);
  // simple text rendering (rows)
  for (var y = 0; y < height; y++) {
    var line = "";
    for (var x = 0; x < width; x++) {
      var idx = x * height + y;
      var cell = route[idx] || " ";
      line += cell;
    }
    print(line);
  }
}

Notes about that script:

  • Utility.defineGlobalVariable creates the array and fills with default values.
  • setVar("$MyMap[IDX]", "value") sets individual array entries (this is the usual ARC array syntax).
  • If you already have a 1D/2D map file (CSV, JSON, etc.) you can parse it in a similar script and set each $MyMap element.
  2. Load a map from a file (CSV/JSON) using Net/File + script
  • Read the file with File.readAllText() or Net.HTTPGet, parse the CSV/JSON, then set each $MyMap index using the same setVar("$MyMap[index]", value) pattern. I can provide a sample loader if you tell me the file format you have.
  3. Use EZ-SLAM or an NMS Level #3 sensor to create maps automatically
  • EZ-SLAM is intended to visualize SLAM and works best with a LIDAR feeding scans into the Navigation Messaging System (NMS). If you have a LIDAR plus a Level 3 Group #2 pose source (e.g., Intel T265 or wheel odometry skill), you can get an automated map in a Level #1 mapping skill (like The Better Navigator or a SLAM mapper). From that mapping output you would still need to convert the map into the 80×80 $MyMap format for Wavefront (or use a Level #1 that accepts NMS maps directly).
  • EZ-SLAM itself does not automatically populate $MyMap for Wavefront; it's primarily for SLAM visualization and NMS integration.
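For option 2, the parsing step can be sketched in plain, runnable JavaScript. This assumes a CSV of 80 rows of 80 comma-separated integers (one row per y); inside ARC you would read the text with the file API and then write each element with setVar("$MyMap[" + idx + "]", ...):

```javascript
// Sketch: parse CSV text (height rows of width comma-separated integers)
// into the flat array layout Wavefront reads from $MyMap (idx = x*height+y).
// The CSV-row-per-y convention is an assumption about your file format.
function csvToMyMap(csvText, width = 80, height = 80) {
  const rows = csvText.trim().split(/\r?\n/);
  if (rows.length !== height) throw new Error("expected " + height + " rows");
  const map = new Array(width * height);
  rows.forEach((row, y) => {
    const cells = row.split(",").map(s => parseInt(s.trim(), 10));
    if (cells.length !== width) throw new Error("row " + y + " must have " + width + " values");
    cells.forEach((val, x) => {
      map[x * height + y] = val; // same x-major indexing as the loader script
    });
  });
  return map;
}
```

The returned array is then written element-by-element into $MyMap before triggering the skill.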

Why the Wavefront skill has no settings UI

  • The Wavefront skill (as documented) was written as an experimental path-planning utility. It reads $MyMap and writes $MyRoute. There are no internal configuration options, so the 3-dot menu does not show a settings entry. This is expected for this skill.
  • If you need additional features (different map size, file import, GUI to place start/goal), those would need to be added (either modify the skill or create a helper robot-skill/script). If you want a GUI-ish flow, I can show how to add a small script window or custom robot skill to accept file input and populate $MyMap.

Is Wavefront "fully developed"?

  • It is functional for the limited use case it was built for: reading an 80×80 $MyMap and computing a route (4-directional wavefront). It is described as experimental in the docs, so features like a map-import UI, different map sizes, or more advanced path options are not implemented. If you expect a full production navigation stack (automatic map creation, dynamic obstacles, path smoothing, robot following), use The Better Navigator or another full NMS Level #1 solution that pairs with LIDAR and pose sensors.
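To make the planner's behavior concrete, here is a minimal, runnable sketch of the wavefront idea in plain JavaScript. This is an illustration of the algorithm, not the skill's actual code: flood distances outward from the destination, then walk downhill from the source using 4-directional moves. It assumes the grid contains exactly one source (1) and one destination (254):

```javascript
// Sketch of a 4-directional wavefront planner on a small 2D grid.
// Cell values follow the $MyMap convention: 255 wall, 0 free, 1 source, 254 dest.
function wavefrontRoute(grid) {
  const h = grid.length, w = grid[0].length;
  let src, dst;
  for (let y = 0; y < h; y++)
    for (let x = 0; x < w; x++) {
      if (grid[y][x] === 1) src = [x, y];
      if (grid[y][x] === 254) dst = [x, y];
    }
  // Breadth-first "wave" of distances expanding from the destination.
  const dist = Array.from({ length: h }, () => Array(w).fill(Infinity));
  dist[dst[1]][dst[0]] = 0;
  const queue = [dst];
  const moves = [[1, 0], [-1, 0], [0, 1], [0, -1]]; // 4-directional
  while (queue.length) {
    const [x, y] = queue.shift();
    for (const [dx, dy] of moves) {
      const nx = x + dx, ny = y + dy;
      if (nx >= 0 && nx < w && ny >= 0 && ny < h &&
          grid[ny][nx] !== 255 && dist[ny][nx] === Infinity) {
        dist[ny][nx] = dist[y][x] + 1;
        queue.push([nx, ny]);
      }
    }
  }
  if (dist[src[1]][src[0]] === Infinity) return null; // no path exists
  // Walk downhill from the source: each step decreases distance by 1.
  const route = [src];
  let [x, y] = src;
  while (dist[y][x] !== 0) {
    for (const [dx, dy] of moves) {
      const nx = x + dx, ny = y + dy;
      if (nx >= 0 && nx < w && ny >= 0 && ny < h &&
          dist[ny][nx] === dist[y][x] - 1) {
        x = nx; y = ny; route.push([x, y]); break;
      }
    }
  }
  return route;
}
```

Because the wave uses only 4-directional moves with unit cost, the downhill walk always yields a shortest path in grid steps, which matches the simple route the skill writes to $MyRoute.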

Next steps - how I can help you now

  • Do you want:
    • A script to import a CSV/PNG into $MyMap? (If CSV, please share a sample; if an image, say what pixel-to-cell mapping you want.)
    • Help connecting EZ-SLAM / a LIDAR to the NMS and converting the NMS map output to Wavefront's $MyMap?
    • A GUI helper robot skill that lets you load/select start/destination and push to $MyMap?

Tell me which option you'd like and what map format/data you already have, and I'll provide the exact script or step-by-step instructions.


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.