jp15sil24
I am using the RPLIDAR skill in ARC. The scan view is working correctly and I can see real-time obstacles in the UI.
However, I cannot access usable scan statistics from scripts:
- $LidarNearestDistance always returns 0
- $LidarFurthestDistance always returns 0
- Only angle-based values (degrees) appear to update in script variables
My questions:
- Does this skill support exporting real-time scan data (nearest distance / furthest distance, or a full point cloud) to script-accessible global variables?
- If yes, what setting or mode must be enabled to activate this?
- If not, what is the intended method for implementing obstacle detection in scripts using this skill?
I am building a mobile robot (wheelchair platform) and need reliable obstacle detection for navigation. The LiDAR will be mounted centrally at ~80-90 cm height.

Yes - the RPLidar skill does expose scan data to ARC, but there’s an important distinction:
What to check first
Open the RPLidar skill settings from the menu in the top-right of the skill window and look for a checkbox named "Set variables with location data". If that option is not checked, the skill may still render scans in the UI, but you won't get the richer scan data in script-accessible global variables.
About the variables you mentioned
The RPLidar skill can expose scan-related global variables; the exact names are listed in the skill's configuration page. However, if $LidarNearestDistance and $LidarFurthestDistance are staying at 0, that usually means one of these is happening: the variable-publishing option is disabled in the skill's settings, or your script is reading variable names that differ from the ones the skill actually sets.

Intended method for obstacle detection
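A quick way to tell which case you're in is to poll the globals and log them for a few seconds. The sketch below assumes ARC's getVar()/print() scripting helpers; the stubs at the top only exist so it runs standalone and should be removed inside ARC:

```javascript
// Sketch: poll the lidar globals and log them to see if they ever update.
// The stubs below stand in for ARC's scripting helpers (assumption:
// inside ARC, getVar/print/sleep are provided and these stubs are removed).
var fakeVars = { "$LidarNearestDistance": 0, "$LidarFurthestDistance": 0 };
function getVar(name) { return fakeVars[name]; }
function print(msg) { console.log(msg); }

var samples = [];
for (var i = 0; i < 3; i++) {
  var nearest = getVar("$LidarNearestDistance");
  samples.push(nearest);
  print("nearest=" + nearest +
        " furthest=" + getVar("$LidarFurthestDistance"));
  // inside ARC: sleep(1000);  // wait between samples
}

// If every sample is 0, the skill is not publishing distance data
// and you should revisit the settings checkbox above.
var neverUpdated = samples.every(function (d) { return d === 0; });
```

If the printed values change while you move an obstacle in front of the sensor, the variables are live and the problem is in your script logic instead.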
For obstacle detection in scripts, the usual approach is to read the published distance variables on a timer and react when a reading crosses a safety threshold.

For a mobile robot / wheelchair platform, the recommended ARC architecture is to feed the lidar scan into the Navigation Messaging System (NMS) and let a navigation-aware skill consume it, rather than parsing raw scan points in your own script.
If you need direct script logic
If your goal is simple obstacle detection, you usually don't need a full point cloud in script. A better pattern is to poll the nearest-distance variable at a fixed interval and stop or steer away when it drops below a safety threshold.

Example pattern:
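Here is a hedged sketch of that pattern. The thresholds, the centimeter unit, and the $LidarNearestDistance name are assumptions to verify against your skill's config page; getVar()/sleep() are ARC's JavaScript helpers:

```javascript
// Hedged sketch: threshold-based obstacle monitor for ARC.
// Assumptions (verify in your ARC install): getVar()/sleep() helpers
// exist, and $LidarNearestDistance is published in centimeters.

var STOP_CM = 50;   // stop if the nearest return is closer than this
var SLOW_CM = 100;  // slow down inside this band

// Pure decision helper so the threshold logic is testable on its own.
function decide(nearestCm) {
  if (nearestCm <= 0) return "invalid"; // 0 usually means "no data yet"
  if (nearestCm < STOP_CM) return "stop";
  if (nearestCm < SLOW_CM) return "slow";
  return "go";
}

// Inside ARC you would poll on a loop, for example:
// while (true) {
//   var action = decide(getVar("$LidarNearestDistance"));
//   // stop / slow / resume the motors based on action
//   sleep(100);
// }
```

Keeping the decision in a small pure function like decide() makes it easy to tune and test the thresholds without the robot attached.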
If that variable never updates in your project, I’d recommend confirming the exact variable names in the RPLidar config page, since the skill may publish differently named globals depending on settings.
Important note for your build
Since this is a wheelchair platform, I'd strongly recommend not relying on a single sensor variable alone for safety-critical obstacle avoidance. Use redundant sensing (for example, bumper switches or ultrasonic sensors alongside the lidar) and a hardware-level emergency stop that does not depend on software.
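One simple way to combine redundant sensors in script is a "closest reading wins" rule that also fails safe when no sensor reports data. This is an illustrative sketch, not an ARC API; the sensor names and threshold are assumptions:

```javascript
// Hedged sketch: "closest reading wins" fusion for a safety stop.
// Readings are distances in cm from independent sensors (assumption:
// e.g. lidar nearest distance plus one or more ultrasonic sensors).

var STOP_CM = 50;

// Readings <= 0 are treated as "sensor has no data" and are ignored;
// if every sensor is silent, fail safe by stopping.
function shouldStop(readingsCm) {
  var valid = readingsCm.filter(function (d) { return d > 0; });
  if (valid.length === 0) return true;          // no data: fail safe
  return Math.min.apply(null, valid) < STOP_CM; // closest wins
}
```

Note the fail-safe branch: a sensor dropout stops the chair instead of letting it coast, which is the behavior you want on a safety-critical platform.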
Helpful links
RPLidar robot skill manual: https://synthiam.com/Support/Skills/Navigation/RPLidar?id=20980
NMS overview: https://synthiam.com/Support/ARC-Overview/robot-navigation-messaging-system
Create a custom robot skill if you need a special obstacle-processing layer: https://synthiam.com/Support/Create-Robot-Skill/Overview
Next step
Can you tell me which RPLidar model you're using and whether the distance variables appear in the skill's configuration page?
If you want, I can help you write a JavaScript obstacle-monitor script for ARC that uses the lidar data safely.
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.