
Mapping Using Ultrasonic

Can anyone upload the Visual Studio SDK file for room mapping using ultrasonic? I need to map objects while the robot is roaming, and the map should be visualized on the computer.

Mapping a room can be a bit complex when using the ultrasonic sensor.

First, the sensor will show you how far away objects are from the sensor. If your bot is moving, then it is very important that the bot knows exactly where it is and what global direction it is facing. Without this information you cannot place the objects it finds in a 3-dimensional space. You can do this without any type of indoor GPS tracking, but you will need a compass sensor, and if you know the movement speed of your bot you can calculate your location to some small degree of accuracy.
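To illustrate the dead-reckoning idea above, here is a rough Python sketch. The function name, coordinate convention and units are my own assumptions, not anything from the SDK:

```python
import math

def update_position(x, y, heading_deg, speed_cm_s, dt_s):
    """Dead-reckon the robot's new (x, y) position in cm.

    heading_deg is the compass bearing (0 = north, 90 = east);
    speed_cm_s and dt_s are the movement speed and elapsed time.
    """
    distance = speed_cm_s * dt_s
    rad = math.radians(heading_deg)
    # North increases y, east increases x in this convention.
    return x + distance * math.sin(rad), y + distance * math.cos(rad)

# Example: facing east (90 degrees) at 10 cm/s for 3 s moves 30 cm east.
new_x, new_y = update_position(0.0, 0.0, 90.0, 10.0, 3.0)
```

Errors accumulate quickly with this approach, which is why the compass reading matters: it corrects the heading even when the wheel speed estimate drifts.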
While that is possible, it would be much easier to map it if you had a Kinect or Xtion unit installed. There was an Instructable a while back on mapping a room. Maybe do a Google search.
Sorry if my comment is a bit off, but can't you make a program on the robot that's linked with Siri/Google and use them as its AI? Like information storage, like an HDD, while the processor is the robot's brain? I know it sounds stupid, but it would be fun to have such a thing on a robot...:D
@deuel18... You been drinking dude? What does that have to do with floor mapping? :D
* * * *
* r   *
*     *
* * * *

M = unexplored space = {}

A = explored space = {}

* = walls

(blank) = free space :)

How complicated do you want the mapping algorithm to be? For this purpose, we'll make it simple and assume 4 possible moves (this can be expanded to any number of moves, but the algorithm needs to be adjusted accordingly).

4 possible moves = {up, down, left, right}

Our robot is represented with the symbol r.

Prior to mapping a room, you would initialize your robot (if you have a compass sensor, you could determine which way north is and designate that as up; otherwise, just take the initial position and heading as "up", and treat each 90-degree increment as the subsequent move directions).

On startup, the robot would insert the following coordinate into the unexplored space: {[0, 0]}. (I am going to assume we don't have a z-axis coordinate because we don't have the corresponding sensor reading; if you do have a z-axis value, assign the 3D vector into the unexplored space set instead.)
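In code, the two sets and the startup step could be kept as plain Python sets of (x, y) tuples (the variable names are my own):

```python
# M = unexplored space, A = explored space, as sets of (x, y) tuples.
unexplored = set()   # M
explored = set()     # A

# On startup, seed the unexplored set with the robot's origin cell.
start = (0, 0)
unexplored.add(start)
```

Tuples work well here because they are hashable, so membership tests and set arithmetic (union, difference) are cheap.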

Next up, the robot would scan 360 degrees with the ultrasonic sensor to detect any obstacles around it. Given the current location in our example, it detects collision obstacles at up, up-left, left, down-left and up-right.

These coordinates are now inserted into the explored space set.

C = our collision space set = {[0, -1]; [-1, -1]; [-1, 0]; [-1, 1]; [1, -1]}
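As a sketch of how the 360-degree sweep could be reduced to collision cells, assuming readings arrive as (angle, distance) pairs and a 30 cm grid cell (both assumptions of mine, not part of the SDK):

```python
import math

CELL_CM = 30  # assumed grid resolution: one robot step

def sweep_to_cells(robot_cell, readings, max_range_cm=200):
    """Convert (angle_deg, distance_cm) ping readings into collision cells.

    Angle 0 = up, 90 = right; y grows downward, matching the map above.
    Readings at or beyond max_range_cm are treated as "no obstacle".
    """
    rx, ry = robot_cell
    cells = set()
    for angle_deg, dist_cm in readings:
        if dist_cm >= max_range_cm:
            continue
        rad = math.radians(angle_deg)
        dx = round(dist_cm * math.sin(rad) / CELL_CM)
        dy = round(-dist_cm * math.cos(rad) / CELL_CM)  # up = negative y
        cells.add((rx + dx, ry + dy))
    return cells

# The example sweep: obstacles one cell away at up, up-left, left,
# down-left and up-right (diagonals read ~42 cm, i.e. 30 * sqrt(2)).
C = sweep_to_cells((0, 0), [(0, 30), (315, 42), (270, 30), (225, 42), (45, 42)])
```

The rounding snaps each echo to the nearest grid cell, which is what turns raw centimetre readings into the normalized set described above.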

Add the collision space set onto the explored space set => A = A + C

Given the robot's current position, we then need to determine possible moves.

In general, we defined 4 possible moves: up, down, left, right. We might run into problems given the environment, but we can expand the moves to 8 or however many you want; this just complicates the algorithm a tad.

But given that we have 4 possible moves, let's work out the possible moves:

[0, 0] with {up, down, left, right} implies the coordinates:

up: [0, -1]
down: [0, 1]
left: [-1, 0]
right: [1, 0]

For the robot to find physically possible moves, it needs to reduce the movement set by taking away the collision set, that is the set C.

We are left with the possible moves set PM = {[0, 1]; [1, 0]}, which is down and right. If we expanded our movement set to 8 possible moves (up, up-left, down, down-right, etc...), we would include down-right as a possible move too.
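The reduction can be written as a simple set difference (a sketch, using the same coordinates as above):

```python
# Collision cells found by the example sweep.
C = {(0, -1), (-1, -1), (-1, 0), (-1, 1), (1, -1)}

# The 4 move deltas: up, down, left, right (y grows downward).
MOVES = [(0, -1), (0, 1), (-1, 0), (1, 0)]

def possible_moves(pos, collisions):
    """Neighbouring cells reachable from pos, minus known obstacles."""
    x, y = pos
    return {(x + dx, y + dy) for dx, dy in MOVES} - collisions

PM = possible_moves((0, 0), C)  # {(0, 1), (1, 0)}: down and right
```

Expanding to 8 moves is just a matter of adding the four diagonal deltas to MOVES; the set difference stays the same.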

Because we don't have a target we are moving towards, the coordinate set is equally weighted, so choosing any one of the coordinates will work. However, if the robot gets stuck, you most likely have to implement a recursive traversal algorithm, which backtracks if it doesn't find any further possible "moves". Whichever path we take, though, it is important to add the other moves into the unexplored set; this forms part of the algorithm's completion check. Subsequently, if you later happen to choose a move that was previously unexplored, remember to remove it from the set.
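Putting the scan, move and backtrack steps together, a minimal depth-first sketch could look like this. The is_wall callback stands in for a real 360-degree ping sweep, and the sample room at the bottom is made up for testing:

```python
def explore(start, is_wall, moves=((0, -1), (0, 1), (-1, 0), (1, 0))):
    """Depth-first exploration with backtracking.

    is_wall(cell) returns True for collision cells.
    Returns the explored set: visited free cells plus observed walls.
    """
    explored = set()      # A: visited cells and known walls
    unexplored = {start}  # M: frontier cells still to visit
    path = []             # stack of cells for backtracking
    pos = start
    while unexplored:
        if pos in unexplored:
            unexplored.discard(pos)
            explored.add(pos)
            # "Scan": classify each neighbour as wall or frontier.
            for dx, dy in moves:
                cell = (pos[0] + dx, pos[1] + dy)
                if is_wall(cell):
                    explored.add(cell)    # remember the wall
                elif cell not in explored:
                    unexplored.add(cell)  # a road not yet taken
        # Move to an adjacent frontier cell if there is one.
        nxt = [(pos[0] + dx, pos[1] + dy) for dx, dy in moves]
        nxt = [c for c in nxt if c in unexplored]
        if nxt:
            path.append(pos)
            pos = nxt[0]
        elif path:
            pos = path.pop()  # dead end: backtrack
        else:
            break             # nowhere left to go
    return explored

# A tiny room: only these three cells are free, everything else is wall.
free = {(0, 0), (1, 0), (1, 1)}
room = explore((0, 0), lambda c: c not in free)
```

The loop terminates exactly when the unexplored set is empty, which is the completion check described above.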

Your final map would look something like the starting grid, with every free cell visited along one possible route. The algorithm stops when the unexplored set is empty again. It is also important to remember that the final explored set is a normalized set, meaning you have to take into consideration the distance your robot covers in 1 step. I am making the assumption that 1 step is constant for your robot, for example it will only travel 30 cm before trying to ping 360 degrees.
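Normalizing back to real distances is then just a multiplication (assuming the constant 30 cm step; the names are mine):

```python
STEP_CM = 30  # assumed constant travel per move before re-scanning

def cell_to_cm(cell):
    """Map a normalized grid cell back to real-world centimetres."""
    x, y = cell
    return (x * STEP_CM, y * STEP_CM)

# Two cells right and one cell down from the start:
pos_cm = cell_to_cm((2, 1))  # (60, 30)
```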

Subsequently, if you want to then navigate using the explored set, you can use numerous navigation algorithms to compute the shortest path, like Dijkstra's or even cellular automata. Just remember to take into consideration that you have a premapped set and would need to constantly update the map during navigation to determine if there are new obstacles in the way.
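Since every step on this grid costs the same, plain breadth-first search gives the same answer as Dijkstra's algorithm. A sketch over the explored free cells (the sample map is made up):

```python
from collections import deque

def shortest_path(free_cells, start, goal):
    """Breadth-first shortest path over explored free cells.

    On a uniform-cost grid, BFS is equivalent to Dijkstra's algorithm.
    Returns the list of cells from start to goal, or None if unreachable.
    """
    moves = [(0, -1), (0, 1), (-1, 0), (1, 0)]
    queue = deque([start])
    came_from = {start: None}  # also serves as the visited set
    while queue:
        pos = queue.popleft()
        if pos == goal:
            path = []
            while pos is not None:  # walk the parent links back
                path.append(pos)
                pos = came_from[pos]
            return path[::-1]
        for dx, dy in moves:
            nxt = (pos[0] + dx, pos[1] + dy)
            if nxt in free_cells and nxt not in came_from:
                came_from[nxt] = pos
                queue.append(nxt)
    return None  # goal unreachable from start

# An L-shaped corridor of free cells, explored earlier.
free = {(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)}
route = shortest_path(free, (0, 0), (2, 2))
```

If you later add per-cell costs (say, rough terrain), swap the deque for a priority queue and you have full Dijkstra.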

I haven't tried this algorithm yet as I am still waiting for my EZ-B v4, but once I have it running I'll share a copy if it works well:)