
Charging Dock Ideas

Repost:


I am designing a charging dock for my bot, and a lot of what @Rich said I'm already doing in the background... I'm already in touch with a few great programmers willing to help - compensated, of course. However, I think we need to approach this problem from a different angle. If the sensors or coding are so much work, why not tackle it from the engineering/mechanical perspective? I have a few ideas I'm drawing up; I'll post this on the forum and maybe we can all dissect it.

Here's one of them (rough sketch of course, be kind):

User-inserted image

I was thinking of omni wheels on the back of the bot, mounted sideways, so all it needs to do is find the QR code or distance sensor, and the wheels would pretty much "force" the robot into the corner of the wall, where the charging leads could touch. We could hide the charging leads, or give them a cover that pops open (using the distance sensor) when the bot is within the vicinity. Any thoughts? This seems doable to me.




You do bring up a good point. Here's an idea I'm throwing out there. This could be based on two separate location sources: an infrared beacon (close quarters) and a WiFi signal of sorts (long distance), both located on the charging dock. A 2.4 GHz wireless signal can cover quite a large area (depending on the quality of the router), which as we know can flood different rooms.

So here's the scenario. A robot is roaming around in one room. It detects that its batteries are running low and activates a WiFi locator script. From there the bot would follow the strength of the WiFi signal, navigating towards where the signal is strongest, which is where the docking station is (like following the signal bars on a cell phone: the more bars that light up, the closer the robot knows it is to the dock). This would help the robot decide where to turn left, turn right, or go straight ahead, just by following the strength of the signal. Once the bot is in the same room as the docking station, the IR beacon would take over and do the precision guidance so the robot can successfully dock and charge.

It's a bit of a crude description, but I hope you get the idea. Is this a feasible idea?
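A rough sketch of what that WiFi locator script might look like, in Python. Everything here is an assumption: `read_rssi`, `rotate`, and `drive` stand in for whatever movement and signal-strength calls your platform actually provides, and the -40 dBm "at the dock" threshold would need tuning per router.

```python
import random  # only used by the stubbed-out RSSI reading below

# Hypothetical robot primitives -- replace with your platform's real
# movement and WiFi-strength calls (none of these are ARC built-ins).
def read_rssi():
    """Return WiFi signal strength in dBm (closer to 0 = stronger)."""
    return -60 + random.uniform(-2, 2)  # stub

def rotate(degrees): pass  # turn in place
def drive(cm): pass        # drive forward (negative = reverse) a short step

DOCK_RSSI = -40  # assumed strength right next to the dock; tune per router

def seek_dock(max_steps=200):
    """Greedy hill-climb: keep any heading that improves signal strength."""
    best = read_rssi()
    for _ in range(max_steps):
        if best >= DOCK_RSSI:
            return True            # close enough for the IR beacon to take over
        drive(20)
        now = read_rssi()
        if now < best:             # got weaker: back up and try a new heading
            drive(-20)
            rotate(45)
        else:
            best = now
    return False                   # gave up before reaching the dock
```

This greedy approach can get stuck in RSSI "local maxima" caused by multipath reflections, which is exactly why the post hands off to an IR beacon for the final approach.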
Can't you use more docking stations?

You could, but I'd say it would depend on the size of the robot and docking station. For smaller robots, yes, that would be a good idea. But personally, if I had a full-size robot (like I'm planning on building), I wouldn't want large docking stations dotted all around the house taking up room and power sockets. Just the one station tucked away in the corner of a room.

The docking idea I mentioned is based on an idea I had that I posted about an additional feature that could be added to ARC. Much like a mobile/cell phone or laptop/tablet has a signal strength indicator in the corner of the screen, a control in ARC could display the current status of the WiFi signal in either AP or Client mode in a numeric value or bar graph indicator.

This could then be scripted for autonomous or manual control: if the robot roams to a place where the signal hits one bar, it would do a 180 about-face and head back to the last known stronger-signal location. It would also be useful for debugging EZ-B disconnection issues, since you would know whether a disconnect was due to a weak signal, narrowing down the variables. The docking station idea works on the same principle, but instead of avoiding a weak signal, the bot navigates towards a stronger one.
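The bar indicator and about-face behaviour could be scripted along these lines. This is only a sketch: the dBm cut-offs for each bar and the odometry-based position tracking are assumptions, not existing ARC features.

```python
# Hypothetical sketch of the "retreat when signal hits one bar" behaviour.
# rssi_to_bars and the position tracking are assumptions, not ARC features.

def rssi_to_bars(rssi_dbm):
    """Map a dBm reading onto a 0-4 bar indicator, phone-style."""
    thresholds = [-85, -75, -65, -55]   # assumed cut-offs; tune per router
    return sum(rssi_dbm > t for t in thresholds)

last_strong_pos = None   # (x, y) from odometry, updated while signal is good

def check_signal(rssi_dbm, current_pos):
    """Return the next action: keep going, or 180 back to a known-good spot."""
    global last_strong_pos
    bars = rssi_to_bars(rssi_dbm)
    if bars >= 3:
        last_strong_pos = current_pos           # remember a known-good spot
    elif bars <= 1 and last_strong_pos:
        return ("about_face", last_strong_pos)  # 180 and head back
    return ("continue", current_pos)
```

The same numeric reading could drive a bar-graph display for debugging disconnects, as described above, without any of the autonomous behaviour.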
Design a robot and charging station so it can swap out a dead battery for a charged one.

Have a small backup battery to keep the ez-b powered while the robot is swapping out the batteries.
You're back to step one. The robot still has to find the battery-swapping station to have its battery changed.
Haven't checked your links yet @rgordon, but some more food for thought. I think there is a lot of promise for location recognition and navigating back home using the EZ-B integration with RoboRealm. Specifically the AVM module.

See http://www.roborealm.com/help/AVM_Navigator.php

I want to start experimenting with this. My only issue is that you used to be able to use RoboRealm on two computers, one for development and one embedded, but you now need a separate license for each, so that is a more significant investment to experiment with a function that DJ is more than likely going to build into ARC at some point.

@Alan... I too am going to be looking into just that very thing, since I also have RoboRealm and the AVM module...

About the license thing... if I had known that before I bought a copy, I would have installed RoboRealm on a different computer. I have it on my desktop, but I wish I had installed it on my laptop for obvious mobility reasons...
@Richard, you can move it from one computer to another. You need to uninstall it from one, and then install it on the other, and sometimes send an email to their support if the license gets "stuck". It is the installing on two at once that is an issue.

Here is a solution that lets the robot move from room to room and find the docking charger. It can do this because it knows where it is, where the doors are, and where the charger itself is, thanks to accurate mapping - this is the reason my robots have highly accurate odometry and low-slippage wheels. This system also allows the robot to predict the best path to take, based on a dynamically updating volume occupancy algorithm.

In the nineties, I did a lot of work on these kinds of algorithms, and my testbed was my cybernetic animal ELF: cyberneticzoo.com/?p=3984

My work culminated in a tech that I named "Volume Occupancy Mapping"

It works like video memory (an X/Y grid)

Each grid point (X/Y location) is a byte, which is broken up into two 4-bit nibbles of (learnt) data about that grid point. The lower nibble is the probability of that grid point being blocked or free to move across. The system is totally dynamic and self-adjusting; here is how it works.

When you start, every grid (X/Y) location is set to 0; from this point the algorithm starts to learn about the area (or room) that it is in. With a matrix map filled with zeros it has no idea yet how to plan the best path across the room, so the first job of the algorithm is to get an idea of where all the fixed (stationary) objects are, like tables, armchairs etc. It starts with a wall-following algorithm that gives it an idea of the area it's trying to map; if armchairs etc. are against the wall, it builds that into the map. From wall following, it moves to a crisscross pattern across the area to map out the interior.

Now say that at grid point 5,9 it senses an obstacle: it increments that location's nibble, so it now has a value of "1". If this was a piece of fixed furniture, then grid point 5,9 would always be impassable, so after some more exploring of the area over time its location nibble would soon fill up to F (decimal 15). If instead it was, say, a dog or something transient at that location, then at some point the grid point clears (now passable); when the algorithm finds this it decrements the location nibble, so in the first example above, the "1" would return to "0".
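The increment/decrement rule for the lower nibble could be sketched like this in Python. The byte layout (low nibble = blocked probability, 0 to F) follows Tony's description; the grid size and helper names are just illustrative.

```python
# Sketch of the lower-nibble update rule described above. The byte layout
# (low nibble = blocked-probability 0..F) follows the post; the grid size
# and helper names are illustrative.

GRID_W, GRID_H = 32, 32
grid = [[0] * GRID_W for _ in range(GRID_H)]   # one byte per cell, all zero

def sense_cell(x, y, blocked):
    """Raise or lower the cell's blocked-probability nibble (saturating)."""
    low = grid[y][x] & 0x0F              # current probability nibble
    if blocked and low < 0x0F:
        low += 1                         # blocked again: more likely fixed
    elif not blocked and low > 0:
        low -= 1                         # passable now: probably transient
    grid[y][x] = (grid[y][x] & 0xF0) | low   # keep the upper nibble untouched
```

The saturating increment means a fixed table eventually pins the nibble at F, while a dog that wanders off lets repeated clear sightings decay the cell back to 0, exactly the dynamic behaviour described.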

What this gives the robot is a method to compute a high-probability route that will give it a clear path across the room. This is done by looking at all the X/Y grid locations: any with a zero (or very low value) have a high probability of being clear, and any grid locations with high values have a high probability of being blocked. From this the best route can be computed.
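One way to turn those probabilities into a route is a cost-based search, treating each cell's lower nibble as a traversal cost. This is a generic Dijkstra sketch over the nibble-per-cell map described above, not Tony's actual implementation; the 4-connected neighbourhood and grid representation are my assumptions.

```python
import heapq

# Minimal cost-based planner: each cell's lower nibble (0 = almost certainly
# clear, F = almost certainly blocked) is treated as a traversal cost, and
# cells with the high "danger" bit set are never entered.

def plan(grid, start, goal):
    """Dijkstra over 4-connected cells; returns a list of (x, y) or None."""
    h, w = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, (x, y) = heapq.heappop(pq)
        if (x, y) == goal:
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return path[::-1]
        if d > dist.get((x, y), float("inf")):
            continue                      # stale queue entry
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and not grid[ny][nx] & 0x80:
                nd = d + 1 + (grid[ny][nx] & 0x0F)   # clear cells cost least
                if nd < dist.get((nx, ny), float("inf")):
                    dist[(nx, ny)] = nd
                    prev[(nx, ny)] = (x, y)
                    heapq.heappush(pq, (nd, (nx, ny)))
    return None   # goal unreachable (walled off by blocked/danger cells)
```

Because the nibble is a cost rather than a hard wall, the planner naturally detours around "probably blocked" cells while still being able to cross them if no better route exists.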

This concept needs seriously good odometry; the AIMEC/ALTAIR motor drive encoders give 64,000 "clicks" per single drive wheel revolution, so the resolution is amazing. The next problem to overcome is wheel slippage, which can introduce errors, and any major errors obviously have a compounding effect on the map accuracy; on our robots we limit wheel slippage with a special design of our tires.
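For a sense of what 64,000 clicks per revolution means in practice, here's a quick back-of-envelope calculation. The 150 mm wheel diameter is an assumed figure for illustration; the actual AIMEC/ALTAIR wheel size isn't given in the post.

```python
import math

# Back-of-envelope resolution check for 64,000 encoder clicks per wheel
# revolution. The 150 mm wheel diameter is an assumption for illustration.

CLICKS_PER_REV = 64_000
WHEEL_DIAMETER_MM = 150                      # assumed wheel size

circumference = math.pi * WHEEL_DIAMETER_MM  # ~471.2 mm per revolution
mm_per_click = circumference / CLICKS_PER_REV

print(f"{mm_per_click * 1000:.1f} microns of travel per click")
# -> 7.4 microns of travel per click
```

At that resolution the encoder itself is nowhere near the limiting factor; as the post says, wheel slippage against the floor is what actually dominates the error budget.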

The upper nibble is used to tell the robot what is at that location. In the ELF and AIMEC/ALTAIR robots, the highest bit denotes danger and a "don't go there" mechanism; this is useful for things like fireplaces or the tops of staircases, places you clearly do not want your robot wandering into. So if the robot sees a grid location of >127, it will just never go there or plot a path through that location. The lower three bits of the upper nibble give info on things like "entry door", "exit door", "docking charger" position etc., so the map not only gives the robot a method to find a clear path (with high probability), but also tells it where to find certain things useful to its operation. Using a map for each room and knowing where the doors are located means that the robot can navigate by itself around the home.
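Decoding a cell byte per that description might look like this. The danger bit and the three feature bits follow the post, but the specific feature code values (which pattern means "entry door" and so on) are guesses, since the exact encoding isn't listed.

```python
# Decoding the upper nibble as described: bit 7 = danger/no-go, and
# bits 6..4 a small feature code. The specific code values below are
# illustrative guesses; the post doesn't list the exact encoding.

FEATURES = {0b001: "entry door", 0b010: "exit door", 0b011: "docking charger"}

def decode_cell(byte):
    """Split a map byte into (danger flag, feature label, blocked probability)."""
    danger = bool(byte & 0x80)                    # high bit: never path through
    feature = FEATURES.get((byte >> 4) & 0x07)    # bits 6..4, None if unlabeled
    blocked_prob = (byte & 0x0F) / 15.0           # lower nibble scaled to 0..1
    return danger, feature, blocked_prob
```

Packing all of this into one byte per cell keeps even a fine-grained room map small enough for the microcontrollers the era's robots used.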

Here is the simple volume occupancy map from the old AIMEC:3 robot

User-inserted image

@Tony...Thanks for sharing this. I have a few questions:

Is this algorithm proprietary?
Is this something that we can download?
Is there any other software that is needed?
You have this working with the EZ-B?
Are there any tutorials on this that you would recommend?

I, for one, would need a lot of coaching and help to get something like this up and running with my robot. This is exactly what I want my robot to be able to do. Otherwise it just wanders around aimlessly. I would like to be able to tell the robot to go to a certain room without any assistance, or for it to be able to find its own charger no matter where it happens to be.

@Doombot...Thanks for starting this thread man. It gets the creative juices flowing again. Check out the links I provided earlier. They may spawn some more ideas for docking. Collectively I think we can make this a reality.... :D
Rex, to use volume occupancy mapping (VOM) you need a locomotion drive system with motor encoders and (PID) motor controllers that can read those encoders and sync both drive motors to move/turn accurately and drive in a straight line.

Below is a picture of the original AIMEC locomotion unit using the Motor Mind 3 motor controllers, and next to it the new ALTAIR loco unit using the Kangaroo x2/Sabertooth combo; as you can see, the latter is much less complicated.

User-inserted image

Here are the answers to your questions

<< Is this algorithm proprietary? >>

I invented it in the late nineties, but it is in the public domain now so you can use it.

<< Is this something that we can download? >>

Not at this time

<< Is there any other software that is needed? >>

I use microcontrollers to interface with the EZ-B v4 to limit its workload, and that is the case here: I am developing a new EZ-B v4 interface PIC to do all the mapping hard work, but you could code the EZ-B v4 to do all the mapping/reading itself.

<< You have this working with the EZ-B? >>

This is on the "next to do" list, so it will be working with the EZ-B v4 in the near future.

<< Are there any tutorials on this that you would recommend? >>

Not that I am aware of

So, in conclusion, VOM could be coded to run directly on the EZ-B v4 if someone wanted to write the scripts; in my system an external PIC will do this so that the EZ-B v4 can get on with other important stuff.

I put the idea up here so people can try this concept if they want to. Hope this helps explain the concept more.


Don't mention it. I was surprised the previous thread got buried. I would think this is a common thing robotics enthusiasts would wanna learn to do...:D
@Marc... Eventually I will be working on an auto docking and recharging routine... However, I will be coming at it from a slightly different angle... I recently bought RoboRealm, and with its advanced object recognition features (using the AVM module) my goal will be to have my bot navigate by object recognition checkpoints (like a visual beacon), using the camera to find its way not only to the docking station but anywhere around the room in general... Challenging for sure...
Mind you, that's down the road a little, but definitely on my to-do list...:)
@Richard R
Yes, that's what I was gonna get at with you when my bot is done... however, I was gonna do mine with just QR codes plastered all over the house... the bot would be constantly scanning after an x amount of time of inactivity... as part of its "personality generator" script... the cleverly placed QR codes would act as directions to whatever task needs to be done (like here, for example, looking for the charging dock). That was my initial idea anyway. Forgive my ignorance, but what's an AVM module?
@Marc... The AVM module is an add-on that you purchase with RoboRealm... It is what you use to do object recognition and navigation... LOL... QR codes would work well too... You know me, I have to do things the hard way... LOL... RoboRealm does a lot more, but I haven't really delved into it yet.... Eventually I will....
RoboRealm sounds awesome, and I've heard great things about their object recognition... however, I'm going for simpler here, with nothing else to buy, as I'm planning to package everything with Dirgy... I was gonna make QR code stickers that the new owner could just stick all over their home... scripting will do the rest... so different floorplans would be irrelevant; it's just gonna work... correct me if I'm wrong, anyone. It just seems more foolproof (and cheaper) to me.
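For what it's worth, the QR-sticker scheme could be as simple as a lookup table from decoded payload to action. The payload names below and the idea that the camera hands the script a decoded string are hypothetical placeholders, not ARC built-ins.

```python
# Minimal sketch of the QR-sticker navigation idea: each sticker's decoded
# text maps to a directive. All payload names here are hypothetical.

DIRECTIVES = {
    "DOCK_LEFT":  ("turn", -90),   # dock is to the left of this sticker
    "DOCK_RIGHT": ("turn", 90),
    "DOCK_AHEAD": ("drive", 100),  # centimetres straight ahead
    "DOCK_HERE":  ("dock", 0),     # begin the final docking approach
}

def follow_qr(payload):
    """Translate a decoded QR payload into a movement directive."""
    return DIRECTIVES.get(payload, ("scan", 0))   # unknown code: keep looking
```

Because the stickers carry the directions, the same table works in any floorplan; the owner just places the stickers so each one points toward the next.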
@Marc... You don't need to be corrected... your idea will work very well.... Simple is better with what you want to do (a marketable product), less to go wrong and less expensive to produce/support.... What I want to do is more or less just experimenting....