
For Rich

I noticed that you had a nice script for Navigation with the Sonars and IRs.

Why not go a step further and make it a SMART Navigation system?

You could do that with two more steps (using confidence levels and random number selection).

First, you set confidence levels from zero to three, or even as high as zero to fifteen.

Next, you store the previous move and use it to know whether you are moving in the correct direction.

Once you have done that, most of the work is done: you check the confidence level for the move you are about to make. If it is high, you make the move. If it is low, you use a random number to select a different move and go through the whole process again.

Through trial and error, the robot becomes more intelligent after each move. Then he will KNOW, by learning from experience, which way to go.

I have never gotten this to work 100%, but it is possible. I am just not a great programmer. But you seem to be good enough to pull this off.

I would love to see this machine become more intelligent and actually have the capabilities to Learn.
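MovieMaker's confidence-and-random-selection loop might be sketched like this (a hypothetical Python sketch; the move names, the 0-15 scale, the "high" threshold of 8 and the bump feedback are all my assumptions, not anything from this thread):

```python
import random

MOVES = ["forward", "left", "right", "reverse"]  # hypothetical move set
confidence = {m: 7 for m in MOVES}               # start mid-range on a 0-15 scale

def pick_move():
    # Check the confidence level of the best candidate move: if it is
    # high, make the move; if it is low, pick a random alternative.
    best = max(MOVES, key=lambda m: confidence[m])
    if confidence[best] >= 8:
        return best
    return random.choice(MOVES)

def learn(move, bumped):
    # Store the outcome of the previous move so the next choice improves.
    if bumped:
        confidence[move] = max(0, confidence[move] - 1)
    else:
        confidence[move] = min(15, confidence[move] + 1)
```

Over many moves the confidence table settles, so the robot stops repeating the moves that led it into obstacles.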



United Kingdom

Watch this space;)

In the topic with the R2D2 and the ping sweep avoidance I've mentioned advanced avoidance. I have many ideas but currently lack the time to put pen to paper (or fingers to keys).

Advanced avoidance and scanning will be on its way soon though:)


Yes, this topic was very interesting and could continue forever. I currently lack enough free time, but I am always ready to give my opinion on navigation. I think the learning-based navigation concept MovieMaker describes sounds very smart, and I think it is a high step in artificial intelligence: artificial intelligence + learning intelligence + evolutionary intelligence. I do not think it's easy to achieve, but if anyone can get it, it is Rich. It could also store the learned information to reverse the movement and return to the original starting place, or divide the path into sectors to take shortcuts to other places. I think the code would use a set of $ variables that are altered as it progresses, while the code takes decisions on exploration.:)


I see cooperative achievements coming soon;)


That's my plan, so I hope so:) The way I see it is, if I can help others understand how to put a script together, then we will soon have a whole lot more awesome scripts that make our robots so much better than they already are (which is difficult, as they are already awesome).


We all have different talents that we are really good at, so when we put them together it will be a force to be reckoned with! From a process and organization standpoint I can arrange processes that interact, but I'm still new to the scripting.


@Rich I've been reading the threads for a few weeks now and I have to tell you that I too feel you are a very talented programmer. So it is my unfortunate responsibility to inform you that, if you create Skynet, I will have to send some cheesy 80's actor back from thousands of years ago, where the future has already occurred, to put you down.

LOL, I'm just kidding of course, but I can tell you have a passion and an aptitude for this. I can't wait to get some ezb experience to be able to chat with you more. And speaking of communications, it seems that there are only a handful of people really active in the forums. If I had to guess I'd say it's probably only about 3%-5% of ezb owners (and I feel that to be a kind estimate). I'm guessing at that number based on my observation of the forum for about 3 weeks and the assumption that there are at least 300 ezb owners. I don't know how many ezbs have been sold, but I know they've been around for a few years.

I think this looks like an amazing product vs all of the other stuff out there, and it's cheaper than a lot of stuff that appears to be crap floating around out there. I'm just amazed that this forum isn't a little more active. I see you guys banding together here for a common goal and think, "There, that's it. That's exactly how the world is supposed to work." Why are there not more players in the game posting here? Is there another site that people are more active at? Like a hobbyist robotics site or something? Like the Droid Forums of robotics?

I haven't bought my hardware yet. I've been researching robotics pretty heavily for about 5 weeks now, so I'm confident enough to dive in. It's just time and money now. I've opened ARC a couple of times and from a programming standpoint it seems pretty straightforward. But I'm looking forward to getting an ez complete and can't wait to get some experience with ezb and robotics so I will be a little more qualified to speak.

Keep it up guys. I hope to one day soon join your ranks.


I post more and ask more questions when I am actually working on a project. My wife told me in no uncertain terms, "NO MORE ROBOTS." So, I have to live with what I have now. But I would like to see my robots develop more A.I.



There is another "factor" which had driven some away over the last couple of months or so, but that's cleared up now and a few have started coming back, hopefully with more to follow. I won't go into specifics; I don't want to turn any subject on to that really.

Also, as @moviemaker said, some really only post and ask more questions when they are building. But also, a lot of answers are already here now, searching brings up answers to most questions so there's no real need for discussions on some aspects. Other aspects of building will trigger bigger discussions, look at the auto charging docking station discussion for instance (one which I plan on reviving shortly).

And thanks for the kind words.


Hi @moviemaker, this is a good idea. Did you ever read "How to design and build your own custom robot" (a book from the Eighties)? I still have a copy here in the lab; the book has some similar ideas to what you are proposing.

In the Nineties, I did a lot of work on these kinds of algorithms, and my testbed was my cybernetic animal ELF.

My work culminated in tech that I named "Volume Occupancy Mapping".

It works like video memory (an X/Y grid).

Each grid point (X/Y location) is a byte, which is broken up into two 4-bit nibbles of (learnt) data about that grid point. The lower nibble is the probability of that grid point being blocked or free to move across. The system is totally dynamic and self-adjusting; here is how it works.

When you start, every grid (X/Y) location is set to 0; from this point the algorithm starts to learn about the area (or room) that it is in. With a matrix map filled with zeros it has no idea yet how to plan the best path across the room, so the first job of the algorithm is to get an idea of where all the fixed (stationary) objects are, like tables, armchairs etc. It starts with a wall-following algorithm that gives it an idea of the area it's trying to map; if armchairs etc. are against the wall, then it builds that into the map. From the wall following, it goes into a crisscross pattern across the area to map this part out.

Now say that at grid point 5,9 it senses an obstacle: it increments that location's nibble, so it now has a value of "1". If this was a piece of fixed furniture, then grid point 5,9 would always be impassable, so after some more exploring of the area over time its location nibble would soon fill up to F (decimal 15). If it was, say, a dog or something transient at that location, then at some point the grid point clears (now passable); when the algorithm finds this it decrements the location nibble, so in the first example above, the "1" would return to "0".

What this gives the robot is a method to compute a high-probability route that will give it a clear path across the room. This is done by looking at all the X/Y grid locations: any with a zero (or very low value) have a high probability of being clear, and any grid locations with high values have a high probability of being blocked; from this the best route can be computed.
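The lower-nibble bookkeeping Tony describes could be sketched roughly as follows (a hypothetical Python sketch; the grid size, threshold and function names are my assumptions, not the actual ELF/AIMEC code):

```python
# Each cell is one byte: low nibble = blockage count (0x0-0xF),
# high nibble = semantic info about the location (left untouched here).
W, H = 16, 16
grid = [[0] * W for _ in range(H)]

def sense(x, y, blocked):
    """Increment the cell's low nibble on an obstacle, decrement when clear."""
    low = grid[y][x] & 0x0F
    high = grid[y][x] & 0xF0
    if blocked:
        low = min(0x0F, low + 1)   # fixed furniture saturates at F (15)
    else:
        low = max(0, low - 1)      # transient obstacles decay back to 0
    grid[y][x] = high | low

def probably_clear(x, y, threshold=3):
    """Route planning: a low blockage count means a high probability of a clear path."""
    return (grid[y][x] & 0x0F) <= threshold
```

A path planner would then prefer chains of cells for which `probably_clear` is true, exactly the high-probability route idea in the post.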

This concept needs seriously good odometry. The AIMEC motor drive encoders give 64,000 "clicks" per single drive-wheel revolution, so the resolution is amazing. The next problem to overcome is wheel slippage, which can introduce errors, and any major errors obviously have an exponential effect on the map accuracy; on our robots we limit wheel slippage with a special design of our tires.

The upper nibble is used to tell the robot what is at that location. In the ELF and AIMEC robots the highest bit denotes danger and a "don't go there" mechanism; this is useful for things like fireplaces or the tops of staircases, where you clearly do not want your robot wandering. So if the robot sees a grid location of >127, then it will just never go there or plot a path through that location. The lower three bits of the upper nibble give info on things like "entry door", "exit door", "docking charger" position etc., so the map not only gives the robot a method for finding a clear path (with high probability), but also knows where to find certain things that are useful to its operation. Using a map for each room and knowing where the doors are located means that the robot can navigate by itself around the home.
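The upper nibble might be handled like this (again a hypothetical sketch; the exact feature codes are my guesses at an encoding, since the post only names the categories):

```python
# Assumed byte layout: bit 7 = danger ("don't go there"), bits 4-6 = a
# 3-bit feature code, bits 0-3 = the blockage nibble described above.
DANGER = 0x80
ENTRY_DOOR, EXIT_DOOR, DOCKING_CHARGER = 1, 2, 3  # hypothetical codes

def mark_danger(cell):
    return cell | DANGER

def mark_feature(cell, code):
    # Place the feature code in bits 4-6, leaving the other bits alone.
    return (cell & ~0x70) | ((code & 0x07) << 4)

def is_no_go(cell):
    # Any value > 127 has the danger bit set: never plot a path through it.
    return cell > 127

def feature_of(cell):
    return (cell >> 4) & 0x07
```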

Here is a simple volume occupancy map from the AIMEC:3 robot:

User-inserted image


@Rich, could you give a few pointers please? This is part of the code I am using to monitor digital inputs (IR beacon) and drive two servos. It works very well, but because of an issue between the IR beacon and the digital inputs I need to use the ADC, so I need to change the script to read the ADC. I tried to adapt "scaredy cat" but got lost along the way. Any help would be appreciated. Pat



Quickly (since my tea is cooking and I'm starving!)

Change all GetDigital(port)=1 to GetADC(port)=255 and hopefully that'll work. If not, you may need to play with the =255; maybe >0 will work better.

Edit: and GetDigital(port)=0 to GetADC(port)=0

A digital on/off sensor on the ADC should report 0 and 255, 0 being off, 255 being on, in theory.


Thank you, Rich. Wonderful, smooth transition to ADC. I will list details in a separate thread to assist others. While the IR beacon worked fine on digital, if I lost connection with the EZ-B the result was flashing LEDs on the IR beacon until I re-made the connection. Having blown my first beacon (not sure if this was the actual cause), I decided to change over to ADC. Pat



I am really interested in the idea of "Volume Occupancy Mapping" to use with the Auto Docking Project. I would much rather use something like this instead of IR beacons to help the robot navigate from room to room. Having the robot understand what room it is in so that it can make the right decision on what route to take to find his battery charger has been a challenge. I am still saving up to purchase more parts for the IR beacons and the detection system I was planning to build. However, I would much rather use some type of mapping system. I hope this topic will blossom into a simpler answer to my problem. Problem is, I have no experience with programming or scripting....yet.

Rich has been such a big help to a lot of people for script help and now with this topic maybe we could advance our Auto Docking Project forward. I look forward to more discussion on this and I plan on visiting your website to learn more about your robots. Hope you will have the time to help and guide us in utilizing this mapping concept.

So much talent here on this forum. I am proud to be associated with all of you and my life is better for it.

Rex Automatic Battery Charger Docking

Battery Charger Docking Project


@Rex, I will be looking to bring those topics back to life as soon as I have the time.

I have to admit, Tony's volume occupancy map is a little overwhelming, but I only scanned over it; I'm sure once it's broken down bit by bit it'll become simpler. A great idea nonetheless. However, encoders are used, which I have halted on looking into further purely due to a comment DJ made the other day about a new feature which will be better than encoders.

I'm confident that between us all we will come up with a great solution:)


The idea came from one of David Heiserman's other books: HOW TO BUILD YOUR OWN SELF-PROGRAMMING ROBOT. It was a good book, but it is based around the 8080A CPU. The concepts are still good, though.

And, yes, I also have his book "How to design and build your own custom robot" and also "Build your own working robot".

There was another author that had part of this idea too. He wrote "How to build your own working robot pet". It had the confidence levels.

These were some of my favorite books on building robots.


@MovieMaker, by gosh, I think I have that book you mentioned. I purchased it at a flea market a few years back but never took the time to read it properly, I guess. I will have to look for it this evening. I do have "Build your own working Robot"; it was my first robot-building book. I am still using ideas from it.

@Rich, Yeah... maybe holding off till Revolution is released will be a good idea. It may open up a whole new concept for us to use. I hope @Toymaker will be able to (and have the time to) teach us how to utilize the "Volume Occupancy Mapping" idea.


@Rex, Tony (toymaker) has explained it well enough, so once I have understood the logic it shouldn't be too difficult a task to put it in place somehow. The main issue is working out the X,Y coordinates of the robot and its direction. If those details are known and the sensor readings accurate, a map can be made easily. But it does all come back to the robot's position and direction as far as I can see.

I think the subject for this thread needs changing; it's far from "For Rich" now. Or, more to the point, I'm sure I'm not the only one who can help here:)


Rich, I tried to change the title but it would not let me. Sorry.



@rgordon, the EZ-B (which is a great robot controller) was never really designed to link with encoders, and most of the hackable toys do not have encoders anyway (the Bigtrak being the exception, but with lower-resolution encoders).

Volume occupancy mapping is a great way to get a robot to find a docking charger, and for general autonomous navigation, but as I said earlier you need really good (and accurate) odometry. For this on our EZ1 (development robot) and EZ:2 (production robot), we are developing a custom locomotion drive controller based on a PIC microcontroller that will handle all the encoder operations and motor control for the EZ-B via an I2C link. These new robots will also have a second PIC for all the head electronics/mechanics so again this will take a lot of mundane operations away from the EZ-B and both these microcontrollers will also greatly reduce the I/O overhead on the EZ-B itself. At some point in the future when our EZ robots are fully developed and ready for retail we will make available these sub-boards so EZ-B users will be able to add these functions to their own robots. DJ has said he will support this new EZ-B based robot range and even produce custom controls, we are really pleased to be working with the EZ-Robot team.

@moviemaker, yes, I had all those books too, with the Rodney and Buster robots! I have had some involvement with a number of robotic publications/books. The great robotic hobbyist author Gordon McComb credited me as "Tony Ellis a real life Q if I ever meet one" when I assisted on his (still brilliant) book "The Robot Builder's Bonanza". I can really recommend this book.


I have all of those books and I built the Rodney robot. I LOVE Blankenship's program in figure 16-4. It has A.I. really built into it. Too bad it is only Basic. That is really a powerful amount of code.


Hi Moviemaker, that's really neat to hear from someone who actually made a Rodney robot! I would really like to hear how it went, and did you take it to Gamma-class intelligence? Do you still have the robot or any pictures?


I did everything exactly how David Heiserman said. I was about 90% complete. Several of my buddies told me that it would not work as directed in the book. So, I called up David Heiserman and asked him. I think he told me that it would not work, but that I could purchase a factory-made one that would; it was a Rodney package that was fancied up. So, at that time, I purchased two RB5-X robots. One had an arm and the other did not. All they did was go around, run into things, say "Excuse me!" and navigate around things. They were cool looking, but nothing special. The same was true for my Hero 1 robot: nothing special. My Novag by Gavon chess-playing robot played a good game of chess, but that was all.

I have tried to get the robots to demonstrate gamma activity, but it seems it only works in theory. It SHOULD work. It all makes sense. But I haven't seen it. Andy at Parallax built a Boe-Bot that had the Gamma software installed. It was in ASM language. Several weeks were spent on the experiments, but the robot did not show many signs of A.I.

I recently bought a QBO robot, only to be disappointed in it. You have to program it in C++ and Python or it won't do anything. All of the things in the demo videos are not included in the purchase. It has potential, but you almost have to have a doctorate in Computer Science to do anything with it. There were only two robots in my past that have come close. One was the Leaf project and the other the EZ-B, which runs rings around the Qbo.

It has been very depressing. I was hoping to see the Singularity before I died. But it looks like it is very, very far away.

There was some work in Sweden that was centered around evolution in robots and that work turned out pretty well. Robots evolved and did some pretty neat stuff. I had gotten out of robotics for 12 years to let the computers catch up and be fast enough to do something useful. But, they are STILL not as fast as I would like them. Intel has 100 core cpus out there, but they won't release them.

The EZ-B has been the ONLY platform that has come up with anything close to doing MOST of the things I want done.

I guess I will get off of my soapbox now.



@moviemaker I am sorry to hear that your Ai experiences have been disappointing, but it seems you have owned some incredible robots!

I hope that maybe I will be able to help you at some point as we are looking at using our Ai core with the EZ1 and EZ:2 robots and as you like the EZ-B, then this could mean that you get to use our Ai in your own robots.

With me the Ai bug started in 1969 when I first saw 2001: A Space Odyssey; from then onward I was determined to make an Ai like HAL9000, and it's taken me 40 years to produce a self-learning Ai, with still much work to do! There is a lot of info on our Ai (and robots) in the July/August 2011 edition of Robot magazine; if you're interested, I can send you a copy of the article?

A great friend of mine is Guile Lindroth Filho from Guile 3D who, as you probably know, produced the Ai Denise. He and his team have done some amazing work on virtual humans!

As I said, I have studied Ai for over 40 years. I do not think we will see the singularity in the next 20 years, but I am pretty sure it may happen within 30.


Ah, I think in 2001 HAL got paranoid and tried to kill everyone, then lost its mind. sick Hope we're not heading in this direction. Please install an off button. ;)


The three laws of robotics would take over, we are safe:)


Ya know, I'm not really all that worried about the three laws of robotics; they're more like suggestions... like a speed limit;)



I have also had Denise for quite some time. She works GREAT for me. She did not work for Thomas; I don't know why.

I am a subscriber to Robot Magazine, but you can still send me the article if you wish. I would like that. I am also looking hard at your new robot to be released. But, my wife says "No More Robots!" So, I don't know how it will end up. But, please tell me more.





Toymaker, could you please send that article to me also, by email? Thanks, Chris


There is a reason I now put an OFF switch on the front of every big robot that I develop: the first large programmable robot that I made (Herbie) in 1979 malfunctioned and went nuts, causing some damage to my home at that time. This was before I started to use microprocessors and microcontrollers, so the only way I could figure out to make a programmable robot was with tone decoding and storing the tones on a stereo cassette. At that time there was a neat chip available, the NE567 phase-locked loop, which could be set with a very narrow centre frequency; in Herbie there was a bank of them set to different centre frequencies (tones). One tone would make the robot move forward, another left, etc. The robot was programmed with a stereo cassette recorder connected to the RF transmitter that I built for the job. Pressing the required buttons, the transmitter would send out the tone for the movement or arm control, and the duration of the tone was the amount of movement that the robot did. As these tones were being generated they were also being recorded on the cassette tape, so once the robot had been taught to do something, I just had to place him back in his charging pod, rewind the tape, insert it in the tape player in its body and press play, and it would then follow the same tone instructions that it had been taught.

But I made a BIG mistake with the design: I thought I would be clever and use the RF link to turn the robot on/off (sleep mode). While this worked OK for some time, the large DC motors that the robot used for locomotion were very (RF) noisy, and I had to use a "delta" configuration of suppression capacitors on each motor to reduce the wideband RF noise that the motors emitted. The problem occurred when one of the sets of suppression capacitors failed; basically, when the motor was running it filled the local area with wideband RF noise, which the robot's receiver picked up and saw as tones, effectively making the robot go completely berserk! Herbie was 5 foot tall, built in aluminium, and was very heavy (using car batteries), so this robot careering around out of control did damage to wallpaper, a coffee table and even dented the thin-skinned door of the room it was in. I was in a panic chasing it around to get the back undone and pull the main drive fuse.

There are more details of Herbie on Cyberneticzoo.

Mel and Chris, I have emailed you the Robot magazine article on our robots and Ai development.


Thanks, Toymaker. So from what I am reading, you are going to be making possibly some kind of add-on module for the EZB controller? Awesome, thanks again for the article. Chris


I will probably be asking more questions because I just bought another EZB.



I am going to revisit a previous post for Rich to think about:

Now, moving more into the subject of self-awareness: I was recently reading Scientific American when I noticed some articles on a very different way of doing programming.

It said that scientists had made their robots self-aware. The way that they did it was:

Program the robot like you do normally. Perform an action after you have made a decision. Think about what choice you have made. Give yourself a score: you have either done well or made a mistake. The score is graded, so the highest score is what you are looking for. They called it dividing the program into two parts, the regular part and a new part that does nothing but "think about what you have thought about." They said that by doing this it is like a human brain: two virtual hemispheres and an algorithm connecting the two by generalizing and evolving.

This sort of reminded me of a system that used confidence levels after actions. Each action would bump UP the CLevel or bring down the FLevel. The next time it made a choice, it would not make the same mistakes, making the robot smarter the longer it operated.
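That two-part "act, then think about what you thought" loop could be sketched like this (a hypothetical Python sketch; the action names, the scoring rule and the exploration rate are my assumptions, not the system from the magazine article):

```python
import random

ACTIONS = ["left", "right", "forward"]  # hypothetical action set
score = {a: 0 for a in ACTIONS}         # running grade for each past choice

def act():
    # Regular part: mostly pick the best-scoring action, sometimes explore.
    if random.random() < 0.1:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: score[a])

def reflect(action, did_well):
    # New part: "think about what you have thought about" and grade it.
    score[action] += 1 if did_well else -1
```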

It is very hard for me to put into words. If you had such an algorithm, it would be nice. This could be put on the wait list, maybe.

Hope that this helped.



I get the general idea, I think. But the thing with robots is, we can always program them not to make mistakes:) Not quite as fun to watch though... kinda like having a newborn baby that can walk.


By mistake, I meant like it was running into a wall and it learned not to go in that direction. Each time it bumped into the wall it would lower its confidence level until it was zero. From that point on, if you told it to go in the direction of the wall, it would already know not to go there and would choose another direction with more confidence. It would not be a mistake per se, just a way to learn what NOT to do under different circumstances.


I know what you are getting at.

Take the ping roam script for an example. Remove the part where it knows which way to turn based on the sensor readings. Now we add in code to make it choose a random direction based on previous results, but while it does that it still records the sensor readings and checks confidence. To start with it will be bumping into walls (unless it is lucky with its guesses), but as it goes on it will learn that a high reading on the left has a lower confidence for turning left and a higher one for turning right, so it will eventually turn right more often...

Wow that is hard to explain, I may not have even explained it right... I guess the only way to explain properly is by making such a script...
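A rough sketch of the script Rich is describing (in Python rather than EZ-Script, to keep it short; the situation keys and the confidence bookkeeping are my assumptions):

```python
import random

# Per-situation confidence for each turn direction, learned from bumps.
confidence = {
    "high_reading_left": {"left": 0, "right": 0},
    "high_reading_right": {"left": 0, "right": 0},
}

def choose_turn(situation):
    # No experience yet: guess randomly. Otherwise trust what has worked.
    conf = confidence[situation]
    if conf["left"] == conf["right"]:
        return random.choice(["left", "right"])
    return max(conf, key=conf.get)

def record(situation, turn, bumped):
    # Record the sensor situation and whether the chosen turn bumped.
    confidence[situation][turn] += -1 if bumped else 1
```

After a few bumps, a high reading on the left leaves "left" with a lower confidence than "right", so the robot turns right more often in that situation.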

But, like I said, we can program the robot not to make mistakes, if you want to. But mistakes make it seem more alive, in my opinion:) I like mistakes :D


Someone once said that "every mistake is an opportunity to learn something," and Bob Ross made a lot of "happy mistakes" in his paintings. I used to teach, and I would tell my students not to be afraid to make mistakes. In fact, go out and make as many as you can in order to learn. They liked that.:)