
MovieMaker
I noticed that you had a nice script for Navigation with the Sonars and IRs.
Why not go a step further and make it a SMART Navigation system?
You could do that with two more steps (using confidence levels and random number selection).
First, you set up confidence levels from zero to three, or even as high as zero to 15.
Next, you store the previous move and use it to know whether you are moving in the correct direction.
Once you have done that, most of the work is behind you: you check the confidence level for the move you are about to make. If it is high, you make the move. If it is low, you use a random number to select a different move and go through the whole process again.
Through trial and error, the robot becomes more intelligent with each move. Eventually he will KNOW, by learning from experience, which way to go.
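Those steps can be sketched in a few lines of Python. This is only a sketch: the move names, the 0 to 15 range, and the "high" threshold of 8 are my assumptions; on a real robot the same logic would live in an ARC script.

```python
import random

MOVES = ["forward", "reverse", "left", "right"]
confidence = {}   # (situation, move) -> learnt score, 0..15

def pick_move(situation, high=8):
    """Trust experience when confidence is high; otherwise pick at random."""
    scores = {m: confidence.get((situation, m), 0) for m in MOVES}
    best = max(scores, key=scores.get)
    if scores[best] >= high:
        return best                   # high confidence: make the move
    return random.choice(MOVES)       # low confidence: random selection

def learn(situation, move, worked):
    """Bump the score up on success, down on failure, clamped to 0..15."""
    score = confidence.get((situation, move), 0)
    confidence[(situation, move)] = min(15, score + 1) if worked else max(0, score - 1)
```

After enough successes in a given situation, `pick_move` stops guessing and repeats the move that has worked before, which is the "learning" MovieMaker describes.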
I have never gotten this to work 100%, but it is possible. I am just not a great programmer, but you seem to be good enough to pull this off.
I would love to see this machine become more intelligent and actually have the capability to learn.
In the topic with R2D2 and the ping sweep avoidance I mentioned advanced avoidance. I have many ideas but currently lack the time to put pen to paper (or fingers to keys).
Advanced avoidance and scanning will be on their way soon though.
I think the learning navigation concept MovieMaker describes sounds very smart, and I think it is a big step up in artificial intelligence:
artificial intelligence + learning intelligence + evolutionary intelligence.
I do not think it will be easy to achieve, but if anyone can pull it off, it is Rich.
The robot could also store the learned information to reverse the movement and return to the original starting place, or divide the path into sectors to take shortcuts to other places.
I think the code would use a set of $variables that the script updates as it runs, while making decisions during exploration.
I've been reading the threads for a few weeks now and I have to tell you that I too feel you are a very talented programmer. So it is my unfortunate responsibility to inform you that, if you create Skynet I will have to send some cheesy 80's actor back from thousands of years ago, where the future has already occurred, to put you down.
LOL, I'm just kidding of course, but I can tell you have a passion and an aptitude for this. I can't wait to get some EZ-B experience so I'll be able to chat with you more. And speaking of communications, it seems that there are only a handful of people really active in the forums. If I had to guess I'd say it's probably only about 3%-5% of EZ-B owners (and I feel that is a kind estimate). I'm guessing at that number based on my observation of the forum for about 3 weeks and the assumption that there are at least 300 EZ-B owners. I don't know how many EZ-Bs have been sold, but I know they've been around for a few years.
I think this looks like an amazing product versus all of the other stuff out there, and it's cheaper than a lot of the stuff that appears to be crap floating around. I'm just amazed that this forum isn't a little more active. I see you guys banding together here for a common goal and think, "There, that's it. That's exactly how the world is supposed to work." Why are there not more players in the game posting here? Is there another site that people are more active on? Like a hobbyist robotics site or something? Like the Droid Forums of robotics?
I haven't bought my hardware yet. I've been researching robotics pretty heavily for about 5 weeks now, so I'm confident enough to dive in. It's just time and money now. I've opened ARC a couple of times, and from a programming standpoint it seems pretty straightforward. But I'm looking forward to getting an EZ-Complete, and I can't wait to get some experience with the EZ-B and robotics so I will be a little more qualified to speak.
Keep it up guys. I hope to one day soon join your ranks.
Also, as @moviemaker said, some really only post and ask more questions when they are building. But also, a lot of answers are already here now, searching brings up answers to most questions so there's no real need for discussions on some aspects. Other aspects of building will trigger bigger discussions, look at the auto charging docking station discussion for instance (one which I plan on reviving shortly).
And thanks for the kind words.
In the Nineties, I did a lot of work on these kinds of algorithms, and my testbed was my cybernetic animal ELF http://cyberneticzoo.com/?p=3984
My work culminated in tech that I named "Volume Occupancy Mapping"
It works like video memory (an X/Y grid).
Each grid point (X/Y location) is a byte, which is broken up into two 4-bit nibbles of (learnt) data about that grid point. The lower nibble is the probability of that grid point being blocked or free to move across. The system is totally dynamic and self-adjusting; here is how it works.
When you start, every grid (X/Y) location is set to 0; from this point the algorithm starts to learn about the area (or room) that it is in. With a matrix map filled with zeros it has no idea yet how to plan the best path across the room, so the first job of the algorithm is to get an idea of where all the fixed (stationary) objects are, like tables, armchairs, etc. It starts with a wall-following algorithm that gives it an idea of the area it is trying to map; if armchairs etc. are against the wall, it builds them into the map. From the wall following, it goes into a crisscross pattern across the area to map out the interior.

Now say that at grid point 5,9 it senses an obstacle: it increments that location's nibble, so it now has a value of "1". If this was a piece of fixed furniture, then grid point 5,9 would always be impassable, so after some more exploring of the area over time its location nibble would soon fill up to F (decimal 15). If instead it was, say, a dog or something transient at that location, then at some point the grid point clears (becomes passable); when the algorithm finds this it decrements the location nibble, so in the first example above, the "1" would return to "0".
What this gives the robot is a method to compute a high-probability route that will give it a clear path across the room. This is done by looking at all the X/Y grid locations: any with a zero (or very low value) has a high probability of being clear, and any grid location with a high value has a high probability of being blocked; from this the best route can be computed.
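Tony's lower-nibble bookkeeping can be sketched in a few lines of Python. The grid size and the "clear" threshold here are my assumptions; only the byte-per-cell layout and the increment/decrement rule come from his description.

```python
GRID_W, GRID_H = 16, 16
grid = [[0] * GRID_W for _ in range(GRID_H)]   # one byte per X/Y grid point

def sense(x, y, blocked):
    """Learnt update of the low nibble, saturating at 0 and 0xF (15)."""
    hi, lo = grid[y][x] & 0xF0, grid[y][x] & 0x0F
    lo = min(0x0F, lo + 1) if blocked else max(0, lo - 1)
    grid[y][x] = hi | lo

def probably_clear(x, y, threshold=3):
    """A low value means the point has (almost) always been found passable."""
    return (grid[y][x] & 0x0F) < threshold
```

A path planner would then search only through cells where `probably_clear` holds, which is the "high probability route" computation described above.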
This concept needs seriously good odometry; the AIMEC motor drive encoders give 64,000 "clicks" per single drive-wheel revolution, so the resolution is amazing. The next problem to overcome is wheel slippage, which can introduce errors, and any major errors obviously have a compounding effect on the map accuracy; on our robots we limit wheel slippage with a special design of our tires.
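To get a feel for that resolution, here is the clicks-to-distance arithmetic in Python. The wheel diameter is an assumed value (the post does not give one); only the 64,000 clicks per revolution figure comes from Tony.

```python
import math

CLICKS_PER_REV = 64000        # AIMEC drive encoder resolution (from the post)
WHEEL_DIAMETER_MM = 100.0     # assumed wheel size; not stated in the post

# Distance the wheel rim travels per encoder click (about 0.005 mm here)
MM_PER_CLICK = math.pi * WHEEL_DIAMETER_MM / CLICKS_PER_REV

def clicks_to_mm(clicks):
    """Convert an accumulated encoder count into distance travelled."""
    return clicks * MM_PER_CLICK
```

With these assumed numbers, one full revolution (64,000 clicks) is one wheel circumference, and a single click is a few thousandths of a millimetre, which is why slippage, not encoder resolution, becomes the limiting error source.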
The upper nibble is used to tell the robot what is at that location. In the ELF and AIMEC robots the highest bit denotes danger and a "don't go there" mechanism; this is useful for things like fireplaces or the tops of staircases, where you clearly do not want your robot wandering. So if the robot sees a grid location with a value > 127, it will simply never go there or plot a path through that location. The lower three bits of the upper nibble give info on things like the "entry door", "exit door", and "docking charger" positions, so the map not only gives the robot a method for finding a clear path (with high probability), but also records where to find certain things that are useful to its operation. Using a map for each room and knowing where the doors are located means that the robot can navigate by itself around the home.
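A sketch of decoding the upper nibble in Python. The danger test follows Tony's description (a byte value over 127); the numeric codes for the door and charger landmarks are my own invented values, since the post does not specify them.

```python
DANGER = 0x80   # highest bit set: never plot a path through this point

# Assumed codes for the lower three bits of the upper nibble; the post
# names the landmark types but not their exact values.
LANDMARKS = {1: "entry door", 2: "exit door", 3: "docking charger"}

def is_danger(cell):
    """True when the byte's top bit is set, i.e. the value is > 127."""
    return cell & DANGER != 0

def landmark(cell):
    """Decode bits 4-6 into a landmark name, or None for an ordinary point."""
    return LANDMARKS.get((cell >> 4) & 0x07)
```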
Here is a simple volume occupancy map from the AIMEC:3 robot.
Could you give a few pointers please
This is part of the code I am using to monitor digital inputs (IR beacon) and drive two servos; it works very well,
but because of an issue between the IR beacon and the digital inputs I need to use the ADC,
so I need to change the script to read the ADC. I tried to adapt "scaredy cat" but got lost along the way.
Any help would be appreciated
Pat
code:
:forward
Sleep(10)
IF (GetDigital(D11) = 0)
  Servo(D0, 48)
  Goto(forward)
ELSEIF (GetDigital(D11) = 1)
  Servo(D0, 54)
ENDIF
:reverse
IF (GetDigital(D13) = 0)
  Servo(D0, 62)
  Goto(reverse)
ELSEIF (GetDigital(D13) = 1)
  Servo(D0, 54)
  Sleep(10)
ENDIF
:turnleft
Sleep(10)
IF (GetDigital(D12) = 0)
  Servo(D1, 50)
  Goto(turnleft)
ELSEIF (GetDigital(D12) = 1)
  Servo(D1, 45)
  Sleep(10)
ENDIF
:turnright
IF (GetDigital(D14) = 0)
  Servo(D1, 38)
  Goto(turnright)
ELSEIF (GetDigital(D14) = 1)
  Servo(D1, 45)
ENDIF
Goto(forward)
Change all GetDigital(port) = 1 to GetADC(port) = 255; hopefully that will work. If not, you may need to play with the = 255; maybe > 0 will work better.
Edit: and GetDigital(port) = 0 to GetADC(port) = 0.
A digital on/off sensor on the ADC should report 0 and 255, 0 being off, 255 being on, in theory.
Wonderful smooth transition to ADC
I will list the details in a separate thread to assist others. While the IR beacon worked fine on digital, if I lost the connection with the EZ-B
the result was flashing LEDs on the IR beacon until I re-made the connection.
Having blown my first beacon (not sure if this was the actual cause), I decided to change over to the ADC.
Pat
I am really interested in the idea of "Volume Occupancy Mapping" to use with the Auto Docking Project. I would much rather use something like this instead of IR beacons to help the robot navigate from room to room. Having the robot understand what room it is in so that it can make the right decision on what route to take to find his battery charger has been a challenge. I am still saving up to purchase more parts for the IR beacons and the detection system I was planning to build. However, I would much rather use some type of mapping system. I hope this topic will blossom into a simpler answer to my problem. Problem is, I have no experience with programming or scripting....yet.
Rich has been such a big help to a lot of people for script help and now with this topic maybe we could advance our Auto Docking Project forward. I look forward to more discussion on this and I plan on visiting your website to learn more about your robots. Hope you will have the time to help and guide us in utilizing this mapping concept.
So much talent here on this forum. I am proud to be associated with all of you and my life is better for it.
Rex
Automatic Battery Charger Docking
Battery Charger Docking Project
I have to admit, Tony's volume occupancy map is a little overwhelming, but I only scanned over it; I'm sure once it's broken down bit by bit it'll become simpler. A great idea nonetheless. However, it relies on encoders, which I have halted looking into further, purely due to a comment DJ made the other day about a new feature which will be better than encoders.
I'm confident that between us all we will come up with a great solution.
And, yes, I also have his book "How to Design and Build Your Own Custom Robot" and also "Build Your Own Working Robot".
There was another author who had part of this idea too. He wrote "How to Build Your Own Working Robot Pet". It had the confidence levels.
These were some of my favorite books on building robots.
@Rich, Yeah... maybe holding off till Revolution is released will be a good idea. It may open up a whole new concept for us to use. I hope @Toymaker will be able to (and have the time to) teach us how to utilize the "Volume Occupancy Mapping" idea.
I think the subject for this thread needs changing, it's far from "For Rich" now, or more to the point, I'm sure I'm not the only one who can help here
*blush*
Volume occupancy mapping is a great way to get a robot to find a docking charger, and for general autonomous navigation, but as I said earlier you need really good (and accurate) odometry. For this on our EZ1 (development robot) and EZ:2 (production robot), we are developing a custom locomotion drive controller based on a PIC microcontroller that will handle all the encoder operations and motor control for the EZ-B via an I2C link. These new robots will also have a second PIC for all the head electronics/mechanics so again this will take a lot of mundane operations away from the EZ-B and both these microcontrollers will also greatly reduce the I/O overhead on the EZ-B itself. At some point in the future when our EZ robots are fully developed and ready for retail we will make available these sub-boards so EZ-B users will be able to add these functions to their own robots. DJ has said he will support this new EZ-B based robot range and even produce custom controls, we are really pleased to be working with the EZ-Robot team.
@moviemaker, yes I had all those books too, with the Rodney and Buster robots! I have had some involvement with a number of robotics publications/books. The great robotics hobbyist author Gordon McComb credited me as "Tony Ellis a real life Q if I ever meet one" when I assisted on his (still brilliant) book "The Robot Builder's Bonanza"; I can really recommend this book.
It has been very depressing. I was hoping to see the Singularity before I died. But it looks like it is very, very far away.
There was some work in Sweden that was centered around evolution in robots, and that work turned out pretty well. Robots evolved and did some pretty neat stuff. I got out of robotics for 12 years to let computers catch up and become fast enough to do something useful, but they are STILL not as fast as I would like. Intel has 100-core CPUs out there, but they won't release them.
The EZ-B has been the ONLY platform that has come up with anything close to doing MOST of the things I want done.
I guess I will get off of my soapbox now.
Cheers!
I hope that maybe I will be able to help you at some point: we are looking at using our Ai core with the EZ1 and EZ:2 robots, and as you like the EZ-B, this could mean that you get to use our Ai in your own robots.
With me the Ai bug started in 1969 when I first saw 2001: A Space Odyssey. From then onward I was determined to make an Ai like HAL9000, and it has taken me 40 years to produce a self-learning Ai, and there is still much work to do! There is a lot of info on our Ai (and robots) in the July/August 2011 edition of Robot magazine; if you are interested, I can send you a copy of the article?
A great friend of mine is Guile Lindroth Filho from Guile 3D, who, as you probably know, produced the Ai Denise; he and his team have done some amazing work on virtual humans!
As I said, I have studied Ai for over 40 years. I do not think we will see the singularity in the next 20 years, but I am pretty sure it may happen within 30.
I have also had Denise for quite some time. She works GREAT for me. She did not work for Thomas, I don't know why.
I am a subscriber to Robot Magazine, but you can still send me the article if you wish. I would like that. I am also looking hard at your new robot to be released. But, my wife says "No More Robots!" So, I don't know how it will end up. But, please tell me more.
Thanks,
Mel
There are more details of Herbie on Cyberneticzoo http://cyberneticzoo.com/?p=2280
Mel and Chris, I have emailed you the Robot magazine article on our robots and Ai development.
Now, moving more into the subject of self-awareness: I was recently reading Scientific American when I noticed some articles on a very different way of doing programming.
It said that scientists had made their robots self-aware. The way that they did it was:
Program the robot like you do normally.
Perform an action after you have made a decision.
Think about what choice you have made.
Give yourself a score. You have either done well or made a mistake.
The scores are graded, so the highest score is what you are looking for.
They described it as dividing the mind into two parts: the regular part, and a new part that does
nothing but "think about what you have thought about."
They said that doing this makes it like a human brain: two virtual hemispheres and an
algorithm connecting the two by generalizing and evolving.
This sort of reminded me of a system that used confidence levels after actions: each action would bump UP the CLevel or bring down the FLevel. The next time it made a choice, it would not make the same mistakes, making the robot smarter the longer it operated.
It is very hard for me to put into words. If you had such an algorithm it would be nice. This could be put on the wait list, maybe.
Hope that this helped.
Mel
Take the ping roam script as an example. Remove the part where it knows which way to turn based on the sensor readings. Now we add code to make it choose a random direction based on previous results, but while it does that it still records the sensor readings and checks confidence. To start with it will be bumping into walls (unless it is lucky with its guesses), but as it goes on it will learn that a high reading on the left gives a lower confidence for turning left and a higher one for turning right, so it will eventually turn right more often...
Wow that is hard to explain, I may not have even explained it right... I guess the only way to explain properly is by making such a script...
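For what it's worth, here is a minimal Python sketch of that idea (not an ARC script; the situation labels and the scoring rule are my assumptions). It guesses at random until the recorded results give one turn a higher confidence than the other.

```python
import random

confidence = {}   # (situation, turn) -> learnt score, starts at 0

def situation(left_cm, right_cm):
    """Collapse the two ping readings into a coarse situation label."""
    return "closer_on_left" if left_cm < right_cm else "closer_on_right"

def choose_turn(sit):
    """Prefer the turn with the higher confidence; guess while they are tied."""
    left = confidence.get((sit, "left"), 0)
    right = confidence.get((sit, "right"), 0)
    if left == right:
        return random.choice(["left", "right"])
    return "left" if left > right else "right"

def record(sit, turn, bumped):
    """A bump lowers confidence in that turn for that situation; a clear run raises it."""
    confidence[(sit, turn)] = confidence.get((sit, turn), 0) + (-1 if bumped else 1)
```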
But, like I said, we can program the robot not to make mistakes, if you want to. But mistakes make it seem more alive, in my opinion.