
MovieMaker
I noticed that you had a nice script for Navigation with the Sonars and IRs.
Why not go a step further and make it a SMART Navigation system?
You could do that with two more steps (using confidence levels and random number selection).
First, you set confidence levels from zero to three, or even as high as zero to fifteen.
Next, you store the previous move and use it to know whether you are moving in the correct direction.
Once you have done that, most of the work is done: you check the confidence level for the move you are about to make. If it is high, you make the move. If it is low, you use a random number to select a different move and go through the whole process again.
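Here is a very rough sketch of what I mean, written in Python just to show the shape of it (the move names, the threshold and the success check are placeholders I made up; a real version would be an EZ-Script):

import random

MOVES = ["forward", "left", "right", "reverse"]

# Confidence level for each move, from 0 (no confidence) up to 15.
confidence = {move: 7 for move in MOVES}   # start in the middle
previous_move = None
THRESHOLD = 8   # at or above this counts as "high" confidence

def pick_move():
    # If we were confident in the previous move, keep going that way.
    if previous_move is not None and confidence[previous_move] >= THRESHOLD:
        return previous_move
    # Confidence is low: use a random number to select a different move.
    return random.choice([m for m in MOVES if m != previous_move])

def record_result(move, went_well):
    # After the move, bump the confidence up on success and down on failure,
    # and remember the move so the next decision can use it.
    global previous_move
    if went_well:
        confidence[move] = min(15, confidence[move] + 1)
    else:
        confidence[move] = max(0, confidence[move] - 1)
    previous_move = move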
Through trial and error, the robot becomes more intelligent with each move. Then he will KNOW, from learned experience, which way to go.
I have never gotten this to work 100%, but it is possible. I am just not a great programmer, but you seem to be good enough to pull this off.
I would love to see this machine become more intelligent and actually have the capability to learn.
I think the subject of this thread needs changing; it's far from "For Rich" now. Or, more to the point, I'm sure I'm not the only one who can help here.
*blush*
Volume occupancy mapping is a great way to get a robot to find a docking charger, and for general autonomous navigation, but as I said earlier you need really good (and accurate) odometry. For this, on our EZ1 (development robot) and EZ:2 (production robot), we are developing a custom locomotion drive controller based on a PIC microcontroller that will handle all the encoder operations and motor control for the EZ-B via an I2C link. These new robots will also have a second PIC for all the head electronics/mechanics, so again this will take a lot of mundane operations away from the EZ-B, and both of these microcontrollers will also greatly reduce the I/O overhead on the EZ-B itself.

At some point in the future, when our EZ robots are fully developed and ready for retail, we will make these sub-boards available so EZ-B users will be able to add these functions to their own robots. DJ has said he will support this new EZ-B based robot range and even produce custom controls; we are really pleased to be working with the EZ-Robot team.
@moviemaker, yes I had all those books too, with the Rodney and Buster robots! I have had some involvement with a number of robotic publications/books. The great robotic hobbyist author Gordon McComb credited me as "Tony Ellis a real life Q if I ever meet one" when I assisted on his (still brilliant) book "The Robot Builder's Bonanza"; I can really recommend this book.
It has been very depressing. I was hoping to see the Singularity before I died, but it looks like it is very, very far away.
There was some work in Sweden centered around evolution in robots, and that work turned out pretty well. The robots evolved and did some pretty neat stuff. I had gotten out of robotics for 12 years to let computers catch up and be fast enough to do something useful, but they are STILL not as fast as I would like. Intel has 100-core CPUs out there, but they won't release them.
The EZ-B has been the ONLY platform that has come up with anything close to doing MOST of the things I want done.
I guess I will get off of my soapbox now.
Cheers!
I hope that maybe I will be able to help you at some point, as we are looking at using our Ai core with the EZ1 and EZ:2 robots, and since you like the EZ-B, this could mean that you get to use our Ai in your own robots.
With me, the Ai bug started in 1969 when I first saw 2001: A Space Odyssey. From then onward I was determined to make an Ai like HAL9000, and it's taken me 40 years to produce a self-learning Ai, and there is still much work to do! There is a lot of info on our Ai (and robots) in the July/August 2011 edition of Robot magazine; if you're interested, I can send you a copy of the article?
A great friend of mine is Guile Lindroth Filho from Guile 3D who, as you probably know, produced the Ai Denise; he and his team have done some amazing work on virtual humans!
As I said, I have studied Ai for over 40 years. I do not think we will see the Singularity in the next 20 years, but I am fairly sure it may happen within 30.
I have also had Denise for quite some time. She works GREAT for me. She did not work for Thomas; I don't know why.
I am a subscriber to Robot Magazine, but you can still send me the article if you wish. I would like that. I am also looking hard at your new robot to be released. But, my wife says "No More Robots!" So, I don't know how it will end up. But, please tell me more.
Thanks,
Mel
There are more details of Herbie on Cyberneticzoo http://cyberneticzoo.com/?p=2280
Mel and Chris, I have emailed you the Robot magazine article on our robots and Ai development.
Now, moving more into the subject of self-awareness: I was recently reading Scientific American when I noticed some articles on a very different way of doing programming.
It said that scientists had made their robots self-aware. The way they did it was:
Program the robot like you do normally.
Perform an action after you have made a decision.
Think about what choice you have made.
Give yourself a score. You either did well or made a mistake.
The score will be graded, so the highest score is what you are looking for.
They divided it into two parts: the regular part and a new part that does nothing but "think about what you have thought about."
They said that by doing this it is like a human brain: two virtual hemispheres and an algorithm connecting the two by generalizing and evolving.
This sort of reminded me of a system that used confidence levels after actions. Each action would bump UP the CLevel or bring down the FLevel. The next time it made a choice, it would not make the same mistakes, making the robot smarter the longer it operated.
It is very hard for me to put into words. If you had such an algorithm, it would be nice. This could be put on the wait list, maybe.
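Since it is hard to put into words, here is a very rough Python sketch of the idea (just my own pseudo-code, not anything from the article; the actions and the success check are made up): one part decides and acts like the normal program, and the second part does nothing but score the choice that was just made and feed that score back into the next decision.

import random

ACTIONS = ["forward", "left", "right"]
scores = {a: 0 for a in ACTIONS}   # graded score per action; highest wins

def regular_part():
    # The normal program: pick the action with the best score so far
    # (ties are broken at random, like an ordinary decision).
    best = max(scores.values())
    return random.choice([a for a in ACTIONS if scores[a] == best])

def thinking_part(action, did_well):
    # The second "hemisphere": do nothing but think about the choice
    # that was just made and grade it -- did well, or made a mistake.
    scores[action] += 1 if did_well else -1

# One cycle: decide, act, then reflect on the action that was taken.
action = regular_part()
# ... perform the action on the robot here ...
thinking_part(action, did_well=True)   # placeholder result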
Hope that this helped.
Mel
Take the ping roam script as an example. Remove the part where it knows which way to turn based on the sensor readings. Now we add code to make it choose a random direction based on previous results, but while it does that it still records the sensor readings and checks confidence. To start with it will be bumping into walls (unless it is lucky with its guesses), but as it goes on it will learn that a high reading on the left will have a lower confidence if turning left and a higher one for turning right, so it will eventually turn right more often...
Wow that is hard to explain, I may not have even explained it right... I guess the only way to explain properly is by making such a script...
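Something along these lines is what I am thinking of. It is only a quick Python mock-up of the logic, with made-up sensor readings and a made-up "did we gain space" test; a real version would be an EZ-Script reading the actual ping sensors:

import random

# Confidence that turning a given way is the right answer to an obstacle.
confidence = {"left": 0, "right": 0}

def read_pings():
    # Placeholder for the left/right ping readings (smaller = closer).
    return {"left": 30, "right": 120}   # pretend the wall is on the left

def choose_turn():
    # Weight the random choice toward whichever turn has earned more
    # confidence; at the start this is close to a straight coin flip.
    w_left = 1 + max(0, confidence["left"])
    w_right = 1 + max(0, confidence["right"])
    return random.choices(["left", "right"], weights=[w_left, w_right])[0]

def learn(turn, before, after):
    # If the turn gave us more clear space than before, raise its
    # confidence; if we ended up closer to something, lower it.
    if min(after.values()) > min(before.values()):
        confidence[turn] += 1
    else:
        confidence[turn] -= 1

before = read_pings()
turn = choose_turn()
# ... actually turn the robot here ...
after = read_pings()
learn(turn, before, after)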
But, like I said, we can program the robot not to make mistakes, if you want to. But mistakes make it seem more alive, in my opinion.