I have almost completed my Omnibot code as an autonomous pet. I am all about an elegant and efficient solution to AI. Of course, giving the impression of awareness is all in reactive behaviour. By combining a few concepts and sensors, Omnibot now acts like he has a mind of his own.
For example, when he is in Sleep (or Stop) mode, his camera always follows motion or a color. This means he will turn towards the TV and watch it, or follow a person with his head.
His decisions are based on two random generators, plus some environmental input. Primarily, he decides what to do by a random mode selection, followed by a random time selection. The random mode chooses between
- autonomous (long, short)
- follow motion
- follow a color (red, green, or blue)
- body language (turn head, nudge forward, nudge left, beep, etc)
The length of time to run each mode is randomized.
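The mode-then-duration idea above can be sketched in a few lines. This is a minimal illustration, not the actual Omnibot code: the mode names and the 5-to-60-second time range are my own assumed placeholders.

```python
import random

# Hypothetical mode list mirroring the behaviours described above.
MODES = [
    "autonomous_long",
    "autonomous_short",
    "follow_motion",
    "follow_red",
    "follow_green",
    "follow_blue",
    "body_language",
]

def pick_behaviour():
    """Randomly choose a mode and how long (in seconds) to run it."""
    mode = random.choice(MODES)
    duration = random.randint(5, 60)  # assumed time range, tune per robot
    return mode, duration
```

Running `pick_behaviour()` in a loop, and sleeping for the returned duration before picking again, is enough to produce the "mind of his own" effect described above.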
The microphone is always listening for voice commands. I set the voice command recognition confidence to 90%, so he will recognize words that aren't exact. This adds some interesting behaviour: he responds to phrases that are merely similar. For example, I was having a conversation with my friend while Omnibot appeared to be watching TV. My friend said something, and the robot turned around and looked at him. When he moved, the robot's head followed him. This, of course, freaked him out. Why? Because the phrase "turn around" was recognized at near 90% confidence. He didn't actually say that, but the robot thought he did and reacted. This made it appear that Omnibot acted on his own.
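The effect of a 90% recognition threshold can be approximated with simple string similarity. This is only a sketch of the idea, not the speech engine Omnibot actually uses; the `matches_command` helper and the use of `difflib` are my own illustration.

```python
from difflib import SequenceMatcher

THRESHOLD = 0.90  # matches the 90% recognition setting described above

def matches_command(heard, command, threshold=THRESHOLD):
    """Return True when the heard phrase is 'close enough' to a command."""
    ratio = SequenceMatcher(None, heard.lower(), command.lower()).ratio()
    return ratio >= threshold
```

With a threshold this loose, a near-miss like "turn arouns" still scores above 0.90 against "turn around" and fires the command, which is exactly the kind of accidental trigger described in the story above.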
Now for the autonomous exploring 2D mapping code...
The concept is quite simple to implement, but requires a lot of testing to tune for your robot's size and speed. Some of my other robots use a much more complicated version of this concept, adding a third dimension to the map.
The distance sensor is attached to the servo on the chest of Omnibot. The servo sways back and forth. As it scans, it remembers each distance in an array from left to right (or vice versa depending on the direction). When the servo gets to the end of the scan, it performs a quick calculation by examining the array.
It adds up the left and right distance values at each respective position in the scan, then compares the summed left and right totals against constants determined during testing for the robot's speed and size. From that comparison it decides whether to align itself down a hallway or to enter a doorway.
Download the source code on his robot page to get a good look at the concept: www.ez-robot.com/Robots/Tomy-Omnibot-V1