Hoping this will spark a huge discussion on what everyone is looking for when it comes to their robot's AI.
AI is something I've been working on since before I even learned of EZ-Robots. My JARVIS replica from Iron Man will be coming up on three years old this December. While it wasn't started in ARC, over the last few months I've been porting parts of it over to ARC, and the parts that are beyond ARC's capabilities are integrated via Telnet. These include things such as voice-controlled media playback, voice-activated control of appliances, lights etc. and, to be honest, far more than I can really explain right now.
Basically, up until now it has been built entirely around home automation and automated media acquisition, storage, playback and logging. Recently I have been integrating and porting parts of it over into ARC and, where ARC is not capable of carrying out the actions itself, integrating via Telnet so that ARC (and its scripts) are aware of everything they need to be aware of. For example, when media playback starts, EventGhost sends ARC the script command $mediaplayback = 1, and when it finishes it sends $mediaplayback = 0 (that's a very simple example; it also sends more info about the media). This will be demonstrated soon by Melvin when I get around to making the video of him knowing what's on TV.
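For anyone wondering what that Telnet side could look like in practice, here is a minimal sketch in Python of sending an EZ-Script command to ARC from the EventGhost end. The host address, port and line ending here are assumptions, so check your own ARC connection settings before using anything like this:

    import socket

    ARC_HOST = "192.168.1.50"   # IP of the PC running ARC (assumption)
    ARC_PORT = 6666             # TCP port of ARC's scripting/Telnet server (assumption)

    def send_ezscript(command):
        # Open a TCP connection, send one EZ-Script line, then close.
        with socket.create_connection((ARC_HOST, ARC_PORT), timeout=5) as conn:
            conn.sendall((command + "\r\n").encode("ascii"))

    # EventGhost would call something like this when playback starts/stops:
    send_ezscript("$mediaplayback = 1")   # playback started
    send_ezscript("$mediaplayback = 0")   # playback finished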
Like I said, so far it's mainly based around Media and Home Automation. What I want to discuss is...
What do you want in your robot's AI?
What do you want him/her to be able to do without human interaction? What do you want him/her to react or respond to? What do you want the AI to enhance? Why do you want AI?
And, for anyone who already has some kind of AI running; What does your AI add to your robot?
Hopefully this will spark up some interesting conversation, get some ideas out there, and inspire others (and myself) to push on with the AI and make robots more intelligent.
I have thought of this too. I was thinking that if the robot had a mapping feature connected to the object avoidance sensors, then when the robot found a barrier or obstruction it would ask "What is this?" The human would tell it (for instance, "That's a chair"), the robot would then remember that it is a chair, and it could map out the objects in a room that way. Also, after the room has been mapped, the human could tell the robot to go to the chair and the robot would remember where the chair is and go to it. Is there a program that allows this, or would this be too hard to do? Thanks, Clint
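The bookkeeping side of what Clint describes is simple enough to sketch. Here is a rough Python illustration of the idea only; the names, coordinates and functions are hypothetical, and the hard part (actually driving to a stored location) is left as a stub you would supply yourself:

    # A named map of objects the robot has been told about.
    # Coordinates are whatever your mapping/odometry gives you (assumed here to be simple x, y).
    object_map = {}

    def remember_object(name, x, y):
        # Called after the human answers "What is this?" with a name.
        object_map[name.lower()] = (x, y)

    def go_to(name, drive_to):
        # drive_to is your own navigation routine (hypothetical stub).
        target = object_map.get(name.lower())
        if target is None:
            print("I don't know where the", name, "is yet.")
        else:
            drive_to(*target)

    remember_object("chair", 2.5, 1.0)                  # "That's a chair"
    go_to("chair", lambda x, y: print("Driving to", x, y))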
What wiki? Do you have a URL?
Simply put, when using an API for anything, the info that comes back needs parsing. With text-to-speech, which is what we will be using, it needs to be in a specific format for the parsing script or application. With Wolfram (and possibly others) the information that comes back is displayed on a web page and the format is not consistent, which causes problems for the parsing scripts/apps.
You may be able to get around it by using multiple different methods of parsing the results, but then you would need something to decide which method to use.
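To make that concrete, one crude way to "decide which method to use" is simply to try each parser in turn and keep the first one that produces something speakable. A rough Python sketch, with the parser functions as hypothetical placeholders:

    import re

    def parse_plain_text(raw):
        # Works when the result is already clean text (no markup in it).
        text = raw.strip()
        return text if text and "<" not in text else None

    def parse_html(raw):
        # Crude fallback for results that come back as a web page: strip the tags.
        text = re.sub(r"<[^>]+>", " ", raw)
        return " ".join(text.split()) or None

    def extract_speakable_text(raw):
        # Try each parsing method in turn and use the first that yields usable text.
        for parser in (parse_plain_text, parse_html):
            result = parser(raw)
            if result:
                return result
        return "Sorry, I couldn't make sense of that result."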
On the wiki, you would be prompted to enter a subject and it would go out and find the information on it. I should have said Wikipedia. I am sure you know what I am talking about.
Yeah, I know what you mean. It's straightforward enough to use Wikipedia's API to look stuff up.
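As a quick illustration of how simple the lookup side is, here's a rough Python sketch using the MediaWiki query API to fetch the plain-text intro of an article. The exact parameters are from memory, so treat them as an assumption and check the MediaWiki documentation:

    import json
    import urllib.parse
    import urllib.request

    def wikipedia_summary(subject):
        # Ask the MediaWiki API for the plain-text intro extract of an article.
        params = urllib.parse.urlencode({
            "action": "query",
            "prop": "extracts",
            "exintro": 1,
            "explaintext": 1,
            "redirects": 1,
            "format": "json",
            "titles": subject,
        })
        url = "https://en.wikipedia.org/w/api.php?" + params
        with urllib.request.urlopen(url) as response:
            data = json.load(response)
        pages = data["query"]["pages"]
        # The page id isn't known ahead of time, so take the first (and only) entry.
        return next(iter(pages.values())).get("extract", "No article found.")

    print(wikipedia_summary("Robot"))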
That said, the age-old problem will crop up... dictation. The Windows speech API is renowned for having a very poor ability to understand words that are dictated, i.e. not part of a command set. Personally I think this is the biggest drawback of any voice-activated system. Jarvis suffers from this with some of his commands (look up on Google, add new items to the grocery list, etc.).
For the whole dictation thing to work well, it requires a very good quality microphone and a voice profile that has had extensive training. An alternative is to use a different speech engine such as DNS (Dragon NaturallySpeaking); however, that is not free, nor is it possible to simply replace Windows SAPI with DNS, so integration of DNS into ARC is required (something on my to-do list that never gets near the top at the moment).
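That difference between a fixed command set and free dictation is easy to see in code. With a known command list you can fuzzy-match whatever the recogniser thinks it heard against a handful of phrases, which hides a lot of recognition errors; with open dictation there is nothing to match against. A small Python sketch (the command phrases are just examples):

    import difflib

    COMMANDS = ["what is on tv", "turn on the lights", "add to grocery list"]

    def match_command(heard):
        # Map a possibly mis-heard phrase onto the closest known command, if any.
        matches = difflib.get_close_matches(heard.lower(), COMMANDS, n=1, cutoff=0.6)
        return matches[0] if matches else None

    print(match_command("what is on tee vee"))   # close enough -> "what is on tv"
    print(match_command("tell me a story"))      # not in the command set -> None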
My computer understands me pretty well through ARC.
With a set command list or with Pandora Bot?
A set command list would be easier for it to understand. The dictation required for Pandora Bot, on the other hand, not so much.
Good point. With a set command list.
An important aspect of our Ai is "associated memory", which is part of the Ai's self-learning algorithm; here is a video of it in action.
First we see the Ai core's associated memory working at the video input level - this video shows the core making associations from seeing a recognised face.
The video also shows the Ai core making associations based on its own general knowledge - it learns general knowledge through its "smart parser", which uses specialist Ai websites, and also from its tutors (the primary user and system programmers).
The Ai core attempts to make associations by itself; any errors in the associated data are corrected by the tutors.
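For anyone wondering what "associated memory" could look like at the data level, here is a very rough, generic Python sketch of the general idea (this is a guess at the concept, not Tony's actual implementation): each recognised thing (a face, a word, an object) is a node, associations are weighted links between nodes, and a tutor can correct a bad association by removing it:

    from collections import defaultdict

    # node -> {associated node: strength}
    associations = defaultdict(dict)

    def associate(a, b, strength=1.0):
        # Learn (or reinforce) a two-way association between two items.
        associations[a][b] = associations[a].get(b, 0.0) + strength
        associations[b][a] = associations[b].get(a, 0.0) + strength

    def recall(item):
        # Return what the item is associated with, strongest first.
        return sorted(associations[item].items(), key=lambda kv: -kv[1])

    def correct(a, b):
        # Tutor correction: remove a wrong association in both directions.
        associations[a].pop(b, None)
        associations[b].pop(a, None)

    associate("Tony's face", "name: Tony")
    associate("Tony's face", "interest: robotics")
    print(recall("Tony's face"))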
Tony