Hoping this will spark a huge discussion on what everyone is looking for when it comes to their robot's AI.
AI is something I've been working on since before I even learned of EZ-Robots. My JARVIS replica from Iron Man will be three years old come December and, while not started in ARC, over the last few months I've been porting parts of it over to ARC; those which are beyond the capabilities of ARC are integrated via Telnet. These include such things as voice controlled media playback, voice activated control of appliances, lights, etc. and, well, to be honest, far more than I can really explain right now.
Basically, up until now it has been built entirely around home automation and automated media acquisition, storage, playback and logging. Recently I have been integrating and porting parts of it into ARC, and where ARC is not capable of carrying out the actions itself, integrating via Telnet so that ARC (and its scripts) are aware of everything they need to be aware of. For example, when media playback starts, EventGhost sends ARC the script command $mediaplayback = 1; when it finishes, it sends $mediaplayback = 0 (that's a very simple example; it also sends more info on the media). This will be demonstrated soon by Melvin when I get around to making the video of him knowing what's on TV.
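To make the Telnet side concrete, here's a minimal sketch (Python) of how an external app like EventGhost can push a variable into ARC. The host, port and helper name are placeholders; check your own ARC Telnet settings for the real values.

```python
import socket

def set_arc_variable(name, value, host="127.0.0.1", port=6666):
    """Send an EZ-Script style assignment to ARC's Telnet interface."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(f"{name} = {value}\r\n".encode("ascii"))

# e.g. fired by EventGhost when playback starts or stops:
set_arc_variable("$mediaplayback", 1)
```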
Like I said, so far it's mainly based around Media and Home Automation. What I want to discuss is...
What do you want in your robot's AI?
What do you want him/her to be able to do without human interaction? What do you want him/her to react or respond to? What do you want the AI to enhance? Why do you want AI?
And, for anyone who already has some kind of AI running; What does your AI add to your robot?
Hopefully this will spark up some interesting conversation, get some ideas out there, inspire others (and myself) to push on with the AI and make robots more intelligent
Wow, you've been busy! I'm looking forward to seeing this
I want my robot to: (Partial List below)
- Be an intelligent chat engine and converse in a natural way.
- Hook up to Google when it doesn't know the answer and find it.
- Hook up to Wikipedia and answer questions.
- Give more RSS feeds like: Thought for Today, Quote for Today, This Day in History, Famous Birthdays Today, Religious Thought for Today, Joke for Today, Poem for Today, Tip for Today, Horoscope, Funny Quote for Today, Scripture for Today.
- Be a calendar: keep track of family items and remind people of appointments, meds, etc.
- Have a database stored and remember what it is told for retrieval later.
- In a household that has a big family, or club, or business, be a bulletin board that you could stick a post-it on (not literally).
- Games and songs.
- Be able to have OCR to read and solve math problems.
- Have a Siri-type program: make calls and send or retrieve email for you.
- Connect up to WolframAlpha.
- Project movies onto a wall with autofocus.
- Make a to-do list for you and let you verbally change it; make a shopping list the same way.
- Set up appointments with friends, relatives and clients.
- Ability to learn and demonstrate emergent behavior.
- Be curious and ask who, what, when, where, why and how.
- Know when something is funny and laugh.
- Be a security guard.
- Make intelligent decisions based on past experience.
- Have reinforced learning with praise, and scolding with "No!", and LEARN from HIS/HER mistakes.
- Know who each person in the family is.
- Hook up to the cloud and learn things from other robots.
- Have emotions and feelings built in.
- Have him intelligently guess at the right answer.

My Leaf robot has 17 emotions right now, and he remembers what a person looks like and how he feels about that person, be it good or bad. (I believe ALL of these things are possible.)
@DJ It's the good kind of busy, it's an escape from the day-to-day, mind-numbingly boring work, so it's all good. Getting lost in code and thoughts is relaxing (as funny as that sounds).
@Mel
This is pretty straightforward; the script is already written under the RSS News Feed topic. To change it to other feeds we just need the RSS URL. I'll revisit the news script later and post some detailed instructions on how to change the feed to something else and how to use the Personality Generator to randomly trigger a thought or quote etc.
RSS News Feed Script is here. If you want to post a list of RSS feeds you use I'll add in all options to the script - one script for all feeds (if my idea works).
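For anyone wanting to experiment in the meantime, here's a minimal sketch (Python) of pulling an RSS feed and reducing it to plain text ready for text-to-speech. The BBC URL is just an example; swap in any feed from the lists above.

```python
import urllib.request
import xml.etree.ElementTree as ET

def fetch_headlines(url, limit=5):
    """Return the first few <item><title> strings from an RSS feed."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        root = ET.parse(resp).getroot()
    return [item.findtext("title", "") for item in root.iter("item")][:limit]

for title in fetch_headlines("http://feeds.bbci.co.uk/news/rss.xml"):
    print(title)  # hand these to your TTS engine instead of printing
```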
The rest is great, I'll certainly look in to getting them in to ARC. Some are pretty simple (made more so since I already have them running), others not so much. Pulling info from the web can be a nightmare since it's never plain text so needs parsing, but that's not impossible.
You asked. I answered.
@Mel> I have purchased the Ultra Hal software. It does a lot of what you are wanting. It is trainable. I have been pleased so far.
zabaware.com
I purchased Denise. She does a GREAT job. But, I only have her on my desktop.
I knew you would answer Mel
Ultra Hal and Denise look (upon quick inspection) like chatbots more than total AI solutions, is that right?
To be honest, chatbots and chatting to a robot are very low on the list for the AI I'm working on, for a few reasons: partly because it's simple to implement with PandoraBots, but more because very little is ever really mentioned about the physical movements/actions performed by a robot while running only on AI.
My vision is to produce an AI which will act similarly to the Personality Generator, in that it has a whole bunch of actions that can be performed (ideas for those actions are very welcome), but rather than the timing-based system the Personality Generator uses, I want it to react to different external conditions with an action from a specific set.
For instance, say it's 3am and the robot senses a light turning on (from whichever method: sensor, data changed on HA logs, whatever); the robot is told to perform one of a group of actions. In this instance, the group could be "woken up at night" and the actions in that group could be "do nothing" (heavy sleeper), "wake up", "groan but don't wake" and "startled". Now, depending on that action, a range of follow-up actions are drawn upon. Let's say he was startled by the light turning on at 3am. He's going to be either "upset", "scared", "angry" or "confused". Depending on that, another group of actions will open up... and so on.
Basically it could be described as a fluid personality generator that varies depending on all data it is receiving (date/time/temperature/light level/sounds/home occupancy/etc.).
Things like games, reading RSS feeds etc. would all be the actions performed; I want to build the part that decides to perform those actions.
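Here's a minimal sketch (Python) of that idea: conditions select an action group, and a weighted random choice within the group decides what the robot actually does. All the group names, actions and weights are invented for illustration.

```python
import random

ACTION_GROUPS = {
    "woken_at_night": [("do nothing", 40), ("wake up", 25),
                       ("groan but don't wake", 25), ("startled", 10)],
    "startled": [("upset", 30), ("scared", 30), ("angry", 20), ("confused", 20)],
}

def react(group):
    """Pick one action from a group; the weights give the robot a bias."""
    actions, weights = zip(*ACTION_GROUPS[group])
    return random.choices(actions, weights=weights)[0]

# Light turns on at 3am: the first reaction may open a follow-up group.
first = react("woken_at_night")
follow_up = react("startled") if first == "startled" else None
print(first, follow_up)
```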
Thanks, Rgordon! Yes Rich, they are chatbots. Well, Denise is sort of a virtual assistant; she does many, many things besides chatting. There is a hotel complex in Brazil that she runs completely by herself.
It is very important that the robot maintain a database of facts given by the human so that he can pull them up when asked for information. I.e., in a family environment, the family members are always coming in and out. They ask the robot, "Have you seen Dad?" He says, "Yes, I saw him at 2:00 in the family room. He said to make sure you do your homework before going over to Katie's house." Then someone else says, "Have you seen my red ball?" and he answers, "Yes, it was on the coffee table at 2:34pm yesterday." Someone says, "OK, it is 3:30. I have my homework done and I am going over to Katie's house." The robot says, "Got it!"
Things like that.
There was a program called Answerpad that did something very close to that. If he answered wrong, you would correct him and then he would remember.
It is very important that the robot remembers the key facts of the day. Most robots don't do that.
Do you get what I am trying to say?
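A minimal sketch (Python) of that kind of fact memory, assuming timestamped observations keyed by subject; everything here is illustrative:

```python
from datetime import datetime

facts = {}  # subject -> list of (timestamp, fact)

def remember(subject, fact):
    facts.setdefault(subject.lower(), []).append((datetime.now(), fact))

def recall(subject):
    entries = facts.get(subject.lower())
    if not entries:
        return f"I have no memory of {subject}."
    when, fact = entries[-1]
    return f"{fact} (noted at {when:%I:%M %p})"

remember("dad", "I saw him in the family room")
remember("red ball", "It was on the coffee table")
print(recall("red ball"))
```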
Thanks,
Mel
@Rich
I think A.I. is the most important thing to me, even over the fun of building stuff. The thing I want the robot to appear to have most is a random curiosity about his environment. If he is going to be roaming around autonomously, it would be good for him to have a task or mission (goal) to accomplish other than just roaming around aimlessly avoiding obstacles... something that would require him to use his sensors. If he succeeds within a certain time period, then he would get a reward like a certain number of Confidence Points. Success or failure would bring about a simulated emotion like happy or sad.
Maybe a list of things to randomly choose from during the course of a day, like:
- Finding objects of a certain color. For instance, log how many red objects he can find; get him to ask if it is OK for him to try and pick up the red object from the coffee table (advanced behavior).
- Find a human to talk to and get that person to chat with him for a bit. For instance, he would consider it successful if he gets the proper responses to some of his questions; or he could ask them if he can help with something, like taking someone a beverage or snack, and consider it successful if he can accomplish this task.
- Investigate noises that he can hear.
- Your idea of getting him to notice and react to changes in his environment, like a change in lighting.
Etc., etc.... Others, feel free to continue this list....
Oh and don't forget about him trying to locate his battery charger when he is hungry
Rich, thanks for starting this excellent thread. I started a simple A.I. program about 10 years ago, written in BASIC. It kept an array of the dates, who it met, their answers to basic questions, and the last time it interacted with them. It was a crude A.I. that supplemented a Lynz servo arm with 2 servos underneath that I modified for continuous motion, and a Scott Edwards servo controller that I built from a kit. Look how things have advanced with EZ-Robot! :) I would like additional RSS feeds (similar to Mel's list, which sounds great); currently my robot uses NEWS and WEATHER. Maybe a list of additional available RSS feeds? The AIMEC EZ2 robot sounds very promising to me with the A.I. Ariel package and I.R. control. Thanks for your help. Steve S
Additional RSS feeds come in handy, I'll make a new script tonight (if I get chance) which will allow for multiple RSS feeds, it seems it may be one which has high demand.
It would be cool to develop an AI quick-tool to build a custom AI tree.
The SDK is also available, so you can modify it to your specific needs.
This is an Alice and Wallace type shell? It is very nice for a free unit. I had trouble using WolframAlpha and the wiki.
WolframAlpha is experimental at best anyway; last I checked (some weeks ago now) it was still under development and rather poor. Not to mention the results come back as an HTML page with varying information and layouts, making parsing the results a problem.
But, the wiki should work. Right?
I have thought of this too. I was thinking that if the robot had a mapping feature connected to the obstacle avoidance sensors, then when the robot found a barrier or obstruction, it would ask what it is. The human would tell it (for instance, "that's a chair"), the robot would then remember that it is a chair, and it could map out objects in a room that way. Also, after the room has been mapped, the human could tell the robot to go to the chair and the robot would remember where the chair is and go to it. Is there a program that allows this, or would this be too hard to do? Thanks, Clint
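That label-as-you-go mapping can start out very simple. A minimal sketch (Python), where the position estimate and navigation call are placeholders for whatever localization the robot actually has:

```python
landmarks = {}  # label -> (x, y) position estimate

def learn_obstacle(position):
    """When an unknown obstacle is found, ask the human what it is."""
    label = input("What is this? ")  # e.g. "chair"
    landmarks[label.lower()] = position

def go_to(label):
    target = landmarks.get(label.lower())
    if target is None:
        print(f"I don't know where the {label} is yet.")
    else:
        print(f"Navigating to the {label} at {target}")  # drive logic here

learn_obstacle((2.0, 3.5))
go_to("chair")
```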
What wiki? Do you have a url?
Basically put, when using an API for anything the info that comes back needs parsing. With text to speech, which is what we will be using, this needs to be in a specific format for the parsing script or application. With Wolfram (and possibly others) the information that comes back is displayed on a web page and the format is not consistent. This causes problems for the parsing scripts/apps.
You may be able to get around it by using multiple different methods of parsing the results but then you would need something to decide which method to use.
On the wiki, you will be prompted to enter a subject and it will go out and find the information on it. I should have put Wikipedia. I am sure you know what I am talking about.
Yeah I know what you mean. It's straight forward enough to use Wikipedia's API to look stuff up.
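For reference, here's a minimal sketch (Python) of a Wikipedia lookup through its public MediaWiki API, returning a plain-text intro paragraph ready for text-to-speech:

```python
import json
import urllib.parse
import urllib.request

def wiki_summary(subject):
    """Fetch the plain-text intro of a Wikipedia article."""
    params = urllib.parse.urlencode({
        "action": "query", "prop": "extracts", "exintro": 1,
        "explaintext": 1, "format": "json", "titles": subject,
    })
    url = "https://en.wikipedia.org/w/api.php?" + params
    with urllib.request.urlopen(url, timeout=10) as resp:
        pages = json.load(resp)["query"]["pages"]
    return next(iter(pages.values())).get("extract", "No article found.")

print(wiki_summary("Robot")[:300])
```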
That said, the age-old problem will crop up... dictation. The Windows speech API is notorious for having a very poor ability to understand words that are dictated, i.e. not part of a command set. Personally I think this is the biggest drawback of any voice activated system. Jarvis suffers from this with some of his commands (look up on Google, add new items to the grocery list, etc.).
For the whole dictation thing to work well it requires a very good quality microphone and a voice profile that has had extensive training. An alternative is to use a different speech engine such as DNS; however, that is not free, nor is it possible to replace Windows SAPI with DNS, so integration of DNS into ARC is required (something on my to-do list that never gets near the top at the moment).
My computer understands me pretty well through ARC.
With a set command list or with Pandora Bot?
The set command list would be easier to understand. The dictation required for Pandora Bot on the other hand, not so much.
Good Point. With a set command list.
An important aspect of our Ai is "associated memory", which is part of the Ai's self-learning algorithm; here is a video of it in action.
First we see the Ai core's associated memory working on video input level - this video shows the core making associations from seeing a recognised face.
The video also shows the Ai core making associations on its own general knowledge - it learns general knowledge by its "smart parser" using specialist Ai websites and also from its tutors (the primary user and system programmers).
The Ai core attempts to make associations by itself, any errors in associated data is corrected by the tutors.
Tony
Tony, A.I. Ariel is totally amazing, especially making associations from face recognition, with I.R. control and some emotion as demonstrated in your previous videos. Are the PC requirements real high? The program (at least the data) must grow fairly fast? Will the same PC control the AIMEC EZ2 robot and the A.I., or separate PCs with communication? It appears the system also learns from direct communication with the operator (tutor).
Thanks Steve S
Ultra Hal program from Zabaware learns from association through IF / THEN statements, and through general conversations. Also there is a brain editor for direct training. It will remember things for you, open applications for you, email, make phone calls, etc. It uses Windows speech recognition so you can speak directly to him (or her).
I'm still testing it out but am pleased so far.
Here is a link
Tony, now THAT is the way ALL robots should think. The A.I. will sell your robots for you.
:)
@Rgordon, I am going to step out on a limb here. I downloaded Ultra Hal to try him out. I don't see the thumbs up and thumbs down anywhere. I take it that this is an advanced feature costing more money?
confused
Forgive my ignorance on this but how does HAL (and other chat software) differ from Pandora Bot?
When I get the chance I'll download and trial HAL out for myself but as far as I can tell it has very little over Pandora Bot other than, I assume, it doesn't require an internet connection?
This is assuming the Pandora Bot in question is set up and trained correctly (which isn't difficult).
Thanks Steve. Regarding computer requirements, the laptop in the video is nearly 5 years old and is a dual core running under XP! The Ai intelligence resides in 2 places: the first being the local Ai algorithm in the laptop (or PC), and the secondary Ai is on our server, which enables our whole team to teach the Ai core new things (and correct incorrectly learnt data). I am currently teaching the core about wine to make it a kind of mini wine expert, my wife is teaching it about food types and recipes, one of our programmers is teaching it about pop music, etc. This way all our team (and one day their robots) benefit, as all their Ai terminals (or robots) have access to the server Ai and this joint knowledge; I suppose it's a kind of cloud computing. If I teach my Ai to do something, then all the other Ai devices (or robots) in the network know how to do the same thing, as they all have access to the same knowledge database. The local Ai also gets knowledge "on the fly" from the internet via specialist Ai websites, whose data is parsed into useful packets of information by our "smart parser"; if the new knowledge is deemed to be important then this also gets sent to the secondary Ai knowledge database on our server.
@Rgordon, our system is totally different to HAL, as it makes most of the associations by itself and not by if/then statements. It also works on the visual processing level (sight), making associations with what it sees. We have it working with our object recognition system, where it makes associations with inanimate objects; for example, recognising a can of beer (by brand labels, colouring, shape, etc.), the Ai then associates this with cans of other drinks like Pepsi and Coke and understands that it is a can of beverage that humans drink.
Tony
Tony, that is terrific! That system would be nice to have around the house. You could also sell the database once you get it to a decent amount of data.
Hal also learns through conversations. The more you converse with him, the more he learns about you and the topics you talk about. The IF/THEN learning is for when you want to specifically train him about something, much like teaching a child. You can also start with a basic untrained brain and only include the info you provide.
Here is an excerpt from the Zabaware site that talks about Hal.
About Zabaware's AI Technology
Zabaware is a software company that builds intelligent machines. We develop conversational systems (chat bots) that will give your computer a personality using artificial intelligence technology, speech recognition technology, and real-time animation. Our software can speak and understand the English language.
Our technology, called Ultra Hal, is like an inquisitive child and is capable of learning new things from conversations based on natural language processing technology capable of statistically analyzing past conversations. The algorithms behind Ultra Hal have been in development for over 17 years and the recent explosion of social networking has given the system a huge conversational pool to base its knowledge on. The system analyzes hundreds of thousands of conversations publicly available on social networks like Twitter and Facebook each and every day. These conversations, along with conversations the system has with its users, get assimilated into a large conversational database that becomes the base of knowledge for the artificial intelligence. Having a conversation with Ultra Hal is, in effect, like having a conversation with the "collective consciousness" of the Internet. Hal's personality is a reflection of humanity. Hal has the ability to learn. Ultra Hal will learn from every sentence that you say, and after a while of chatting Hal will develop a similar personality to yours.
Both a Companion and Assistant
Ultra Hal Assistant is a great conversationalist and you can have hours of entertainment just chatting with him. But in addition to being a great companion Hal can be an assistant and help increase your productivity. Ultra Hal can remember anything you tell him. Use Hal's memory to remember phone numbers, email addresses, street addresses, appointments, birthdays, or anything else that you can think of. Hal can automatically dial phone numbers for you. Hal can automatically start emails for you. Hal can automatically remind you of important dates.
Ultra Hal Assistant can run programs for you and offer help with programs. Hal automatically finds all of the Windows programs you have in your start menu. You can tell him to run any program you have and he will run it. It is much easier than searching through all the folders in the start menu to find the program you want.
Ultra Hal Assistant can help you browse the Internet by telling you what your local weather is, telling you the current news, and even performing an Internet search for you. You can also ask Hal the definition of any word.
Brain Stats: As of Saturday, October 12, 2013, Hal's knowledge is based on information learned from 19,537,997 sentences from 3,259,507 conversations with 1,326,785 people.
@Toymaker>
Your AI software sounds truly amazing! The ability to learn visually is an incredible leap in A.I. learning ability.
Will you make it available in the near future?
For my current work with AI I'd say my goal and design path are pretty close to what Tony described for his AI's memory association and tutor learning.
My previous AI work focused on emotional responses and chatty types of interactions. What I ended up with was something that could only learn a few things well before it became too slow.
My revised AI efforts are not focused on emotional responses to things. I have come to like designs which feature only moderate displays of human-type emotion. My main focus is on memory: how to store it, quickly search through it looking for connections to things over time, and make predictions which can be used to direct the robot. Along with that, I want my AI to have an awareness of when it needs help, which sounds similar to Tony's tutor learning.
I have to say that HAL lasted all of 30 minutes on my PC before I removed it when checking out what it was and how good it was. I was not impressed in the slightest. It seemed like nothing more than a chatbot, and when Pandora is free to use, simple to set up and customise, and can be integrated directly into ARC to run script commands, I couldn't see a single reason why anyone would pay for something that's already available free elsewhere (and, in my opinion, better).
Like I've mentioned before, chatting is only one part of AI and to be honest it's not the part I am looking at the most (it's a part but not the total package).
I've yet to see something that wins me over with things like:
- If the robot knows you are out of the house and detects movement, it alerts you (Twitter/NMA/email/SMS) and then investigates.
- If the robot detects a rapid increase in heat, noise, movement, light, etc., it acts on it.
- Automatic mapping and recording of where it has been and how to get back there, where obstacles are, etc.
This side of AI seems to be something that's rarely touched upon, unless I'm missing a lot of videos and websites in my searches.
@Rgordon, where is it that you tell Ultra Hal that his response is not correct, and how do you give him the correct answer? How do you activate his learn mode?
@Tony, I really love your A.I. program. A.I. has always been a "hot spot" for me for over 50 years, but I have almost always been disappointed in the offerings. Denise has come closer than any program I have experienced so far. I like the ability to open up the web and seek out the information if the bot does not know it. Ariel seems to do more than Denise.
Mel
;)
So Toymaker when will this robot / these products be available?
Mel, thanks, there is a lot of stuff I have not shown yet, like the NLP (natural language processing) front end. With this you can ask for things in many different phrases and it will still work, like "turn the TV on", "turn TV on", "TV on", "switch the TV on" etc. This is handy, as before we had this I used to always forget some of the exact phrases that had to be spoken, and often said the wrong phrase; with NLP this is not a problem. Our semantic parser is now very advanced and can sift through textual data and extract knowledge elements. The other thing we have developed for the Ai is "mini expert systems", where it can become a mini expert on certain subjects that the primary user is interested in. A lot of this data is taught to the Ai by the tutors; it's a bit like the way a child learns. When I get time I will do more videos of this and some of the other functionality.
@rgordon, Integration of the Ai core into the EZ:2 is a major part of the EZ:2 development, and it will be made available when the EZ:2 is rolled out.
@jstarne1, Because I had to completely re-design the 5DOF arm and develop the new ultra high torque servos, I am now approximately 2 months behind schedule for the start of beta testing, which should still be some time in the first half of next year. This is a huge project (the EZ:2 robot), and our team is small, so (development) delays such as these are inevitable with such a groundbreaking product. If beta testing goes well then it is likely that the EZ:2 robot will start to become available in the later part of next year.
Hopefully this week I will be putting up a video of the new 5DOF (smart servo) arm, which I am now really pleased with.
Tony
@ToyMaker, I can't wait to see more. :-( sorry about the delay.)
@Rgordon, now I understand. Good work. A little problem with grammar, however.
@Rich. The reason Ultra Hal seemed dumb and stupid was because he WAS. When you first start, he doesn't know much of anything. You have to train him like you would a child. After that, he becomes more intelligent. But he does have the ability to LEARN. And that in itself is exciting!
I went to the Applied Machine Intelligence website. I found this "The robot also has the ability to map the areas that it's working in, with a system that I developed called 'volume occupancy mapping'. Sensors in the robot include thermal imaging and IR ranging." This could be an answer for the robot to know where it is in the house. The Website is: www.appliedmachineintelligence.co.uk/robots.html
I developed volume occupancy mapping, it is explained here
synthiam.com/Community/Questions/3389&page=2
Tony
@Rich> Having a working AI is the key. Hal is OK I guess but, you bring up good points. It's not all it should be. Toymakers AI is giving great hope that things will change soon on this front.
@Toymaker>
Tony looks like I will have to be patient. I am extremely interested in your robot. To have a robot with a working AI, strong arms, and able to locate itself in the home will make this a very sought after robot. I need to start saving $ now. I will be anxious to learn the cost of this bot. I am thinking I may postpone any further purchases on my present robot in anticipation of this one.
Also....I think I have asked this before but, will the arms be available as a separate item or will I have to order a complete robot?
Rex
sorry Rich for getting a little off topic
@Rgordon, I was curious if you have found a way to tie ultraHal to EZB? It does what aamarelis wants with the math thing.
@MovieMaker>
I was hoping there would be a way to do it but, now that @Toymaker has something better... I will probably wait to see how that turns out. I really liked Hal and will keep him active on my laptop. He does a good job of learning from just general conversations. It will be fun to keep training him.
Ultimately the thing I think most of us are wanting is for the robot to have a useful purpose and not just wander around aimlessly.
Rex, I agree completely. While it's cool to have a robot roaming around avoiding things in my view it needs to do more than that. But what? And that's the big question.
yes, it is hard to top Toymaker's offering to the table.
(P.S. can't wait to see it!)
Rex, The EZ:2 is a kit robot, so parts like arms will be available separately. Don't postpone building your own robots, as your robots are some of the best I have seen! As long as your robots use the EZ-B (V3 or V4), you should be able to integrate our Ai core with your designs.
Rich, I totally agree with you and Rex; robots moving aimlessly around gets boring pretty quickly, but robots mapping their environment, intelligently moving around it (with mapping), serving drinks, being a mobile security guard, being a tutor to students and doing other useful (simple) tasks changes the game. The tutor idea is that I plan to take the EZ:2 into the classroom to actually take lessons on technology; I already have a few schools that want to participate. I see education as the biggest market sector for the EZ:2 robot.
Tony
A robot must be useful, have a purpose. A good A.I. is the key to all this. So here is my list of goals:
To perform a useful purpose the robot must have certain features built in:
* First and foremost, it will need a room localization and mapping feature. @Toymaker has a handle on this feature.
* Needs to be able to visually recognize certain things in its environment. @Toymaker has a handle on this feature.
* Needs to have face recognition. @Toymaker has a handle on this feature.
* To perform any type of work it must have arms or manipulators strong enough for any task it will perform. @Toymaker has a possible handle on this feature; it depends on what it will be required to do.
* Should be able to locate its own charger for recharging.
-Must be able to perform at least one household chore (to begin with).
* I think the first achievable task (for me) could be to transport and empty the small trash cans that I have in each room of the house into a central large trash bin that is in the kitchen near the back door of my house. I would have to incorporate some means for the robot to pick up the trash can and be able to open the large trash can lid. Each can could have a QR code for identification. How will it know if the can needs dumping or not? It may already be empty.
-Be able to provide a certain amount of security.
* Intelligently patrol the premises. Knows what is normal. Alerts owner or others if it detects something out of the ordinary.
-Be able to provide a certain amount of safety.
* Detect fire. * Mobile smoke alarm. * Gas alarm. * Carbon monoxide alarm. * Alerts owner or others if it detects anything: verbally, by alarm tone, or by phone.
-Be able to handle looking up things on the web such as weather, news, etc.
-Be able to retrieve and display emails on its screen.
-Be able to hold a reasonable conversation for purposes of entertainment or remembering dates and times.
-Be able to learn from conversations or through direct teaching. @Toymaker has a handle on this feature.
-Be able to play music on demand.
Even the most mundane of tasks around the home are incredible obstacles for most robots. I will endeavor to add more to this list as I think of it.
Rex, that's a brilliant list of goals! This is a great way to continue this thread and find out what functionality people are actually looking for from their robot companions. Did you ever see the older Ai video that I put up? Our Ai core already does most of the things you are looking for, like:
- Look up weather, news and TV listings, etc.
- Retrieve emails and also write emails with speech recognition (in those days it was with DNS10).
- A fully working conversation engine.
- Play music and videos on demand.
- An automated Skype interface.
In case you never got to see it, here is the video of where our Ai was in late 2008
Tony
This is extremely awesome! Any estimate of the cost for this A.I. when it does come out?
Never ceases to Amaze me.
;)
Lol Tony - "you're stupid"... AI: "I will remember that when robots take over the world"
My favorite was "who was the captain of the Starship Enterprise?" She noticed that question clearly had multiple answers and called him on his shenanigans.
@Josh, You have made a very interesting observation about the Captains of the Enterprise.
This is a perfect example of our Ai getting something wrong which then had to be corrected by tutors (me in this case); the core had come to the conclusion that there were 3 Captains when it was on one of its self-learning phases on the net.
Now, I am a massive Star Trek fan and immediately realised that the Ai core had incorrectly answered the question, as there were 4 Captains of the Enterprise; the other one is Captain Pike (could be our Rich!). So after this, the "knowledge element" on this subject was corrected, so if the question were now asked the Ai would say 4 Captains and include Captain Pike.
Once a (trusted) tutor has updated a "knowledge element" its associated confidence variable is increased. I use this (confidence) method to allow the Ai to recover from a lie or totally false data.
Tony
Haha I spell my surname differently so no, I haven't captained the enterprise
Generally it's speech feedback which has been, and probably always will be, the main focus of AI, although what I want is the physical stuff. For instance, if the robot detects dirt on the floor it will clean it up; if it knows it's time to feed the fish, it'll feed them, that kind of thing. Although usually these things need dedicated robots (i.e. Squeegy) due to the space required to add those functions.
Ideally I want my robot to be like Rosie from The Jetsons
Or Lois from Runaway
Basically a robot maid since I'm useless at cleaning up after myself.
@rich something like a large version of this?: mini robot maid
You know what, I actually prefer Lois (as pictured above)... She may not be the prettiest of robots but that's how I always expected a robot to look. Also, from a building point of view, Lois would be much easier to give the vacuum, clean, carry etc. functions.
I think I just convinced myself to change how my big (and by big I mean huge) project will look.
@Toymaker, does your A.I. Ariel program run under Ubuntu and/or ROS? Does it run under Python? Or is it strictly a Windows item?
Thanks,
Mel
Mel, it's just Windows at the moment.
Rich, Lois is also a fav of mine; Runaway was quite a good film. If you like this type of robot, then I think you will like the look of the EZ:2!
While on the subject of wish lists for robotic functions, what does the group think of the value of the robot controlling electrical appliances (like TV, audio) and lighting, etc.? Would you want your robot to have this ability? I value people's thoughts on this.
My idea for the EZ:2 robot is that all appliances and house lighting etc. will be controlled by voice or by the robot's Ai core, so (say) when in security mode the robot can activate house lighting when it detects unexpected movement etc. (possible intruders). Sort of an extension to what our current Ai (virtual human) does here
Tony
Impressive Tony. Ariel sounded sad when you turned her off and she said "goodbye".
I'm really looking forward to seeing how this system will work in your EZ:2's.
@Rich To help this topic flourish and become a major discussion thread, should we break it down into sections like you did for your tutorial thread? Threads would be created with a title that starts with the letters A.I.- followed by the (CATEGORY). A link to each section thread would then be added to the main Artificial Intelligence thread. This may make it easier to search, follow and flesh out ideas.
An artificial intelligence will only be as good as the sum of its parts. So attention to other areas like sensors, sub systems and body construction are equally important. It has to be equipped to handle the tasks or else it's just a chat bot.
Should we divide it up into two categories?:
A.I. for Mobile Robots
-Goals
-Mind / Code
-Vision
-Localization
-Sensors
-Main Chassis
-Arms
-Locomotion
A.I. for Home Automation
-Goals
-Mind / Code / HMI
-Vision / Video
-Control Inputs
-Control Outputs
-Sensors
More topics can be added. These were just some I threw out there.
Rich, I am glad you started this thread. I hope we can get some good participation!
Perhaps the two categories will be eventually married together with the Home Automation System controlling the robot to achieve certain physical tasks.
Rex, that's another neat idea you have, and a good breakdown.
My goal for the EZ:2 robot is for it to be both a household personal robot and a home automation system at the same time: one entity doing both functions.
Tony
@Toymaker my goals are similar.
I am using the one main project with just sound effects, like the Star Trek LCARS, and then switching to my personal A.I. by asking "Lawrence, are you awake?"
This way the voice command structure is the same for both.
Before switching projects I transfer the variables (write) out to a CSV file and then load (read) them back in again in the new project....
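A minimal sketch (Python) of that save/load round trip, with made-up variable names:

```python
import csv

def save_vars(path, variables):
    """Write variables out before switching projects."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for name, value in variables.items():
            writer.writerow([name, value])

def load_vars(path):
    """Read them back in once the new project is loaded."""
    with open(path, newline="") as f:
        return {name: value for name, value in csv.reader(f)}

save_vars("state.csv", {"$location": "kitchen", "$mediaplayback": "0"})
print(load_vars("state.csv"))
```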
I have also recorded a number of voice messages like ... "USB Device connected", or "System start up successful" to play via the windows sound... thus reducing workload on the EZ-Project.
I am working on having my AI location aware so that it loads the right variables for the location and responds appropriately....
It could be that there are a number of AI's running, all sharing data.... hmmmm who knows what's possible with EZ-B
Tony, I was wondering if it would be of any benefit to have a wireless camera located on the robot's hand so the hand would be able to track an object and also zero in on its location better than the head camera can do it? Are you going to have a ping sensor on the hand to measure distance to the object that it is trying to grasp?
Rex
Rich, what do you think about this?
Also, I'm thinking it may be a huge benefit to have the robot able to bend over so it can pick up things from the floor. This, however, would add a great bit of complexity to everything. But it would make the robot more capable of performing certain tasks around the house.
@rgordon, your concerns are very similar to mine. The vision in the hand might not be so important if the robot knows the distance between the hand and the head camera and performs the necessary calculations. I do find the ability to pick up objects from the floor useful.
On the subject of having the robot pick things up off the floor, I have thought of that as well. If you had a robot like Tony's, you could have a secondary helper bot, like a Roomba with an arm, that did tasks like picking things up and handing them to the taller robot.
On the subject of adding a camera to the hand, I have often thought of doing this, as the arm on a robot does tend to be more maneuverable than the head in most robots. Plus, as we humans want to see something close up, we grab an object and bring it closer to our eyes; but how handy would it be to have an eyeball in your hand? Weird - YES! Dangerous - YES! Awesome - For SURE!
But for overall sensors in the hand, I have found IR and touch/pressure to be the best fit so far in my robots. Touch/pressure will let the robot know how much force it is applying. IR gives a little better range up close than sonar, in my opinion. Plus a robot's hands tend to get dirty and dusty, and IR sensors are easier to clean. Those sensors also tend to be a little lighter. If a robot were to be in a kitchen, like Tony's prototype, I would think a temperature sensor would be a benefit to have in the hand.
Recently I worked on a one-armed robot that can pick up objects from the floor and raise them to a low table or a bin. The head is aligned with the arm, which is in the center of the robot, and the camera keeps eye contact with the object being manipulated in any position of the arm. I had thought of a ping sensor in the hand, sensors at the base of the robot to measure and detect the object on the floor, and color tracking with the camera, to make a smart combination of sensors.
Rex, yes, I was thinking about a camera in the claw/hand. This would be very useful with our object recognition system, as when the EZ:2 is (say) retrieving a user's favourite brand of beer from the fridge, the main camera in the head does not get such a clear view, so a camera on the hand is very useful.
On the claws, I use a micro Sharp IR ranger that detects objects at 100mm from the claw opening, this works really well.
I am working on using QTC (quantum tunneling composite) pills in the finger tips to detect holding pressure.
Tony
Scanning over the last few posts as time is something I don't really have at the moment...
I'd also considered using cameras in the hands, it's something I will be likely to use on the big project I occasionally mention that everything is leading up to. My only concern would be the processing power required by the three cameras that I envisage using, but we will see.
Also, with bending to pick up, the weight of the top half of the robot would need to be considered. I'd assume a standard servo, and probably even an HD servo, would struggle, so something more powerful, a worm-drive type deal, is most likely needed there. Again, this will also be used on my big build; balance will also be an issue with it, I guess.
I also use Sharp IR sensors on Melvin for his collision detection. They are the short range ones, so it is very rare they give any false readings, but the range is long enough to avoid any collisions. I find the IR do a great job at detecting the proximity of objects and are extremely accurate. The downside is they are expensive and require scripts to be running and checking the ADC ports constantly, which is a huge demand on the comms and processing; that's something I need to look at to see if there is a better way of doing it rather than looping an $ir1 = GetADC(ADC0) command.
IR on the hands, though, could be enabled only when the robot knows the hand is reaching for something, which would work.
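A minimal sketch (Python here, since the exact EZ-Script varies by setup) of that throttled, on-demand polling idea: poll fast only while the arm is reaching, slowly otherwise. read_adc() is a placeholder for the real EZ-B call, and the threshold is arbitrary.

```python
import time

def read_adc(port):
    return 0  # placeholder: substitute the real ADC read for your controller

def poll_ir(reaching_for_object, duration=10.0):
    """Poll the hand IR sensor, throttled to cut comms traffic."""
    interval = 0.05 if reaching_for_object else 0.5  # seconds between reads
    end = time.monotonic() + duration
    while time.monotonic() < end:
        if read_adc("ADC0") > 200:
            print("Object close to hand")
        time.sleep(interval)  # throttle instead of a tight loop

poll_ir(reaching_for_object=True)
```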
Couldn't you just use a distance equation? Arm distance from the camera or sensor, object distance from the camera or sensor. After a bit of math, said robot knows the distance where it would need to stand or park and bend (already calculated). After executing, check if the object is now in the hand. Or even better yet, once bent or parked, scan or see the distance from the object and move or walk accordingly. This would remove any need for sensors in a hand or claw. Just a spitball; I haven't been on in a while, so I don't know if anyone has brought that up.
The attraction for having cameras in the hands/claws for me would be to aid in finding moving objects too, tracking objects etc.
Imagine throwing a ball to the robot: the head cam sees the ball, the robot knows which arm to lift, the arm camera comes into play and sees the ball too, and between the two of them (with some jiggery pokery) the exact position of the ball is calculated. All just ideas up in that mind of mine at the moment, but I hope to put them into practice eventually (when I can afford to get on to the android).
Would there be a way to save on processing power by somehow switching from one camera to the other as needed? The head camera gets the robot and hand near the object of interest then it switches over to using the hand camera for the close up stuff. Is switching between cameras even possible or practical?
For bending over, a linear actuator would be the way to go. There are many out there that are controlled just like a servo with feedback. Plenty strong also. Just pricey $$$. Depends on how much you want to sink into a project like this. I think a robot used for household chores will only be useful if it can bend over.
firgelliauto.com
Possible? No doubt; I'm sure Rich thought up the code just reading it. However, practical? Two hands are better than one, and in this case two cameras. However, unless there is proof that it is better or worse, I'd say it would be a good thing to test.
Another idea for AI I would like to see: say a component breaks or stops functioning, like a gripper breaking or an arm servo failing; the robot would seek out its owner to inform them it needs maintenance or repairs.
Rich, the Sharp IR sensors that I use on the robot's hands/claws are digital, not analogue. They give a digital output if an object is detected between 2 and 10 cm away, so you do not need to connect them to an ADC port and read them as analogue, which takes more of the EZ-B's resources. Here is a link to them.
www.active-robots.com/sensors/object-detection/distance-measuring/pololu-carrier-with-sharp-sensor-1.html
Tony
Thanks, I'll give them a try - I have a little project to add to my home automation that will need 3 "proximity" sensors of some kind so they will come in handy - they are also cheaper than I was getting the analogue ones for. However the project will use all 20 digital ports so I may need to rethink that side of things (they are basically switches so a bit of multiplexing will probably work OK - or use a V4 board with the 24 ports).
That also makes scripting for them a whole lot simpler and smoother.
Now that I have the EZ:2 robot arm design nailed, the next design on the list is the "locomotion" overseer processor.
We have talked in this thread about robots aimlessly moving around; well, this overseer will be a neat addition that will let the EZ:2 robot move around its environment reliably and usefully. This is another function similar to my "volume occupancy mapping", except the robot does not have to learn the map before it can start crossing areas and missing fixed objects.
This is how it is going to work: the overseer processor lets the EZ-B control the main drive motors in the normal way, but there is also a learn function, so if you want the robot to go from point A to point B (accurately) then you first teach it with a miniature RF transmitter. After the path route has been completed, the string of (accurate) movements is logged into a table file in the overseer. I will probably also do an iPod and iPad version of the transmitter teach unit.
This requires a few things. First, the robot really needs to have a known start position; with the EZ:2 we will have a charging pod (dock) where the robot goes when not in use and automatically recharges itself. It also requires good quality odometry, which we have in the main locomotion drive system thanks to the high accuracy encoders on the drive motors.
ARC does have a recorder function which is similar, but because the EZ-B cannot handle wheel encoders it cannot accurately move from one position to another, so the longer the trip route between positions, the bigger the error that builds up. The other neat thing about the overseer is that it can store a number of trip routes, like from pod to kitchen, from pod to living room, from pod to dining room, etc., so if the robot is at its pod and is called into the dining room it uses its pre-taught path to get there. Control of the overseer by the EZ-B will be via the I2C bus, so the EZ-B just has to send the command "path 7" (the path from pod to the dining room); the overseer then does the rest and flags the EZ-B when it reaches its destination.
Applications for this are things like pre-training the robot to serve drinks to people in certain chairs, say in a living room. The robot will be taught where each armchair is and will go to each in turn; if no human is detected at a chair it will move on to the next, and from the last chair it will trundle back to its charging pod and await further instructions. Also possible are things like the robot being called to locations by its user. It also gives the robot the ability to cross various rooms to get to the end destination, so pod to living room (route), then living room to dining room (route), etc.
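As an outside illustration of the teach-and-replay idea (not Tony's actual implementation), here is a minimal sketch (Python): record a string of drive movements during training, store them as a named path, and replay them later from the known start position. The drive() call stands in for real encoder-based motor commands.

```python
paths = {}  # path name -> list of (command, amount) steps

def teach(name, steps):
    """Store a route captured during an RF-transmitter training session."""
    paths[name] = steps

def drive(command, amount):
    print(f"{command} {amount}")  # placeholder for an encoder-based move

def run_path(name):
    for command, amount in paths[name]:
        drive(command, amount)

teach("pod_to_dining_room", [("forward", 2.5), ("turn", 90), ("forward", 1.2)])
run_path("pod_to_dining_room")  # e.g. triggered by "path 7" over I2C
```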
Tony
More very important items:
Also, when he gets to a certain stage where he has to make a decision, give him MANY, MANY options and then let HIM decide. Let him pick which one he wants to do.
It will be easier for him to make that decision if you build his confidence from past choices.
There is nothing more enjoyable than watching a robot do his own thing. RC-controlled machines are NOT robots if they cannot function autonomously on their own. He has to do his/her/its own thing.
I cannot stress that enough.
I have known this since the mid '70s, but it looks like, my whole lifetime, I have NOT been able to achieve it in a working environment.
....just my 3 cents.
Mel, you really like responding to very old threads... This particular thread was last posted in (before you) in Oct 2013 (last year)... Not sure how relevant it would be now.... However, since you brought this thread back to life... I wouldn't feel bad about trying to create a true "thinking" robot.... Artificial intelligence is a long way off... Even the lowly cockroach is still more intelligent than anything a human has ever created.... And with the human race getting dumber and dumber, it's not looking good either.... Remember the geeks that put Neil Armstrong on the moon? Those guys are the remains of our greatest and, in my opinion, the smartest generation we will ever have...
Smart people are not having as many offspring as dumb people... Guys like Rich and DJ are being slowly bred out of the human race....
It's old, yet I still haven't got around to demonstrating all the stuff I had been doing... I really need to get on with finishing Melvin so I can do some demo videos.
When it comes to Artificial Intelligence there is always a great conversation to be had. We are all robot builders here and no doubt A.I. is something we'd like to see expressed in our robotic creations if even in a small way.
The number one question that always comes to mind for me is, "What is the most basic core fundamental of A.I.?"
And I flip-flop on the answer because I don't know. I own and have read a stack of books on the way the mind works, robotics, and A.I. that is at least 24" tall (not to mention all the online material), and I still can't pinpoint the most basic core functionality of A.I.
Is it pattern recognition and being able to make a guess as to what is going to happen next?
Is it storing memory of past events and using that memory to solve new problems?
Is it simply problem solving, period?
What do you guys and gals think?
I think you nailed it, Justin... We humans use terms like good feeling, bad feeling, instinct and gut feeling... Add that to accumulated life experience and we get an incredible decision-making mechanism... How can we implement this in artificial intelligence if we don't really quite understand it ourselves?
Very interesting, Rich. Artificial intelligence is something I have been interested in for many years. My attempts have left me wanting something more. I will be following all your input. Looking forward to the Aimec and Altair robots by Tony the Toymaker. Steve S
Well, when we are talking about AI, most of us really "want" a machine that has a "mind" and a "consciousness", which means it would be self-aware. The concept is incredibly complicated, but it seems like a clever and very complex "web" of interconnecting functions is better than really trying to emulate the way a natural thing learns and thinks. Plus, imagine if you made a moody robot; you would never get anything done, lol.
That's the first problem/mistake: thinking that it's very complicated. The core of anything is extremely simple. Throw a whole load of simple actions in a robot, set the Personality Generator going, and watch the robot impress anyone with its lifelike mannerisms.
Sure it doesn't do much but neither do humans...
Case in point: out of all of Melvin's functions, it's his sneezing which impresses the most. "Wow, did he just sneeze?" Something so simple. Build on that.
P.S. I have a moody robot, Jarvis is a pain to deal with since I have set him up in an attempt to gain some kind of routine (since my routine is all over the shop hence not getting too much sleep ever). Constantly telling me "no" when I ask him to do something like put on a TV show or movie or play music, always turning the TV off after specific times... I think I based his current program on my mother!
I want my robot to be smarter than me or at least know how to spell lol
Here you go
That's cruel Rich... LOL... Funny, but cruel...
Hmmm, looks like sarcasm to me.... confused Well, I think Rich can afford it every now and then.
I have been looking at this AI system and have been playing with some code, seeing how I can, or if I even want to, use this in my final project. I thought some of you would be interested in this.
http://accord-framework.net/intro.html
I just stumbled across this thread. Serendipity, I think, as I've been trying to determine what the best AI engine would be for my robot.
So, to date, what's the most reasonable choice of AI to incorporate into EZB? I have been fooling around with the HAL program on and off for some time; but it appears that there are better choices.
The AI I would like would be a good conversationalist, with many of the features mentioned: send a text or email if temperature or humidity changes drastically or if an "intruder" is detected, etc.; control some basic home appliance functions (TV, lights); and provide information from internet sources such as Wikipedia and RSS feeds.
What's out there that's available to me?
WP
@WarPig I would view some of those tasks as separate but I suppose they could all be combined. Rich has ARC running both a mobile robot and a home automation setup, so it is possible.
For conversation, I would imagine the PandoraBots option in ARC would be a good choice. From what you are describing, I personally would not call those functions artificial intelligence. Items like sending an email when conditions are met, or turning things on and off in the house, could be done with low-level scripts and IF-THEN logic.
Applying intelligence would be something more along the lines of, say, an alarm is triggered at 5:45pm, but that is also the time you arrive at home 90% of the time M-F, and today is Tuesday... an A.I. system might choose, rather than sending you a text blindly, to wait to see if you enter the shut-off code, wait a minute to see if it recognizes you, or take some other option not directly programmed within an IF-THEN script (in my opinion).
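A minimal sketch (Python) of that weighing-of-context idea, with invented thresholds and data:

```python
from datetime import datetime

def on_alarm(now, arrivals):
    """Decide how to react to an alarm. arrivals: past home-arrival times."""
    same_day = [t for t in arrivals if t.weekday() == now.weekday()]
    near = [t for t in same_day
            if abs((t.hour - now.hour) * 60 + (t.minute - now.minute)) <= 15]
    likelihood = len(near) / len(same_day) if same_day else 0.0
    if likelihood > 0.9:
        return "wait: probably the owner; watch for the shut-off code"
    return "alert: send a notification now"

print(on_alarm(datetime.now(), arrivals=[]))
```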
I believe intelligence is taking information (memories), making connections with that information (collating, relating it and finding causality), and then using that information to solve problems or set goals.
Well actually, I have ARC along with some other programs for my Home Automation setup. I run ARC alongside VoxCommando and EventGhost (plus a few other backends running for automation of downloading TV shows etc.).
PandoraBots for conversation is great; however, it's better after you spend the time to train the bot and give it some personality other than the default. You would need to sign up for a custom bot, but it's free (donations are accepted, though, and I would urge you to donate so they can keep providing the service). Recognition is the issue there: you need very good speech recognition for a satisfactory conversation.
Rather than text or email, which could become either costly or annoying (texts could cost, emails could fill an inbox and bury important emails), I use NotifyMyAndroid. I provided a tutorial on how to use its API in ARC here. For iOS devices there are other apps and APIs available, and the process would be very similar. However, email or text alerts could be easy enough with an API or PHP running on a home server, etc.
As Justin said, it's just an IF really...
A real world example I use for NotifyMyAndroid (note: My API key has been removed, get your own)
The notification script is called by the IF; this way you can use multiple scripts that notify without writing out the notification part each time.
Then there is the script which monitors for activity (i.e. motion on the EZ-B camera) and, when required, notifies.
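Since the original script isn't reproduced here, the following is a minimal sketch (Python) of a NotifyMyAndroid-style notification: an HTTP POST to the service's public notify endpoint. The URL and parameter names follow NMA's public API as I recall it, so verify them against the official docs, and use your own API key.

```python
import urllib.parse
import urllib.request

def notify(apikey, event, description, application="Melvin"):
    """POST a notification to the NotifyMyAndroid public API."""
    data = urllib.parse.urlencode({
        "apikey": apikey, "application": application,
        "event": event, "description": description, "priority": 1,
    }).encode("ascii")
    url = "https://www.notifymyandroid.com/publicapi/notify"
    with urllib.request.urlopen(url, data=data, timeout=10) as resp:
        return resp.read()

# notify("YOUR_API_KEY", "Motion detected", "Movement on the EZ-B camera")
```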
For control of home appliances I use EventGhost and Python. I have a USB-UIRT and a USB RF transceiver attached to the PC, which send and receive IR and RF signals so they can control all remote-controlled home appliances, i.e. TV, amp, cable box, lights, etc. This can run alongside ARC and talk to ARC via Telnet (via Python).
You can set up PandoraBots to search the internet, I believe. Personally I use VoxCommando for this and use payloads to limit the phrases, which increases accuracy, but I don't see why you couldn't use PandoraBots, or even use a phrase list (a large one, granted) and HTTPGet. Pandora would be the best option in ARC.
So it's all doable. Some is quite simple really.
I found this interesting. Showing how you can have intelligence from very little input and output.
This reminds me of swarm logic in some ways. If you look at how it solved the maze, it's really amazing.
What humans can learn from slime
I saw Rich's response on spelling and squirted some Sprite, haha.
I find this thread very interesting since it takes your average ez robot project past the point of being a toy and gives it function.
Two things I would love to add are the ability to read my email and Facebook threads to me. How difficult would that be to accomplish?
I also love the idea of getting the robot to read RSS feeds, and the ability to detect an increase in temperature, such as a fire, and contact someone about it.
@Toymaker>
I see you are programming some recipes for Ai intelligence. I have some data files that go with a program called: "NOW YOUR COOKING". I have been working on this for sometime now (5+ yr.'s) and I have come up with over 1500 cookbooks, and over 480,000 recipes, and the database is still growing. If this data could help you out, I would be happy to send you a copy, the database is not copyrighted and by it self alone is over 12,000 files taking up over 1.2 gig's. If you wish I can place the files in a location where you can download it, or I can place all of the data on a DVD, and mail it out.
If EZ-Robot thinks the data could somehow be incorporated with the A.I. or ARC, or would otherwise be a good resource, I would be happy to upload it to the cloud.
Now your cooking website:
"Now Your Cooking" Website
Now your cooking recipe database:
"Now Your Cooking" Recipe Website
I have included a file from one category you can look at:
0,2625,cajun,00.zip
P.S. If anybody would like a copy of this data, I will be happy to post it where anybody can download it. Remember, the data is not copyrighted; just the program is.
Sincerely,
Dave Johnson dgjohnson9044@live.com
@Dave_J, that's very generous of you to share all those recipes. Our AI ARIEL only has a fraction of what your database has, but we still have very cool interactions where she recommends food for us, and it is mostly good! The plan is for the EZ:2 Robot to have this ability so it can be called to the kitchen and assist with meal preparation advice; another cool feature for a (useful) robot.
I will look into using your database and get back to you at some point, if that's OK?
Tony
@DGJOHNSON9044, that is very interesting! For one, I love to cook and am always looking for new recipes, but it would also be neat to have the EZ-Robot recommend a dish or provide advice like Tony said.
I took a look at the recipes for download; they come in .mmf file format, but it looks like Notepad or any other text editor can open and view them. I take it we don't need to download the application? Would the NYC app read these files and sort them?
@Toymaker
Tony, is your AI available to purchase and use in our robots?
JustinRatliff
@Toymaker
The program "Now Your Cooking" can import the database and export it to a Text file, here is what is lists one of the features: edit, delete, email, print, export recipes from search results
Here is the website of the available features:
"Now your Cooking" Features
I kind of know the guy that wrote the software; if there is a need, we might be able to have some additional features added on for the EZ-B.
Tonight, I am going to place the recipe database up on the EZ-B Cloud so anybody can get a copy. It will be in two different formats:
The entire list as one file, and another set of files split by category. File names: Receipe-List.Zip and Receipe-list.txt.
Enjoy,
Dave
@Dave, that cookbook database would be awesome. That's one of the ideal ways that robots can interact in a practical way.
Jstarne1, Toymaker & JustinRatliff, along with anybody else that likes to cook:
Good morning. Since this is a fairly large project (over 3.0 GB of data), I am going to place the entire "Project - Cooking" in four different locations, just in case one is down for some reason:
Internet:
1: Microsoft OneDrive
2: R2-D2 Robotics Log Site
3: EZ-Cloud
4: FTP Interface: ftp://www.superior-mall.biz/cooking << Coming Soon >>
Here is additional information that I think JustinRatliff could utilize within the A.I. platform; keep in mind they have been building this application for years. Check out the "Now Your Cooking" Tips area directly: around 450+ tips within 17 categories by subject, with more coming out each month: "NOW YOUR COOKING" TIPS Category LIST
To make the "NYC" application work in autonomous operation, check out the "Cool Stuff" page: "NOW YOUR COOKING" Cool Stuff
"Now Your Cooking Household" - Specifications: Kitchen Hints and Tips blog Dinner Co-op Tips & Terms Cook's Thesaurus Culinary Glossaries Recipe Substitutions Nutrition Data Cookbook Store
"Project - Cooking" - Specifications: Project Construct - Time Frame: 5.4 yr's Project Data Size - 3.18 gb's Project Files - 27,803 files Project Folders - 91 Folders
I should have the entire "Project - Cooking" completely uploaded to all of the sites in question sometime by the 8th of August 2014.
The database comes in two forms, compressed and uncompressed:
Recipe category files (compressed): 903 files, 112 MB. Recipe database (uncompressed, complete in one directory): 12,256 files, 1.2 GB.
"NOW YOUR COOKING" Cookbook - Specifications: USDA Nutrition Database - 8463 Items Onboard Cookbooks - 1,499 Database Category Filenames - 1,584 Recipes - 423,253
Note: The program "Now your Cooking", Version 5.91, must be downloaded from thier website:
"NOW YOUR COOKING" Downloads
You might also get a copy of the Grocery Database to import for your shopping needs: "NOW YOUR COOKING" GROCERY LIST
Note: In the meantime, while everybody is looking into the "Project - Cooking" files, I am going to try and see if it is possible to add some additional features for autonomous operation.
P.S. Jstarne1 or Toymaker - do you have an EZ-Robot standard for design, construction, and planned usage for add-ons to ARC & EZ-Mobile functional projects?
Have a nice day...
Dave
@Dave
I also love to cook! The NYC software is reasonable enough, so I think I'll purchase a copy.
Question: if you were to integrate the NYC software into EZ-B, would that add-on be available as an update for NYC owners?
Tex
Sorry gentlemen, I was out of town the last couple of days. Here is the link for the recipe database:
The link is as follows: www.onedrive.com, Login: dgjohnson9044@live.com, Password: AlphaCommand1
That should get you to the files. I am also setting up the FTP version as we speak.
Dave
This was an interesting video on intelligence from a TED talk:
If you watch it, what do you think? I've never thought of intelligence in these terms. I find it weird to think of it in physical terms at all to be honest.
Something for you to consider... I know the PandoraBot control has been scrutinized due to the limitations of Speech Recognition. However, if you pause the PandoraBot module, then it will stop receiving speech recognition. This allows you to use the ControlCommand() syntax to send text to it. By doing this, you can have a collection of "YES" / "NO" responses embedded in the scripts to send to PandoraBot using ControlCommand()
There are two ways to do it...
Use the Speech Recognition module, which sends ControlCommand("PandoraBot", SetPhrase, "yes"), etc. on the response "yes".
Use the WaitForSpeech() command and send the ControlCommand("PandoraBot", SetPhrase, "xxxx")
This allows you to take further advantage of the PandoraBot module. You would need to create your own personality that accepts more specific commands.
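A quick sketch of the second approach (using the SetPhrase command exactly as above; the 10-second timeout is arbitrary):

# Pause the PandoraBot control first, then run this.
$reply = WaitForSpeech(10, "yes", "no")
IF ($reply = "yes")
  ControlCommand("PandoraBot", SetPhrase, "yes")
ELSE
  ControlCommand("PandoraBot", SetPhrase, "no")
ENDIF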
OK, Richard, you will have fun with this. Here is one thing I forgot to mention that is VERY important with regard to making decisions.
In the book How to Build Your Own Working Robot Pet, the author talks about making good decisions through confidence levels.
Here is an example:
Select a random number from 0-3 (four possible directions).
If the confidence level for that direction is not zero, go in that direction.
Say the robot selects forward through the random number. His confidence level for forward is 3.
He bumps into a wall, and his confidence level drops to 2.
He goes forward again, and it drops to 1, then 0.
Once it is zero, the robot knows the wall is there, so he will not go in that direction until his confidence in that direction builds back up.
ALWAYS LET THE ROBOT DECIDE WHICH TASK TO DO. He will also select actions, directions, etc. this way. Give him a group of actions, say (0-15), and let HIM decide which one to take. At any given point he will NOT be told what to do, only that HE MUST DO SOMETHING. And HE will decide; it will be HIS choice.
At the same time, if he has a success in a direction, 1 is added to his confidence level for it until it gets back to 3.
So, you see, the decisions are intelligent, based on experience.
If you wanted higher resolution, you could have levels of up to 16 (0-15). This would be more accurate, but at the same time much, much slower.
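To make that concrete, here is a rough sketch of one direction's worth of the scheme in ARC script (variable names and the $bumped check are placeholders; a real robot would read a bump or ping sensor there, and this assumes a build of ARC that evaluates plain arithmetic):

# Confidence level for the forward direction, 0-3.
$confForward = 3
:decide
# Pick a direction at random; only forward (0) is sketched here.
# (Check whether GetRandom's upper bound is inclusive in your build.)
$dir = GetRandom(0, 3)
IF ($dir = 0 AND $confForward > 0)
  Forward()
  Sleep(1000)
  Stop()
  IF ($bumped = 1)
    # Failure: lose confidence in this direction.
    $confForward = $confForward - 1
  ELSEIF ($confForward < 3)
    # Success: build confidence back toward 3.
    $confForward = $confForward + 1
  ENDIF
ENDIF
# ...same pattern for reverse, left and right...
Goto(decide)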
NOTE: We MUST keep THIS thread alive! Hope this helps or inspires someone.
I have that book, "How to Build Your Own Working Robot Pet", along with some other old ones. I don't know of any example where a setup like that has been used in the past 30 years. The closest comparison I can think of for these threshold and confidence levels is neural nets.
I think the method in the book could be useful, but compared to newer decision making methods, do you think this method still holds value?
@Mel... here is your dream learning algorithm. I did not write this. Although written for the Arduino, it should be easy enough to port to an ARC script, but it would probably take a bit of time. I have run this on one of my Arduinos and it works very well. I also have the Arduino .ino file if someone wants to take a crack at porting it to an ARC script...
Stochastic Learning Automaton:
A stochastic learning automaton is used to obtain supervised machine learning. The robot has a given set of possible actions, and each of these actions is tagged with the same probability at start-up. An action is then randomly selected when a matching event occurs, and the robot waits for input from the user (or evaluates by itself against given targets) as to whether it was a good action or not. If it was good, this action is tagged with a higher probability, and the other actions with lower probabilities of being chosen if this event occurs again, and vice versa.
Besides learning to avoid obstacles, the algorithm will be used in chat mode. Instead of actions, the robot randomly chooses topics. Based on your responses, the robot learns after a while which topics you want to talk about and which not so much.
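For a feel of the mechanism in ARC terms, here is a minimal sketch (this is not the Arduino draft; it uses three actions, integer weights, roulette-wheel selection, and spoken "good"/"bad" feedback standing in for the teacher):

# Start all actions with equal weight; higher weight = more likely.
$w0 = 10
$w1 = 10
$w2 = 10
:learn
$total = $w0 + $w1 + $w2
# Roulette-wheel pick: each action wins with probability weight/total.
$pick = GetRandom(1, $total)
IF ($pick <= $w0)
  $action = 0
ELSEIF ($pick <= $w0 + $w1)
  $action = 1
ELSE
  $action = 2
ENDIF
Print("Trying action " + $action)
# The teacher grades the action by voice.
$verdict = WaitForSpeech(10, "good", "bad")
IF ($verdict = "good")
  # Reward: make this action more likely next time.
  IF ($action = 0)
    $w0 = $w0 + 1
  ELSEIF ($action = 1)
    $w1 = $w1 + 1
  ELSE
    $w2 = $w2 + 1
  ENDIF
ELSE
  # Punish: make it less likely, but never impossible (floor of 1).
  IF ($action = 0 AND $w0 > 1)
    $w0 = $w0 - 1
  ELSEIF ($action = 1 AND $w1 > 1)
    $w1 = $w1 - 1
  ELSEIF ($action = 2 AND $w2 > 1)
    $w2 = $w2 - 1
  ENDIF
ENDIF
Goto(learn)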
I have attached a first draft of the Arduino source code and added it below. You can test it by just using the serial monitor. I am sure the code can still be simplified and cleaned up. For bug reports and suggestions, feel free to post a comment.
Well, looking through the code, I don't see how this would work. But I will trust you on that. I would like the Arduino .ino file. I guess from looking at it, it is an Arduino C file. I have removed the happy faces and replaced them with the ")" bracket.
I think this method still applies, but if we could find a reasonable "fuzzy logic" algorithm it might work better.
Thank You on this.
Mel
Oops, I posted the wrong code snippet... Corrected above... Now the code works perfectly, Mel. As mentioned, it will need to be ported and tweaked for use in ARC and in individual projects. The code above is written for the Arduino. It is a decent "learning" algorithm that has many uses in our robotics.
That is a LOT of converting/porting. For some reason (maybe I need medical intervention?) it sounds like a fun project to port it over and apply the basic functions in ARC. I'm starting to port it. Anyone else want to help?
This type of AI is called a knowledge base, first publicly known through a program called Animals, where the computer would guess the animal you were thinking of. I know this is not the same application, but it uses a decent knowledge-base flow.
If you want to make a port, you may want to search for the Animals program; it has been ported to almost every language known to man, so you will have more references to base your port on.
I didn't draw the conclusion that it's a knowledge-base type of program. I use knowledge-base logic at work often, and I consider it structurally more similar to IF/THEN where you front-load information, or like an expert system.
For example, if I recall, the Animals program (the one I remember seeing an example of) asked questions like "Does the animal have 4 legs?". Depending on the y/n answer it would then pose a follow-up question, each time narrowing down the list of possible animals.
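Something like this toy sketch of the flow (the questions and animals are invented for illustration):

# Each yes/no answer prunes the candidate animals.
Say("Does the animal have four legs?")
$a1 = WaitForSpeech(10, "yes", "no")
IF ($a1 = "yes")
  Say("Does it bark?")
  $a2 = WaitForSpeech(10, "yes", "no")
  IF ($a2 = "yes")
    Say("Is it a dog?")
  ELSE
    Say("Is it a cat?")
  ENDIF
ELSE
  Say("Is it a bird?")
ENDIF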
Can you explain what you see in the code that makes it appear like a knowledge base?
First stab at a port to ARC. It's not finished!
My questions for the next steps are what to replace Serial.print in/out with, and I'm not sure what to replace "return" with. Looks like return might be used as a print command? And then the gcd function in the Arduino code, Greatest Common Divisor... not sure what to use there.
Last Updated 9/5/14
Wow Justin, you've got some serious patience... I am glad you're attempting it though. If you have an Arduino, try it out... or see how it goes once you get it ported. I can see a real use for a basic learning algorithm.
Just learned Serial.print is like ARC's Print. Serial.println does the same but adds a carriage return. Serial.read is meant to read in data on the serial port... I'm going to chuck that and replicate input via script to virtually represent incoming data. So that's not so awful.
Some of the math is a little iffy right now. The part I'm pondering is "gcd", which should not be a variable like I have it listed. It appears to be a math function in the Arduino code to find the Greatest Common Divisor. I could probably use some help tackling how to do that.
I updated the code two posts back. It should be a little closer and also has notes on the problem areas. The biggest is a Euclidean algorithm in ARC to find the greatest common divisor. Many other A.I. processes use this type of function (I wish they wouldn't use math! I hate MATH!)
If a Euclidean function magically appeared in the next release, that would be swell!
As a workaround, I suppose it might be possible to create a Euclidean script that compares values from other scripts; then it could be used by anybody for anything. It makes me very sad (lol) to think of scripting that process though (Whaaaaaa!)
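For what it's worth, the subtraction form of Euclid's algorithm scripts up pretty small. A sketch (it assumes $a and $b start as positive whole numbers, and wraps the subtraction in Abs() so the math evaluates):

# Greatest common divisor by repeated subtraction.
$a = 48
$b = 36
:gcd
IF ($a = $b)
  Print("GCD is " + $a)
ELSEIF ($a > $b)
  $a = Abs($a - $b)
  Goto(gcd)
ELSE
  $b = Abs($b - $a)
  Goto(gcd)
ENDIF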
@Justin...LOL I hate math too, that's why I let you tackle this....
If you go back to post #119 in this thread you'll see I have updated the script. The only thing it does is get past a syntax check. It's not functional yet, but it looks pretty.
As Homer Simpson would say "I am so smart...S, M, R, T...I mean S, M, A, R, T"
I don't know if anyone has used this, or maybe everyone already knew this... but I did not think ARC had the ability to divide, multiply, or subtract. But it can do all of these things with ABS()!
The script manual only shows the example of converting a negative number: "Abs( value ) Returns the absolute value of a number. Converts a negative into a positive number. Example: $x = Abs(-22)"
But it does a LOT more! With some luck I might have a functional script soon.
And you can mix together operations:
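For example (these values are my own, not from the original snippet):

# Subtraction and division together: gives 3
$x = Abs((10 - 4) / 2)
# Multiplication binds tighter than addition: gives 14
$y = Abs(2 + 3 * 4)
Print("x = " + $x + ", y = " + $y)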
Oh man... "order of operations".... I am getting dizzy sick
ok, you are smart, I am so tired. T I R D, tired.
You nailed it Richard, it also obeys order of operations.
Looking through old posts and found this one. Was the script ever completed and functional within the EZ-B environment?