United Kingdom

Artificial Intelligence

Hoping this will spark a huge discussion on what everyone is looking for when it comes to their robot's AI.

AI is something I've been working on since before I even learned of EZ-Robots. My JARVIS replica from Iron Man will be three years old come December. While it wasn't started in ARC, over the last few months I've been porting parts of it over to ARC, and anything beyond ARC's capabilities is integrated via Telnet. This includes voice-controlled media playback, voice-activated control of appliances, lights and so on, and, to be honest, far more than I can really explain right now.

Basically, up until now it has been built entirely around home automation and automated media acquisition, storage, playback and logging. Recently I have been integrating and porting parts of it into ARC, and where ARC cannot carry out the actions itself, Telnet integration keeps ARC (and its scripts) aware of everything they need to know. For example, when media playback starts, EventGhost sends ARC the script command $mediaplayback = 1, and when it finishes it sends $mediaplayback = 0 (that's a very simple example; it also sends more information about the media). This will be demonstrated soon by Melvin when I get around to making the video of him knowing what's on TV.
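The EventGhost-to-ARC handoff above can be sketched in a few lines. This is a minimal Python illustration, not the actual JARVIS code: the host, port, and line terminator are assumptions you would match to your own ARC telnet server settings; only the `$mediaplayback = 1` command syntax comes from the post.

```python
import socket

def format_arc_assignment(name, value):
    """Build an EZ-Script variable assignment, e.g. '$mediaplayback = 1'."""
    return f"${name} = {value}"

def send_arc_variable(name, value, host="127.0.0.1", port=6666):
    """Push a variable assignment to ARC over Telnet (host/port assumed)."""
    command = format_arc_assignment(name, value) + "\r\n"
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(command.encode("ascii"))
```

With this in place, a media-player event handler would just call `send_arc_variable("mediaplayback", 1)` on playback start and `send_arc_variable("mediaplayback", 0)` on stop.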

Like I said, so far it's mainly based around Media and Home Automation. What I want to discuss is...

What do you want in your robot's AI?

What do you want him/her to be able to do without human interaction? What do you want him/her to react or respond to? What do you want the AI to enhance? Why do you want AI?

And, for anyone who already has some kind of AI running; What does your AI add to your robot?

Hopefully this will spark up some interesting conversation, get some ideas out there, inspire others (and myself) to push on with the AI and make robots more intelligent:)



#73  

Would there be a way to save on processing power by somehow switching from one camera to the other as needed? The head camera gets the robot and hand near the object of interest then it switches over to using the hand camera for the close up stuff. Is switching between cameras even possible or practical?
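One way to picture the head-to-hand handoff is a tiny selector that decides which single feed to process each frame. This is purely illustrative; the threshold value and camera names are made up, and real switching would depend on what the camera control in ARC allows.

```python
HANDOFF_DISTANCE_CM = 15  # assumed distance at which the hand camera takes over

def pick_active_camera(distance_to_target_cm):
    """Return which camera's feed should be processed this frame."""
    if distance_to_target_cm <= HANDOFF_DISTANCE_CM:
        return "hand"   # close-up work: fine alignment with the gripper camera
    return "head"       # far away: wide view for gross positioning
```

Processing only the active feed is what would save the processing power; the inactive camera can stay powered but ignored.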

#74  

For bending over, a linear actuator would be the way to go. There are many out there that are controlled just like a servo with feedback. Plenty strong also. Just pricey $$$. Depends on how much you want to sink into a project like this. I think a robot used for household chores will only be useful if it can bend over.

firgelliauto.com

Canada
#75  

Possible? No doubt; I'm sure Rich thought up the code just reading this. Practical, though? Two hands are better than one, and in this case two cameras. Still, unless there is proof that it is better or worse, I'd say it would be a good thing to test.

#76  

Another AI feature I would like to see: if a component breaks or stops functioning, say a gripper breaks or an arm servo stops working, the robot should seek out its owner to report that it needs maintenance or repairs.
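A simple self-diagnosis pass along these lines could compare commanded servo positions against feedback readings and flag anything out of tolerance. This is a hypothetical sketch: the servo names, the tolerance, and the idea of position-feedback servos being available are all assumptions.

```python
def find_faulty_servos(commanded, feedback, tolerance_deg=10):
    """Return servos whose feedback is missing or disagrees with the command.

    commanded: dict of servo name -> target position (degrees)
    feedback:  dict of servo name -> measured position (degrees)
    """
    faults = []
    for name, target in commanded.items():
        reading = feedback.get(name)
        if reading is None or abs(reading - target) > tolerance_deg:
            faults.append(name)   # candidate for the "please repair me" routine
    return faults
```

Anything this returns would trigger the seek-out-the-owner behaviour described above.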

United Kingdom
#78  

Thanks, I'll give them a try - I have a little project to add to my home automation that will need 3 "proximity" sensors of some kind so they will come in handy - they are also cheaper than I was getting the analogue ones for. However the project will use all 20 digital ports so I may need to rethink that side of things (they are basically switches so a bit of multiplexing will probably work OK - or use a V4 board with the 24 ports).

That also makes scripting for them a whole lot simpler and smoother.

PRO
United Kingdom
#79  

Now I have the EZ:2 Robot arm design nailed, the next design on the list is the "locomotion" overseer processor.

We have talked in this thread about robots aimlessly moving around; this overseer will be a neat addition that lets the EZ:2 robot move around its environment reliably and usefully. It is another function similar to my "volume occupancy mapping", except the robot does not have to learn the map before it can start crossing areas and avoiding fixed objects.

This is how it is going to work: the overseer processor lets the EZ-B control the main drive motors in the normal way, but there is also a learn function, so if you want the robot to go from point A to point B (accurately) you first teach it with a miniature RF transmitter. After the route has been completed, the string of (accurate) movements is logged into a table file in the overseer. I will probably also do iPod and iPad versions of the transmitter teach unit.
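The teach-then-replay table could look something like the sketch below. This is not Tony's firmware, just an illustration of the data structure: each taught route is a named list of movement steps, here represented as made-up (command, amount) pairs such as ("forward", cm) or ("turn", degrees).

```python
class RouteTable:
    """Store taught movement sequences by route name and replay them."""

    def __init__(self):
        self.routes = {}          # route name -> list of (command, amount)

    def record(self, route, command, amount):
        """Log one accurate movement during the teach phase."""
        self.routes.setdefault(route, []).append((command, amount))

    def replay(self, route):
        """Yield the stored movements in order, e.g. to a motor driver."""
        for step in self.routes.get(route, []):
            yield step
```

During teaching, each odometry-confirmed movement gets a `record()` call; replaying a trip is then just feeding `replay("pod_to_kitchen")` back into the drive system.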

This requires a few things. First, the robot really needs a known start position; with the EZ:2 we will have a charging pod (dock) where the robot goes when not in use and automatically recharges itself. It also requires good-quality odometry, which we have in the main locomotion drive system thanks to the high-accuracy encoders on the drive motors.

ARC does have a recorder function that is similar, but because the EZ-B cannot handle wheel encoders it cannot accurately move from one position to another, so the longer the route between positions, the bigger the error that builds up. The other neat thing about the overseer is that it can store a number of trip routes, like pod to kitchen, pod to living room, pod to dining room, etc., so if the robot is at its pod and is called into the dining room it uses its pre-taught path to get there. The EZ-B will control the overseer via the I2C bus, so the EZ-B just has to send the command "path 7" (the path from pod to the dining room); the overseer does the rest, then flags the EZ-B when it reaches its destination.
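The "path 7" handshake suggests a very small I2C message format. The sketch below is an assumption about what such a message might look like (the overseer address, opcode value, and one-byte payload are all invented; only the "send a path number, wait for a done flag" shape comes from the post).

```python
OVERSEER_ADDR = 0x20   # hypothetical I2C address of the overseer board
CMD_RUN_PATH = 0x01    # hypothetical opcode meaning "run stored path N"

def encode_path_command(path_number):
    """Build the two-byte I2C message for 'run path N' (e.g. path 7)."""
    if not 0 <= path_number <= 255:
        raise ValueError("path number must fit in one byte")
    return bytes([CMD_RUN_PATH, path_number])
```

The EZ-B side would write `encode_path_command(7)` to `OVERSEER_ADDR` and then poll (or wait on an interrupt line) for the overseer's arrival flag.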

Applications for this include pre-training the robot to serve drinks to people in certain chairs in a living room. The robot will be taught where each armchair is and will go to each in turn; if no human is detected at a chair it moves on to the next, and from the last chair it trundles back to its charging pod to await further instructions. Also possible are things like the robot being called to locations by its user. It also gives the robot the ability to cross various rooms to reach the end destination: pod to living room (route), then living room to dining room (route), etc.
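The drinks round described above reduces to a short loop over pre-taught routes. This is a toy illustration under stated assumptions: `detect_human` stands in for whatever sensor or camera check the real robot uses, `drive_route` stands in for the overseer path replay, and the route names are invented.

```python
def drinks_round(chair_routes, detect_human, drive_route):
    """Visit each taught chair route; note where a person was found."""
    served = []
    for route in chair_routes:
        drive_route(route)          # replay the pre-taught path to this chair
        if detect_human():          # e.g. a camera/PIR check at the chair
            served.append(route)    # offer the drink here
    drive_route("return_to_pod")    # hypothetical route back to the charger
    return served
```

Chaining rooms works the same way: the caller just lists the intermediate routes (pod to living room, living room to dining room) in order.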

Tony

#80  

More very important items:

Also, when he gets to a stage where he has to make a decision, give him MANY MANY options and then let HIM decide. Let him pick which one he wants to do.

It will be easier for him to make that decision if you build his confidence from Past choices.
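One way to read "build his confidence from past choices" is a weighted pick: options that worked out before accumulate score and get chosen more often, while everything still remains possible. The scoring scheme below is a made-up minimal sketch, not a claim about how anyone's robot does it.

```python
import random

class ChoiceMaker:
    """Pick among many options, biased toward past successes."""

    def __init__(self, options):
        self.scores = {opt: 1.0 for opt in options}  # optimistic start: all equal

    def choose(self, rng=random):
        # Probability of each option is proportional to its accumulated score.
        return rng.choices(list(self.scores), weights=list(self.scores.values()))[0]

    def reward(self, option, amount=1.0):
        self.scores[option] += amount   # successful past choices weigh more
```

After each autonomous action, the robot calls `reward()` on choices that went well, so over time "his" preferences emerge from his own history rather than a fixed script.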

There is nothing more enjoyable than watching a robot do his own thing. RC-controlled machines are NOT robots if they cannot function autonomously. The robot has to do his/her/its own thing.

I cannot stress that enough.

I have known this since the mid '70s. But in my whole lifetime, it seems, I have NOT been able to achieve it in a working environment.

....just my 3 cents.