United Kingdom

Artificial Intelligence

Hoping this will spark a huge discussion on what everyone is looking for when it comes to their robot's AI.

AI is something I've been working on since before I even learned of EZ-Robots. My JARVIS replica from Iron Man will be three years old come December and, while not started in ARC, over the last few months I've been porting parts of it over to ARC; those parts beyond the capabilities of ARC are integrated via Telnet. These include things like voice-controlled media playback, voice-activated control of appliances, lights etc. and, well, to be honest, far more than I can really explain right now.

Basically, up until now it has been entirely built around home automation and automated media acquisition, storage, playback and logging. Recently I have been integrating and porting parts of it into ARC, and where ARC is not capable of carrying out the actions itself, integrating via Telnet so that ARC (and its scripts) are aware of everything they need to be aware of. For example, if media playback starts, EventGhost sends ARC the script command $mediaplayback = 1; when it's finished it sends $mediaplayback = 0 (that's a very simple example; it also sends more info on the media). This will be demonstrated soon by Melvin when I get around to making the video of him knowing what's on TV.
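For anyone curious how the push side of that integration can look, here's a minimal sketch in Python (EventGhost runs Python plugins) that builds and sends a script assignment to ARC over a TCP/telnet connection. The host and port here are made-up placeholders; use whatever your ARC telnet server is actually listening on.

```python
import socket

def arc_command(name, value):
    """Build an ARC script assignment like '$mediaplayback = 1'."""
    return f"${name} = {value}"

def push_to_arc(name, value, host="192.168.0.10", port=6666):
    """Send one script command to ARC over a telnet-style TCP socket.
    host/port are hypothetical; set them to your ARC instance."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall((arc_command(name, value) + "\r\n").encode("ascii"))
```

So when playback starts, EventGhost would call something like `push_to_arc("mediaplayback", 1)`, and ARC scripts can then react to `$mediaplayback`.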

Like I said, so far it's mainly based around Media and Home Automation. What I want to discuss is...

What do you want in your robot's AI?

What do you want him/her to be able to do without human interaction? What do you want him/her to react or respond to? What do you want the AI to enhance? Why do you want AI?

And, for anyone who already has some kind of AI running; What does your AI add to your robot?

Hopefully this will spark up some interesting conversation, get some ideas out there, and inspire others (and myself) to push on with the AI and make robots more intelligent. :)



#65  

Also, I'm thinking it may be a huge benefit to have the robot able to bend over so it can pick up things from the floor. This, however, would add a great deal of complexity to everything. But it would make the robot more capable of performing certain tasks around the house.

Spain
#66  

@rgordon your concerns are very similar to mine. The vision in the hand might not be so important if the robot knows the distance between the hand and the camera head and performs the necessary calculations. I do find the ability to pick up objects from the floor useful.

#67  

On the subject of having the robot pick things up off the floor, I have thought of that as well. If you had a robot like Tony's, you could have a secondary helper bot, like a Roomba with an arm, that did tasks like picking things up and handing them to the taller robot.

On the subject of adding a camera to the hand, I have often thought of doing this, as the arm on a robot does tend to be more maneuverable than the head on most robots. Plus, when we humans want to see something close up, we grab the object and bring it closer to our eyes; but how handy would it be to have an eyeball in your hand? Weird - YES! Dangerous - YES! Awesome - For SURE!

But for overall sensors in the hand, I have found IR and touch/pressure to be the best fit so far in my robots. Touch/pressure will let the robot know how much force it is applying. IR gives a little better range up close than sonar, in my opinion. Plus, a robot's hands tend to get dirty and dusty, and IR sensors are easier to clean. Those sensors also tend to be a little lighter. If a robot were to be in a kitchen, like Tony's prototype, I would think a temperature sensor would be a benefit to have in the hand.
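On the touch/pressure point, the simplest use of a pressure reading is a deadband controller for the claw: keep closing until the grip force reaches a target, back off if it squeezes too hard. A rough sketch, assuming an 8-bit ADC pressure reading; the target and band values here are invented for illustration and would need tuning to real sensors:

```python
def grip_action(pressure_adc, target=120, band=15):
    """Decide claw action from a 0-255 pressure reading.
    target/band are hypothetical tuning values."""
    if pressure_adc < target - band:
        return "close"   # not enough grip force yet, keep closing
    if pressure_adc > target + band:
        return "open"    # squeezing too hard, back off
    return "hold"        # within the comfortable grip window
```

The deadband stops the claw from hunting back and forth around a single threshold value.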

Spain
#68  

Recently I worked on a one-armed robot that can pick up objects from the floor and raise them to a low table or a bin. The head is aligned with the arm, which is in the centre of the robot, and the camera keeps eye contact with the object being manipulated in any position of the arm. I had thought of a ping sensor in the hand, sensors at the base of the robot to measure and detect the object on the floor, and colour tracking from the camera, to make a smart combination of sensors.

United Kingdom
#69  

Rex, yes, I was thinking about a camera in the claw/hand. This would be very useful with our object recognition system: when the EZ:2 is (say) retrieving a user's favourite brand of beer from the fridge, the main camera in the head does not get such a clear view, so a camera on the hand is very useful.

On the claws, I use a micro Sharp IR ranger that detects objects at 100mm from the claw opening; this works really well.
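For anyone wanting to turn a Sharp IR ranger's analog output into an actual distance, the output voltage is roughly inversely proportional to distance, so a commonly quoted curve-fit works well enough for a trigger like this. A sketch, assuming an 8-bit ADC and GP2D120-style coefficients; these numbers vary per sensor and really should be calibrated against your own unit:

```python
def sharp_ir_cm(adc, k=2914.0, offset=5.0):
    """Approximate distance (cm) from an 8-bit Sharp IR ADC reading.
    k/offset are a commonly quoted GP2D120 fit, not exact values."""
    if adc <= 0:
        return float("inf")  # no reflection seen, treat as far away
    return k / (adc + offset) - 1.0

def object_in_claw(adc, trigger_cm=10.0):
    """True when something is within ~100 mm of the claw opening."""
    return sharp_ir_cm(adc) <= trigger_cm
```

Higher ADC readings mean a closer object, so the claw script only needs to watch for the reading climbing past the 100 mm trigger point.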

User-inserted image

I am working on using QTC (quantum tunneling composite) pills in the finger tips to detect holding pressure.

Tony

United Kingdom
#70  

Scanning over the last few posts as time is something I don't really have at the moment...

I'd also considered using cameras in the hands; it's something I will likely use on the big project I occasionally mention that everything is leading up to. My only concern would be the processing power required by the three cameras that I envisage using, but we will see.

Also, with bending to pick things up, the weight of the top half of the robot would need to be considered. I'd assume a standard servo, and probably even an HD servo, would struggle, so something more powerful, a worm drive type of deal, is most likely needed there. Again, this will also be used on my big build; balance will also be an issue with it, I guess.
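To put a rough number on why a standard servo would struggle: the torque the waist joint must hold is the upper body's weight times the horizontal lever arm, which grows with the lean angle. A back-of-envelope sketch (the 3 kg torso mass and 0.25 m centre-of-mass distance below are invented example figures, not measurements from any real build):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def waist_torque_nm(upper_mass_kg, com_distance_m, lean_deg):
    """Static torque (N*m) the waist joint must hold while the torso
    leans forward; the horizontal lever arm is com * sin(lean)."""
    lever = com_distance_m * math.sin(math.radians(lean_deg))
    return upper_mass_kg * G * lever
```

For a 3 kg torso with its centre of mass 0.25 m above the waist, a 60 degree lean needs about 6.4 N·m (roughly 65 kg·cm), which is well beyond typical hobby servos, hence the worm drive suggestion.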

I also use Sharp IR sensors on Melvin for his collision detection. They are the short-range ones, so it is very rare that they give any false readings, but the range is long enough to avoid any collisions. I find the IR does a great job at detecting the proximity of objects and is extremely accurate. The downside is they are expensive and require scripts to be running and checking the ADC ports constantly, which is a huge demand on the comms and processing. That's something I need to look at, to see if there is a better way of doing it rather than looping an $ir1 = GetADC(ADC0) command.
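One way to cut that constant comms load is to keep the tight poll loop but only report (or react to) readings that have actually moved, using a small deadband so noise doesn't count as movement. A sketch of the filtering idea, run here over a list of sample readings for illustration; the deadband value is an assumption to tune:

```python
def changed_readings(samples, deadband=8):
    """Return only the readings that moved more than `deadband` since
    the last reported value, dropping redundant traffic from a tight
    ADC polling loop."""
    reported = []
    last = None
    for s in samples:
        if last is None or abs(s - last) > deadband:
            reported.append(s)
            last = s
    return reported
```

Small sensor jitter gets swallowed, and only genuine changes (an object approaching or leaving) generate work for the rest of the system.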

IR on the hands, though, could be enabled only when the robot knows the hand is reaching for something, which would work.

Canada
#71  

Couldn't you just use a distance equation? Arm distance from the camera or sensor, object distance from the camera or sensor. After a bit of math, the robot knows the distance to where it would need to stand or park and bend (already calculated). After executing, check whether the object is now in the hand. Or better yet, once bent or parked, scan the distance to the object and move or walk accordingly. This would remove any need for sensors in a hand or claw. Just a spitball; I haven't been on in a while so I don't know if anyone has brought that up.
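The "bit of math" here can be as simple as Pythagoras: if the head camera's height is known and the ranger gives a line-of-sight distance to the object on the floor, the horizontal stand-off distance falls out directly, and subtracting the arm's reach gives how far to drive. A sketch under those assumptions; all the example figures are invented:

```python
import math

def drive_distance_m(camera_height_m, line_of_sight_m, arm_reach_m):
    """How far to drive so a floor object ends up within arm reach.
    Horizontal distance comes from Pythagoras on the camera's
    line-of-sight range to the object."""
    horizontal = math.sqrt(max(line_of_sight_m**2 - camera_height_m**2, 0.0))
    return max(horizontal - arm_reach_m, 0.0)
```

For example, a camera 1.0 m up seeing an object at 1.25 m line of sight, with a 0.4 m arm reach, would need to drive 0.35 m before bending.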

United Kingdom
#72  

The attraction of having cameras in the hands/claws, for me, would be to aid in finding moving objects too, tracking objects, etc.

Imagine throwing a ball to the robot, the head cam sees the ball, the robot knows which arm to lift, the arm camera comes in to play and sees the ball too, between the two of them (with some jiggery pokery) the exact position of the ball is calculated. All ideas up in that mind of mine at the moment but hope to put it in to practice eventually (when I can afford to get on to the android)