
OldBotBuilder
USA
Asked
— Edited
@d.cochran, @DJ Sures What effort, software development, or application(s) would be required to add a "Robots That Learn" capability like that of Brain Corporation's eyeRover? Ref: http://www.braincorporation.com/products/
We have had on and off communication with Brain Corporation for a while - it's something we plan to continue discussing. Perhaps they need another poke.
Feel free to send them a message as well! I think we're both waiting on the opportunity to be in the same place at the same time...
@DJ Sures Thanks for the reply. I don't know how much poking I can do. I live in North San Diego County, near Qualcomm. Maybe some folks I know can do the poking. Are there learning apps or process tools available for machine learning?
The needed parts for a learning robot are really just storage. What do you want it to learn? Storing the variables that have been used in the past is entirely doable. There is software that can recognize things and be trained to do things. Speech recognition on your PC is always improving because it keeps training as you use it. Having a PC gives you a world of knowledge at your application's disposal.
The most intelligent robots out there grade their own actions and try variations based on those grades. The way a robot knows whether it did well or poorly is by judging the result of an action against the results of past actions it has stored in some kind of database. All of this is doable with what is available, but it would take some coding: using sensor output to evaluate whether a result was good, and something that defines what counts as good or bad.
An example of this is teaching a robot to walk, using the output of multiple tilt sensors as the measure of good or bad. Each movement the robot makes takes readings from the tilt sensors so it can chain that information together and determine whether the result was good. This type of data is better suited to a data cube than a plain database, because a data cube already contains aggregates of the underlying data along time or some other dimension. The robot would look at the cube and chain together what it takes to make a step from the results of many failed attempts and small successes.
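To make that idea concrete, here is a minimal Python sketch: each attempt stores its tilt readings, gets an aggregate score, and the best-scoring servo positions are kept. The read_tilt_sensors() and perform_movement() functions are hypothetical placeholders for whatever your controller actually exposes, not real EZ-Robot calls.

```python
# Minimal sketch: store tilt readings per movement attempt, score each attempt,
# and keep the aggregate so the robot can replay the best-scoring sequence.
import random
from statistics import mean

def read_tilt_sensors():
    # Placeholder: pretend pitch/roll in degrees come back from two tilt sensors.
    return {"pitch": random.uniform(-20, 20), "roll": random.uniform(-20, 20)}

def perform_movement(servo_positions):
    # Placeholder: command the servos, then sample the sensors afterwards.
    return read_tilt_sensors()

def score_attempt(readings):
    # "Good" here simply means the robot stayed close to level.
    return -mean(abs(r["pitch"]) + abs(r["roll"]) for r in readings)

attempts = []  # the "memory": every attempt and its aggregate score
for trial in range(50):
    servo_positions = [random.randint(60, 120) for _ in range(4)]
    readings = [perform_movement(servo_positions) for _ in range(3)]
    attempts.append({"servos": servo_positions, "score": score_attempt(readings)})

best = max(attempts, key=lambda a: a["score"])
print("Best servo positions so far:", best["servos"], "score:", round(best["score"], 2))
```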
What I am trying to point out is that a learning robot's memory is really an application running to feed the robot the data it needs. The robot itself is a network of sensors feeding back into the brain (a computer), which stores the information in some quick, queryable format so it can know what has and hasn't worked. The brain in their solution is a fairly weak Linux machine with a customized kernel or operating system. Cool idea, but I think being able to program on a full-blown machine is much better in the long run for me.

Think about this: put a robot in a store and have it interact with customers through a touch screen or voice commands. The robot could access the POS database and have every product in the store available for the customer to look at, and it would know which row and bin each item is in. All of that is cool, but the real power comes in knowing what the customer did. Did they search for lights in electronics or in home goods? How do you market your products based on the decisions recorded while the robot and the customer interact? Can you read facial expressions from video to determine whether the robot was helpful or frustrating? Crunching that data down would let the robot learn how to interact more favorably with customers. Does it act differently for women and men? Elderly and young? Race of the person? It is all about storing the data and then deciding how to use it. Data modeling is huge right now and is changing how companies make decisions, whether they are drafting the next sports hero or selling life insurance by mail.
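As a rough sketch of what "storing the data and using it" could look like in that store scenario - the interaction log below is made up and stands in for a real POS or analytics database:

```python
# Log each customer interaction, then aggregate the log to see how people
# actually look for products and how well the robot did.
from collections import Counter

interaction_log = [
    {"query": "lights", "department_browsed": "electronics", "helpful": True},
    {"query": "lights", "department_browsed": "home goods", "helpful": True},
    {"query": "lights", "department_browsed": "electronics", "helpful": False},
]

# Which department do customers search in for a given product?
by_department = Counter(e["department_browsed"] for e in interaction_log if e["query"] == "lights")
print(by_department.most_common())  # e.g. [('electronics', 2), ('home goods', 1)]

# How often was the robot judged helpful?
helpful_rate = sum(e["helpful"] for e in interaction_log) / len(interaction_log)
print(f"Helpful in {helpful_rate:.0%} of interactions")
```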
This is why I am working on EZ-AI. The potential is there for robots to make decisions based on their data and sensors, just like companies are already doing with sales data. It's about storing the data and using it.
That's a tough question to answer - are there tools in ARC for machine learning? Well, there is the vision learning system using Object Training.
As for having the robot "learn" using artificial intelligence, or something similar - that is a really big question. It's pretty much as big as "what is the answer to life, the universe and everything". Without understanding the specifics of the question, there is no applicable answer.
Are you asking for a specific demonstration of a pre-built ez-robot with learning abilities?
Might be easier if you ask a specific question, such as...
How do I make my robot learn and recognize an object visually?
How do I have my robot learn the dimensions of the room during its exploration process?
Remember, the Brain Corp robot demos are specific applications that perform a specific task on specific hardware - they're not a "Hi, let's talk to this robot and ask it to do stuff because it's like a little child" scenario.
The magic of short demo videos is that they don't tell the whole story - they leave a lot to your imagination, which may lead you to assume those robots have the intelligence and cognitive understanding of a chimp. They don't. They're still pre-programmed robots running a specific application to perform a specific demo.
Yep, the first question is "What do you want it to learn?" Without that, you got nothing.
A concrete case study in AI would be the piece recognition for my jigsaw puzzle assistant. When solving such puzzles myself, I often wonder how I (my brain) figure out that one specific piece will fit a specific location while I'm actually searching for a piece for a different location. There must be either heavy parallel processing or some kind of content-addressable memory at work. Replicating that is certainly not a viable (technical) approach, even with today's PC and robot processors.
For now it would be sufficient to find the key characteristics of the pieces required for reliable matching. Using the camera, the first step is separating the pieces in a heap; this may be achievable by attempting to fetch the piece on top of the heap. Next comes measuring the outline, which need not be very accurate - perhaps rough edge ratios are sufficient. More promising are extreme shapes, like sharp spikes at the edges or non-rectangular angles between edges. Then comes the set of edge shapes (straight, convex, concave), and finally the colors.
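As a purely illustrative sketch (Python, field names are mine, not a fixed schema), those characteristics could be captured in one record per piece:

```python
# A record of the characteristics described above, stored once per piece.
from dataclasses import dataclass
from enum import Enum
from typing import Tuple

class EdgeShape(Enum):
    STRAIGHT = "straight"
    CONVEX = "convex"
    CONCAVE = "concave"

@dataclass
class PieceRecord:
    piece_id: int
    location: str                    # where the physical piece currently sits
    edge_ratios: Tuple[float, ...]   # rough relative edge lengths
    edges: Tuple[EdgeShape, EdgeShape, EdgeShape, EdgeShape]
    dominant_colors: Tuple[str, ...] # coarse color labels from the camera image

corner = PieceRecord(
    piece_id=1,
    location="tray A3",
    edge_ratios=(1.0, 1.0, 1.1, 0.9),
    edges=(EdgeShape.STRAIGHT, EdgeShape.STRAIGHT, EdgeShape.CONVEX, EdgeShape.CONCAVE),
    dominant_colors=("sky blue", "white"),
)
```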
All that information could be stored in a huge database, which must also include the current location of every physical piece. Then comes the question of the fastest matching algorithm. Brute force is only a last resort; it would take far too long if every candidate piece had to be moved into its assumed place and back again whenever it does not really fit. The primary goal is the construction, or rather reduction, of the candidate set, based on multiple possible algorithms. Here AI and machine learning may enter the scene: the algorithms could, for example, be benchmarked for best operation based on certain (not necessarily predefined) characteristics of the piece shapes and the pictures on them.
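A minimal sketch of that candidate-set reduction, with plain dicts standing in for the real database: filter the stored pieces on cheap characteristics first (required edge shapes, then color overlap), and only the survivors ever get physically tried in the target location.

```python
# Illustrative piece records; a real system would query the database instead.
pieces = [
    {"id": 1, "edges": ("straight", "convex", "concave", "convex"), "colors": {"blue", "white"}},
    {"id": 2, "edges": ("convex", "convex", "concave", "concave"), "colors": {"green"}},
    {"id": 3, "edges": ("straight", "concave", "concave", "convex"), "colors": {"blue", "green"}},
]

def candidates(target_edges, target_colors, pieces):
    """Return pieces whose known edge shapes match and whose colors overlap the target area."""
    result = []
    for p in pieces:
        edges_ok = all(t is None or t == e for t, e in zip(target_edges, p["edges"]))
        colors_ok = bool(target_colors & p["colors"])
        if edges_ok and colors_ok:
            result.append(p)
    return result

# The location needs a straight top edge (other edges unknown) and is mostly blue.
print(candidates(("straight", None, None, None), {"blue"}, pieces))  # pieces 1 and 3
```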
But back to the first step: the robot should be able to learn how to detect single pieces in a heap of pieces; this extends to stationary objects in a room for more common applications. Moving the camera or the robot looks like a good approach, so that corresponding shapes can be extracted from multiple snapshots. Movable light sources, or simply switching lights on and off, may also help with detecting edges or flat surfaces. Of course 3D laser scanners exist for exactly that purpose, but there are many much cheaper ways to implement similar capabilities with EZ robots.
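One cheap building block toward that, sketched with OpenCV (an illustration of the multi-snapshot idea, not necessarily the right method): difference two snapshots taken with the light moved or toggled, and the regions that change most strongly tend to outline the raised piece edges. The file names are placeholders.

```python
# Compare two snapshots of the heap taken under different lighting and
# extract the outlines of the regions that changed (OpenCV 4.x assumed).
import cv2

a = cv2.imread("heap_light_on.jpg", cv2.IMREAD_GRAYSCALE)
b = cv2.imread("heap_light_off.jpg", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(a, b)                              # where shadows/highlights moved
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"Found {len(contours)} candidate piece outlines")
```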
For my own studies in that area, is it already possible to obtain videos or (preferably) single pictures from the camera, for remote processing?
@d.cochran,
Oh, BTW, what is a DataCube? Are you talking about an Array?
Moravec's paradox is the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources. The principle was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky and others in the 1980s. As Moravec writes, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."