PRO
Synthiam
#1  

We have had on and off communication with Brain Corporation for a while - it's something we plan to continue discussing. Perhaps they need another poke:) Feel free to send them a message as well! I think we're both waiting on the opportunity to be in the same place at the same time...

#2  

@DJ Sures Thanks for the reply. I don't know how much poking I can do. I live in North San Diego County, near Qualcomm. Maybe some folks I know can do the poking. Are there learning apps or process tools available for machine learning?

#3  

The needed parts for a learning robot are really just storage. What do you want it to learn? Storing the variables that have been used in the past is entirely doable. There is software that can recognize things and be trained to do things. Speech recognition on your PC is always improving because it keeps training as you use it. Having a PC puts a world of knowledge at your application's disposal.

The most intelligent robots out there grade their own actions and try different things based on those actions. A robot knows whether it did well or badly by judging the result of an action against others whose results it has stored in some database-style storage. All of this is doable with what is available, but it would take some coding: using sensor output to evaluate whether the result was good, and something to define what counts as good or bad.

An example of this is trying to teach a robot to walk, using the output of multiple tilt sensors to judge good or bad. Each movement of the robot takes readings from the tilt sensors so that the robot can chain the information together and determine whether the result was good. This type of data is better suited to a data cube than a plain database, because data cubes already contain aggregates of database information along time or some other dimension. The robot would look at the cube and chain together what it takes to make a step based on the results of multiple failed attempts and small successful ones.
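
To make that concrete, here is a minimal sketch of the scoring idea, assuming the tilt readings for one movement arrive as a list of (pitch, roll) values. The function names and threshold are made up for illustration; this is not ARC script, just the logic described above:

```python
# Hedged sketch: score one movement attempt from tilt-sensor readings.
# "readings" is a list of (pitch, roll) tuples captured during the move;
# a movement counts as "good" when the robot stayed close to level throughout.

def score_movement(readings, max_tilt_deg=15.0):
    """Return a score in [0, 1]; 1.0 means the tilt never exceeded max_tilt_deg."""
    if not readings:
        return 0.0
    stable = sum(1 for pitch, roll in readings
                 if abs(pitch) <= max_tilt_deg and abs(roll) <= max_tilt_deg)
    return stable / len(readings)

# Each attempted step gets stored with its score so later attempts
# can be chained from the best-scoring ones.
attempt_log = []

def record_attempt(servo_positions, readings):
    attempt_log.append({
        "servo_positions": servo_positions,   # the joint angles that were tried
        "score": score_movement(readings),    # how well the robot stayed upright
    })
```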

What I am trying to point out is that a robot's memory, or a learning robot, is really an application running to feed the robot the data it needs. The robot itself is just a network of sensors that feeds back into the brain (the computer) and stores the information in some quickly queryable format, so that it knows what has and hasn't worked. The brain in Brain Corporation's solution is a fairly weak Linux machine with a customized kernel or operating system. Cool idea, but I think being able to program on a full-blown machine is much better in the long run for me.

Think about this. Put a robot in a store and have it interact with customers through either a touch screen or voice commands. The robot could access the POS database and have all of the products in the store available for the customer to look at. The robot would know what row and bin each product is in. All of that is cool. The real power comes from knowing what the customer did. Did they search for lights in electronics or in home goods? How do you market your products based on these decisions, recorded at the moment the robot and consumer interact? Can you tell facial expressions from video, so you can determine whether the robot was being helpful or frustrating to the customer? Crunching that data down would let the robot learn how to interact more favorably with customers. Does it act differently for women and men? Elderly and young? The person's race? It is all about storing the data and then deciding how to use it. Data modeling is huge right now and is changing the way decisions are made by companies, whether they are drafting the next sports hero or selling life insurance via mail.

This is why I am working on EZ-AI. The potential is there to have robots make decisions based on their data and their sensors, just like companies are already doing with sales data. It's about storing the data and using it.

PRO
Synthiam
#4  

That's a tough question to answer - are there tools in ARC for machine learning? Well, there is the vision learning system using Object Training.

As for having the robot "learn" using artificial intelligence, or something similar - that is a really big question. It's pretty much as big as "what is the answer to life, the universe and everything". Without understanding the specifics of the question, there is no applicable answer.

Are you asking for a specific demonstration of a pre-built ez-robot with learning abilities?

Might be easier if you ask a specific question, such as...

  1. How do I make my robot learn to recognize an object visually?

  2. How do I have my robot learn the dimensions of the room during its exploration process?

Remember, the Brain Corp robot demos are specific applications that perform a specific task on specific hardware - they're not a case of "Hi, let's talk to this robot and ask it to do stuff as if it's a little child":)

The magic of short demo videos is in not telling the whole story - leaving a lot to your imagination - which may lead you to assume those robots have the intelligence and cognitive understanding of a chimp, which is false. They're still pre-programmed robots running a specific application to perform a specific demo.

#5  

Yep, the first question is "What do you want it to learn?" Without that, you've got nothing.

Germany
#6  

A concrete case study in AI would be piece recognition for my jigsaw puzzle assistant. When solving such puzzles myself, I often wonder how I (my brain) work out that one specific piece will fit into a specific location while I'm searching for a piece for a different location. There must be either heavy parallel processing or some kind of content-addressable memory at work. That's certainly not a viable (technical) approach, even with today's PC and robot processors.

For now it would be sufficient to find the key characteristics of the pieces required for reliable matching. Using the camera, the first step is separating the pieces in a heap; this may be achievable by attempting to fetch one piece from the top of the heap. Next comes measuring the outline, which need not be very accurate - perhaps rough edge ratios are sufficient. More promising are extreme shapes, like sharp spikes at the edges or non-rectangular angles between the edges. Then comes the set of edge shapes (straight, convex, concave), and finally colors.

All of that information could be stored in a huge database. That database must also include the current location of every physical piece. Then comes the question of the fastest matching algorithm. Brute force is only a last resort; it will take too much time when every candidate piece must be moved into its assumed place, and back again if it does not really fit. The primary goal will be the construction or reduction of the set of candidates, based on multiple possible algorithms. Here AI and machine learning may enter the scene: for example, the algorithms could be benchmarked for best operation based on certain (not necessarily predefined) characteristics of the piece shapes and the pictures on them.
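
As a rough sketch of that candidate reduction, assuming each piece has already been boiled down to a small descriptor (edge shapes plus a dominant colour). The class and field names below are purely illustrative, not an existing puzzle library:

```python
# Hedged sketch: reduce the candidate set before any physical matching.
# Edge shapes are coded as "S" (straight), "V" (convex), "C" (concave),
# listed clockwise; colour is a coarse dominant-colour label.
from dataclasses import dataclass

@dataclass
class Piece:
    piece_id: int
    edges: tuple          # e.g. ("S", "V", "C", "V")
    colour: str           # e.g. "sky_blue"
    location: tuple       # current (x, y) of the physical piece on the table

def candidates(target_edges, target_colour, pieces):
    """Keep only pieces whose edge pattern (in some rotation) and colour match."""
    result = []
    for p in pieces:
        rotations = [p.edges[i:] + p.edges[:i] for i in range(len(p.edges))]
        if target_edges in rotations and p.colour == target_colour:
            result.append(p)
    return result
```

Only the surviving candidates would then be tried physically, which is what keeps the brute-force step small.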

But back to the first step: the robot should be able to learn how to detect single pieces in a heap of pieces; this can be extended to concrete (motionless) objects in a room, for more common applications. Moving the camera or the robot looks like a good approach, so that corresponding shapes can be extracted from multiple snapshots. Movable light sources, or turning lights on and off, may also help with detecting edges or (planar) surfaces. Of course there are 3D laser scanners for exactly that purpose, but there are many much cheaper ways to implement similar capabilities with EZ robots.

For my own studies in that area: is it already possible to obtain videos or (preferably) single pictures from the camera, for remote processing?

#7  

@d.cochran,

There is already a robot being tested in some Lowe's stores. It goes further than what you have mentioned. Let's say you walk into a Lowe's and need a bolt. All you do is hold the bolt up to the robot's face and it will come back with a complete description of the bolt, say "follow me", and take you to the exact location of the bolt. Of course it could have been anything. So, your dream has already become a reality.

Oh, BTW, what is a DataCube? Are you talking about an Array?

  I find that Machine Learning is very interesting. I have a passion for it.
#8  

Moravec's paradox is the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources. The principle was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky and others in the 1980s. As Moravec writes, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."

#9  

@MovieMaker ... The Lowe's robot you speak of is not the type of robot they are talking about here... Although it is quite impressive, it is not a learning robot in the true sense of the word... It has the store's inventory database, multiple languages, and the ability to answer simple questions programmed into it... It can recognize objects and read bar codes... It uses this info to take the customer to the aisle the product is in... It has no ability to learn; it is just executing the programmer's code. The only way it can learn something new is if that information is added to its code or database by the programmer...

What would make the Lowe's robot off-the-charts impressive is if it did have AI built in... One example would be a customer coming into the store and asking the Lowe's robot a question. The Lowe's robot immediately recognizes (facial recognition) that the same customer was in the store a month ago looking for a lawnmower part... The bot then asks the customer if he needs more lawnmower parts and even calls the customer by name (because of the previous conversation a month ago with the customer)...... That would be impressive... and that would be a form of AI...:)
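
Sketched very loosely, that scenario is just a lookup keyed on whatever identifier a face-recognition system hands back, plus a log of past visits. Everything below is hypothetical; the actual recognition would come from a separate vision component:

```python
# Hedged sketch: greet a returning customer using stored visit history.
# "face_id" stands in for the identifier a face-recognition system would return.

visit_history = {
    "face_0421": {"name": "Mr. Smith", "last_topic": "lawnmower parts"},
}

def greet(face_id):
    visit = visit_history.get(face_id)
    if visit is None:
        return "Hello! How can I help you today?"
    return f"Welcome back, {visit['name']}. Do you need more {visit['last_topic']}?"
```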

United Kingdom
#10  

I couldn't agree more.

Jarvis learns, and the programming behind it is extremely simple. Using data stored from "sensors" (for want of a better word; perhaps feedback is better?), plus looking up other data from various online resources, he can, and does, learn.

Example: Jarvis was aware that I have Breaking Bad in my library; he was aware I watched each episode, Seasons 1 to 3 twice (rewatching it again at the moment) and all others at least once. He checked for similar programs and, as a result, I found Better Call Saul was automatically added to my library, downloaded, and given a place in the priority list for new/unwatched TV shows.

Sure, that's media/entertainment based, but the principles can be used for anything. Use existing data plus resources which already exist to determine some autonomous actions. In this example he checked for TV shows similar to what I have watched. The same applies to movies and music.

I do have plans to alter this so that if something is added by Jarvis and I do not watch or listen to it, it's classed as a bad choice. So, in the above example, if Better Call Saul wasn't being watched, or if only the first few episodes were watched, he would learn not to continue downloading/recording. If I suddenly did start watching it again he would learn and acquire the missing episodes.

"Learning", as far as a robot is concerned is data storage and a string of questions to come to a decision. The more data held the more accurate the response.

I also agree that short demo videos are smoke and mirrors, and it's not difficult to make something look better than it actually is. For example, Anthony's XLR-One is shown conversing with Anthony in his Kickstarter video; we know Anthony pre-recorded and altered XLR-One's voice rather than using TTS, but it looks as though the robot has personality and reacts to conversation, when that's not exactly correct.

Jarvis is the subject of a lot of emails asking how real he is, along with comments and emails stating it's all fake (I assure you it isn't, other than the pheromone sensor video). It works both ways. Although, if I were to use smoke and mirrors and short demo videos of scripted actions, I could blow minds.

#11  

The Lowe's bot is cool, but they are not currently going far enough with the data.

A data cube is a multidimensional database that aggregates multiple facts with multiple dimensions into an object that can be queried quickly. It's been around for 20+ years, but because of the huge amount of data now being stored (especially demographic and historical data) and the processing power of computers over the past 10 years or so, data modeling has become far more popular. Online Analytical Processing (OLAP) - not online as in the internet, but online as in readily available - is worth Googling. It allows users to quickly query a data cube for aggregate information based on the cube's fact and dimension data. It could give a sales manager the ability to see something like sales amounts by employee over the last quarter, quickly switch to sales amounts by employee over the last month or week, then move to by store and then by region. Then the sales manager could compare that to the demographic data (also in the cube) in that region, or to natural disasters (also in the cube) in a region. Then they could take that and compare it to... All of this would be done without involving a computer guy, other than him making sure the data is available.
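
For anyone unfamiliar with the terminology, the slicing described above can be approximated with a plain roll-up over a flat fact table. This toy sketch is not OLAP software, just the idea of pivoting the same facts along different dimensions:

```python
# Hedged sketch: roll a flat fact table up along different dimensions,
# the way an OLAP cube lets a sales manager pivot from employee to region.
from collections import defaultdict

sales_facts = [
    # (employee, store, region, quarter, amount) -- toy data
    ("alice", "store_1", "west", "Q1", 1200.0),
    ("bob",   "store_1", "west", "Q1",  800.0),
    ("carol", "store_2", "east", "Q1",  950.0),
]

def roll_up(facts, dimension_index):
    """Sum the amount column by one dimension (0=employee, 1=store, 2=region, 3=quarter)."""
    totals = defaultdict(float)
    for row in facts:
        totals[row[dimension_index]] += row[-1]
    return dict(totals)

print(roll_up(sales_facts, 0))   # sales by employee
print(roll_up(sales_facts, 2))   # the same facts rolled up by region
```

A real cube pre-computes these aggregates so the pivot is instant, which is the point d.cochran is making about quick queryability.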

I have used this process in the past to successfully predict the number of bankruptcies that would be filed in the U.S. at about an 87% accuracy rate. The best previous attempts were at about a 50% accuracy rate. I have also used data modeling to predict sales from a marketing campaign for a company that sells insurance via mail. We identified who would be most likely to accept the offer and what conditions would be most favorable. We reduced the marketing campaign by half (costing half the money to perform) and increased sales by about 150%. This completely changed the way that business marketed its product.

The two industries using this technology the most are sports (when trying to pick their next superstar) and casinos (tracking what you do while in a casino).

Now, take this to robotics. The robot could store its sensor readings while trying to do specific tasks. This data could be loaded into a cube with a time or attempt dimension. The robot would quickly be able to see which attempts were more successful, use those attempts as a base, then try different things and store the results. These would be loaded into the cube and the process would continue until the robot has a success rating for an attempt above a certain level. The key is being able to quickly identify what is a failure and what is a success. By aggregating this data, the robot can quickly look up what the condition was and how it successfully overcame that condition.
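
A compact sketch of that loop, assuming attempts are simply stored as parameter lists with a score; there is no real cube or ARC code here, only the select-best-then-vary idea:

```python
# Hedged sketch: pick the best past attempt as the base for the next trial.
import random

def best_attempt(attempts):
    """attempts: list of {'parameters': [...], 'score': float}; highest score wins."""
    return max(attempts, key=lambda a: a["score"], default=None)

def next_trial(attempts, jitter=0.05):
    """Start from the most successful attempt so far and vary it slightly."""
    base = best_attempt(attempts)
    if base is None:
        # no history yet: make a random first guess
        return [random.uniform(-1.0, 1.0) for _ in range(4)]
    return [p + random.uniform(-jitter, jitter) for p in base["parameters"]]
```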

There is also work being done on self-aware robots. These robots determine what their makeup is and then use this makeup to figure out how to move. They make attempts at moving and then store the data from each attempt. The above paragraph is pretty much how they determine what to do on their next attempt. This is low level, but it took someone writing the application that looks at the data and makes determinations based on that data.

#12  

Again, Rich, that is higher-level learning. Try teaching a robot to recognize a blue ball under different lighting conditions and at different distances in a cluttered room, and then move towards it and pick it up.

It would take a PhD student a few years to write the code and build a system that may or may not come close. After all that work, place something new in the room, like a chair, and that's another couple of years' worth of work.

The holy grail is to find an algorithm that would be able to do what nature has accomplished.

United Kingdom
#13  

I agree. I don't agree with your timescales or the qualification requirements, because I am confident that a high school dropout like myself or DJ (I believe he also dropped out; if you didn't, please accept my apologies) could achieve such a result in less time, without any additional letters after our names or certificates on our walls. But I agree it would be a lot more work than the higher-level learning I described.

#14  

@Rich LOL, high school dropout... Wait, I am one of those too. Glad to see I am in good company.

#15  

It would be great to have a team enter the NASA Sample Return Robot Challenge.

This is the fourth annual running of the SRRC, which takes place on a large grass field (80,000 square meters) near the campus of the Worcester Polytechnic Institute (WPI) in Worcester, Massachusetts. Scattered throughout the field are a number of samples that team robots must find, pick up, and store on board. Once the robot has stored as many of these samples as possible within a 2-hour window, it must return to the home platform.

NasaSampleReturnChallenge.pdf

#16  

The competition sounds cool. I do have a problem with such competitions in that they are too limiting. A robot that does one thing, or is programmed for one purpose, is too limiting to the imagination. I understand why they do this; I just don't like it. A teacher at the school that I help has a son who was on the Mars rover project. I would love to sit down with him one day and talk robots, but the opportunity hasn't presented itself yet. Maybe one day.

#17  

You could learn a lot, though, because these basic problems, if solved, could advance robotics tremendously.

" The samples range from a relatively large 8 cm high white cylinder to a small red hockey puck that lies very low in the grass and is nearly invisible until the robot is a few feet away. The robot must run completely autonomously from beginning to end and the judges go to great lengths to ensure that no team can remotely control their robot. "

#18  

@mtiberia .... I would love to do something like that.... Pie in the sky, maybe... I built this a few months ago... It steers just like the Mars rovers (four-wheel steering) and has a crude form of rocker-bogie suspension... I am sure I could upscale it (with better wheels, motors, and an added arm) and have you, Rich, and David collaborate to write software for it...

User-inserted image

#20  

I think competitions are good and push people. I just think that there should be multiple competitions with the same robot, not just one.

IMHO, robotics will be pushed further by having competitions like this, but with several completely different events that require the use of the same robot. If you tell a programmer to program a robot to hit a golf ball, that is all you will get. If you tell a programmer that you want a robot that can play a game of golf, it might not be completed, but you will get much further.

Programmers work off a defined set of requirements for the most part. The good programmers go far beyond that set of requirements and develop something really useful, not only in this area but reusable in other areas. I manage a lot of programmers and have been one for the last 20+ years. It is interesting to watch them work. Code reviews of their work help me know how they think. I like it when one of them comes to me and says "I want to rewrite project X", because I know they are continuing to think about how to improve the project even after it is complete. That is a good programmer. I don't always say yes, because of the other projects coming along, but I like it.

Asking a robot to do a specific task or series of tasks is too limiting, yet that is what these competitions promote; the robots are then scrapped for other builds or left in a closet at a school. I want someone to grab what has already been done by the previous programmer and rewrite it or add to it, so the robot continues to be useful. Maybe the first competition is good, but don't make a separate robot for next year's competition. Use the first one and expand on its capabilities. This is how robotics will be able to advance. One-trick ponies may impress at first, but they're quickly turned to glue.

#21  

Thanks... The design actually works very well. It handles rough surfaces almost as well as smooth ones, and it can rotate in place thanks to the four-wheel steering... So it is highly maneuverable... I would just need to use bearing blocks for the servo mounts, stronger gear motors, and larger wheels (Mars rover type)... I also need to improve the rocker-bogie articulating chassis... And of course an arm...

#22  

@d.cochran

Do you believe it will be possible for a program to rewrite itself and adapt to new situations, or is it the computational hardware that needs to change?

#23  

What is possible is for the conditions to be stored in a database and then retrieved based on the situation. These are rules rather than rewritten code. The code would look at a rule and then perform an action based on it. The rule could then be modified by the code based on the result of the action performed. We do this for a lot of things, including parsing documents provided to our company. A core set of rules is coded, and then there is an adapted-rules table that is used if a core rule has been modified.

It is more about programming an open-ended application that can adapt to the rules it is given.
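
One way to picture that core-rules / adapted-rules split; the table contents and field names below are hypothetical, just a sketch of the lookup-then-adapt pattern described above:

```python
# Hedged sketch: look up a rule, act on it, and let results adapt the rule.
core_rules = {
    "doorway_width_cm": 80,      # baseline assumptions shipped with the code
    "max_speed": 0.5,
}
adapted_rules = {}               # overrides learned from experience, persisted elsewhere

def get_rule(name):
    """An adapted value wins over the core value when one exists."""
    return adapted_rules.get(name, core_rules[name])

def report_result(name, observed_value, success):
    """If acting on the rule failed, store the observed value as the adapted rule."""
    if not success:
        adapted_rules[name] = observed_value
```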