United Kingdom
Asked — Edited

Artificial Intelligence

Hoping this will spark a huge discussion on what everyone is looking for when it comes to their robot's AI.

AI is something I've been working on since before I even learned of EZ-Robots. My JARVIS replica from IronMan is coming up to being 3 years old come December and, while not started in ARC, over the last few months I've been porting parts over to ARC and those which are beyond the capabilities of ARC are integrated via Telnet. These include such things as voice controlled media playback, voice activated control of appliances, lights etc. and, well to be honest, far more than I can really explain right now.

Basically, up until now it is entirely built around home automation and automated media acquisition, storage, playback and logging. Recently I have been integrating and porting over parts of it in to ARC and where ARC is not capable of carrying out the actions, integration via Telnet so that ARC (and it's scripts) are aware of everything they need to be aware of (i.e. if media playback starts, EventGhost sends ARC a script command $mediaplayback = 1, when it's finished it sends $mediaplayback = 0 (that's a very simple example, it also sends more info on the media). This will be demonstrated soon by Melvin when I get around to making the video of him knowing what's on TV.

Like I said, so far it's mainly based around Media and Home Automation. What I want to discuss is...

What do you want in your robot's AI?

What do you want him/her to be able to do without human interaction? What do you want him/her to react or respond to? What do you want the AI to enhance? Why do you want AI?

And, for anyone who already has some kind of AI running; What does your AI add to your robot?

Hopefully this will spark up some interesting conversation, get some ideas out there, inspire others (and myself) to push on with the AI and make robots more intelligent:)


ARC Pro

Upgrade to ARC Pro

Synthiam ARC Pro is a new tool that will help unleash your creativity with programming robots in just seconds!

#105  

-Jstarne1, Toymaker & JustinRatliff, along with anybody else that likes to cook.,

Good morning, since this a fairly large Project over 3.0 Gb's of data I am going to place the entire "Project - Cooking" in 4 different locations, just incase one area is down for some reason:

INTERNET: 1: Microsoft OneDrive 2: R2-D2 Robotics Log Site 3: EZ-Cloud FTP Interface: ftp://www.superior-mall.biz/cooking << Coming Soon >>

Here is additional information that I think JustinRatiff could utilize within the A.I. Platform, keep in mind they have been building this Application for years, you need to directly checkout the "Now Your Cooking Tips Area", Around 450+ TIPS within 17 categories by Subject, more coming out each month: "NOW YOUR COOKING" TIPS Category LIST

To make this Application "NYC" to work in a Automatous Operation, check out the "COOL STUFF" "NOW YOUR COOKING" Cool Stuff

"Now Your Cooking Household" - Specifications: Kitchen Hints and Tips blog Dinner Co-op Tips & Terms Cook's Thesaurus Culinary Glossaries Recipe Substitutions Nutrition Data Cookbook Store

"Project - Cooking" - Specifications: Project Construct - Time Frame: 5.4 yr's Project Data Size - 3.18 gb's Project Files - 27,803 files Project Folders - 91 Folders

I should have the entire "Project - Cooking" completely uploaded sometime by the 8th of August 2014, for all of the sites in question.

The database consist of various designs, "Compressed & Un-Compressed:

Recipe Category File "Compressed" - 903 files, 112 mb's Recipe Database "Un-Compressed" Complete in one directory - 12,256 files, 1.2 gb's.

"NOW YOUR COOKING" Cookbook - Specifications: USDA Nutrition Database - 8463 Items Onboard Cookbooks - 1,499 Database Category Filenames - 1,584 Recipes - 423,253

Note: The program "Now your Cooking", Version 5.91, must be downloaded from thier website:

"NOW YOUR COOKING" Downloads

You might also get a copy of the Grocery Database to import for your shopping needs: "NOW YOUR COOKING" GROCERY LIST

Note: In the mean time, while everybody is looking into "Project - Cooking" Files, I am going to try an see if it is possible to add some additional features for automatous operation.

P.S. Jstarne1 or Toymaker - Do you have a "EZ-Robot" standard for Designing, Construction, &
Planned Usage to add-on for ARC & EZ-Mobile Functional Projects?

    I have another Project I have been working on for some time now called: &quot;Project - EBook&quot;
    I took a quick inventory, and it looks like I currently have stored on the server over 
    7900+  of different categories EBooks of which 1500 of them are solely related to &quot;Robotics&quot;. 
    I also have stored on DVD's some where around 10,000+ EBooks ranging into different 
    subjects.  I was thinking of utilizing the EBooks as a interactive tool with the support of A.I. to 
    read or teach to Disabled or Blind Personal, in locations like the Home, Hospitals, or Training 
    Centers that don't have the time or facilities or just can't afford an instructor to help. Note VA 
    Hospitals could use this technology, since they just passed 16 Billion Dollars Additional 
    Funds to help the VA.

Have a nice day...

Dave

#106  

@Dave

I also love to cook! The NYC software is reasonable enough, so I think I'll purchase a copy.:)

Question: if you were to integrate the NYC software into EZ-B, would that add-on be available as an update for NYC owners?

Tex

#107  

Sorry gentlemen, I was out of town the last couple of days, here is the link for the recipe database:

The link as follows: www.onedrive.com Login: [email protected] Password: AlphaCommand1

That should get to the files, I am also setting up the FTP Version as week speak.

Dave

#108  

This was an interesting video on intelligence from a TED talk:

If you watch it, what do you think? I've never thought of intelligence in these terms. I find it weird to think of it in physical terms at all to be honest.

PRO
Synthiam
#109  

Something for you to consider... I know the PandoraBot control has been scrutinized due to the limitations of Speech Recognition. However, if you pause the PandoraBot module, then it will stop receiving speech recognition. This allows you to use the ControlCommand() syntax to send text to it. By doing this, you can have a collection of "YES" / "NO" responses embedded in the scripts to send to PandoraBot using ControlCommand()

There's two ways to do it...

  1. Use the speech recognition module which sends ControlCommand("PandoraBot" SetPhrase, "yes"), etc... on the response yes.

  2. Use the WaitForSpeech() command and send the ControlCommand("PandoraBot", SetPhrase, "xxxx")

This allows you to take further advantage of the PandoraBot module. You would need to create your own personality that accepts more specific commands.

#110  

OK, Richard, you will have fun with this. Here is one thing I forgot to mention that is VERY Important in regards to making decisions.

In the book How to Build Your own working Robot Pet , the author talks about making good decisions through having confidence levels.

Here is an example:

Select a random number. Based on (0-3) four random numbers.

If confidence level is not equal to zero, go in that direction.

The robot goes forward after selecting the move through a random number.

He has a Confidence Level =3.

He bumps into a wall.

His confidence level drops to 2.

Then he goes forward again.

It goes to 1, then 0.

Once it is Zero, the robot knows the wall is there, so He will not go in that direction until his confidence builds up in that direction.

ALWAYS LET THE ROBOT DECIDE WHICH TASK TO DO. HE WILL ALSO SELECT THIS WAY FOR ACTIONS OR FOR DIRECTIONS, ETC. Give him a group of actions say (0-15) and let HIM decide which action to take. At any given point he will NOT be told what to do, Only that HE MUST DO SOMETHING. And, HE will decide. It will be HIS choice.

At the same time, if he has a success in that direction, the number 1 is added to his confidence level until it gets to 3.

So, you see, the decisions are intelligent based on experimental information.

If you wanted to have a higher resolution, you could have levels of up to 16 (being 0-15). this would be more accurate, but at the same time much much slower.

NOTE : we MUST keep THIS Thread Alive! Hope that this helps or inspires someone.

#111  

I have that book, "How to Build Your own working Robot Pet" along with some other old ones. I don't know of any example where a setup like that is used in the past 30 years. The closest comparison I can think of for this threshold and confidence levels are for neural nets.

I think the method in the book could be useful, but compared to newer decision making methods, do you think this method still holds value?

#112  

@Mel.... Here is your dream learning algorithm ... I did not write this... Although written for the Arduino, it should be easy enough to port to an ARC script, but would probably take a bit of time... I have run this on one of my Arduinos and it works very well... I also have the arduino *ino file if someone wants to take a crack at porting it to an ARC script...

Stochastic Learning Automaton:

A stochastic learning automaton is used to obtain supervised machine learning. The robot has a given set of possible actions, and every of these actions is tagged with the same probability at start-up. An action is then randomly selected when an according event occurs and the the robot waits for input from the user or evaluates by itself by given targets as to whether it was a good action or not. If it was good, this action will be tagged with a higher probability while the other actions will be tagged with a lower probability to be chosen if this even occurs again and vice versa.

Beside learning to avoid obstacles the algorithm will be used in the chat mode. Instead of actions the robot chooses randomly topics. According to your response the robot learns after a while, about which topics you want to talk and about which topics not so much.

I have attached a first draft of the Arduino source code and added it below. You can test it by just using the serial monitor. I am sure the code can still be simplified and cleaned up. For bug reports and suggestions, feel free to post a comment.


  /*****************************************************************************************************
*      Interactive AI program based on a variable structure stochastic learning automaton (VSLA)     *
*                               For more information visit:                                          *   
*        http://scholar.lib.vt.edu/theses/available/etd-5414132139711101/unrestricted/ch3.pdf        *
*                                                                                                    *
*                                       Arduino IDE 1.0                                              *
*                                   Arduino Mega 2560 Rev. 3                                         *
*                             Written by Markus Bindhammer, 2013                                     *
******************************************************************************************************/

//********************************************* Libraries *********************************************
#include &lt;Entropy.h&gt; // http://forum.arduino.cc/index.php/topic,108380.0.html       

//***************************************** Global variables ******************************************
int beta; //variable for the environment input
long randomnumber; //variable for generated random number
float t_1; //define initial denominators and numerators of probability value for action alpha_1
float y_1; 
float t_2; //define initial denominators and numerators of probability value for action alpha_2
float y_2; 
float t_3; //define initial denominators and numerators of probability value for action alpha_3
float y_3; 
float t_4; //define initial denominators and numerators of probability value for action alpha_4
float y_4; 
float p_total; //sum of all probabilities
float p_1; //probabilities as floating-point numbers
float p_2;
float p_3;
float p_4; 
float euclid; //variable for the gcd (greatest common divisor)

//***************************************** Global constants ******************************************
float u=1.0; //numerator of learning parameter. u must be a natural number &gt;0
float v=2.0; //denominator of learning parameter. v must be a natural number &gt;0
float a=u/v; //define learning parameter a as a fraction of u and v. a must be &gt;0 and &lt;1
int r=4; //number of desired actions

void setup() {
  Serial.begin(9600); //open the serial port at 9600 bps
  Entropy.Initialize(); //initialize random function
  t_1 = 1.0; //define initial denominators and numerators of probability values
  t_2 = 1.0; 
  t_3 = 1.0; 
  t_4 = 1.0; 
  y_1 = 4.0; 
  y_2 = 4.0; 
  y_3 = 4.0; 
  y_4 = 4.0; 
}

void loop() {
//======================================== Calculating results ========================================
  p_1=t_1/y_1;
  p_2=t_2/y_2;
  p_3=t_3/y_3;
  p_4=t_4/y_4; 
  p_total=t_1/y_1+t_2/y_2+t_3/y_3+t_4/y_4;
  
//========================================= Printing results ==========================================
  Serial.println(&quot;current probability values:&quot;);
  Serial.print(&quot;p_1 = &quot;); 
  Serial.println(p_1,4); 
  Serial.print(&quot;p_2 = &quot;); 
  Serial.println(p_2,4); 
  Serial.print(&quot;p_3 = &quot;); 
  Serial.println(p_3,4); 
  Serial.print(&quot;p_4 = &quot;); 
  Serial.println(p_4,4); 
  Serial.println(&quot;________________&quot;); 
  Serial.print(&quot;p_total = &quot;); 
  Serial.println(p_total,4); 
  Serial.println(&quot; &quot;); 
  
//========================================= Choice algorithm ==========================================
//selection sorting algorithm
  float maxprob[]={p_1,p_2,p_3,p_4, 3}; //create according array
  //the number '3' must be always kept in the array to prevent wrong ranking in case probabilities are equal to 0
  float temp;
  int mini; //variable used to hold the assumed minimum element
  int i;
  int j; 
  for(i=0; i&lt;r; i++) { //outer FOR loop
    mini=i; //first pass of FOR loop assumes 0th element as minimum, second pass assumes 1st element as minimum and so on
    for(j=0; j&lt;r; j++) { //inner FOR loop
      if(maxprob[mini]&gt;maxprob[j]) { //compares the minimum element with all, other members using inner FOR loop
        temp=maxprob[j]; //exchanges the elements   
        maxprob[j]=maxprob[mini];
        maxprob[mini]=temp;
      }
    }
  }  
  int k=0; //identifier which action was chosen
  for (i=0; i&lt;r; i++) {
    if (maxprob[i]==p_1&amp;&amp;maxprob[i+1]!=p_1) {
      randomnumber=Entropy.random(1,y_1+1);
      if (randomnumber&lt;=t_1) {
        Serial.println(&quot;action alpha_1 chosen&quot;);
        Serial.println(&quot; &quot;);
        k=1;
        break;
      }
    } if (maxprob[i]==p_2&amp;&amp;maxprob[i+1]!=p_2) {
       randomnumber=Entropy.random(1,y_2+1);
       if (randomnumber&lt;=t_2) {
         Serial.println(&quot;action alpha_2 chosen&quot;);
         Serial.println(&quot; &quot;);
         k=2;
         break;
       }
    } if (maxprob[i]==p_3&amp;&amp;maxprob[i+1]!=p_3) {
      randomnumber=Entropy.random(1,y_3+1);
      if (randomnumber&lt;=t_3) {
        Serial.println(&quot;action alpha_3 chosen&quot;);
        Serial.println(&quot; &quot;);
        k=3;
        break;
       }
    } if (maxprob[i]==p_4&amp;&amp;maxprob[i+1]!=p_4) {
      randomnumber=Entropy.random(1,y_4+1);
      if (randomnumber&lt;=t_4) {
        Serial.println(&quot;action alpha_4 chosen&quot;);
        Serial.println(&quot; &quot;);
        k=4;
        break;
      }
    } // Add here statements from p_5 to p_... if desired
  }
  if (k==0) {
    if (maxprob[r-1]==p_1) {
      Serial.println(&quot;action alpha_1 chosen&quot;);
      Serial.println(&quot; &quot;);
      k=1;
    } else if (maxprob[r-1]==p_2) {
       Serial.println(&quot;action alpha_2 chosen&quot;);
       Serial.println(&quot; &quot;);
       k=2;
    } else if (maxprob[r-1]==p_3) {
       Serial.println(&quot;action alpha_3 chosen&quot;);
       Serial.println(&quot; &quot;);
       k=3;
    } else if (maxprob[r-1]==p_4) {
       Serial.println(&quot;action alpha_4 chosen&quot;);
       Serial.println(&quot; &quot;);
       k=4;
    } // Add here statements from p_5 to p_... if desired
  }

//======================================= Input from environment ======================================
  Serial.println(&quot;please decide if chosen action was favorable or unfavorable&quot;); //instructions for the user
  Serial.println(&quot;send 0 if action was favorable&quot;);
  Serial.println(&quot;send 1 if action was unfavorable&quot;);
  Serial.println(&quot; &quot;);
  check_input: //check input from serial monitor. If input is not equal to 0 or 1, go back to 'check_input' label
  char ser = Serial.read();
  if(ser=='0') {
    beta=0;
    Serial.println(&quot;b = 0, action was favorable&quot;);
    Serial.println(&quot; &quot;);
  } else if(ser=='1') {
     beta=1;
     Serial.println(&quot;b = 1, action was unfavorable&quot;);
     Serial.println(&quot; &quot;);
  } else { 
    goto check_input;
  }

//========================================== Updating rule T ==========================================
//updating rule for probability action alpha_1 
  if (k==1&amp;&amp;beta==0) {
    t_1=((v*t_1)+(u*(y_1-t_1))); //according updating rule when action alpha_1 was chosen and beta=0 (j=i)
    y_1=(v*y_1);
  } if (k==1&amp;&amp;beta==1) {
      t_1=(t_1*(v-u)); //according updating rule when action alpha_1 was chosen and beta=1 (j=i)
      y_1=(v*y_1);
  } if (k!=1&amp;&amp;beta==0) {
      t_1=(t_1*(v-u)); //according updating rule when action alpha_1 was not chosen and beta=0 (j?i)
      y_1=(v*y_1);
  } if (k!=1&amp;&amp;beta==1) {
      t_1=((y_1*u)+(t_1*(r-1)*(v-u))); //according updating rule when action alpha_1 was not chosen and beta=1 (j?i)
      y_1=(y_1*v*(r-1));
  }
//updating rule for probability action alpha_2 
  if (k==2&amp;&amp;beta==0) {
    t_2=((v*t_2)+(u*(y_2-t_2))); //according updating rule when action alpha_2 was chosen and beta=0 (j=i)
    y_2=(v*y_2);
  } if (k==2&amp;&amp;beta==1) {
      t_2=(t_2*(v-u)); //according updating rule when action alpha_2 was chosen and beta=1 (j=i)
      y_2=(v*y_2);
  } if (k!=2&amp;&amp;beta==0) {
      t_2=(t_2*(v-u)); //according updating rule when action alpha_2 was not chosen and beta=0 (j?i)
      y_2=(v*y_2);
  } if (k!=2&amp;&amp;beta==1) {
      t_2=((y_2*u)+(t_2*(r-1)*(v-u))); //according updating rule when action alpha_2 was not chosen and beta=1 (j?i)
      y_2=(y_2*v*(r-1));
  }
//updating rule for probability action alpha_3 
  if (k==3&amp;&amp;beta==0) {
    t_3=((v*t_3)+(u*(y_3-t_3))); //according updating rule when action alpha_3 was chosen and beta=0 (j=i)
    y_3=(v*y_3);
  } if (k==3&amp;&amp;beta==1) {
      t_3=(t_3*(v-u)); //according updating rule when action alpha_3 was chosen and beta=1 (j=i)
      y_3=(v*y_3);
  } if (k!=3&amp;&amp;beta==0) {
      t_3=(t_3*(v-u)); //according updating rule when action alpha_3 was not chosen and beta=0 (j?i)
      y_3=(v*y_3);
  } if (k!=3&amp;&amp;beta==1) {
      t_3=((y_3*u)+(t_3*(r-1)*(v-u))); //according updating rule when action alpha_3 was not chosen and beta=1 (j?i)
      y_3=(y_3*v*(r-1));
  }
//updating rule for probability action alpha_4 
  if (k==4&amp;&amp;beta==0) {
    t_4=((v*t_4)+(u*(y_4-t_4))); //according updating rule when action alpha_4 was chosen and beta=0 (j=i)
    y_4=(v*y_4);
  } if (k==4&amp;&amp;beta==1) {
      t_4=(t_4*(v-u)); //according updating rule when action alpha_4 was chosen and beta=1 (j=i)
      y_4=(v*y_4);
  } if (k!=4&amp;&amp;beta==0) {
      t_4=(t_4*(v-u)); //according updating rule when action alpha_4 was not chosen and beta=0 (j?i)
      y_4=(v*y_4);
  } if (k!=4&amp;&amp;beta==1) {
      t_4=((y_4*u)+(t_4*(r-1)*(v-u))); //according updating rule when action alpha_4 was not chosen and beta=1 (j?i)
      y_4=(y_4*v*(r-1));
  } // Add here statements from p_5 to p_... if desired

//========================================== gcd calculation ==========================================
  euclid=(gcd(y_1, t_1)); //find greatest common divisor (gcd) and devide denominators and numerators by gcd to reduce fraction
  y_1=y_1/euclid;
  t_1=t_1/euclid;
  euclid=(gcd(y_2, t_2)); 
  y_2=y_2/euclid;
  t_2=t_2/euclid;
  euclid=(gcd(y_3, t_3)); 
  y_3=y_3/euclid;
  t_3=t_3/euclid;
  euclid=(gcd(y_4, t_4)); 
  y_4=y_4/euclid;
  t_4=t_4/euclid;
} // Add here statements from p_5 to p_... if desired

//********************************************* Functions *********************************************
//======================================== Euclidean algorithm ========================================
int32_t gcd(int32_t a, int32_t b) {
  if (b == 0) {
    return a;
  } if (b == 1) {
    return 1;
  } return gcd(b, a % b);
}