
Api.Ai Example To Learn A Little Bit From

I have been working with API.AI for a while and jumped back into it yesterday. I want to describe what API.AI is and what it isn't, to give a clearer picture of it. I am attaching a sample API.AI project that you can use as a starting point if you would like.

First, what is it? API.AI is great at taking speech and breaking it down to give you the intent behind it. For example, you can say something like "Hey robot, I want you to raise your right arm 19 degrees". The returned text (what would be returned to ARC) is "[Movement] [RightArm] [up] [19] [deg]". You can use a script to break down what the robot is being asked to do from that information. The phrase "Yo, raise your right arm 1 degree" would also work and would return "[Movement] [RightArm] [up] [1] [deg]" for you to parse and use.
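As a minimal sketch, here is one way such a response could be parsed in an EZ-Script. The variable name $reply is made up for illustration (the actual plugin variable may differ), and this assumes the Split() string function behaves as Split(string, delimiter, index):

#parse the bracketed response into its parts ($reply is a hypothetical variable name)
$reply = "[Movement] [RightArm] [up] [19] [deg]"
$command = Split($reply, " ", 0)   #"[Movement]"
$limb = Split($reply, " ", 1)      #"[RightArm]"
$direction = Split($reply, " ", 2) #"[up]"
$amount = Split($reply, " ", 3)    #"[19]"
$units = Split($reply, " ", 4)     #"[deg]"
if ($command = "[Movement]" and $limb = "[RightArm]" and $direction = "[up]")
  SayWait("Raising my right arm " + $amount + " " + $units)
  #place the code to move the right arm servo here
endif

The brackets are left on each token here and matched literally; stripping them first is equally valid if your script functions support string replacement.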

There is a set of chatbot features called "Small Talk". This works like any other chatbot: it takes questions or statements from the user and returns the text you configure. This is the simplest way to use API.AI and probably the easiest part of it, but it is also not the most useful.

There are also some prebuilt agents. Each of these agents uses its own API key. Because of this, I don't recommend using them: the plugin only allows one API key, and you will quickly run out of allowed uses. It is far better to build a single customized agent that contains everything you want your robot to use.

The real use of this tool is to break apart and parameterize language. This lets you use completely varied speech for specific commands in ARC. Currently, the plugin only sets two variables in ARC, so you need to pass the parameters back from API.AI in the Text response field, laid out the way you want to parse them.
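As a minimal sketch of consuming those variables in an ARC script, using the $APIAction variable and the APIGetParamByIndex() function that appear later in this thread (the parameter index ordering is an assumption, so verify it in the variable watch):

#branch on the action name the plugin sets
if ($APIAction = "robot.moveForward")
  #parameter index order is an assumption; check the variable watch
  SayWait("Moving forward " + APIGetParamByIndex(1) + " " + APIGetParamByIndex(0))
elseif ($APIAction = "input.unknown")
  SayWait("Sorry, I did not understand that.")
endif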

In addition, you can add what are called integrations. These tie into many different services, such as Actions on Google, Microsoft Cortana, Amazon Alexa skills, Telegram, Skype, Twitter, Facebook Messenger, and many other one-click integrations. There are also SDKs for Android, iOS, Ruby, HTML5, JavaScript, Node.js, Cordova, Unity, C#, C++, Xamarin, Python, PHP, Java, Botkit, and Epson Moverio that let you write whatever you want in, for the most part, whatever language you want. These integrations allow you to run code instead of simply returning the meaning of what was said back to ARC.

The example here doesn't use integrations; it is designed to send the information back to ARC for you to do something with.

This is a very scaled-down version of the Rafiki client that I had been working on: Rafiki-Client.zip. You can take this and import or restore it into your API.AI project.

I hope this example helps people see what API.AI can be used for and better understand where it fits in your robot's brain.



#17  

@David, that makes sense. I have imported it into the API.AI website. I think I'm still missing something; sorry, I'm more of a visual person than a reading person.

I think I still have to do something, but I'm not sure what. When I say "Hey robot, robot move forward", it starts the response script that you made, but it does not start the move script.

I will hunt and peck. Like I said, I'm a visual person and have a hard time reading and understanding some things, especially this techie stuff, but thanks again for pointing me in the right direction, David. I appreciate it!

#18  

OK, the zip file has been imported into the api.ai website. Is there more I need to do on the api.ai website? This is what I have in the api.ai example that David created. When I say "robot move forward", it still gives "input.unknown" in the $APIAction variable watch.


#code here to move the robot
#$unitinfo = APIGetParamByIndex(1) + " " + APIGetParamByIndex(0)
if ($APIAction = "robot.moveForward")
  #place code here to move the robot forward the specified distance of units
  SayWait("Moving forward " + APIGetParamByIndex(1) + " " + APIGetParamByIndex(0))
  #add whatever code you want to put in here
  Forward()
  Sleep(1000)
  Stop()
endif
if ($APIAction = "robot.moveBackward")
  #place code here to move the robot backward the specified distance of units
  SayWait("Moving backward " + APIGetParamByIndex(1) + " " + APIGetParamByIndex(0))
  #add whatever code you want to put in here
  Reverse()
  Sleep(1000)
  Stop()
endif
ControlCommand("Bing Speech Recognition", PauseListening)
ControlCommand("Speech Recognition", PauseOff)

I added the Forward() command to David's move script, as you can see above. When I say "robot move forward", it starts the ResponseParse script, but it does not start the moverobot script.

Do I need to do or add something in David's example code on the api.ai website, for this below?


move forward @sys.number:value @sys.unit-length-name:unit
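In API.AI, that User Says template binds the spoken number to a parameter named value (via the @sys.number system entity) and the unit to a parameter named unit (via @sys.unit-length-name). To confirm the values are actually reaching ARC, a quick debugging snippet, assuming the same APIGetParamByIndex() indexes used in the script above:

#debug: speak out what API.AI returned
SayWait("Action: " + $APIAction)
SayWait("Parameters: " + APIGetParamByIndex(0) + " and " + APIGetParamByIndex(1))

If $APIAction reads "input.unknown", the intent never matched, so the problem is on the API.AI side (for example, the agent has not been saved and trained) rather than in the ARC script.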

I was hoping Dave S. would respond, as I know David is very busy and has said we need to do the legwork. I just need a start, however small, as this is how I have learned what little I know: from others' examples. I am not asking for someone else to do my work, just asking for a visual example. I hope this makes sense.

Thanks in advance,

#20  

@David, OK, that worked. Boy, I am dense. ;) I was looking at the parameters on the API.AI website and thought I had to change something there after your last post. It now makes sense; thank you very much, David. It is now working. Thank you for all your time and for sharing your project with others. This is very cool!

Thanks and cheers!

@RichardR, thanks for your reply.

#21  

This API.AI project looks very promising for giving EZ-Robots fully functional AI fairly quickly. :)