United Kingdom
Asked — Edited

Pandoras Bot Integration

In another topic, an idea was mentioned for a user to speak to the robot, have the text sent to Chatterbot, and the response sent back... So I took it upon myself to make such a script...

The first problem is Chatterbot itself: there is no user control, and for the advanced features you need to pay. Budget is a huge thing for me, so paying for anything I don't already have is a big no-no.

But I had used Pandora Bots in the past, and it offers everything needed for free. It's really intelligent in how it works and very trainable, either online through the control panel or by just talking to the robot.

For example, if you start talking you can say "my name is Rich" and it will remember your name. The same goes for age, favourite things, etc.

Anyway, it's more of a project/work in progress than a finished script at this point, but it's not something for the showcase, so it goes in the scripting section.

So far, there are problems to overcome.

  1. Pandora Bots' API returns XML, which EZ-Script can't parse correctly. In this case all it really needs to do is set $response to whatever sits between the reply's XML tags.
  2. WaitForSpeech needs a phrase list and (unless I missed it) can't wait for dictation.

A little bit of playing around and I've managed to write (well, modify) a PHP parser for the API, and I can use HTTPGet to send the user's speech to the API and save the response as a variable to be spoken by EZ-Script.

Currently testing with a small phrase list of: "Hello", "Who are you", "What is your name" and "How old are you".

The script sends off the recognised phrase ($input), and the parser is working: it sends the response back and stores it in a variable ($response).

The script then says the text with Say($response) and the robot speaks.
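
In rough terms the test script looks something like this. It's trimmed down to the bare bones, and the URL is just a placeholder for wherever my PHP parser ends up living:

# Bare-bones test: listen for a phrase, send it to the PHP parser, speak the reply
$input = WaitForSpeech(10, "Hello", "Who are you", "What is your name", "How old are you")
# The PHP page passes $input on to Pandora Bots and returns just the reply text
# (spaces in $input will probably need URL-encoding in practice)
$response = HTTPGet("http://myserver/pandora.php?input=" + $input)
Say($response)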

Now come a few new issues:

  3. WaitForSpeech is picking up the robot speaking.
  4. WaitForSpeech waits for the phrase to finish and match; if it doesn't, it times out. It always seems to want to include the part the robot said (or acts that way). For instance, if it responds to "hello" with "hi there", it hears itself; if you then say "how old are you", it thinks you said "hi there how old are you", which doesn't match, so it times out.

I've avoided the timeout by setting it short and looping, but this means the user only has a couple of seconds to say what they want or it'll restart. If I don't loop it around, it takes "timeout" as the phrase and the robot responds to that.
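
To show what I mean by looping it around, this is roughly the shape of it (same placeholder URL as above). The check stops "timeout" ever being sent to Pandora, but the short timeout is what squeezes the user:

# Loop with a short timeout; never send the "timeout" string to the bot
:Listen
$input = WaitForSpeech(3, "Hello", "Who are you", "What is your name", "How old are you")
IF ($input = "timeout")
  Goto(Listen)
ENDIF
$response = HTTPGet("http://myserver/pandora.php?input=" + $input)
Say($response)
Goto(Listen)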

What I need to find out is whether EZ-Script can be made to wait until its own speech has finished, rather than putting in a Sleep(2000) to wait a couple of seconds.

Also, is it possible to use SAPI's dictation mode to set a variable to anything heard, not just a phrase from a set list?

And is it possible to reset WaitForSpeech after, say, a second of silence, rather than timing out and restarting even if it is "hearing" something?

Also, could an XML parser be added, like the RSS one? XML is extremely similar in construction, so it sounds simple, but without knowing how the RSS parser works in ARC I really have no clue.

I guess those last few questions are really aimed at @DJ

This may all come to nothing and be one of those many failed projects I've started; it may be something that just isn't really possible with the software; but it could be an awesome addition to the AI... Whatever it is, it's going to be fun :)



#1  

Rich, regarding your issue 3), it should wait till speech is finished! I hope DJ can answer that one. Your quote: "Also, is it possible to use SAPI's dictation mode to set a variable as anything heard not just from a set list of phrases?" I too am looking for a solution to that! I'm wondering if the ADC with a separate mic could be used to monitor sound levels and pick up something "voice"-like, which could then be scripted to trigger a "response". Just a crazy thought!... And pardon my ignorance, could you define XML?

#2  

I know you wanted to do this in EZ-Script, but couldn't you use a VB or C# script with SAPI dictation to set your variable?

DJ, are variables set from the .NET script windows global, like EZ-Script variables are? If not, I guess you could write to a file and have the new file IO functions in EZ-Script pick it up.
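
On the EZ-Script side it could be as simple as polling for the file. Something like this sketch, although I haven't checked the manual, so the exact file command names (FileExists, FileReadAll, FileDelete) are an assumption:

# Poll for a text file written by the VB/C# dictation script
:CheckFile
IF (FileExists("c:\temp\dictation.txt"))
  $heard = FileReadAll("c:\temp\dictation.txt")
  FileDelete("c:\temp\dictation.txt")
  Print("Dictation heard: $heard")
ENDIF
Sleep(500)
Goto(CheckFile)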

United Kingdom
#3  

I looked at the SDK earlier, as I have a nagging feeling this isn't going to be possible, or will need additional features in ARC. If it turns out that EZ-Script isn't capable, or more precisely is the wrong tool for the job, then I'll be looking into the SDK. The problem there is that I've not used C# or VB. I have been wanting to learn one or the other properly though, so this may be my way into that.

What it can do (with EZ-Script) is give various responses without having to list them all in the script and pick a random one. It was timing out earlier, and saying "timeout" to Pandora makes it come out with all kinds of crazy stuff, which was kinda fun.

@irobot, XML stands for eXtensible Markup Language. It is a markup language much like HTML, but XML was designed to carry data rather than display it. The tags are not predefined, and it doesn't actually do anything by itself; it's like a list. For example, a note from my bot to me:

<note>
  <to>Rich</to>
  <from>Melvin</from>
  <heading>Reminder</heading>
  <body>Don't forget to charge me this weekend!</body>
</note>

Or an extract from a random XML file on my PC:

<action>
  <cmdType>Results.RegEx</cmdType>
  <cmdString>album:(.*)</cmdString>
  <cmdRepeat>1</cmdRepeat>
</action>

Whatever is reading it knows what to do with the tags (the bits between <action> and </action>, or <to> and </to>). RSS is based on XML.

Anyway, I digress.

#4  

Thanks Rich! :) Yet another programmable-ish/usable language!

#5  

I too am trying to find time to learn C#. I know VB6 pretty well, but .NET changed a lot.

I wasn't talking about the SDK though. ARC can run VB script and C# script inside the application, so you could script just the dictation function to capture the variable in a language that can already use SAPI, and then handle the responses in EZ-Script.

I really need to finish my honey-do list so I can start working on these projects and learning dotnet. I have all these ideas and no time to implement them. :(

Alan

United Kingdom
#6  

I can use other software for the voice recognition and responses, which is no problem other than having additional software running, and it isn't free software. That is plan B, for if it just isn't possible with ARC/EZ-Script.

And, at least on my PC, the sound servo control picks up from the sound card, so if I play an MP3 it picks it up, if I watch a movie it picks it up etc. So that can be used in conjunction with the other software to flicker lamps or move servos.

I'm thinking that will be the way this heads unless I've missed something in the EZ-Script manual, but I'll keep trying a little longer before I give up on it.

The only other problem is that the other software I mentioned, much like probably all such software, needs a magic word before it'll listen. I guess the robot's name, or even the word "robot", could work, but it wouldn't feel like a real conversation... I guess that's just going to be one of those things that can't be avoided. Thinking about it, if it always listened and responded it would be constantly talking away about random rubbish, which might sound cute at first but would get very annoying very fast.
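
If I do end up needing a magic word, it would probably be as simple as sitting in a loop like this before dropping into the chat script (the word and timeout are just picked at random for the example):

# Wait for the magic word before handing over to the chat loop
:WakeWord
$trigger = WaitForSpeech(30, "Robot")
IF ($trigger = "timeout")
  Goto(WakeWord)
ENDIF
Say("Yes?")
Goto(Listen)   # the chat loop from the earlier script, assuming it's in the same script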

I won't have time to work on it this weekend, but I'll have another look next week sometime and see where it heads.

#7  

@RichMR2 , "I guess the robot's name or even the word robot could work but it wouldn't feel like a real conversation" I agree, but getting anyones "attention" can be difficult ..like when I am asked to take the garbage out my hearing is um not working!:) At a service counter one often has to "hit the counter bell" or take a number at the deli!....Perhaps thats an idea for you/me is to use a pure sound like a counter bell or finger snap or even "whistle with your mouth, as in for a pet dog!)

PRO
New Zealand
#8  

Hi Rich...

It is possible to have a loop that waits on your every word; see below.

Note: If a valid response is not heard, $Name_Response_Loops is incremented, forcing the check to be done again, three times in fact.

At the end of this routine your answer is converted and stored as Yes, No, Undecided or No Response in the $Response string.

You can then have your robot take action according to the response.

#----------------------------------------------------------|
# Confirmation of a spoken response                         |
#----------------------------------------------------------|
Print("+--------------------------------------------------------------+")
Print("| Positive responses = Yes, Affirmative, True, Okay, Correct    |")
Print("| Negative responses = No, Negative, Incorrect, False, Wrong    |")
Print("| Undecided responses = Maybe, Perhaps, Possibly                |")
Print("+--------------------------------------------------------------+")

# Initialise variables
$Name_Response_Loops = 0

# Start the sequence
:Confirmation
$Name_Response_Loops = $Name_Response_Loops + 1
Print("Check # $Name_Response_Loops")
$Response = WaitForSpeech(10, "Yes", "Affirmative", "True", "Okay", "Correct", "Incorrect", "No", "Negative", "False", "Wrong", "Maybe", "Perhaps", "Possibly")

# Report the initial response, including any timeout
IF ($Response = "timeout")
  Print("No response from User!")
ELSE
  Print("I heard you say $Response")
ENDIF

# Take the necessary action according to the response
IF ($Response = "Yes")
  Goto(Positive)
ELSEIF ($Response = "True")
  Goto(Positive)
ELSEIF ($Response = "Okay")
  Goto(Positive)
ELSEIF ($Response = "Correct")
  Goto(Positive)
ELSEIF ($Response = "Affirmative")
  Goto(Positive)
ELSEIF ($Response = "No")
  Goto(Negative)
ELSEIF ($Response = "Incorrect")
  Goto(Negative)
ELSEIF ($Response = "Negative")
  Goto(Negative)
ELSEIF ($Response = "False")
  Goto(Negative)
ELSEIF ($Response = "Wrong")
  Goto(Negative)
ELSEIF ($Response = "Maybe")
  Goto(Undecided)
ELSEIF ($Response = "Perhaps")
  Goto(Undecided)
ELSEIF ($Response = "Possibly")
  Goto(Undecided)
ELSE
  IF ($Name_Response_Loops = 3)
    Goto(No_Response)
  ENDIF
  Goto(Confirmation)
ENDIF
Halt()

# All 'Yes' answers come here
:Positive
$Response = "Yes"
Halt()

# All 'No' answers come here
:Negative
$Response = "No"
Halt()

# All 'Maybe' answers come here
:Undecided
$Response = "Undecided"
Halt()

# No answer comes here
:No_Response
$Response = "No Response"
Halt()

United Kingdom
#9  

Thanks. I had looked at that code before too which is where I got a few of the ideas in my code from.

I've not really looked into this again for the last couple of days, due to being out yesterday and recovering today. However, it's one of those things that's always ticking over in my mind, and the more I think about it the more I think it's going to end up having to be a C# addition, or a case of waiting to see whatever happened to DJ's chatbot (maybe part of the "revolution", who knows). If DJ is still working on that, then there's really very little point in going any further with this (I have no doubt his will work better).

I've also realised I need to go over the EZ-Script manual a few more times as I keep finding functions I was not aware of.

PRO
New Zealand
#10  

PS: You should also see the pause box tick and untick itself on the Speech Recognition control, thus stopping your bot from listening to itself.

United Kingdom
#11  

I'll have to check that. From memory, I don't think I have the Speech Recognition control added to ARC; that may explain the problem...

United Kingdom
#12  

Time has been against me lately, with other commitments and work stopping progress on this script and on my bot in general. However, a small update on this side project of mine: I've solved the issue of the robot listening to itself.

What I didn't realise until earlier is that there are both a Say() command and a SayWait() command. I was using Say(), which works as it should, but the script carries on (and keeps listening) while the robot is still speaking. I guess I really should have read the script manual a bit sooner than I did :)
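
So the fix is literally a one-word change at the end of the loop (same placeholder parser URL as before):

$response = HTTPGet("http://myserver/pandora.php?input=" + $input)
# SayWait() doesn't return until the robot has finished speaking,
# so the next WaitForSpeech() no longer hears the robot's own reply
SayWait($response)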

So that is one of the small issues solved. The main one remains: listening for any word, not just a phrase list. But on second thoughts (after making a few changes to the other voice-controlled software I use), it's best to stick to a list, otherwise the software can come out with some rather confusing sentences... I think I will probably do this on a word-by-word basis, with a timeout indicating no further words. Writing the word list may be the biggest task, but luckily there are downloadable dictionaries which will help there with some copy and paste work.

Now to figure out a logical list of words for each position, based on the English language and correct grammar etc. (though English was never my strong point at school, and what I learned there I've forgotten in the last 15 years!).
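
The word-by-word idea would look something along these lines (word list cut right down for the example, parser URL still a placeholder):

# Build the sentence one recognised word at a time; a timeout means the user has finished
$sentence = ""
:NextWord
$word = WaitForSpeech(2, "who", "what", "is", "your", "name", "how", "old", "are", "you")
IF ($word = "timeout")
  Goto(Send)
ENDIF
$sentence = $sentence + " " + $word
Goto(NextWord)

:Send
$response = HTTPGet("http://myserver/pandora.php?input=" + $sentence)
SayWait($response)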

#13  

What might be an interesting addition to EZ-Script is a "Listen" feature. As already pointed out, it's already using speech recognition, but it's looking for specific phrases. What if the "Listen" feature simply translated speech to text? It would probably need a parameter that indicates how long a pause marks the end of a phrase.

Something like:

Listen(2000) would activate the mic and listen for a phrase, then stop listening once there is a pause in speech longer than 2 seconds.

Whatever is captured by the Listen feature could be:

  • Added to a global variable
  • Written to a file.

Once the speech is captured as a variable, then you could probably use it in any number of diabolical ways.
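
For example, pretending Listen() existed and returned the captured text the same way WaitForSpeech does (everything here is hypothetical, including reusing Rich's placeholder parser URL):

# Hypothetical: Listen() is the proposed command, it does not exist in EZ-Script today
$heard = Listen(2000)
$reply = HTTPGet("http://myserver/pandora.php?input=" + $heard)
SayWait($reply)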

PRO
Synthiam
#14  

I've gone this route many times with an open dictionary of English words for input. Meaning, it uses every word in the English language and tries to "guess" what was spoken. You can try it on your computer by using the dictation built into Windows: speak and have the text written to Notepad, for example.

Try it - and you'll see what the trouble has been :)

Hint: It picks up words that sound like other words. Speak a sentence, and what is "detected" is entirely incorrect, but rhymes. With a proper microphone headset, it works better - but no one here uses a headset.

If I was to create a dictation control, you would require a headset - and that still might not be accurate enough.

#15  

Curses! Eye guess ewe can't wean them awl? :)

PRO
Synthiam
#16  

I've had some ideas using a custom dictionary. There have been a few examples I've tried. Those who use the EZ-SDK will see some examples of speech recognition with full dictation.

United Kingdom
#17  

I agree completely. My system has had over 12 months of constant learning through VoxCommando (which controls media playback with voice, plus a lot more), and it gets around 45-50% success in dictation mode and 95% with the fixed phrase list.

A word-by-word WaitForSpeech would be the only way for it to be accurate enough for me to see it as worth using, but then you lose a lot of the features that I expect people would want. For example, if you ask "Who is Charlie Sheen?", his name isn't in the dictionary, so the bot can't understand it.

I suspected it would become a failed idea, and I'm starting to realise I was right all along. But I still win: I got to read all 14 pages of the EZ-Script manual thanks to looking for the SayWait command :)

P.S. If anyone wants to see how horrible dictation to a chatbot is, download the VoxCommando trial, add the Pandora Bots plugin, set it up, then ask "Who is Steven Seagal?" The answer I got back was rather peculiar, but it did think I asked who was eating seagulls.

#18  

Oddly enough, the answer to both questions is the same. ;)

PRO
Synthiam
#19  

@RichMR2, 50% success in dictation mode is very, very poor and unusable. 95% with fixed phrases is great. However, phrases and "words" are different. For example, a word-by-word WaitForSpeech would require a 2-second pause between words: the way the speech API detects phrases, it needs a pause between each phrase, so even if a phrase contains only a single word, it still needs a 2-second pause.

You can change the length of the pause, but that causes a whole new pile of issues :)

From what I understand, your goal is to speak to your computer? Can you give me an example of a conversation you would like to have...

United Kingdom
#20  

The original idea was not mine; it was asked elsewhere whether a chatbot could be used. I think the idea was that someone wanted their robot to give a wider range of responses to a bigger range of questions asked by their friends. So the conversation could be pretty much anything, from asking how the robot is or what it's been doing, to asking it to tell you a story or sing a song...

At the time I was looking for "things" I could use EZ-Script for, as I learn by doing, not by reading. I already knew about Pandora Bots' AI, which could be used and customised for each robot, giving it its own personality, memory, etc.

@DJ I agree, something that only works half the time is very poor. Personally (my standards may be very high, but being a perfectionist I'm allowed high standards) I wouldn't accept anything less than 90% for dictation, and even then I wouldn't be completely happy. The 95% for fixed phrases barely cuts it with me, but that 5% miss I put down to my current mic being a Kinect mic array fixed to the wall above the TV, some 3 or 4 metres from where I am when speaking, and to some of the phrases being very similar (it's temporary until I find a decent mic solution).

I can think of a bunch of ideas that may "work" to get dictation sent to the chatbot and the reply back, but thinking it through and seeing the issues and the inaccuracy involved, I'm marking this idea down as one which failed (although I'm happy to share ideas, and even what little code I have, with anyone who does want to try to implement it in their bot).

PRO
Synthiam
#21  

@RichMR2, I think I have a really interesting solution for you :)

Give me some time to work on a prototype

PRO
New Zealand
#22  

Sounds interesting....

#23  

I wonder what DJ has in mind.

Canada
#24  

DJ's on the job. Cue superhero flourish.