
Hello,
I have been asking myself one question:
Is it possible to set a variable in the Speech Commands?
To explain what I mean:
I want to ask the robot:
Do you like "x"? (and here I can say whatever I want)
like:
Do you like fishing? Do you like drinking? Do you like flying?
and so on.
and the robot can give an answer: "Yes, I like that!"
But the important thing is to set a variable!
Also interesting:
I will say: My favorite music is "x"
Now the robot will remember the "x" (in this case it will be "Hip Hop")
And the robot will answer:
"Hip Hop (from the variable), I like that too!"
I think this is more complicated than the first idea, where the speech recognition only has to understand the start or the end of a sentence.
So, is it possible?
Boris
PS: What do I need these commands for:
$SpeechConfidence
and $SpeechPhrase?
I checked the forum and the manual, but I really couldn't find anything about them.
If you want your robot to remember what you say, and put it into a variable, I believe you have to do what I call "Covering all the bases".
What that means is, in order for the robot to remember your word, you have to add a line in speech recognition which has "Hip hop" in it. The problem with that is you end up with a command for every type of music.
I don't believe there is another way, though there are things you can do to cut down on how much you have to enter into speech recognition.
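Just to illustrate what that looks like in practice (these entries are only made-up examples, not from a real project), the speech recognition list could end up something like:
My favorite music is Hip Hop
My favorite music is Rock
My favorite music is Jazz
...and so on, one entry per genre, each with its own response.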
I will provide an example for you soon.
Here's my example. See the download link below. Make sure it downloads to your "Downloads" folder. Then extract the files by right-clicking "AI voice files.zip" and clicking Extract, then OK on the following prompt, if one shows.
Open the extracted folder and click the "Ai integrated 1.ezb" file.
In the ARC project, edit each script in the Script Manager so that each file location points to your Downloads folder, with your own computer username.
AIvoicefiles.zip
Hello Technopro!
Really nice idea!
Your demo EZB project answers another question I had... how to make random answers! I've got that now!
But my first question isn't really answered yet.
Let's look at just the first problem:
I took your demo file...
In there you have:
Hello Hi Hey
OK, no problem, this works! But if I say:
Hello Peter Hi Robot Hey variable
that doesn't work!
My idea is to copy a little bit of ALICE into EZB!
In ALICE it looks like this:
<pattern>* CHILDREN</pattern> <template>I HAVE LESS CONTACT TO CHILDREN</template>
So the "star"* is the variable and on every sentence what ends with children the Bot will say this answer.
Do you understand this idea?
But your random test is cool! I will use that too.
Boris
What you are trying to do won't work. The difference is in how speech recognition works with Alice vs ARC.
ARC uses a defined set of items. This is set up by the commands that you have defined in the speech engine. This allows the speech engine to be far more accurate.
Alice is in dictation mode, meaning that it tries to pick up anything that is said. This makes it less reliable, but it can handle far more items.
"Do you like fishing?" "Do you like drinking?" "Do you like flying?" would each have to be set up inside your speech engine for ARC to recognize that combination of words. This is why the push for DNS (Dragon NaturallySpeaking) was huge. The cost, too, was huge.
There are some other ways of doing what you are wanting to do but they all involve programming outside of ARC.
Hi David,
You see, I am still working on the speaking project!
And I think the outside programming is too complicated for me because, as I said, I really don't know C++. What a pity!
@rentaprinta.
Why not just use the Pandorabot control in ARC and create your own Pandorabot Alice (AIML) files to do what you are asking? It would be a lot easier than programming. Then you can use speech recognition (like David said, less reliable than the ARC Speech Recognition control, but it does work), and place control commands into the AIML responses to make the robot do things while it speaks.
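As a rough illustration, a category for the "favorite music" idea could look something like this in AIML (only a sketch; the wording and the predicate name "favoritemusic" are placeholders, not from an existing file):
<category>
  <pattern>MY FAVORITE MUSIC IS *</pattern>
  <template><think><set name="favoritemusic"><star/></set></think><get name="favoritemusic"/>, I like that too!</template>
</category>
The <set> tag stores whatever the * matched, <think> stops the stored value being spoken at that point, and <get> reads it back, so the bot can reuse the value in later responses as well.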
Hi Steve,
I built my own Pandorabot, but I must say the speech recognition there is horrible, to the point of being unusable. From 0-10%.
The speech recognition from EZB works at 80-90%,
and that is with me sitting in a silent room right in front of my mic!
Now imagine me standing in my room with the window open!
So only EZB will work; that's why I am interested in making these Speech Commands work a little better for a conversation, and not only for 100% exactly trained sentences.
Boris
@Boris.
Fair enough. I understand where you are coming from now.
Unless you use a really good speech recognition microphone or headset, and do lots of Windows speech recognition training, I agree that accuracy will be an issue. That's why I use a remote PC app called "iteleport" which is installed on my iPhone and Windows laptop. Speech recognition is greatly improved using the iPhone, but it does mean pressing a couple of buttons on the iPhone screen.
It's a shame the Dragon idea never took off. Being able to just speak to a robot with excellent results, without wearing a microphone or pressing any buttons on a phone to activate and send text, is still something I (and others) would love to be able to do, especially where Pandorabot is concerned.