
TerryBau asked:
Hi all
Is there a way to fire a voice command... like say "Hi", then the script asks "What is your name?"... then you say your name, the VR holds that as a variable (e.g. your name), and the EZ says the variable back?
Related Hardware: EZ-B v4
Yes, it's very easy. Try it.
A couple of questions first, to clarify: you want the robot to repeat back the name spoken to him, and the name could be any name? Are you using Bing VR?
I haven't thought this through yet, but when you speak a word (or in your case a name) to Bing, it will return what it hears to you in text form. You will see what is returned in the control window. Perhaps there is a way to use that returned text to make the robot say what is returned? Perhaps by somehow capturing Bing's returned text and then using the Say( text to speech ) command?
I think I see why you are asking about capturing the returned name into a variable. You could then use that variable with the Say command to repeat it.
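For what it's worth, here is a rough EZ-Script sketch of that idea. It assumes the Bing Speech Recognition skill publishes its last recognized phrase in a global variable (written here as $BingSpeech, which is an assumed name; check the skill's variable list in ARC) and that WaitForChange() and Say()/SayWait() are available:

    # Ask for the name, then wait for the Bing skill to hear a reply
    SayWait("What is your name?")
    WaitForChange($BingSpeech)

    # Keep the recognized text in our own variable
    $visitorName = $BingSpeech

    # Repeat it back with text to speech
    Say("Nice to meet you, " + $visitorName)

Treat this as a starting point only; the exact variable name and the way the skill triggers scripts depend on how the Bing Speech Recognition skill is configured.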
Hey Dave... so I am grabbing API data, and it returns different data depending on a variable (in this instance a name)... that name is tacked onto the JSON string... so I would like B9 to say that name back.
I already know how to have the EZ wait for a verbal command (e.g. a yes/no, or a choice from multiple names) and then, based on the input (e.g. if the response = no), do something (something like the sketch below)...
Ahhh, maybe I will look at the Bing VR, that may do what I want. I was using the standard VR from EZ.
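For reference, the yes/no style I described is something like this in EZ-Script (a rough sketch; I'm assuming WaitForSpeech() takes a timeout in seconds followed by the list of accepted phrases):

    # Listen for one of the predefined phrases for up to 10 seconds
    $response = WaitForSpeech(10, "yes", "no")

    if ($response = "yes")
      Say("Great, moving on.")
    elseif ($response = "no")
      Say("Okay, maybe later.")
    else
      # Nothing matched before the timeout
      Say("I did not catch that.")
    endif

The catch is that this only matches phrases listed ahead of time, which is why I'm hoping Bing can handle an arbitrary name.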
You’re on the right track with Bing speech recognition. If you’re having a conversation, also look into the OpenAI robot skill,
but I do want to clarify.
VR = virtual reality
Speech Recognition = speech recognition
:)
Also, there is something called Voice Recognition, sort of. It’s the process of matching a sound to a voice, so the robot would know who’s speaking. That technology has mostly been renamed to Speaker Recognition over the years. Not sure why lol
So if you do say voice recognition, you’re talking about identifying the voice, and that’s very different from speech recognition.
Try Bing Speech Recognition robot skill. It can detect anything.
Whoops, yes. VR. Voice. Sorry, slip of the mind/keystroke.
You're into an area I really don't know about. I think DJ is pointing to this skill: https://synthiam.com/Support/Skills/Artificial-Intelligence/OpenAI-Chatbot?id=20207
Ya, that OpenAI skill works pretty well.
We’ve been circling around the speaker recognition from Azure for a while. Not sure on an ETA though.