
Sproket (PRO, USA) asked:
Hello All-
How do I get Cognitive Vision to say what it sees from the camera? For some reason I cannot get Cognitive Vision to speak the variable's contents; it just says the variable name. I have tried:
Audio.Say("I see " + "$VisionDescription")
Audio.Say("I see " + "$VisionDescription ")
say("I see " + "$VisionDescription ")
This doesn't work either:
ControlCommand("Cognitive Vision", "speak", getVar("$VisionDescription"));
ControlCommand("Cognitive Vision", "speak", getVar("$VisionDescription "));
Appreciate help, thanks.
Related Hardware: EZ-B IoTiny
Related Controls: Cognitive Vision, Azure Text To Speech
To change the voice in the Azure Text to Speech skill, you can follow these steps:
1. Open the Azure Text to Speech skill's configuration in your ARC project.
2. Enter the name of the Azure voice you want to use (for example, en-US-JennyNeural) in the Neural Voice field.
3. Save the configuration; the skill will then use that voice when it speaks.
Remember to use the exact voice name as specified in the Azure documentation, and ensure that the voice is supported by the Azure Text to Speech service.
For more detailed instructions and information on using the Azure Text to Speech skill, you can refer to the support documentation on the Synthiam website: https://synthiam.com/Support/Skills/Audio/Azure-Text-To-Speech?id=21861
By following these steps, you can utilize the wide range of voices available through Azure Text to Speech to enhance your robot's speech capabilities.
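As a quick test of the configured voice, you can have the skill speak a phrase from any script. This is a minimal sketch; "Azure Text To Speech" is assumed to be the skill's name exactly as it appears in your project:
// Ask the Azure Text to Speech skill to speak a test phrase using the voice set in its configuration.
ControlCommand("Azure Text To Speech", "speak", "Testing the selected Azure voice");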
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
@Athena, you told me everything I know.
My question was: how do I change the Cognitive Vision voice (David) to one of the Azure voices I can select from https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#text-to-speech?
To use one of the Azure Text to Speech voices for the output of the Cognitive Vision skill, you would need to integrate the Cognitive Vision skill with the Azure Text to Speech skill in ARC. Here's how you can set it up:
1. Ensure that both the Cognitive Vision skill and the Azure Text to Speech skill are added to your ARC project.
2. Configure the Cognitive Vision skill to store the description of the detected scene in a variable, such as $VisionDescription.
3. In the Cognitive Vision skill's "Describe" script, add a command that uses the Azure Text to Speech skill to speak the content of the $VisionDescription variable (see the example after this list).
4. In the Azure Text to Speech skill's configuration, set the desired voice by entering the name of the Azure voice you want to use in the Neural Voice field.
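For example, the Describe script could contain something like the following sketch, which assumes the skill in your project is named "Azure Text To Speech" and that Cognitive Vision fills the $VisionDescription variable:
// Pass the scene description to the Azure Text to Speech skill so it is spoken with the configured Azure voice.
ControlCommand("Azure Text To Speech", "speak", "I see " + getVar("$VisionDescription"));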
If you want to change the voice dynamically during your project's operation, you can use the ControlCommand syntax to set a different voice, for example to switch to "en-US-JennyNeural" (see the sketch below). By setting up the Cognitive Vision skill to trigger the Azure Text to Speech skill with the detected description, you can have the Cognitive Vision output spoken in any of the Azure voices you choose.
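A minimal sketch of such a command is shown here; note that the command name "SetVoice" is an assumption, so check the Azure Text to Speech skill's Cheat Sheet in ARC for the exact ControlCommand it exposes:
// "SetVoice" is a hypothetical command name - verify it against the skill's Cheat Sheet in your project.
ControlCommand("Azure Text To Speech", "SetVoice", "en-US-JennyNeural");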
Please note that you should replace "Azure Text To Speech" with the actual name of the Azure Text to Speech skill in your ARC project if it is different. The name must match exactly as it appears in your project's skill list.
For more detailed instructions and information on using the Cognitive Vision skill and the Azure Text to Speech skill, you can refer to their respective support documentation on the Synthiam website.
This integration will allow you to use the advanced voice options provided by Azure Text to Speech for the output of the Cognitive Vision skill.
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
I figured it out, I got it to work.
In Cognitive Vision:
In the Describe script: Audio.say("I see the words " + getVar("$VisionDescription"));
In the Read Text script: ControlCommand("Azure Text To Speech", "speak", "I see the words " + $VisionReadText);
Now I can use all the Microsoft voices in the Azure skill.