PRO
Sproket
USA
Asked
Hello All-
How do I get Cognitive Vision to say what it sees from the camera? For some reason I cannot get Cognitive Vision to speak the variable; it just says the variable's name. I have tried:
Audio.Say("I see " + "$VisionDescription")
Audio.Say("I see " + "$VisionDescription ")
say("I see " + "$VisionDescription ")
This doesn't work either:
ControlCommand("Cognitive Vision", "speak", getVar("$VisionDescription"));
ControlCommand("Cognitive Vision", "speak", getVar("$VisionDescription "));
Appreciate help, thanks.
Related Hardware EZ-B IoTiny
Related Controls
Cognitive Vision
Azure Text To Speech
Hello!
To have your robot speak out what the Cognitive Vision skill sees, you need to ensure that the variables are being used correctly in your script. The variables should not be enclosed in quotes, as this will cause the script to interpret them as literal strings rather than variable names.
Here's the correct way to use the variable in a script to make the robot speak what it sees:
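A minimal sketch of the difference (the globals object, getVar, and Audio below are stand-in stubs included only so the snippet runs outside ARC; inside ARC, getVar() and Audio.say() are built in, and the sample description is made up):

```javascript
// Stand-in stubs so this sketch runs outside ARC.
// Inside ARC, getVar() and Audio.say() are built in.
const globals = { "$VisionDescription": "a dog on a couch" };
function getVar(name) { return globals[name]; }
const Audio = { say: function (text) { console.log(text); return text; } };

// Wrong: quoting the variable name makes it a literal string,
// so the robot speaks the name itself.
Audio.say("I see " + "$VisionDescription");          // speaks: I see $VisionDescription

// Right: getVar() looks the name up and returns its value.
Audio.say("I see " + getVar("$VisionDescription"));  // speaks: I see a dog on a couch
```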
Make sure that the Cognitive Vision skill is running and actively detecting objects to populate the $VisionDescription variable with data. If the variable is not being updated with a description, the robot will not have anything to say.

If you continue to have issues, please ensure that the Cognitive Vision skill is properly configured and that your EZ-B IoTiny is connected to the internet, as the Cognitive Vision skill requires an internet connection to work with Microsoft's cloud services.
For more detailed information on using the Cognitive Vision skill and troubleshooting, you can refer to the support documentation for Cognitive Vision on the Synthiam website: https://synthiam.com/Support/Skills/Camera/Cognitive-Vision?id=16211
Remember to also check the variable watcher to see if the $VisionDescription variable is being updated when the Cognitive Vision skill detects objects. If the variable is not being updated, the issue may lie with the Cognitive Vision skill's detection process or with the camera's input.

I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
This did not work.
say("I see " + $VisionDescription)
I am using the Cognitive Vision
In the Cognitive Vision I put:
in Describe - Say("I am " + $VisionConfidence + " percent certain that I see " + $VisionDescription + $VisionReadText)
or simply: say("I see " + $VisionDescription)
in the Read Text:
works well
It appears you are using JavaScript and want to use Global Variables. A global variable is accessible across all robot skills in ARC and all languages. Therefore, the variable must be accessed by using the getVar() or setVar() methods.
From the JavaScript overview: https://synthiam.com/Support/javascript-api/javascript-overview
The Variable List in the editor window displays all global variables to make it easier. Clicking on a global variable will insert the text into the script editor at the current cursor position. More about that here: https://synthiam.com/Support/Programming/code-editor/variable-picker

From that support document...
Summary
Now that you're familiar with the global variable store in ARC, you can guess the code snippet you're looking for is...

Final Thoughts
In your examples, you have written "Audio.Say" with a capital S instead of a lowercase s. Syntax is important in programming, as are upper and lower cases. The manual defines the available commands, and IntelliSense pops up the available options while typing. The examples use a lowercase s because computers treat a lowercase s and an uppercase S as different characters, so you will need to use the correct case.

If you choose not to use IntelliSense, this is the online JavaScript manual that will be useful: https://synthiam.com/Support/javascript-api/javascript-overview
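A quick illustration of why the case matters (the Audio object here is only a stub standing in for ARC's built-in, so the snippet runs on its own):

```javascript
// Stub standing in for ARC's built-in Audio skill.
const Audio = { say: function (text) { return text; } };

Audio.say("hello");   // works: lowercase s matches the defined method
// Audio.Say("hello") would throw "Audio.Say is not a function",
// because JavaScript treats Say and say as different names.
```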
This worked -
Audio.say("I see " + getVar("$VisionDescription"));
I also noticed the g in getVar should be lowercase.
and for Azure Text to Speech-
ControlCommand("Azure Text To Speech", "speak", "I see ");
ControlCommand("Azure Text To Speech", "speak", getVar("$VisionDescription"));
Thank you all.
Great, you got it going.
@Athena How did you change the Cognitive Vision voice to one of the Azure voices?
I know I can use Speech Settings and the Audio Box plugin with a script to get many voices, but not all the voices Azure has.
@ezang you can find the manual here which explains more: https://synthiam.com/Support/Skills/Audio/Azure-Text-To-Speech?id=21861
You cannot add Athena's name by editing the message. You can only add her when a message is first posted. This prevents her from responding every time a message is edited. It is mentioned when editing a message.
@athena tell ezang how he can change voices in the azure text to speech skill.
To change the voice in the Azure Text to Speech skill, open the skill's configuration and enter the name of the Azure voice you want to use in the Neural Voice field.
Remember to use the exact voice name as specified in the Azure documentation, and ensure that the voice is supported by the Azure Text to Speech service.
For more detailed instructions and information on using the Azure Text to Speech skill, you can refer to the support documentation on the Synthiam website: https://synthiam.com/Support/Skills/Audio/Azure-Text-To-Speech?id=21861
By following these steps, you can utilize the wide range of voices available through Azure Text to Speech to enhance your robot's speech capabilities.
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
@Athena you told me everything I know.
My question was: how did you change the Cognitive Vision voice (David) to one of the Azure voices I can select from https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#text-to-speech?
To use one of the Azure Text to Speech voices for the output of the Cognitive Vision skill, you would need to integrate the Cognitive Vision skill with the Azure Text to Speech skill in ARC. Here's how you can set it up:
Ensure that both the Cognitive Vision skill and the Azure Text to Speech skill are added to your ARC project.
Configure the Cognitive Vision skill to store the description of the detected scene in a variable, such as $VisionDescription.

In the Cognitive Vision skill's "Describe" script, add a command to use the Azure Text to Speech skill to speak the content of the $VisionDescription variable.

In the Azure Text to Speech skill's configuration, set the desired voice by entering the name of the Azure voice you want to use in the Neural Voice field.
If you want to change the voice dynamically during your project's operation, you can use the ControlCommand syntax to set a different voice (for example, to change the voice to "en-US-JennyNeural").

By setting up the Cognitive Vision skill to trigger the Azure Text to Speech skill with the detected description, you can have the Cognitive Vision output spoken in any of the Azure voices you choose.
Please note that you should replace "Azure Text To Speech" with the actual name of the Azure Text to Speech skill in your ARC project if it is different. The name must match exactly as it appears in your project's skill list.
For more detailed instructions and information on using the Cognitive Vision skill and the Azure Text to Speech skill, you can refer to their respective support documentation on the Synthiam website:
This integration will allow you to use the advanced voice options provided by Azure Text to Speech for the output of the Cognitive Vision skill.
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
I figured it out, I got it to work.
In Cognitive Vision:
in Describe - Audio.say("I see the words " + getVar("$VisionDescription"));
in Read Text - ControlCommand("Azure Text To Speech", "speak", "I see the words " + $VisionReadText);
Now I can use all the Microsoft voices in the Azure skill.