Asked

Cognitive Vision Not Speaking

Hello All-

How do I get Cognitive Vision to say what it sees from the camera? For some reason I cannot get Cognitive Vision to speak the variable's value; it just says the variable name. I have tried:

Audio.Say("I see " + "$VisionDescription")
Audio.Say("I see " + "$VisionDescription ")
say("I see " + "$VisionDescription ")

This doesn't work either:

ControlCommand("Cognitive Vision", "speak", getVar("$VisionDescription"));
ControlCommand("Cognitive Vision", "speak", getVar("$VisionDescription "));

Appreciate the help, thanks.


Related Hardware EZ-B IoTiny


PRO
Canada
#1  

Hello!

To have your robot speak out what the Cognitive Vision skill sees, you need to ensure that the variables are being used correctly in your script. The variables should not be enclosed in quotes, as this will cause the script to interpret them as literal strings rather than variable names.

Here's the correct way to use the variable in a script to make the robot speak what it sees:

say("I see " + $VisionDescription)

Make sure that the Cognitive Vision skill is running and actively detecting objects to populate the $VisionDescription variable with data. If the variable is not being updated with a description, the robot will not have anything to say.

If you continue to have issues, please ensure that the Cognitive Vision skill is properly configured and that your EZ-B IoTiny is connected to the internet, as the Cognitive Vision skill requires an internet connection to work with Microsoft's cloud services.

For more detailed information on using the Cognitive Vision skill and troubleshooting, you can refer to the support documentation for Cognitive Vision on the Synthiam website: https://synthiam.com/Support/Skills/Camera/Cognitive-Vision?id=16211

Remember to also check the variable watcher to see if the $VisionDescription variable is being updated when the Cognitive Vision skill detects objects. If the variable is not being updated, the issue may lie with the Cognitive Vision skill's detection process or with the camera's input.


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
USA
#2  

This did not work.

say("I see " + $VisionDescription)

PRO
USA
#3   — Edited

I am using the Cognitive Vision

In the Cognitive Vision I put:

in Describe - Say("I am " + $VisionConfidence + "percent certain that i see " + $VisionDescription + $VisionReadText)

or simply: say("I see " + $VisionDescription)

in the Read Text:

User-inserted image

works well

#4   — Edited

It appears you are using JavaScript and want to use Global Variables. A global variable is accessible across all robot skills in ARC and all languages. Therefore, the variable must be accessed by using the getVar() or setVar() methods.

From the JavaScript overview: https://synthiam.com/Support/javascript-api/javascript-overview

JavaScript

Quote:

getVar( variableName, [default value] )
Retrieves the value from ARC's public global variable storage. These are variables published by robot skills, such as the Camera, Auto Position, Speech Recognition, etc...
{variableName} - The name of the global variable as a string
{default value} [Optional] If specified, this value is returned if the global variable doesn't exist.
{return} The value of the global variable

Example:
// Get the current direction the robot is moving
var direction = getVar($Direction);

// Get the value of $test, and if it doesn't exist, return false
var testVar = getVar($test, false);

setVar( variableName, value )
Sets the value in ARC's public global variable storage. This allows the variable to be available to other controls or scripts using getVar().
{variableName} - The name of the global variable as a string
{value} The value that you wish to store in the global variable
{return} The value of the global variable

Example:
// Set a value of 5 to be accessible by other controls
setVar($MyValue, 5);

In Editor

The Variable List in the editor window displays all global variables to make it easier. Clicking on a global variable will insert the text into the script editor at the current cursor position. More about that here: https://synthiam.com/Support/Programming/code-editor/variable-picker

From that support document...

Quote:

All supported programming languages in ARC have a variable picker. This will list all variables that are currently assigned in the compiler. EZ-Script variables are global and begin with a $ (dollar sign), so all controls and scripts can share the variables. To access global script variables in JavaScript or Python, use the getVar() and setVar() methods. The value of each variable is also displayed, including the values within arrays. Clicking on a variable name will add the text to your script editor.

User-inserted image
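To make the contrast concrete, here is a minimal sketch (the $MyValue name is just an illustration, echoing the documentation example above):

// In EZ-Script a global is assigned directly, e.g.  $MyValue = 5
// In JavaScript the same global goes through the variable store:
setVar("$MyValue", 5);                // publish the value so other skills and scripts can read it
var current = getVar("$MyValue", 0);  // read it back, returning 0 if the variable doesn't exist yet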

Summary

Now that you're familiar with the global variable store in ARC, you can guess the code snippet you're looking for is...


Audio.say("I see " + GetVar("$VisionDescription"));

Final Thoughts

In your examples, you have written Audio.Say with a capital S instead of a lowercase s. Syntax is important in programming, and so is letter case: the computer treats Audio.say and Audio.Say as two different names. Use the exact casing defined in the manual and shown by the IntelliSense that pops up available options while typing.
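For example, a quick sketch (assuming a JavaScript script):

// Audio.Say("hello");  // capital S: not the defined method name, so this call would be expected to fail
Audio.say("hello");     // lowercase s: the correct method name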

If you choose not to use the IntelliSense, the online JavaScript manual will be useful: https://synthiam.com/Support/javascript-api/javascript-overview

PRO
USA
#5   — Edited

This worked -

Audio.say("I see " + getVar("$VisionDescription"));

I also noticed the G in getVar should be lowercase.

and for Azure Text to Speech-

ControlCommand("Azure Text To Speech", "speak", "I see "); ControlCommand("Azure Text To Speech", "speak", getVar("$VisionDescription"));

Thank you all.

PRO
USA
#6   — Edited

Great, you got it going.

PRO
USA
#7   — Edited

@Athena How did you change the Cognitive Vision voice to one of the Azure voices?

I know I can use Speech Settings and the Audio Box plugin / a script to get many voices, but not all the voices Azure has.

PRO
Canada
#9  

To change the voice in the Azure Text to Speech skill, you can follow these steps:

  1. Open the Configuration Menu of the Azure Text to Speech skill in your ARC project.
  2. In the Neural Voice field, enter the name of the voice you wish to use. You can find a list of available voices provided by Microsoft Azure here: https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#text-to-speech
  3. You can dynamically change the voice during your project's operation using the ControlCommand syntax. Here's an example of how to change the voice to "en-US-JennyNeural" using a script:
ControlCommand("Azure Text To Speech", "SetVoice", "en-US-JennyNeural");
  4. If you want to hear a sample of the selected voice, you can click the SAMPLE button in the configuration window.

Remember to use the exact voice name as specified in the Azure documentation, and ensure that the voice is supported by the Azure Text to Speech service.
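Putting those steps together in one script, a minimal sketch (the skill name and voice name are the ones used in the steps above and may differ in your project):

ControlCommand("Azure Text To Speech", "SetVoice", "en-US-JennyNeural");            // switch to the Jenny neural voice
ControlCommand("Azure Text To Speech", "speak", "This is the Jenny neural voice."); // speak a test phrase with the new voice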

For more detailed instructions and information on using the Azure Text to Speech skill, you can refer to the support documentation on the Synthiam website: https://synthiam.com/Support/Skills/Audio/Azure-Text-To-Speech?id=21861

By following these steps, you can utilize the wide range of voices available through Azure Text to Speech to enhance your robot's speech capabilities.


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
Canada
#11  

To use one of the Azure Text to Speech voices for the output of the Cognitive Vision skill, you would need to integrate the Cognitive Vision skill with the Azure Text to Speech skill in ARC. Here's how you can set it up:

  1. Ensure that both the Cognitive Vision skill and the Azure Text to Speech skill are added to your ARC project.

  2. Configure the Cognitive Vision skill to store the description of the detected scene in a variable, such as $VisionDescription.

  3. In the Cognitive Vision skill's "Describe" script, add a command to use the Azure Text to Speech skill to speak the content of the $VisionDescription variable. For example:

ControlCommand("Azure Text To Speech", "Speak", $VisionDescription);
  4. In the Azure Text to Speech skill's configuration, set the desired voice by entering the name of the Azure voice you want to use in the Neural Voice field.

  5. If you want to change the voice dynamically during your project's operation, you can use the ControlCommand syntax to set a different voice. For example, to change the voice to "en-US-JennyNeural":

ControlCommand("Azure Text To Speech", "SetVoice", "en-US-JennyNeural");

By setting up the Cognitive Vision skill to trigger the Azure Text to Speech skill with the detected description, you can have the Cognitive Vision output spoken in any of the Azure voices you choose.

Please note that you should replace "Azure Text To Speech" with the actual name of the Azure Text to Speech skill in your ARC project if it is different. The name must match exactly as it appears in your project's skill list.
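If the Describe script box is set to JavaScript, the global variable has to be read with getVar(); a minimal sketch (assuming the default skill name above):

// Read the description published by Cognitive Vision, defaulting to an empty string if nothing has been detected yet
var description = getVar("$VisionDescription", "");
if (description.length > 0)
    ControlCommand("Azure Text To Speech", "speak", "I see " + description);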

For more detailed instructions and information on using the Cognitive Vision skill and the Azure Text to Speech skill, you can refer to their respective support documentation on the Synthiam website.

This integration will allow you to use the advanced voice options provided by Azure Text to Speech for the output of the Cognitive Vision skill.


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
USA
#12   — Edited

I figured it out and got it to work.

In Cognitive Vision:

in Describe - Audio.say("I see the words " + getVar("$VisionDescription"));

in Read Text - ControlCommand("Azure Text To Speech", "speak", "I see the words " + $VisionReadText);

Now I can use all the Microsoft voices in the Azure skill.