PRO
Sproket
USA
Asked
Hello All-
Why is the script collection not using the real-time vision from the Cognitive Vision skill?
I'm using Cognitive Vision with ControlCommand("Azure Text To Speech", "speak", "I see " + getVar("$VisionDescription"));
It works fine when I click Describe Image, but when I make a script collection and execute the same command, it only speaks and sees what was previously captured by the Cognitive Vision skill.
I guess you could say it's only seeing the cached vision from the Cognitive Vision skill.
Thank you all for reviewing.
Related Hardware EZ-B IoTiny
Related Control
Cognitive Vision
Hello,
It sounds like you're experiencing an issue where the Cognitive Vision skill is not updating the vision data in real-time when you're running it through a script collection. The Cognitive Vision skill processes images from the camera and updates the variables with the latest descriptions when you manually click "describe image." However, when you're using a script to automate this process, you need to ensure that the script triggers a new image to be processed.
To address this, you might need to add a command in your script that instructs the Cognitive Vision skill to capture and describe a new image before attempting to speak the description. This ensures that the data is fresh and not just a cached result.
Here's a general approach to ensure the Cognitive Vision skill processes a new image: use the ControlCommand() function to trigger the Cognitive Vision skill to capture and describe a new image. Please note that I am not able to provide code or programming, but you can use ARC to create the necessary script. Make sure to check the Cognitive Vision skill documentation for the correct ControlCommand() syntax to capture and describe a new image.
If you need further assistance, you can refer to the support section for the Cognitive Vision skill on the Synthiam website: https://synthiam.com/Support/Skills/Camera/Cognitive-Vision?id=16211
Remember to also consider the daily quota limitations of the Cognitive Vision skill, as excessive requests may exceed the shared license key's allowance.
Best regards, Athena
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
Removing duplicate post
Hello,
It sounds like you're experiencing an issue where the Cognitive Vision skill is not updating the vision data in real time when you're running it through a script collection. The Cognitive Vision skill processes images from the camera and updates variables such as $VisionDescription with the latest description of what the camera sees. However, if the script is not triggering a new image to be processed, it will indeed use the last cached result.
To ensure that your script is using real-time vision data, you need to make sure that the Cognitive Vision skill is actively processing a new image each time you want to speak the description. You can do this by using the ControlCommand() function to trigger the "Describe Image" action of the Cognitive Vision skill before you attempt to speak the description.
Here's what you should include in your script collection:
1. A ControlCommand() that triggers the Cognitive Vision skill to describe a new image.
2. A short Sleep() to allow time for the $VisionDescription variable to be updated.
3. A ControlCommand() that speaks the $VisionDescription variable.
For example:
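The example code block was lost from this post; judging from the command the poster reports running later in the thread, it was presumably the following ARC script (a reconstruction, not the original post — and note that a later reply shows the "Describe Image" control command is not actually supported and must be replaced with "Detect"):

```
ControlCommand("Cognitive Vision", "Describe Image")
Sleep(3000) // Wait for 3 seconds to allow image processing
ControlCommand("Azure Text To Speech", "speak", "I see " + getVar("$VisionDescription"))
```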
Please adjust the Sleep() duration as needed based on the processing time of the Cognitive Vision skill. This should ensure that the description spoken is based on the latest image captured by the camera.
If you continue to experience issues, please make sure that your camera device is properly configured and actively streaming images to the Cognitive Vision skill. You can also refer to the Cognitive Vision skill documentation for more details and troubleshooting steps: https://synthiam.com/Support/Skills/Camera/Cognitive-Vision?id=16211
Remember that the Cognitive Vision skill requires an internet connection to process images, and there is a daily quota for the number of requests you can make with the shared license key. If you exceed this quota, you may need to wait until the next day or consider obtaining a personal license key for higher usage.
If you need further assistance, please provide more details about your setup and the exact steps you are taking, and I'll be happy to help you troubleshoot the issue.
Vision: Execution Error Line 1 Col 0 - ControlCommand Error for 'Cognitive Vision' sending 'Describe Image'. 'ControlCommand' with parameter Describe Image, is not supported for this control (Cognitive Vision)
Using:
ControlCommand("Cognitive Vision", "Describe Image")
Sleep(3000) // Wait for 3 seconds to allow image processing
ControlCommand("Azure Text To Speech", "speak", "I see " + getVar("$VisionDescription"))
We have merged your multiple duplicate posts into this single thread. Please use a single thread for the same topic to avoid spamming the forum and to be courteous to other users.
@athena isn't great at helping with programming, but her initial responses are correct. You can perform the ControlCommand() action by reading the cognitive vision manual she originally suggested. The manual explains the control commands in greater detail. However, using the Cheat Sheet to view available control commands for robot skills would be beneficial. More information is here on how to access available control commands for each robot skill: https://synthiam.com/Support/Programming/control-command
Hello,
I apologize for the confusion earlier. It seems there was a misunderstanding regarding the correct ControlCommand for the Cognitive Vision skill. The error message you received indicates that the Describe Image command is not supported for the Cognitive Vision skill.
To trigger the Cognitive Vision skill to process a new image, you should use the Detect command instead of Describe Image. The Detect command will instruct the Cognitive Vision skill to analyze the current camera frame and update the variables with the latest detection information. Here's the corrected script:
Please replace the Describe Image command with Detect in your script collection and try running it again. This should resolve the issue and allow your script to speak the description based on real-time vision data.
For more information on the available control commands for the Cognitive Vision skill, you can refer to the Cognitive Vision documentation on the Synthiam website: https://synthiam.com/Support/Skills/Camera/Cognitive-Vision?id=16211
Additionally, you can use the Cheat Sheet in ARC to view available control commands for each robot skill, as mentioned by Synthiam Support.
If you encounter any further issues, please let me know, and I'll be happy to assist you.
@athena I think waiting is a silly idea. The robot skill has the ability to execute a script after the detection has occurred. Wouldn't it make more sense to put the result code in there?
You are correct, using the built-in feature of the Cognitive Vision skill to execute a script after detection is a more efficient approach. The Cognitive Vision skill allows you to specify scripts to run after certain events, such as after a detection is completed. This way, you can ensure that the actions you want to take, such as speaking the detected description, are performed immediately after the vision processing is done, without an arbitrary delay.
To utilize this feature, you can configure the Cognitive Vision skill to execute a script after a detection. In the configuration menu of the Cognitive Vision skill, you can set up the "Describe" script to include the command to speak the detected description using the Azure Text to Speech skill.
Here's what you should do:
1. Open the configuration menu of the Cognitive Vision skill.
2. In the "Describe" script field, add the command that speaks the detected description using the Azure Text to Speech skill.
3. Save the configuration, then trigger a detection from your script collection.
For example, the "Describe" script could look like this:
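The example was lost from this post; based on the speech command used elsewhere in the thread, the "Describe" event script would presumably contain just:

```
ControlCommand("Azure Text To Speech", "speak", "I see " + getVar("$VisionDescription"))
```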
This script will be executed automatically after the Cognitive Vision skill completes a detection and the $VisionDescription variable is updated with the new description. By using this method, you eliminate the need for a Sleep() command and ensure that the spoken description is based on the latest vision data.
For more detailed instructions and options, please refer to the Cognitive Vision skill documentation on the Synthiam website: https://synthiam.com/Support/Skills/Camera/Cognitive-Vision?id=16211
If you have any further questions or need additional assistance, please let me know.
That worked:
ControlCommand("Cognitive Vision", "Detect")
Sleep(3000) // Wait for 3 seconds to allow image processing
ControlCommand("Azure Text To Speech", "speak", "I see " + getVar("$VisionDescription"))
Thank you.