Asked

Cognitive Vision Script Collection Only Executing Previous Vision

Hello All-

Why is the script collection not executing the real time vision from Cognitive Vision skill?

I'm using Cognitive Vision with:

ControlCommand("Azure Text To Speech", "speak", "I see " + getVar("$VisionDescription"));

It works fine when I click Describe Image, but when I make a script collection and execute the same command, it only speaks what was previously captured by the Cognitive Vision skill.

I guess you could say it's only seeing the cached vision from the Cognitive Vision skill.

Thank you all for reviewing.


Related Hardware EZ-B IoTiny
Related Control Cognitive Vision


PRO
Canada
#1  

Hello,

It sounds like you're experiencing an issue where the Cognitive Vision skill is not updating the vision data in real-time when you're running it through a script collection. The Cognitive Vision skill processes images from the camera and updates the variables with the latest descriptions when you manually click "describe image." However, when you're using a script to automate this process, you need to ensure that the script triggers a new image to be processed.

To address this, you might need to add a command in your script that instructs the Cognitive Vision skill to capture and describe a new image before attempting to speak the description. This ensures that the data is fresh and not just a cached result.

Here's a general approach to ensure the Cognitive Vision skill processes a new image:

  1. Use the ControlCommand() function to trigger the Cognitive Vision skill to capture and describe a new image.
  2. Wait for the Cognitive Vision skill to update the variables with the new description.
  3. Use the updated variables to perform the speech output.

Please note that I am not able to provide code or programming, but you can use ARC to create the necessary script. Make sure to check the Cognitive Vision skill documentation for the correct ControlCommand() syntax to capture and describe a new image.

If you need further assistance, you can refer to the support section for the Cognitive Vision skill on the Synthiam website: https://synthiam.com/Support/Skills/Camera/Cognitive-Vision?id=16211

Remember to also consider the daily quota limitations of the Cognitive Vision skill, as excessive requests may exceed the shared license key's allowance.

Best regards, Athena


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
USA
#2   — Edited

Removing duplicate post

PRO
Canada
#3  

Hello,

It sounds like you're experiencing an issue where the Cognitive Vision skill is not updating the vision data in real-time when you're running it through a script collection. The Cognitive Vision skill processes images from the camera and updates variables such as $VisionDescription with the latest description of what the camera sees. However, if the script is not triggering a new image to be processed, it will indeed use the last cached result.

To ensure that your script is using real-time vision data, you need to make sure that the Cognitive Vision skill is actively processing a new image each time you want to speak the description. You can do this by using the ControlCommand() function to trigger the "Describe Image" action of the Cognitive Vision skill before you attempt to speak the description.

Here's what you should include in your script collection:

  1. A command to trigger the Cognitive Vision skill to describe the image.
  2. A short delay to allow time for the image to be processed and the $VisionDescription variable to be updated.
  3. The command to speak the description using the updated $VisionDescription variable.

For example:

ControlCommand("Cognitive Vision", "Describe Image")
Sleep(3000)  // Wait for 3 seconds to allow image processing
ControlCommand("Azure Text To Speech", "speak", "I see " + getVar("$VisionDescription"))

Please adjust the Sleep() duration as needed based on the processing time of the Cognitive Vision skill. This should ensure that the description spoken is based on the latest image captured by the camera.

If you continue to experience issues, please make sure that your camera device is properly configured and actively streaming images to the Cognitive Vision skill. You can also refer to the Cognitive Vision skill documentation for more details and troubleshooting steps: https://synthiam.com/Support/Skills/Camera/Cognitive-Vision?id=16211

Remember that the Cognitive Vision skill requires an internet connection to process images, and there is a daily quota for the number of requests you can make with the shared license key. If you exceed this quota, you may need to wait until the next day or consider obtaining a personal license key for higher usage.

If you need further assistance, please provide more details about your setup and the exact steps you are taking, and I'll be happy to help you troubleshoot the issue.


PRO
USA
#4   — Edited

Vision: Execution Error Line 1 Col 0 - ControlCommand Error for 'Cognitive Vision' sending 'Describe Image'. 'ControlCommand' with parameter Describe Image, is not supported for this control (Cognitive Vision)

Using:

ControlCommand("Cognitive Vision", "Describe Image")
Sleep(3000) // Wait for 3 seconds to allow image processing
ControlCommand("Azure Text To Speech", "speak", "I see " + getVar("$VisionDescription"))

#5  

We have merged your multiple duplicate posts into this single thread. Please use a single thread for the same topic to avoid spamming the forum and be courteous to other users.

@athena isn't great at helping with programming, but her initial responses are correct. You can perform the ControlCommand() action by reading the cognitive vision manual she originally suggested. The manual explains the control commands in greater detail. However, using the Cheat Sheet to view available control commands for robot skills would be beneficial. More information is here on how to access available control commands for each robot skill: https://synthiam.com/Support/Programming/control-command

PRO
Canada
#6  

Hello,

I apologize for the confusion earlier. It seems there was a misunderstanding regarding the correct ControlCommand for the Cognitive Vision skill. The error message you received indicates that the Describe Image command is not supported for the Cognitive Vision skill.

To trigger the Cognitive Vision skill to process a new image, you should use the Detect command instead of Describe Image. The Detect command will instruct the Cognitive Vision skill to analyze the current camera frame and update the variables with the latest detection information.

Here's the corrected script:

ControlCommand("Cognitive Vision", "Detect")
Sleep(3000) // Wait for 3 seconds to allow image processing
ControlCommand("Azure Text To Speech", "speak", "I see " + getVar("$VisionDescription"))

Please replace the Describe Image command with Detect in your script collection and try running it again. This should resolve the issue and allow your script to speak the description based on real-time vision data.

For more information on the available control commands for the Cognitive Vision skill, you can refer to the Cognitive Vision documentation on the Synthiam website: https://synthiam.com/Support/Skills/Camera/Cognitive-Vision?id=16211

Additionally, you can use the Cheat Sheet in ARC to view available control commands for each robot skill, as mentioned by Synthiam Support.

If you encounter any further issues, please let me know, and I'll be happy to assist you.


#7  

@athena I think waiting is a silly idea. The robot skill has the ability to execute a script after the detection has occurred. Wouldn't that make more sense to put the result code in there?

PRO
Canada
#8  

You are correct, using the built-in feature of the Cognitive Vision skill to execute a script after detection is a more efficient approach. The Cognitive Vision skill allows you to specify scripts to run after certain events, such as after a detection is completed. This way, you can ensure that the actions you want to take, such as speaking the detected description, are performed immediately after the vision processing is done, without an arbitrary delay.

To utilize this feature, you can configure the Cognitive Vision skill to execute a script after a detection. In the configuration menu of the Cognitive Vision skill, you can set up the "Describe" script to include the command to speak the detected description using the Azure Text to Speech skill.

Here's what you should do:

  1. Open the Cognitive Vision skill configuration in ARC.
  2. Locate the "Describe" script box within the configuration menu.
  3. Enter the script you want to execute after a detection, such as using the Azure Text to Speech skill to speak the description.
  4. Save the configuration.

For example, the "Describe" script could look like this:

ControlCommand("Azure Text To Speech", "speak", "I see " + getVar("$VisionDescription"))

This script will be executed automatically after the Cognitive Vision skill completes a detection, and the $VisionDescription variable is updated with the new description.

By using this method, you eliminate the need for a Sleep() command and ensure that the spoken description is based on the latest vision data.
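Putting both pieces together, a minimal sketch of the event-driven setup might look like the following. This assumes the skill is titled "Cognitive Vision" in your project and that the "Describe" script box uses ARC's JavaScript syntax, as in the earlier examples:

```
// In your script collection (or any other script):
// only trigger a fresh detection - no Sleep() needed.
ControlCommand("Cognitive Vision", "Detect");

// In the Cognitive Vision skill's "Describe" script box, which ARC
// runs automatically once the detection result is available:
ControlCommand("Azure Text To Speech", "speak", "I see " + getVar("$VisionDescription"));
```

Because the second snippet only runs after the skill has finished processing, $VisionDescription is guaranteed to hold the latest result when it is spoken.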

For more detailed instructions and options, please refer to the Cognitive Vision skill documentation on the Synthiam website: https://synthiam.com/Support/Skills/Camera/Cognitive-Vision?id=16211

If you have any further questions or need additional assistance, please let me know.


PRO
USA
#9  

That worked:

ControlCommand("Cognitive Vision", "Detect")
Sleep(3000) // Wait for 3 seconds to allow image processing
ControlCommand("Azure Text To Speech", "speak", "I see " + getVar("$VisionDescription"))

Thank you.