
Rodney Mckay (Canada) asked:

I have a question regarding the Talk servo v2 robot skill in Synthiam ARC.
According to the manual, it states: "If the robot skill that is speaking is not using the built-in Say() commands (such as the Azure Text To Speech), you can use the ControlCommand() 'Speak,' which simulates the mouth servos with the first parameter."
I have configured my robot to use the Talk servo skill, Azure Text To Speech skill, and Autonomous AI skill. Could you please advise on where I should place the ControlCommand() "Speak" to effectively activate the Talk servo skill?
Additionally, could you provide an example of how to implement this command within the context of my current setup?
Related Hardware: EZ-B v4
Related Controls: Talk Servo v2, Azure Text To Speech
To effectively use ControlCommand() with the Talk Servo v2 skill in conjunction with the Azure Text To Speech skill, you need to ensure that the Talk Servo v2 skill is triggered whenever the Azure Text To Speech skill speaks. Since the Azure Text To Speech skill does not use the built-in Say() commands, you will need to trigger the Talk Servo v2 skill manually with ControlCommand(). Here's how you can implement this:
1. Add the Talk Servo v2 and Azure Text To Speech skills to your ARC project if you haven't already.
2. Configure the Talk Servo v2 skill with the servo settings for your robot's mouth.
3. In your script, use ControlCommand() for the Talk Servo v2 skill to simulate the mouth movement alongside the Azure speech.

Here's an example of how you can implement this in a script:
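A minimal sketch of such a script (ARC JavaScript). The skill window names are the defaults, so adjust them to match your project; ControlCommand() is built into ARC, and the stub below exists only so the sketch runs outside ARC for illustration:

```javascript
// Sketch of an ARC JavaScript script (assumes default skill window names).
// ControlCommand() is an ARC built-in; this stub only records calls so the
// sketch can run outside ARC.
var calls = [];
function ControlCommand(skillName, command, text) {
  calls.push(skillName + ":" + command);
}

var text = "Hello, I am your robot.";

// Trigger the Talk Servo v2 skill so the mouth servos simulate the phrase.
ControlCommand("Talk Servo v2", "Speak", text);

// Then have Azure Text To Speech speak the same phrase aloud.
ControlCommand("Azure Text To Speech", "Speak", text);
```

Both commands receive the same text so the simulated mouth movement matches what is actually spoken.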
Explanation: the script uses ControlCommand() to trigger the Talk Servo v2 skill to simulate mouth movement; the text parameter should match the text being spoken by the Azure Text To Speech skill. Placement: run both commands together wherever your robot initiates speech, so the servos and the audio start from the same point.
By following these steps, you can ensure that the Talk servo v2 skill is activated whenever the Azure Text To Speech skill is used, providing a synchronized mouth movement with the speech. If you have any further questions or need additional assistance, feel free to ask!
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
Thanks for the reply,
I tried it and it worked, but I had to list the Talk Servo ControlCommand before the Azure ControlCommand in the Event script editor.
There is about a 2-second delay between when the talk servo moves and when the voice begins. I've adjusted the start delay setting to its maximum of 1995 ms, and it's still not quite enough.
A quick tip: if you are responding to Athena, you need to mention her name in the post, like @Athena. You'll notice it written in several places when replying, to make sure you get the assistance you need.
Since I am here, I'll mention that Athena will most likely suggest adding a delay between the commands.
*Note: I edited your post to add formatting to the code so it is presented correctly. The format buttons in the editor help format the post to be readable by others.
@Synthiam Support: thank you for the gentle chastisement. I'll try to play by the rules from now on. I'll add a sleep command and make adjustments with it.
Hahaha, oh boy - glad I'm not the one having to do that! I was just about to respond to your message, but it looks like it's taken care of.
I think Athena would have provided the same example with the sleep command, unless she's got a different idea. @Athena, what do you think?
Hello, Rodney! It seems like you've already received some great advice. Adding a sleep() command between the ControlCommand() for the Talk Servo v2 and the Azure Text To Speech is indeed a good approach to synchronizing the mouth movement with the speech; a refined version of your script simply adds a sleep() between the two commands.

Tips for synchronization: adjust the sleep() duration to find the timing that best matches the start of the speech with the mouth movement. Start with 1000 ms and adjust as needed; experimenting with the sleep() duration will help fine-tune the synchronization.

If you have any more questions or need further assistance, feel free to ask! I'm here to help.
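The sleep()-based script Athena describes might look like this sketch (ARC JavaScript). sleep() and ControlCommand() are ARC built-ins, stubbed here only so the sketch runs outside ARC, and the 1000 ms value is just a starting point to tune:

```javascript
// Sketch of an Event script with a tuning delay between the two commands.
// These stubs record calls; in ARC, ControlCommand() drives the skill and
// sleep() actually pauses the script for the given milliseconds.
var calls = [];
function ControlCommand(skillName, command, text) { calls.push(skillName); }
function sleep(ms) { calls.push("sleep:" + ms); }

var text = "Synchronizing my mouth with my voice.";

ControlCommand("Talk Servo v2", "Speak", text); // start the mouth servos
sleep(1000); // tune this value until the servos line up with the audio
ControlCommand("Azure Text To Speech", "Speak", text); // start the speech
```

If adjusting the duration still can't close the gap, the delay may be Azure's cloud-synthesis latency, which a fixed sleep() can only approximate.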
I think the Azure text-to-speech robot skill would benefit from having a script that runs as soon as it begins speaking. Then you could put it in there.
DJ, you are correct. There is a script that can be executed in the Azure Text To Speech robot skill. You can add the Talk servo ControlCommand in that script. The manual explains more here: https://synthiam.com/Support/Skills/Audio/Azure-Text-To-Speech?id=21861
Rodney, perform these changes...
By doing that, the Talk servo will execute as soon as the Azure text-to-speech begins speaking.
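The script placed in the Azure Text To Speech skill's configuration might contain something like this sketch (ARC JavaScript). The "$SpokenText" variable name is a hypothetical stand-in for whatever variable holds the spoken text - check the skill's manual or ARC's variable watcher for the actual name:

```javascript
// Sketch of a script configured to run when Azure TTS begins speaking.
// ControlCommand() and getVar() are ARC built-ins; these stubs only make
// the sketch runnable outside ARC. "$SpokenText" is an assumed variable
// name -- verify the real one in the skill's documentation.
var triggered = [];
function ControlCommand(skillName, command, text) {
  triggered.push(skillName + "<=" + text);
}
function getVar(name) { return "example phrase"; }

// Drive the mouth servos with the same text Azure is about to speak.
ControlCommand("Talk Servo v2", "Speak", getVar("$SpokenText"));
```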
Ah yeah, I see. So your ControlCommand workflow would be like this...
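The resulting call chain might be sketched like this (ARC JavaScript; the functions below are stand-ins for the skills themselves, just to show the order of events, not real skill internals):

```javascript
// Illustration of the workflow once the Talk Servo ControlCommand lives in
// the Azure skill's start-of-speaking script: speak request -> Azure begins
// speaking -> its configured script triggers the talk servo.
var events = [];

// Stand-in for the Talk Servo v2 skill animating the mouth.
function talkServoSpeak(text) {
  events.push("Talk Servo v2 animates: " + text);
}

// Stand-in for the Azure TTS skill: when it begins speaking, it runs its
// configured script, which contains the Talk Servo ControlCommand.
function azureSpeak(text) {
  events.push("Azure TTS begins speaking: " + text);
  talkServoSpeak(text);
}

// Stand-in for the Autonomous AI skill deciding to say something.
azureSpeak("Hello Rodney");
```

Because the servo trigger rides on the start of actual speech, no manually tuned delay is needed.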