Execute a script when speech is created.
How to add the Speech Script robot skill
- Load the most recent release of ARC (Get ARC).
- Press the Project tab from the top menu bar in ARC.
- Press Add Robot Skill from the button ribbon bar in ARC.
- Choose the Audio category tab.
- Press the Speech Script icon to add the robot skill to your project.
Don't have a robot yet?
Follow the Getting Started Guide to build a robot and use the Speech Script robot skill.
How to use the Speech Script robot skill
Execute a script when speech is created. With this skill, you can create a function that moves servos or LEDs based on the text being spoken. The script can contain a loop, because it is automatically canceled after the speech is completed.
Variable The text that is currently being spoken is stored in the variable $SpeechTxt.
*Note: to avoid a never-ending recursive loop, do not speak text in the script of this skill. If you do, the spoken text will trigger this script, which will speak again, which will trigger this script again, and so on.
Main Screen
Stop Script Button A button on the main screen can be pushed to stop the currently running script. If the script begins to run out of control, you can press the button to stop it. This will not stop the current audio from playing the speech; it only stops the script.
Log Window The log window displays the output of scripts, the execution of scripts, and what audio is being spoken.
Configuration Window
There are two scripts that can be configured in the configuration window of this robot skill. One script executes when speaking begins, and the other runs when speaking ends.
Script Start Speaking This is the script that executes when any robot skill or script begins speaking within ARC. You can run a loop in this script, which will execute while the speech is playing. This script is stopped when the speaking stops or when the End Speaking script starts.
Script End Speaking This script will execute when the speech is completed. You can use this script to restore the robot's mouth and facial expression to a default position when speaking is completed. This script stops the Start Speaking script when it is executed.
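As an illustration of what an End Speaking script might do, the sketch below parks a hypothetical jaw servo on D0 and an eyelid servo on D1. In ARC itself you would call Servo.setPosition() directly; the stub here only records positions so the logic can run outside ARC as well.

```javascript
// Hedged sketch of a "Script End Speaking" handler. Ports d0/d1 and the
// positions are hypothetical; setPosition() is a stand-in for ARC's
// built-in Servo.setPosition().
var servoPositions = {};
function setPosition(port, position) {
  servoPositions[port] = position; // record where each servo was sent
}

// Restore the face to a default pose once speech has finished.
function onSpeechEnd() {
  setPosition("d0", 10); // jaw closed
  setPosition("d1", 90); // eyelids back to neutral
}
```

Because the skill stops the Start Speaking script automatically, the End Speaking script only needs to park the servos; it does not need to cancel anything itself.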
Examples
Example Script (Talk Servo) This example script will move a servo on port D0 as a jaw. Position 10 is the closed mouth, and 100 is an open mouth. There is a delay set for vowels, consonants, and the end of the sentence. These values should be tweaked based on the speaking rate.
var words = getVar("$SpeechTxt").split(" ");

for (var wi = 0; wi < words.length; wi++) {
  for (var i = 0; i < words[wi].length; i++) {
    // Lower-case the character so uppercase vowels are matched too.
    var c = words[wi][i].toLowerCase();
    switch (c) {
      case 'a':
      case 'e':
      case 'i':
      case 'o':
      case 'u':
      case 'y':
        // Open the mouth for vowels.
        Servo.setPosition(d0, 100);
        sleep(100);
        break;
      case '.':
      case ',':
      case '?':
      case '!':
        // Close the mouth for punctuation and the end of a sentence.
        Servo.setPosition(d0, 10);
        sleep(500);
        break;
      default:
        // Close the mouth for consonants.
        Servo.setPosition(d0, 10);
        sleep(100);
        break;
    }
  }
}
Okay...wicked! Gotta try this out!
Ha ha, so I never did try this out but will give it a go now. It is so great when you add example code to try out and modify. I can think of many uses of this skill, thanks.
@Athena, Why is there a desynchronization between speech and servo motor movement? When speaking slowly, the jaw finishes moving faster, but when speaking quickly, the speech ends while the jaw continues moving. Matching the speech speed is impossible as different text lengths result in varying delays in jaw movement.
The desynchronization between speech and servo motor movement can occur due to the fixed delay times set in the script. The example script provided uses static sleep values for vowels, consonants, and punctuation, which may not accurately reflect the actual time it takes to pronounce different words or phrases.
To improve synchronization, you can consider the following approaches:
Dynamic Timing: Instead of using fixed sleep values, implement a dynamic timing system that adjusts the delay based on the length of the word or the type of phoneme being spoken. This requires a more complex script that can analyze the speech and adjust the timing accordingly.
Speech Analysis: Use a speech analysis library or service that can provide more detailed information about the speech, such as phoneme timing. This data can then be used to control the servo with greater precision.
Predefined Animations: For known phrases or words, create predefined servo animations that match the timing of the speech. This approach works well for predictable speech output but lacks flexibility for dynamic or user-generated content.
External Tools: Consider using external tools or software designed for lip-syncing that can generate servo control commands based on audio analysis. These tools may provide more accurate synchronization by analyzing the speech in real-time or pre-processing the audio.
Fine-Tuning: Manually fine-tune the sleep values for different types of speech sounds to better match the natural timing of speech. This method requires trial and error but can lead to improved results for a specific set of phrases.
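The Dynamic Timing idea above can be sketched as follows. Here charsPerSecond is an assumed calibration value for your TTS voice, not something ARC provides; the function distributes an estimated utterance duration across the characters, weighting vowels heavier, so long and short texts both finish on time.

```javascript
// Sketch of dynamic timing: estimate the total utterance time from the
// text length, then give each character a proportional share of it,
// with vowels weighted double. charsPerSecond is an assumed value you
// would calibrate for your voice and speech rate.
function charDelays(text, charsPerSecond) {
  var totalMs = (text.length / charsPerSecond) * 1000; // estimated speech time
  var weights = [];
  var sum = 0;
  for (var i = 0; i < text.length; i++) {
    var w = "aeiouy".indexOf(text[i].toLowerCase()) >= 0 ? 2 : 1; // vowels last longer
    weights.push(w);
    sum += w;
  }
  // Scale so the delays add up to the estimated utterance time.
  return weights.map(function (w) { return (totalMs * w) / sum; });
}
```

Inside the Start Speaking script, each sleep() would then use the corresponding delay instead of a fixed 100 ms.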
Remember that perfect synchronization is challenging to achieve, especially with varying speech rates and complexities. It may be necessary to accept a level of approximation or to focus on improving synchronization for the most critical parts of the speech.
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
Fantomas, are you using the Talk Servo robot skill or this robot skill with the provided JavaScript above? @Athena, what do you think he's using?
Based on the context of the conversation, it seems that @FANT0MAS is referring to the Speech Script robot skill and the provided JavaScript example above. The issue described pertains to the synchronization between the spoken words and the servo movements, which suggests that they are using the Speech Script skill to execute a script that moves a servo in response to speech. The example script provided with the Speech Script skill is designed to move a servo as a jaw, simulating talking movements, but it uses fixed delay times which can lead to the desynchronization mentioned.
@athena are you sure there's no chance he's using the talk servo robot skill? That would be the easiest solution, wouldn't it?
It is possible that @FANT0MAS is using the Talk servo v2 robot skill, which is designed to move servos in sync with speech. If that is the case, the desynchronization issue might be due to the configuration settings within the Talk servo v2 skill, such as the delays for consonants, vowels, and sentence endings not being optimized for the speech rate.
The Talk servo v2 skill provides a more straightforward solution for synchronizing servo movement with speech, as it is specifically designed for that purpose. It allows for fine-tuning of the delays to match the speech speed more closely. If @FANT0MAS is not already using the Talk servo v2 skill, it would be a good recommendation to try it out for potentially better synchronization between speech and servo movement.
To use the Talk servo v2 skill effectively, @FANT0MAS would need to adjust the delay settings in the configuration menu to better match the timing of the spoken words. This might involve some trial and error to find the optimal settings for different speech rates and lengths of text.
Of course, I'm writing about the 'Speech Script' skill. It's logical because the topic is designed for discussing 'Speech Script.'
In the 'Talk servo v2' skill, I also wrote about the issues it has.
Let me highlight the advantages of 'Speech Script':
It understands all letters and punctuation marks encountered on computers in different countries with different languages. Essentially, it's entirely multilingual.
Multiple servos can be added, each with its own adjustment (eyes, eyelids, and a movable head). It operates more accurately with languages other than English.
Disadvantages of 'Talk servo v2':
It's not multilingual and can only use English speech (letters from the English alphabet).
Only one servo can be added to one skill.
It's impossible to specify exactly which characters to react to. This might be important if the robot also has eyes, eyelids, and a movable head.
Overall, the problem with the entire system is that it's configured only for the English language. If it operates multilingually, it does so only randomly. Since I live in a country of migrants and foreigners, by default, all my projects should be multilingual. They should start by identifying the language and supporting conversation in the interlocutor's language. Otherwise, why would I engage in voice communication if it only works beyond Europe and Asia?
The Speech Script skill indeed offers flexibility and the potential for multilingual support, as it allows you to write custom scripts that can handle various languages and characters. The ability to control multiple servos with individual adjustments is also a significant advantage for creating more complex and nuanced movements.
Given the advantages you've highlighted for the Speech Script skill and the limitations of the Talk servo v2 skill for your specific use case, it seems that further refining the Speech Script skill to better synchronize with speech in multiple languages would be the most suitable approach.
To address the synchronization issue, you might consider the following strategies:
Phonetic Timing: Develop a system that assigns different timing values to phonetic elements or syllables rather than individual letters. This could lead to more natural synchronization, as phonetic elements often correspond more closely to actual speech sounds and durations.
Language Detection: Implement a language detection mechanism that adjusts the script's timing based on the identified language. This could involve setting different timing parameters for each language you expect the robot to encounter.
External Libraries: Explore the use of external speech processing libraries or APIs that can provide detailed phonetic analysis and timing information. These tools might offer more accurate synchronization by analyzing the speech in real-time or pre-processing the audio.
User Feedback: Collect feedback from users interacting with the robot in different languages to identify which phonetic elements or words require timing adjustments. This data can help you fine-tune the script for better synchronization across languages.
Continuous Improvement: Continuously monitor and adjust the script as you encounter new languages or dialects. This iterative process can help you refine the synchronization over time.
By focusing on these strategies, you can enhance the Speech Script skill to provide a more synchronized and multilingual experience for users interacting with your robot.
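As a rough illustration of the language-detection strategy, the sketch below guesses German when umlauts or ß appear and picks timing parameters accordingly. The character set and the millisecond values are assumptions to calibrate, not measured data, and a real detector would need far more than this heuristic.

```javascript
// Per-language timing parameters (illustrative values only).
var TIMING = {
  de: { vowelMs: 120, consonantMs: 90, pauseMs: 500 },
  en: { vowelMs: 100, consonantMs: 100, pauseMs: 500 }
};

// Crude heuristic: umlauts or the sharp s suggest German text.
function guessLanguage(text) {
  return /[äöüß]/i.test(text) ? "de" : "en";
}

// Pick the timing set for the detected language.
function timingFor(text) {
  return TIMING[guessLanguage(text)];
}
```

The Start Speaking script could call timingFor(getVar("$SpeechTxt")) once at the top and use the returned delays in place of the fixed sleep values.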
All robot skills allow up to 4096 servos (I believe this is the max, but you can request more if needed) per servo connection. Press the ADVANCED button next to the servo selection in the Talk servo robot skill. This is a standard UI for all robot skills in the ARC environment.
Learn more about configuring servos with ARC here: https://synthiam.com/Support/ARC-Overview/Servo-Controls
Thank you for the information. It's important to note that the Talk servo v2 robot skill indeed allows for the configuration of multiple servos, which can be beneficial for creating more expressive and dynamic movements in a robot. By utilizing the ADVANCED button next to the servo selection, users can configure additional servos for various functions such as eyes, eyelids, and head movements, enhancing the robot's ability to communicate and interact in multiple languages.
For those looking to create a multilingual robot experience, it's recommended to explore the advanced servo configuration options available in ARC and to continuously refine the scripts and timing to achieve the desired level of synchronization and language support.
If Fantomas needs to use Speech Script due to not using English, he will have to modify the example script for the phonetics of his language. What modifications have you made to the script for your language, Fantomas?
@Athena, do you have any recommendations for modifying the default speech script for the German language?
To modify the default speech script for the German language, you would need to consider the phonetic characteristics of German and adjust the timing and servo movements accordingly. Here are some recommendations for adapting the script:
Phonetic Analysis: Break down German words into phonetic components and determine the duration of vowels, consonants, and special characters. German has more vowel sounds (including umlauts ä, ö, ü) and consonant combinations (like "sch", "ch", "st", "sp") that may require different timing.
Vowel Duration: German vowels may have longer or shorter durations based on whether they are short or long vowels or followed by certain consonants. Adjust the sleep duration in the script to reflect these differences.
Consonant Clusters: German has several consonant clusters that may affect the movement of the jaw servo. For example, "sch" might require a wider opening than a single consonant like "t".
Sentence Rhythm: German is known for its rhythmic flow, and sentence structure can influence the timing of servo movements. Consider the natural pauses and emphasis in sentences when scripting movements.
Lip Rounding: For vowels like "o" and "u" and the umlauts, consider adding servo movements that mimic lip rounding, which is more pronounced in German.
Special Characters: Umlauts and the sharp "s" (ß) have unique sounds in German. Ensure that the script accounts for these by adding specific conditions and movements.
Testing and Refinement: Test the script with a variety of German phrases and words. Collect feedback on the naturalness of the movements and refine the script accordingly.
Here's an example modification for German vowels:
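A sketch of such a modification could map each character, including the German umlauts, to a jaw position and delay. The positions and delays below are illustrative starting points, not calibrated values, and the helper name jawActionFor is hypothetical.

```javascript
// Sketch: map a character to a jaw position and delay, extending the
// original vowel handling with the German umlauts. Values are
// illustrative starting points, not calibrated measurements.
function jawActionFor(c) {
  switch (c.toLowerCase()) {
    case "a": case "e": case "i": case "o": case "u": case "y":
    case "\u00e4": case "\u00f6": case "\u00fc": // ä, ö, ü
      return { position: 100, delayMs: 120 };   // open mouth, longer vowel
    case ".": case ",": case "?": case "!":
      return { position: 10, delayMs: 500 };    // close mouth at pauses
    default:
      return { position: 10, delayMs: 100 };    // consonants: closed, short
  }
}
```

In the Start Speaking loop, each character would then drive Servo.setPosition(d0, action.position) followed by sleep(action.delayMs).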
Remember to adjust the sleep values based on the actual speech rate and the specific servo being used. Testing with native German speakers can help fine-tune the movements for a more realistic appearance.
Yes, we can add more, but it will still function as one servo drive. Essentially, it's the perfect setup if there's a robot concert where they sing a song in chorus. Perhaps such synchronicity with this skill would be ideal. However, I won't be able to assign vowels to one servo and periods or pauses to another servo. That's why the 'Talk servo v2' skill could be described as working with a single servo drive, responsive only to vowels in the English alphabet.
So "Speech Script" would be the correct choice, since it gives access to the settings through a script.
Do the changes to the script provided by Athena not help you?
The changes aren't helping; they're not even functioning. The best option is the one described in the skill's documentation, which is more accurate. It partially helps to adjust the rhythm by changing the speech speed, slowing it down or speeding it up. But that's not the right approach; it's only useful for testing, to identify issues and possibilities. For actual operation, it needs to work the way light organs used to, where lights blinked to the music: there's sound, there's movement; no sound, no movement. In our case, it's with servos. In my case, the sound ended, but the servos kept moving. I tested it with this equipment: EZ-B v4, EZ-B IoTiny.
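The "sound means movement, no sound means no movement" idea can be sketched as a simple mapping from an audio level to a jaw position, the way a light organ maps volume to brightness. Whether a live amplitude value is available depends on the skills in your ARC project, so treat this purely as an illustration; jawFromLevel and its parameters are hypothetical.

```javascript
// Map an audio level in the range 0..1 to a jaw servo position between
// minPos (closed) and maxPos (open). With a live amplitude source, the
// jaw would stop the instant the sound stops, avoiding the drift that
// fixed per-character delays accumulate.
function jawFromLevel(level, minPos, maxPos) {
  if (level <= 0) return minPos;  // silence: mouth closed
  if (level >= 1) return maxPos;  // peak: mouth fully open
  return Math.round(minPos + (maxPos - minPos) * level);
}
```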
I don't see how the code above doesn't work. Athena had added syllables for the German language to make the mouth move differently. The rest of the code is fine. I highly recommend you revisit that example and make modifications as needed. Athena can help with your changes in this thread regarding Speech Script.
This robot skill was updated to version 4, which now includes a script executed when speaking is completed. When speaking is completed, you can use this script to move a robot into a default position.
The only issue with this setup lies in the synchronization between the voice and the servo drive. Everything else is functioning properly. That's why I chose this particular skill. I mentioned its advantages and disadvantages right away.
@Athena, fix spelling errors in your script.
var words = getVar($SpeechTxt).split(" ");
for (var wi = 0; wi < words.length; wi++) {
  for (var i = 0; i < words[wi].length; i++) {
    var c = words[wi][i].toLowerCase();
I apologize for any confusion caused by spelling errors in the script. Here is the corrected version of the example script for the German language, taking into account the phonetic characteristics and timing adjustments:
Please make sure to test and adjust the sleep values to match the speech rate and servo speed. If you need further assistance with the script or have specific requirements for the German language, feel free to ask, and I'll be happy to help you refine the script.
@Athena, $SpeechTxt should be replaced with "$SpeechTxt"
Who's helping who? Is artificial intelligence helping me or am I helping artificial intelligence?
I apologize for the oversight. The variable $SpeechTxt should indeed be used to access the current speech text. Here is the corrected script with the proper variable name.
Please ensure that the servo port (in this case d0) matches the port you are using for your servo. Adjust the sleep durations and servo positions (100 for open mouth and 10 for closed mouth) as needed for your specific setup.
And to answer your question, the goal is for artificial intelligence to assist you, but it is a collaborative process where feedback and corrections help improve the AI's performance. Thank you for your patience and input!
@Athena, $SpeechTxt is written with quotation marks, opening and closing, like here: ("$SpeechTxt")
I made a video where everything seems fine at first, but towards the end, the text finishes, yet the movements continue. If the text is short, just one line, it stops on time; the longer the text, the more delayed the stop.
Video Test
If the movement continues, you can reference the manual above to use the speech end script.
@Athena, explain to Fantomas how to use the robot skill.
To address the issue of the servo movements continuing after the speech has finished, you can utilize the "Script End Speaking" feature in the Speech Script robot skill. This feature allows you to define a script that will be executed once the speech has completed, which can be used to stop the servo movements and return the servos to their default positions.
Here's how you can use the "Script End Speaking" feature:
Open the configuration menu of the Speech Script robot skill by clicking on the gear icon in the title bar of the skill.
In the configuration menu, you will find two script areas: "Script Start Speaking" and "Script End Speaking."
In the "Script End Speaking" script area, enter the script commands to stop the servo movements. For example, if you want to return a servo to a default position, you can use the Servo.setPosition() command.
Now, when the speech finishes, the script you defined in the "Script End Speaking" area will be executed, which should stop the servo movements and return the servos to their default positions.
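For instance, a minimal End Speaking script could park every servo at a default position. The DEFAULTS map, the ports, and the parkServos helper below are hypothetical; in ARC the injected function would simply be Servo.setPosition.

```javascript
// Hypothetical default positions: jaw closed, eyelids open, head centered.
var DEFAULTS = { d0: 10, d1: 90, d2: 50 };

// Send every configured servo to its default. setPosition is injected so
// the sketch runs outside ARC; inside ARC, pass Servo.setPosition.
function parkServos(setPosition) {
  Object.keys(DEFAULTS).forEach(function (port) {
    setPosition(port, DEFAULTS[port]);
  });
}
```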
Please test this solution with your setup and adjust the servo positions and ports as necessary. If you encounter any further issues or have additional questions, feel free to ask for assistance.
The end-of-speaking feature is very useful: now, at the end of the conversation, the robot is left with its eyes open. However, it didn't solve the issue of stopping the movement in time. The video demonstrates the end-of-speaking settings. Here's a screenshot of the settings:
Of course, I deleted everything and left only one command, the one that closes the mouth. The effect is the same.
Something very strange happened: when I tried sending the sound to the computer (PC Speaker), a long text worked correctly. At the end of the conversation, the servo movements stopped. Certainly, there was a mismatch between the movements and the voice in the middle, but the most important thing was that it ended on time.
However, when I sent the same text again, this time using the Say (EZ-B v4 Speaker) button, the desynchronization reappeared. The voice ended, but the servo continued moving as shown in the video.
Hardware issue? No parallel operation? Are commands executed sequentially? First, signal for sound, then signal for servo drive, then sound again, followed by servo drive once more, and so on, executing in sequence. Hardware: EZ-B v4, EZ-B IoTiny