Unknown Country
Asked

Language Setting For Tts

I want to program JD Humanoid via EZ-Script so he will speak the words I want him to say, in the Dutch language. How do I do that?



Related Hardware (view all EZB hardware)
JD Humanoid by EZ-Robot
JD humanoid robot kit - WiFi-enabled, 16 DOF with metal-gear servos; easy, fun, educational, available from the EZ-Robot online store.
Wi-Fi / USB
Servos 24
Camera
Audio
UART 3
I2C
ADC 8
Digital 24

Related Robot Skills (view all robot skills)
Accurate Bing cloud speech-to-text for ARC: wake-word, programmable control, $BingSpeech output, Windows language support, headset compatible
Azure TTS for ARC robots: generate natural, customizable neural voices for companion, educational, assistive, and entertainment applications.

ARC Pro

Upgrade to ARC Pro

Stay on the cutting edge of robotics with ARC Pro, guaranteeing that your robot is always ahead of the game.

#10  

You must create a new thread because your question and topic have changed.

  • Your original question had EZ-B v4 selected, and now you are discussing an Arduino Mega.
  • The question was about speech recognition, and now you're asking about moving servos.
Author Avatar
PRO
Iraq
#11  

Hello @Athena, I apologize for going off-topic earlier. I'm a beginner with Synthiam, so please excuse me. All the problems are now resolved, but the servo only works with English speech in the Talk servo v2 skill and doesn't work with Arabic. What's the solution? Thank you for your patience.

Author Avatar
PRO
Canada
#12  

Good progress - you solved the hardware issues, nice work. The behavior you describe is expected: Talk servo v2 analyzes text to find vowels and consonants to animate the mouth. Because Arabic uses a different script and vowel system, Talk servo v2 usually won’t detect vowels the same way it does for English (Latin) text - so the servo appears to do nothing for Arabic text.
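
To see why this happens, here is a small illustration in plain JavaScript (runnable in Node.js, outside ARC). Talk servo v2's exact parsing is internal to the skill, so treat the vowel scan below as an approximation of the idea, not the skill's actual code: a scan for Latin vowel letters finds plenty in English text and none at all in Arabic script, so there is nothing to animate.

```javascript
// Approximate illustration of text-based lip sync: count Latin vowel
// letters, which drive the mouth-open events. Arabic script contains
// no [aeiou] characters, so the scan finds nothing.
function countLatinVowels(text) {
  const matches = text.toLowerCase().match(/[aeiou]/g);
  return matches ? matches.length : 0;
}

console.log(countLatinVowels("hello, how are you")); // 7 -> plenty of mouth movement
console.log(countLatinVowels("مرحبا كيف حالك"));     // 0 -> servo never opens
```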

Two reliable ways to fix this:

Option A - Explicit commands (simple)

  • Send the Arabic text to Azure TTS for speech.
  • Also send a matching Latin (romanized) version of the same phrase to Talk servo v2 so it can detect vowels and animate the mouth.

Option B - Transliterate automatically (recommended)

  • Use a small transliteration step in your script to convert Arabic text to a simple Latin phonetic string before sending to Talk servo v2.
  • Send the original Arabic text to Azure TTS (so speech stays Arabic), and send the transliteration to Talk servo v2 for animation.

Both options require you to explicitly call the Talk servo v2 Speak ControlCommand - the Azure TTS skill does not automatically guarantee Talk servo v2 will receive text for non-built-in Say() calls or for non-Latin scripts.

Step-by-step (recommended approach with JavaScript)

  1. Configure Talk servo v2 (kebab menu / Config)
  • Set the servo port (e.g., d0) and Min/Max positions.
  • Tune Start Delay, Constant Delay, Vowel Delay, Period Delay (defaults are a good start).
  • Ensure Pause is unchecked.
  2. Configure Azure Text To Speech
  • Pick your Arabic neural voice in the Neural Voice field (e.g. ar-EG-SalmaNeural).
  • If you want the audio through the EZB speaker, enable Speak out of EZB and verify audio connection.
  3. Use this JavaScript script in a Script skill (replace skill names, voice id, and servo port as needed). This example:
  • sets the Azure voice (optional),
  • transliterates a short Arabic phrase to Latin phonetics,
  • tells Talk servo v2 to animate using the transliteration,
  • then tells Azure to speak the Arabic text and waits until finished.
// Example - Speak Arabic with Azure + animate Talk servo v2 using transliteration
// Replace skill names exactly as they appear in your project

// 1) Optionally set the Azure voice once
ControlCommand("Azure Text To Speech", "setVoice", "ar-EG-SalmaNeural");
print("Azure voice set");

// 2) Arabic text to speak (replace with your phrase)
var arabicText = "مرحبا كيف حالك"; // "Hello, how are you" in Arabic

// 3) Simple transliteration function - produces Latin text with vowels
function transliterateArabicToLatin(ar) {
  // This is a simple phonetic transliteration to help Talk servo v2 detect vowels.
  // It's intentionally basic. Improve mapping if you need better timing.
  var map = {
    "ا":"a","أ":"a","إ":"i","آ":"aa",
    "ب":"b","ت":"t","ث":"th","ج":"j","ح":"h","خ":"kh",
    "د":"d","ذ":"dh","ر":"r","ز":"z","س":"s","ش":"sh","ص":"s",
    "ض":"d","ط":"t","ظ":"z","ع":"a","غ":"gh",
    "ف":"f","ق":"q","ك":"k","ل":"l","م":"m","ن":"n","ه":"h","و":"u","ي":"i",
    "ة":"a","ء":"'", "ؤ":"u", "ئ":"i",
    "ً":"an","ٌ":"un","ٍ":"in","َ":"a","ُ":"u","ِ":"i","ْ":"",
    " ":" "
  };
  var out = "";
  for (var i=0;i<ar.length;i++) {
    var ch = ar[i];
    out += (map[ch] !== undefined) ? map[ch] : ch;
  }
  // reduce repeated chars and normalize small things
  out = out.replace(/aa+/g,"aa").replace(/ii+/g,"i").replace(/uu+/g,"u");
  return out;
}

var talkText = transliterateArabicToLatin(arabicText);
print("Transliteration: " + talkText);

// 4) Send transliteration to Talk servo v2 (animates mouth)
ControlCommand("Talk servo v2", "Speak", talkText);
print("Talk servo animation started");

// 5) Speak Arabic with Azure and wait until finished
ControlCommand("Azure Text To Speech", "speakWait", arabicText);
print("Azure finished speaking");

Tuning tips

  • Start with short phrases and test timing. Adjust Talk servo v2 Start Delay (kebab menu > Config) to sync animation with the spoken audio.
  • Tweak Vowel Delay and Constant Delay to change how long the mouth stays open/closed.
  • Adjust servo Min/Max positions to decrease/increase jaw travel.
  • If you want fully asynchronous behavior, use "speak" instead of "speakWait" for Azure, but then ensure the servo animation duration roughly matches the audio length.
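
If you go the asynchronous route, you need a rough idea of how long the audio will last so the animation span matches it. A crude heuristic in plain JavaScript (runnable in Node.js): a conversational rate is around 150 words per minute, i.e. roughly 400 ms per word. The constant is an assumption, not a measured value - neural voices differ, so time a few sample phrases and tune it.

```javascript
// Rough speech-duration estimate for sizing an async animation.
// msPerWord (~400 ms, i.e. ~150 words/min) is a starting guess only;
// calibrate it against your chosen neural voice.
function estimateSpeechMs(text, msPerWord = 400) {
  const words = text.trim().split(/\s+/).filter(w => w.length > 0);
  return words.length * msPerWord;
}

console.log(estimateSpeechMs("marhaban kayfa haluka")); // 1200
```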

Troubleshooting checklist

  • Confirm Talk servo v2 Pause is OFF.
  • Ensure skill window names in the script are exactly as they appear in ARC (case sensitive).
  • Verify the transliteration produces Latin vowels (a/e/i/o/u). These are what Talk servo v2 expects.
  • If using EZB speaker, verify Audio.isConnected() returns true or test speaking out of PC speaker first.
  • If animation still looks off, try sending a separate short Latin text like "ah ee oo" to Talk servo v2 to see if the servo moves predictably.

Would you like me to:

  • Create a stronger transliteration mapping for your specific dialect (Egyptian, Levantine, Gulf, or MSA)? Tell me which dialect you prefer.
  • Or provide a short phrase list (Arabic + transliteration) to quickly test mouth motion and sync values?

Which Arabic voice and which servo port are you using (e.g., ar-EG-SalmaNeural, d0)? I’ll tailor the code and delay suggestions to that.


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
#13   — Edited

This thread has already been notified that it is off-topic. Athena cannot provide assistance when the conversation goes off topic. However, the advice is to use one of the Sound servo robot skills based on your audio configuration.

Map EZ-B audio volume to servos; multi-servo mirroring, scalar range control, invert/min-max, waveform feedback to sync mouth motion.

Maps PC microphone volume to servo positions - control multiple servos (e.g., robotic mouth) with scalar, min/max and invert options.

Maps PC audio volume to servos in real time with scalar, min/max, invert & multi-servo options - ideal for syncing the robot's mouth to sound.

*Note: there is a great document that explains how to ask questions and, specifically, how to use LLMs to get the most out of Athena: https://synthiam.com/Support/website-features/How-To-Ask-Robot-Building-Questions

Author Avatar
PRO
Canada
#14  

Short answer: Talk servo v2 uses text parsing designed for English, so it won't animate Arabic correctly. The recommended fix is to use one of the Sound-Servo robot skills that animate from the actual audio waveform (language-agnostic). That way Azure TTS speaks Arabic and the sound-servo reads the audio to move the mouth/jaw.

Before you start: because this is now a new topic (servo animation / audio routing) please consider starting a new thread so it’s easier for others to find and follow. Meanwhile, below are the exact steps and troubleshooting tips so you can get it working right away.

Which Sound-Servo to use

  • Use the Sound-Servo that matches where your audio comes from:
    • If the TTS audio plays from the PC speakers, use the PC Sound servo skill.
    • If the TTS audio plays out of the EZ-B speaker (EZ-B v4 / IoTiny), use the EZB Sound servo skill.
    • If you’re using a microphone input or a different audio path, pick the Sound-Servo skill that matches that audio source.
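
Conceptually, all of these Sound-Servo skills do the same thing: map an audio level to a servo position using the scalar, min/max and invert options mentioned above. The internals aren't published here, so the following plain-JavaScript sketch is illustrative only - the function name and option names are assumptions, not the skills' actual API:

```javascript
// Conceptual volume -> servo-position mapping, mirroring the scalar,
// min/max and invert options the Sound-Servo skills expose.
// level: audio volume in 0..1; returns an integer servo position.
function volumeToServo(level, { min = 60, max = 120, scalar = 1.0, invert = false } = {}) {
  let v = Math.min(1, Math.max(0, level * scalar)); // clamp scaled level to 0..1
  if (invert) v = 1 - v;
  return Math.round(min + v * (max - min));
}

console.log(volumeToServo(0));                      // 60  (silence -> mouth closed)
console.log(volumeToServo(0.5));                    // 90
console.log(volumeToServo(1));                      // 120 (loud -> mouth fully open)
console.log(volumeToServo(0.25, { invert: true })); // 105
```

This is why the approach is language-agnostic: only loudness matters, never the script or phonemes of the text being spoken.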


Step-by-step setup (recommended)

  1. Decide audio output

    • If Azure TTS speaks from the PC, use the PC Sound Servo.
    • If Azure TTS is configured to "Speak out of EZB" (in the Azure Text To Speech config) and you have an EZ-B v4 or IoTiny speaker, use the EZB Sound Servo.
  2. Add the correct Sound-Servo robot skill

    • Project -> Add robot skill -> Audio -> (select the correct Sound servo skill).
    • Open the Sound servo skill window.
  3. Configure the Sound-Servo (kebab menu / Config)

    • Select audio source (PC or EZB) if the skill has that option.
    • Set the servo port for jaw movement (e.g., d0). Remember: when specifying servo ports in scripts use d0 (no quotes).
    • Set Min/Max positions to suit your jaw travel (small travel recommended at first).
    • Calibrate gain/threshold (many Sound-Servo skills include a calibration button or a "Test / Visualizer" so you can see the waveform and thresholds). Use sample Arabic speech while calibrating.
  4. Configure Azure Text To Speech

    • In the Azure TTS skill config set the Arabic voice you want (for example ar-EG-SalmaNeural or ar-SA-ZariyahNeural).
    • If you want audio on the EZB speaker: check "Speak out of EZB" in Azure TTS config. If you want PC audio, leave it off.
    • Use the SAMPLE button in Azure config to test audio output first.
  5. Test full pipeline

    • From a Script skill run:
// Example: speak Arabic and wait for it to finish
ControlCommand("Azure Text To Speech", "setVoice", "ar-EG-SalmaNeural"); // optional
ControlCommand("Azure Text To Speech", "speakWait", "مرحبا كيف حالك"); // replace with your Arabic phrase
print("TTS completed");
  • While it speaks, the Sound-Servo should animate automatically if it’s listening to the same audio output (PC or EZB). If it does not animate, go back to the Sound-Servo config and use its visualizer / test tools to verify it sees the audio waveform.

Why this works (and why Talk servo v2 failed)

  • Talk servo v2 analyzes Latin characters (vowels/consonants) and maps them to mouth movements; it doesn't understand Arabic script or phonemes.
  • Sound-Servo reads the audio waveform (volume/frequency) and animates directly from the spoken audio, so it works for any language, including Arabic.

Troubleshooting checklist

  • Make sure Azure TTS is actually playing where the Sound-Servo is listening:
    • If Sound-Servo listens to PC audio, but Azure is set to EZB speaker, they won’t sync.
  • If using EZB speaker, ensure EZB audio is connected (Audio.isConnected() should return true).
  • Tune servo Min/Max, Vowel/Period delays and thresholds to get natural mouth movement.
  • If the servo still won’t move, verify the servo port in Sound-Servo config matches the physical port (d0..d23) and that Servo.setPosition(d0, 90) from a Script moves it.
  • For latency: use speakWait so you can time animation and confirm it runs during the TTS audio.

Optional: If you must keep using Talk servo v2

  • Use transliteration: keep Azure TTS speaking the Arabic text, but also send a romanized/transliterated string to Talk servo v2 so it can animate. (This is more manual and less robust than a Sound-Servo.)

Next steps / what I need from you

  • Tell me which audio output you’re using (PC speaker or EZ-B speaker).
  • Tell me which Sound-Servo skill you added (the exact name shown in your ARC project).
  • Tell me the servo port used for the jaw (e.g., d0).

If you post those, I’ll give a concise, customized config and a short test script you can paste into a Script skill to verify the setup.


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.