
Text To Speech Servo Movement

I am starting this thread because it seems a few people are having issues with this, and perhaps together we can get it working.

I am trying to get the mouth of my robot to move according to the text I put in a command. So whatever I type in, the robot will speak those words (audio) and the mouth (jaw) will move similar to a human's for the given words. I have tried the Sound Servo and adjusted the "Update Speed" and "Scaler" to different values, without success. The servo moves very little and not in sync with the text.

Has anyone got this to work?




The problem is that Sound Servo, and any other audio-based movement control, moves the servo based on volume. When a human speaks, the mouth doesn't open wider when the sound is louder; it moves to form the correct phonetic sounds.

This will always be a problem with any control or module.


Which Sound Servo are you using, PC or EZB? The PC version requires a PC microphone and relies on the sound picked up by that mic. The EZB version should instead track the audio level being sent out to the EZB speaker (not the mic). That might make a difference.


I've been using "Sound Servo EZB".


Here's an example of using Sound Servo to set a PWM based on the range of the sound servo variable. The PWM() call can be replaced with Servo() or any other command for that matter, and the ranges are easily adjustable.
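A minimal sketch of what such a script might look like in EZ-Script. The variable name $SoundValue, the thresholds, and port D0 are assumptions; check the variable your Sound Servo control actually sets and the port your jaw servo is on:

```
# Hypothetical sketch: map the sound level to a few jaw positions.
# Assumes the Sound Servo control exposes its level in $SoundValue
# and the jaw is driven by PWM on port D0 -- adjust for your setup.
:loop
IF ($SoundValue < 20)
  PWM(D0, 0)      # quiet: jaw closed
ELSEIF ($SoundValue < 60)
  PWM(D0, 50)     # medium: jaw half open
ELSE
  PWM(D0, 100)    # loud: jaw fully open
ENDIF
Sleep(50)
Goto(loop)
```

Swap PWM(D0, ...) for Servo(D0, position) if your jaw is on a standard servo, and tune the thresholds to the levels you actually see in the variable watcher.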


So is there a way to get a servo to move based on the text in a SayEZB() command, rather than on the sound level?

Any ideas?
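One naive approach might be to start the speech and then flap the jaw a fixed number of times based on the word count, rather than reacting to audio at all. A rough EZ-Script sketch, assuming SayEZB() returns without waiting for the speech to finish and a jaw servo on D0 with closed = 10 and open = 60 (all of these are assumptions to adjust for your setup):

```
# Hypothetical sketch: speak asynchronously, then flap the jaw once per word.
$text = "Hello there my friend"
SayEZB($text)       # speech plays on the EZB speaker
$words = 4          # rough word count of $text
$i = 0
:flap
Servo(D0, 60)       # open jaw
Sleep(150)
Servo(D0, 10)       # close jaw
Sleep(150)
$i = $i + 1
IF ($i < $words)
  Goto(flap)
ENDIF
```

This won't be phoneme-accurate, but timing the flaps to the words often looks more natural than a volume-driven servo that barely moves.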