
Here is a little project I have been working on with fellow forum member Bob Houston... It was originally written by another forum member, Luis Vazquez, who really did a good job on this... I have tweaked it a little to try to make it more adaptable from project to project... It works like the Sound Servo control, but uses text to drive the servo instead of sound...
I have Luis's permission to post, so here it is...
I have posted it to the cloud as well...
Thanks Richard, right on time to test InMoov's head.
Richard R and Luis Vazquez have done a great job writing these scripts to control a servo from text contained within a script. To get the text to control the servo, you enter your text in the $sent and $sent2 variables and then call ControlCommand("Text Speech Engine", ScriptStart),
as in this example:
You also need to have this script running as a ControlCommand:
These scripts work great! You may have to adjust some of the timing and, of course, the servo settings to meet your needs.
What is needed now is an easier way to enter the text. Hopefully, we will be able to make it work just by entering the text in Say() or SayEZB(). By "we", I mean the community. If you have any thoughts on this, please post them. Thanks again, Richard and Luis.
Just an added note... every sentence you use in $sent must end in a "." (period) or you get an error...
You can remove the "$sent" line by taking the "|" out:
And no period at the end of a sentence is needed if this line is changed:
It works; however, it does show this error: 1/19/2015 4:52 PM - Error on line 13: Error splitting 'hello i am in move the robot.' with SplitChar: '' to field #1. Index was outside the bounds of the array.
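For what it's worth, that class of error usually means a split was asked for a field number beyond what the separator actually produced, which is what happens when a sentence contains no "|" at all. A rough Python analogy of the failure (split_field is a made-up stand-in, not ARC's actual implementation):

```python
def split_field(text: str, sep: str, index: int) -> str:
    """Mimic a Split(text, sep, index) call: return field #index
    (0-based here), or raise if that field does not exist."""
    parts = text.split(sep)
    if index >= len(parts):
        raise IndexError(
            f"Error splitting {text!r} with SplitChar {sep!r} "
            f"to field #{index}: index out of bounds"
        )
    return parts[index]

# A sentence with no '|' separators yields only one field, so asking
# for field #1 fails, much like the error the script reports:
sentence = "hello i am in move the robot."
print(split_field(sentence, "|", 0))  # field #0 is the whole sentence
# split_field(sentence, "|", 1)       # would raise IndexError
```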
Hello,
After doing much testing, I have an update to the text speech movement.
It is basically doing the same thing, with one major difference: when things get done.
As it is now, each letter is evaluated in real time as the speech is being played aloud.
The problem is that the more things ARC has going on (i.e. listening for speech, reading sensors, sending and receiving UART strings), the more out of whack the delays get, and the more adjusting they require.
This method will analyze the string to be said and turn it into a string of commands to open or close the mouth.
Last, it will say the words and execute the pre-compiled command string.
This will be more stable and should almost always time the same.
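The pre-compile idea can be sketched in Python as an illustration only; the vowel-open/consonant-close rule below is my assumption for the sketch, not necessarily the rule the actual EZ-Script uses. The point is that all per-letter analysis happens once, up front, so playback timing stays consistent no matter how busy ARC is:

```python
def compile_mouth_commands(sentence: str) -> str:
    """Pre-compile a sentence into a mouth command string:
    'O' = open the mouth (vowels), 'C' = close it (other letters),
    ' ' = pause between words. Built once, before speaking, so the
    playback loop has no per-letter work left to do."""
    commands = []
    for ch in sentence.lower():
        if ch in "aeiou":
            commands.append("O")
        elif ch.isalpha():
            commands.append("C")
        elif ch == " ":
            commands.append(" ")
        # punctuation contributes no mouth movement
    return "".join(commands)

print(compile_mouth_commands("Hello world"))  # -> "COCCO COCCC"
```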
Notes for modifying scripts.
New Hello World
Change Line 1:
$sent = "This is a test of something for the robot to say."
Set the text to anything you want the computer to say and lip sync.
New Text Speech Engine
Set up the Mouth Server
Line 47 : Servo(D0,90) : Set D0 to the mouth servo you will use and 90 to the closed position of the mouth servo.
Line 55 : Servo(D0,60) : Set D0 to the mouth servo you will use and 60 to the open position of the mouth servo.
Line 61 : Servo(D0,90) : Set D0 to the mouth servo you will use and 90 to the closed position of the mouth servo.
Set up the Timing
Line 56 : Sleep(85) : time to leave the servo in the mouth open position before going to the next character.
Line 61 : Sleep(85) : time to leave the servo in the mouth closed position before going to the next character.
Line 66 : Sleep(90) : time to leave the servo in the mouth closed position before the next word.
Speech output
Line 44 : Say($sent) : to send the speech to the EZ-B v4 instead, use SayEZB($sent).
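Putting those settings together, the playback stage amounts to something like this Python sketch. The positions (90 closed, 60 open) and delays (85/85/90 ms) are the defaults listed above; move_servo is a hypothetical stand-in for ARC's Servo() command, and here it just logs the move:

```python
import time

MOUTH_CLOSED = 90  # closed position of the mouth servo
MOUTH_OPEN = 60    # open position of the mouth servo
OPEN_MS = 85       # hold time in the open position per character
CLOSE_MS = 85      # hold time in the closed position per character
WORD_MS = 90       # pause in the closed position between words

def move_servo(port: str, position: int) -> None:
    """Stand-in for ARC's Servo(Dx, pos); logs instead of moving."""
    print(f"Servo({port},{position})")

def play(commands: str, port: str = "D0") -> None:
    """Play a pre-compiled command string:
    'O' = open, 'C' = close, ' ' = word boundary."""
    for cmd in commands:
        if cmd == "O":
            move_servo(port, MOUTH_OPEN)
            time.sleep(OPEN_MS / 1000)
        elif cmd == "C":
            move_servo(port, MOUTH_CLOSED)
            time.sleep(CLOSE_MS / 1000)
        else:  # word boundary: hold closed a little longer
            move_servo(port, MOUTH_CLOSED)
            time.sleep(WORD_MS / 1000)

play("COCCO COCCC")  # the pre-compiled string for "Hello world"
```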
Code for New Hello World
Code for New Text Speech Engine
I hope this helps out.
Luis Vazquez
Email Me
Thanks Luis.... It works great.... I was thinking last night about removing the "|" and just using a space to identify word separation... This would eliminate the need for 2 $sent ($sent2) strings... But you and Bob beat me to it... LOL
Thanks for the contribution on this....
Cheers Richard
I've uploaded this project to the cloud. Look for the project named:
Text to Speech Engine V2
I just scanned over it; am I right in thinking the text needs to be written twice, once with the "|" for splitting it up by word?
If that's the case, can it work dynamically, with RSS feeds etc.? Can't it be split on the space between words?
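Splitting on the space between words is straightforward, so the second copy could in principle be derived automatically from the first. A small Python illustration of the idea (to_pipe_delimited is a made-up helper, not part of the actual project):

```python
def to_pipe_delimited(sentence: str) -> str:
    """Derive the '|'-separated word string from the plain sentence,
    so only one copy of the text would need to be entered."""
    return "|".join(sentence.split())

sent = "This is a test of something for the robot to say."
sent2 = to_pipe_delimited(sent)
print(sent2)  # This|is|a|test|of|something|for|the|robot|to|say.
```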
Also, would it not pay to use the Auto Position control for accurate mouth (and other facial features) movements depending on words used?
Like I said, only quickly scanned this so I apologise if the above has been mentioned before.