Here is a little project I have been working on with fellow forum member Bob Houston.... It was originally written by another forum member, Luis Vazquez.... who really did a good job on this... I have tweaked it a little to try and make it more adaptable from project to project.... It works like the Sound servo control, but uses text to drive the servo instead of sound....
I have Luis' permission to post so here it is...
I have posted it to the cloud as well...
Thanks Richard, right on time to test InMoov's head
Richard R and Luis Vazquez have done a great job writing these scripts to control a servo from text contained within a script. To get the text to control the servo you must input your text as the $sent and $sent2 variables, followed by ControlCommand("Text Speech Engine", ScriptStart)
as in this example;
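Something along these lines is the idea (a sketch of the format only - the sentence is whatever you want said, and $sent2 is the same text with "|" between the words so the script can split it up by word):

    $sent = "Hello I am your robot."
    $sent2 = "Hello|I|am|your|robot."
    ControlCommand("Text Speech Engine", ScriptStart)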
You also need to have the Text Speech Engine script itself in your project so the Control Command above can start it (it is included in the project I posted to the cloud)...
These scripts work great! You may have to adjust some of the timing and, of course, the servo settings to meet your needs.
What is needed now is an easier way to enter the text. Hopefully, we will be able to make it work just by entering the text in Say() or SayEZB(). By "we", I mean the community. If you have any thoughts on this, please post them. Thanks again Richard and Luis.
Just an added note... every sentence you use in $sent must end in a "." (period) or you get an error...
You can remove the "$sent" line by taking the "|" out;
And no period at the end of a sentence is needed if this line is changed;
It works, however it does show this error: 1/19/2015 4:52 PM - Error on line 13: Error splitting 'hello i am in move the robot.' with SplitChar: '' to field #1. Index was outside the bounds of the array.
Hello,
After doing much testing I have an update to the Text speech movement.
It is basically doing the same thing with a major difference of when things get done.
As it is now, each letter is evaluated in real time as the speech is being played aloud.
The problem is, the more things ARC has going on (i.e. listening for speech, reading sensors, sending and receiving UART strings), the more out of whack the delays get, and they require adjusting.
This method will analyze the string to be said and turn it into a string of commands to open or close the mouth.
Last, it will say the words and execute the pre-compiled command string.
This will be more stable and should almost always keep the same timing.
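As a rough illustration of the idea (this is not Luis' actual encoding, just an example of the two-step approach): a string like "Hi there." might first be compiled into a command string such as "OC OC", where each "O" means open the mouth, each "C" means close it, and a space means the longer pause between words; only after that is the speech started and the pre-built sequence played back on the servo.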
Notes for modifying scripts.
New Hello World
change Line 1:
$sent = "This is a test of something for the robot to say."
Set the text to anything you want the computer to say and lip sync.
New Text Speech Engine
Set up the Mouth Servo
Line 47 : Servo(D0,90) : Set D0 to the mouth servo you will use and 90 to the closed position of the mouth servo
Line 55 : Servo(D0,60) : Set D0 to the mouth servo you will use and 60 to the open position of the mouth servo
Line 61 : Servo(D0,90) : Set D0 to the mouth servo you will use and 90 to the closed position of the mouth servo
Set up the Timing
Line 56 : Sleep(85) : time to leave the servo in the mouth open position before going to the next char
Line 61 : Sleep(85) : time to leave the servo in the mouth closed position before going to the next char
Line 66 : Sleep(90) : time to leave the servo in the mouth closed position before the next word
Speech output
Line 44 : say($sent) : to send the audio to the EZ-B v4, use sayezb($sent)
Code for New Hello World
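A minimal sketch of what this script boils down to (Luis' full version is in the cloud project; the ControlCommand line assumes the engine script is named "Text Speech Engine", as in the earlier example):

    $sent = "This is a test of something for the robot to say."
    ControlCommand("Text Speech Engine", ScriptStart)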
Code for New Text Speech Engine
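The full script is in the cloud project; what follows is only a compact sketch of the two-phase approach described above, with assumptions spelled out: mouth servo on D0 with 60 = open and 90 = closed, the delays from the notes above, one open/close per word standing in for Luis' per-character analysis, say() assumed to return while the audio plays (swap in sayezb() for the EZ-B v4 speaker), and the SplitCount()/SubString() usage and indexing assumed, so check those against your ARC version:

    # Phase 1 - pre-compile: build the command string before any audio starts
    # (one open/close per word here; the real script works per character)
    $cmd = ""
    $wordCount = SplitCount($sent, " ")
    $i = 0
    RepeatUntil($i >= $wordCount)
      $cmd = $cmd + "OC "
      $i = $i + 1
    EndRepeatUntil

    # Phase 2 - start the speech, then play the pre-built sequence back
    Say($sent)
    $len = Length($cmd)
    $p = 0
    RepeatUntil($p >= $len)
      $c = SubString($cmd, $p, 1)
      If ($c = "O")
        Servo(D0, 60)   # mouth open
        Sleep(85)
      ElseIf ($c = "C")
        Servo(D0, 90)   # mouth closed
        Sleep(85)
      Else
        Sleep(90)       # space = pause between words
      EndIf
      $p = $p + 1
    EndRepeatUntil
    Servo(D0, 90)       # make sure the mouth ends closed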
I hope this helps out.
Luis Vazquez
Email Me
Thanks Luis.... It works great.... I was thinking last night about removing the "|" and just using a space to identify word separation... This would eliminate the need for 2 strings ($sent and $sent2)... But you and Bob beat me to it... LOL
Thanks for the contribution on this....
Cheers Richard
I've uploaded this project to the cloud
look for project named
Text to Speech Engine V2
I just scanned over it, am I right in thinking the text needs to be written twice, once with the | for splitting it up by word?
If that's the case, can it work dynamically, with RSS feeds etc.? Can't it be split from the space between words?
Also, would it not pay to use the Auto Position control for accurate mouth (and other facial features) movements depending on words used?
Like I said, only quickly scanned this so I apologise if the above has been mentioned before.
@Rich... not anymore, Luis modified it... the only stipulation is a period has to end each sentence or phrase... You only need two lines...
Of course you need to adjust the sleep and servo commands in the script to work with your particular project...
Presumably the period is picked up somewhere in the code (I've not read the code yet) and that tells it that it's the end of the sentence?
If that's the case you could get the string length and use that to find the end of the sentence etc. I'm just thinking of problems when exclamation marks and question marks are used. Or no period at all.
@Rich... I am sure Luis will (hopefully) keep tweaking the code.... You are right, the code is looking for a period to denote the end of a sentence... So it probably will have to be tweaked for the reasons you mentioned... Also, I noticed commas are not picked up as a slight pause...
Given what Luis has written here, the code as it stands... dare I say it... works better than sound servo...
Define better
SoundServo is volume driven and works fine for that. But it doesn't move a mouth as many desire. This has been something others have contacted me about in the past but I've had no time to spend on the script for it (so glad someone else has stepped up)
I am familiar with how to adjust the sound servo control to suit my needs... And it does work quite well.... The main difference that I see with Luis' code is it seems to mimic speech better... This of course is just my opinion and is based solely on my InMoov project and nothing else... Comparing the same sentence "side by side", if you will, the text to speech engine to me seems more fluid and dynamic when compared to sound servo...
On a side note, my girlfriend (whose entire extent of robotic knowledge is based on my blabbering to her constantly about my robot projects), without prompting ahead of time, picked Luis' code as the more realistic mouth movement....
Sound servo of course is much easier to use and integrate into projects... For that reason most people will still prefer to use it. Not to mention, it works very well....
Actually, in the new revision you do not need the ending period.
I also ran a version of the script that handles short vowel sounds with a half-open option; this would make it even smoother. I opted to upload the one using 2 positions only, because that's what Bob was needing.
I am working on a version that uses phonetics written in C++ , but I will port it over to ez script when complete and working.
I like the sound servo OK, but it moves on any sound. This script is more for text to speech, rather than reacting to any sound played to the card.
Also, I would like to know if you had to play with any of the timings in the script, or did it work out of the box? And if you did change things, what did you have to change? I'm going to work on making these items variables at some point.
Luis A. Vazquez
try this hippiegeekbook@gmail.com
One huge difference though is that sound servo will react to any sound, not just speech, so I can see a lot of value in this scripted method.
Alan
Hey Luis, I would be very interested in the version you are porting from C++... Your second version works very well indeed.... Awesome work dude....
I couldn't get your email link to work....
OK, here is my email. Anyone in the community is welcome to contact me as long as it concerns robotic code and hardware. (thanks)
hippiegeekbook@gmail.com
Also, to clear up a question from above.
The "." is not needed anymore, as I am using the length of the string to read to the end of the line. I have some code in there to check whether the last letter left the mouth open and, if so, close it.
The next time I get to work on the code, I will test the timing for when a "." is used in the string, for instance two sentences in one string separated by a "." followed by a space; maybe the timing between sentences should be a bit longer than between words. I will also check for things like a "," that may generate a pause in the speech, and have a timing for that as well.
Maybe you guys and gals can help. I can use variables for setting the timings at the start of the script; what I can't seem to do is set a variable that represents a servo pin. Anyone out there know how to do that?
That's a good idea Luis... maybe other variables such as for sleep delays as well?... in case the user wants to increase or decrease the speech rate....
Check your email when you have a chance...
@Luis, I've got this new code all set up now - it works great! Much more user friendly for entering the "text to speech". This is just what we needed. Playing around with the script, I have found you can put other punctuation in the sentence, like ",", "!" and "?". Thanks again for your work on this project.
You're most welcome @bhouston .
I love your videos, keep it up!
Variable for the servo pin? As in the digital port it's on, right?
If so, you can set a variable to the port and then use that variable anywhere a servo command expects a port;
Unless DJ has changed it (again), that should work. However, it's not really good practice, and DJ had removed the ability to do that in a previous update, although it was added back in.
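Something like this is what's being described (a sketch only; $mouthPort is just an illustrative name, and whether a variable is accepted where a port is expected depends on the ARC version, as noted above):

    $mouthPort = "D0"          # the port stored in a variable (quoted here as a string)
    Servo($mouthPort, 90)      # pass the variable wherever a port is expected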
Nice work on this. It just saved me $59 on a board I was about to purchase for this purpose.
Thanks for sharing!
:) Any videos of this script in action?
I only tested with a servo hooked up, not connected to a mouth yet. It moved well enough for me to get an idea that it worked by watching the servo move (it was attached to an arm). As soon as I get the other parts, I would be happy to make a video, but I am sure that others will be able to get to it much more quickly than I.
Thanks -- from reading the script, it looks like it will be very useful. I was thinking of tying this to the arms too, for arm movement when talking. That will give the bot some more animation.
Can the Sound servo control multiple servos concurrently?
Thanks, Aliusa
@aliusa Just download the script, load it into ARC, connect a servo to your V4 and try it out for yourself...
You can drive more than one servo port with the script too...
A cool effect would be for the sound servo to control arms and this script to control mouth. This would let them be different from each other, allowing the arms to be based on volume.
and yes, the soundservo can use multiple servos.
While testing this with a 2800 character string (a traffic feed from London), it didn't produce good results. It also took about 3 minutes to process the text to figure out the mouth movements. It looks like the purchase of the board I was looking at is still on the books.
I am not bashing this at all as it is great for short phrases, but long text takes too long to parse and it doesn't work nearly as well.
d. cochran, That's good to know. I have not tried it with really long text. Could the time it takes to parse be related to the speed of the computer? I would think that has something to do with it. What is this "board" you are referring to and what does it do?
Yes, it could, but I am running the fastest possible config: i7, 32 GB RAM, SSD drives for storage that are in an array. It would be hard to find a faster computer than I have, so I don't think that is the issue. I had 86% free memory while running.
The board I am looking at is here.
If you feel like building it, here are the details. You can build one for about $20 USD.
Edit to add more info: string length 2559
CPU at 35%, memory at 21%, disk activity at 1%
Parsing the string and building the OWC string: started at 2:01:35, finished at 2:05:24
While not running this process, CPU at 30%
Here is the text that was being parsed. "M5 J4A M42One lane closed due to broken down vehicle on M5 Southbound between J4A M42 and J5 A38 Droitwich / Wychbold. M61 J8 A674 ChorleyQueueing traffic and Main carriageway closed due to multi-vehicle accident on M61 Northbound between J6 A6027 Horwich and J8 A674 Chorley. Royston - A10 A505A10 both ways closed due to accident, two vehicles involved between A505 and Royston Road Melbourn turn off. Earith - A1123 Hill Row Causeway Long DroveA1123 Hill Row Causeway both ways closed due to serious accident between Long Drove Earith and Church Lane Haddenham. M6 J10 A454 / B4464 Wolverhampton / WalsallOne lane closed and heavy traffic due to accident on M6 Northbound between J10 A454 / B4464 Wolverhampton / Walsall and J10A M54. Hollym - A1033 Hollym Road Tithe Barn LaneA1033 Hollym Road both ways closed due to serious accident between Tithe Barn Lane and South Carr Dales Road. M40 J3 A40 / A4094 High Wycombe East / LoudwaterQueueing traffic and lane blocked on exit slip road due to accident, motorcyclist involved on M40 Northbound at J3 A40 / A4094 High Wycombe East / Loudwater, congestion on M40 to J2 A355 Beaconsfield. Burford - A456 Forresters RoadA456 both ways closed due to serious accident near Forresters Road. M1 J15A A43 / A5123 Towcester / Northampton ServicesQueueing traffic due to earlier accident, around eight cars involved on M1 Southbound at J15A A43 / A5123 Towcester / Northampton Services, congestion on M1 to Watford Gap Services. All lanes have been re-opened. Kennford - A38 A380 Splatford SplitSlow traffic due to earlier accident on A38 Northbound near A380 Splatford Split. In the roadworks area. All lanes have been re-opened. Croydon - A212 Wellesley Road Station RoadA212 Wellesley Road Northbound closed, queueing traffic due to burst water main between Station Road and A222 St James's Road, congestion on A212 Park Lane to A232 Barclay Road. M20 J7 A249 Maidstone/Detling HillSlip road onto motorway closed due to queues of lorries on M20 coastbound at J7 A249 Maidstone / Detling Hill. M25 J6 A22 / B2235 GodstoneOne lane closed and very slow traffic due to barrier repairs on M25 anticlockwise between J6 A22 / B2235 Godstone and J5 M26 / A21 Sevenoaks. M20 J8 A20 Leeds Castle / Maidstone ServicesM20 coastbound closed, delays due to Operation Stack between J8 A20 Leeds Castle / Maidstone Services and J9 A20 Ashford. M25 J6 A22 / B2235 GodstoneOne lane closed and heavy traffic due to barrier repairs on M25 anticlockwise between J6 A22 / B2235 Godstone and J5 M26 / A21 Sevenoaks. "
Just a thought, I ran into an issue like this with the text-to-speech (audio) module I worked on back on the V3. I don't know how the script is handling the parsing, but if it parses everything and then sends the data out to the servos, you could have an issue there. When I was working on the text-to-speech (audio) module, the board I was using had a limit on the number of characters you could send in each packet. My workaround was to send each character singly as I parsed. That way it can be processing the text as it is sending it to the servos. Just a thought.
I updated the script in the Cloud and moved all Variables to the top of the file for easy tweaking.
The top of the file looks like this now.
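For reference, it is the kind of block shown below (the variable names here are illustrative, not necessarily the ones Luis used; the values are the ones from the notes earlier in the thread):

    $mouthOpen = 60      # servo position for mouth open
    $mouthClosed = 90    # servo position for mouth closed
    $openDelay = 85      # ms to hold the open position
    $closeDelay = 85     # ms to hold the closed position
    $wordDelay = 90      # ms pause between words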
@Luis.... Thanks man....
Cheers Richard
Here's a couple of ideas on how we can make this "Text to Speech" engine Command Control even better.
Is it possible to make it so that the command control doesn't have to be put in after every line of text? Perhaps,a way to put it once at the beginning of the script and then any text in that script would be "spoken".
Is there a way to get the sound to come out of the EZB rather than the computer? For example a "$SentEZB" command, kind of like "SayEZB".
Mouth servo is on D2
speech settings - medium rate of speaking
say($sent2) # or sayEZB($sent2)
Change this line from say to sayEZB and the sound will come out of the ezb.
In the new script
say($sent) would be changed to sayEZB($sent)
Man, this community is great! Thanks d.cochran, I knew it would be EZ to do! One down, one to go.
I posted this question awhile ago and didn't get a response. So I'll post it again just in case someone had a thought on it.
Is it possible to make it so that the command control doesn't have to be put in after every line of text? Perhaps, a way to put it once at the beginning of the script and then any text in that script would be "spoken".
Make a script that is called to handle the function of evaluating the text. This one does that.
In your other scripts, set the $sent = "whatever you want to say"
Then
Call this script.
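For example (assuming the engine script is named "Text Speech Engine", as in the earlier posts):

    $sent = "Whatever you want to say."
    ControlCommand("Text Speech Engine", ScriptStart)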
You could also use waitforchange on the $sent.... The draw back is you couldn't say the same phrase twice in a row.....
The ControlCommand() call would be changed to go through the Script Manager if you are using the Script Manager, which I would recommend.
[edit] The WaitForChange($Sent) would need to be added to the Text Speech Engine. As soon as it got done speaking, or maybe just before, you could set $Sent = "" so that the script has time to evaluate what it needs to before the variable is set to "" again.
This would allow you to just use
$sent = "What you want to say" instead of SayEZB("What you want to say").
You would need to put the Text Speech Engine in a loop and call it at the start of the project in an init or startup script.
Good point Rich and Richard.
@Richard Unless you have the $sent variable "reset" to something immediately after waitforchange
Ok... I figured a repeatuntil loop in the speech engine (to keep it running) could be used and a waitforchange right after the repeatuntil command...
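Roughly like this, as a sketch of the idea from the last few posts (the body of the engine is left out; clearing $sent after speaking is what lets the same phrase be said twice in a row):

    $sent = ""
    RepeatUntil(1 = 2)           # loop forever
      WaitForChange($sent)
      If ($sent != "")
        # ... the existing Text Speech Engine code goes here ...
        $sent = ""               # reset so the next assignment always triggers a change
      EndIf
    EndRepeatUntil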