Steamwitz
Hi! I'm rather new to ARC, but a friend and I are trying to put together an animatronic head (from Robotics Squared) that will answer questions and look around, blink, move its mouth, etc.
Right now we're using a slightly modified script that @Rich posted to take the variable from the Sound servo control and open the mouth to different levels.
The sound coming into the Sound servo control is being supplied by SayEZB commands in the Speech Recognition control, so that we can have it answer questions and phrases.
Our problem is that the data written to the Sound servo variable runs well ahead of the actual speech coming out of the speakers. We're thinking we can fix this by piping raw audio out of one port and into another, reading the levels there as the robot is talking, and having the actual audio move the mouth.
Our problem with that solution is that we don't know how to pipe the audio out of one of the ports. It doesn't even have to be true audio, just approximate volume levels that track the actual speech fairly closely. Does anyone know how to pipe synthesized speech out of one of the connectors?
Any help would be greatly appreciated!
Andrew
(P.S. I'll try to check this every evening, but we're only meeting once a week to work on the project.)
If anyone is interested, we have a working theory on what we can do.
We're planning to change the SayEZB to a regular Say and split the audio output from the PC. We're thinking that if we route the audio signal from the PC into one of the inputs, we can have a script poll that input and open and close the mouth whenever there's audio on that line, with the openness corresponding to the volume of the audio.
Not sure just how well it'll work, and we have to work out the polling intervals and servo speeds, but in theory it sounds pretty good to me.
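To make the polling idea concrete, here's a rough sketch in Python (illustrative only, not EZ-Script; `read_adc` and `set_servo` below are hypothetical stand-ins for whatever I/O calls your controller actually exposes). The mapping function is the part that matters: quiet samples close the mouth, louder ones open it proportionally.

```python
def level_to_position(level, closed=10, open_max=90, full_scale=255, threshold=8):
    """Map a raw ADC reading (0..full_scale) to a mouth servo position.

    Readings below `threshold` are treated as silence and close the mouth;
    louder readings open it proportionally, up to `open_max`.
    All default values here are made-up examples; tune for your servo.
    """
    if level < threshold:
        return closed
    span = open_max - closed
    return closed + int(span * min(level, full_scale) / full_scale)

# Polling loop sketch (pseudocode -- read_adc/set_servo are hypothetical):
# while talking:
#     set_servo(MOUTH_PORT, level_to_position(read_adc(AUDIO_IN)))
#     time.sleep(0.05)  # 50 ms poll interval; tune against servo speed
```

The poll interval and the `closed`/`open_max` range would be exactly the tuning knobs mentioned above.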
Anyone have any thoughts on this?
Andrew
We are doing some work on the Sound Servo (EZB) control - we will have progress in a few days.
Awesome, thanks!
There's still some voice offset with the Sound Servo, so we're going with a hardware solution to get the raw output and feed it back in. We're trying to build an envelope follower circuit for the audio, so it'll be able to take pretty much any audio input and move the android mouth with it. We're using just the Envelope Follower part of this circuit
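For anyone unfamiliar with what an envelope follower does, here's a minimal software equivalent in Python (a sketch of the concept, not the referenced circuit): full-wave rectify the signal, then smooth it with a one-pole low-pass so the output traces the loudness contour rather than the waveform itself.

```python
def envelope(samples, alpha=0.1):
    """One-pole envelope follower: rectify, then smooth.

    `alpha` (0 < alpha <= 1) sets how fast the envelope tracks changes;
    in the hardware version this role is played by the diode + RC section.
    """
    env = 0.0
    out = []
    for s in samples:
        rectified = abs(s)          # full-wave rectification
        env += alpha * (rectified - env)  # exponential smoothing
        out.append(env)
    return out
```

The envelope output is what you'd feed to the ADC input, so the mouth follows volume instead of raw audio oscillations.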
I'd be VERY interested in any updates in sound servo DJ.
Nothing ever gets misplaced. It's all in the list of over 100 items in my To Do.
Hehe, ok! It's wonderful to see you back at the programming with nearly daily updates! Just like the old days.
I'm sure you are able to relax a little now that the Revolution is shipping.
Come on Will, you know DJ better than that. The guy never sleeps or rests. He probably wakes up in the morning with half written code in his head.
Absolutely. This is my favorite part of all things EZB: DJ's commitment to the products, being personally involved in his company on a level that's just unheard of.
Hi again! Is there any way that I could get the ADC ports to read input faster than 100ms? I've set up an envelope follower circuit so I'm able to get the mouth to move with the audio, and I set up a software audio delay program on my PC to sync the sound to the mouth (Audio Delay by Fountainware), but the ADC port seems to be missing most of the signal - it's updating the android mouth maybe twice a second, all told, which gives it a really jerky and unpredictable appearance. I'm thinking that if there is a way to read the ADC port and write that to the servo more fluidly, this would help the project along a lot.
One other option we're considering is inverting the envelope follower, DC-offsetting the inverted signal so the android mouth sits in the right range, and having the whole mouth motion go through an analog circuit instead of through ARC, saving the EZ-B for things like turning the head, blinking, etc.
So anyway - anyone know of a way to increase the read/write speeds of the ADC and digital ports?
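If the ADC polling rate can't be raised, one software stopgap is to interpolate between successive readings so the servo glides to each new target instead of jumping once per sample. A minimal Python sketch of the idea (illustrative, not EZ-Script; step counts are made-up examples):

```python
def interpolate(prev, target, steps):
    """Yield `steps` evenly spaced positions from `prev` toward `target`.

    With a 100 ms ADC interval, sending e.g. 4 intermediate positions
    25 ms apart smooths the motion without needing faster sampling.
    """
    for i in range(1, steps + 1):
        yield prev + (target - prev) * i / steps
```

This doesn't recover the audio detail lost between samples, but it trades the jerky twice-a-second snap for a continuous glide.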
Thanks!
I am working with another EZ-Robot member now on syncing servos to speech. We are using a different method that does not require the ADC. We are very early into this and will post our results when we have it worked out to the point we are ready to show it. Basically, it reads the text string, parses it for phonetics, and creates the timing for the mouth, then plays the speech synced to the mouth movement. Early tests have been promising.
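For a flavor of what text-driven timing looks like, here's a deliberately naive Python sketch (my own illustration, not the method described above): it counts vowel groups as stand-in syllables and spreads mouth open/close events over an estimated speaking duration. A real implementation would use a proper phoneme dictionary.

```python
import re

VOWELS = "aeiouy"

def mouth_timeline(text, wpm=150):
    """Return (open_at, close_at) second-pairs, one per estimated syllable.

    Assumes a speaking rate of `wpm` words per minute; vowel groups
    approximate syllables. Purely a sketch of the timing concept.
    """
    words = re.findall(r"[a-z']+", text.lower())
    syllables = sum(max(1, len(re.findall(f"[{VOWELS}]+", w))) for w in words)
    duration = len(words) / wpm * 60.0            # estimated seconds of speech
    per = duration / syllables if syllables else 0
    return [(round(i * per, 3), round(i * per + per / 2, 3))
            for i in range(syllables)]            # mouth open for half of each slot
```

The resulting timeline would be played back alongside the synthesized speech, opening and closing the mouth servo at each pair of timestamps.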
Awesome! I'd definitely be interested to hear what you do in your project and how it turns out. We're going down a slightly different path, hoping to be able to feed any audio at all into the input and have the android lip-sync to it. It seems the EZ-Robot's bus may be a little too slow for that translation, leaving our android's mouth output jumpy. We're also chasing down another option for the servo controller, based on a project here.
Still hoping there's some way to speed things up on the ADC-in, so that it can sample a bit faster than what we've seen so far.
Hi, so I'm a super noob to the EZR world. But... I saw this thread and thought maybe you could help. In my heart of hearts I'd like to have an animatronic head with somewhat phoneme-type mouth movements that happen in real time to audio input... in other words, I wanna speak into a mic and have the animatronic head move its mouth in sync with my voice. Thoughts? Is this even possible through the EZB? Thanks all.
With a few sentences, yes, it is possible. If you want to do a lot of text, I would look at the Scary Terry board for controlling the mouth and the EZ-B for everything else. The Scary Terry board is a low-level controller designed specifically for this purpose.
This is a good read on the topic.
https://synthiam.com/Community/Questions/7173
Updating this control for faster response for audio is actually on my list and is approaching quickly. Maybe a release or two away
d.cochran, thanks much, I'll check out the info and the Scary Terry!
DJ Sures, very cool! And can I just add to the schmabbillions of other people saying "thank you!" for this kit... I got a Dev kit about a month ago and just now have some spare time here and there to play with it. Very excited!
Thank you for the kind words. Looking forward to seeing your robot!
An alternative to the scary Terry board (I hadn't heard of that one, I'll have to check it out) is the Auto Talk board available here
Scary Terry board
I like this one because it's $59.00 and it uses pots to adjust min/max servo movement and sensitivity, so it can be tuned to work well with your application. It can also drive LEDs and other devices along with the servo.
There are mic and audio-in jacks, and it lets you process just the left or right audio channel, or both. It's a pretty cool board.