Hi! I'm rather new to EZ-Builder, but a friend and I are trying to put together an animatronic head (from Robotics Squared) that will answer questions and look around, blink, move its mouth, etc.
Right now we're using a slightly modified script that @Rich posted, which reads the variable from the Sound servo control and opens the mouth to different levels.
The sound coming into the Sound servo control is being supplied by SayEZB commands in the Speech Recognition control, so that we can have it answer questions and phrases.
Our problem is that the values arriving in the Sound servo variable run well ahead of the actual speech coming out of the speakers. We're thinking we can fix this by piping raw audio out of one port and into another, reading the levels there as the robot is talking, and having the actual audio move the mouth.
Our problem with that solution is that we don't know how to pipe the audio out of one of the ports. It doesn't even have to be true audio, just approximate volume levels that track the actual speech fairly closely. Anyone know how to pipe synthesized speech out of one of the connectors?
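To make the mapping concrete, here's roughly the relationship we're after, sketched in Python rather than EZ-Script. The 0-255 reading range and the servo limits are placeholder numbers, and where the volume reading comes from is exactly the open question:

```python
# Not EZ-Script -- just a sketch of the mapping we have in mind.
# Assumes we can get a volume reading in 0..255 from whatever
# source we end up sampling (that's the part we're stuck on).

def volume_to_mouth(level, closed=60, open_max=120, threshold=10):
    """Map a 0-255 volume reading to a mouth-servo position (degrees)."""
    if level < threshold:          # treat very quiet readings as silence
        return closed
    span = open_max - closed
    return closed + min(level, 255) * span // 255
```

The threshold keeps the mouth from chattering on background noise; `closed`, `open_max`, and `threshold` would all need tuning on the real head.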
Any help would be greatly appreciated!
(P.S. I'll try to check this every evening, but we're only meeting once a week to work on the project.)
We're planning to change the SayEZB to a regular Say and split the audio output from the PC. We're thinking that if we route the audio signal from the PC into one of the inputs, we can just have a script polling that input and opening and closing the mouth whenever there's audio on that line, with the openness corresponding to the volume of the audio.
Not sure just how well it'll work, and we have to work out the polling intervals and servo speeds, but in theory it sounds pretty good to me.
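Here's the polling loop we have in mind, again sketched in Python rather than EZ-Script. `read_adc` and `set_servo` are hypothetical stand-ins for whatever the controller actually exposes, and `poll_hz` is the interval we'd have to tune:

```python
import time

def run_mouth(read_adc, set_servo, poll_hz=20, steps=None,
              closed=60, open_max=120, threshold=8):
    """Poll an ADC reading each tick and move the mouth servo to match.

    read_adc() and set_servo(pos) are placeholders for the real
    controller calls. steps=None loops forever; passing a number
    limits the loop, which is handy for trying the logic offline.
    """
    interval = 1.0 / poll_hz
    span = open_max - closed
    n = 0
    while steps is None or n < steps:
        level = min(read_adc(), 255)        # clamp to expected range
        pos = closed if level < threshold else closed + level * span // 255
        set_servo(pos)
        time.sleep(interval)
        n += 1
```

Whether 20 Hz polling is even achievable is exactly the ADC-speed question raised below, so the number is optimistic.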
Anyone have any thoughts on this?
I'm sure you're able to relax a little now that the Revolution is shipping.
One other option we're considering is inverting the output of an envelope follower, DC-offsetting the inverted signal so the android mouth sits in the right range, and running the whole mouth motion through an analog circuit instead of through ARC, saving the EZ-B for things like turning the head, blinking, etc.
So anyway - anyone know of a way to increase the read/write speeds of the ADC and digital ports?
Still hoping there's some way to speed things up on the ADC-in, so that it can sample a bit faster than what we've seen so far.
Thanks much, I'll check out the info and the Scary Terry!
Very cool! And can I just add to the schmabbillions of other people saying "thank you!" for this kit... I got a Dev kit about a month ago and just now have some spare time here and there to play with it. Very excited!
I like this one because it's $59.00 and it uses pots to adjust min/max servo movement and sensitivity. It can be tuned to work well with your application. It can also drive LEDs and other devices along with the servo.
There are mic and audio-in jacks. It allows you to process just the left or right audio channel, or both. It's a pretty cool board.