
Steamwitz

Hi! I'm rather new to ARC, but a friend and I are trying to put together an animatronic head (from Robotics Squared) that will answer questions and look around, blink, move its mouth, etc.
Right now we're using a slightly modified script that @Rich posted, which takes the variable from the Sound Servo control and opens the mouth to different levels.
The sound going into the Sound Servo control is supplied by SayEZB commands in the Speech Recognition control, so that we can have it respond to questions and phrases.
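For reference, our modified script is doing roughly this (paraphrasing from memory, so the variable name, servo port, and positions below are just placeholders, not the exact values from @Rich's original):

```
# Rough paraphrase of our mouth script - the variable name, servo port,
# and positions are placeholders, not the exact values from @Rich's post.
:mouthLoop
  $mouth = $soundLevel / 5    # $soundLevel stands in for the control's variable
  $mouth = $mouth + 60        # 60 is a guess at the closed-jaw servo position
  Servo(D12, $mouth)
  Sleep(50)
Goto(mouthLoop)
```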
Our problem is that the values arriving in the Sound Servo variable run well ahead of the actual speech coming out of the speakers. We're thinking we can fix this by piping raw audio out of one port and into another, reading the levels there while the robot is talking, and having the actual audio move the mouth.
The trouble with that solution is that we don't know how to pipe the audio out of one of the ports. It doesn't even have to be true audio, just approximate volume levels that track the actual speech fairly closely. Does anyone know how to pipe synthesized speech out of one of the connectors?
Any help would be greatly appreciated!
Andrew
(P.S. I'll try to check this every evening, but we're only meeting once a week to work on the project.)
If anyone is interested, we have a working theory on what we can do.
We're planning to change the SayEZB to a regular Say and split the audio output from the PC. We're thinking that if we route the audio signal from the PC into one of the inputs, we can just have a script polling that input and opening and closing the mouth whenever there's audio on that line, with the openness corresponding to the volume of the audio.
Not sure just how well it'll work, and we have to work out the polling intervals and servo speeds, but in theory it sounds pretty good to me.
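To make it concrete, the loop I'm picturing is something like this (just a sketch - the ADC port, servo port, positions, and polling interval are guesses we'd have to tune, and I'm assuming GetADC() gives roughly 0-255):

```
# Sketch of the polling idea - adc0 would be wired to the PC's split audio out.
$closed = 60                  # jaw servo position with the mouth shut (guess)
$range  = 30                  # extra travel at full volume (guess)

:talkLoop
  $level = GetADC(adc0)       # assumed 0-255 reading of the audio line
  $mouth = $level * $range
  $mouth = $mouth / 255
  $mouth = $mouth + $closed
  Servo(D12, $mouth)
  Sleep(50)                   # polling interval - needs tuning
Goto(talkLoop)
```

The Sleep() value and the servo speed would set how snappy the jaw looks, which is the part we'd have to experiment with.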
Anyone have any thoughts on this?
Andrew
We are doing some work on the Sound Servo (EZB) control - we will have progress in a few days.
Awesome, thanks!
There's still some offset between the voice and the Sound Servo, so we're going with a hardware solution to get the raw output and feed it back in. We're trying to build an envelope follower circuit for the audio, so it'll be able to take pretty much any audio input and move the android's mouth with it. We're using just the Envelope Follower part of this circuit.
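Once the follower is in place, the script side should get simpler, since the ADC will be seeing a smooth level instead of raw audio. Something along these lines is what we're planning (the ports, noise floor, and positions are guesses again, and I'm still assuming GetADC() reads roughly 0-255):

```
# Sketch for reading the envelope follower's output on adc0 - all numbers
# here are guesses that would need tuning on the real head.
$floor  = 15                  # readings below this count as silence
$closed = 60                  # jaw position when quiet
$range  = 30                  # extra travel at full volume

:envLoop
  $level = GetADC(adc0)
  IF ($level < $floor)
    $mouth = $closed          # below the noise floor, keep the mouth shut
  ELSE
    $mouth = $level - $floor
    $mouth = $mouth * $range
    $mouth = $mouth / 240     # 240 = 255 minus the floor, the usable span
    $mouth = $mouth + $closed
  ENDIF
  Servo(D12, $mouth)
  Sleep(30)
Goto(envLoop)
```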
I'd be VERY interested in any updates on the Sound Servo, DJ.
Nothing ever gets misplaced.
It's all on my To Do list of over 100 items.
Hehe, ok! It's wonderful to see you back at the programming with nearly daily updates! Just like the old days.
I'm sure you are able to relax a little now that the Revolution is shipping.
Come on Will, you know DJ better than that. The guy never sleeps or rests. He probably wakes up in the morning with half-written code in his head.