Hey Ez-B community! Newbie question here. I want to build a talking robot that will deliver speeches. The idea is to use my own voice and have the mouth open and close while I speak, like a ventriloquist. The straightforward approach would be to control the mouth manually from the computer or a joystick and try to make the lip-sync look good. But is there a more elegant solution? For instance, could the mouth move only when sound is coming into a mic plugged into the computer? Are these capabilities built into the SDK? Can somebody point me in the right direction? Thanks!
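In case it helps explain what I'm imagining, here's a rough sketch of the logic in plain Python. This is just the idea, not real SDK code: the threshold and servo angles are made-up numbers, and a real version would read frames from the mic and send the angle to the servo.

```python
import math

THRESHOLD = 0.05     # made-up RMS loudness cutoff; would need tuning by ear
MOUTH_OPEN = 90      # hypothetical servo angle for "mouth open"
MOUTH_CLOSED = 10    # hypothetical servo angle for "mouth closed"

def rms(frame):
    """Root-mean-square loudness of one audio frame (samples in -1.0..1.0)."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def mouth_angle(frame, threshold=THRESHOLD):
    """Open the mouth whenever the current mic frame is louder than the cutoff."""
    return MOUTH_OPEN if rms(frame) > threshold else MOUTH_CLOSED

# Simulated mic frames: silence, then a loud burst of speech.
silence = [0.0] * 256
speech = [0.4 * math.sin(2 * math.pi * 220 * n / 8000) for n in range(256)]

print(mouth_angle(silence))  # → 10 (closed)
print(mouth_angle(speech))   # → 90 (open)
```

So the loop would just be: grab a short audio frame, compute its loudness, and snap the servo open or closed. Is there something like this already built in, or would I wire it up myself?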