
Ellis (USA) asked:
I have been very interested in having my robot look in the direction of a person speaking.
In the picture above you will find a far-field ReSpeaker array mounted on a Raspberry Pi. The system outputs the location of a sound source, so the robot head can follow the person speaking. Since the EZ-Robot controller can now work with the Raspberry Pi, I thought this would provide state-of-the-art sound localization, although it would require loading all the software to accomplish it. My questions: 1. Can anyone explain to a novice how hard it would be to install and edit the required software to make this work? 2. Will this work with the EZ-Robot controller? I believe it will. The equipment costs around $65.00.
It's kind of funny because they seem to use the same hotword detection for their microphone array...
The product is interesting if you are building an Alexa clone. To work as expected it needs to lie flat (horizontal), so for a robot you would need to mount it on top of the head.
Another mic array, the ReSpeaker 4-Mic Linear Array Kit for Raspberry Pi (https://www.seeedstudio.com/ReSpeaker-4-Mic-Linear-Array-Kit-for-Raspberry-Pi-p-3066.html), can be attached vertically to the robot.
Sound localization is not a new thing; Tony Ellis did it before: https://synthiam.com/Robot/Introducing-Altair-Ez-2-Robot-563
There are some Linux open-source scripts for microphone arrays, and there is a ROS solution: https://github.com/uts-magic-lab/hark_sound_localization
You can also write some Python scripts and integrate them with ARC; a rough sketch is below.
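As a minimal sketch only: the script below estimates the direction of a sound with GCC-PHAT on one pair of mics from the array and then hands the angle off to ARC so a head servo can follow it. The audio device settings, channel count, mic pair, mic spacing, and the ARC endpoint URL are all assumptions you would need to adapt to your own setup (GCC-PHAT is just a common approach here, not necessarily what the Seeed or HARK software uses internally).

```python
# Sketch: direction-of-arrival from one mic pair, forwarded to ARC.
# Device index, CHANNELS, MIC_PAIR, MIC_DISTANCE and ARC_URL are ASSUMPTIONS.
import math
import numpy as np
import pyaudio
import requests

RATE = 16000            # sample rate (assumed)
CHANNELS = 4            # assumed: one channel per mic on the array
CHUNK = 4096            # samples per read
MIC_PAIR = (0, 3)       # assumed: outermost mics of the linear array
MIC_DISTANCE = 0.08     # metres between that pair -- check your hardware
SOUND_SPEED = 343.0     # m/s
ARC_URL = "http://192.168.1.50:8080/set_angle"  # hypothetical ARC HTTP endpoint


def gcc_phat(sig, ref, fs):
    """Time delay between two signals via GCC-PHAT cross-correlation."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    cc = np.fft.irfft(R / (np.abs(R) + 1e-15), n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(fs)


def delay_to_angle(tau):
    """Convert the inter-mic delay to an angle (degrees) from broadside."""
    ratio = max(-1.0, min(1.0, tau * SOUND_SPEED / MIC_DISTANCE))
    return math.degrees(math.asin(ratio))


def main():
    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=CHANNELS, rate=RATE,
                     input=True, frames_per_buffer=CHUNK)
    try:
        while True:
            raw = stream.read(CHUNK, exception_on_overflow=False)
            frames = np.frombuffer(raw, dtype=np.int16).reshape(-1, CHANNELS)
            tau = gcc_phat(frames[:, MIC_PAIR[0]].astype(np.float32),
                           frames[:, MIC_PAIR[1]].astype(np.float32), RATE)
            angle = delay_to_angle(tau)
            # Forward the angle so ARC can turn the head servo.
            # Replace ARC_URL with whatever endpoint/script interface you
            # actually expose on the ARC side.
            try:
                requests.get(ARC_URL, params={"angle": round(angle, 1)},
                             timeout=0.5)
            except requests.RequestException:
                pass  # ignore transient network errors
    finally:
        stream.stop_stream()
        stream.close()
        pa.terminate()


if __name__ == "__main__":
    main()
```

On the ARC side you would map the received angle to a servo position in whatever script or HTTP skill you use to expose that endpoint.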
Installing, managing and maintaining these products requires some Linux and Raspberry Pi knowledge.
Thanks ptp. I am going to review this information to see what I need to do.