Unknown Country
Asked

Two Robot Heads Talking To Each Other

I am trying to get two robot heads to talk to each other.  They are both clients on my Wi-Fi network, and I am using one computer and one speaker output.

My major problem is that I am using Sound Servo (PC Speaker) to move the robot heads' jaws, and it only works for one of the two heads (e.g., port 0.D2).  I want Sound Servo to move robot A's jaw servo when robot A is talking (port 0.D2), and to move robot B's jaw servo when robot B is talking (e.g., port 1.D2).

How do I do this?  I don't want to use two computers.  With two computers, the speaker out of computer A would have to feed the mic in of computer B, and vice versa, and I would use speech recognition.  With a single computer, however, I can sequence the "robot conversation" mp3 files in the soundboard via scripting with delays.  The problem is that I can't get only robot A's jaw to move while A is speaking and then only robot B's jaw to move while B is speaking.
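For what it's worth, the sequencing half of the single-computer approach can be one short ARC JavaScript routine that fires soundboard tracks with delays between them.  This is only a minimal sketch: the window name "Soundboard v4" and the "Track_x" command strings are placeholders that would need to be copied from the actual soundboard skill's Cheat Sheet in the project.

// Sketch of sequencing the "robot conversation" from a single script.
// "Soundboard v4" and the "Track_x" commands are placeholders; use the real
// window name and ControlCommand strings from the soundboard skill's Cheat Sheet.

ControlCommand("Soundboard v4", "Track_0");   // robot A's mp3
sleep(4000);                                  // roughly the length of A's clip, in ms

ControlCommand("Soundboard v4", "Track_1");   // robot B's reply
sleep(5000);                                  // roughly the length of B's clip

// ...keep alternating tracks and delays for the rest of the conversation.

The delays are just estimates of each clip's length; the remaining problem described above (keeping the other head's jaw still) is a separate step.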


Related Hardware EZ-B IoTiny
Related Control Sound Servo (PC Speaker)


Unknown Country
#9  

Thanks DJ.

I am aware of how Sound Servo works.  What I discovered is that both robot jaws move at the same time, even though they are in different instances of ARC.  So the speech recognition trigger expressions must be listed identically in both ARC instances, even if a particular expression doesn't apply to one of the robots.  When the speech recognition phrase (from me, via mic) is heard by both instances of ARC, one ARC instance will have to be scripted to pause its Sound Servo, while the other ARC instance lets its Sound Servo automatically mimic what is coming out of the PC speakers while that robot's mp3 response is playing.
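One way to script that pause, sketched in ARC JavaScript and attached to the speech recognition phrase inside robot B's instance (the instance that should stay quiet).  The "PauseOn"/"PauseOff" command strings are assumptions, not confirmed names; the actual ControlCommand strings for Sound Servo (PC Speaker) should be taken from that skill's Cheat Sheet.

// Runs in robot B's ARC instance when a phrase that belongs to robot A is heard.
// Assumption: the Sound Servo (PC Speaker) skill accepts pause/unpause
// ControlCommands; "PauseOn"/"PauseOff" are placeholders from the Cheat Sheet.

ControlCommand("Sound Servo (PC Speaker)", "PauseOn");    // freeze robot B's jaw
sleep(6000);                                              // roughly the length of robot A's mp3
ControlCommand("Sound Servo (PC Speaker)", "PauseOff");   // let B's jaw track the audio again

The mirror-image script (pausing robot A's Sound Servo) would live in the other ARC instance.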

The real problem arises when the end of, say, robot A's mp3 file is supposed to trigger a response from robot B.  Robot B's speech recognition will have to be listening to the computer's speaker output.  I am now testing whether a second mic near the speakers, wired in parallel with the original mic (me), will do the trick.  Otherwise I will have to loop the audio out back to the mic in, which may require an external coupling circuit (a capacitor and a resistor).

Besides using a second mic in front of the computer's speaker, does anybody have a simple way to get a computer to listen to itself (speaker out goes to mic in)?  All I could find is this.

FYI, for speech recognition in ARC, I have found that there must be a pause before the last words you want recognized.  For example, if the mp3 file (or spoken phrase) is "But I'm not happy" and the speech recognition skill is listening for "happy", the recording has to be "But I'm not (pause) happy".  Since many of my mp3 robot responses are generated as synthetic voices with an AI app called Eleven Labs, I will probably have to add the pause using Audacity.

This is more complicated than I thought it would be!

Again, thanks.  If I get this to work, I will upload a video to YouTube.

PRO
Synthiam
#11  

Ah, I was just on a conference call talking about something entirely unrelated, and my A.D.D. kicked in, and I think I solved the problem. I'll have to build a prototype robot skill first, but I think I have a solution to using third-party voices. We're finalizing the OpenAI Whisper robot skill this week, so I can probably sneak in a few tests next week.

PRO
Belgium
#12  

hi all

You can easily stop a script by using the stop command.  If one head uses D0 and D1 and the other, for example, D22 and D23, every time you make a script there is automatically a stop script too.

User-inserted image
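Putting that suggestion into script form, here is a rough ARC JavaScript sketch of one head's talking routine stopping the other's first.  The window names and the "ScriptStop"/"ScriptStart" command strings are placeholders, not confirmed names; the real ones come from each Script skill's Cheat Sheet in the project.

// Before head A (D0/D1) talks, make sure head B's (D22/D23) script is stopped.
// "Head B Talk", "Head A Talk", "ScriptStop" and "ScriptStart" are placeholders;
// copy the actual window names and ControlCommand strings from the Cheat Sheets.

ControlCommand("Head B Talk", "ScriptStop");    // halt whatever head B is doing
ControlCommand("Head A Talk", "ScriptStart");   // run head A's jaw/talk routine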