dbeard
I have a question about the ARC robot skill "Bing Speech Recognition".
I am using the Bing Speech Recognition skill with wake-word enabled. The wake word works correctly and the skill plays the wake-word ding. After that, it starts listening and normally sends the recognized phrase to my script via the $BingSpeech variable.
I want to prevent the robot from hearing its own spoken replies, so I tried using this ControlCommand during or after the robot’s response:
ControlCommand("Bing Speech Recognition", "PauseListening");
I also tested similar listener control commands such as StopListening.
After PauseListening or StopListening is called from a script, the Bing Speech Recognition skill no longer works correctly with the wake word. The wake word still appears to trigger (the ding plays), but the skill gets stuck at "sending data" and never completes recognition or runs the recognized-speech script.
Manual operation still works. If I manually press Start Recording, speak a phrase, and then press Stop Recording, recognition works and my script runs correctly. The issue only appears after using the listener control commands from a script.
For troubleshooting, I replaced my recognized-speech script with a simple print of $BingSpeech, and that works correctly. I also removed the PauseListening / StopListening control commands from my script, and wake-word recognition works again.
So it appears that calling PauseListening or StopListening from a script may leave the Bing Speech Recognition wake-word flow in a bad state.
My question: Is PauseListening intended to be safe to call from a script while using wake-word mode? If so, is there a recommended way to pause Bing Speech Recognition during the robot’s speech and then safely resume it afterward?
Environment:
- ARC on Windows (PC-connected)
- Robot Skill: Bing Speech Recognition
- Wake-word enabled
- Recognized text variable: $BingSpeech
- Script invoked from the Bing Speech Recognition recognized-speech script

Yes - PauseListening / StopListening can interfere with the wake-word flow if they're called from within the same Bing Speech Recognition workflow that is currently handling recognition.

What's likely happening
With wake-word enabled, Bing Speech Recognition is managing a state machine:
1. The wake word is detected and the ding plays
2. Audio is recorded
3. The audio is sent to the recognition service ("sending data")
4. The recognized-speech script runs with $BingSpeech set

If your recognized-speech script calls PauseListening or StopListening while that session is still active, the skill can end up in a partially paused state and the next wake-word cycle may get stuck at "sending data".

Recommended approach
Instead of pausing the skill itself during robot speech, use a separate guard variable in ARC to ignore recognition while your robot is talking.
For example:
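A minimal sketch of creating such a guard variable, assuming an illustrative flag name $RobotTalking (my choice, not something the skill defines):

```javascript
// Stubs for ARC's built-in global variable store so the snippet runs
// on its own; in an actual ARC script, setVar()/getVar() already exist.
var _store = {};
function setVar(name, value) { _store[name] = value; }
function getVar(name) { return _store[name]; }

// In an init script: create the guard flag, starting as "not talking".
setVar("$RobotTalking", 0);
```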
Then, before your robot speaks:
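The reply could then be wrapped like this, assuming a blocking "wait" style say call (stubbed below, since the real call only exists inside ARC):

```javascript
// Stubs standing in for ARC built-ins so this sketch is self-contained;
// in ARC, setVar()/getVar() are provided and sayWait() would be the
// blocking Audio.sayWait() variant.
var _store = {};
var _spoken = [];
function setVar(name, value) { _store[name] = value; }
function getVar(name) { return _store[name]; }
function sayWait(text) { _spoken.push(text); } // blocks until audio ends in ARC

// Raise the guard flag, speak with the blocking variant, then clear it.
setVar("$RobotTalking", 1);
sayWait("Here is my reply");
setVar("$RobotTalking", 0);
```

Because the blocking variant returns only after the audio finishes, any recognition fired during the reply sees $RobotTalking = 1 and can be ignored.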
If you are using Audio.say() instead of the wait version, you can monitor speech with the built-in variables:
- getVar("$IsSpeaking") for computer TTS
- getVar("$EZBPlayingAudio") for EZ-B audio

Better pattern for wake-word robots
A common pattern is:
1. Before the robot speaks, set a busy flag such as $RobotTalking = 1.
2. Speak with the blocking "wait" variant so the script holds until the audio finishes (or monitor the variables above).
3. Clear the flag.
4. In the recognized-speech script, exit immediately if the flag is set.
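If the non-blocking Audio.say() is used, the busy check can poll the speech variable instead; a stubbed sketch (a real ARC script would use its built-in getVar() and sleep(ms); the stubbed sleep here just pretends the TTS finishes after three polls):

```javascript
// Stubs for ARC built-ins; in ARC, getVar() and sleep(ms) are provided.
var _store = { "$IsSpeaking": 1 };
var _ticks = 0;
function getVar(name) { return _store[name]; }
function sleep(ms) {
  // Stub: pretend the computer TTS finishes after three polls.
  if (++_ticks >= 3) { _store["$IsSpeaking"] = 0; }
}

// Poll until the computer TTS has finished before handling speech again.
function waitUntilQuiet() {
  while (getVar("$IsSpeaking") === 1) {
    sleep(100); // in ARC this pauses the script for 100 ms
  }
}
waitUntilQuiet();
```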
Example:
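A self-contained sketch of the whole flag-guarded recognized-speech handler (ARC built-ins stubbed; $RobotTalking is an illustrative name, not part of the skill):

```javascript
// Stubs for ARC built-ins so the pattern is runnable outside ARC.
var _store = { "$RobotTalking": 0, "$BingSpeech": "what time is it" };
var _printed = [];
function setVar(name, value) { _store[name] = value; }
function getVar(name) { return _store[name]; }
function print(msg) { _printed.push(msg); } // ARC prints to the script console
function speakWait(text) {
  setVar("$RobotTalking", 1); // robot is talking: recognition should be ignored
  // ... the blocking Audio.sayWait(text) would play the reply here in ARC ...
  setVar("$RobotTalking", 0); // done talking: recognition is meaningful again
}

// Recognized-speech script: drop phrases heard while the robot talks.
function onRecognized() {
  if (getVar("$RobotTalking") === 1) {
    return; // the robot most likely heard itself
  }
  var phrase = getVar("$BingSpeech");
  print("Heard: " + phrase);
  speakWait("You said " + phrase);
}
onRecognized();
```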
If you really want to pause listening
If you want to pause based on a button or external event, do it outside the recognized-speech callback, not from inside the recognition handler that is currently processing the phrase.
Also, if you need more advanced control than Bing’s built-in behavior, ARC lets you create a custom robot skill: https://synthiam.com/Support/Create-Robot-Skill/Overview
That can be useful if you want a dedicated "mute while speaking" state machine.
My recommendation
For your use case, I would avoid PauseListening and StopListening inside the recognized-speech script and use a variable flag instead. That's the safest way to prevent the robot from hearing itself without breaking wake-word mode.

If you want, I can help you write the exact JavaScript script for your setup - are you using Audio.sayEZB() or Audio.say() for the robot replies?

I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
Thank you, that explanation makes sense and matches what I am seeing.
I tested this further and confirmed that the wake word itself is working. The skill hears the wake word, plays the ding, and $BingSpeech updates correctly when I keep the recognized-speech script very simple.

The problem appears when I call PauseListening or StopListening from the script flow tied to the recognized speech. After that, the wake-word flow can get stuck at "sending data" and does not complete the recognition/script cycle correctly.

Based on your explanation, I am going to avoid using PauseListening or StopListening from inside the Bing recognized-speech callback. Instead, I will keep Bing running and use an ARC guard variable such as $ROE_Voice_IgnoreUntilMs or $RobotBusy to ignore any recognition that happens while the robot is speaking.

For now, I will keep the Bing recognized-speech script very small, something like:
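A sketch of what I have in mind, with ARC's setVar()/getVar() stubbed so it runs outside ARC (the router start at the end is only indicative):

```javascript
// Stubs for ARC's global variable store; the $ROE_* names follow my own
// naming convention and are illustrative.
var _store = { "$BingSpeech": "hello robot" };
function setVar(name, value) { _store[name] = value; }
function getVar(name) { return _store[name]; }

// Minimal recognized-speech script: record the phrase and when it arrived,
// then hand off to a separate router script to decide what to do with it.
setVar("$ROE_LastPhrase", getVar("$BingSpeech"));
setVar("$ROE_LastPhraseMs", Date.now());
// In ARC, the script would then start the router script, for example:
// ControlCommand("Script Manager", "ScriptStart", "VoiceRouter");
```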
Then the router will decide whether to act on the phrase or ignore it based on a busy/ignore window.
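For example, the router check could look like this (stubbed, with made-up millisecond values for illustration):

```javascript
// Router sketch using the $ROE_Voice_IgnoreUntilMs idea: act on a phrase
// only if it arrived after the ignore window ended.
var _store = {
  "$ROE_LastPhrase": "hello robot",
  "$ROE_LastPhraseMs": 1000,          // phrase heard at t = 1000 ms
  "$ROE_Voice_IgnoreUntilMs": 500     // robot stopped talking at t = 500 ms
};
function getVar(name) { return _store[name]; }

function routePhrase() {
  if (getVar("$ROE_LastPhraseMs") < getVar("$ROE_Voice_IgnoreUntilMs")) {
    return null; // heard while the robot was talking: drop it
  }
  return getVar("$ROE_LastPhrase"); // outside the window: act on it
}
var result = routePhrase();
```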
That seems like the safest pattern for wake-word use. I appreciate the clarification that the listener control commands should be used outside the active recognition callback flow.