
smiller29
The Speech Is Cut Off Using Bing Speech And ChatGPT And IoTiny
So I ask Bing Speech "What is your name?" It sends the question to ChatGPT, which then provides its response in the $OpenAIResponse variable: "My name is XR1..." But what I get out of the speaker is "is XR1..." or "name is XR1..." Why would it be cut off like that?
@Nink I have not tried the Watson speech-to-text and text-to-speech skills for this. I just can't understand how it can be dropping part of the output in the audio. The EZBSAY command is sending the complete response, but parts are missing in the audio output. To me this sounds like a software issue talking to the hardware, or a firmware issue in the hardware. I know WiFi can cause delays, but if the packets were sent and received, the audio should play everything, not just parts of it.
This is not an issue with Bing Speech; it is doing the speech-to-text conversion correctly and sending the text correctly to ChatGPT. The issue is the EZBSAY command being used in the call within ChatGPT.
hi all
FANT0MAS
If you have many tries without success, try deleting your browser history and rebooting.
I'm guessing there are two robot skills trying to access the EZB speaker at the same time. Most likely the Bing and ChatGPT skills both call sayEZB(), so they're conflicting. Each robot skill runs in its own process thread, so if they're both trying to access the speaker using sayEZB(), one will cut off the other.
The best course of action for diagnosing is to remove any other potential issues. So simplify the process by starting a new blank project and adding ChatGPT and Bing.
Then, configure Bing to send the response to ChatGPT.
Then, configure ChatGPT to speak its response.
It'll work fine. That means your project is doing something to cut off the speech. If it works in a new blank project, then something needs to be fixed in your project.
I must have missed this topic some time ago. The issue is likely related to the Best Match feature in the OpenAI ChatGPT skill.
When I originally used the Best Match feature on the EZ-InMoov Robot Head Advanced project, it worked differently than it does today. I believe we released the project in May, and it was released tested and working. The skill's functionality was likely changed in late June without us being aware.
This is a tricky issue because this skill now has some automatic features that can conflict.
For example, I have this code in my Response script:
And this an example of code in my Best Match scripts:
Prior to June, the Best Match script would only execute if a "best match" was found, and the Response script would execute if a "best match" wasn't found.
Now both the Response script and Best match script execute at the same time, and there is always a "best match" found.
As you can see in the example above, the Audio.sayEZBWait(getVar("$OpenAIResponse")); call is now executed twice at the same time, leading to some very weird audio.
It seems that the skill is now designed for users to use either the Best match feature or the Response script feature, not both.
From my testing today, it seems that the Best Match feature is bugged, as the response I get back is always "No" for whatever question I ask it. It doesn't matter whether the "Match on AI response when checked, otherwise match on human input" checkbox is checked or not.
Knowing that both the Best Match and Response scripts execute on every request, I would not place duplicate speech commands in both. Doing so will result in precisely the problem you're having. Remove the Say command from one of the scripts. Removing it from the Match script makes the most sense because there might not always be a match.
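To illustrate the deduplication advice, here is a minimal sketch in plain JavaScript. The sayEZBWait function below is a hypothetical stub standing in for ARC's Audio.sayEZBWait() (which streams TTS audio to the EZ-B and blocks until playback finishes), and the two script functions are illustrative, not actual skill code:

```javascript
// Stub standing in for ARC's Audio.sayEZBWait(); it just records
// what would have been spoken so we can inspect the result.
const spoken = [];
function sayEZBWait(text) { spoken.push(text); }

// After the June change, BOTH scripts run on every ChatGPT response.
const response = "My name is XR1";

function responseScript(text) {
  sayEZBWait(text);  // keep the speech command here only...
}

function bestMatchScript(text) {
  // ...and do only non-speech actions here (servo moves, flags, etc.),
  // so a triggered match never talks over the Response script.
}

responseScript(response);
bestMatchScript(response);
console.log(spoken);  // the phrase is spoken exactly once
```

With the Say command removed from the Best Match script, each response reaches the speaker once, regardless of whether a match fires.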
Lastly, make sure your use case has a reason for using sayEZBWait(), because the Wait version performs differently than simply sayEZB().
@Athena explain more...
The Audio.sayEZBWait() command in ARC is designed to make the EZ-B v4/2's onboard speaker articulate the specified text using text-to-speech synthesis. The Wait suffix indicates that the command will block further execution of the script until the speech synthesis is complete. This ensures that the entire phrase is spoken before the script runs any subsequent commands.

If you have multiple scripts or robot skills attempting to use Audio.sayEZBWait() simultaneously, they may interfere with each other, causing the audio to be cut off or overlapped. The Wait suffix holds that script's execution, but if another script starts speaking, it may interrupt the ongoing speech.

To avoid conflicts and ensure smooth audio output, consider the following:

1. Avoid duplicate speech commands: ensure that you do not have Audio.sayEZBWait() commands in both the Best Match scripts and the Response script within the OpenAI ChatGPT skill. If both scripts are triggered, they will overlap and cause audio issues.
2. Use non-blocking speech commands: if the use case allows, you can use the non-blocking Audio.sayEZB() command, which will not hold the script's execution. This can prevent conflicts if multiple scripts are running, but you will need to manage the timing manually to avoid overlaps.
3. Sequential execution: if you need the Wait version of the command for specific reasons, ensure that the scripts are executed sequentially and not simultaneously. You can manage this by using variables or flags to indicate when a speech command is in progress and prevent other scripts from starting speech until it's complete.
4. Simplify your project: as suggested by DJ Sures, start with a clean ARC project with only the necessary skills (Bing Speech Recognition and OpenAI ChatGPT) and build up from there. This will help you identify where the conflict occurs and allow you to adjust your scripts accordingly.
5. Check network and hardware performance: while you've ruled out network issues, it's still a good idea to ensure that your PC and IoTiny are performing optimally and that there are no hardware limitations causing the audio to be cut off.
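The flag idea in the sequential-execution suggestion can be sketched in plain JavaScript. The speak function and the speaking flag below are illustrative stand-ins (in ARC you would use a global variable plus Audio.sayEZBWait()), not real skill code:

```javascript
// A simple guard so only one script speaks at a time. "speaking"
// plays the role of a shared/global variable checked by every script.
let speaking = false;
const log = [];

async function speak(text) {
  if (speaking) {
    // Another script is mid-speech: drop (or queue) this request
    // instead of cutting off the current phrase.
    log.push("skipped: " + text);
    return;
  }
  speaking = true;
  log.push("spoke: " + text);
  // Stand-in for Audio.sayEZBWait(text): pretend playback takes 50 ms.
  await new Promise((resolve) => setTimeout(resolve, 50));
  speaking = false;
}

// Two scripts firing at (nearly) the same time:
speak("My name is XR1");
speak("My name is XR1");  // overlaps, so the guard drops it
```

In ARC the same pattern can be built with a project variable that each script checks before calling a speech command; the key point is that only the script which successfully claims the flag talks to the speaker.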
By carefully managing the speech commands and the flow of your scripts, you should be able to resolve the issue of the audio being cut off and achieve the desired interaction with your robot. If you continue to experience difficulties, consider reaching out to the Synthiam community or support for further assistance.
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
Sorry @DJ I think you missed what I was explaining.
I understand how it worked before and how it works now. I am no longer using the code above, but I was using it to explain the trouble I was having and what @smiller29 may have been experiencing.
I am very certain Best Match is bugged; please try it for yourself.