PRO
afcorson
Australia
Asked
I am experiencing a very slow response time when using ChatGPT within the Synthiam ARC software. It frequently times out after approximately 15 seconds. I notice that during the processing of my request, the message "Sending Convo request 44 msgs (0 images)" is displayed. Could anyone explain what the "44 msgs" refers to? Additionally, what might be causing this slowdown in response time?
Related Hardware EZ-B v4
Related Control
OpenAI ChatGPT

Hi there! It sounds like your ARC project is running into performance issues with the OpenAI ChatGPT robot skill. Let me explain a couple of things:
1. What the "44 msgs" Means
The "44 msgs" in "Sending Convo request 44 msgs (0 images)" is the number of messages currently stored in the skill's conversation history. Each new request re-sends that entire history to OpenAI so the model has context, which is why the count grows as your conversation continues.
2. Possible Causes for Slow Response
When the conversation history grows, several factors can lead to slower responses:
Increased Token Count:
Each message contributes tokens (the pieces of text) that are sent to the OpenAI server. The more tokens sent with each request, the longer it might take to process the request. A history of 44 messages means the server is handling a significant amount of context data, which can slow down response times.
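As a rough illustration of why this matters (the token figures below are invented; real tokenization depends on the model), the payload cost grows linearly with the number of stored messages:

```javascript
// Sketch: why a long conversation history slows each request.
// Token counts here are illustrative only; actual tokenization
// is model-specific.
function estimateRequestTokens(history, approxTokensPerMessage = 60) {
  // Every request re-sends the whole history, so the payload grows
  // linearly with the number of stored messages.
  return history.length * approxTokensPerMessage;
}

const history = new Array(44).fill({ role: "user", content: "..." });
console.log(estimateRequestTokens(history)); // 2640 (~44 x 60 tokens)
```

So a 44-message history can easily multiply the request size many times over compared with a fresh conversation.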
Network Latency:
If you’re experiencing network slowdowns or high latency between your EZ-B v4 and the OpenAI server, every request may take longer to complete.
Timeout Settings:
The 15-second timeout you’re encountering might be reached because of the combined processing time needed for a large conversation history plus any network delays.
Processing Overhead in ARC:
While ARC is designed to work with multiple skills simultaneously, if one of the skills (like OpenAI ChatGPT) is handling a lot of data or commands, it might cause some delays in overall responsiveness.
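The timeout interaction described under "Timeout Settings" can be sketched in plain JavaScript (this is a general illustration, not ARC's internal code; times are scaled down from the real 15-second budget):

```javascript
// Sketch: a fixed timeout races against total request time.
// If (processing time for a large history) + (network latency)
// exceeds the budget, the request is abandoned even though the
// server might eventually have answered.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error("timed out")), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// A simulated 200 ms "request" against a 100 ms budget rejects:
const slowRequest = new Promise((resolve) => setTimeout(resolve, 200, "ok"));
withTimeout(slowRequest, 100).catch((err) => console.log(err.message)); // "timed out"
```

This is why shrinking the history (less processing time) and improving the connection (less latency) both help you stay inside the same fixed timeout.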
3. What You Can Do
Here are a few suggestions to help mitigate the slow response:
Reset/Clear Conversation History:
If your conversation history is not needed for context, consider using the ResetMemory ControlCommand to clear it. This reduces the total number of messages sent with each new request.
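A minimal sketch of that step, assuming ARC's JavaScript scripting and that the skill window is named "OpenAI ChatGPT" (adjust the name to match your project):

```javascript
// Clear the ChatGPT conversation history so the next request starts fresh.
// "OpenAI ChatGPT" is assumed to be the skill window name in your project.
ControlCommand("OpenAI ChatGPT", "ResetMemory");
```

You could place this in a connection script, for example, so the history is cleared each time the robot connects.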
Limit History Length:
Consider modifying your setup so that only a limited number of recent messages are sent. This might involve adjusting settings in the OpenAI ChatGPT skill configuration (look for options related to "Chat History Count").
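The idea behind a history cap can be sketched in plain JavaScript (an illustration of the concept only, not the skill's internal code; the actual behavior is governed by the skill's "Chat History Count" setting):

```javascript
// Keep only the most recent N messages so each request stays small.
function trimHistory(history, maxMessages) {
  // slice(-n) returns the last n elements, or the whole array if shorter.
  return history.slice(-maxMessages);
}

const history = ["msg1", "msg2", "msg3", "msg4", "msg5"];
console.log(trimHistory(history, 3)); // ["msg3", "msg4", "msg5"]
```

With a cap like this, request size stays bounded no matter how long the conversation runs, at the cost of the model forgetting older context.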
Check Your Network:
Ensure that your network connection is stable and has minimal latency. A WiFi connection, especially with the EZ-B v4, can sometimes be more prone to delays than a direct USB or Ethernet connection.
Review Timeout Settings:
If possible, look into whether the timeout settings in ARC or the OpenAI skill can be adjusted (although be cautious - extending the timeout too much might simply delay error handling rather than resolving the underlying problem).
I hope that helps clarify the "44 msgs" and gives you a direction to troubleshoot the slow response time. Do you have any more questions or need further clarification on any of these points?
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
Don’t forget to try different models. More complex models have a slower response during processing. It would be useful to know the details of your model and whether a camera image is attached to each request. If using images, only the most recent image is added to the request, so that generally isn’t a problem. It’s the model I find matters most.
Hi there,
DJ_Sures raises some good points regarding model complexity. Here are a few considerations:
Different Models:
More advanced models (like GPT-4) inherently require more processing time given their larger size and complexity. If you're experiencing slower responses, your selected model might be contributing to this delay. You might try switching to a less complex model (e.g., GPT-3.5) to see if response times improve.
Camera Image Attachment:
As DJ_Sures mentioned, if you're using images, only the most recent image is attached to each request. This means that if you’re not actively capturing a camera image for each conversation input, the image data likely isn’t the problem. However, it's still a good idea to verify that your settings are as intended so that unnecessary image data isn't being attached.
Conversation History:
Even though it isn’t mentioned in this particular message, remember that a larger conversation history (like the 44 msgs noted earlier) also adds to processing time. If you don’t need the full context, consider clearing it with the ResetMemory ControlCommand or limiting it via the skill's history settings.
Testing and Adjustments:
It might be useful to experiment by switching to a simpler model, confirming that no camera images are being attached, and clearing the conversation history, then comparing response times after each change.
Could you share which model you're currently using and whether your setup regularly attaches camera images? This will help narrow down the cause of the slow response in your specific case.
Feel free to ask more questions or give additional details if you need further assistance!
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
I am using GPT-5 nano. I have added a ChatGPT memory reset every time I connect to the robot. It can still take up to 40 secs to get a response to a simple question (via the mic). No image is attached to the conversation request. Image analysis, on the other hand, is fast. I can try different models, but their doco suggests GPT-5 nano is the fastest.
Don’t forget that cloud processing means you’re at the mercy of the current demand load on the OpenAI service.
While it may be slow now, it might be quicker tomorrow or later in the day, because demand fluctuates with the popularity of the model or service. Geolocation could also play a role, depending on the processing capability of the servers in your geographic region.