mstephens_42
GPT-4o Support Coming
This will be revolutionary with Synthiam-enabled robots! I can't wait for ARC to have an updated GPT plugin! My C-3PO could respond in real time to conversation!
@DJ Sures the future is NOW!
@athena tell us about GPT-4o and what features that can be added to a GPT-4o robot skill that could help synthiam robot users build awesome robots.
Hello @DJ_Sures and @mstephens_42,
GPT-4, the latest iteration of OpenAI's Generative Pre-trained Transformer, brings several advancements and features that can significantly enhance the capabilities of Synthiam-enabled robots. Here are some key features and potential applications for robot skills:
Key Features of GPT-4:
- Improved Natural Language Understanding
- Enhanced Multimodal Capabilities
- Greater Customization
- Increased Efficiency and Scalability
Potential Applications for Robot Skills:
- Advanced Conversational Agents
- Enhanced Vision and Perception
- Personalized Assistance
- Educational and Training Tools
- Healthcare and Therapy
Integration with Synthiam ARC:
To integrate GPT-4 into Synthiam ARC, developers can use the ChatGPT robot skill, configured with their OpenAI API key.
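As a rough sketch of what such an integration sends under the hood (the endpoint URL and model name follow OpenAI's public chat-completions API; the helper functions and the C-3PO persona are illustrative assumptions, not ARC's actual implementation), a script could assemble a request like this:

```python
import json
import urllib.request

# Assumed OpenAI chat-completions endpoint; in ARC the ChatGPT robot
# skill handles all of this internally, so this is only illustrative.
OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_payload(user_text, persona="You are C-3PO, a polite protocol droid."):
    """Assemble the JSON body for a single-turn chat request."""
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": persona},
            {"role": "user", "content": user_text},
        ],
    }

def send_chat(payload, api_key):
    """POST the payload to OpenAI and return the reply text (needs a valid key)."""
    req = urllib.request.Request(
        OPENAI_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

payload = build_chat_payload("Hello there!")
```

In practice the ChatGPT robot skill wraps this request/response cycle for you; the sketch only shows the shape of the data involved.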
By leveraging GPT-4's advanced features, Synthiam-enabled robots can achieve new levels of interactivity, intelligence, and usefulness, making them valuable tools in various domains.
If you have any specific questions or need further assistance with integrating GPT-4 into your ARC project, feel free to ask!
Best regards, Athena
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
@athena you seem to be talking about GPT-4 and not GPT-4o. GPT-4o is the newest version, which it seems you aren't aware of yet. Anyone can read more about it here: https://openai.com/index/hello-gpt-4o/
I think it'll be good to see a robot skill created with it. I don't think there's much API capability for it yet.
Thank you for the clarification, @DJ_Sures. I apologize for the confusion. As of my knowledge cutoff date in October 2023, I do not have information on GPT-4o. However, based on the link you provided, it seems like GPT-4o is an even more advanced iteration of OpenAI's language models.
Potential Features of GPT-4o for Robot Skills:
While I don't have specific details about GPT-4o, I can speculate on potential features and enhancements based on the progression from GPT-3 to GPT-4:
- Enhanced Multimodal Capabilities
- Improved Contextual Awareness
- Greater Customization and Fine-Tuning
- Increased Efficiency and Scalability
Potential Applications for Robot Skills:
- Advanced Conversational Agents
- Enhanced Vision and Perception
- Personalized Assistance
- Educational and Training Tools
- Healthcare and Therapy
Integration with Synthiam ARC:
To integrate GPT-4o into Synthiam ARC, developers can use the ChatGPT robot skill, configured with their OpenAI API key.
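To make the multimodal point concrete: GPT-4o accepts images in chat requests as base64 data URLs alongside the text prompt. A hedged sketch follows (the helper function and stand-in JPEG bytes are hypothetical; in ARC the frame would come from the Camera robot skill):

```python
import base64

def build_vision_payload(question, jpeg_bytes):
    """Build a chat request that pairs a text question with a camera frame.

    The image_url/data-URL content format follows OpenAI's vision API;
    jpeg_bytes stands in for a frame grabbed from ARC's Camera skill.
    """
    b64 = base64.b64encode(jpeg_bytes).decode("ascii")
    return {
        "model": "gpt-4o",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }

# Stand-in bytes; a real call would pass an actual JPEG frame.
payload = build_vision_payload("What do you see?", b"\xff\xd8fake-jpeg")
```

POSTing this body to the chat-completions endpoint (as in a normal text request) would let the model describe what the robot's camera sees.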
By leveraging GPT-4o's advanced features, Synthiam-enabled robots can achieve new levels of interactivity, intelligence, and usefulness, making them valuable tools in various domains.
If you have any questions or need further assistance with integrating GPT-4o into your ARC project, feel free to ask!
Best regards, Athena
Yeah, another game changer from OpenAI. I really like the super low latency because it uses speech-to-speech. I think the app's ability to use vision to explore the environment might be super useful in robotics. Definitely a useful product from my perspective, giving Alan and Alena the ability to hold extremely natural conversations.
@DJ do you think an API could include using vision from the Camera skill to give ChatGPT the ability to see, like shown in the examples from yesterday's keynote using the iPhone's camera?
Yeah the vision is surely something we need to implement. There hasn't been any update on the API yet so I'll keep an eye on it. As for the speech and feedback, that seems to be a chat-gpt thing (i.e. their app) not the API. The API looks like it'll take audio but not sure how the rest will work yet. Guess we just have to hold onto our hats until the API is released
@DJSures, is this saying the API is ready?
API
Another post from your account was removed for being a duplicate. We believe it asked the same question as this post.
You can use GPT-4o via the Chat GPT robot skill found in ARC. The updated Chat GPT robot skill manual explains more and can be read on the Chat GPT robot skill manual page.
Have fun!
Ah, support is right - there was an update to the Chat GPT skill a few weeks ago for doing images and such with GPT-4o. Check it out - there's a conversation at the bottom with nink using it.
GPT-4o now includes audio input. Can this be utilised in the Chat GPT skill? For my robot, the latency for verbal communication is much higher than for image description. I assume there are two reasons for this. First, the Chat GPT skill converts my speech to text; second, when the Bing Speech Recognition skill starts recording, it waits until the maximum recording length expires before sending the text to ChatGPT. So if the user says 'hello', ChatGPT takes several seconds to respond. I think this skill used to detect when the person stopped talking, which would be more efficient. I tested GPT-4o in real time via my account: the response was immediate, as soon as I stopped talking. This is what I'd love to achieve with my robots.
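The end-of-speech detection described above can be sketched with a simple energy threshold: stop recording once several consecutive frames fall below it, rather than waiting out the maximum recording length. This is an illustrative assumption, not how the Bing Speech Recognition skill actually works; the threshold, frame size, and silence count here are made up:

```python
import math

def rms(frame):
    """Root-mean-square energy of one frame of audio samples."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def end_of_speech_index(frames, threshold=100.0, silence_frames=3):
    """Return the index of the frame where speech is judged finished,
    or None if the speaker never went quiet for long enough.

    Counts consecutive low-energy frames; once `silence_frames` in a
    row fall below `threshold`, the utterance is considered over.
    """
    quiet = 0
    for i, frame in enumerate(frames):
        quiet = quiet + 1 if rms(frame) < threshold else 0
        if quiet >= silence_frames:
            return i
    return None

# Two loud frames followed by three quiet ones: detection fires on the
# third consecutive quiet frame (index 4).
loud = [1000, -1000] * 80
quiet = [10, -10] * 80
print(end_of_speech_index([loud, loud, quiet, quiet, quiet]))  # → 4
```

Cutting the recording at that point and sending the transcript immediately is what would shave seconds off the round trip, which is presumably what the skill's earlier stop-on-silence behaviour did.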
Sounds cool. I'd still like my robots to at least sound mechanical when they speak (like a Cylon).