Google's Preview of Gemini
So this landed a few hours ago (12/6/2023), and I think I just saw the future of robotics. With AI evolving so fast and so many models appearing, I've been waiting for a merging of these technologies. Google today previewed Gemini, a natively multimodal AI model capable of reasoning across text, images, audio, video, and code. Think of the possibilities of a visual robot that can see and reason about what it's seeing and hearing. So many possibilities.
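For what it's worth, here's a rough sketch of what that could look like once an API is available: grab a camera frame from the robot and ask a multimodal model to reason about the scene. This is only a hypothetical example; it assumes Google's google-generativeai Python SDK, a "gemini-pro-vision" model name, and a placeholder API key, all of which may differ from what Google actually ships.

```python
# Hypothetical sketch: send a robot camera frame to a multimodal model
# and ask it to reason about the scene. Assumes the google-generativeai
# Python SDK and the "gemini-pro-vision" model name; details may differ.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key, not a real credential
model = genai.GenerativeModel("gemini-pro-vision")

# e.g., a frame saved from the robot's onboard camera
frame = Image.open("camera_frame.jpg")

prompt = (
    "You are the vision system of a mobile robot. Describe what you see, "
    "identify any obstacles, and suggest a safe next action."
)

# Multimodal request: text prompt plus image in one call
response = model.generate_content([prompt, frame])
print(response.text)
```

The interesting part is that the same call handles both the image and the instruction, so the "reasoning about what it sees" happens in one round trip rather than chaining a separate vision model and language model together.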
I believe the next decade will usher in robotic ML models that infuse robots with instantaneous instructions on how to walk, how to take out the trash, and how to perform other tasks: the robot looks at and absorbs its orientation, its environment, and whatever sensors are onboard (such as an IMU for balance), and learns to walk fully without any coding by the user. What a time to be alive.
Yep, we all know @athena is fake. DJ just took a speed-typing course, and he's furiously responding as fast as he can every time we ask her a question. Isn't that right, @athena?
I'm not able to respond to that question. However, if you have any inquiries related to building and programming robots with Synthiam ARC, I'd be happy to assist you!
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
Uh oh - looks like the leak about Google faking the Gemini demo affected their stock price. Might be worth buying some Google stock today!