I was setting up my Grok API account in anticipation of the Grok 3 API release and connected it to JD using the Autonomous AI plugin in Synthiam ARC. I pointed the API call at https://api.x.ai/v1/ with the model "grok-2-vision-latest". However, it enters a loop when I ask it to perform an action: when I command it to wave, it continuously sends images and waves repeatedly. This does not happen with ChatGPT, which waves once and stops. I have noticed the same looping pattern with other models as well.
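For context, I assume the plugin is making the standard OpenAI-compatible /chat/completions call against that base URL. Here is a rough sketch of what I believe gets sent; the payload shape and the `XAI_API_KEY` variable are my assumptions, not the plugin's actual internals:

```javascript
// Sketch of an OpenAI-compatible chat request against the x.ai endpoint.
// XAI_API_KEY and buildChatRequest are illustrative placeholders.
function buildChatRequest(model, messages) {
  return {
    url: "https://api.x.ai/v1/chat/completions",
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${process.env.XAI_API_KEY || "<your key>"}`,
    },
    // Standard OpenAI-style body: model name plus the message history.
    body: JSON.stringify({ model, messages }),
  };
}

const req = buildChatRequest("grok-2-vision-latest", [
  { role: "user", content: "wave" },
]);
```

If the model name or endpoint path were wrong I'd expect an error rather than a loop, so the request itself seems fine; the looping appears to come from the model's self-programmed follow-up calls.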
Here is a snippet of the interaction log:
- Initialized
- Using model: gpt-4o-mini
- Initialized
- Using model: grok-2-vision-latest
- Capturing image...
- User: wave
- Sending #1 (3 msgs, 1 image)
- Convo tokens: 5360
- AI: "I'm going to wave at you. Watch this!"
- Self-programming:
controlCommand("Auto Position", "AutoPositionActionWait", "Wave")
ControlCommand("Autonomous AI", "GetImage")
- Capturing image...
- Sending #2 (5 msgs, 2 images)
- Convo tokens: 5762
- AI: "It looks like I'm in a room with light streaming through the curtains. I don't see anyone. Could you please tell me where you are located?"
- Self-programming:
ControlCommand("Autonomous AI", "GetImage")
- Capturing image...
This pattern continues: the AI repeatedly captures images and asks for location details. It seems stuck in a loop, and I'm not sure how to resolve it. Any insights or suggestions on how to address this looping behavior would be greatly appreciated.
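One workaround I'm considering while waiting for a proper fix is to budget the autonomous image captures per user turn, so the model can't keep requesting `GetImage` forever. This is only a sketch of the idea; the function names and threshold are mine, not part of the Autonomous AI plugin's real API:

```javascript
// Hypothetical loop guard: allow at most N autonomous image captures
// between user inputs, then refuse further GetImage requests.
const MAX_AUTONOMOUS_CAPTURES = 2;

let capturesSinceUserInput = 0;

// Called whenever the user sends a new message; resets the budget.
function onUserInput() {
  capturesSinceUserInput = 0;
}

// Gatekeeper the self-programmed GetImage call would pass through.
// sendCommand would wrap ControlCommand("Autonomous AI", "GetImage").
// Returns true if the capture was allowed, false if the budget is spent.
function maybeGetImage(sendCommand) {
  if (capturesSinceUserInput >= MAX_AUTONOMOUS_CAPTURES) {
    return false; // break the loop instead of capturing again
  }
  capturesSinceUserInput++;
  sendCommand();
  return true;
}
```

With this in place, the third consecutive capture after "wave" would be suppressed until I say something new, which matches the once-and-stop behavior I see with ChatGPT.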
Currently running DeepSeek R1-14b locally.
- Hardware: RTX 3060 12GB
- Model: the 14B parameter model
- Experience: I do enjoy not having some of the safety measures from DeepSeek R1-14b. For day-to-day use I prefer ChatGPT, but for special scripting tasks, DeepSeek R1. It also uses fewer resources to run locally on my machine.
- Future plans: One day, when time permits, I will try it with ARC.
OpenAI's GPT-4.5 is really, really good for autonomous AI. But it's really, really slow. Actually, it's so slow I should probably throw a third "really" in there.
A response can take 20-30 seconds, but it needs less direction than gpt-4o for recursive tasks. I hear it's also more costly because the model is so large and requires far more compute. If they can optimize it to match gpt-4o's performance, it's a good contender.