smiller29
What is the best way to use a servo with more than one skill? I am not sure how to ask this question so I will provide an example.
I have the Camera skill installed in my project with head tracking turned on, using two servos to rotate and tilt the head. I also have the Auto Position skill set up with movements. I also have ChatGPT installed, where I may want to trigger an Auto Position movement that uses the same two servos the head tracking is using. So can I just trigger the movement, or do I have to turn off the head tracking, do the movement, and then turn the head tracking back on?
So I just want to understand how to deal with situations like this. As you can imagine, this could be a very common thing, and I want to know what the best practice is.
In ARC, when you want to control a servo with multiple skills, such as head tracking with the Camera skill and movements with the Auto Position skill, you need to manage the interaction between these skills to avoid conflicts.
Here's a recommended approach to handle such situations:
Prioritize Skills: Decide which skill should take priority over the servo at any given time. For example, if head tracking is a continuous activity but Auto Position movements are occasional, you might prioritize head tracking.
Use ControlCommand(): Utilize the ControlCommand() function to programmatically enable or disable skills as needed. This function allows you to send commands to other skills within your project.
Scripting: Write scripts that manage the transition between skills. For example, before triggering an Auto Position movement, you can use a script to disable head tracking, perform the movement, and then re-enable head tracking.
Here's an example script that you might use to manage this process:
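A minimal sketch in ARC's JavaScript flavor. The skill names ("Camera", "Auto Position"), the command names, and the "Wave" action are placeholders; check the ControlCommand list for each skill in your own project. In ARC, ControlCommand() and sleep() are built in, so the stubs at the top exist only so the sketch runs standalone:

```javascript
// NOTE: In ARC, ControlCommand() and sleep() are built-in script functions.
// These stubs exist only so the sketch runs outside ARC; the stub records
// each issued command so you can see the order of operations.
const issuedCommands = [];
function ControlCommand(skill, command, ...args) {
  issuedCommands.push(skill + ": " + command);
}
function sleep(ms) { /* ARC blocks the script here; stub is a no-op */ }

// Pause head tracking, run an Auto Position action, then resume tracking.
ControlCommand("Camera", "CameraServoTrackingDisable");        // stop head tracking
ControlCommand("Auto Position", "AutoPositionAction", "Wave"); // run the movement
sleep(3000); // give the action time to finish (tune to your movement length)
ControlCommand("Camera", "CameraServoTrackingEnable");         // resume head tracking
```

The fixed sleep() is the weak point of this approach; the Task Script and event-based options discussed below avoid guessing the movement's duration.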
Event Handling: Some skills may have events that you can use to detect when an action is completed. For example, the Auto Position skill might have an event that triggers when a movement is finished, which you can use to re-enable head tracking.
Variable Monitoring: Use global variables to monitor the state of your skills. For instance, you could have a variable that indicates whether head tracking is currently active and use this in your scripts to decide when to perform other actions.
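The variable-monitoring idea can be sketched like this. The variable name $HeadTrackingActive is hypothetical, and in ARC the getVar()/setVar() functions are built in; the stubs here only let the sketch run standalone:

```javascript
// NOTE: In ARC, getVar() and setVar() are built-in global-variable functions.
// Simple stand-ins so this sketch runs outside ARC.
const globals = {};
function setVar(name, value) { globals[name] = value; }
function getVar(name, fallback) { return name in globals ? globals[name] : fallback; }

// Hypothetical flag: set it wherever you enable or disable head tracking.
setVar("$HeadTrackingActive", true);

// Guard a movement so it only runs when head tracking does not own the servos.
function tryRunMovement() {
  if (getVar("$HeadTrackingActive", false)) {
    return "busy"; // tracking owns the servos; skip or queue the movement
  }
  // ...trigger the Auto Position action here...
  return "ran";
}
```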
User Interface: If you're manually controlling the robot, you can create custom buttons using the Interface Builder that run scripts to manage these transitions.
Remember that when multiple skills attempt to control the same servo simultaneously, it can lead to unpredictable behavior, so it's essential to manage the transitions between skills carefully. The best practice is to ensure that only one skill is controlling the servo at any given time, and use scripting to manage the handover between skills.
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
Stop the one thing to do the other.
If you're using a screwdriver and you need to use a hammer. What do you do? Do you try to hold them both at the same time? haha
A servo can be in only one position at a time. If you try to have camera tracking while the Auto Position is running, you'll have some crazy robot freak-out session!
Waiting for an Auto Position to finish, as Athena suggests, will work, but the timing will have to be accurate. It's easier and more dynamic to do this...
It's best to put the robot in different states for its actions. Read the manual and use the Task Script: synthiam.com/Support/Skills/Scripting/Task-Script?id=21103
Also, this process of how to do everything is in this great tutorial here: synthiam.com/Support/Get-Started/how-to-make-a-robot/make-a-diy-robot
Oh, and I thought of this for you too. It's mentioned in the getting started guide, but in case you missed it
conversation menu: synthiam.com/Support/Skills/Audio/Conversational-Menu?id=21091
you can have a conversation with the user to put the robot in various states
Thanks DJ, I need to figure out how to make a proper traffic cop to keep track of all of this stuff, as I want the robot to seem like it is running autonomously.
So I am looking for ways to make that happen using all these tools you have pointed out.
A good way to start is by writing down the things you want the robot to do. It’s an important first step of the getting started guide. It helps you categorize the modes for the robot.
use the conversation menu to allow a user to select a mode. Each mode will configure the robot skills differently.
but you need to make a list of the things you want the robot to do first. That way you can plan the modes.
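That "traffic cop" can be a single mode-switching script: each mode lists which skills to enable or disable, and the Conversational Menu (or a voice command) just picks the mode. The mode names, skill names, and commands below are placeholders for whatever is in your project, and ControlCommand() is stubbed only so this runs outside ARC:

```javascript
// Stub so the sketch runs outside ARC, where ControlCommand() is built in.
const log = [];
function ControlCommand(skill, command) { log.push(skill + " -> " + command); }

// Hypothetical mode table: each mode is a list of [skill, command] pairs.
// Replace these with the exact skill and command names from your project.
const modes = {
  "track":   [["Camera", "CameraServoTrackingEnable"]],
  "perform": [["Camera", "CameraServoTrackingDisable"],
              ["Auto Position", "AutoPositionAction"]],
};

// The traffic cop: apply every command for the chosen mode, in order.
function setMode(name) {
  for (const [skill, command] of modes[name])
    ControlCommand(skill, command);
}

setMode("perform"); // e.g. selected by the user via the Conversational Menu
```

Adding a new behavior then means adding one entry to the table rather than editing scripts scattered across skills.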
Yep, I have done that! For the most part, the main form of user input will be voice commands, but at the same time I want it to do things based on sensor input.