Object detection is fundamental to computer vision: recognizing what objects are in the robot camera's view and where they are in the image.
How to add the Tiny Yolo2 robot skill
- Load the most recent release of ARC (Get ARC).
- Press the Project tab from the top menu bar in ARC.
- Press Add Robot Skill from the button ribbon bar in ARC.
- Choose the Camera category tab.
- Press the Tiny Yolo2 icon to add the robot skill to your project.
Don't have a robot yet?
Follow the Getting Started Guide to build a robot and use the Tiny Yolo2 robot skill.
How to use the Tiny Yolo2 robot skill
This robot skill attaches to the Camera Device robot skill, using its video feed to detect objects and report where they are in the image.
Demo
Directions
Add a Camera Device robot skill to the project
Add this robot skill to the project. Check its log view to ensure the model has loaded correctly.
START the camera device robot skill, so it displays a video stream
By default, the TinyYolo skill does not actively detect objects. Check the "Active" checkbox to begin processing the camera's video stream.
Detected objects use the Camera Device robot skill's tracking features. The camera's On Tracking Start script will execute when objects are detected, and the $CameraObject_____ variables will be populated. Check the Camera Device robot skill page for a list of camera variables.
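For example, a few lines of JavaScript in the camera's On Tracking Start script can announce each detection. This is a minimal sketch; the $CameraObjectName variable name is an assumption based on the $CameraObject_____ pattern, so confirm the exact names in the Variable Watcher:

    // On Tracking Start script (JavaScript) - announce the detected object.
    // $CameraObjectName is an assumed variable name; verify it in the
    // Variable Watcher before relying on it.
    var objectName = getVar("$CameraObjectName", "");
    if (objectName.length > 0) {
        Audio.say("I see a " + objectName);
    }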
Camera Device Integration
This robot skill integrates with the camera device by using its tracking features. If servo tracking is enabled, this robot skill will move the servos. This is an extension of the camera robot skill. The On Tracking Start script will execute, and the camera device variables will be populated when tracking objects.
Performance
At HD webcam resolution, Tiny Yolo processes 30+ FPS at about 38% CPU, sometimes more, depending on your PC's processor.
Variables
The detected objects are stored in the global variable array provided by the camera robot skill. The number of detected objects determines the size of the array. Each detected object's location, confidence, and name are stored in variables. As described in the Directions above, the On Tracking Start script executes when objects are detected, and the $CameraObject_____ variables are populated; see the Camera Device robot skill page for the full list of camera variables.
Trained Objects
The Tiny Yolo robot skill includes an ONNX model with 20 trained objects: "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor". The ImageNetData is trained at an image resolution of 416x416 because it uses the TINY model; regular-sized models are trained at 608x608.
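As an illustration, a tracking-start script can react to just one of these trained classes. The sketch below assumes the detected label is available in a $CameraObjectName variable (verify the name in the Variable Watcher):

    // React only to one of the 20 trained classes (JavaScript sketch).
    // $CameraObjectName is an assumed variable name.
    var label = getVar("$CameraObjectName", "");
    if (label == "person") {
        Audio.say("A person has been detected");
    }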
ToDo
- Control commands for starting and stopping
- Ability to specify custom training model files
Yes, you can create a script in ARC to audibly announce the objects that the camera, in conjunction with the TinyYolo skill, is detecting. Here's a step-by-step guide on how to set this up:
Add the Camera Device and TinyYolo Skill: Ensure you have both the Camera Device and the TinyYolo skill added to your ARC project and that they are properly configured.
Add the Speech Synthesis Skill: If you haven't already, add the Speech Synthesis skill to your project. This skill will allow your robot to speak.
Create a Script: Add a new script to your project. This script will check the variables set by the TinyYolo skill for detected objects and use the Say command to announce them.
Access Detection Variables: The TinyYolo skill will store the detected objects in global variables. You can access these variables in your script to determine which objects have been detected.
Write the Script: In the script, write a loop or a conditional statement that checks the object detection variables and then uses the Say command to announce the objects; see the example below. Make sure to tailor the script to match the specific variables and output format of the TinyYolo skill. The example script is a basic template and may need to be adjusted for your specific setup.
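A minimal sketch of such a script in ARC's JavaScript, assuming the detection name and confidence are exposed through $CameraObject_____ variables (the exact names below are assumptions; check the Variable Watcher):

    // Announce the detected object and its confidence (JavaScript sketch).
    // $CameraObjectName and $CameraObjectConfidence are assumed names
    // following the $CameraObject_____ pattern - verify them first.
    var name = getVar("$CameraObjectName", "");
    var confidence = getVar("$CameraObjectConfidence", 0);
    if (name.length > 0) {
        Audio.say("I see a " + name + ", confidence " + confidence);
    }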
For more information on scripting in ARC, you can refer to the scripting documentation on the Synthiam website: https://synthiam.com/Support/ARC-Scripting
If you need further assistance with scripting or setting up the skills, you can find support articles and resources on the Synthiam website: https://synthiam.com/Support
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
@Athena Ai
in global variables, there are no variables for TinyYolo
To use audio feedback for object detection with TinyYolo and the Camera in ARC, you will need to create a script that checks for the presence of detected objects and then uses the Speech Synthesis skill to announce them. Here's a general approach to achieve this:
Add the Camera Device and TinyYolo Skill: Ensure you have both the Camera Device and the TinyYolo skill added to your ARC project and that they are properly configured.
Add the Speech Synthesis Skill: If you haven't already, add the Speech Synthesis skill to your project. This skill will allow your robot to speak.
Create a Script: Add a new script to your project. This script will check the variables set by the TinyYolo skill for detected objects and use the Say command to announce them.
Access Detection Variables: The TinyYolo skill will store the detected objects in global variables. You can access these variables in your script to determine which objects have been detected.
Write the Script: In the script, write a loop or a conditional statement that checks the object detection variables and then uses the Say command to announce the objects; a loop-based example follows. Make sure to tailor the script to match the variables and output format of the TinyYolo skill. The example script is a basic template and may need to be adjusted for your setup.
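As a loop-based variation on the earlier sketch, the script below polls for detections and announces each new object once. $CameraIsTracking and $CameraObjectName are assumed variable names; verify them in the Variable Watcher:

    // Poll for detections and announce each new object (JavaScript sketch).
    // Variable names are assumptions - confirm them in the Variable Watcher.
    var lastAnnounced = "";
    while (true) {
        if (getVar("$CameraIsTracking", 0) == 1) {
            var name = getVar("$CameraObjectName", "");
            // Announce only when the detected object changes.
            if (name.length > 0 && name != lastAnnounced) {
                Audio.say("I see a " + name);
                lastAnnounced = name;
            }
        }
        sleep(1000); // check once per second
    }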
If you are not seeing any variables related to TinyYolo in the global variables list, make sure that the TinyYolo skill is active and correctly detecting objects. The variables should be populated once the skill detects objects in the camera's view.
For more information on scripting in ARC, you can refer to the scripting documentation on the Synthiam website: https://synthiam.com/Support/ARC-Scripting
If you need further assistance with scripting or setting up the skills, you can find support articles and resources on the Synthiam website: https://synthiam.com/Support
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
@Athena Ai
Thanks anyway, no luck, maybe you can't use audio with the TinyYolo2 skill and a camera.
I apologize for the confusion earlier. If you're not seeing any variables related to TinyYolo in the global variables list, it's possible that the skill may not be outputting variables in a way that can be easily accessed by a script. However, you can still attempt to create a script that checks for detected objects and announces them using the Speech Synthesis skill.
Here's a revised approach that you can try:
Ensure TinyYolo and Camera are Active: Make sure that the TinyYolo skill and the camera are running and that object detection is working as expected.
Check for Detection Events: Instead of looking for global variables, you can use the detection events that are triggered when an object is detected. These events can be used to trigger a script.
Create a Script for Detection Events: In ARC, you can create a script that is triggered by the detection event. This script can use the Say command to announce the detected object.
Use the On Detected ControlCommand: The TinyYolo skill may have a ControlCommand that is called when an object is detected. You can use this command in your script to trigger speech when an object is detected. Here's an example of how you might set up the script:
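The sketch below uses a hypothetical GetDetectionArray() placeholder, as explained immediately after it; it is not a real ARC command:

    // Hypothetical sketch - GetDetectionArray() is a placeholder, not a
    // real ARC function. Replace it with however you actually read the
    // detected objects in your setup.
    var detections = GetDetectionArray();
    for (var i = 0; i < detections.length; i++) {
        Audio.say("Detected a " + detections[i]);
    }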
In this example, GetDetectionArray() is a placeholder for the actual function or command you would use to retrieve the array of detected objects from TinyYolo. You would need to replace this with the correct command based on the TinyYolo skill's documentation or output. If the TinyYolo skill does not provide a straightforward way to access the detected objects, you may need to reach out for further assistance or look for alternative methods to achieve your goal.
For more detailed guidance, you can refer to the TinyYolo skill documentation or explore the support articles on the Synthiam website: https://synthiam.com/Support.
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
in global variables, there are no variables for TinyYolo2
TinyYolo uses the camera device - it won't work without the camera device. If you scroll to the top of this page, you can read the manual for TinyYolo. The variables that are detected are camera object variables. You can use the variable watcher to see them. More information about the camera variables can be found on the Camera Device robot skill page. This page is for the TinyYolo robot skill.
Any robot skill can speak if it can add a script. To learn how to make a robot skill speak, look in the support section for your respective programming language (EZ-Script, JavaScript, Python). For example, in JavaScript, you type Audio.say("Hello there");.
If you want the robot to speak when TinyYolo detects something, put your code in the Camera Device's Tracking Start script. This is explained above in this manual. For more information about the Camera Device's variables and scripts, the camera device manual is here: https://synthiam.com/Support/Skills/Camera/Camera-Device?id=16120
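Putting that together, a one-line JavaScript sketch for the Camera Device's Tracking Start script could look like this; $CameraObjectName is an assumed variable name, so confirm it in the Variable Watcher:

    // Camera Device > Tracking Start script (JavaScript sketch).
    // Announces whatever TinyYolo just detected; the variable name is
    // an assumption - confirm it in the Variable Watcher.
    Audio.say("I see a " + getVar("$CameraObjectName", "something"));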