Thumbnail

Tiny Yolo2

by Synthiam

Object detection is fundamental to computer vision: recognizing what objects are inside the robot camera's view and where they are in the image.

Requires ARC v13 (Updated 11/26/2023)

How to add the Tiny Yolo2 robot skill

  1. Load the most recent release of ARC (Get ARC).
  2. Press the Project tab from the top menu bar in ARC.
  3. Press Add Robot Skill from the button ribbon bar in ARC.
  4. Choose the Camera category tab.
  5. Press the Tiny Yolo2 icon to add the robot skill to your project.

Don't have a robot yet?

Follow the Getting Started Guide to build a robot and use the Tiny Yolo2 robot skill.


How to use the Tiny Yolo2 robot skill

Object detection is fundamental to computer vision: Recognize the objects inside the robot camera and where they are in the image. This robot skill attaches to the Camera Device robot skill to obtain the video feed for detection.

Demo

Directions

  1. Add a Camera Device robot skill to the project

  2. Add this robot skill to the project. Check the robot skill's log view to ensure the robot skill has loaded the model correctly.

  3. START the Camera Device robot skill so that it displays a video stream.

  4. By default, the TinyYolo skill does not actively detect objects. Check the "Active" checkbox to begin processing the camera's video stream.

  5. Detected objects use the Camera Device robot skill's features. The On Tracking Start script will execute when objects are detected, and the $CameraObject_____ variables will be populated. Check the Camera Device robot skill page for a list of camera variables.
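As a sketch of step 5, an On Tracking Start script written in ARC's JavaScript might announce what was detected. The helper below only formats the phrase; the `Audio.say` and `getVar` calls shown in the comment are ARC APIs, and `$CameraObjectName` is taken from the script examples later on this page, so verify the exact variable names against the Camera Device documentation:

```javascript
// Builds the phrase to speak for a detected object.
// The object name would come from ARC's camera tracking variables.
function describeDetection(objectName) {
  return "I see a " + objectName;
}

// Inside ARC's On Tracking Start script you might then call
// (ARC-specific API, not runnable outside ARC):
//   Audio.say(describeDetection(getVar("$CameraObjectName")));
```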

Camera Device Integration

This robot skill integrates with the Camera Device by using its tracking features; it is an extension of the camera robot skill. If servo tracking is enabled, the servos will move when this skill reports a detection. When objects are tracked, the On Tracking Start script will execute and the camera device variables will be populated.

Performance

At HD webcam resolution, Tiny Yolo processes 30+ FPS at roughly 38% CPU usage, sometimes more, depending on your PC's processor.

User-inserted image

Variables

The detected objects are stored in the global variable arrays provided by the camera robot skill; the number of detected objects determines the array size. Each detected object's location, confidence, and name are stored. When objects are detected, the On Tracking Start script will execute and the $CameraObject_____ variables will be populated. Check the Camera Device robot skill page for a list of camera variables.
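To illustrate working with those arrays, the sketch below filters parallel name/confidence arrays by a threshold. The parallel-array layout is an assumption for illustration; consult the Camera Device page for the actual $CameraObject_____ variable names and shapes:

```javascript
// Keep only detections at or above a confidence threshold.
// names and confidences are assumed to be parallel arrays,
// e.g. read with getVar() from the camera's global variables.
function filterDetections(names, confidences, threshold) {
  const kept = [];
  for (let i = 0; i < names.length; i++) {
    if (confidences[i] >= threshold) {
      kept.push({ name: names[i], confidence: confidences[i] });
    }
  }
  return kept;
}
```

A script could then announce only the confident detections instead of every frame's raw output.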

User-inserted image

Trained Objects

The Tiny Yolo robot skill includes an ONNX model with 20 trained object classes: "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"

The model is trained at an input resolution of 416x416 because it uses the TINY variant; regular-sized YOLO models are trained at 608x608.
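For intuition about that fixed input size, a camera frame is typically scaled to fit the square model input while preserving its aspect ratio, with padding filling the remainder (letterboxing). The skill handles preprocessing for you, so treat this as a sketch of the general technique rather than a description of its internals:

```javascript
// Compute the letterboxed size and padding needed to fit a frame
// into a square model input (e.g. 416x416 for Tiny YOLO).
function letterbox(width, height, inputSize) {
  const scale = Math.min(inputSize / width, inputSize / height);
  const newW = Math.round(width * scale);
  const newH = Math.round(height * scale);
  return {
    width: newW,
    height: newH,
    padX: Math.floor((inputSize - newW) / 2), // left/right padding
    padY: Math.floor((inputSize - newH) / 2)  // top/bottom padding
  };
}
```

For example, a 640x480 webcam frame scales to 416x312 with 52 pixels of padding above and below.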

ToDo

  • control commands for starting, stopping
  • ability to specify custom training model files

ARC Pro

Upgrade to ARC Pro

Get access to the latest features and updates before they're released. You'll have everything that's needed to unleash your robot's potential!

PRO
Canada
#57  

Yes, you can create a script in ARC to audibly announce the objects that the camera, in conjunction with the TinyYolo skill, is detecting. Here's a step-by-step guide on how to set this up:

  1. Add the Camera Device and TinyYolo Skill: Ensure you have both the Camera Device and the TinyYolo skill added to your ARC project and that they are properly configured.

  2. Add the Speech Synthesis Skill: If you haven't already, add the Speech Synthesis skill to your project. This skill will allow your robot to speak.

  3. Create a Script: Add a new script to your project. This script will check the variables set by the TinyYolo skill for detected objects and use the Say command to announce them.

  4. Access Detection Variables: The TinyYolo skill will store the detected objects in global variables. You can access these variables in your script to determine which objects have been detected.

  5. Write the Script: In the script, write a loop or a conditional statement that checks the object detection variables and then uses the Say command to announce the objects. For example:

:Start
if ($CameraIsTracking = 1)
  Say("I see a " + $CameraObjectName)
endif
Sleep(1000)  # Wait for a second before checking again
goto(Start)
  6. Run the Script: Execute your script while the camera and TinyYolo skills are active. When an object is detected, the script will use the Speech Synthesis skill to announce it.

Make sure to tailor the script to match the specific variables and output format of the TinyYolo skill. The example script provided is a basic template and may need to be adjusted for your specific setup.

For more information on scripting in ARC, you can refer to the scripting documentation on the Synthiam website: https://synthiam.com/Support/ARC-Scripting

If you need further assistance with scripting or setting up the skills, you can find support articles and resources on the Synthiam website: https://synthiam.com/Support


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
USA
#58  

@Athena Ai

in global variables, there are no variables for TinyYolo

PRO
Canada
#59  

To use audio feedback for object detection with TinyYolo and the Camera in ARC, you will need to create a script that checks for the presence of detected objects and then uses the Speech Synthesis skill to announce them. Here's a general approach to achieve this:

  1. Add the Camera Device and TinyYolo Skill: Ensure you have both the Camera Device and the TinyYolo skill added to your ARC project and that they are properly configured.

  2. Add the Speech Synthesis Skill: If you haven't already, add the Speech Synthesis skill to your project. This skill will allow your robot to speak.

  3. Create a Script: Add a new script to your project. This script will check the variables set by the TinyYolo skill for detected objects and use the Say command to announce them.

  4. Access Detection Variables: The TinyYolo skill will store the detected objects in global variables. You can access these variables in your script to determine which objects have been detected.

  5. Write the Script: In the script, write a loop or a conditional statement that checks the object detection variables and then uses the Say command to announce the objects. For example:

:Start
if ($CameraIsTracking = 1)
  Say("I see a " + $CameraObjectName)
endif
Sleep(1000) # Wait for a second before checking again
goto(Start)

  6. Run the Script: Execute your script while the camera and TinyYolo skills are active. When an object is detected, the script will use the Speech Synthesis skill to announce it.

Make sure to tailor the script to match the variables and output format of the TinyYolo skill. The example script provided is a basic template and may need to be adjusted for your setup.

If you are not seeing any variables related to TinyYolo in the global variables list, make sure that the TinyYolo skill is active and correctly detecting objects. The variables should be populated once the skill detects objects in the camera's view.

For more information on scripting in ARC, you can refer to the scripting documentation on the Synthiam website: https://synthiam.com/Support/ARC-Scripting

If you need further assistance with scripting or setting up the skills, you can find support articles and resources on the Synthiam website: https://synthiam.com/Support


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
USA
#60  

@Athena Ai

Thanks anyway, no luck, maybe you can't use audio with the TinyYolo2 skill and a camera.

PRO
Canada
#61  

I apologize for the confusion earlier. If you're not seeing any variables related to TinyYolo in the global variables list, it's possible that the skill may not be outputting variables in a way that can be easily accessed by a script. However, you can still attempt to create a script that checks for detected objects and announces them using the Speech Synthesis skill.

Here's a revised approach that you can try:

  1. Ensure TinyYolo and Camera are Active: Make sure that the TinyYolo skill and the camera are running and that object detection is working as expected.

  2. Check for Detection Events: Instead of looking for global variables, you can use the detection events that are triggered when an object is detected. These events can be used to trigger a script.

  3. Create a Script for Detection Events: In ARC, you can create a script that is triggered by the detection event. This script can use the Say command to announce the detected object.

  4. Use the On Detected ControlCommand: The TinyYolo skill may have a ControlCommand that is called when an object is detected. You can use this command in your script to trigger speech when an object is detected.

Here's an example of how you might set up the script:

DefineArray($detectionArray, 0)

:CheckDetection
$detectionArray = GetDetectionArray()

if (GetSize($detectionArray) > 0)
  $detectedObject = $detectionArray[0]
  Say("I see a " + $detectedObject)
endif

Sleep(1000) # Wait for a second before checking again
Goto(CheckDetection)

In this example, GetDetectionArray() is a placeholder for the actual function or command you would use to retrieve the array of detected objects from TinyYolo. You would need to replace this with the correct command based on the TinyYolo skill's documentation or output.

If the TinyYolo skill does not provide a straightforward way to access the detected objects, you may need to reach out for further assistance or look for alternative methods to achieve your goal.

For more detailed guidance, you can refer to the TinyYolo skill documentation or explore the support articles on the Synthiam website: https://synthiam.com/Support.


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
USA
#62   — Edited

Thanks anyway, no luck, maybe you can't use audio with the TinyYolo2 skill and a camera.

in global variables, there are no variables for TinyYolo2

PRO
Synthiam
#63   — Edited

TinyYolo uses the camera device - it won't work without the camera device. If you scroll to the top of this page, you can read the manual for TinyYolo. The variables that are detected are camera object variables. You can use the variable watcher to see them. More information about the camera variables can be found on the Camera Device robot skill page. This page is for the TinyYolo robot skill.

Any robot skill can speak if a script can be added to it. To learn how to make a robot skill speak, look in the support section for your respective programming language (EZ-Script, JavaScript, Python). For example, in JavaScript, you would type Audio.say("Hello there");.

If you want the robot to speak when it detects with TinyYolo, you would put code in the Camera Device Tracking Start Script. This is explained above in this manual. For more information about the variables and scripts of the Camera Device, the camera device manual is here: https://synthiam.com/Support/Skills/Camera/Camera-Device?id=16120

User-inserted image