Tiny Yolo2

by Synthiam

Object detection is fundamental to computer vision: recognizing what objects are inside the robot's camera view and where they are in the image.

Requires ARC v13 (Updated 11/26/2023)

How to add the Tiny Yolo2 robot skill

  1. Load the most recent release of ARC (Get ARC).
  2. Press the Project tab from the top menu bar in ARC.
  3. Press Add Robot Skill from the button ribbon bar in ARC.
  4. Choose the Camera category tab.
  5. Press the Tiny Yolo2 icon to add the robot skill to your project.

Don't have a robot yet?

Follow the Getting Started Guide to build a robot and use the Tiny Yolo2 robot skill.

How to use the Tiny Yolo2 robot skill

Object detection is fundamental to computer vision: recognizing the objects in the robot's camera view and where they are in the image. This robot skill attaches to the Camera Device robot skill to obtain the video feed for detection.

Demo

Directions

  1. Add a Camera Device robot skill to the project.

  2. Add this robot skill to the project. Check the robot skill's log view to ensure the model has loaded correctly.

  3. START the Camera Device robot skill so it displays a video stream.

  4. By default, the Tiny Yolo skill does not actively detect objects. Check the "Active" checkbox to begin processing the camera's video stream.

  5. Detected objects use the Camera Device robot skill features: the On Tracking Start script will execute when objects are detected, and the $CameraObject_____ variables will be populated. Check the Camera Device robot skill page for a list of camera variables.

Camera Device Integration

This robot skill integrates with the Camera Device by using its tracking features. If servo tracking is enabled, this robot skill will move the servos. Because it is an extension of the camera robot skill, the On Tracking Start script will execute and the camera device variables will be populated when objects are tracked.
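For example, here is a minimal sketch of what a Camera Device On Tracking Start script could look like in JavaScript. Audio.say() is the speech call mentioned later on this page; getVar() and the $CameraObjectName variable name are assumptions to verify against the Camera Device manual and ARC's variable watcher.

// Hedged sketch of a Camera Device "On Tracking Start" script (JavaScript).
// getVar() and $CameraObjectName are assumptions -- confirm the actual
// variable names in ARC's variable watcher before relying on them.
var detectedName = getVar("$CameraObjectName");
if (detectedName != null) {
  Audio.say("I am tracking a " + detectedName);
}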

Performance

At HD webcam resolution, Tiny Yolo processes 30+ FPS at roughly 38% CPU usage, sometimes more, depending on your PC's processor.

User-inserted image

Variables

The detected objects are stored in the global variable array provided by the camera robot skill. The number of detected objects determines the size of the array, and each detected object's location, confidence, and name are stored in variables. As noted above, the On Tracking Start script will execute when objects are detected and the $CameraObject_____ variables will be populated; check the Camera Device robot skill page for the full list of camera variables.
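As a rough sketch (not the official API), a JavaScript script could poll these variables once per second and print what the skill currently sees. The names $CameraIsTracking and $CameraObjectName are assumptions borrowed from the example scripts later on this page; confirm them in the variable watcher.

// Hedged polling sketch -- the variable names are assumptions; check the
// Camera Device robot skill page for the authoritative list.
while (true) {
  if (getVar("$CameraIsTracking") == 1) {
    print("Detected: " + getVar("$CameraObjectName"));
  }
  sleep(1000); // check once per second
}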

User-inserted image

Trained Objects

The Tiny Yolo robot skill includes an ONNX model with 20 trained object classes: "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"

The ImageNetData is trained at an image resolution of 416x416 because it uses the TINY model; regular-sized models are trained at 608x608.

ToDo

  • Control commands for starting and stopping
  • Ability to specify custom training model files

ARC Pro

Upgrade to ARC Pro

Don't limit your robot's potential – subscribe to ARC Pro and transform it into a dynamic, intelligent machine.

PRO
USA
#1   — Edited

It sees a person, but all other objects like cup, glass, glasses are no good.

Will try again.

User-inserted image

PRO
Synthiam
#2  

Interesting - I handle that error in the latest release - but don't know what is causing it yet

PRO
USA
#3   — Edited

Wait wha...???? That is so fast! Will it eventually have a bounding box to display which object it's capturing?

PRO
Synthiam
#4   — Edited

Ya - in the to-do section there’s a list

PRO
USA
#5  

hahah i see it now..doh..

PRO
Synthiam
#6  

I do prefer ptp's solution though - his is more elegant so far i think

PRO
USA
#7  

When I open it now, the Tiny Yolo2 log says:

Loading models: C:\ProgramData\ARC\Plugins\19a75b67-c593-406c-9789-464aa3ba998b\models\TinyYolo2_model.onnx
Set Configuration: Error initializing model: Microsoft.ML.OnnxRuntime.OnnxRuntimeException: [ErrorCode:RuntimeException] Exception during initialization: bad allocation
   at Microsoft.ML.OnnxRuntime.NativeApiStatus.VerifySuccess(IntPtr nativeStatus)
   at Microsoft.ML.OnnxRuntime.InferenceSession.Init(String modelPath, SessionOptions options)
   at Microsoft.ML.OnnxRuntime.InferenceSession..ctor(String modelPath)
   at Microsoft.ML.Transforms.Onnx.OnnxModel..ctor(String modelFile, Nullable`1 gpuDeviceId, Boolean fallbackToCpu, Boolean ownModelFile, IDictionary`2 shapeDictionary)
   at Microsoft.ML.Transforms.Onnx.OnnxTransformer..ctor(IHostEnvironment env, Options options, Byte[] modelBytes)

PRO
Synthiam
#8  

Looks like your computer ran out of memory when loading and parsing the model. Try rebooting and using the skill again.

#9  

This looks interesting will give it a try!

PRO
USA
#10   — Edited

I tried it again; it only sees "a person" and thinks my iPhone is a tv-monitor :-)

Also, the box that pops up is to the left of the "person" it sees - the person is not in the box.

Will keep trying.

#11  

V8 is not working for me. I get this message and no scrolling of detected items.

User-inserted image

PRO
Synthiam
#12  

You'll have to press the ACTIVE button - and an active camera will need to be running. You can see the instructions in the video above :)

#13  

I do have an active camera running and the ACTIVE button is depressed as in the pic above. Followed the vid to a T. That is all I ever see in the plugin. I do notice in the variable watcher that it is detecting and the variable changes when it detects me or something else, but the plugin never changes.

PRO
USA
#14  

DJ

I tried it again; it only sees "a person" and thinks my iPhone is a tv-monitor :-)

Also, the box that pops up is to the left of the "person" it sees - the person is not in the box.

PRO
Synthiam
#15   — Edited

Fixed the boxes around the detected objects

Perry, the plugin doesn't change or display anything. The objects detected are in the camera view and the variable manager. They're not in the display of the plugin. Would it help if they were also listed in the plugin display for you to see in there as well?

PRO
Synthiam
#16  

Perry, I added a DEBUG checkbox that will display debug information about detected objects in the plugin, as I think that's what you'd like to see? The detected object will also be highlighted in the camera device.

#17  

Hi DJ. I noticed the boxes that appear around detected objects onscreen. The text was tiny and unreadable in the highest resolution I was previously stuck with. I had expected to see the plugin updating with the detected objects because that is the way it works in the video.

PRO
Synthiam
#18  

Does the new debug work for you? I can make it so the debug value saves with the project configuration as well - if it's something you'd like to see on every frame

PRO
USA
#19   — Edited

Are the pictures captured and stored in a file on your laptop, like PTP's DarkNet Yolo stores its captured files?

The reason I am asking is that I like deleting the captured files to save storage on my laptop - I do delete the DarkNet Yolo files/pictures in the C:...Documents\YOLO.Plugin\Captures folder.

thanks

EzAng

PRO
Synthiam
#20  

There are no files saved. Do not delete any files.

PRO
USA
#21  

got it

thanks

EzAng

PRO
Netherlands
#22  

Any chance that I can load my own Yolo or ONNX trained neural network in this skill?

PRO
Synthiam
#23  

Yeah, you totally can. There's a folder that contains the data file. It'll be in the plugin folder, which should be something like...

Quote:

C:\ProgramData\ARC\Plugins\19a75b67-c593-406c-9789-464aa3ba998b

I can quickly change the robot skill to allow you to select your own data file if you'd like. I don't think that exists in this.

PRO
Synthiam
#25  

It'll be easier if we modify the robot skill to allow loading of custom files. That way, you can test it and have full control of your environment.

PRO
USA
#26  

Any chance of support for YOLO NAS? It is incredible!

PRO
Canada
#27  

Looks like the trained model for YOLO NAS is for non-commercial use only. I wonder if that means Synthiam would have to license it or provide users a way to import the trained model into Synthiam after the plugin was installed.

PRO
Synthiam
#28   — Edited

It wouldn't matter about the license because robot skills are not part of the ARC software - there is no commercial use of robot skills. They redistribute technology, not a product - which is why we have "plugin" robot skills, because they're add-ons and not part of the ARC software. If they were included in ARC's installation and/or required a subscription, then it would fall under commercial use.

*edit: I looked around a bit about YOLO-NAS - not a lot of info about it yet. It looks to be specific to OpenCV, and we looked into an OpenCV wrapper for ARC before, but I don't think it'll be very useful to anyone without advanced programming skills. I'd have to put some thought into how to make that available to people who don't want to write a bunch of single-use code hmmm

PRO
Netherlands
#29  

Was this one made available? "It'll be easier if we modify the robot skill to allow loading of custom files. That way you can test it and have full control of your environment." Then I will test it.

PRO
Synthiam
#30  

You can load custom files with this one. Give it a shot. Is the yolo-nas just a different data file?

PRO
USA
#31   — Edited

I jumped too fast at posting. Watched a video, and it is VERY new. You can play around with training models in Google Colab and develop in Roboflow. I think it's an early release, but really so good at object detection. A portion of Yolo is used in Tesla cars.

PRO
Canada
#32  

Tiny Yolo doesn't like me today. Blank project: Camera, IoTiny, Tiny Yolo. See the error below; Darknet Yolo is not playing very nicely either. @athena any ideas?

AI is growing exponentially, but I feel vision is being left in the dark. We were supposed to have self-driving cars, maps that could geolocate from a single photo, self-checkouts that could scan unbarcoded items like fruits and vegetables, and thousands of other vision recognition use cases. There seems to have been a lot of focus on invasive privacy detection, where we can recognize a person wearing a mask or just their gait when walking, but for other applications I don't see the tools emerging. Microsoft Cognitive vision still can't recognize lots of basic objects (has it improved at all in 5 years?). Yolo may be fast and look impressive, but it has a very limited library it can recognize. Based on other advancements in AI, vision recognition should be at the stage where we can identify an image from just a small section of that image and then generate a complete image from that small amount of data. Sadly, we still can't even identify a lot of basic items.

Loading models: C:\ProgramData\ARC\Plugins\19a75b67-c593-406c-9789-464aa3ba998b\models\TinyYolo2_model.onnx... Done.
System.InvalidOperationException: Splitter/consolidator worker encountered exception while consuming source data ---> Microsoft.ML.OnnxRuntime.OnnxRuntimeException: [ErrorCode:Fail] bad allocation
   at Microsoft.ML.OnnxRuntime.NativeApiStatus.VerifySuccess(IntPtr nativeStatus)
   at Microsoft.ML.OnnxRuntime.InferenceSession.RunImpl(RunOptions options, IntPtr[] inputNames, IntPtr[] inputValues, IntPtr[] outputNames, DisposableList`1 cleanupList)
   at Microsoft.ML.OnnxRuntime.InferenceSession.Run(IReadOnlyCollection`1 inputs, IReadOnlyCollection`1 outputNames, RunOptions options)
   at Microsoft.ML.OnnxRuntime.InferenceSession.Run(IReadOnlyCollection`1 inputs, IReadOnlyCollection`1 outputNames)
   at Microsoft.ML.OnnxRuntime.InferenceSession.Run(IReadOnlyCollection`1 inputs)
   at Microsoft.ML.Transforms.Onnx.OnnxTransformer.Mapper.UpdateCacheIfNeeded(Int64 position, INamedOnnxValueGetter[] srcNamedOnnxValueGetters, String[] activeOutputColNames, OnnxRuntimeOutputCacher outputCache)
   at Microsoft.ML.Transforms.Onnx.OnnxTransformer.Mapper.<>c__DisplayClass12_0`1.b__0(VBuffer`1& dst)
   at Microsoft.ML.Data.DataViewUtils.Splitter.InPipe.Impl`1.Fill()
   at Microsoft.ML.Data.DataViewUtils.Splitter.<>c__DisplayClass7_1.b__2()
   --- End of inner exception stack trace ---
   at Microsoft.ML.Data.DataViewUtils.Splitter.Batch.SetAll(OutPipe[] pipes)
   at Microsoft.ML.Data.DataViewUtils.Splitter.Cursor.MoveNextCore()
   at Microsoft.ML.Data.RootCursorBase.MoveNext()
   at Microsoft.ML.Data.ColumnCursorExtensions.d__4`1.MoveNext()
   at System.Linq.Enumerable.WhereSelectEnumerableIterator`2.MoveNext()
   at System.Linq.Enumerable.ElementAt[TSource](IEnumerable`1 source, Int32 index)
   at Tiny_Yolo2.yolo.yoloService.processThread() in C:\My Documents\SVN\Developer - Controls\In Production\Tiny Yolo2\MY_PROJECT_NAME\yolo\yoloService.cs:line 125

PRO
Canada
#33  

It appears that you are encountering a memory allocation issue with the Tiny Yolo2 model. This could be due to insufficient system resources or other limitations on your machine. Here are a few steps you can try to resolve the issue:

  1. Restart your computer: This can free up memory and system resources that may be tied up by other processes.

  2. Close other applications: Make sure to close any unnecessary applications that may be consuming memory and processing power.

  3. Update ARC: Ensure that you are running the latest version of ARC, as updates often include bug fixes and performance improvements.

  4. Check system requirements: Verify that your computer meets the minimum system requirements for running ARC and the Tiny Yolo2 model. You may need a machine with more memory or a better GPU.

  5. Reduce camera resolution: Lowering the resolution of the camera feed can reduce the amount of memory required for processing the video stream.

  6. Check for updates to the Tiny Yolo2 skill: Sometimes, updates to robot skills can address issues like the one you're experiencing.

If you continue to have trouble after trying these steps, you may want to reach out for further assistance through the Synthiam support section at https://synthiam.com/Support. There, you can find additional resources and potentially contact support for more personalized help.


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
Unknown Country
#34  

I think @athena is correct about the memory/resources for your PC. One of the most significant programs could be Chrome or another web browser, which usually takes many gigabytes of RAM. I tested the robot skill, and it takes 700 MB max in ARC.

PRO
Canada
#35  

It's great to hear that you've tested the robot skill and have an idea of its memory usage. If you're experiencing issues with the Tiny Yolo2 model or any other robot skill, it's always a good idea to monitor your system's resource usage to ensure that there is enough memory and processing power available for ARC to operate smoothly. As mentioned earlier, closing unnecessary applications and browser tabs can help free up resources. If you have any further questions or need assistance with ARC or robot skills, feel free to reach out for support.


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
Canada
#36   — Edited

I am using a desktop PC with 32 GB and 12 cores, with only ARC running. I had added Yolo to the InMoov project and it produced an error. I closed ARC, opened a blank project with just Yolo and the camera, and also got the error. Not sure if there is some type of memory/resource leak. Anyway, after Athena answered I rebooted, again loaded a blank project with cam and Yolo, tested it, and it worked. So once again I loaded the full EZ-Robot InMoov project, added Yolo, and it produced the same error.

I tried in a VM on my Mac (Intel/Parallels, 8 GB, 4 cores assigned): same issue. I haven't fired up another Windows PC yet, but I will try to test Yolo with just the cam and then with some other plugins added tomorrow.

PRO
Canada
#37  

I just tried again on a 3rd computer, @athena - this time I used a Rock Pi X from your supported computer list. Only Yolo2 and Camera were loaded, no other apps. This time it errors on the start of Yolo2:

Loading models: C:\ProgramData\ARC\Plugins\19a75b67-c593-406c-9789-464aa3ba998b\models\TinyYolo2_model.onnx... Set Configuration: The type initializer for 'Microsoft.ML.OnnxRuntime.NativeMethods' threw an exception.

PRO
Canada
#38  

@athena read the above post

PRO
Canada
#39  

It seems like you're encountering a different error on the Rock Pi X, which is related to the initialization of the Microsoft ML OnnxRuntime. This could be due to a compatibility issue with the OnnxRuntime and the hardware or operating system on the Rock Pi X. Here are a few steps you can try:

  1. Check for ARC updates: Make sure you have the latest version of ARC installed, as updates may include compatibility fixes.

  2. Install Visual C++ Redistributable: The OnnxRuntime may require the Visual C++ Redistributable for Visual Studio 2015, 2017, and 2019. You can download it from the official Microsoft website.

  3. Check the OnnxRuntime compatibility: Ensure that the OnnxRuntime is compatible with the operating system on your Rock Pi X. You may need to look for any specific versions or requirements for ARM-based systems if that's what you're using.

  4. Reinstall the Tiny Yolo2 skill: Try removing and then reinstalling the Tiny Yolo2 skill to ensure that all components are correctly installed.

  5. Check for missing dependencies: There may be additional dependencies required by the OnnxRuntime that are not present on your system. Check the documentation for the OnnxRuntime to see if there are any other required components.

If you continue to experience issues after trying these steps, you may want to reach out for further assistance through the Synthiam support section at https://synthiam.com/Support. There, you can find additional resources and potentially contact support for more personalized help.


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
Synthiam
#40  

The Rock Pi is supported for ARC / not verified for the 700 third-party robot skills.

The Rock Pi probably won't run that robot skill because it doesn't have enough GPU. You can look up the error on Google to understand more.

I'm imagining the project combined with this robot skill is using too many resources. There's a limit per application - so your 47473737gb of ram is irrelevant to that :). I guess this is why most tensor DLL stuff runs in separate application domains. Each robot skill does run in a separate thread, but it's not possible for each to be a separate application domain.

I'd look into an alternative solution, or maybe consider making the robot skill a stand-alone application that talks to ARC via some protocol.

Or maybe a smaller image model? That would be a good solution.

PRO
Synthiam
#41  

I'll also take a look and see if there's an update to Onnx which might be helpful.

PRO
Synthiam
#42  

I dunno - I tested it with the advanced InMoov head project and it works as well - it even found my bicycle! This computer only has 16gb, Windows 10. Do you have virus scanners and spam testers and registry cleaners and a bunch of free "helper" utilities running in the taskbar? :)

User-inserted image

PRO
Canada
#43  

I just formatted an Intel NUC, installed Windows 10 and the latest ARC, and loaded Cam and Yolo2 and nothing else. 8GB RAM.

Loading models: C:\ProgramData\ARC\Plugins\19a75b67-c593-406c-9789-464aa3ba998b\models\TinyYolo2_model.onnx... Set Configuration: The type initializer for 'Microsoft.ML.OnnxRuntime.NativeMethods' threw an exception.

I am using the IoTiny and the EZ-Robot camera, so I also tried a webcam.

Opening a beer.

User-inserted image

PRO
Synthiam
#44  

haha - well, I don't know what to tell ya! When I get time I can take a look, but it's not written by Synthiam. The Onnx ML library is a Microsoft product, so there's nothing inside that I can do. I can update the libraries to the latest NuGet packages and hope there are no breaking changes (confidence is low).

But in the meantime, it is a lot of effort for 20 trained items lol. According to the manual above, the list is... "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"

PRO
Synthiam
#45  

PS, why did they include sheep and cow and horse and train? Ugh, ppl are funny :P

PRO
Canada
#46  

Yeah, I just wanted person and book (it does book as well); then, if book, read the cover, look up the book, and provide a summary of the book. If it's a short story (kids' book), read the book. I can do it another way, but this was just to run in the background and chew up few resources, and if someone walked up and showed the robot a book, then kick in the other tools.

PRO
Canada
#47  

Maybe it doesn't do book; the Darknet one does, hmmm.

User-inserted image

PRO
Synthiam
#48  

I updated v13 to the latest Onnx libraries - which, as I predicted, broke the build. They deprecated bitmap and required column-name compiler attributes... all of which weren't documented. So I'm gluing the hair I pulled out back onto my head haha

I think you're confusing the models with frameworks and their components. There are numerous machine learning frameworks that can work with the same machine learning data set. The example dataset that you're using must have "book" added. It might make sense to allow Tiny Yolo ML datasets to be added easily to this robot skill in the future. Darknet is the basis for YOLO, so the ML model you're using is different from the one included with this skill.

While the models can be specified, the labels need to be somehow imported as well. I'll have to think about how to do that and make it dynamic.

PRO
Canada
#49  

Thanks DJ :) While you have the cover off, are they not up to Yolo V7? :) I do know I need to use a local model, as I don't want to be sending stuff to the cloud every couple of seconds just to see if a 5-year-old is standing in front of the robot holding up a book.

PRO
USA
#51  

Works well.

Also, thanks for the update.

PRO
Canada
#52  

The new version works perfectly. Thanks DJ :D Yeah, that model was not trained on books, but if someone holds up a cow, we will be the first to know.

PRO
Canada
#53  

I took a look at the pre-trained data sets. The Yolo9000 is amazing: it has 9000 classes of objects that it recognizes. Not sure how hard it would be to upgrade the Yolo2 to Yolo9000 or how it performs, but with such a large class of objects it should meet most object recognition needs (even someone looking for a book :-) https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/9k.names

PRO
USA
#54   — Edited

@Athena AI

How can I use audio to hear what the camera is seeing?

PRO
Canada
#55  

To use audio to hear what the camera is seeing, you can utilize the Speech Synthesis feature in ARC to announce the names of the objects that the camera detects. Here's how you can set it up:

  1. Add a Camera Device to your project: If you haven't already, make sure you have a camera device added to your ARC project.

  2. Add the Object Detection Skill: Add an object detection skill, such as the Tiny Yolo robot skill, to your project. Ensure it's configured and active.

  3. Create a Script: Add a new EZ-Script to your project that will check the variables set by the object detection skill and use the Say or SayEZB command to announce the detected objects.

  4. Use Variables: The object detection skill will store the detected objects in global variables. You can access these variables in your script to determine which objects have been detected.

  5. Write the Script: In the script, you can write a loop or a conditional statement that checks the object detection variables and then uses the Say command to announce the objects. For example:

:Start
if ($CameraIsTracking = 1)
  Say("I see a " + $CameraObjectName)
endif
Sleep(1000)
goto(Start)
  6. Run the Script: Once your script is ready, you can run it alongside the camera and object detection skills. Whenever an object is detected, the script will use the Speech Synthesis feature to announce it.

Remember to adjust the script according to the specific variables and conditions used by the object detection skill you are using. The example provided is a basic template and may need to be modified to work with your setup.

For more detailed instructions and options for speech synthesis, you can refer to the Speech Synthesis robot skill documentation on the Synthiam website: https://synthiam.com/Support/Skills/Audio/Speech-Synthesis?id=16123

If you need to add the Speech Synthesis skill to your project, you can find it in the ARC Skill Store and add it just like any other skill.


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
USA
#56  

@Athena AI

Using TinyYolo and the Camera, is there a way to add a script to hear what the camera is seeing?

PRO
Canada
#57  

Yes, you can create a script in ARC to audibly announce the objects that the camera, in conjunction with the TinyYolo skill, is detecting. Here's a step-by-step guide on how to set this up:

  1. Add the Camera Device and TinyYolo Skill: Ensure you have both the Camera Device and the TinyYolo skill added to your ARC project and that they are properly configured.

  2. Add the Speech Synthesis Skill: If you haven't already, add the Speech Synthesis skill to your project. This skill will allow your robot to speak.

  3. Create a Script: Add a new script to your project. This script will check the variables set by the TinyYolo skill for detected objects and use the Say command to announce them.

  4. Access Detection Variables: The TinyYolo skill will store the detected objects in global variables. You can access these variables in your script to determine which objects have been detected.

  5. Write the Script: In the script, write a loop or a conditional statement that checks the object detection variables and then uses the Say command to announce the objects. For example:

:Start
if ($CameraIsTracking = 1)
  Say("I see a " + $CameraObjectName)
endif
Sleep(1000)  # Wait for a second before checking again
goto(Start)
  6. Run the Script: Execute your script while the camera and TinyYolo skills are active. When an object is detected, the script will use the Speech Synthesis skill to announce it.

Make sure to tailor the script to match the specific variables and output format of the TinyYolo skill. The example script provided is a basic template and may need to be adjusted for your specific setup.

For more information on scripting in ARC, you can refer to the scripting documentation on the Synthiam website: https://synthiam.com/Support/ARC-Scripting

If you need further assistance with scripting or setting up the skills, you can find support articles and resources on the Synthiam website: https://synthiam.com/Support


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
USA
#58  

@Athena AI

In global variables, there are no variables for TinyYolo.

PRO
Canada
#59  

To use audio feedback for object detection with TinyYolo and the Camera in ARC, you will need to create a script that checks for the presence of detected objects and then uses the Speech Synthesis skill to announce them. Here's a general approach to achieve this:

  1. Add the Camera Device and TinyYolo Skill: Ensure you have both the Camera Device and the TinyYolo skill added to your ARC project and that they are properly configured.

  2. Add the Speech Synthesis Skill: If you haven't already, add the Speech Synthesis skill to your project. This skill will allow your robot to speak.

  3. Create a Script: Add a new script to your project. This script will check the variables set by the TinyYolo skill for detected objects and use the Say command to announce them.

  4. Access Detection Variables: The TinyYolo skill will store the detected objects in global variables. You can access these variables in your script to determine which objects have been detected.

  5. Write the Script: In the script, write a loop or a conditional statement that checks the object detection variables and then uses the Say command to announce the objects. For example:

:Start
if ($CameraIsTracking = 1)
  Say("I see a " + $CameraObjectName)
endif
Sleep(1000) # Wait for a second before checking again
goto(Start)
  6. Run the Script: Execute your script while the camera and TinyYolo skills are active. When an object is detected, the script will use the Speech Synthesis skill to announce it.

Make sure to tailor the script to match the variables and output format of the TinyYolo skill. The example script provided is a basic template and may need to be adjusted for your setup.

If you are not seeing any variables related to TinyYolo in the global variables list, make sure that the TinyYolo skill is active and correctly detecting objects. The variables should be populated once the skill detects objects in the camera's view.

For more information on scripting in ARC, you can refer to the scripting documentation on the Synthiam website: https://synthiam.com/Support/ARC-Scripting

If you need further assistance with scripting or setting up the skills, you can find support articles and resources on the Synthiam website: https://synthiam.com/Support


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
USA
#60  

@Athena AI

Thanks anyway, no luck, maybe you can't use audio with the TinyYolo2 skill and a camera.

PRO
Canada
#61  

I apologize for the confusion earlier. If you're not seeing any variables related to TinyYolo in the global variables list, it's possible that the skill may not be outputting variables in a way that can be easily accessed by a script. However, you can still attempt to create a script that checks for detected objects and announces them using the Speech Synthesis skill.

Here's a revised approach that you can try:

  1. Ensure TinyYolo and Camera are Active: Make sure that the TinyYolo skill and the camera are running and that object detection is working as expected.

  2. Check for Detection Events: Instead of looking for global variables, you can use the detection events that are triggered when an object is detected. These events can be used to trigger a script.

  3. Create a Script for Detection Events: In ARC, you can create a script that is triggered by the detection event. This script can use the Say command to announce the detected object.

  4. Use the On Detected ControlCommand: The TinyYolo skill may have a ControlCommand that is called when an object is detected. You can use this command in your script to trigger speech when an object is detected.

Here's an example of how you might set up the script:

DefineArray($detectionArray, 0)

:CheckDetection
$detectionArray = GetDetectionArray()

if (GetSize($detectionArray) > 0)
  $detectedObject = $detectionArray[0]
  Say("I see a " + $detectedObject)
endif

Sleep(1000) # Wait for a second before checking again
Goto(CheckDetection)

In this example, GetDetectionArray() is a placeholder for the actual function or command you would use to retrieve the array of detected objects from TinyYolo. You would need to replace this with the correct command based on the TinyYolo skill's documentation or output.

If the TinyYolo skill does not provide a straightforward way to access the detected objects, you may need to reach out for further assistance or look for alternative methods to achieve your goal.

For more detailed guidance, you can refer to the TinyYolo skill documentation or explore the support articles on the Synthiam website: https://synthiam.com/Support.


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
USA
#62   — Edited

Thanks anyway, no luck, maybe you can't use audio with the TinyYolo2 skill and a camera.

In global variables, there are no variables for TinyYolo2.

PRO
Synthiam
#63   — Edited

TinyYolo uses the camera device - it won't work without the camera device. If you scroll to the top of this page, you can read the manual for TinyYolo. The variables that are detected are camera object variables. You can use the variable watcher to see them. More information about the camera variables can be found on the Camera Device robot skill page. This page is for the TinyYolo robot skill.

Any robot skill can speak if it can add a script. To learn how to make a robot skill speak from a script, look in the support section for your respective programming language (EZ-Script, JavaScript, Python). For example, in JavaScript, you type Audio.say("Hello there");.

If you want the robot to speak when it detects with TinyYolo, you would put code in the Camera Device Tracking Start Script. This is explained above in this manual. For more information about the variables and scripts of the Camera Device, the camera device manual is here: https://synthiam.com/Support/Skills/Camera/Camera-Device?id=16120
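For instance, a minimal hedged sketch of what could go in the Camera Device Tracking Start script (JavaScript; the $CameraObjectName variable name is an assumption to verify in the variable watcher):

// Tracking Start script sketch: announce the tracked object by name.
// $CameraObjectName is an assumption -- verify it in the variable watcher.
Audio.say("I see a " + getVar("$CameraObjectName"));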

User-inserted image