You Only Look Once (YOLO) is a state-of-the-art, real-time object detection system. This skill uses Tiny YOLOv3, a very small model suitable for constrained environments (CPU only, no GPU).
How to add the Darknet YOLO (Obj Detection) robot skill
- Load the most recent release of ARC (Get ARC).
- Press the Project tab from the top menu bar in ARC.
- Press Add Robot Skill from the button ribbon bar in ARC.
- Choose the Camera category tab.
- Press the Darknet YOLO (Obj Detection) icon to add the robot skill to your project.
Don't have a robot yet?
Follow the Getting Started Guide to build a robot and use the Darknet YOLO (Obj Detection) robot skill.
How to use the Darknet YOLO (Obj Detection) robot skill
You Only Look Once (YOLO) is a state-of-the-art, real-time object detection system. This skill uses Tiny YOLOv3, a very small model suitable for constrained environments (CPU only, no GPU).
Darknet YOLO website: https://pjreddie.com/darknet/yolo/
Requirements: You only need a camera control; the detection is done offline (no cloud services).
- Start the camera.
- Check the Running checkbox.
Detection runs continuously; when the detection results change, an On Changes script is executed (see the configuration area):
- Press Config.
- Edit the On Changes script (JavaScript).
You can also run the detection on demand from JavaScript:
controlCommand("Darknet YOLO", "Run");
The above command runs the configured on demand script.
An example of script:
var numberOfRegions=getVar('$YOLONumberOfRegions');
if (numberOfRegions==0)
{
Audio.sayWait('No regions found');
}
else
{
Audio.sayWait('Found ' + numberOfRegions + ' regions');
var classes = getVar('$YOLOClasses');
var scores = getVar('$YOLOScores');
for(var ix=0; ix < numberOfRegions; ix++) {
Audio.sayWait('Found ' + classes[ix] + ' with score: ' + (scores[ix]*100) + '%');
}
}
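The per-frame results can also be reduced to a single highest-confidence announcement. The sketch below is illustrative, not part of the skill: the selection logic is factored into a plain function so it can run outside ARC. Inside ARC you would feed it getVar('$YOLOClasses') and getVar('$YOLOScores') and pass the result to Audio.sayWait().

```javascript
// Pick the highest-scoring detection and build a sentence for text-to-speech.
// In ARC the inputs would come from getVar('$YOLOClasses') and getVar('$YOLOScores').
function bestDetection(classes, scores) {
  if (!classes || classes.length == 0)
    return 'No regions found';
  var best = 0;
  for (var ix = 1; ix < classes.length; ix++) {
    if (scores[ix] > scores[best])
      best = ix;
  }
  return 'Found ' + classes[best] + ' with score: ' + Math.round(scores[best] * 100) + '%';
}

// Example: two regions, the cup has the higher score.
console.log(bestDetection(['person', 'cup'], [0.57, 0.81])); // → Found cup with score: 81%
```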
Due to the file size limit for plugin uploads, a file required to operate the plugin is missing.
Please download the following file:
And copy to the plugin folder:
Expected plugin folder content:
hi ptp,
Intriguing item and website.
I have a https://pixycam.com/
EzAng
What's the file size of your entire plugin package, including the .weights file?
And - this is amazing
DJ: 35.5 Mb
Thanks DJ!
Okay great - I'll have amin update the file size on the website for ya
Pedro, this is awesome. I'd love to see a video of the plugin running. How does it compare in speed to the GPU version shown on their site (Pascal Titan X)?
@Ezang:
indeed a strange/dark name, "Darknet," for an AI framework. There are also nightmares: https://pjreddie.com/darknet/nightmare/ (it's analogous to Google's DeepDream project, so they played with the words: dream... nightmare). The PixyCam is useful to pair with an underpowered microcontroller, e.g. an 8-bit Arduino. If you are using ARC, the camera control has many more features, plus you have additional CPU power.
@Fxrtst:
I will do it soon; the plugin requires additional TLC. Unfortunately it's not great; there is a reason why NVIDIA GPUs cost a few $$$$. It's important to explain what the plugin does.
Frameworks: Darknet is an open-source neural network framework written in C and CUDA, similar to TensorFlow and Caffe, although it is mainly used for object detection and has a different architecture. These frameworks handle both the training and inference processes. You can run both without CUDA (CPU only), but be prepared for a huge performance difference.
Datasets: To train a neural network you need input data, e.g. images, sounds, etc. This plugin ships with a model trained on the following dataset: https://cocodataset.org/, the biggest publicly available. Each dataset requires additional metadata: labels, categories, and image optimization (resize/image filters), so it is not an easy task to create one. Each dataset contains specific categories, e.g. people, birds, dogs; COCO has 90 categories, while most other datasets have fewer than 30.
Models: A model is the output of training on a dataset. Models are not interchangeable between frameworks; you'll find COCO models for TensorFlow, YOLO, etc. Training takes time, a huge amount of time if you don't have GPU power. Although models are framework specific, you can convert between them (with some issues, requirements, and sometimes additional scripting); for YOLO there is a tool called DarkFlow (everything is dark).
So YOLO detection with the full COCO model (245 MB) takes almost 50 seconds (the first time) to detect objects in an image on an Intel NUC i7 with 8 GB of RAM (my machine) and no CUDA card. You can't expect FPS, only FPM (frames per minute).
I plan to test the AtomicPi (similar to the LattePanda). What can you expect from an Atom processor? We will see. Everyone agrees the game changer is the GPU, and only NVIDIA has the stuff!
To alleviate the frustration, the YOLO guys trained a tiny version of the model with a different configuration/parameters, so with the tiny version you can get some FPS. But, once again, a GPU blows away CPU performance.
TensorFlow also released a different engine (TensorFlow Lite) plus TF Lite models, which allow you to run lite models on microcontrollers, embedded computers (Pi), mobile phones, and regular CPUs.
To summarize: the plugin ships with the tiny COCO model (35 MB). Later I'll add the possibility to download the full model (~250 MB), which is more accurate but very slow. The plugin was built without CUDA support, so it does not matter if you have a CUDA GPU.
Let's hope it can be useful running on a Latte panda.
I found a bug: while running in continuous mode, I don't stop the detection process, so while the On Changes script is running (text-to-speech saying the results), the detection keeps queueing results.
Debug:
Darknet - offline
09:32:39.250>Info>>Cleared
it repeatedly says aloud: no regions found:
09:32:39.276>Debug>>Detection Took:71 seconds
Regions found: 0
then:
09:32:52.701>Debug>>Detection Took:72 seconds
Regions found: 1
..class=[person] confidence=[0.5682923] X=[106] Y=[77]
it says "a person" for me, and a cup,
but a banana is not recognized.
It works a little :-(
EzAng
@ptp upload file size is now increased so you should be fine
ok, I will try it - did not get an update yet
EzAng
@ptp Got it. I won't be installing my main computer into any robot soon, as it seems the more power the better with the full YOLO version. I have a beast desktop with 32 cores and 128 GB of RAM, and I was one of the lucky ones to snag an Nvidia 3090 Ti with 24 GB of VRAM and 10,496 CUDA cores. But as I said, I won't be putting it into a robot any time soon!
I like the idea of getting your plugin to work with the LattePanda. I have version 1 of the Panda installed in the bartender robot, and it's extremely impressive. This plugin will be a great addition to vision systems on robots.
Nice job as always!!!!
Hello again, back,
Is your version 2 up to date, or is an update coming? It works OK.
EzAng
OMG.... @fxrtst What are you up to???
This is a great plugin, I never thought that YOLO will make it into the heavenly realm of plugins!! Great work @ptp
Your YOLO port is nice and perfect for my next robot generation, but the bug is annoying because the voice repeats non-stop what the cam saw. I can only stop the voice if I close the EZ-Robot software.
working on a fix / update.
@amin: Thanks, 35Mb file uploaded with success.
@Smarty: Fixed.
@All: The model file is now included with the plugin.
wow PTP you are amazing. This is excellent. FIVE STARS
Your program works well,
I have been using DJ's Train Vision Object By Script = works well also
thanks for all your work
EzAng
@ptp
Now I have another problem. If I use "run detection only," all is fine: a perfect live video. If I uncheck "run detection only," the video takes 15 seconds to show another frame; no smooth live video. YOLOv2 had both together: perfect frame video and voice playback (with the non-stop playback problem). YOLOv3 now has a frame problem.
@Smarty, New update with minor optimizations.
Regarding the delay: before (v2), during script execution the detection results were queued, and that was the cause of the bug, i.e. after you stopped the detection, the queue was still being processed.
To solve the bug, I stop the detection while the OnChanges script is being executed. I presume the 15 seconds must be the delay from processing the script.
Can you add the JavaScript code to the OnChanges script and try to see if the delay is relevant?
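A minimal way to check where the time goes is to wrap the script body in a timer. This is only a sketch using standard JavaScript (Date.now); the loop below is just a stand-in for whatever the real OnChanges script does:

```javascript
// Time how long the OnChanges body takes, to separate script-processing
// delay from detection delay.
function timeIt(body) {
  var start = Date.now();
  body(); // the real OnChanges work (e.g. Audio.sayWait calls) goes here
  return Date.now() - start; // elapsed milliseconds
}

// Stand-in body: a cheap loop instead of real speech calls.
var elapsedMs = timeIt(function () {
  var s = 0;
  for (var i = 0; i < 100000; i++) s += i;
});
console.log('OnChanges body took ' + elapsedMs + ' ms');
```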
If you are using EZ-Script, I recommend changing to JavaScript; EZ-Script is very slow.
Post your EZ-Script if you need help converting to Javascript.
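As a rough illustration of the kind of conversion involved (the EZ-Script lines in the comments are hypothetical, not taken from this thread), a simple announce step translates almost line for line. The logic is written as a function so the ARC-specific calls can be stubbed:

```javascript
// Hypothetical EZ-Script being converted:
//   IF ($YOLONumberOfRegions > 0)
//     SayWait("Found " + $YOLONumberOfRegions + " regions")
//   ENDIF
//
// JavaScript equivalent; 'say' stands in for Audio.sayWait and 'n' for
// getVar('$YOLONumberOfRegions') so the logic can run outside ARC.
function announceRegions(n, say) {
  if (n > 0)
    say('Found ' + n + ' regions');
}

var spoken = [];
announceRegions(2, function (msg) { spoken.push(msg); });
console.log(spoken[0]); // → Found 2 regions
```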
YOLO is the acronym of the phrase "you only live once" lol
thanks again for the app, control
EzAng.
In this case it actually means "You Only Look Once," of course referring to the urban slang... but it describes the way the algorithm works. Kind of a cool tagline for a sophisticated mathematical operation!!
How are you Mickey?
What are you up to?
EzAng
I have a few questions for PTP.
1) NUC Core i7: 4 FPS with a dummy JavaScript script, i.e. comments only.
2) Correct. This version does not use GPU, also is using the tiny model, less accurate but lighter.
3) No tests yet. I've used an Atomic Pi (similar to the LattePanda entry model), but it is running ROS. I got a new one and plan to install Windows, ARC, and the plugin; I can guess the performance will be worse.
4) I'll address in another post.
So basically it is a dual RISC with a KPU. The K210 is not new (2019); it's from a Chinese manufacturer and a good choice for IoT scenarios, i.e. no PC, with power and budget constraints. I don't like DFRobot's approach: they advertised it as an open-source product but later changed to "to be open source later."
So if you are designing solutions for IoT and pairing with other microcontrollers, it's a good choice: everything is glued together (camera on board), serial communication, product support, etc.
Regarding robots in general: if you plan to have an embedded computer, an operating system, and additional software, e.g. ARC or ROS, you will need additional hardware: a GPU or a TPU.
If you are building real robots or creating products:
That is my opinion; it is also the reason why ROS works very well on Linux. If you need to develop a Windows driver or something low level, it's a pain in the neck: you need to deal with all the safety protections, e.g. driver certificates and closed APIs. Also, a good portion of the CPU is used for the user interface and other user features not relevant for a robot.
ARC/Builder's user base expects easy (EZ) software to run their robots, with a friendly operating system (Windows) plus a friendly off-the-shelf controller (EZ-Robot) with no extra soldering or changes.
There was a Raspberry Pi version; it is now gone. It's a similar scenario for EZ-Robot controller replacements, i.e. Arduino firmwares: you can do a lot of new stuff, but it requires coding, not an easy task for most forum users, so most people will wait for Synthiam to add the required features.
Regarding TPUs, there are some low-cost solutions running on Linux, and some are being ported (and fixed) to run on Windows.
Hopefully they will become more Windows friendly and combined with Lattepandas / Upboards will become a solution for Windows + ARC users.
Thank you very much for your detailed answers.
ptp,
I Downloaded and installed your skill and it works PERFECTLY!!! In the past I installed YOLO/Darknet and it took me 2 weeks to get it right. This time it took less than two minutes! I get 8 FPS.
I made some changes to your javascript code that works better for me. I wanted to just announce the object/objects that it sees.
Again thank you for all your hard work in getting this skill up and going!!!!!
--Thomas
PS. Do you have a list of all of the objects that it can detect?
PPS. FEATURE REQUEST: Can you give us the coordinates of the bounding boxes, or at least the center of the bounding boxes?
categories (80): https://github.com/pjreddie/darknet/blob/master/data/coco.names
ptp: Thanks! I am looking forward to it!!!!
I like the code with a few things commented out; I highlighted it:
var numberOfRegions = getVar('$YOLONumberOfRegions');
if (numberOfRegions == 0) {
  // Audio.sayWait('No regions found');
} else {
  // Audio.sayWait('Found ' + numberOfRegions + ' regions');
  // Audio.sayWait('Found ');
  var classes = getVar('$YOLOClasses');
  for (var ix = 0; ix < numberOfRegions; ix++) {
    // Audio.sayWait('I see a ' + classes[ix]); or:
    Audio.sayWait(classes[ix]);
  }
}
So the audio only comes out with the item it detects
EzAng
I get this error during detection
I'm guessing the detection is done in a new task (new thread)? If so, you'll have to make a copy of the bitmap if it's not being manipulated in the OnNewFrame event. Working with any camera image has to be done either in the new frame event, or a copy of itself needs to be made to work in another thread.
It works here, pretty well
EzAng
@DJ:
Maybe it is not enough to copy the bitmap? Do you recommend another method to copy the bitmap?
That won't copy the bitmap - it'll create a new object wrapped around the memory of the bitmap. The bitmap memory actually never changes in ARC. The memory is allocated when the camera starts and is re-used for every frame. A new Bitmap(bitmap) will create a wrapper around the memory that's being used.
An old version of ARC used to create a new bitmap and dispose of it for every frame. But that was super expensive on garbage collection. With the new method, the memory is reused; that's also why a camera image can exist in many skill controls without a ton of CPU or memory being used. Every instance of that bitmap actually references the same memory location, and only the screen needs to be refreshed when the memory updates.
So the solution for yours is actually quite easy: I would recommend taking the Camera's bitmap and "drawing" it to a bitmap for your own detection thread. That way you keep your own bitmap, can dispose of it however you wish, and it lives for as long as you want it to.
Something like...
OR, the fastest way: use memcpy with an instance of your Bitmap's BitmapData, and memcpy the Camera.GetCurrentBitmapUnmanaged.Data to your bitmap's Data0.
DJ: quick search: https://docs.microsoft.com/en-us/dotnet/api/system.drawing.bitmap.-ctor and I'm guessing it is a shallow copy, not a deep copy, so the "shell" object is different but the byte buffer is the same. So sooner or later it becomes an issue. I'll change the code, looking for an elegant way (i.e. less boilerplate code) to generate a deep clone.
EDITED *** I did not see the previous post ** Thanks!
Here - you might find these handy... They exist in EZ_B.Camera
Very nice work, ptp. I am interested to know what tools/languages you used to build this, if you have the time. I love YOLO... it seems to work pretty well even in near-dark lighting conditions. I've been using the 80-class version. My favorite is when it recognizes my cats, plants, phones, and TVs. I don't know why, but it continues to amuse me; I think because I know I would never be able to do it without a NN. It feels like magic. I keep hoping someone in the industry will build more YOLO-like models with a lot more classes. I read about a YOLO9000 but was never able to find anything I could use. If anyone finds a model with a lot more classes, I'd love to hear about it. I haven't tried v4 or v5 yet... I don't think anyone has published one beyond v3 that will deploy on a Movidius.
Ptp did a great job - works well! If you’re interested in having a larger trained dataset, there’s a global version here: https://synthiam.com/Support/Skills/Camera/Cognitive-Vision?id=16211
it uses a worldwide database of trained stuff. And you get a description of the scene that’s neat. You can feed that into nlp for topics of the surroundings.
Hey PTP this looks amazing, will give it a try, hopefully works well when people detected coming through main door and use enhanced script to have some fun with intruders!
Sorry for the delay; I've been underwater with work.
@Guys: Thanks for the good words
@Martin:
Short answer: Visual Studio Community/Enterprise 2019. ARC plugins are .NET. The main plugin DLL is a Visual Studio C# .NET class project, and I have two other additional projects in C++. ARC is a 32-bit application, so when you combine low-level code (C++) or external native libraries with .NET, you need to take that into consideration. Sometimes you need to compile from source and fix or tweak the open-source code to work with the Microsoft build environment.
If you need more details my email is in the profile.
@DJ:
When I started playing with object detection, I wanted something to monitor a live video feed and trigger actions based on objects. All the cloud APIs have limits, so it is not feasible to use the online services. I presume your skill has a limit cap?
Q) What is the limit (number of requests) per ARC account?
Also the model is a Tiny version optimized for CPU, so the accuracy is lower than the full models (Nvidia GPUs).
The biggest challenge is to find models optimized for our needs. For example, I'm using this model to track the delivery man. I don't expect a train, horse, sheep, cow, elephant, bear, zebra, or giraffe in the camera, but the model supports those categories.
The solution is to train your own model with your images. I'm capturing pictures of the delivery guy, and I want to expand the capture to the trucks. Maybe later I can train a model to detect UPS, FedEx, USPS, DHL, and Amazon trucks.
Until then... I have a trigger to alert me if an elephant arrives at the door.
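One lightweight way to skip the unlikely categories without retraining is to filter the results in the OnChanges script. This is only a sketch; in ARC the two arrays would come from getVar('$YOLOClasses') and getVar('$YOLOScores'):

```javascript
// Keep only detections whose class is on a whitelist, so elephants and
// zebras never trigger the delivery alert.
function filterDetections(classes, scores, wanted) {
  var kept = [];
  for (var ix = 0; ix < classes.length; ix++) {
    if (wanted.indexOf(classes[ix]) >= 0)
      kept.push({ label: classes[ix], score: scores[ix] });
  }
  return kept;
}

var kept = filterDetections(
  ['person', 'elephant', 'truck'],
  [0.91, 0.55, 0.73],
  ['person', 'truck', 'car']);
console.log(kept.length); // → 2 (person and truck; the elephant is dropped)
```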
Spotting elephants here in Alabama could be useful, as it's the mascot for the University of Alabama. BTW, there is no pressure to ever answer anything from me, timely or at all. I am just thrilled at any answer on any timeframe.
For me, I am proceeding along the following path with darknet object detection:
1. I implemented Darknet as you know, and it is returning good bounding boxes and probabilities for the 80 classes. When I mentioned having a better model...I meant I wanted more classes.
2. I implemented a skill to get the robot to tell me what it sees by saying "What do you see?"
3. I am wondering if there is a way to augment Darknet with AlexNet. To this end, I first implemented the AlexNet model (1000 classes). The problem is that AlexNet classifies an image as a single thing, so I need to figure out an algorithm for picking subsets of an image that might contain single interesting objects. Until I figure out how to do that, AlexNet is not all that useful to me unless the bot is leaning over, staring directly at something, and trying to identify or pick it up. Also, a huge number of the 1000 classes are still biology or other fairly useless classes, as you pointed out. There are a lot of alternatives to AlexNet that all have these same issues: single object and too many useless classes like species. Species aside, does anyone know a good way to segment an image so parts of it can be classified?
4. Here are the use cases I want to focus on next... more verbal questions and answers (about what is seen) like "Where is the cat?", "How far away is the cat?" (depth sensor), "How big is the cat." (some trig with distance and bounding box), "What color is the cat?" (image processing, tougher one for me), "Shoot the cat." (lasers), "Go to/chase the cat" (nav/drive), "Point at the cat." (servos), "What is next to the cat", "How many cats do you see?", "Look at the cat", "What is the cat on top of?" (table, tv, etc.) and others. You get the idea. While some of these sound challenging or error prone, almost all of these are achievable. I'd like to make a vid when I get some of these going.
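On the segmentation question in point 3: a common low-tech approach is a sliding window, cropping overlapping tiles and classifying each one with the single-object model. The sketch below only computes the tile rectangles (pure coordinate math; no image library is assumed); each rectangle would then be cropped from the frame and fed to the classifier:

```javascript
// Generate overlapping square windows over a width x height image.
// Each {x, y, w, h} tile would be cropped and classified separately.
function slidingWindows(width, height, win, stride) {
  var tiles = [];
  for (var y = 0; y + win <= height; y += stride) {
    for (var x = 0; x + win <= width; x += stride) {
      tiles.push({ x: x, y: y, w: win, h: win });
    }
  }
  return tiles;
}

// 640x480 frame, 224-pixel windows (roughly AlexNet's input size), 50% overlap.
var tiles = slidingWindows(640, 480, 224, 112);
console.log(tiles.length); // → 12
```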
Hi,
I have been using this skill for some time and I am very glad that you created it! I do have a couple of questions:
It's not possible to use multiple datasets in a single inference process.
Can I port the On Changes Script over to EZ-Script, Blockly, or Python, or is only JavaScript supported for this skill?
Thomas Messerschmidt
There's a standard dialog for editing scripts - it's the same editor in all ARC scripts. You can select the language you wish to use by a tab on the top. There's more information on this page about how the script editor works and languages: https://synthiam.com/Support/Programming/code-editor/edit-scripts
Scroll to the bottom, and you can read that relevant section of the page. You can use the support section to find additional information about using ARC.
*edit: or this step of the getting started guide is quite popular: https://synthiam.com/Support/Get-Started/how-to-make-a-robot/choose-skill-level
So I assume you meant that just because the "On Changes Script" was written in JavaScript, it could just as easily have been written in the other three languages. I had assumed there was JavaScript code used that would not work in the other languages. I guess I could have tried rewriting it myself; I've been a bit overwhelmed trying to get the last two Simone articles out.
Thanks.
Yeah, that's precisely what you'll have to do. Why would you want it in another language? The JavaScript compiler is something like 100 times faster than EZ-Script.