You Only Look Once (YOLO) is a state-of-the-art, real-time object detection system. This plugin uses Tiny YOLOv3, a very small model intended for constrained environments (CPU only, no GPU).

Manual
Darknet YOLO website:
Requirements:
You only need a camera control; the detection is done offline (no cloud services).
1) Start the camera.
2) Check the Running checkbox.
The detection runs continuously. When the detection results change, an On Changes script is executed (see the configuration area):
1) Press Config.
2) Edit the On Changes script.
3) The On Changes JavaScript script runs with the new results.
You can also run the detection on demand from JavaScript:Code:
controlCommand("Darknet YOLO", "Run");
The above command runs the configured on demand script.
An example script:Code:
var numberOfRegions=getVar('$YOLONumberOfRegions');
if (numberOfRegions==0)
{
Audio.sayWait('No regions found');
}
else
{
Audio.sayWait('Found ' + numberOfRegions + ' regions');
var classes = getVar('$YOLOClasses');
var scores = getVar('$YOLOScores');
for(var ix=0; ix < numberOfRegions; ix++) {
Audio.sayWait('Found ' + classes[ix] + ' with score: ' + (scores[ix]*100) + '%');
}
}
Please download the following file:
And copy to the plugin folder:Code:
Expected plugin folder content:Code:
Intriguing item and website.
I have a https://pixycam.com/
EzAng
And - this is amazing
35.5 Mb
Thanks DJ!
Indeed a strange/dark name, "Darknet", for an AI framework. There are also nightmares: https://pjreddie.com/darknet/nightmare/
It's analogous to Google's DeepDream project, so they played with the words: dream... nightmare.
The PixyCam is useful to pair with an underpowered microcontroller, e.g. an 8-bit Arduino. If you are using ARC, the camera control has many more features, plus you have additional CPU power.
@Fxrtst:
I will do it soon, the plugin requires additional TLC
Unfortunately not great, there is a reason why NVIDIA GPUs cost a few $$$$
it's important to explain what the plugin does.
Frameworks:
Darknet is an open-source neural network framework written in C and CUDA, similar to TensorFlow and Caffe, although it is mainly used for object detection and has a different architecture. These frameworks include both the training and inference processes. You can run both without CUDA (CPU only), but be prepared for a huge performance difference.
Datasets:
To train a neural network you need input (data), e.g. images, sounds, etc.
This plugin ships with a model trained on the following dataset: https://cocodataset.org/ one of the biggest publicly available.
Each dataset requires additional metadata: labels, categories and image optimization (resizing, image filters), so creating one is not an easy task. Each dataset contains specific categories, e.g. people, birds, dogs; COCO has 80 categories, while most other datasets have fewer than 30.
Model:
The model is the output of training on a dataset. Models are not interchangeable between frameworks.
You'll find COCO models for TensorFlow, YOLO, etc. Training takes time, a huge amount of time if you don't have GPU power. Although the models are framework specific, you can convert between them (there are some issues, requirements and sometimes additional scripting); for YOLO there is a tool called DarkFlow (everything is dark :)).
So YOLO detection + the COCO model (245 MB) takes almost 50 seconds (first time) to detect an image on an Intel NUC i7 with 8 GB (my machine) without a CUDA card. You can't expect FPS, only FPM (frames per minute).
I plan to test the AtomicPi (similar to the LattePanda). What can you expect from an Atom processor? We will see. Everyone agrees the game changer is the GPU, and only NVIDIA has the stuff!
To alleviate the frustration, the YOLO guys trained a tiny version of the model with a different configuration/parameters, so with the tiny version you can get some FPS. But, once again, the GPU blows away the CPU performance.
TensorFlow also released a different engine (TensorFlow Lite) plus TF Lite models, which allow you to run lite models on microcontrollers, embedded computers (Pi), mobile phones and regular CPUs.
To summarize:
The plugin ships with the tiny COCO model (35 MB). Later I'll add the possibility to download the full model (~250 MB), which is more accurate but very slow. The plugin was built without CUDA support, so it does not matter if you have a CUDA GPU.
Let's hope it can be useful running on a Latte panda.
Darknet - offline
09:32:39.250>Info>>Cleared
It states audibly, numerous times: "No regions found":
09:32:39.276>Debug>>Detection Took:71 seconds
# Regions found: 0
then:
09:32:52.701>Debug>>Detection Took:72 seconds
# Regions found: 1
..class=[person] confidence=[0.5682923] X=[106] Y=[77]
It says "a person" for me, and recognizes a cup,
but a banana is not recognized.
It works a little :-(
EzAng
EzAng
I like the idea of getting your plugin to maybe work with the LattePanda. I have the ver 1 of the Panda installed in the bartender robot and it's extremely impressive. This plugin will be a great addition to vision systems on robots.
Nice job as always!!!!
Is your version 2 up to date, or is an update coming?
works ok
EzAng
What are you up to???
This is a great plugin, I never thought that YOLO would make it into the heavenly realm of plugins!!
Great work @ptp
Thanks, 35Mb file uploaded with success.
@Smarty:
Fixed.
@All:
The model file is now included with the plugin.
I have been using DJ's Train Vision Object By Script = works well also
thanks for all your work
EzAng
Now I have another problem. If I use "run detection only", all is fine: a perfect live video.
If I uncheck "run detection only", the video takes 15 seconds to show another frame; no smooth live video. YOLOv2 had both together: a perfect frame video and voice playback (with the non-stop playback problem). YOLOv3 now has a frame problem.
New update with minor optimizations.
Regarding the delay: before (v2), during the script execution the detection results were queued, and that was the cause of the bug, i.e. after you stopped the detection the queue was still being processed.
To solve the bug, I now stop the detection while the OnChanges script is being executed.
I presume the 15 seconds must be the delay of processing the script.
Can you add the javascript code:
to the OnChanges script
and try to see if the delay is relevant?
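The snippet referenced above was lost in the forum formatting, but the idea is a dummy OnChanges body that does no real work, so you can tell whether the delay comes from the script or from the detection itself. A minimal sketch (print is assumed here to be ARC's JavaScript console output function; it is stubbed so the snippet also runs outside ARC):

```javascript
// Stub so this snippet runs outside ARC; inside ARC, print() writes to the console.
if (typeof print === 'undefined') { var print = function (s) { console.log(s); }; }

// Dummy OnChanges body: no detection handling, just a timestamp.
// If the 15-second gap between frames remains even with this script,
// the delay is in the detection, not in the OnChanges script.
print('OnChanges fired at ' + new Date().toISOString());
```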
If you are using EZ-Script, I recommend changing to JavaScript; EZ-Script is very slow.
Post your EZ-Script if you need help converting to Javascript.
thanks again for the app, control
EzAng.
What are you up to?
EzAng
1. Without a GPU how many frames per second/ per minute are people getting with this arrangement?
2. I have a GPU on my laptop, but you said that this code won't utilize it. Is that right?
3. What about the Latte Panda, how can such a tiny machine run this CPU/GPU intensive code?
4. How do they get object classification to run so fast on products like HuskyLens? I get about 1 or 2 FPS on that device.
1)
NUC Core i7: 4 fps with a dummy JavaScript script, i.e. comments only
2)
Correct. This version does not use the GPU, and it also uses the tiny model, which is less accurate but lighter.
3)
No tests yet. I've used an Atomic Pi (similar to the LattePanda entry model), but it is running ROS. I got a new one and plan to install Windows, ARC and the plugin; I can guess the performance will be worse.
4)
I'll address in another post.
You can read more about the chip here:
https://www.seeedstudio.com/blog/2019/09/12/get-started-with-k210-hardware-and-programming-environment/
https://hackaday.com/2019/11/04/how-smart-are-ai-chips-really/
So basically it is a dual-core RISC chip with a KPU:
The K210 is not new (2019); it's from a Chinese manufacturer, and it's a good choice for IoT scenarios, i.e. no PC, with power and budget constraints. I don't like DFRobot's approach: they mentioned and advertised it as an open source product but later changed that to "to be open source later".
So if you are designing solutions for IoT and pairing with other microcontrollers it's a good choice: everything is glued together (camera on board), serial communication, product support, etc.
If you are building real robots or creating products:
1) You don't pick a Windows desktop for a robot.
2) For an embedded computer you pick ARM, not an Intel architecture. Intel's architecture is too complicated; it only works if you have a good design (more money) to accommodate battery power, heat dissipation, space and additional hardware, e.g. GPUs.
That is my opinion; it is also the reason why ROS works so well on Linux. If you need to develop a Windows driver, or something low level, it is a pain in the neck: you need to deal with all the safety protections, e.g. driver certificates and closed APIs. Also, a good portion of the CPU is used for the user interface and other user features not relevant to a robot.
ARC/Builder's user base expects easy (EZ) software to run their robots, with a friendly operating system (Windows) plus a friendly off-the-shelf controller (EZ-Robot), with no extra soldering or changes.
There was a Raspberry Pi version, now gone. It's a similar scenario for EZ-Robot controller replacements, i.e. Arduino firmwares: you can do a lot of new stuff... but it requires coding, not an easy task for most forum users, so most people will wait for Synthiam to add the required features.
Regarding TPUs, there are some low-cost solutions running on Linux, and some are being ported (fixed) to run on Windows.
Hopefully they will become more Windows friendly and, combined with LattePandas/UpBoards, will become a solution for Windows + ARC users.
I Downloaded and installed your skill and it works PERFECTLY!!! In the past I installed YOLO/Darknet and it took me 2 weeks to get it right. This time it took less than two minutes! I get 8 FPS.
I made some changes to your javascript code that works better for me. I wanted to just announce the object/objects that it sees.
Again thank you for all your hard work in getting this skill up and going!!!!!
--Thomas
Code:
https://cocodataset.org/
https://cocodataset.org/#explore
categories (80):
https://github.com/pjreddie/darknet/blob/master/data/coco.names
var numberOfRegions=getVar('$YOLONumberOfRegions');
if (numberOfRegions==0)
{
// Audio.sayWait('No regions found');
}
else
{
// Audio.sayWait('Found ' + numberOfRegions + ' regions');
//Audio.sayWait('Found ');
var classes = getVar('$YOLOClasses');
for(var ix=0; ix < numberOfRegions; ix++) {
// Audio.sayWait("I see a " + classes[ix]); // or:
Audio.sayWait(classes[ix]);
}
}
So the audio only comes out with the item it detects
EzAng
Code:
I'm guessing the detection is done in a new task (new thread)? If so, you'll have to make a copy of the bitmap if it's not being manipulated in the OnNewFrame event. Working with any camera image has to be done either in the new frame event, or a copy of itself needs to be made to work in another thread.
EzAng
Maybe it is not enough to copy the bitmap?Code:
Do you recommend another method to copy the bitmap?
An old version of ARC used to create a new bitmap and dispose of it for every frame. But that was super expensive on garbage collection. With the new method, the memory is reused; that's also why a camera image can exist in many skill controls without a ton of CPU or memory being used. It's because every instance of that bitmap actually references the same memory location, and only the screen needs to be refreshed when the memory updates.
So the solution for yours is actually quite easy: I would recommend taking the camera's bitmap and "drawing" it to a bitmap for your own detection thread. That way you can keep your own bitmap, or dispose of it however you wish, and it lives as long as you want it to.
Something like...
Code:
OR the fastest way is... you can use memcpy: get an instance of your Bitmap's BitmapData and memcpy the Camera.GetCurrentBitmapUnmanaged.Data to your bitmap's Scan0
quick search:
https://docs.microsoft.com/en-us/dotnet/api/system.drawing.bitmap.-ctor
And I'm guessing it is a shallow copy, not a deep copy, so the "shell" object is different but the byte buffer is the same.
So sooner or later it becomes an issue. I'll change the code, looking for an elegant way (less boilerplate code) to generate a deep clone.
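The shallow-versus-deep distinction here concerns the .NET Bitmap, but the pitfall is generic; a plain JavaScript illustration (not ARC-specific; an array stands in for the pixel buffer):

```javascript
// A "frame buffer" shared the way the camera bitmap memory is shared.
var buffer = [10, 20, 30];

// Shallow copy: a new wrapper object, but the SAME underlying buffer.
var shallow = { pixels: buffer };

// Deep copy: a new wrapper AND its own copy of the buffer.
var deep = { pixels: buffer.slice() };

// The "camera" overwrites the buffer with the next frame...
buffer[0] = 99;

// ...so the shallow copy silently changes (pixels[0] is now 99),
// while the deep copy still holds the old frame (pixels[0] is still 10).
```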
EDITED *** I did not see the previous post **
Thanks!
Code:
I love YOLO... it seems to work pretty well in near-dark lighting conditions too. I've been using the 80 class version. My favorite is when it recognizes my cats, plants, phones, and TVs. I don't know why, but it continues to amuse me; I think because I know I would never be able to do it without a NN. It feels like magic. I keep hoping someone in the industry will build some more YOLO-like models with a lot more classes. It seems like I read about a YOLO9000 but was never able to find anything I could use. If anyone finds a model with a lot more classes, I'd love to hear about it. I haven't tried v4 or v5 yet... I don't think anyone has published one beyond v3 that will deploy on a Movid.
It uses a worldwide database of trained stuff, and you get a description of the scene, which is neat. You can feed that into NLP for topics of the surroundings.
@Guys:
Thanks for the good words
@Martin:
Short answer: Visual Studio Community/Enterprise 2019
ARC plugins are .NET
The main plugin DLL is a Visual Studio C# .NET class project, and I have two other additional projects in C++. ARC is a 32-bit application, so when you combine low-level code (C++) or external native libraries with .NET you need to take that into consideration. Sometimes you need to compile from source and fix or tweak the open source code to use the MSFT build environment.
If you need more details my email is in the profile.
When I started playing with object detection, I wanted something to monitor a live video feed and trigger actions based on objects. All the cloud APIs have limits, so it is not feasible to use the online services.
I presume your skill has a limit cap?
Q) What is the limit (number of requests) per ARC account?
Also, the model is a tiny version optimized for CPU, so the accuracy is lower than the full models (NVIDIA GPUs).
The biggest challenge is to find models optimized for our needs. For example, I'm using this model to track the delivery man. I don't expect a train, horse, sheep, cow, elephant, bear, zebra or giraffe in the camera,
but the model supports those categories.
The solution is to train your own model with your own images. I'm capturing pictures of the delivery guy, and I want to expand the capture to the trucks.
Maybe later I can train a model to detect UPS, FedEx, USPS, DHL and Amazon trucks.
Until then... I have a trigger to alert me if an elephant arrives at the door.
For me, I am proceeding along the following path with darknet object detection:
1. I implemented Darknet as you know, and it is returning good bounding boxes and probabilities for the 80 classes. When I mentioned having a better model...I meant I wanted more classes.
2. I implemented a skill to get the robot to tell me what it sees by saying "What do you see?".
3. I am wondering if there is a way to augment DarkNet with AlexNet. To this end, I first implemented the AlexNet model (1000 classes). The problem here is that AlexNet classifies an image as a single thing, so I need to figure out an algorithm for picking subsets of an image that might contain single interesting objects. Until I figure out how to do that, AlexNet is not all that useful to me unless the bot is leaning over, staring directly at something and trying to identify or pick it up. Also, a huge number of the 1000 classes are still biology or other fairly useless classes, like you pointed out. There are a lot of alternatives to AlexNet that all have these same issues: single object and too many useless classes like species. Species aside, does anyone know a good way to segment an image so parts of it can be classified?
4. Here are the use cases I want to focus on next... more verbal questions and answers (about what is seen) like "Where is the cat?", "How far away is the cat?" (depth sensor), "How big is the cat." (some trig with distance and bounding box), "What color is the cat?" (image processing, tougher one for me), "Shoot the cat." (lasers), "Go to/chase the cat" (nav/drive), "Point at the cat." (servos), "What is next to the cat", "How many cats do you see?", "Look at the cat", "What is the cat on top of?" (table, tv, etc.) and others. You get the idea. While some of these sound challenging or error prone, almost all of these are achievable. I'd like to make a vid when I get some of these going.
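For item 3, one simple way to pick subsets of a frame for a single-object classifier is a sliding window; a sketch in plain JavaScript (window size and stride are illustrative values, not tuned):

```javascript
// Generate sliding-window crop rectangles (x, y, w, h) over a frame.
// Each rectangle can then be cropped out and fed to a single-object
// classifier such as AlexNet, keeping high-confidence results.
function slidingWindows(frameW, frameH, winW, winH, stride) {
  var boxes = [];
  for (var y = 0; y + winH <= frameH; y += stride) {
    for (var x = 0; x + winW <= frameW; x += stride) {
      boxes.push({ x: x, y: y, w: winW, h: winH });
    }
  }
  return boxes;
}

// A 320x240 frame with 160x160 windows and a stride of 80 yields 6 crops.
var boxes = slidingWindows(320, 240, 160, 160, 80);
```

A coarser stride gives fewer crops (faster) at the risk of splitting an object across windows; overlapping windows (stride < window size, as here) reduce that risk.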
I have been using this skill for some time and I am very glad that you created it! I do have a couple of questions:
1. is there any EASY way to REMOVE objects from its list? I want to remove objects that my robot will never encounter.
2. is there any way from within ARC to define new objects? I understand that defining new objects (using thousands of photos) is a very CPU intensive operation.
3. is it possible to add other DarkNet/Yolo objects from other datasets?
If you are asking to remove classes, e.g. elephants, horses, zebras, to speed up the detection: that is not possible.
ARC has a camera control that allows training new custom objects using a camera:
https://synthiam.com/Support/Skills/Camera/Camera-Device?id=16120#objectTracking
It's possible to improve the plugin to accept a custom YOLO dataset. There are a few YOLO framework implementation versions, i.e. v2, v3, v4, v5. If you have one in particular, please share the URL and I can try to see if it is compatible with the plugin.
It's not possible to use multiple datasets in a single inference process.
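Although classes cannot be removed from the model itself, unwanted detections can simply be ignored in the On Changes script. A sketch using the plugin's $YOLOClasses variable from the earlier examples (getVar and Audio are ARC APIs; they are stubbed here with hypothetical values so the logic runs standalone):

```javascript
// Stubs so this runs outside ARC; inside ARC, getVar and Audio are provided.
function getVar(name) {
  var fake = { '$YOLOClasses': ['person', 'zebra', 'cup'] };  // hypothetical detections
  return fake[name];
}
var Audio = { sayWait: function (s) { /* spoken aloud inside ARC */ } };

// Only classes the robot will actually encounter; everything else is ignored.
var wanted = ['person', 'cup', 'dog'];

var classes = getVar('$YOLOClasses');
var kept = [];
for (var ix = 0; ix < classes.length; ix++) {
  if (wanted.indexOf(classes[ix]) >= 0) {
    kept.push(classes[ix]);  // 'zebra' is filtered out here
  }
}
if (kept.length > 0) {
  Audio.sayWait('I see: ' + kept.join(', '));
}
```

This does not speed up the inference (the model still evaluates all categories), but it keeps the robot from reacting to classes you will never see.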
2. Does the ARC camera control support YOLO objects?
3. I'll take a look. Thanks!