Example: Camera Custom Tracking Type
As you saw in the previous tutorial step about detecting and attaching to the camera, there are several events you can use. One of them lets you create a custom tracking type as a plugin, which is really cool!
A good reason to create a custom tracking type is to experiment with OpenCV or another vision library. Because ARC is built on .NET, we recommend the x86 NuGet install of Emgu CV (https://github.com/emgucv/emgucv). Installing from NuGet is the easiest and most convenient option.
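For reference, the install from the Visual Studio Package Manager Console looks like this (the package id shown is the one currently published on nuget.org; older tutorials may reference a differently named x86-specific package, so check which version you're targeting):

    PM> Install-Package Emgu.CV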
The camera events that we'll use for creating a custom tracking type are...
        // assign an event that raises when the camera wants to initialize tracking types
        _camera.Camera.OnInitCustomTracking += Camera_OnInitCustomTracking;
        // assign an event that raises with a new frame that you can use for tracking
        _camera.OnCustomDetection += Camera_OnCustomDetection;
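The OnInitCustomTracking event is your chance to reset any state your tracker keeps between sessions. Here's a minimal sketch; note the parameterless handler signature is an assumption on my part (the delegate isn't shown in this tutorial), so let Visual Studio generate the handler for you (press Tab twice after typing +=) to get the exact form:

        // a minimal sketch: reset tracker state when the camera (re)initializes tracking types
        // NOTE: the parameterless signature is an assumption -- generate the handler in
        // Visual Studio to get the exact delegate form from the EZ_B assembly
        private void Camera_OnInitCustomTracking() {
          _xPos = 0;
          _yPos = 0;
          _xDir = true;
          _yDir = true;
        }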
Once inside the OnCustomDetection() event, you have access to several bitmaps used throughout the flow of the detection process. They are...
// **********************
// From the EZ_B.Camera class
// **********************
    /// <summary>
    /// This is a temporary bitmap that we can use to draw on but is lost per tracking type
    /// </summary>
    public volatile Bitmap _WorkerBitmap;

    /// <summary>
    /// This is the resized original bitmap that is never drawn on. Each tracking type uses this as the main source image for tracking, and then draws on the _OutputBitmap for tracking details
    /// </summary>
    public volatile AForge.Imaging.UnmanagedImage _OriginalBitmap; // resized image that we process

    /// <summary>
    /// Image that is outputted to the display. We draw on this bitmap with the tracking details
    /// </summary>
    public volatile AForge.Imaging.UnmanagedImage _OutputBitmap;

    /// <summary>
    /// Raw image, unsized, directly from the input device
    /// </summary>
    public volatile AForge.Imaging.UnmanagedImage _RawUnsizedBitmap;

    /// <summary>
    /// Last image for the GetCurrentImage call
    /// </summary>
    public volatile AForge.Imaging.UnmanagedImage _RawUnsizedLastBitmap;
Now that you understand the images available, the two we care about for creating a tracking type of our own are...
    /// <summary>
    /// This is the resized original bitmap that is never drawn on. Each tracking type uses this as the main source image for tracking, and then draws on the _OutputBitmap for tracking details
    /// </summary>
    public volatile AForge.Imaging.UnmanagedImage _OriginalBitmap; // resized image that we process

    /// <summary>
    /// Image that is outputted to the display. We draw on this bitmap with the tracking details
    /// </summary>
    public volatile AForge.Imaging.UnmanagedImage _OutputBitmap;
This is because we can run our detection against the _OriginalBitmap, and then draw on the _OutputBitmap to show where the detection occurred.
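If the library you're experimenting with expects a System.Drawing.Bitmap rather than an AForge UnmanagedImage, convert the source frame first. A minimal sketch, using AForge's ToManagedImage (passing true copies the pixel data, so the Bitmap stays valid after the camera recycles the frame):

    // a minimal sketch: convert the source frame for a library that wants System.Drawing.Bitmap
    // ToManagedImage(true) copies the pixel data, so 'frame' outlives this camera frame
    using (System.Drawing.Bitmap frame = _camera.Camera._OriginalBitmap.ToManagedImage(true)) {
      // ... run your Emgu CV / OpenCV detection against 'frame' here ...
    }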
Example: Below is an example that fakes detection by drawing a rectangle on the _OutputBitmap that bounces around the screen. It moves with every frame in the OnCustomDetection event.
    // faking an object being tracked
    int _xPos = 0;
    int _yPos = 0;
    bool _xDir = true;
    bool _yDir = true;

    private EZ_B.ObjectLocation[] Camera_OnCustomDetection(EZ_Builder.UCForms.FormCameraDevice sender) {

      if (_isClosing)
        return new ObjectLocation[] { };

      if (!_camera.Camera.IsActive)
        return new ObjectLocation[] { };

      List<ObjectLocation> objectLocations = new List<ObjectLocation>();

      try {

        // This demonstrates how you can report that an object has been detected and draw where it is.
        // The camera control will start tracking when at least one ObjectLocation is returned.
        // We're just bouncing a fake rectangle as the detected rect, which will be displayed
        // as a tracked object on the screen in the camera device.
        if (_xDir)
          _xPos += 10;
        else
          _xPos -= 10;

        if (_yDir)
          _yPos += 10;
        else
          _yPos -= 10;

        var r = new Rectangle(_xPos, _yPos, 50, 50);

        // reverse direction when the rectangle reaches any edge of the output image
        if (r.Right > _camera.Camera._OutputBitmap.Width)
          _xDir = false;
        else if (r.Left <= 0)
          _xDir = true;

        if (r.Bottom > _camera.Camera._OutputBitmap.Height)
          _yDir = false;
        else if (r.Top <= 0)
          _yDir = true;

        var objectLocation = new ObjectLocation(ObjectLocation.TrackingTypeEnum.Custom);
        objectLocation.Rect = r;
        objectLocation.HorizontalLocation = _camera.Camera.GetHorizontalLocation(objectLocation.CenterX);
        objectLocation.VerticalLocation = _camera.Camera.GetVerticalLocation(objectLocation.CenterY);

        objectLocations.Add(objectLocation);

        AForge.Imaging.Drawing.Rectangle(_camera.Camera._OutputBitmap, r, Color.MediumSeaGreen);
      } catch (Exception ex) {
        EZ_Builder.EZBManager.Log(ex);
      }

      return objectLocations.ToArray();
    }
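To turn the fake bounce into a real detector, replace the rectangle math with actual image processing against _OriginalBitmap. Here's a minimal sketch using AForge's own ColorFiltering and BlobCounter (libraries ARC already references) rather than Emgu CV; the RGB ranges are arbitrary assumptions for a bright red object, so tune them for your target:

    // a minimal sketch of real detection using AForge's ColorFiltering + BlobCounter
    // the RGB ranges below are arbitrary values for a bright red object -- tune for your target
    private Rectangle[] DetectRedBlobs(AForge.Imaging.UnmanagedImage source) {

      var colorFilter = new AForge.Imaging.Filters.ColorFiltering();
      colorFilter.Red = new AForge.IntRange(150, 255);
      colorFilter.Green = new AForge.IntRange(0, 75);
      colorFilter.Blue = new AForge.IntRange(0, 75);

      // Apply() returns a new image, so the _OriginalBitmap source is never modified
      using (AForge.Imaging.UnmanagedImage filtered = colorFilter.Apply(source)) {

        var blobCounter = new AForge.Imaging.BlobCounter();
        blobCounter.FilterBlobs = true;
        blobCounter.MinWidth = 20;
        blobCounter.MinHeight = 20;
        blobCounter.ProcessImage(filtered);

        return blobCounter.GetObjectsRectangles();
      }
    }

Inside Camera_OnCustomDetection you'd call DetectRedBlobs(_camera.Camera._OriginalBitmap), build an ObjectLocation per returned rectangle, and draw each one on _OutputBitmap exactly as the bouncing example does.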
          
Is this out of date? There doesn't seem to be a GetConfiguration function within EZ_Builder.Config.Sub.PluginV1
Look at the tutorial step titled "Code: Saving/Loading Configuration".
The get and set configuration methods are overrides of the form. There's a great video on the first step of this tutorial that walks step by step through building a plugin. I recommend watching it, because it helps fill in any steps that were missed.
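Roughly, the overrides look like this (a sketch from memory, not the authoritative version -- see the "Code: Saving/Loading Configuration" tutorial step for the real thing):

    // a rough sketch from memory of the form overrides -- see the tutorial step for details
    public override void SetConfiguration(EZ_Builder.Config.Sub.PluginV1 cf) {
      // restore your saved values from cf here
      base.SetConfiguration(cf);
    }

    public override EZ_Builder.Config.Sub.PluginV1 GetConfiguration() {
      // store your current values before the project saves
      return base.GetConfiguration();
    }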
When you’ve done it once it makes sense, and voila, you can rinse and repeat.
Excellent, thanks - I will do. I really must learn not to just jump ahead in the process
Hey, no problem - I do it all the time, and end up frustrated because I don't know what it was that I missed. Excitement gets the best of me.
Trying to follow the tutorials but can't find where the plugin page has gone. How do I add a new plugin to the ez-robot / Synthiam site to get the XML?
Never mind. Just found the "Create skill control" link
I am trying to follow the instructions for adding my own plugin but I cannot seem to find the place to register the plugin based on the instructions.
Any help is appreciated.
Thanks
The new button to create a plugin skill control is less than an inch below the button you pressed to create this question.