New Zealand
Asked — Edited

C# SDK vs EmguCV

Hi everyone,

I'm trying to write a GUI in C# that obtains information from a USB webcam. Since I got an EZ-B v4 not long ago, I'm aware of what the SDK can do in terms of image processing and object recognition, but I'm debating whether I should build my project on the SDK or on EmguCV. What do you guys recommend? (My project is described below.) Secondly, I've downloaded the SDK zip and had a brief look at the examples in the folders, but I'm wondering whether there are any proper tutorials/videos anywhere here, so I can properly learn the functionality of the SDK (I'm reasonably new to C#).

What I need to do for my project is this: I'll have an 8-joint linked robotic arm (that will be physically built) and an overhead USB webcam, both of which will be connected to a PC. When the GUI runs, the webcam will continuously stream video to a picturebox in the GUI. Then, when a ping pong ball is presented within the workspace of the arm, the GUI will recognise the ball and drive the arm to pick it up.

Thanks.



PRO
USA
#1  

Philip,

Inside the EZ-B SDK Windows package there is Tutorial 56 - Object Detection.

I just opened it to check the project code, and it seems the project is not finished/working. Some open questions:

  1. How to do the object learning using the SDK
  2. How to save the information/data generated from the learning process
  3. How to load the information/data previously saved

Let's wait for DJ's feedback.

#2  

EmguCV and the EZ-B SDK are two totally separate things, not related at all.

EmguCV is a .NET wrapper around OpenCV; it deals only with computer vision.

The EZ-B SDK is a framework for developing your own application to control EZ-Robot hardware (the EZ-B controller). It also includes the EZ-B camera control functions, the same ones that are included in the ARC application.

For what you want to do in your project, I cannot fathom why you would pursue developing a custom application when you can use ARC to track a ball and control the robot's movements to steer towards it and pick it up.

I highly recommend you simply use ARC first.

If that does not work, use the EZ-B SDK to develop something custom that you can't do in ARC.

If that still does not do everything you need, then use EmguCV to develop custom computer vision applications that you can link to either ARC or an EZ-B SDK application, if you really must.

New Zealand
#3  

Hi @JustinRatliff and @ptp,

Thanks for the replies. Sorry, I forgot to mention that I'm experimenting with ATmega series micros, so I'm going to build a driver circuit using one of those to drive the arm. The reason I want to build a custom program is that I'm going to work with a linked robot arm (a brief picture below of what it looks like): User-inserted image

So in order for the arm to reach the ball (with one end of the arm permanently fixed to a corner of the arena the project is going to be housed in), I think I need to implement an inverse kinematics algorithm. I'm not sure whether the SDK has this or a similar function in its library (hence I'm asking for opinions here).

#4  

@PhilipW It sounds like you'd need to code your own controller for the arm control, and you could use the UART connector on the EZ-B for communication between your arm controller and the EZ-B. Then just use ARC; no need for the SDK.

You'll need to code your arm control algorithm from scratch, either in ARC scripts or on your arm microcontroller.

New Zealand
#5  

@JustinRatliff, alright, I'll work on that first, and if I get stuck somewhere down the line I'll ask for help.

Thanks

PRO
USA
#6  

Philip,

What kind of hardware are you going to use to build your robotic arm?

New Zealand
#7  

@ptp,

The arm itself is just some 3D printed C brackets with ball bearings and servos. I'll also have some hall sensors to give extra guidance on the location of the arm with respect to the pickup and drop-off points of the ball.
I'll also have some IR sensors to indicate the location of the ball where the overhead camera can't see.

PRO
USA
#8  

What kind of servos? Digital or analog? Are you considering the EZR analog HD servos?

New Zealand
#9  

I've got some HS-645MG servo motors at home, so I'll just use those. Also, because I live in New Zealand, I try to get most things here; it takes too much time and too much overhead if I have to order everything from the States.

PRO
USA
#10  

You could use EZR HD servos; they are engineered to work with an EZ-B. What are the roles of the Arduino and the EZ-B?

New Zealand
#11  

I'm not using an Arduino, at least not at this point in time.

PRO
USA
#12  

But you mentioned an "ATmega series micro"?

New Zealand
#13  

Ahh yes, but I'll print my own PCB as well, so an Arduino is not what I'm considering.

PRO
USA
#14  

So what will your PCB do/manage?

New Zealand
#15  

At this point I have an ATmega1280, an external crystal, a Bluetooth module, and a programming header. I'm just going to connect each servo directly to a PWM pin.

I've also reserved the I2C pins, in case I want to add an LCD later.

PRO
USA
#16  

So you are building a controller; how do you plan to integrate that with an EZ-B?

ARC only works with an EZ-B, and the EZR Windows SDK is tailored to work with an EZ-B.

New Zealand
#17  

Yeah, I'm not sure. I thought I could import the SDK library into C# and use it like a standard library; that's why I'm asking for comments and opinions now. But as Justin suggested, since I already have an EZ-B v4 I might as well start there and see if I can transfer anything useful from it.

PRO
USA
#18  

If you are building a hardware controller, I don't see why you would use the EZ-B SDK in your project.

If you want to use the EZ-B as the main hardware controller, it makes sense to use all the tools: ARC, the EZ-B SDK, etc.

If you choose to go with an EZ-B, I don't see why you need an Arduino Mega plus the Bluetooth module.

PRO
Synthiam
#19  

In about 20 lines of code you could do that with an ez-b v4 and the ez-sdk. It shouldn't take longer than an hour. Need help? :)

New Zealand
#20  

@DJSures it'd be nice if you could show me how to do it with the EZ-B v4 and the EZ-SDK.

PRO
Synthiam
#21  

You bet! Here's how we'll start...

  1. Mount the camera statically on a tripod, ideally in front of the paddle arm. The camera must not move.

  2. Use enough joints in the arm to get full reach across the pong table.

Now for the code, there are two parts - and it's super easy.

Part 1: Use the ez-sdk camera to get the X/Y location of the detected object (the ball).

Part 2: Have a function that takes the X/Y object location and moves the paddle arm joints to put the paddle in the correct position to block the ball.

*Note: If you are capable of creating an ez-b v4 replica on an ATmega, then the explanation above should be easy to follow from the EZ-SDK examples. You're looking at about 30-50 lines of code.

PRO
USA
#22  

@DJ,

For future requests, and to get a status update, could you read my post #2?

PRO
Synthiam
#23  

Ignore Tutorial 56 for this discussion. Do not consider object recognition for the tracking; use color or motion instead, and I would recommend starting with color. Object detection will not be very good at detecting a "ball": the ball will most likely return false positives on other round objects, as there are no real identifying contours. Also, training on the ball would be a challenge, because you could not have your fingers in the trained image.

Again, I recommend using color tracking - and it's the fastest! Use a pink, green, or red ping pong ball.

This is a far less complicated solution than you might expect. The code is simply...


using System;
using System.Windows.Forms;

using EZ_B;             // EZ-SDK: EZB, Camera, ObjectLocation, Invokers, ...

namespace PongArm {     // hypothetical namespace name, added so the snippet compiles

  public partial class Form1 : Form {

    EZB    _ezb;        // connection to the EZ-B v4
    Camera _camera;     // EZ-SDK camera with built-in color tracking

    public Form1() {

      InitializeComponent();
    }

    private void Form1_Load(object sender, EventArgs e) {

      _ezb = new EZB();

      _camera = new Camera(_ezb);
      _camera.OnNewFrame += _camera_OnNewFrame;

      // list the available capture devices so one can be selected
      comboBox1.Items.Clear();
      comboBox1.Items.AddRange(Camera.GetVideoCaptureDevices());
    }

    private void Form1_FormClosing(object sender, FormClosingEventArgs e) {

      _camera.Dispose();
    }

    private void comboBox1_SelectedIndexChanged(object sender, EventArgs e) {

      // stream the selected device into the pnlCamera panel at 320x240
      _camera.StartCamera((ValuePair)comboBox1.SelectedItem, pnlCamera, 320, 240);
    }

    void _camera_OnNewFrame() {

      // color tracking: look for red blobs (size 20, brightness 80)
      ObjectLocation[] objectLocations = _camera.CameraBasicColorDetection.GetObjectLocationByColor(true, EZ_B.CameraDetection.ColorDetection.ColorEnum.Red, 20, 80);

      if (objectLocations.Length == 0)
        return;

      Invokers.SetAppendText(textBox1, true, objectLocations[0].ToString());

      moveArm(objectLocations[0].CenterX, objectLocations[0].CenterY);
    }

    void moveArm(int x, int y) {

      // not calibrated, nor tested. Using the scalar to map the camera X range onto the servo range
      _ezb.Servo.SetServoPositionScalar(Servo.ServoPortEnum.D0, 1, 150, 1, 320, x, false);

      // not calibrated, nor tested. Using the scalar to map the camera Y range onto the servo range
      _ezb.Servo.SetServoPositionScalar(Servo.ServoPortEnum.D1, 1, 150, 1, 200, y, false);
    }
  }
}

Now, my example really only uses a scalar and two servos for horizontal and vertical position. The toughest part of your attempt will be identifying how to relate the camera X/Y position to the number of joints in the paddle arm.

If there are quite a few joints, it will be a little more challenging, but still very doable. Because the ez-b sdk accepts servo positions as degrees, it should be easy to do with atan() or atan2() to identify the angles.
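For a simple two-link planar arm, for example, the math behind moveArm() could look something like the sketch below. This is only an illustration, not part of the EZ-SDK: ArmIk, LinkLen1 and LinkLen2 are made-up names, and the link lengths and the pixel-to-arm-space conversion would need calibrating against the real arm.

using System;

// Minimal 2-link planar inverse kinematics sketch.
// Assumes the camera X/Y has already been converted into arm-relative
// coordinates using the same units as the link lengths.
static class ArmIk {

  const double LinkLen1 = 10.0; // hypothetical shoulder-to-elbow length
  const double LinkLen2 = 10.0; // hypothetical elbow-to-paddle length

  // Returns { shoulder, elbow } angles in degrees needed to reach (x, y),
  // or null if the target is outside the arm's reach.
  public static double[] Solve(double x, double y) {

    double distSq = x * x + y * y;

    // law of cosines gives the elbow bend
    double cosElbow = (distSq - LinkLen1 * LinkLen1 - LinkLen2 * LinkLen2)
                      / (2 * LinkLen1 * LinkLen2);

    if (cosElbow < -1 || cosElbow > 1)
      return null; // unreachable target

    double elbow = Math.Acos(cosElbow);

    // shoulder: direction to the target, corrected for the bent elbow
    double shoulder = Math.Atan2(y, x)
                    - Math.Atan2(LinkLen2 * Math.Sin(elbow),
                                 LinkLen1 + LinkLen2 * Math.Cos(elbow));

    return new[] { shoulder * 180.0 / Math.PI, elbow * 180.0 / Math.PI };
  }
}

The two angles it returns could then be fed into the servo calls inside moveArm(), with additional joints handled the same way pair-by-pair or with a fuller IK solver.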

New Zealand
#24  

Hi @DJSures,

Because I'm currently still printing the parts I need to construct the arm, I thought I might as well try to get Emgu CV working to gain some experience. Please see the code below for what I've done so far with the Emgu CV framework and C#. It is working, in that it shows the back camera, and the combo box does show the list of cameras available on my PC. The only difficulty I ran into is that after selecting a different camera in the combo box, the feed didn't switch; it turns out the capture was never recreated for the new index (now handled in the SelectedIndexChanged handler below). I'll try your example, but in the meantime any other feedback on the code is very welcome.



using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;

using Emgu.CV;                  //
using Emgu.CV.CvEnum;           // usual Emgu CV imports
using Emgu.CV.Structure;        //
using Emgu.CV.UI;               //

using DirectShowLib;            // used only to enumerate the system's capture devices


namespace WebCam
{
    public partial class Form1 : Form
    {
        // member variables ///////////////////////////////////////////////////////////////////////
        Capture capWebcam;          // the active Emgu CV capture device
        private int _CameraIndex;   // DirectShow index of the currently selected camera

        public Form1()
        {
            InitializeComponent();
        }

        void processFrameAndUpdateGUI(object sender, EventArgs arg)
        {
            // grab the next frame from the active capture device
            Mat imgOriginal = capWebcam.QueryFrame();

            if (imgOriginal == null)
            {
                MessageBox.Show("unable to read frame from webcam" + Environment.NewLine + Environment.NewLine +
                                "exiting program");
                Environment.Exit(0);
                return;
            }

            // display the frame in the Emgu CV ImageBox
            ibOriginal.Image = imgOriginal;
        }

        private void ComboBoxCameraList_SelectedIndexChanged(object sender, EventArgs e)
        {
            //-> Get the selected item in the combobox
            KeyValuePair<int, string> SelectedItem = (KeyValuePair<int, string>)ComboBoxCameraList.SelectedItem;

            //-> Assign selected cam index to defined var
            _CameraIndex = SelectedItem.Key;

            //-> Dispose of the old capture and create one for the new index.
            //   Without this step the selection never takes effect, which is
            //   why switching cameras appeared to do nothing.
            if (capWebcam != null)
                capWebcam.Dispose();

            capWebcam = new Capture(_CameraIndex);
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            // For selecting which camera to use
            //-> Create a list to populate the camera combobox
            List<KeyValuePair<int, string>> ListCamerasData = new List<KeyValuePair<int, string>>();

            //-> Find the system's cameras with the DirectShow.Net dll
            DsDevice[] _SystemCameras = DsDevice.GetDevicesOfCat(FilterCategory.VideoInputDevice);

            int _DeviceIndex = 0;
            foreach (DsDevice _Camera in _SystemCameras)
            {
                ListCamerasData.Add(new KeyValuePair<int, string>(_DeviceIndex, _Camera.Name));
                _DeviceIndex++;
            }

            //-> Clear the combobox
            ComboBoxCameraList.DataSource = null;
            ComboBoxCameraList.Items.Clear();

            //-> Bind the combobox (this also fires SelectedIndexChanged for the first item)
            ComboBoxCameraList.DataSource = new BindingSource(ListCamerasData, null);
            ComboBoxCameraList.DisplayMember = "Value";
            ComboBoxCameraList.ValueMember = "Key";


            //-> Open the camera for the video feed if the binding hasn't already done so
            try
            {
                if (capWebcam == null)
                    capWebcam = new Capture(_CameraIndex);
            }
            catch (Exception ex)
            {
                MessageBox.Show("unable to read from webcam, error: " + Environment.NewLine + Environment.NewLine +
                                ex.Message + Environment.NewLine + Environment.NewLine +
                                "exiting program");
                Environment.Exit(0);
                return;
            }

            Application.Idle += processFrameAndUpdateGUI;       // process a frame whenever the UI is idle
        }
    }
}
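For reference, a sketch of how the ball-finding step might look in Emgu CV once the stream is working. This is only a sketch (assuming Emgu CV 3.x): BallFinder is a made-up helper, and the HSV bounds for an orange ball and the minimum-size cutoff are placeholder values that would need tuning.

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

static class BallFinder
{
    // Returns the centre of the largest blob matching the colour range,
    // or null if nothing suitable is found.
    public static Point? FindBall(Mat frame)
    {
        using (Mat hsv = new Mat())
        using (Mat mask = new Mat())
        {
            // threshold in HSV, which is more robust to lighting than BGR
            CvInvoke.CvtColor(frame, hsv, ColorConversion.Bgr2Hsv);
            CvInvoke.InRange(hsv,
                new ScalarArray(new MCvScalar(5, 100, 100)),    // placeholder lower bound
                new ScalarArray(new MCvScalar(25, 255, 255)),   // placeholder upper bound
                mask);

            using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
            {
                CvInvoke.FindContours(mask, contours, null,
                    RetrType.External, ChainApproxMethod.ChainApproxSimple);

                // keep the biggest blob; smaller ones are likely noise
                double bestArea = 0;
                Rectangle best = Rectangle.Empty;

                for (int i = 0; i < contours.Size; i++)
                {
                    double area = CvInvoke.ContourArea(contours[i]);
                    if (area > bestArea)
                    {
                        bestArea = area;
                        best = CvInvoke.BoundingRectangle(contours[i]);
                    }
                }

                if (bestArea < 50) // hypothetical minimum-size cutoff
                    return null;

                return new Point(best.X + best.Width / 2, best.Y + best.Height / 2);
            }
        }
    }
}

Calling BallFinder.FindBall(imgOriginal) from inside processFrameAndUpdateGUI() would then give an X/Y to hand to the arm code, similar to what the EZ-SDK example above produces.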


PRO
Synthiam
#25  

I literally provided you complete working code with object tracking and everything :) Did you not see it?

All you have to do is copy and paste my code into your project, and the ONLY piece that you need to edit is the moveArm() method for moving the servos. That's it; the entire project is done and right there in this thread... object tracking and everything.

New Zealand
#26  

@DJSures, I just saw it.

PRO
Synthiam
#27  

Okay, wicked - so all you have to do is fill in the math for the servos to move the paddle into the X/Y position. That goes in moveArm()...

I built a little tiny prototype with bits for fun to test with....

User-inserted image

User-inserted image

New Zealand
#29  

@JustinRatliff, when you brought up the idea of paddlebots, I had this image of two bots playing that classic arcade game called Pong. Don't know why.

@DJSures, I'll work on that over the weekend and keep updating and/or posting if I run into complications.

PRO
Synthiam
#30  

Okay, now that's awesome - computer vs. computer with real-life Pong? I like it!

Philip, the toughest part will be filling in the code for moveArm(), because you will need to calculate how to move the servo joints to reach each X/Y relative coordinate.

Lastly, if you find the object isn't tracking "perfectly", the size and brightness can be adjusted with the GetObjectLocationByColor() parameters. I would recommend adding trackbars to configure those two values in real time.
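A minimal sketch of that tuning idea, assuming two TrackBar controls dropped on the form and wired to these Scroll handlers (trackBarSize and trackBarBrightness are made-up names), and assuming, per the (20, 80) values in the earlier example, that those two parameters are the minimum size and brightness:

  int _minSize = 20;        // updated from the size trackbar
  int _minBrightness = 80;  // updated from the brightness trackbar

  private void trackBarSize_Scroll(object sender, EventArgs e) {

    // cache the value so the camera thread never touches the control directly
    _minSize = trackBarSize.Value;
  }

  private void trackBarBrightness_Scroll(object sender, EventArgs e) {

    _minBrightness = trackBarBrightness.Value;
  }

  void _camera_OnNewFrame() {

    // same call as the earlier example, but with live-tunable values
    ObjectLocation[] objectLocations = _camera.CameraBasicColorDetection.GetObjectLocationByColor(
        true, EZ_B.CameraDetection.ColorDetection.ColorEnum.Red, _minSize, _minBrightness);

    if (objectLocations.Length == 0)
      return;

    moveArm(objectLocations[0].CenterX, objectLocations[0].CenterY);
  }

Caching the values in the Scroll handlers avoids reading WinForms controls from the camera's frame callback thread.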