United Kingdom

Videostream

Hi,

I would like to extend the pyJD (https://github.com/BrutusTT/pyJD) project to support video streaming for the JD Humanoid Robot. Is there any way to retrieve the camera images without using the EZB software stack?

The idea is to integrate the robot into a YARP-based experimental setup. Using information from the forum, I was able to control the servos via telnet, but now I'm stuck on accessing the camera. Unfortunately, using Windows and ARC is not an option for this setup, but I don't mind tinkering with low-level communication with the robot.

Cheers, BrutusTT



#1  

I think the EZ-B's camera is like a Wi-Fi camera, though I'm not sure whether the stream is encoded or not.

#2  

It is a custom video stream that DJ developed for performance, but he has stated in the past that he would share how to connect. I forget whether he was going to explain how to connect a custom camera to the EZ-B or how to use custom software to read from the EZ-B, though.

Alan

PRO
Synthiam
#3  

The code is pretty straightforward, so you can create a library for any application. Take a look at UniversalBot in the software section. Let me know if you have any questions.

United Kingdom
#4  

Thanks for the pointer. I found the class but have not been able to test it yet because our robot's battery died. I will let you know once I have it working after that problem is solved.

Cheers, BrutusTT

United Kingdom
#5  

Batteries are finally here :)

I had a look at EZBv4Video, which looks like what I need. However, I cannot figure out which port I need to connect to, as it seems to be neither the telnet nor the HTTP port. Scanning with nmap also did not reveal another port that I could use.

I know that in ARC you can activate a video stream port, but since I cannot use ARC I cannot flip that switch.

Is there any other method to activate the video stream port? Did I miss something?

United Kingdom
#7  

Thanks

I finally managed to fill my image buffer, starting with the EZIMG magic header. The only remaining question is:

How do I decode the buffer into something that I can save as an image or image stream? The easiest route for me would be converting the EZ image to an OpenCV image.

Any ideas where to look for the decoding?

United Kingdom
#8  

Ok, I guess I found the solution.

The buffer contains a JPEG/JFIF encoding, and after removing the first couple of bytes it's working.

In case someone wants to do it as well:

  1. Separate the data using EZIMG as the magic header.
  2. Remove the leading bytes before "FF D8", which is the start-of-image (SOI) marker of the JPEG (https://de.wikipedia.org/wiki/JPEG_File_Interchange_Format).
  3. Feed the buffer data into something that can read JPEG images (see the sketch below).
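
To illustrate, here's a minimal Python sketch of those three steps, assuming the raw stream data has already been captured into a bytes buffer (the function and variable names are mine, not from pyJD):

    import cv2
    import numpy as np

    MAGIC = b"EZIMG"
    SOI = b"\xff\xd8"                    # JPEG start-of-image marker

    def decode_frame(buffer):
        # 1. separate the data using EZIMG as the magic header
        start = buffer.find(MAGIC)
        if start == -1:
            return None
        end = buffer.find(MAGIC, start + len(MAGIC))
        chunk = buffer[start:end] if end != -1 else buffer[start:]
        # 2. drop the leading bytes before the FF D8 start-of-image marker
        soi = chunk.find(SOI)
        if soi == -1:
            return None
        # 3. feed the JPEG bytes into something that can read them, here OpenCV
        data = np.frombuffer(chunk[soi:], dtype=np.uint8)
        return cv2.imdecode(data, cv2.IMREAD_COLOR)
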
PRO
Synthiam
#9  

The header includes the image size as well, which you can use. Again, I would recommend viewing the UniversalBot code, as it's open source. I suggested it earlier.

United Kingdom
#10  

I did have a look into the UniversalBot code, but either it lacks the mentioned information or I was not able to find it. After all, I'm not a C# programmer. I got the overall idea of how to work with the video stream from the EZBv4Video class.

But I don't see the last missing piece: how the EZIMG header is constructed and how it should be read.

PRO
Synthiam
#11  

The 4 bytes immediately after the header are the length of the frame in bytes, as an unsigned 32-bit int. The good news is that C# has many similarities to C++.
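
In Python, that parse is only a couple of lines. A minimal sketch, assuming the length is little-endian (which matches what BitConverter produces on x86):

    import struct

    MAGIC = b"EZIMG"

    # the 4 bytes right after "EZIMG" hold the frame length as an unsigned 32-bit int
    def frame_length(buffer):
        assert buffer[:len(MAGIC)] == MAGIC
        return struct.unpack("<I", buffer[len(MAGIC):len(MAGIC) + 4])[0]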

PRO
Synthiam
#12  

Here are some additional details...

Header...


    readonly byte[] TAG_EZIMAGE = new byte[] { (byte)'E', (byte)'Z', (byte)'I', (byte)'M', (byte)'G' };

And here's some documentation added to the parsing loop....


while (_cnt == threadStartParams.CNT && tcpClient.Connected) {

  // Read all available bytes from the network stream
  int read = ns.Read(bufferTmp, 0, BUFFER_SIZE);

  // returning 0 bytes means the socket has disconnected
  if (read == 0)
    throw new Exception("Client disconnected");

  // add the available data to the master buffer
  bufferImage.AddRange(bufferTmp.Take(read));

  // we will use this to see where the beginning of the header is (if found)
  int foundStart = -1;

  // If the amount of data in the buffer is less than the size of a header, obviously not a valid image so continue the loop to get more data
  if (bufferImage.Count < TAG_EZIMAGE.Length)
    continue;

  // loop and find the header
  for (int p = 0; p <= bufferImage.Count - TAG_EZIMAGE.Length; p++)
    if (bufferImage[p] == TAG_EZIMAGE[0] &&
      bufferImage[p + 1] == TAG_EZIMAGE[1] &&
      bufferImage[p + 2] == TAG_EZIMAGE[2] &&
      bufferImage[p + 3] == TAG_EZIMAGE[3] &&
      bufferImage[p + 4] == TAG_EZIMAGE[4]) {

      // The header is found, so specify the start of the header
      foundStart = p;

      // Break out of the for loop because we got the header
      break;
    }

  // if we did not find a header, continue the while loop to get more data
  if (foundStart == -1)
    continue;

  // if the header is not the first byte, we're out of sync so remove all the data before the header
  if (foundStart > 0)
    bufferImage.RemoveRange(0, foundStart);

  // if the amount of data is not the length of the header + the size of an unsigned int (which contains the image length) then this isn't a complete image header so continue to get more data
  if (bufferImage.Count < TAG_EZIMAGE.Length + sizeof(UInt32))
    continue;

  // Extract the length of the image in bytes from the header
  int imageSize = (int)BitConverter.ToUInt32(bufferImage.GetRange(TAG_EZIMAGE.Length, sizeof(UInt32)).ToArray(), 0);

  // If the amount of data in the buffer is less than the frame length in bytes, continue to get more data
  if (bufferImage.Count < imageSize + TAG_EZIMAGE.Length + sizeof(UInt32))
    continue;

  // If we got this far, the data length is greater than or equal to the frame length specified in the header. Remove the header from the buffer
  bufferImage.RemoveRange(0, TAG_EZIMAGE.Length + sizeof(UInt32));

  try {

    // raise either of the assigned events with the image frame extracted from the buffer

    if (OnImageReady != null)
      OnImageReady(new Bitmap(new MemoryStream(bufferImage.GetRange(0, imageSize).ToArray())));

    if (OnImageDataReady != null)
      OnImageDataReady(bufferImage.GetRange(0, imageSize).ToArray());
  } catch (Exception ex) {

    _ezb.Log(false, "ezbv4 camera image render error: {0}", ex);
  }

  // remove the image frame from the master buffer. There may be part of another image frame, so we don't simply clear the buffer, which could lose other frames. We only remove the number of bytes specified by the image frame header.
  bufferImage.RemoveRange(0, imageSize);
}

The loop searching for the header, and the removal of the preceding bytes from the buffer, only cost CPU time for the first frame. This is because your first frame may contain only partial data, depending on where in the FIFO the connection was established. Once your code finds the header and removes the leading bytes from the buffer, it will be in sync for each consecutive frame, meaning the first byte after clearing the last frame from the buffer will be the image header...

Because once the data is synchronized with the code logic, the header will always be the first byte after the last frame, you don't need to worry about CPU activity from the FOR loop that finds the header :)
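
Putting the pieces together for a Python port (the direction pyJD is headed), here is a rough sketch of the same framing loop over a TCP socket. The buffer handling mirrors the C# above; the host and port arguments are placeholders, not confirmed values:

    import socket
    import struct

    MAGIC = b"EZIMG"
    HEADER_LEN = len(MAGIC) + 4            # magic bytes + uint32 frame length

    def frames(host, port):
        # yields raw JPEG frames from the camera stream (an untested sketch)
        sock = socket.create_connection((host, port))
        buf = bytearray()
        while True:
            data = sock.recv(4096)
            if not data:                   # reading 0 bytes means the socket closed
                raise ConnectionError("client disconnected")
            buf.extend(data)
            # extract every complete frame currently in the buffer
            while True:
                start = buf.find(MAGIC)
                if start == -1:
                    break                  # no header yet; read more data
                if start > 0:
                    del buf[:start]        # out of sync: drop bytes before the header
                if len(buf) < HEADER_LEN:
                    break                  # header incomplete; read more data
                # unsigned 32-bit little-endian length sits right after the magic bytes
                size = struct.unpack_from("<I", buf, len(MAGIC))[0]
                if len(buf) < HEADER_LEN + size:
                    break                  # frame incomplete; read more data
                yield bytes(buf[HEADER_LEN:HEADER_LEN + size])
                del buf[:HEADER_LEN + size]  # keep whatever belongs to the next frame

Each yielded chunk is a complete JPEG, so cv2.imdecode can turn it into an OpenCV image exactly as in the earlier sketch.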

United Kingdom
#13  

Ok, thanks. Got it working now. Hope I can release a new version with video soon :)