Hi,
I would like to extend the pyJD (https://github.com/BrutusTT/pyJD) project to support video streaming for the JD Humanoid Robot. Is there any way to retrieve the camera images without using the EZB software stack?
The idea is to integrate the robot into a YARP-based experiment setup. From the information on the forum I was able to control the servos via telnet, but now I'm stuck on accessing the camera. Unfortunately, using Windows and EZ-Builder is not an option for this setup, but I don't mind tinkering with low-level communication for the robot.
Cheers,
BrutusTT
I had a look at EZBv4Video, which looks like what I need. However, I cannot figure out which port to connect to; it does not seem to be the telnet or HTTP port, and scanning with nmap did not reveal another port I could use.
I know that in ARC you can activate a video stream port, but since I cannot use ARC, I cannot flip that switch.
Is there any other way to activate the video stream port? Did I miss something?
I finally managed to fill my image buffer, starting with the EZIMG magic header. The only question remaining:
How do I decode the buffer into something I can save as an image or image stream? The easiest for me would be to convert the EZ image to an OpenCV image.
Any ideas where to look for the decoding?
The buffer contains a JPEG/JFIF encoding, and after removing the first couple of bytes it works.
In case someone wants to do it as well:
1. separate the data using EZIMG as the magic header
2. remove the starting bytes before "FF D8", which is the start sequence of the image, also called SOI (https://de.wikipedia.org/wiki/JPEG_File_Interchange_Format)
3. put the buffer data into anything that can read JPEG images.
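The steps above can be sketched in Python; the actual JPEG decoding (step 3) is left to whatever reader you prefer:

```python
def split_frames(stream: bytes):
    """Step 1: split the raw stream on the EZIMG magic header."""
    return [chunk for chunk in stream.split(b"EZIMG") if chunk]

def jpeg_payload(frame: bytes) -> bytes:
    """Step 2: drop everything before the JPEG SOI marker FF D8."""
    soi = frame.find(b"\xff\xd8")
    if soi < 0:
        raise ValueError("no SOI marker in frame")
    return frame[soi:]

# Step 3: hand the payload to any JPEG reader, for example:
#   import cv2, numpy as np
#   img = cv2.imdecode(np.frombuffer(payload, np.uint8), cv2.IMREAD_COLOR)
```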
But the last piece I'm missing: how is the EZIMG header constructed, and how should it be read?
Header...
Code:
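A minimal sketch of reading the header, assuming it is the five ASCII bytes "EZIMG" followed by a 4-byte little-endian image size — treat that layout as an assumption to verify against your own capture:

```python
import struct

MAGIC = b"EZIMG"
HEADER_LEN = len(MAGIC) + 4  # assumed layout: magic + uint32 (little-endian) size

def parse_header(buf: bytes):
    """Parse the assumed EZIMG header; return (image_size, header_length)."""
    if not buf.startswith(MAGIC):
        raise ValueError("buffer does not start with the EZIMG magic")
    (size,) = struct.unpack_from("<I", buf, len(MAGIC))
    return size, HEADER_LEN
```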
And here's some documentation added to the parsing loop....
Code:
The loop searching for the header and removing the preceding bytes from the buffer only costs CPU for the first frame. This is because your first frame may contain only partial data, depending on where in the FIFO the connection was established. Once your code finds the header and removes the leading bytes, it will be in sync for every consecutive frame, meaning the first byte after clearing the last frame from the buffer will be the next image header.
Because once the data is synchronized with the code logic, the header is always the first byte after the last frame, don't worry about CPU activity from the FOR loop that finds the header.
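The sync behavior described above can be sketched as a generator — again assuming the "EZIMG" magic plus a 4-byte little-endian size header, and any socket-like object with a recv() method:

```python
import struct

MAGIC = b"EZIMG"

def read_frames(sock):
    """Yield image payloads from a socket-like object, syncing on the first frame.

    The linear search for the magic only runs until the first complete header
    is found; after that the buffer always starts on a frame boundary, so no
    per-frame searching is needed.
    """
    buf = b""
    synced = False
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            return
        buf += chunk
        if not synced:
            start = buf.find(MAGIC)
            if start < 0:
                # keep a short tail in case the magic straddles a recv boundary
                buf = buf[-(len(MAGIC) - 1):]
                continue
            buf = buf[start:]  # drop the partial first frame
            synced = True
        # assumed header: MAGIC + uint32 little-endian payload size
        while len(buf) >= len(MAGIC) + 4:
            (size,) = struct.unpack_from("<I", buf, len(MAGIC))
            end = len(MAGIC) + 4 + size
            if len(buf) < end:
                break  # wait for the rest of this frame
            yield buf[len(MAGIC) + 4 : end]
            buf = buf[end:]
```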