
I need some help to get the camera feed out of ARC. I've looked through the documentation and tutorials and nothing is working for me.
Back in the day, the HTTP Server was the way to get at least .jpg images via an IP address.
This is not working for me, at least not with an EZ-B v4 JD using JD's camera. I can add the HTTP Server to my project by following the tutorial linked above, but neither the IP address shown in the HTTP Server module nor JD's own IP address serves the HTTP interface I used to get with a v3 setup. If I use JD's IP address I get the maintenance interface, which I assume is correct.
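For anyone else trying the same thing, here is a minimal sketch of pulling a single frame once the HTTP Server is serving images. The port and the `/CameraImage.jpg` path are assumptions, not the documented ARC endpoint; substitute whatever address your HTTP Server module actually displays.

```python
# Sketch: fetch one JPEG frame from the ARC HTTP Server (stdlib only).
# NOTE: the default path "/CameraImage.jpg" and port 80 are hypothetical --
# use the URL shown in your own HTTP Server module instead.
from urllib.request import urlopen

def camera_image_url(host: str, port: int = 80,
                     path: str = "/CameraImage.jpg") -> str:
    """Build the URL for a single camera frame (path is an assumption)."""
    return f"http://{host}:{port}{path}"

def fetch_frame(url: str) -> bytes:
    """Download one JPEG image and return its raw bytes."""
    with urlopen(url, timeout=5) as resp:
        return resp.read()

# Example (requires a reachable robot):
#   jpg = fetch_frame(camera_image_url("192.168.1.50"))
```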
And to qualify this question: my JD works fine, it connects fine, and I get video in my ARC project just fine. All the standard stuff works.
Is there a better way now to get the camera feed from an EZ-B v4? I'd prefer a live video stream rather than repeatedly fetched static images.
Update: I did get it to work from the URL. I don't have any connection issues; it works now with the id in the URL string.
Is there an MJPEG stream out of the v4?
I don't need a .Net control to update the .jpg stream to produce video, I can do that already. Thank you for the suggestion though.
There is not an MJPEG stream out of the v4. The v4 protocol is our own protocol, because any other video compression would have added lag, and our requirement is near-zero latency on video.
What program are you trying to get the camera video into? Perhaps I can help...
He is reworking EZ-Face to use the v4 camera.
Thank you DJ, that makes sense about the video stream. d.cochran is right: I'm trying to get the .jpg stream to work with EZ-Face, which uses the Emgu wrapper for .NET. I have submitted questions on the Emgu forum and am waiting to hear back.
Emgu has a method for video capture from a Windows capture device, which works fine, and a method for a single image file. Feeding a stream of images into face recognition is my problem. Others have had similar issues when pulling image streams from DVRs and security cameras, and I have not come across a solution yet.
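The "stream of images" idea can be sketched independently of Emgu: poll a frame fetcher on a fixed timer and yield one JPEG per tick. This is only an illustration of the polling pattern; `fetch` here stands in for whatever actually grabs a frame (e.g. an HTTP GET against the ARC HTTP Server), and the frame rate is an arbitrary choice.

```python
# Sketch: turn a single-image fetcher into a timed frame stream.
import time
from typing import Callable, Iterator, Optional

def frame_stream(fetch: Callable[[], bytes], fps: float = 10.0,
                 max_frames: Optional[int] = None) -> Iterator[bytes]:
    """Poll `fetch` on a fixed timer, yielding one JPEG per tick.

    `fetch` is any zero-argument callable returning raw JPEG bytes,
    for example a lambda wrapping an HTTP request to the camera URL.
    """
    period = 1.0 / fps
    count = 0
    while max_frames is None or count < max_frames:
        start = time.monotonic()
        yield fetch()
        count += 1
        # Sleep off whatever is left of this frame's time slot.
        time.sleep(max(0.0, period - (time.monotonic() - start)))

# Usage with a stand-in fetcher (no camera needed):
frames = list(frame_stream(lambda: b"\xff\xd8fake", fps=100, max_frames=3))
```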
d.cochran, thank you too.
d.cochran helped me get the HTTP Server working to pull the static images. Thank you.
Nice
The EZ-SDK also has a class for pulling the video stream from the EZ-B v4, which you can pass to the Emgu module. Not sure if you are using the EZ-SDK as well, but you can pull the libraries from NuGet.
Thank you DJ! I did see it was added to NuGet, very cool. It looks like that version is nearly a month newer than the regular direct-download version, is that correct?
From everything I have read in the documentation, support forums, and questions posted online, I don't think Emgu plays nicely with URL feeds. That traces back to what might be a shortcoming in OpenCV itself, perhaps a feature that is not well documented or does not exist.
I believe I have found a solution, because Emgu's capture and frame-grabber functions will work with .mp4 and .avi files, and probably others. I have the video feed going to EZ-Face via the HTTP Server in ARC: images are pulled on a timer to create a video, then the video is recorded and played back for face recognition.
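The record-then-recognize workaround above amounts to grouping the timed frame stream into short fixed-length clips, each of which gets written to a file and played back through the recognizer. A minimal sketch of just the grouping step (the actual .avi writing would go through Emgu's video-writer on the .NET side; the clip length is an arbitrary example value):

```python
# Sketch: group a frame stream into fixed-length clips for
# record-and-playback recognition passes.
from itertools import islice
from typing import Iterable, Iterator, List

def clips(frames: Iterable[bytes], frames_per_clip: int) -> Iterator[List[bytes]]:
    """Yield successive clips of up to `frames_per_clip` frames.

    Each clip would then be encoded to a video file and handed to the
    face recognizer, mirroring the timer/record/playback approach.
    """
    it = iter(frames)
    while True:
        clip = list(islice(it, frames_per_clip))
        if not clip:
            return
        yield clip
```

The last clip may be shorter than `frames_per_clip`; the recognizer pass simply gets a shorter video in that case.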
Hi DJ,
<<The EZ-SDK also has a class for pulling video stream from the ez-b v4 - which you can pass to the emgu module. Not sure if you are using the EZ-SDK as well, but you can pull the libraries from NUGET. >>
Is there one class or file that I would use or call? Or would I need AForge, EZ-B, and FFMPEG?