I need some help to get the camera feed out of ARC. I've looked through the documentation and tutorials and nothing is working for me.
Back in the day the HTTP Server was the method to get at least jpg images via an IP address.
This is not working for me, at least not with an EZ-B v4 JD using JD's camera. I can add the HTTP Server to my project following the tutorial linked above, but neither the IP address shown in the HTTP Server module nor JD's own IP address gives me the HTTP interface I used to get with a v3 setup. If I use JD's IP address I get the maintenance interface, which I assume is correct.
And to qualify this question: my JD works fine, it connects fine, and I get video in my ARC project just fine. All the standard stuff works.
Is there a better way now to get the camera feed from an EZbv4? I'd prefer to get a live video stream rather than a live static image.
Justin,
I have a video of how this works with RoboRealm right now. It should be the same for you, in that it shows how to set up the custom HTTP server port and then how RoboRealm is configured to catch the camera feed. I hope it helps you out.
I added a custom HTTP Server like you did. I believe, for it to work the way I need it to, we should be able to enter an IP address in a browser and bring up the camera feed. If you enter http://127.0.0.1:8010/CameraImage.jpg?c=Camera in a web browser, does it give you video?
In doing some testing, it looks like RoboRealm is taking static JPGs and analyzing them as images instead of a video feed. It is re-querying the same location (http://127.0.0.1:8010/CameraImage.jpg?c=Camera) about 5 times a second to get what looks like a video feed.
Yes, I agree, I do believe RoboRealm is taking snapshots to create a video effect. If you try the URL in IE it will fail, because IE is a hateful creature. In Chrome, with everything configured correctly and the user ID applied in the URL like this: http://admin:@127.0.0.1:8010/CameraImage.jpg?c=Camera, I got it to work for a snapshot, which is the minimum I needed.
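For anyone following along, here is roughly what that snapshot request looks like from .NET. This is only a minimal sketch: it assumes the HTTP Server from this thread is on 127.0.0.1:8010 with the user admin and a blank password, so adjust the URL and credentials to match your own settings.

using System;
using System.Drawing;
using System.IO;
using System.Net;

class SnapshotSketch
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Same credentials as the admin:@... form of the URL, sent as a normal Authorization header.
            client.Credentials = new NetworkCredential("admin", "");

            byte[] jpegBytes = client.DownloadData("http://127.0.0.1:8010/CameraImage.jpg?c=Camera");

            using (var stream = new MemoryStream(jpegBytes))
            using (var frame = new Bitmap(stream))
            {
                // Save the decoded frame to disk just to prove the request worked.
                frame.Save("snapshot.jpg", System.Drawing.Imaging.ImageFormat.Jpeg);
                Console.WriteLine("Saved a {0}x{1} frame.", frame.Width, frame.Height);
            }
        }
    }
}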
I could have sworn I read a blurb somewhere in a release note a while back that DJ added a live video feature with the v4. Did I dream that?
It's feeding JPG images, I believe, and the images are then interpreted as video. Try this: go to http://127.0.0.1:8010 from your browser. This is what is being passed. If you inspect the element in Chrome, you will see what is happening. There is a d=(NUMBER) value that is constantly changing. I am not the best with HTML5 by any means, so I don't know for sure, but I suspect that images are being written and then fed to the control in the webpage, which displays each one quickly and loads the next.
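If it helps, here is a rough .NET sketch of that same refresh trick: request the same JPEG over and over, appending a changing d= value as a cache-buster (my assumption is that the server simply ignores it and returns the latest frame). The 200 ms delay mimics the roughly 5 requests per second RoboRealm appeared to be making.

using System;
using System.Net;
using System.Threading;

class PollSketch
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Same credentials as the admin:@... form of the URL from the earlier post.
            client.Credentials = new NetworkCredential("admin", "");

            for (int i = 0; i < 50; i++)
            {
                // d= is a cache-buster so nothing between us and ARC serves a stale image.
                string url = "http://127.0.0.1:8010/CameraImage.jpg?c=Camera&d=" + DateTime.Now.Ticks;
                byte[] jpegBytes = client.DownloadData(url);
                Console.WriteLine("Frame {0}: {1} bytes", i, jpegBytes.Length);
                Thread.Sleep(200); // roughly 5 frames per second
            }
        }
    }
}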
I think this is how IP cameras work though. I haven't dug into it.
All video is made of individual snapshots, hence "frames": a frame is a single image. The frame rate is the number of frames over a period of time, usually per second.
Video "streams" are many frames embedded into a single stream. For example, motion JPEG is called mjpeg and is simply many JPEGs.
Perhaps your HTTP Server connection is not working because you are not connected to the network? Perhaps the EZ-B is in ad hoc mode and your other computer can't see the PC.
Also, when you press start on the web server, check that there are no errors or warnings in the debug log. Perhaps your port 80 is already in use.
Thirdly, the URL example is incorrect and is missing the user authentication. The example URL for video is in the HTTP Server control. Please connect to the HTTP Server once you have fixed any errors, and use the example there to get the JPG you need.
DJ, Thanks for responding. He is trying to figure out how to get the stream of images instead of just the single image that is displayed at that point in time. Do you know of a control in .Net that he can use to catch these?
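Something along these lines is what I have in mind: nothing fancy, just a WinForms Timer that keeps re-downloading the snapshot URL into a PictureBox so it looks like live video. The URL, port, and credentials are the ones from this thread and I haven't tested this exact code, so treat it as a sketch.

using System;
using System.Drawing;
using System.IO;
using System.Net;
using System.Windows.Forms;

class CameraForm : Form
{
    private readonly PictureBox _view = new PictureBox { Dock = DockStyle.Fill, SizeMode = PictureBoxSizeMode.Zoom };
    private readonly Timer _timer = new Timer { Interval = 200 }; // roughly 5 frames per second
    private readonly WebClient _client = new WebClient();

    public CameraForm()
    {
        Controls.Add(_view);
        _client.Credentials = new NetworkCredential("admin", "");
        _timer.Tick += (s, e) => RefreshFrame();
        _timer.Start();
    }

    private void RefreshFrame()
    {
        try
        {
            // Re-download the current snapshot; the d= value keeps anything from caching it.
            byte[] jpegBytes = _client.DownloadData(
                "http://127.0.0.1:8010/CameraImage.jpg?c=Camera&d=" + DateTime.Now.Ticks);

            Image old = _view.Image;
            _view.Image = new Bitmap(new MemoryStream(jpegBytes));
            if (old != null) old.Dispose();
        }
        catch (WebException)
        {
            // Skip this frame if the server is busy or the connection hiccups.
        }
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new CameraForm());
    }
}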
This might help you Justin
Or
This might help you Justin
Or maybe even
This might help you Justin
I had mentioned I got it to work from the URL. I don't have any connection issues. It works now with the ID in the URL string.
Is there an mjpeg stream out of the v4?
I don't need a .Net control to update the .jpg stream to produce video, I can do that already. Thank you for the suggestion though.
There is no mjpeg stream out of the v4. The v4 protocol is our own protocol, because any other video compression would have added lag, and our requirement is near-zero latency on video.
What program are you trying to get the camera video into? Perhaps I can help...
He is reworking EZ-Face to use the v4 camera.
Thank you DJ, that makes sense about the video stream. d.cochran is right, I'm trying to get the .jpg stream to work with EZ-Face. It uses the Emgu wrapper for .Net. I have submitted questions on the Emgu forum and I'm waiting to hear back from them.
Emgu has a method for video capture from a Windows capture device, which works fine, and a method for a single image file... feeding a stream of images for face recognition is my problem. Others have had similar issues when pulling image streams from DVRs and security cameras. I have not come across a solution yet.
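For reference, this is the kind of thing I'm experimenting with: skipping Emgu's capture device entirely and building each frame from the downloaded JPEG. It's only a sketch, and it assumes the classic Emgu CV 2.x/3.x API where Image<Bgr, byte> has a Bitmap constructor (newer releases do the same thing through a ToImage extension instead).

using System;
using System.Drawing;
using System.IO;
using System.Net;
using Emgu.CV;
using Emgu.CV.Structure;

class EmguFeedSketch
{
    // Download the current snapshot from the ARC HTTP Server and wrap it in an Emgu image.
    static Image<Bgr, byte> GrabFrame(WebClient client)
    {
        byte[] jpegBytes = client.DownloadData("http://127.0.0.1:8010/CameraImage.jpg?c=Camera");
        using (var bitmap = new Bitmap(new MemoryStream(jpegBytes)))
        {
            // The constructor copies the pixels, so the Bitmap can be released afterwards.
            return new Image<Bgr, byte>(bitmap);
        }
    }

    static void Main()
    {
        using (var client = new WebClient())
        {
            client.Credentials = new NetworkCredential("admin", "");

            using (Image<Bgr, byte> frame = GrabFrame(client))
            {
                Console.WriteLine("Got a {0}x{1} frame.", frame.Width, frame.Height);
                // This is where the frame would be handed to the EZ-Face recognition step.
            }
        }
    }
}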
d.cochran, thank you too.
d.cochran helped me get the HTTP Server working to pull the static images. Thank you.
Nice
The EZ-SDK also has a class for pulling the video stream from the EZ-B v4, which you can pass to the Emgu module. Not sure if you are using the EZ-SDK as well, but you can pull the libraries from NuGet.
Thank you DJ! I did see it was added to NuGet, very cool. It looks like that version is nearly a month newer than the regular direct download version, is that correct?
From everything I have read in the documentation, support forums, and questions posted online, I don't think Emgu plays nice with URL feeds, which traces back to what might be a shortcoming in OpenCV itself. Perhaps it's a feature that is not well documented, or it simply does not exist.
I believe I have found a solution, because Emgu's Capture and frame-grabber functions will work with .mp4 and .avi files, and probably others. I have the video feed going to EZ-Face via the HTTP Server in ARC: the images are pulled via a timer to create a video, the video is recorded, and it is played back for face recognition.
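For anyone who tries the same workaround, the playback half looks roughly like this. It's a sketch that assumes Emgu CV 2.x, where Capture can open a video file and QueryFrame() returns an Image<Bgr, byte> (Emgu 3.x returns a Mat instead), and "recorded.avi" is just a placeholder for whatever file the timer loop wrote.

using System;
using Emgu.CV;
using Emgu.CV.Structure;

class PlaybackSketch
{
    static void Main()
    {
        // Open the short recording that was built up from the HTTP Server snapshots.
        using (var capture = new Capture("recorded.avi"))
        {
            Image<Bgr, byte> frame;
            int count = 0;

            // QueryFrame() returns null once the file runs out of frames.
            while ((frame = capture.QueryFrame()) != null)
            {
                count++;
                // Run the face recognition step on "frame" here.
                frame.Dispose();
            }

            Console.WriteLine("Processed {0} frames from the recording.", count);
        }
    }
}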
Hi DJ,
<<The EZ-SDK also has a class for pulling video stream from the ez-b v4 - which you can pass to the emgu module. Not sure if you are using the EZ-SDK as well, but you can pull the libraries from NUGET. >>
Is there one class or file that I would use / call? Or would I need AForge, EZ-B, and FFMPEG?
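To make the question concrete, this is roughly what I'm picturing, although I'm guessing at the names: the EZB, Camera, OnNewFrame, and GetCurrentBitmap members below are assumptions on my part, not something I've checked against the actual EZ-SDK, so please correct whatever is wrong.

using System;
using System.Drawing;
using EZ_B;

class EzSdkCameraSketch
{
    static void Main()
    {
        var ezb = new EZB();
        ezb.Connect("192.168.1.1"); // guessing: connect to the v4 by IP address

        var camera = new Camera(ezb); // guessing: the camera class DJ mentioned

        // Guessing there is an event that fires each time the SDK decodes a new frame from the v4 stream.
        camera.OnNewFrame += () =>
        {
            Bitmap frame = camera.GetCurrentBitmap(); // guessing: accessor for the latest frame
            // Wrap "frame" in an Emgu Image<Bgr, byte> here and pass it to EZ-Face.
        };

        // Presumably the camera also needs a start call at this point; I don't know its signature.

        Console.ReadLine();
        ezb.Disconnect();
    }
}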