
dmilevchkz
Uzbekistan
Asked

Hello guys, does anyone know how I can get the buffer array of a video frame inside JS (or inside some other script)? If that doesn't exist, then I don't understand why ARC supports a UDP sending function.
The image buffer would be too large, and incredibly slow, to expose per frame to the JavaScript interpreter. For that reason, we encourage people to create a robot skill to access the camera image in a faster language, such as C# or C++.
There's a tutorial here: https://synthiam.com/Support/Create-Robot-Skill/Overview
DJ, thanks. My idea is to send video frames to my own web server. As I understand it, the connection to the server will also need to be implemented in a robot skill?
Hmmm, how do you want the video transmitted? If there's a common protocol for that, I can help whip something up. Can you expand a bit more on what your goal is?
Yes, of course. I have an EZ-Robot "Six" model. I also have a Django backend where I want to implement a real-time object segmentation task. At the moment I already know how to send string data to my backend over HTTP from the JS interpreter inside ARC, but I don't know how to create and send an image buffer, so I started learning the C# tools and the ARC API.
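For example, on the server side the receiving endpoint could look roughly like this. This is a stdlib-only sketch, not the real Django view; the /frame path, the port choice, and the fake JPEG payload are all made up for illustration:

```python
# A self-contained stand-in for the Django endpoint, using only the
# standard library so it runs anywhere. In the real project this logic
# would live in a Django view; the /frame path is a made-up example.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # frames collected by the server

class FrameHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read exactly Content-Length bytes of the frame body.
        length = int(self.headers.get("Content-Length", 0))
        received.append(self.rfile.read(length))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), FrameHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Pretend this is one JPEG frame grabbed from the camera.
frame = b"\xff\xd8" + b"fake jpeg payload" + b"\xff\xd9"
url = "http://127.0.0.1:%d/frame" % server.server_port
req = urllib.request.Request(url, data=frame,
                             headers={"Content-Type": "image/jpeg"})
with urllib.request.urlopen(req) as resp:
    assert resp.status == 200

server.shutdown()
print(received[0] == frame)  # True: the server got the exact bytes we sent
```

The same POST-one-frame-per-request shape would work from the C# robot skill side with an ordinary HTTP client, though a persistent connection (WebSocket) is cheaper per frame.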
I moved this conversation into a new thread so it's better organized for your outcome. I think making a robot skill for this is the best option, and it's fun to learn something new. The only snag you will run into is that you need Visual Studio Community 2019. You cannot use 2022 because of a bug Microsoft has not fixed yet (https://developercommunity.visualstudio.com/t/WinForms-NET-Framework-Projects-cant-d/1601210).
So, what you will want to do is documented here: https://synthiam.com/Support/Create-Robot-Skill/Examples/Example-Camera-Control
That tutorial explains how to get the camera device, connect to the NewFrame event, and do something with the image. It'll be up to you to determine how to send the image to your web server; you may wish to use FTP, a file copy, or something similar. I haven't checked whether any of the existing camera robot skills already do what you're looking for.
Here is a complete robot skill that uses the camera and demonstrates everything. You can reference it for your own robot skill: Camera Overlay.zip
Hello DJ! Thanks for your response. I started creating the plugin today, and I still have a lot to figure out. I wanted to ask another question: do I understand correctly that this function returns an image array? I haven't yet figured out what type this variable is, or how to log it.
Yes and no. Here's exactly what you need...
Specifically, the part that you are focusing on is this...
Hello DJ! I want to thank you for your support! I was able to extract image frames from the camera in real time, and I also managed to make a WebSocket client that sends a stream of frames to my Django server.
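The core idea of the frame stream can be sketched with plain sockets. WebSocket does this framing for you; the explicit 4-byte length prefix below is just an illustration of how the receiver knows where one image ends and the next begins (the socket pair stands in for the network link):

```python
# Illustration: each frame is sent with a 4-byte big-endian length
# prefix so the receiver can split the byte stream back into frames.
import socket
import struct

def send_frame(sock, frame: bytes) -> None:
    sock.sendall(struct.pack(">I", len(frame)) + frame)

def recv_frame(sock) -> bytes:
    def recv_exact(n):
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("stream closed mid-frame")
            buf += chunk
        return buf
    (length,) = struct.unpack(">I", recv_exact(4))
    return recv_exact(length)

# Demo over a local socket pair standing in for the network link.
a, b = socket.socketpair()
send_frame(a, b"frame-1")
send_frame(a, b"frame-2, a bit longer")
print(recv_frame(b))  # b'frame-1'
print(recv_frame(b))  # b'frame-2, a bit longer'
a.close(); b.close()
```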
Here is your code with my modifications:
Here is the WebSocket server part on the Django side:
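As a side note, before handing a binary message to the segmentation code, a consumer like this might first sanity-check that the bytes really are a JPEG. This is an illustrative stdlib-only sketch, not the actual project code; JPEG data starts with the SOI marker FF D8 and ends with the EOI marker FF D9:

```python
# Hedged sketch: a quick validity check for an incoming binary frame.
# JPEG images begin with the SOI marker (FF D8) and end with EOI (FF D9).
def looks_like_jpeg(frame: bytes) -> bool:
    return (len(frame) >= 4
            and frame[:2] == b"\xff\xd8"
            and frame[-2:] == b"\xff\xd9")

print(looks_like_jpeg(b"\xff\xd8...payload...\xff\xd9"))  # True
print(looks_like_jpeg(b"not an image"))                   # False
```

Dropping malformed frames early keeps a truncated WebSocket message from crashing the decoder downstream.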
Next, I output the frame stream to the browser in real time in the same way, but unfortunately for some reason I can't publish the HTML/JS code here; honestly, I don't know why. The next step is server-side image segmentation and automatic motion control of the robot based on analysis of the segmented images. If you are interested in the further fate of this project, I can publish more results.
Best wishes!
That's great! Nice work in such a short period of time. Please keep me updated; I enjoy watching the progress of what people make.