
hitlad
New Zealand
Hi!
I'm a graduate student (in computer science and human interfaces), new to the robotics scene and pretty excited about it. I'm not sure where to start and was told to look in this forum.
I'm planning on getting a Rover in the next day or two, and I would like to overlay some computer-generated graphics on top of the camera video feed.
Would anyone know if this is possible to do with the EZ-Robot platform?
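To be concrete about what I mean by an overlay: conceptually it's just per-pixel compositing on each decoded video frame before it's displayed. Here's a tiny pure-Python sketch of the idea (the list-of-tuples frame format and the `draw_overlay` helper are my own illustration, not any EZ-Robot API):

```python
def blend_pixel(frame_px, overlay_px, alpha):
    """Alpha-blend one RGB overlay pixel onto a frame pixel.

    alpha = 1.0 means the overlay fully covers the frame;
    alpha = 0.0 leaves the frame untouched.
    """
    return tuple(
        round(alpha * o + (1.0 - alpha) * f)
        for f, o in zip(frame_px, overlay_px)
    )

def draw_overlay(frame, overlay, alpha=0.5):
    """Blend a same-sized overlay image onto a frame, pixel by pixel.

    Both images are lists of rows of (R, G, B) tuples. An overlay
    pixel of None is treated as transparent (frame shows through).
    """
    return [
        [
            f_px if o_px is None else blend_pixel(f_px, o_px, alpha)
            for f_px, o_px in zip(f_row, o_row)
        ]
        for f_row, o_row in zip(frame, overlay)
    ]

# Tiny 2x2 gray "frame" and an overlay with one red pixel.
frame = [[(128, 128, 128), (128, 128, 128)],
         [(128, 128, 128), (128, 128, 128)]]
overlay = [[(255, 0, 0), None],
           [None, None]]

result = draw_overlay(frame, overlay, alpha=0.5)
```

In practice this would be done with an image library on real frames; the point is only that the blending happens on the PC side, after the video arrives.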
Thanks for any thoughts and advice!
Hi Hitlad,
I think it's a great idea to try before you buy via the camera feature in ARC, but as far as I know the software only works through the EZ-B board (which is fair enough).
A virtual robot program might work but as far as I know there isn't one out there yet for EZ-Robots.
Cheers
Lazy Hazbot
Hi, Hazbot.
Thanks for your thoughts. I bought the Rover, but the video feed from ARC wasn't working in that setup for me, so I was wondering if I needed an EZ-B as well. I thought the full ARC API, including the camera, worked for the Rover platform without the EZ-B, but I've gotten mixed feedback, so I'm still a bit confused.
The issue may also be that I tested everything on a netbook, which is relatively low-end. I guess that may be making things a bit more challenging, but I'm iOS-based at the moment and wanted to get a sense of the features and capabilities of the EZ-Robot system before investing my limited student budget in a Windows machine.
Thanks for the thoughts.
Hitlad
Hey Hitlad,
It can get a bit frustrating when you get the whole "he says, she says, they say" routine on forums... when all you want is a yes, a no, or a qualified maybe... preferably with some supporting details along with said answer.
So on that note, here are some more details for you:
First off... you can control basic features of the Rover WITHOUT the EZ-B (hardware), and that includes the video, as the EZ-B (hardware) doesn't actually deal with the video side of anything you use. All camera action happens within the PC/ARC/SDK. EDIT - It seems a connection from the PC to the EZ-B is now required for some of the ARC features, even if the EZ-B is not actually mounted on the Rover.
I'll reference one of DJ's videos for that... note that he doesn't hook up the board until halfway through the vid, and that is so he could add the ultrasonic sensor. From there one can add other sensors and functions.
As a fellow netbook user, I can attest to the processing limitations I have run into. For me it was the microphone for voice commands... it would not work in ARC (though it worked in Windows). It turned out to be a driver-related issue in Windows 7. Now it works great, as long as I remember the commands I programmed in (Robot stop... Stop... STOP I SAY...).
My netbook has a built-in camera... does yours? I use mine for testing/debugging motion and facial recognition, but I have noticed that its resolution (and the netbook's processing) limits me a bit. Facial recognition does not work well at lower resolution settings... but the netbook could not handle the higher settings smoothly. And motion tracking is useless due to all the "noise" in the video... again due to the netbook/camera limitations, not ARC. As soon as I got my old quad-core set up on my new workbench, I was able to see what ARC could really do! Yay
As for your Rover's video issue... as I understand it, the Rover transmits video via WiFi... and ARC is effectively acting like the iPhone/iPad interface that the Rover was originally intended to communicate with. Thus I would recommend a few elimination tests:
Test the Rover on an iPhone/iPad, if possible, to rule out an issue with the Rover itself.
Then test the Rover on another PC, again if available - first with ARC, then with the EZ-SDK.
Also test ARC (then the SDK) on your netbook with its built-in camera (if it has one) or a USB camera... but just use the Camera Control - don't try linking with the Rover yet. Can you see video that way? If so, can you test colour/face/motion tracking? Check-mark the debug option to the right of the video window and it will show you what it is responding to... no need to "control" anything yet - just focus on issue elimination.
It is a bit of extra effort, but then that's how one learns (at least I do... I never could learn from a book first).
Hope this helps,
Gunner
Hi, Gunner.
Thanks - you totally understand what I'm going through and I really appreciate your insight.
With respect to the elimination tests: (1) the iPhone/iPad videos are OK, so it's not the Rover, and (2) I've been trying to get another PC going (I'm actually on the road, so I can't access my uni's computers) by installing a Windows emulator on my Mac.
Turns out that's taking a bit longer than anticipated (a failed drive along the way), but I didn't want to just disappear from the planet, so I'm posting this reply to follow up and also, in response to an earlier post of yours about computer-generated graphic overlays, to mention a toolkit called ARToolkit which may be of interest to you: http://www.hitl.washington.edu/artoolkit/. I'll probably be digging into it once I get past this preliminary stuff. It allows the use of 3D models in augmented reality, which is pretty neat.
Hopefully, I'll get the second Windows machine working soon.
Thanks again for your help!
Hitlad
PS: Are you using VB or C#?
You are welcome!
I will check out that site...
I currently don't know any programming language (well... I think I remember my BASIC from back when a Commodore VIC-20 was considered a computer). But I will eventually get back into it once I get tired of the GUI part of EZ-Builder.
I used to love the challenge of programming... I would be dreaming of some issue and wake up in the middle of the night to start coding the solution... now it is a challenge just to get out of bed in the morning.
Gunner