I'm a graduate student (in computer science and human interfaces), new to the robot scene and pretty excited about it. I'm not too sure where to start and was told to look in this forum.
I'm planning on getting a Rover in the next day or two and I would like to overlay, on top of the camera video feed, some computer-generated graphics.
Would anyone know if this is possible to do with the EZ-Robot platform?
Thanks for any thoughts and advice!
Thanks for the information - it helps a lot, since I was trying to figure out whether to go with EZ-Robot or not, given that it is based on VB/C# while my background is almost everything else but those(!) I guess I should invest a bit of time in them and give it a go, since the glyph thing sounds like a good option.
The following is kind of a review of the EZ-Robot system, to give you some insight on where to spend your hard-earned cash in this exciting, fascinating and rewarding field of robotics (Mars Curiosity, for one).
Now I have to be a little tactful here, because I know DJ is going to be reading this, and whilst I want to help you with direction, I don't want him to feel defensive and come back at me for misinformation, or for mocking something he has spent many, many hours building from almost nothing.
Thanks to DJ's site and the tools he developed, my son and I completed a great robot called "Bob" - see the showcase "Bob built on Bits" - using nothing but junk around the house, DJ's holy grail - the EZ-B board - and the ARC software. To the developer's credit (he says he is a programmer by trade), the microprocessor board and his ARC software work together really well. I've recently hooked up an on-board netbook, which has moved the robot from an "alive-looking" RC toy with novelty functions to something practical that can be used around the house. I'm talking news feeds, weather reports, a voice-activated jukebox and literally thousands of action possibilities with arm, head and movement combinations. I think I'm one of the few people on this site who have evolved out of the "build" area into the robot "CAN-do" realm.

My problem has always been that I'm a fast mover, and robotics in general, at the moment, is very much trial and error, then asking questions in forums. This frustrates me because it is so time consuming, and the examples are often open-ended - that is to say, they don't intuitively lead on to the next stage of what they are used for (DJ disagrees with my view on this). But I have to say the other real gem of this site is the help you will get from users and the many tools the developer has put in place to assist - videos, examples and a sample cloud.

So where to go from here? What's out there? It all comes down to the microprocessor board and the software to support it - block-code GUIs are fast developing to make robotics simpler and faster.
Lego pioneered it with the NXT, Argentina has Minibloq, and Microsoft has recently developed the very powerful RDS 4. At the time I started, ARC was a leader in this field. Moving code into something everyone can understand and use was a no-brainer for making EZ-Robots my choice.

In the past I have been critical of the EZ-Robots GUI and the information layout. I've looked at both sides of the scales, and all the critical reviews of the site trend towards what struck me: the information layout is not as intuitive as it could be, and the GUI is far from being EZ. To combat this, DJ constantly upgrades his Builder software, which is great - the recent addition of the Script Manager made an enormous impact on making the GUI workspace more efficient. There are also many sites developing applications for the Italian Arduino, and countless kits fast popping up which are becoming bigger, cheaper and more powerful.

As for me and my son, whilst I'll keep a hand in this great system, we are moving on to Microsoft's RDS 4, using hardware and camera development like the Kinect, and concentrating on the "CAN-do" from there. There are fabulous hack sites to inspire you and give you ideas.

To summarize, DJ's system is great but now must compete with giants (the people he says he is talking to). The fact remains that as soon as he refines this development of his into a beautiful gem, the giants will come in and buy it. It's happening all the time, regardless of what he says or thinks. So there you go - I've always said there are no rules in this game, and even giants stagger and fall at the hands and ideas of the little guys.
It's a great system to kick off on - and there are options in the areas I have covered out there right now.
I believe in freedom of speech and sharing information, and robotics is no exception. I've read where DJ himself speaks highly of hack sites and is very aware of what is out there and of things that need to be improved and fixed (see the BUG reports). My insight reply was for Hitlad, as a response to his question -
which was a BIG question.
I wish to make it clear to you I have a high regard for EZ-Robots and will defend it and also be critical of it in relation to questions raised in forums. I'm a big believer in continuous improvement.
Keep building, mate, but look around and take in what's happening out in the world and on Mars.
Reflect on how things can be improved.
Your ultimate Robot will be all the better for it.
But back to ARC / the SDK... and the question at hand! What would you (or anyone else reading) recommend, if anything, about computer-generated graphic overlays? I am actually quite interested, as I wanted more (like I could handle more *eek*) while I played around with the glyph/overlay thing and the whole "if the face/colour/movement is not in the center box, make it so" routine.
Thus, while getting waaayyy ahead of myself technically, I am also interested in seeing/utilizing other features like optional zones and pre-programmed responses to action in a specific zone, etc... including allowing a visual or transparent GIF overlay. Like, say, face tracking/recognition with a targeting reticle if it's in the center, but angle/vector/technobabble indicators, etc. if elsewhere.
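In rough pseudo-terms, the "keep it in the center box" routine I mean would be something like this - a plain-Python sketch; the function and names are my own, nothing from ARC or the SDK:

```python
def centering_move(frame_w, frame_h, obj_x, obj_y, dead_zone=0.2):
    """Decide how to steer the camera so a detected object (face, colour,
    motion) ends up in the center 'box' of the frame.  dead_zone is the
    fraction of the frame, around the center, where no correction is needed."""
    # Offset of the object from the frame center, normalised to -1..1
    dx = (obj_x - frame_w / 2) / (frame_w / 2)
    dy = (obj_y - frame_h / 2) / (frame_h / 2)
    pan = "right" if dx > dead_zone else "left" if dx < -dead_zone else "hold"
    tilt = "down" if dy > dead_zone else "up" if dy < -dead_zone else "hold"
    return pan, tilt

# A face detected in the upper-left region of a 640x480 frame:
print(centering_move(640, 480, 100, 60))   # -> ('left', 'up')
```

Something shaped like that, but with the overlay graphics changing depending on which zone the target is in.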
Can this be done in ARC/SDK? If so, how, if not can it be tweaked... Anyone, anyone, Bueller?
Gunner -> going back to workbench building... yay!
PS check out my YouTube page
First of all, Bob looks like one cool robot! My studies actually have more to do with navigation than robotics per se, but looking at the stuff that people have done with EZ-Robot, I may throw myself at building a robot from scratch once I get past my current project. At the moment, however, I need to focus on what I can use the robot for rather than putting one together, and I only have the Rover that I bought yesterday... which brings me back to my original question regarding overlays.
The iPhone app works well enough, but ARC is not displaying the video feed from the Rover's camera, so I am unable to test out the augmented reality features Gunner pointed out. I'm not sure why that is: I selected the Rover in the Video Device pull-down menu and played with the settings a bit, but all I see is the 3x3 grid over a blank background...
I may start chewing on the SDK if that's the only way to fiddle around with things but I'm guessing I should be able to try the augmented reality features out via ARC, first...? Has anyone had success with the augmented reality feature through ARC? If so, was the overlaid image always aligned with the glyph? (That is, if the image is an arrow pointing away from me when I view the glyph on one side, will it be pointing towards me when I view the glyph from the opposite side?)
Thanks for any thoughts and advice!
Couple of questions,
A) Are you viewing (or trying to view) the video stream via the iphone or a PC?
I don't have an iPhone, rather an Android... but no luck locating any EZ-B app for it, so I don't know what kind of capabilities either would have.
B) Is this the Brookstone Rover you are referring to?
As I understand it, DJ has reverse-engineered the signals from the supported toys, including any video stream. But there may be different versions of the Rover... just as there is with the AR Parrot Drone - a version that does something different and thus gives no video.
Another guess would be that something else may be receiving the video stream? It's doubtful that it would cause your issue, but it is something I am familiar with in the PC realm (e.g. Skype or EZ-B will not grab video if factory software, say Logitech's, or another app is using it at the same time). Are you able to test the video stream on the PC outside of ARC?
As for the augmented glyph thing... while I am sure more is possible with coding... as far as I have tested, the glyph is one of four pre-programmed patterns. And the augmented image you input to represent one or more of the four seems to match the detected size as seen from the camera, but not the tilt or angle.
You can experiment with any web cam on a PC.
Soon the long weekend will be over, and other, more knowledgeable, answers may come your way.
A) I've tried viewing the video stream both on an iPhone (it works with the Rover app) and on a PC (using ARC, through which I was able to control the motors but was not able to view the video).
B) Yes, it is the Brookstone Rover. I'm guessing I'm using the same version as the one that was reverse engineered because I think there are only two versions (the earlier one is white while the newer one, Rover 2.0, is black).
I'm not running anything else that would be using the video stream, to the best of my knowledge. I have not tested the video stream on the PC outside of ARC. I haven't done much video streaming work - are there particular approaches or websites you can recommend for me to use to do this sort of stuff?
Regarding the glyphs, when you say that the match is made according to "size as seen from camera, but not tilt or angle," does that mean that the image used to replace the glyph is essentially shown as given - without distortion - except maybe scaled to match the distance from the camera to the glyph? This is in contrast to an object that is fixed to the glyph - both positionally and orientation-wise - in 3-space, where walking around the glyph would let you view other sides of the object.
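Put another way, here's a quick plain-Python sketch of what I mean by "scaled to match the distance" - the function and numbers are mine, just for illustration, not anything from ARC:

```python
def scaled_overlay_size(overlay_w, overlay_h, detected_px, reference_px):
    """Scale-only augmentation: the overlay keeps its own shape and on-screen
    orientation; only its size tracks the apparent size of the glyph."""
    s = detected_px / reference_px   # glyph looks half as big -> s = 0.5
    return round(overlay_w * s), round(overlay_h * s)

# A glyph seen at 200 px across up close, now detected at 100 px:
print(scaled_overlay_size(64, 64, 100, 200))   # -> (32, 32)
```

If that's all that happens, an arrow image would point the same way on screen from every side of the glyph, rather than rotating with it as a true pose-tracked 3D object would.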
Thanks for following up - and absolutely no apologies needed for the drama
I know that there is an interest in getting Microsoft's wonderful little Kinect tied in with EZ-B, but only DJ knows for sure
Thanks for the info.
One (extremely noob) question I have is about EZ-B: I was under the impression that I could do just about everything with the SDK using the Rover without any additional hardware. Having ARC control the Rover motor directly without any additional hardware sort of reinforced that perspective. My supervisor had pointed me to the Rover and I just assumed that was all I needed to invest financially.
But, I'm guessing I came in with the wrong assumptions and was not clear on some of the basics. So, if I understand correctly then, I'll minimally need the EZ-B. From there, I can add on all sorts of stuff, like wheels and cameras, etc., using either parts from the site or a sort of all-in-one by getting a Rover. Is that correct? Kind of makes sense...
A block development environment is single-threaded and single-process. I encourage hazbot to use a different robot platform for single-threaded, simple processes. Designing a GUI with the scalability for both hazbot's needs and those of people willing to learn is challenging. Thousands of users would object to hazbot's opinion. I will delete any future references to block single-thread development opinions from hazbot, as the repeated comments are disruptive and do not apply to the EZ-Robot goals.
If hazbot were a developer or GUI designer, he'd recognize the challenges. So instead it's a constant repeat :(. My suggestion to anyone willing to script advanced features is to use your energy in a productive manner - compared to disruptive forum complaints.
So the answer is simple: if you want a lazy robot, buy Lego and be disappointed at being unable to achieve the features of EZ-Robot.
Ps, welcome to the site!
Learning EZ-Script is quite easy, but you don't need to start with it. I suggest following the tutorials and learning the controls. With only the controls, your robot can do amazing things - that won't even require EZ-Script! And new features are always being added.
I may be using some of the wrong terminology and adding to your confusion, sorry *blush*
When I referenced EZ-B, I was referring to the GUI... but I think I am wrong there... let's see if I can get this right:
EZ-B = The controller board (The hardware part that I got in the kit, that I interface all the bits and pieces with)
ARC = The GUI software (free download. The part I use on my computer)
EZ-SDK/EZ-Script = The... not GUI :)... software (free download. I have no experience with this... I should probably download it and at least look at it.)
Since I do not have ANY of the "toys/pre-built robotic test bases", including the Rover, I have no hands-on experience... but I believe you can control any and all of the Rover's functions with just the Builder or SDK software. You only need the EZ-B (board) if you also want to add other sensors and such to the Rover, like sonic, IR, touch, etc. - basically taking the Rover's built-in capabilities way past its native function, which is already enhanced with the EZ "software" interface.
Hope you find what you need... and discover the additional benefits of the whole EZ-Robot experience.
Keep us updated on your study
Thanks for the clarification.
I'll see how far I can get with just the Rover and the SDK, then. If everything works out, I'm guessing I'll end up liking the system and being invested in it enough to reach into my student savings and buy an EZ-B to build some sort of R2 robot for myself after the navigation study.
For now, I guess I'll have to shift away from my iOS development work and get sorted with C# or VB.
Thanks again - I really appreciate your time and help!
I'll definitely keep you posted
I think it's a great idea to try before you buy via the camera feature in ARC, but as far as I know the software only works through the EZ-B board (which is fair enough).
A virtual robot program might work but as far as I know there isn't one out there yet for EZ-Robots.
Thanks for your thoughts. I bought the Rover, but the video feed from ARC was not working in that setup for me, so I was wondering if I needed an EZ-B as well. I thought the full ARC API was working for the Rover platform, including the camera, without the EZ-B, but I've gotten mixed feedback so I'm still a bit confused.
The issue may also be that I tested everything on a netbook, which is relatively low-end. I guess that may be making things a bit more challenging but I'm iOS-based at the moment and I wanted to get a sense of the features and capabilities of the EZ-Robot system before investing (my limited student budget) in a Windows machine.
Thanks for the thoughts.
It can get a bit frustrating when you get the whole "he says, she says, they say" routine on forums... when all you want is a yes, no or qualified maybe... preferably along with some supportable details along with said answer.
So on that note, here is some more info and details for you;
First off... you can control the basic features of the Rover WITHOUT the EZ-B (hardware), and that includes the video, as the EZ-B (hardware) doesn't actually deal with the video aspect of anything you use. All camera action happens within the PC/ARC/SDK. EDIT - It seems a connection from the PC to the EZ-B is now required for some of the ARC features, even if the EZ-B is not actually mounted to the Rover.
I'll reference one of DJ's videos for that... Note that he doesn't hook up the board until halfway through the vid, and that is so he can add the sonic sensor. And from there one can add other sensors and functions.
As a fellow net-book user, I can attest to the processing limitations I have run into. For me it was the microphone for use with voice commands... it would not work in ARC (but worked in Windows). It turned out to be driver-related issues in Windows 7. Now it works great, as long as I remember the commands I programmed in (Robot stop... Stop... STOP I SAY...)
My net-book has a built-in camera... does yours? I use mine for testing/debugging motion and facial recognition... but I have noticed that its resolution (and the net-book's processing) limits me a bit. Facial recognition does not work well with lower resolution settings... but the net-book could not handle the higher settings smoothly. And motion tracking is useless due to all the "noise" in the video - again due to the net-book/camera limitations, not ARC. As soon as I got my old quad-core set up on my new workbench, I was able to see what ARC could really do! Yay
As for your Rover's video issue... as I understand it, the Rover transmits video via WiFi... and ARC is effectively acting like the iPhone/iPad interface that the Rover was originally intended to communicate with. Thus I would recommend a few elimination tests:
Test the Rover on an iPhone/iPad, if possible, to eliminate an issue with the Rover itself.
Then test the Rover on another PC, again if available - first on ARC, then on the EZ-SDK.
Also test ARC (then the SDK) on your net-book with its built-in camera (if it has one) or a USB camera... but just use the Camera Control - don't try linking with the Rover yet. Can you see video that way? And if so... can you test colour/face/motion tracking? Check-mark the debug option to the right of the video window and it will show you what it is responding to... thus no need to "control" anything yet - just focus on issue elimination.
It is a bit of extra effort, but then that's how one learns (at least I do... I never could learn from a book first).
Hope this helps,
Thanks - you totally understand what I'm going through and I really appreciate your insight.
With respect to the elimination tests, (1) the iPhone/iPad video is OK, so it's not the Rover, and (2) I've been trying to get another PC going (I'm actually on the road, and so not able to access my uni's computers) by installing a Windows emulator on my Mac.
Turns out that's taking a bit longer than anticipated (a failed drive along the way), but I didn't want to just disappear from the planet, so I wanted to post a reply to follow up, as well as mention something in response to an earlier posting of yours regarding computer-generated graphic overlays. There's a toolkit called ARToolkit which may be of interest to you: http://www.hitl.washington.edu/artoolkit/. I'll probably be digging into that once I get past this preliminary stuff. It allows the use of 3D models in augmented reality, which is pretty neat.
Hopefully, I'll get the second Windows machine working soon.
Thanks again for your help!
PS By the way, are you using VB or C#?
I will check out that site...
I currently don't know any programming language (well... I think I remember my BASIC from back when a Commodore VIC-20 was considered a computer). But I will eventually get back into it once I get tired of the GUI part of EZ-Builder.
I used to love the challenge of programming... I would be dreaming of some issue and wake up in the middle of the night to start coding the solution... now it is a challenge just to get out of bed in the morning