Hey guys.
I need a little help with some script. As the title suggests, I'm trying to get an ARC script to speak an identified object using RoboRealm. It is all set up and passing variables to ARC, and I have a couple of objects trained, but I'm having trouble writing a correct script. Here's what I've got so far...
if($RR_NV_ARR_OBJ_NAME[0])
say("Steve")
elseif($RR_NV_ARR_OBJ_NAME[1])
say("control")
endif
I would be grateful for any help with this to get me started.
Thanks.
Another way to do this is:
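A rough, untested sketch of that idea in EZ-Script (assuming WaitForChange() can watch the $RR_NV_ARR_OBJ_NAME[0] element that RoboRealm fills in) might look like this:
# Wait for RoboRealm to update the recognized-object name,
# speak the matching name, then loop forever.
:waitloop
WaitForChange($RR_NV_ARR_OBJ_NAME[0])
if ($RR_NV_ARR_OBJ_NAME[0] = "Steve")
  Say("Steve")
elseif ($RR_NV_ARR_OBJ_NAME[0] = "control")
  Say("control")
endif
goto(waitloop)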
Thanks for replying David.
I tried the first script you cleaned up for me, but every time I press start it just keeps saying my name whether I'm in front of the camera or not. And when it detects the control object, I get a debug message saying "error on line 3. Index was out of the bounds of the array".
So I tried the second script and it doesn't seem to be doing anything. Any ideas?
Basically, what I'm trying to do is have ARC speak the name of the person (or object) whenever a different one is detected in RoboRealm, but I'm not having much luck so far.
Use the second example. See if that works for you. The error is that you don't have 2 objects recognized when running the script.
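To illustrate the out-of-bounds error: $RR_NV_ARR_OBJ_NAME[1] only exists while two or more objects are recognized at the same time, so a guard along these lines (untested, and assuming EZ-Script's GetArraySize() works on the RoboRealm array) avoids it:
# Only read the second element when RoboRealm has actually
# reported two or more recognized objects.
if (GetArraySize("$RR_NV_ARR_OBJ_NAME") > 1)
  Say($RR_NV_ARR_OBJ_NAME[1])
endif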
The second script waits for a change. Let me get to a computer and I will write the script.
First script change: this script will do the same as the second script but is written out.
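A rough sketch of what a written-out version could look like (untested; the 500 ms poll interval, the $lastName helper variable, and GetArraySize() are assumptions on my part):
# Poll the RoboRealm variable and speak only when the recognized
# name changes, instead of using WaitForChange().
$lastName = ""
:pollloop
if (GetArraySize("$RR_NV_ARR_OBJ_NAME") > 0)
  if ($RR_NV_ARR_OBJ_NAME[0] != $lastName)
    Say($RR_NV_ARR_OBJ_NAME[0])
    $lastName = $RR_NV_ARR_OBJ_NAME[0]
  endif
endif
Sleep(500)
goto(pollloop)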
The second script should work whenever the object in the camera's view changes.
I'm in the middle of doing some testing on EZ-AI so I haven't tested this, but it should work fine.
Also, I would suggest adding a variable watcher so that you can see what the $RR_NV_ARR_OBJ_NAME[0] variable is set to, if you don't have that in your project yet.
Thanks David.
That's working much better. I've got two separate script controls and run them both together, and so far it seems to be doing the job. At least that will give me something to build on (or at least try to).
Thanks again.
@David.
Quick question. Can you learn new objects using the EZ-B camera with RoboRealm? The problem I'm having is that all the object learning is being done through my laptop's webcam, and that's where the detection is happening. Even though I'm getting the EZ camera's feed, I cannot train new objects or do any recognition with it. I thought I had it working before, but honestly can't remember now (stressful week and all).
Thanks.
Yes, you can do this. You need to use the custom web server in ARC to feed the stream to RoboRealm. There is a video I made showing how to set this up. Let me find it and I'll add it to this post.
http://m.youtube.com/watch?v=mefpiqKiMYE
I'm not at a pc right now. Copy and paste this into a browser.
@Dave.
I already used your video to set it up initially. I do have the custom web server already set up, and when I click "ReadHTTP http://127.0.0.1:8020/CameraImage.jpg?c=Camera", I can get the same feed through the v4 camera. But when I go back to AVM, it reverts back to the webcam.