elfege
Hello everyone,
I'm opening this new thread because so far I still haven't resolved what looks to me like either a limitation of ARC or something wrong in my config; I'm not sure, so I'm checking.
I have succeeded in getting my robot to dock by itself, to the point where it will try again and again until it succeeds, re-centering its position toward the dock. So far, so good; I tried it several times and each time it eventually succeeded, so I'm quite proud of the achievement. I posted a video and the EZB file in a different thread for those who are interested; just do a search on my name, Elfège (it should work without the accent on the è).
Now, here is my question: although it works better than I expected, I still can't figure out how to use several glyphs for differentiated actions.
Each time I tried to use glyphs 1, 2, 3 and 4 in the camera settings, it would always prioritize the "script start" menu content or, if I leave "script start" empty, most of the time it won't recognize the glyph at all: it detects the object as A glyph, but never (or almost never) as Glyph_1, 2 or 3.
Knowing how to do this reliably is the only way to achieve a full track back to the docking station.
I thought of using color, object or QR detection; each time, the same problem: either priority is given solely to the "script start" content, or the results are totally random.
I looked online for some code I could use to identify the objects reliably in a separate script (in the Script Manager), but same result: no reliable detection/identification.
There must be something wrong with my config, and another symptom seems to point to that: when I tried to work with QR codes, it would literally NEVER recognize them, even after tons of frames while the robot and its camera stood right in front of the QR code. The debug would only say something like "QR code detected, coordinate, coordinate".
So here I am, with a perfectly working auto-docking system that is efficient because it is ultra-simple (it was really hard to find the simplest, and therefore the most efficient, way to achieve that; there's some good old French Cartesianism involved here, I guess...).
Any idea?
Thanks A LOT in advance to whoever brings some light to this!
My config: AMD FX 6300, 16 GB DDR3, Windows 7 Ultimate
2 EZB V3 boards working in parallel
I tried the detection with both my old EZB V3 wireless camera and the Kaicong IP camera that I prefer to use for most of my projects (because it is cheap, about 30 bucks, and has night vision).
Sincerely, Elfège.
Please do not start a new thread for the same topic. The other thread has been deleted. Continue using this thread. Thanks
Oops... indeed I did that! Sorry.
Here is a more complete description of the different bugs I see happening all the time.
The glyph scripts are overridden by the "script start" camera setting, which applies indifferently to any glyph: I simply can't get the camera to identify one specific glyph. Either way, the "script start" menu always takes over and, if I don't use it, commands either won't trigger or trigger randomly and create ghost scripts (see section 6 below).
I tried object detection so that I could finally have different scripts running for different situations and locations. But guess what? The camera detects ALL possible objects as the learned objects, objects that bear STRICTLY NO RESEMBLANCE whatsoever to the very original drawings I created. I added colors; no difference. After a little while, the camera acts exactly as if it had been "learning" while tracking, although that option, of course, is not and never was checked by me.
Now... I am forced to reach the conclusion that ARC doesn't allow multi-scripting with differentiated objects.
Please prove me wrong.
I already have a self-docking process, but only with one glyph, so I can't have it find its way across the apartment; that is the challenge I might have to give up on, due either to my own limitations or to EZB's.
Now, I'm trying to use ping detection only, with no object detection, since object detection is totally messed up and random. But the problems cited below remain very frustrating obstacles to any improvement.
Once EZB has seen the object or glyph, it will always still SEE it, even if I stop the camera or turn the camera device itself off and on, until I either restart the entire application OR go to the camera settings and save again. So there is NO WAY to make it go to the next step once it has seen the object; I can't even make it center the object.
Workaround: I created a new variable for this GetPing... but the script simply seems to NOT READ the ping, because it keeps going when the value is already far beyond the defined criterion.
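For the record, the shape of that ping workaround is roughly this (an EZ-Script sketch from memory; the ports d0/d1 and the threshold of 20 are only examples, not my real wiring or values):

```
# Sketch of the ping-stop loop. Ports and threshold are examples.
:checkPing
$dockPing = GetPing(d0, d1)
Print("Ping value: " + $dockPing)
IF ($dockPing < 20)
  Stop()       # close enough to the dock, stop driving
ELSE
  Forward()    # keep approaching
ENDIF
Sleep(200)
Goto(checkPing)
```

In my project the IF branch never fires, even when the printed value is well past the threshold.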
When I run a script (for example, one that makes the robot find a glyph), as soon as the script is more than about 20 lines (that seems to be the trigger, but I'm not sure), EZB continues the same process; it just can't stop, even when all scripts are off and the camera is off, and above all it keeps going while all the boards are disconnected!
I tried reinstalling EZB. I tried uninstalling it completely, including the registry entries. I tried restarting Windows services. I eventually used a totally different and much more powerful computer. Same results.
Conclusion? DJ? Any idea? Is it because of limitations that can only be overcome by going into actual coding, or can you prove me wrong? I just want to stop trying if there's no point, because of inherent limitations of the software.
Thanks in advance, and my apologies if this message sounds a bit rude, but this is really frustrating, and I sincerely hope you'll prove me wrong on at least one of these points.
Best, Elfège.
Ok... it seems that my code generates a lot of loops, some of them inconsistent, and that in itself could explain a lot of things, especially now that I remember that in my simpler version of docking I didn't meet any of these bugs.
There's still the multiple glyphs thing that I can't figure out.
Sorry if I wasted anybody's time, but the act of writing things down actually helped...
Hello elfege;
It seems to me you have too many things going on at once: too many things that interact and make the system that much harder to get going, not to mention harder for others to follow.
I would recommend you back up and punt. What I mean by that is to tackle one problem at a time starting from a totally new project. Keep on that one aspect of the overall design until it works reliably. For example, start a new project concerning nothing but getting the camera to recognize the glyphs. Get it to reliably recognize just 1 glyph first. Flawlessly every time. No matter how long that takes. Then move on to 2. And make that work. Finally 3, or whatever. Save that project for your reference to use later. You can always use the Merge function to bring it into your current project eventually. You can even go back to the glyph recognition only project to work out new problems as they arise. At least you will be free of potential interference from other parts of your main project.
The same goes for the other aspects of this project. List the main functions you have to get working and figure out how to make each as modularized and self-contained as possible. That goes for the code associated with each function as well. Work on each function as independently as possible, again starting new projects concerning just one aspect of the overall goal. Think about ways you can use each of these functions without having to get into the guts of each to do it. You will have worked hard to get it to do its thing well; you don't want to mess that up.
Basically leave the forest for a while and build a few individual structures. Then link them up to achieve what you want.
Well, I did that. I got the cam to recognize object types, but never a specific glyph. Same goes for QR codes. I remember that it used to work, though, but now, whatever I do, including a new empty project... no joy.
I'll try again from another machine and I'll see.
Thanks for your answer.
@ elfege
Last year I worked on a simulation of how to navigate with QR codes and glyphs. I never could try it on a moving robot. Just have a look at it and see if it could help you develop what you're trying to achieve.
https://synthiam.com/Community/Questions/5305
good luck.
Just checking: have you defined a variable for each glyph under Camera Settings / Glyph / Script?
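I mean something roughly like this (the variable name is just an example; EZ-Script syntax from memory): put `$glyphSeen = 1` in the script box for Glyph 1, `$glyphSeen = 2` for Glyph 2, and so on, then watch that variable from a separate script:

```
# Separate watcher script. $glyphSeen is set by each glyph's
# script box in the camera settings; the names and actions
# here are only placeholders.
$glyphSeen = 0
:watch
IF ($glyphSeen = 1)
  Say("Glyph one")    # e.g. steer toward waypoint 1
  $glyphSeen = 0      # reset so the same glyph can fire again
ELSEIF ($glyphSeen = 2)
  Say("Glyph two")    # e.g. line up with the dock
  $glyphSeen = 0
ENDIF
Sleep(250)
Goto(watch)
```

That keeps the per-glyph logic out of the camera's "script start" field entirely.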
Well, the problem I'm having is that I can't use several different glyphs, because my system 1) doesn't clear the last glyph (either by using the clearlastglyph command or by seeing another glyph, as suggested), so 2) whatever variable I define, like you did here, it will still say/print "I see the glyph_1" when it's actually looking at glyph_2 or any other one. And the debug doesn't show "Glyph_1" like it used to some time ago, only "glyph", then the position, by name (Middle, Middle) and then in pixels in the camera's field/grid, but never the identification of a specific glyph.
That's where I'm stuck... and I really can't figure out what is wrong in my config. Even with a new project and a simple script, the debug never shows the name of the glyph being seen. It will clear the last glyph if it is shown an object shortly afterward, although that object gets "seen" everywhere, whatever its colors or unique shape.
I attached my project to the previous message, and I'm curious to see if you'd face the same problem. But I wouldn't like to infect you with my cr...p if this is due to some sort of virus affecting a Framework service, which is possible, but I have no idea where to look.
Well, following WBS00001's advice, it works in a totally new file: it can recognize different glyphs and behave accordingly. So it must be the too many scripts running in my system that render everything chaotic. I'm going to have to simplify everything, then.
Thanks everybody.
And, I must add : I'M SO GLAD THAT I WAS WRONG about EZB limitations!
I'm very glad to hear my little bit of advice was helpful. I have been right where you are: neck-deep in what seemed like an overwhelming array of problems and things not working as they were supposed to, when someone gave me that same advice. I've been using that technique ever since.
Best of luck on your docking project. And please continue keeping us informed of your progress (and difficulties).
Did it! My robot can now find its way home across my apartment! So excited!
Thanks to this great community. Thanks for your incredible patience Dj!
youtu.be/6r69ONP8yQQ
@elfege, nice job! Having a robot auto-dock for charging has come up a few times, but this is the first example I've seen of one actually doing it.