Would anyone have suggestions on the best way to go about interacting with the JD robot via flashcards and/or objects? The aim is for the user to give some sort of meaningful visual response that JD can interpret and process.
QR codes are nice, but they need to be quite large, particularly if the robot is further away. The only way I can think of to keep the codes small while still reading them from a distance is a higher-resolution camera, but I'm not aware of any such cameras that are also compatible with the EZ-B v4.
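To put rough numbers on that trade-off, here's a quick back-of-the-envelope calculation (Python, simple pinhole model). The 640-pixel width, ~60 degree field of view and 3 pixels per module are my assumptions about the EZ-B v4 camera rather than confirmed specs:

```python
# Rough estimate of how large a printed QR code needs to be for the camera to
# resolve it at a given distance. Assumes a pinhole model, a 640-pixel-wide
# image, a ~60 degree horizontal field of view (both assumptions -- check the
# camera's actual specs) and ~3 pixels per QR module for reliable decoding.
import math

def min_qr_width_cm(distance_cm, image_width_px=640, hfov_deg=60.0,
                    modules=29, px_per_module=3.0):
    """Minimum printed QR width (cm) that still spans enough pixels.

    modules=29 corresponds to a version-1 QR (21 modules) plus a
    4-module quiet zone on each side.
    """
    focal_px = image_width_px / (2.0 * math.tan(math.radians(hfov_deg) / 2.0))
    required_px = modules * px_per_module
    return required_px * distance_cm / focal_px

for d in (50, 100, 200, 300):
    print(f"{d} cm away -> QR needs to be ~{min_qr_width_cm(d):.1f} cm wide")
```

With those assumptions, even a version-1 code needs to be roughly 15 cm wide at one metre, which is why the cards end up so big.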
Glyphs also work but I think there are only 4 of them. If someone knows of a way to add more, that'd be great.
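For comparison, outside of ARC something like OpenCV's ArUco markers gives a much larger set of IDs (50 in the smallest predefined dictionary alone). This is only a rough sketch of the idea, running off-board on a PC against a saved frame; I don't know whether ARC exposes anything equivalent, and it needs OpenCV 4.7+ (older builds use a slightly different aruco API):

```python
# Sketch of a larger marker family using OpenCV ArUco (DICT_4X4_50 gives 50
# distinct IDs). Runs off-board on a saved frame -- not an ARC feature as far
# as I know. Requires OpenCV 4.7+.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("camera_frame.jpg")  # placeholder: a saved frame from the robot camera
if frame is None:
    raise SystemExit("Save a test frame as camera_frame.jpg first")

corners, ids, _rejected = detector.detectMarkers(frame)
if ids is not None:
    for marker_id, marker_corners in zip(ids.flatten(), corners):
        centre = marker_corners[0].mean(axis=0)
        print(f"Saw marker {marker_id} at {centre}")
else:
    print("No markers in view")
```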
Color detection can be noisy, especially when background colors interfere.
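To illustrate the kind of filtering that might cut that noise down, here's a rough OpenCV sketch (again off-board, not an ARC feature as far as I know): threshold in HSV rather than RGB and ignore blobs below a minimum area. The HSV range is a made-up "red card" range that would need tuning for real lighting:

```python
# Threshold a frame in HSV and keep only blobs above a minimum area, so small
# background patches of a similar colour don't trigger a false detection.
import cv2
import numpy as np

def find_colour_card(frame_bgr, lower_hsv=(0, 120, 80), upper_hsv=(10, 255, 255),
                     min_area_px=2000):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    # Morphological open removes isolated noisy pixels before contour detection.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    big = [c for c in contours if cv2.contourArea(c) >= min_area_px]
    if not big:
        return None
    x, y, w, h = cv2.boundingRect(max(big, key=cv2.contourArea))
    return (x, y, w, h)  # bounding box of the largest matching blob
```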
Object recognition is quite tedious and inconsistent, as the object often needs to be angled and oriented a certain way to be detected. Providing ARC with a set of training images would be great, but I don't know of a way to do this or whether it's even possible.
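To show what I mean by training images, here's the sort of thing I'm picturing, as a rough OpenCV sketch rather than anything ARC actually supports as far as I can tell: match the live frame against a folder of reference photos with ORB features and report the best match if enough keypoints agree. The folder name, distance cutoff and match threshold are all arbitrary:

```python
# Match a live (grayscale) frame against a folder of reference photos using
# ORB features; report the best-matching image name if enough keypoints agree.
import cv2
from pathlib import Path

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def best_match(frame_gray, training_dir="training_images", min_good_matches=25):
    _kp, des_frame = orb.detectAndCompute(frame_gray, None)
    if des_frame is None:
        return None
    best_name, best_count = None, 0
    for path in Path(training_dir).glob("*.jpg"):
        ref = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
        if ref is None:
            continue
        _kp_ref, des_ref = orb.detectAndCompute(ref, None)
        if des_ref is None:
            continue
        matches = matcher.match(des_frame, des_ref)
        good = [m for m in matches if m.distance < 40]  # loose distance cutoff
        if len(good) > best_count:
            best_name, best_count = path.stem, len(good)
    return best_name if best_count >= min_good_matches else None
```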
Each has its pros and cons, but I'm keen to hear your thoughts. Are there methods I haven't considered, or better ways to go about detecting responses?