
Some Newbie Questions

Hi, guys. I am considering EZ-Robot as a kit to experiment with and demonstrate an AI system I am working on. Here is a video of an older version of my AI system in an RS Media robot, if you are interested:

EZ-Robot seems like a good system because of its high-level approach and flexible design. I was wondering if you could help me with some questions:

  1. I would like to run ARC in parallel with my C++ AI program. In other words, I would like them to run as their own executable programs that are capable of communicating with one another. I don't need or want low-level control (like controlling individual servos). I envision designing scripted motions in ARC and triggering them from my C++ program. Additionally, I would need to be able to retrieve sensor feedback in my C++ code. Does anybody have suggestions or links to ways I could do that? FYI - I am aware that ARC supports plugins, but they are in C#.

  2. One of my concerns is encountering limits on the number of sensors or servos you can attach to one board. To clarify, I envision making a robot similar to the rover kit, but with these possible additions: two cameras instead of one, 4-5 LED blocks, 4-5 touch or ultrasonic sensors, the 4-in-1 orientation sensor, and maybe a few more I haven't thought of yet.

  3. I noticed that touch sensors are conspicuously absent. Can the ultrasonic sensor be used as a pseudo touch sensor, e.g., triggering when something is < 1 cm away from it? Can multiple ultrasonic sensors function without interference from each other?

  4. My ambitions are very sensor-oriented, so the more types of sensors, the better. Are there any 3rd-party sensors you would recommend that are easy to integrate?

  5. Is a microphone integrated into one of the parts, or do we have to supply a 3rd-party solution for audio input?

  6. Is it possible to get the RGB pixel data from the camera in another program? It sounds like ARC supports basic color and scan code recognition, but I may want to try my hand at something like shape recognition in the future.

It will take a good amount of time to complete my current AI version, but I am excited about integrating it in a robot in the future. Thanks for the help!


Synthiam
#1  
  1. C# is merely a programming language, and C++ is another. The real topic of discussion is the runtime, which is .NET, and C++ can be compiled to target the .NET CLR. So yes, you can create a plugin, use the SDK, or use the UniversalBot with C++. The SDK has a C++ example project that demonstrates the project parameters. Normally on Windows you compile C++ against MFC; simply compile it against the CLR instead and you're good to go (see the sketch after this list).

Also, I would recommend creating a plugin, because then you can share the hard work with others. There are approaching 20,000 EZ-Robots out there... plugins get used. :)

  2. The single-camera "limitation" is related to onboard CPU processing capability and Wi-Fi bandwidth for near-real-time control. Add a second EZ-B for two cameras. The rest of your list isn't a concern, although there is some learning ahead of you - which is expected when jumping into real robot hardware from a toy, and it's a lot of fun, so don't worry. :) For example, I2C devices are chainable, meaning you can have up to 126 of them per EZ-B. There are 24 digital ports, and an ultrasonic sensor takes two of them, so you can have 12 ultrasonic sensors per EZ-B.

  3. What do you mean by touch sensor - touched with your finger, or a bump for navigation? Please give specifics about your expectation. If you want a finger-touch sensor, use one of the 8 ADC ports to hack together a capacitive-style human touch sensor. If you want bump sensors, use a micro switch on either an ADC or a digital port. Don't forget to add a pull-up or pull-down resistor.

  4. You can connect any sensor in the entire world to the EZ-B. Choose your sensor and read the datasheet to determine whether the connectivity is within your technical reach. If not, perhaps someone on the forum would help - people are pretty helpful here.

  5. No. Use a Bluetooth microphone or embed a PC in the robot. If you are expecting an onboard microphone for speech recognition, it won't work: without very expensive auto-directional microphones and well-trained voice recognition, nothing will be detected accurately - especially over the sound of the servos and motors. If you're using speech recognition, wear a Bluetooth microphone on your ear or carry a microphone close to your face, as recommended by every speech recognition module - including Microsoft's, which is what ARC uses.

  6. Yes. Download the ARC software and run it on your laptop's camera to experience the possibilities before assuming limitations - there are many vision recognition types. You can use the SDK, the UniversalBot, or a plugin to get the raw frames, etc. That said, there is little reason to extract the frame data unless you're passing it to RoboRealm or a custom OpenCV filter. You can also use the SDK or a plugin to customize EZ-Robot's vision recognition filters and their order.
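As a concrete illustration of the "compile against CLR" point in item 1, here is a minimal, hypothetical C++/CLI sketch. None of these class or method names come from ARC - the real base class and overrides are in the plugin tutorial - it only shows native C++ (your AI engine) and .NET code living in the same binary once the project is compiled with /clr:

```cpp
// Hypothetical skeleton, not ARC's actual plugin API. Compile with /clr
// (instead of linking against MFC) and native and managed code can mix.
#include <string>                  // native C++ headers still work
#include <msclr/marshal_cppstd.h>  // std::string <-> System::String helpers

using namespace System;

public ref class AiSkillSketch  // placeholder name; a real plugin derives
{                               // from the base class in the tutorial
public:
    void OnStart()
    {
        std::string state = "thinking";  // plain native C++ object
        String^ managed = msclr::interop::marshal_as<String^>(state);
        Console::WriteLine("AI state: {0}", managed);  // .NET call
    }
};

int main()
{
    AiSkillSketch^ skill = gcnew AiSkillSketch();  // ^ = managed handle
    skill->OnStart();
    return 0;
}
```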

#2  

Thanks for the info. It's good to hear I can use C++ if I wish to make a plugin. It sounds like my concerns about the number of supported servos/sensors are unfounded. To clarify about the "touch sensor": I'm envisioning a bump/pressure sensor. I would have to purchase that kind of sensor from a 3rd party, correct?

Thanks for clarifying about the mic/speech recognition. I envision the robot pausing its physical activity to "listen" for audio cues, so the servo noise should not be a problem to my knowledge (unless the servos keep making noise while holding a fixed position). Perhaps in the future I can find something that works as well as a phone's speakerphone mic, pointed in the direction of the camera, but a headset would be fine in the short term.

I look forward to learning more about the ARC software. It sounds like I can learn and experiment with it somewhat even before purchasing EZ-Robot parts.

#3  

You can make a bump sensor with tinfoil, tape, cardboard, a servo wire, and a resistor. Bump sensors are easy to make - they're just not well documented.

Having the microphone connected to the EZ-B would be quite difficult and a bandwidth hog. If you connected a mono mic to the robot's ADC, you would have to convert the readings back into an audio signal, then create a device Windows can see in order to use it for speech recognition. And if that weren't hard enough, the quality would be terrible. So a Bluetooth mic will be the best option.

If you mount a Bluetooth mic on the robot, all activity would have to stop for it to work, and it would have to be kept away from metal or other blocking materials for a clear signal. It is doable, though, and I plan on doing something similar.

#4  

Thanks. I can see your point about the mic bandwidth. Sounds like Bluetooth is the way to go.

I'm not crazy about the idea of making my own homemade bump sensor. ;) I imagine I should be able to find a 3rd-party sensor supported by the software. I'm just trying to stick with 1st-party parts as much as possible, as that guarantees 100% support.

#5  

Ultrasonic sensors are commonly used with EZ-Robots. Bump sensors are contact sensors, as opposed to non-contact sensors like the ultrasonic ones, which can be used in the same manner and more. Brookstone is currently having a two-for-one sale on EZ-Robot ultrasonic sensors (and other parts), if you're interested. You will find a lot of threads here about their use and "how to" discussions on applying them - just do a search at the top of the main forum page. There are also ready-to-go software components (controls) in the ARC software that can be used with them.

Of course, bump sensors have their good qualities too, such as needing no power to operate and being easy to interface with and interpret; they also need no extra electronics to run them. Finally, they can be much smaller than an ultrasonic setup, as well as flexible, and can basically be taped into place with double-sided tape. They can be readily hooked up to either the digital (servo) ports or the ADC ports on the EZ-B v4 controller. You just have to be mindful of the input voltage limits.
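For what it's worth, here is a tiny hypothetical sketch of reading such a bumper through an ADC port. readAdcPort() is a stand-in for whatever read call your SDK, script, or plugin actually exposes, and the 0-255 range and wiring are assumptions:

```cpp
// Hypothetical bumper poll: a micro switch on an ADC port with a pull-up
// resistor idles near full scale and drops toward 0 when pressed.
#include <iostream>

// Stand-in for the real ADC read call; returns a raw 8-bit reading.
int readAdcPort(int /*port*/) { return 12; }  // pretend it's pressed

bool bumperPressed(int raw)
{
    const int threshold = 32;  // assumption: tune for your wiring
    return raw < threshold;    // low reading = switch shorted to ground
}

int main()
{
    std::cout << (bumperPressed(readAdcPort(0)) ? "bump!" : "clear") << "\n";
    return 0;
}
```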

#6  

Thanks for the sensor info and the heads-up about the sale. Can multiple ultrasonic sensors be run simultaneously without interfering with one another?

#7  

I haven't tried that, but to me the key to running multiple ultrasonic sensors is to have only one operating at any given moment. The sensors can be arranged so that only one is actually emitting a sonic pulse at a time; you would then run each in rapid succession, having each send out its pulses in turn. That way there would be no interference between units. You could also run multiples at a time if they are pointing in opposite directions - perhaps even at 90 degrees from one another, but I'm not certain about that. Another alternative is to run just one in a sweeping motion, covering a wide area in front of it; simply hooking it to a servo can do that.
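A rough sketch of that round-robin idea, where pingCm() is a stand-in for whatever trigger-and-read call your hardware interface provides (the 60 ms settle time is a common figure for hobby modules, not a measured value):

```cpp
// Fire one ultrasonic sensor at a time so only one ping is in the air.
#include <array>
#include <chrono>
#include <iostream>
#include <thread>

// Stand-in: triggers sensor i and returns the echo distance in cm.
int pingCm(int i) { return 50 + 10 * i; }  // fake readings for the sketch

int main()
{
    std::array<int, 4> distances{};
    for (int i = 0; i < 4; ++i) {
        distances[i] = pingCm(i);
        // Let stray echoes die out before the next sensor fires.
        std::this_thread::sleep_for(std::chrono::milliseconds(60));
    }
    for (int i = 0; i < 4; ++i)
        std::cout << "sensor " << i << ": " << distances[i] << " cm\n";
    return 0;
}
```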

Keep in mind they have a limited range. If you want a longer range, you can use a lidar unit; there are threads discussing those as well. Of course, as a bump-sensor substitute, a limited range is just fine - preferable, even.

#8  

Button and bumper sensors (switches) need extra electronics, as wbs00001 said.

Scripts and actions are triggered on the PC side, so you can't have a script polling the status every millisecond to check whether a bumper switch or push button changed from off to on. You will need to add a custom circuit, such as a counter/decoder or a microcontroller (Arduino/PIC), and integrate it with the EZ-B via the serial ports.
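To make the microcontroller idea concrete, here is a hypothetical Arduino-side latch: the Arduino watches the switch continuously, remembers any press, and reports it over the serial link when asked. The pin number, baud rate, and '?' poll byte are all assumptions, not an established protocol:

```cpp
// Hypothetical latch sketch for an Arduino wired to an EZ-B serial port.
const int BUMPER_PIN = 2;   // micro switch between pin 2 and ground
bool bumped = false;        // latched event flag

void setup() {
  pinMode(BUMPER_PIN, INPUT_PULLUP);  // internal pull-up: line idles high
  Serial.begin(9600);                 // serial link to the EZ-B
}

void loop() {
  if (digitalRead(BUMPER_PIN) == LOW)
    bumped = true;                    // latch the press between polls
  if (Serial.available() && Serial.read() == '?') {
    Serial.write(bumped ? '1' : '0'); // answer the poll
    bumped = false;                   // clear after reporting
  }
}
```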

@Singularity, are you familiar with robots and electronics?

#9  

Thanks for the suggestions, guys.

"@Singularity are you familiar with robots and electronics ? "

Very little. I've only programmed for one robot, and I had help. That is one of the reasons I am attracted to EZ-Robot: it appears to allow higher-level control than many of the other options out there. I'm seeking simple sensor feedback, scripted motions, voice recognition, voice synthesis, color/scan-code recognition, and a way to interface with my C++ AI system. While I am up for the work involved, I want to focus primarily on translating my AI system's capabilities to a robot, and to avoid being pulled away from the AI work by low-level electronics tinkering. AI, and demonstrating it, is my primary focus.

Synthiam
#10  

I assure you, all of those things are pretty easy. Getting sensors, movement, voice recognition, synthesis, and camera recognition working is about a day of work.

Don't limit yourself to color/scan-code recognition - there is also glyph and object training/recognition, which is much more powerful.

Lastly, if the AI system has already been written, porting it to an ARC skill plugin should take a few hours. Simply change the project from MFC to CLR, then add the references and overrides necessary per the plugin tutorial. Compile, and that's it.

Your plugin may want to talk to other controls, such as movement. All of that is already exposed as classes in EZ_Builder.EZBManager, which you will learn about in the plugin tutorial.

However, I do recommend that you consider porting the C++ to C#. The language syntax is very similar. The original ARC and EZ-SDK were C++, which I ported to C# in a matter of hours - simply keep compiling, visit each error, and correct the syntax. It's that easy. :) You will find C# to be a much friendlier and more productive language to use.

The great thing about C# is the size of the library ecosystem for the Windows OS - it's absolutely mind-boggling how well supported the .NET libraries are.