We had been working on plugins for the Microsoft Cognitive Services, and I was testing them with the JD Humanoid. This EZ-Builder project uses a few controls, including Bing Speech Recognition, PandoraBot, Microsoft Vision, and Microsoft Emotion Cognitive Services. This may sound confusing, but we made it super easy to use.
Here are links to the EZ-Builder controls that I used for this awesome robot...

Camera Device

This is the control that connects to a camera. In this case, I used the built-in camera on the JD Humanoid robot.

Bing Speech Recognition
This allows any spoken word phrases to be converted into text. It uses a magical Microsoft service to do so. The audio is sent to their server, and a variable containing the speech as text is returned. The text is parsed through a number of IF ELSE conditions to perform things like running the cognitive vision or the cognitive emotion. Otherwise, all other text is sent to the PandoraBots engine (or you could use the AIMLBot as well).

Cognitive Vision
This control takes the image from the camera device and processes it using the Microsoft Cognitive Vision machine learning magic. It returns a string describing what the camera detected in the image. I use the SayEZB() command to speak the returned string out of the robot speaker.

Cognitive Emotion
This is similar to the Cognitive Vision control, except it processes a human face and returns the emotion. I use the SayEZB() command to speak the returned string, which contains the emotion (i.e. happy, sad, angry, etc.).

Artificial Intelligence
Choose one of these two artificial intelligence personality options...
- PandoraBots: https://synthiam.com/Software/Manual/PandoraBots-16070
- AIMLBot: https://synthiam.com/Software/Manual/AimlBot-16020
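To give a sense of how the pieces fit together, here's a rough EZ-Script sketch of the IF ELSE routing described above. The control names, ControlCommand actions, and variable names ($BingSpeech, $VisionDescription, $EmotionDescription) are illustrative assumptions — match them to the actual control titles and variables in your own EZ-Builder project.

```
# Runs after Bing Speech Recognition fills $BingSpeech with the recognized text.
# Control names, commands, and variables below are examples, not exact names.

if ($BingSpeech = "what do you see")
  # Ask the Cognitive Vision control to describe the current camera image
  ControlCommand("Cognitive Vision", Detect)
  SayEZB("I see " + $VisionDescription)
elseif ($BingSpeech = "how do I look")
  # Ask the Cognitive Emotion control to read the face in view
  ControlCommand("Cognitive Emotion", Detect)
  SayEZB("You look " + $EmotionDescription)
else
  # Everything else falls through to the chat bot personality
  ControlCommand("PandoraBots", SetPhrase, $BingSpeech)
endif
```

The idea is simply that a couple of trigger phrases are intercepted for the vision and emotion services, and anything unmatched becomes conversation for the PandoraBots (or AIMLBot) control.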