IBM Watson Services Plugin

The IBM Watson Services plugin, created by @PTP, currently supports Speech to Text, Text to Speech, and visual recognition using Watson services.

You can download and install the plugin here: https://www.ez-robot.com/EZ-Builder/Plugins/view/251

Before you can use the plugin, you will need to apply for a free 30-day trial of IBM Speech to Text here: https://www.ibm.com/watson/services/speech-to-text and a free 30-day trial of IBM Text to Speech here: https://www.ibm.com/watson/services/text-to-speech/

I will create some examples as this moves forward and try to answer any how-to questions.

Thanks for creating this, PTP. As with all your plugins, this is an excellent piece of work and a showcase of your talents.

User-inserted image



Regarding the microphone subject (quality):

Has anyone tested the Kinect microphone array?

It can't be added to a small robot... and they will soon stop selling it. :(
What model Kinect do you have? I have an old Kinect 360 missing a power supply; I can hack something together and test. Hopefully the OpenKinect or PrimeSense Mate NI drivers still work with Windows 10.
I use a PS3 Eye on my desktop and it works really well with voice recognition (great mic), although it would be good to get a Kinect working, especially if we get a Unity plugin.
I have a Kinect 2, but I'm having trouble getting the microphone array to pick up sound in Windows. I'll look into it today and test the recognition if I can get it to work.
I'm curious how good the Kinect is in a noisy/open environment.

I have: PS3 Eye, Asus Xtion, Kinect 1, Kinect 2.
All of them have microphone arrays, and the Kinect has a sound localization API,
which can be used to turn the robot's head toward the sound/person.

Does the PS3 Eye work at long distances and/or with environmental noise?

I'm evaluating a few microphone arrays, and one of the cheapest solutions is to use the PS3 Eye with a Raspberry Pi Zero and forward the sound to the PC (a Wi-Fi microphone).
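The Pi Zero "Wi-Fi microphone" idea could be sketched roughly like this: capture raw audio chunks on the Pi and push them to the PC over UDP. This is only a sketch; capturing from the PS3 Eye itself (e.g. via ALSA/PyAudio) is out of scope, all names are illustrative, and UDP keeps latency low at the cost of possible packet loss.

```python
import socket

CHUNK = 1024  # bytes of raw PCM audio per datagram (illustrative size)

def forward_audio(frames, sock, pc_addr):
    """Pi side: push raw audio chunks (e.g. read from the PS3 Eye via
    ALSA) to the PC over UDP. `pc_addr` is the PC's (host, port)."""
    for frame in frames:
        sock.sendto(frame, pc_addr)

def receive_audio(sock, n_frames):
    """PC side: collect datagrams to feed into speech recognition."""
    return [sock.recvfrom(CHUNK)[0] for _ in range(n_frames)]
```

A real setup would run `forward_audio` in a loop on the Pi and feed the received chunks into a virtual audio device or directly into the recognizer on the PC.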

Post #19
Not sure about background noise, but the PS3 Eye is good from a distance (nothing works well with background noise). I think we just need to do a "launch name" (like "Hey Siri", "OK Google", "Echo", etc.) using EZB voice recognition and hope for the best.
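The "launch name" gating described above can be sketched as a small filter that only passes a transcript through when it starts with the wake word. The wake word "robot" is just a placeholder; any recognizer (EZB voice rec, Watson STT) would feed its transcript into something like this:

```python
def gate_on_wake_word(transcript, wake_word="robot"):
    """Return the command portion of a transcript if it starts with the
    wake word, else None. Case-insensitive; 'robot' is a placeholder."""
    words = transcript.lower().strip().split()
    if words and words[0] == wake_word:
        command = " ".join(words[1:])
        return command or None  # wake word alone carries no command
    return None
```

Anything that doesn't lead with the wake word is ignored, which helps in noisy rooms at the cost of requiring the user to always say the name first.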

This doesn't solve the depth-sensing issue, though.

I am happy to go 2-for-1 on a depth sensor (I will buy two and send you one) if you want to work on something. I have been waiting for EZ-Robot to provide a LIDAR to do SLAM. This, in conjunction with the Unity work going on, would be exciting. Maybe we should just look at getting a couple of https://click.intel.com/realsense.html for now.

Off topic or on topic (not sure any more): I have my latest photos of the Bicycle cards, Ace to 6. I can send you a link to the cards offline since you own a deck, but the results still are not good. I think I need to string together multiple AI searches, but the time delay is an issue: Watson Visual Recognition does not seem to support a recognition pipeline or linked requests, so I have to make multiple VR requests from a single EZB script based on the previous VR outcome, which takes a long time. First find the suit (Hearts, Diamonds, Clubs, Spades); once the suit is derived, determine whether it is a picture or number card; then identify the actual card (number within the suit). And it still gets it wrong. Maybe I'll work on it this weekend if I have time.
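The staged lookup described above (suit, then picture/number, then rank) can be sketched as a cascade where each stage's answer selects which classifier to query next. Here `classify` stands in for a single Visual Recognition request and the model names are hypothetical; the point is only the control flow, and each stage is a separate round-trip, which is why the total latency adds up:

```python
def classify_card(classify, image):
    """Three-stage cascade: suit -> picture/number -> rank.

    `classify(image, model)` stands in for one Visual Recognition request
    against a custom classifier named `model` (hypothetical names); each
    stage's answer picks the classifier used by the next stage."""
    suit = classify(image, "suits")                 # e.g. "hearts"
    kind = classify(image, f"{suit}-kind")          # "picture" or "number"
    rank = classify(image, f"{suit}-{kind}-ranks")  # e.g. "4" or "queen"
    return suit, kind, rank
```

With three sequential requests the delay is roughly three times a single call, which matches the "takes a long time" observation; collapsing the stages into one classifier with 52 classes would trade accuracy per stage for a single round-trip.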
Have you guys considered this as a mic option?


"this 4-Mics version provides a super cool LED ring, which contains 12 APA102 programmable LEDs. With that 4 microphones and the LED ring, Raspberry Pi would have ability to do VAD(Voice Activity Detection), estimate DOA(Direction of Arrival) and show the direction via LED ring, just like Amazon Echo or Google Home"
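For a pair of microphones, the DOA estimation mentioned in that quote boils down to finding the inter-microphone delay and converting it to an angle. A minimal brute-force sketch, with illustrative defaults for sample rate, mic spacing, and speed of sound (a real array like the 4-mic board would do this per mic pair, and usually with FFT-based correlation):

```python
import math

def estimate_delay(sig_a, sig_b, max_lag):
    """Brute-force cross-correlation: find the lag (in samples) at which
    sig_b best matches sig_a. A positive lag means the sound reached
    mic A first (mic B hears a delayed copy)."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(
            sig_a[i] * sig_b[i + lag]
            for i in range(len(sig_a))
            if 0 <= i + lag < len(sig_b)
        )
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

def delay_to_angle(delay_samples, fs=16000.0, mic_spacing=0.05, c=343.0):
    """Convert an inter-mic delay to a bearing angle in radians for a
    two-mic pair spaced `mic_spacing` metres apart (far-field assumption)."""
    x = delay_samples * c / (fs * mic_spacing)
    return math.asin(max(-1.0, min(1.0, x)))  # clamp against rounding
```

The resulting angle is exactly what you would feed into a head-turning servo, as @ptp suggested for the Kinect's sound localization API.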
Hi @ptp ,

Thank you for your work on the IBM Watson Plugin.
I tried to play with it a little bit, and it seems that IBM Watson now only provides an API key and URL for new accounts. So no more username/password credentials.

https://console.bluemix.net/docs/services/watson/getting-started-tokens.html#tokens-for-authentication :


Important: IBM Cloud is migrating to token-based Identity and Access Management (IAM) authentication

This link also mentions the migration: https://www.ibm.com/watson/developercloud/text-to-speech/api/v1/curl.html?curl#introduction
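At the HTTP level, the two authentication schemes differ only in how the Authorization header is built. A minimal Python sketch of both, for orientation; the literal "apikey" username is how IBM documented Basic auth with an API key at the time, but treat the details (and the token exchange endpoint) as assumptions to verify against the current docs:

```python
import base64

def basic_auth_header(username, password):
    """Old Watson style: HTTP Basic auth with service credentials."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def iam_bearer_header(access_token):
    """New IAM style: a bearer token, obtained by exchanging the API key
    at the IAM token endpoint."""
    return {"Authorization": f"Bearer {access_token}"}

def apikey_basic_header(api_key):
    """Watson services also accepted the API key via Basic auth with the
    literal username 'apikey' (per the docs of the period)."""
    return basic_auth_header("apikey", api_key)
```

An Unauthorized (401) response like the one above is what you get when a service created under the new IAM scheme is called with the old username/password path.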

I could only try visual recognition, and I get the following error:
IBM.WatsonDeveloperCloud.Http.Exceptions.ServiceResponseException : The API query failed with status code Unauthorized :....

Not sure if it's my fault, or if some big changes were made to the Watson .NET SDK:
User-inserted image
Hi @Aryus96, I will send you some credentials offline for my old account. It seems to still work.

I'm investigating... my services are working.

Do you know how to convert an existing service (user/password) to the new authentication method?

I can't find an upgrade option.
I also cannot find a way to generate basic authentication (username/password) credentials with my new account.

@ptp I'm also working on a personal plugin that includes the Watson .NET SDK, using version 2.11.0, and I'm having some difficulties with the dependencies of System.Net.Http...
Did you encounter similar problems?
I deleted the existing service and created a new one.
Although the London location is not available, the new service is located in Washington, DC.
I created a new service on my personal account and I was issued a token. I didn't want to touch my old services as they still work.


I also cannot find a way to generate basic authentication (username/password) credentials with my new account.

That is obsolete.
The Watson SDK supports both the old and the new method; use the new one.


having some difficulties with the dependencies of system.net.http...

The SDK TTS sample is working with the new TokenOptions. Does it not work for you?


I created a new service on my personal account and I was issued a token. I didn't want to touch my old services as they still work.

I'll update the plugin to support both the user/password and the API key/token methods.
Tokens seem more secure in advanced scenarios where a proxy service creates and delivers time-limited tokens; if an attacker gets a token, it is not a major issue: it has a time limit and is not the service's API key.
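The proxy pattern described above can be sketched in a few lines: the server holds the real API key, hands out opaque tokens with an expiry, and a leaked token is only useful until that expiry passes. Purely illustrative; a real implementation would also bind tokens to a client and revoke them:

```python
import secrets
import time

def issue_token(store, ttl_seconds=3600, now=None):
    """Proxy side: mint an opaque, time-limited token and record its
    expiry. The real Watson API key never leaves the server."""
    now = time.time() if now is None else now
    token = secrets.token_urlsafe(16)
    store[token] = now + ttl_seconds  # remember when it expires
    return token

def is_token_valid(store, token, now=None):
    """Check a presented token: it must exist and not be expired."""
    now = time.time() if now is None else now
    expiry = store.get(token)
    return expiry is not None and now < expiry
```

This is the same idea behind IAM access tokens: short lifetime, easily reissued, and distinct from the long-lived API key.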

I tested with a .net core console project.

I'll check with a .NET project.

I found the issue, and it is working with a Windows Forms .NET 4.6.1 project.
I'll redo it from scratch to confirm the fix and update the open Watson ticket.
I've updated your ticket.
@ptp Wow, thank you very much! I will check that out!
Hi @ptp,

I'm starting to understand what's going on.

TextToSpeech 2.11.0 doesn't work with any version of System.Net.Http other than 4.0.0.

When I run your code, I have to edit the App.config and uncomment the dependentAssembly entry for System.Net.Http; otherwise it loads 4.1.0.

If I do that, I can run your code without any problem and it works great.
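For reference, the dependentAssembly edit described above is an App.config binding redirect along these lines. This is a sketch of the usual pattern, not the exact file from the plugin; the version range and publicKeyToken should be checked against the System.Net.Http assembly actually shipped:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="System.Net.Http"
                          publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
        <!-- Pin the version the Watson SDK 2.11.0 was built against. -->
        <bindingRedirect oldVersion="0.0.0.0-4.2.0.0" newVersion="4.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

Note that binding redirects are read from the host executable's config, which matters for the plugin scenario discussed below.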

But as I want to integrate it into ARC, I'm changing the project to a Class Library instead of a Windows Forms one. I add the debugger options, compiler settings, plugin.xml, paths, etc.

I can load the plugin in ARC; however, it loads System.Net.Http 4.1.0 even though the App.config is set up correctly. Does that mean ARC has already loaded System.Net.Http and is overriding the plugin's version (4.0.0)?

User-inserted image