Amazon Echo

I have purchased an Amazon Echo and am very impressed with its abilities. I would like to know if there is any way to merge my Echo with the EZB so I can get the benefit of both systems through my Echo. This may sound confusing, but I want the information and voice interaction of my Echo and the robot-control aspects of the EZB, using the Echo's great voice recognition. I love the Echo's speaker and microphone and its ability to answer almost any question. I also want to use the Echo for voice commands to the EZB, and I want to use the Echo voice for both.

I realize I may have to use Windows Cortana since the voice comes through Windows, but I like the Echo better, and it has the ability to connect to and control items through IFTTT.

The Amazon Echo is a really cool device and I think Amazon got it right for almost everything. There are a couple of things that they should have added to make it perfect in my opinion and I hope that this is just the first of their products in this line.

I wish it had a camera. With a camera, support for multiple users becomes amazing for this product. You can have multiple users, but the Echo doesn't automatically switch between them based on some unobtrusive mechanism. Hopefully they will find a way to switch between users automatically.

There isn't an easy way to have multiple of these devices work in conjunction with each other to make a whole house solution that shares data between these devices. I wish that there was.

Anyway, to your question. The Amazon Echo uses Alexa for its intelligence. Alexa is a service provided by Amazon which allows you to interface to this intelligence. The Amazon Echo is just a client for Alexa. So, the real thing that you want to do is to have a plugin developed to use the Alexa service. This is possible at a cost.

I wouldn't say it is easy, but if you would like to do some research, here are some sites that you should visit.




This would give you the beginning points of developing a plugin for ARC that uses the services provided by Amazon.
Thanks for the quick reply. I am really interested in giving my robot the abilities that Alexa has. I thought it would be cool to use the tech that is already developed to accomplish this.

The main way you would know the difference between Alexa and my robot would be the voice. Since the EZB uses Windows voices, if I could find an Alexa voice for Windows it would help.
If a plugin were developed, you would be able to use the Alexa voice, I would assume. Another option is to use a voice from another vendor. There are some issues, and I personally haven't tried some of these under Windows 10.

A single voice is important to me, so with my projects I normally just return the text of the statement and put it into a variable in ARC, which then has a script to say the text. As you say, this then uses the selected voice from Windows, or a third-party voice that has been loaded into Windows.
I am working on it now. I'll let you know when it is available.
This will be a very useful plugin:):)
For obvious reasons I'd love to see this develop into a plugin. I ordered one to hack for ALAN. There is no audio port (I know, how hard would it have been to add a 3.5mm jack?), so I have to tear it apart and add a couple of wires to make it work. But I can't stress enough how cool this would be.
I agree with everyone. It would be great to have the vast knowledge of the internet available by voice command to your robot.

I have done some research and found that Amazon offers the Alexa Skills Kit (ASK).


This is a free SDK that lets you easily add new Alexa voice capabilities to your robot or any other device you want to make. In other words, it gives you the capability of writing code to interface anything you build with Alexa. The SDK and its usage are free. You do not have to buy an Echo to develop this capability into your creation.

You do this by using the Alexa Voice Service (AVS).


AVS allows simple code to work with Alexa.

Also, this might be interesting to DJ. Amazon has set aside 100 million dollars to help manufacturers develop this into their products. It seems like you would qualify as a manufacturer and could use some of this money to add this capability to your products. It might be a way of getting some development money.

This is called the Alexa Fund and can be found at this link.

@Ellis - Awesome research! I'll definitely look into the Alexa fund - nice find!

Don't waste your time: when the questions are done it just says thank you and that's it, no email asked for. It's a scam.

The post should be deleted!
Thanks for the heads up. I banned the user as well.
I guess I do not understand the above post. Am I banned? All I did was look for a way to use Alexa voice for my ez-robot. I do not see the scam being talked about.

I have come up with a way to almost accomplish what I want to do. I think if I put my Echo on my robot and pair my computer to it over Bluetooth, all audio will come out of the Echo speaker. You will also be able to ask my computer questions and the Echo will answer. Does this make sense?
@Ellis, The offending post and user were deleted. You are fine.

No, there was a spam post made that has been removed. It had nothing to do with your post.
Thanks so much. What do you think about my connection idea with Echo?
As long as your robot is not too noisy to interfere, then I think it is a good idea.

Thanks. It is great to have all the help you guys give.
Are you planning on taking the Echo apart and mounting pieces or are you going to mount the Echo as a single component? Just curious.
I also had the idea to use the Amazon echo for my project...the IFTTT connection makes it a pretty cool gadget if combined with the maker channel!

How did you incorporate this into your robot? How did you get it to work? ;)
Alexa technology is amazing!
I have it on my Amazon Fire TV Stick, which plugs into an HDMI port on my TV.
I had audio playing from it through a Bluetooth speaker on my robot, paired with the Fire Stick.
It worked well for a while, until I tried operating the robot at the same time.
I think the WiFi and Bluetooth had interference; the Fire Stick gave me a warning that it may cause issues unless I changed to a different router type. The Fire Stick eventually slowed and locked up.
The best solution may be a plugin, but I am not talented enough to create one.
I have thought about getting a small HDMI monitor or an HDMI audio breakout board.
My robot is medium-sized. I am going to mount the Echo on it assembled. I agree that an app would be great; I am not yet at the point of being able to create my own. Also, I think IFTTT might be too slow to be effective. It might be worth a try though. What I really need is an Alexa voice for the Windows Narrator, or a way of taking text from Alexa and using it in the Narrator.
This is part of my project to add Alexa to my EZB. Does the EZB use the Windows 10 Narrator voice for my robot?
ARC uses the Microsoft Speech API (SAPI 5.4 I believe), so it uses the same voices as Narrator, which also uses SAPI 5.4, but I don't believe it uses the Narrator services.

Does that mean there are different voices that can be added to my robot? Windows 10 only shows two voices. If it does not use these voices, then can I load other voices for the EZB?
Microsoft used to have more voices available, and better support for adding 3rd party voices. The last few years they have really messed up SAPI, so it takes a little work to get other voices to work properly, but it can be done.

Steve G did a good tutorial on the process based on our (mostly his) work figuring it out. http://www.ez-robot.com/Tutorials/UserTutorials/32/1

ARC uses the Windows Speech APIs to support two different functionalities:
1) Text-To-Speech (Say commands)
2) Speech Recognition (Trigger scripts per pre-configured phrases)

Microsoft provides additional voice files per feature (TTS/SR), culture (US, UK, CA, etc.) and gender.

I've installed US, UK, CA and IN accents. You can also buy commercial voices (there are some posts about this), although I don't know if the commercial voices work only for TTS or for both.
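As a small sketch of picking among installed voices by culture: the helper below only filters plain (name, culture) pairs so it can run anywhere, and the sample voice names are illustrative. The commented lines show how the actual SAPI voice tokens can be enumerated on Windows with pywin32.

```python
# Sketch: pick an installed SAPI voice by culture.
# On Windows, the real voice tokens can be enumerated with pywin32:
#   import win32com.client
#   sapi = win32com.client.Dispatch("SAPI.SpVoice")
#   names = [v.GetDescription() for v in sapi.GetVoices()]
# The helper below just filters plain (name, culture) pairs; the
# sample entries are illustrative, not a guaranteed voice list.

def pick_voice(voices, culture):
    """Return the first voice name matching the wanted culture, else None."""
    for name, voice_culture in voices:
        if voice_culture.lower() == culture.lower():
            return name
    return None

installed = [
    ("Microsoft David", "en-US"),
    ("Microsoft Zira", "en-US"),
    ("Microsoft Hazel", "en-GB"),
]
print(pick_voice(installed, "en-GB"))  # Microsoft Hazel
```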
The Alexa Voice Service supports speech-recognition requests: you capture the voice request and upload it to their service.

Because the recognition process is not done on the desktop, it's necessary to implement a mechanism to start audio capture and some timeout/trigger to stop it.

Once you have the recorded audio, you call their API and a sound result is returned; you can output the result through your desktop speakers or through the EZB speaker (via the EZB SDK).

To summarize, the Amazon Alexa Voice Service allows you to do speech recognition, request interpretation, and the return of results.

AFAIK it does not provide TTS functionality.
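The capture/upload/playback flow described above can be sketched as follows. The endpoint URL, header values, and field names here are illustrative placeholders, not the real AVS contract (the actual service requires OAuth 2.0 authentication and its own multipart request format):

```python
# Sketch of the AVS round trip: capture audio, upload it, play the
# audio reply. Endpoint and headers are illustrative placeholders.

AVS_ENDPOINT = "https://avs.example.com/v1/recognize"  # hypothetical URL

def build_recognize_request(audio_bytes, access_token):
    """Assemble the pieces of a speech-recognition request (not sent here)."""
    return {
        "url": AVS_ENDPOINT,
        "headers": {
            "Authorization": "Bearer " + access_token,
            # 16 kHz, 16-bit mono PCM is a common capture format
            "Content-Type": "audio/l16; rate=16000; channels=1",
        },
        "body": audio_bytes,
    }

def play_reply(audio_reply, play):
    """Route the returned audio to a speaker callback: desktop
    speakers, or the EZB speaker via the EZB SDK."""
    play(audio_reply)
```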
Alexa Skills is a different beast; it provides a mechanism to create skill/action responses tied to the speech recognition/interpretation.

"Alexa, make an appointment with ..."
"Alexa, switch off the A/C"
"Alexa, go to my bedroom" (home robot)

are parsed and routed to different applications.
The Amazon Echo is a consumer hardware product which uses all of the above APIs; its main objective is to materialize the concept.

Based on some reviews, it seems the microphone array is very good.

I believe once the Amazon APIs take off to other devices/solutions Echo will be off the shelf, you will have the Alexa functionalities in the TV, smartphones, other hardware devices, robots, appliances etc.
So far I have had some success with the Alexa Voice Service. There are some gaps; for example, I need to start and stop the voice recording. To make it smooth, a trigger like "Alexa, ..." is necessary. One idea is to use the local speech recognition engine to start the recording and keep it going until there is silence (sound processing) or a timeout.
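The "record until silence or timeout" idea can be prototyped with a simple energy gate over audio frames. The RMS threshold and frame counts below are illustrative guesses, not tuned values:

```python
# Minimal energy-gate recorder state machine: collect audio frames
# until a run of quiet frames (silence) or a hard timeout is hit.
# Frames are lists of 16-bit PCM samples; thresholds are illustrative.
import math

SILENCE_RMS = 500        # below this RMS, a frame counts as silent
MAX_SILENT_FRAMES = 10   # stop after this many silent frames in a row
MAX_FRAMES = 300         # hard timeout in frames

def rms(frame):
    """Root-mean-square energy of one frame of samples."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def record_until_silence(frames):
    """Collect frames until a run of silence or the hard timeout."""
    captured, silent = [], 0
    for i, frame in enumerate(frames):
        captured.append(frame)
        silent = silent + 1 if rms(frame) < SILENCE_RMS else 0
        if silent >= MAX_SILENT_FRAMES or i + 1 >= MAX_FRAMES:
            break
    return captured
```

In a real capture loop the frames would come from the microphone a chunk at a time; the captured buffer is what would then be uploaded to the recognition service.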

Alexa Skills requires a callback mechanism, which is not simple to have on a mobile desktop or tablet. There are other alternatives to solve the issue, but the callback is what is needed to trigger the custom actions.

:) You are traveling the same path that EZ-AI traveled.

BTW, one of the services that we use allows you to tie into Alexa.
I presume in your EZ-AI the "interesting part" will be connecting all the dots: skills routing with yours or other providers, content feeding. There's a lot of work to be done to integrate and unify all the tools.

So far I'm only on the easy path: the integration started as an IoT curiosity.

I think for simple robots the Windows Speech APIs are a better fit: all the logic/processing is local (desktop). Only for entertainment, a quick (poor man's) AI, or other Alexa-style providers does it make sense to go out to the cloud.
Yes, we are not connecting to Alexa, but one of the APIs that we use can connect to Alexa. It is the one that also allows you to customize EZ-AI if you want to do this type of work. It also has ties to Cortana.

We don't use either of those two. We use others that provide more information and have the data approved by professionals in those specific fields: a chemist reviews the chemistry knowledge, a physicist the physics knowledge, and so on. We also have a local database for reminders and such.

When you mentioned the same path, I assumed EZ-AI would be an AI hub handling multiple sources like Alexa, Cortana, Google Now, etc.

So my last comment does not reflect the real EZ-AI direction, I apologize if I mislead anyone.
Yes, it is a hub that can connect to multiple services. These include:

Nuance Cloud
Some local stuff that we have written

We have dropped Watson as these other services do a better job and cost less for the user.

API.AI can connect to Cortana and Alexa and allows you to customize EZ-AI if you want to go down that path.

Same path, referring to how to handle the initial "Hey robotname" to start recording and when to stop, then submit the recorded data to other services.


It reminds me of my bedroom alarm clock; that model must be version 0.1. Mine has advanced features like FM/AM radio; if I add an RPi2, it will be a killer product!
Hey @Herr, this is like one of the Rafiki pods which also uses EZ-AI. Really it is very similar from an electronics perspective.
I am very impressed with all the information. It seems we are going in the right direction. Can't wait.

That's great!
Just played with it and it seems to work very well.
I did too and it worked well. It should be no time until an app is made for the EZB.
Just ordered the Dot to connect to ALAN. There are several hoops to jump through to get it ordered: they only want to sell to early adopters, so you have to use an Echo to pre-order, although there is a workaround using the Amazon apps (they will probably close that hack soon). The Dot was about $90. I won't get it until May; very limited stock. But it connects via Bluetooth or a 3.5mm jack.
Here is an app in Python for the RPi2 to use the Echo. You must always have a trigger (input) when talking; that is the only downside.
If anyone is interested in trying the Amazon Echo (Alexa) with the EZB before diving into developing Skills or the API, I'd like to suggest you try your concepts using IFTTT. It's not as elegant, but it works as a proof of concept.

IFTTT already supports a few Alexa commands. I use the Amazon Alexa channel to create a verbal trigger.


Then I use the Maker Channel as described in another thread to send the command to EZB.

Execute Scripts Or Commands With Http Get Commands
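As a sketch of what the Maker Channel would be configured to send, here is how such a GET URL could be assembled. The host, port, and `exec` query parameter are illustrative assumptions; the linked thread describes the exact URL format ARC's HTTP server actually expects.

```python
# Sketch: build the HTTP GET URL the IFTTT Maker Channel would fire
# at ARC's HTTP server. Host, port, and the "exec" parameter name are
# illustrative placeholders, not ARC's documented URL format.
from urllib.parse import urlencode

def build_command_url(host, port, script_command):
    """Build the GET URL the Maker Channel would be configured with."""
    query = urlencode({"exec": script_command})
    return f"http://{host}:{port}/?{query}"

url = build_command_url("192.168.1.50", 80,
                        'ControlCommand("Script", ScriptStart)')
print(url)
```

The command text is URL-encoded so quotes, parentheses, and spaces survive the trip through IFTTT's web request.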