
Ellis
I have purchased an Amazon Echo and am very impressed with its abilities. Is there any way to merge my Echo with the EZB so I can get the benefit of both systems through my Echo? This may sound confusing, but I want the information and voice-interaction features of my Echo, plus the robot-control aspects of the EZB, by using the Echo's great voice recognition. I love the Echo's speaker and microphone and its ability to answer almost any question. I also want to use the Echo for voice commands to the EZB, and I want to use the Echo voice for both.
I realize I may have to use Windows Cortana since the voice comes through Windows, but I like the Echo better, and it has the ability to connect to and control items through IFTTT.
Because the recognition process is not done on the desktop, it's necessary to implement a mechanism to start audio capture, plus some timeout/trigger to stop it.
Once you have the recorded audio, you call their API and a sound result is returned; you can output the result through your desktop speakers or through the EZB speaker (via the EZB SDK).
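That start/stop mechanism can be sketched as a simple silence-timeout endpointer. This is a minimal illustration, not AVS code; the energy threshold and frame counts are assumptions you would tune for your own microphone and frame size.

```python
def frames_to_keep(frame_energies, silence_threshold=300, silence_frames=30):
    """Decide where to stop capturing: keep frames until we see
    `silence_frames` consecutive frames whose RMS energy falls below
    `silence_threshold` (i.e. the speaker has gone quiet).
    Returns the number of frames to keep from the start of the list."""
    quiet = 0
    for i, energy in enumerate(frame_energies):
        if energy < silence_threshold:
            quiet += 1
            if quiet >= silence_frames:
                return i + 1  # stop capture here; send this clip to the cloud
        else:
            quiet = 0  # speech resumed, reset the silence counter
    return len(frame_energies)  # never went quiet; keep everything


# 10 loud frames of speech followed by 30 quiet frames -> stop after frame 40
print(frames_to_keep([500] * 10 + [100] * 30))
```

The kept frames would then be packaged as a finished audio clip and submitted to the cloud service, with the returned sound played back on the desktop or EZB speaker as described above.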
To summarize: Amazon Alexa Voice Service lets you do speech recognition and request interpretation, and returns results.
AFAIK it does not provide TTS functionality.
Examples:
"Alexa, make an appointment with ..."
"Alexa, switch off the A/C"
"Alexa, go to my bedroom" (home robot)
These are parsed and routed to different applications.
Based on some reviews, it seems the microphone array is very good.
I believe once the Amazon APIs spread to other devices/solutions, the Echo itself will be off the shelf: you will have the Alexa functionality in TVs, smartphones, other hardware devices, robots, appliances, etc.
Alexa Skills require a callback mechanism, which is not simple to host from a desktop, mobile, or tablet. There are alternatives to work around the issue, but the callback is what's needed to trigger the custom actions.
Cheers
BTW, one of the services that we use allows you to tie into Alexa.
So far I'm only on the easy path: the integration started as an IoT curiosity.
I think for simple robots the Windows speech APIs are a better fit: all the logic/processing is local (desktop). Going out to Alexa or other cloud providers only makes sense for entertainment or a quick (poor man's) AI.
hackaday.com/2015/09/23/echo-meet-mycroft/
Don't mean to hijack this thread.
We don't use either of these two. We use others that provide more information than either and have the data approved by professionals in those specific fields. A chemist reviews the chemical knowledge, a physicist the physics knowledge, and so on. We also have a local database for reminders and such.
When you mentioned the same path, I assumed EZ-AI would be an AI hub handling multiple sources like Alexa, Cortana, Google Now, etc.
So my last comment does not reflect the real EZ-AI direction; I apologize if I misled anyone.
API.AI
Nuance Cloud
Wolfram|Alpha
Some local stuff that we have written
We have dropped Watson as these other services do a better job and cost less for the user.
API.AI can connect to Cortana and Alexa and allows you to customize EZ-AI if you want to go down that path.
"Same path" referring to how to handle the initial "Hey robotname": when to start recording, when to stop, and then submitting the recorded data to the other services.
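A minimal sketch of that wake-phrase step, assuming you already have a local transcript (e.g. from the Windows speech APIs) and only need to decide whether to forward the rest of the utterance to a cloud service. The phrase and parsing here are illustrative, not any product's actual API:

```python
def extract_command(transcript, wake_phrase="hey robotname"):
    """If the transcript starts with the wake phrase, return the remaining
    command text (the part to submit to the cloud service).
    Return None when the wake phrase is absent, so the utterance is ignored."""
    text = transcript.strip().lower()
    if text.startswith(wake_phrase):
        return text[len(wake_phrase):].strip(" ,")
    return None


print(extract_command("Hey RobotName, switch off the A/C"))  # "switch off the a/c"
print(extract_command("what time is it"))                    # None (no wake phrase)
```

In practice the same idea applies to raw audio instead of text: a cheap local detector gates the recording, and only the portion after the wake phrase is sent out.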
Nice
It reminds me of my bedroom alarm clock. My model must be version 0.1; mine has advanced features like FM/AM radio. If I add an RPi 2, it will be a killer product!
Here you go.
https://alexaweb.herokuapp.com/
That's great!
Just played with it and it seems to work very well.
IFTTT already supports a few Alexa commands. I use the Amazon Alexa Channel to create a verbal trigger.
Then I use the Maker Channel as described in another thread to send the command to EZB.
Execute Scripts Or Commands With Http Get Commands
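The Maker Channel request is ultimately just an HTTP GET against a URL your PC exposes. A small helper for building such a URL is sketched below; the `/Exec` path and `script` parameter name are placeholders I made up for illustration — check the HTTP server settings described in that thread for the exact form your setup expects.

```python
from urllib.parse import urlencode


def build_ezb_command_url(host, script_name, port=80):
    """Build the GET URL that the IFTTT Maker Channel would request to run a
    named script on the robot PC. Path and query parameter are hypothetical."""
    query = urlencode({"script": script_name})  # URL-encodes spaces etc.
    return f"http://{host}:{port}/Exec?{query}"


# Example with a placeholder LAN address and script name:
print(build_ezb_command_url("192.168.1.50", "Turn On Lights"))
# http://192.168.1.50:80/Exec?script=Turn+On+Lights
```

You would paste the resulting URL into the Maker Channel action, so the Alexa trigger phrase ends up firing the script on the EZB side.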