
Andy Roid
USA
Has anyone seen or heard of any hacks for the Echo yet? Currently, my "personal assistant, Antonn" uses Alexa. Key voice commands from him operate her for tasks she can perform better than he can. My goal is to eventually change the Alexa voice to male. I have seen some voice changes done on YouTube, but they are beyond my skill level.
Has anyone heard of a voice service that offers Alexa's voice?
I think Alexa will integrate well with an EZ-B in the future. Any opinions?
Ron
@CochranRobotics could not agree more on this!
I am hoping for more features to hack. The phone features may become interesting. A futuristic "video phone call" generated by a robot might be cool. I have to see where it goes first. It could also be cool to have a robot as an "answering machine", using vision to recognize who is calling or answering.
I'm just stirring the pot. Maybe someone will get some other ideas. I hope others in the community will post how they interact with their robots.
@CochranRobotics I did not know api.ai could call Alexa Skills.
I thought you had misstated that until I looked: under Integrations on the api.ai website, it sure enough does have that option built in! LOL
There are so many fantastic software options now, it's like an embarrassment of riches!
It also interfaces with Google Home and Google Assistant. I haven't looked into that, but with Google now owning them, it would be logical.
Also, if memory serves me right, it has an IFTTT channel, but I could be wrong. I am trying to remember how we did some of the things we did with EZ-AI. Memory gets fuzzy when you aren't actively working on something anymore. I believe we were using IFTTT to control the OpenHAB instance, but I can't remember.
So, here is the chain: you capture text from the MS Speech recognition stuff and send it to API.AI, which then acts as a brain and determines what to do with what was said. From there it can return text to be spoken, call an Echo skill (like saying [whatever was returned from API.AI]), and a lot more. The issue is that a robot would only have the Alexa voice. If API.AI decided it needed to do something like "Turn on my living room light", it would use IFTTT to transfer the request to OpenHAB, which would be able to complete the request, and so on.
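The routing step in that chain can be sketched in a few lines. This is a hypothetical illustration, not real ARC or API.AI code: the action names, the `route_intent` function, and the `send_to_ifttt` placeholder are all made up for the example, standing in for the webhook call that would actually reach IFTTT and OpenHAB.

```python
# Hypothetical sketch of the speech -> API.AI -> action chain described above.
# Action names and helper functions are illustrative, not real API.AI calls.

def send_to_ifttt(action: str, params: dict) -> str:
    # Placeholder for a webhook call to IFTTT, which would forward to OpenHAB.
    return f"IFTTT: {action} {params.get('room', '')}".strip()

def route_intent(intent: dict) -> str:
    """Decide what to do with the intent API.AI returned."""
    action = intent.get("action", "")
    if action.startswith("smarthome."):
        # Home-automation requests go out through IFTTT.
        return send_to_ifttt(action, intent.get("parameters", {}))
    # Default: just speak the text API.AI returned.
    return "SPEAK: " + intent.get("speech", "")

# Example: a recognized phrase after API.AI has processed it.
result = route_intent({
    "action": "smarthome.light.on",
    "parameters": {"room": "living room"},
    "speech": "Okay, turning on the living room light.",
})
print(result)  # IFTTT: smarthome.light.on living room
```

The point is just that the robot-side code only dispatches on what API.AI returns; all the language understanding happens on the service side.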
It is a pretty powerful addition to ARC for sure. I just need to get back to it, since things change quickly, as is evident from the new features in ARC. This rapid change is ultimately what caused me to bow out of EZ-AI: keeping up with everything happening in this arena would need to be a full-time job. I wish I were wealthy, for no other reason than to be able to focus on one thing instead of many.
One other thing... OpenHAB2 is out now. It is much easier to use than base OpenHAB.
Okay, one more thing... API.AI has conversational logic built in, so something like...
"Turn off the living room light and lock the doors" would work.
Then, "turn it back on" would be recognized as "Turn the living room light back on" and would work.
The conversational logic is a really cool piece of this. Also, something like "Who wrote To Kill a Mockingbird?" followed by "What year was she born?", "What happened that year?", and "What other books did she write?" would all work.
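A toy sketch of the idea behind that conversational context, assuming nothing about API.AI's actual implementation: each turn can store a topic, and later turns resolve references like "it" against whatever was last mentioned. The `Conversation` class and its matching rule are invented purely for illustration.

```python
# Toy illustration of conversational context: "it" resolves to the
# last-mentioned topic. Not API.AI's implementation, just the concept.

class Conversation:
    def __init__(self):
        self.last_topic = None

    def ask(self, utterance: str) -> str:
        # Crude topic detection for the example's single known device.
        if "living room light" in utterance:
            self.last_topic = "living room light"
        # Resolve the pronoun "it" against the stored topic, word by word.
        words = utterance.split()
        resolved = " ".join(
            self.last_topic if w == "it" and self.last_topic else w
            for w in words
        )
        return resolved

convo = Conversation()
convo.ask("Turn off the living room light")
print(convo.ask("turn it back on"))  # turn living room light back on
```

A real dialog engine does this with scoped, expiring contexts rather than a single stored string, but the follow-up-question behavior described above falls out of the same mechanism.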
Thanks David. I didn't realize the power in API.AI. I have to do my homework and get it going.
Another thing to load into Antonn. It sounds like this will be the main AI driver I will need for him.
Yeah guys, I could not agree more on the subject... api.ai is the key to bringing natural language to your robot! It is a very powerful tool!
As @CochranRobotics pointed out, you send text to api.ai, where it gets processed and returned to ARC. This can be done easily using the powerful plugin that @DJSures provided! Along with the speech, an action variable is also returned to ARC... this variable can be used to build logic or trigger events within ARC!
If you would like to call IFTTT, you could use the new ARC IFTTT plugin: a certain returned action variable would trigger a certain IFTTT call! If you want the robot to perform a certain move, you could trigger AutoPosition... different action variables could trigger different robot animations. There are a lot of ways to get stuff done, and integration is very easy!
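That "action variable triggers a handler" pattern is essentially a lookup table. Here is a minimal sketch of the idea; the handler functions, action names, and animation name are all hypothetical stand-ins for the real ARC IFTTT plugin call and AutoPosition trigger.

```python
# Hedged sketch: dispatching on the action variable returned to ARC.
# trigger_ifttt / play_animation are made-up stand-ins, not ARC APIs.

def trigger_ifttt(event: str) -> str:
    return f"IFTTT webhook fired: {event}"

def play_animation(name: str) -> str:
    return f"AutoPosition action started: {name}"

# One entry per action variable the conversational service can return.
ACTION_HANDLERS = {
    "lights.off": lambda: trigger_ifttt("lights_off"),
    "robot.wave": lambda: play_animation("Wave"),
}

def on_action(action: str) -> str:
    handler = ACTION_HANDLERS.get(action)
    if handler is None:
        # No mapped handler: fall back to speaking the returned text.
        return "No handler; speak returned text instead."
    return handler()

print(on_action("robot.wave"))  # AutoPosition action started: Wave
```

Adding a new behavior then only means adding one entry to the table, which is what makes the action-variable approach so easy to extend.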
If you are using ARC, you can obviously also change your TTS voice in Windows. If you are using the Google Assistant, Google already provides AIY, where you can hook into Actions on Google to give your robot a specific voice! I don't know about Alexa Skills, but I guess it is well integrated too!
Let's put some personality into your beloved little toys!