
A Note on EZ-AI/Rafiki

EZ-AI development is on hold right now... well, kind of.

We are in the process of working with some services that will make EZ-AI's capabilities far better than they currently are. These include Wolfram|Alpha and IBM Bluemix/Watson. Speech recognition will be performed through Nuance Cloud services, and advanced vision features will be available through OpenCV. A quick search of these services will show you the end goal of what we are doing. These will be part of the Rafiki project, which is the primary focus for CochranRobotics at this time. We will release a limited-use version for free, which will replace EZ-AI. All of the current features of the EZ-AI database will be available through the new version. All of the services provided by EZ-AI will be available through REST queries and exposed services, which will allow ARC plugins to use them.
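As a rough illustration of that pattern (the endpoint URL, port, and query parameter below are placeholders I've invented; the actual Rafiki/EZ-AI API hasn't been published), a plugin-side REST call might look something like this in Java:

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class RafikiQueryExample {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint -- the real Rafiki/EZ-AI URL and parameters
            // have not been published, so this only shows the pattern.
            String question = URLEncoder.encode("What is the weather?", StandardCharsets.UTF_8);
            URI uri = URI.create("http://localhost:8080/ezai/query?text=" + question);

            HttpRequest request = HttpRequest.newBuilder(uri).GET().build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // An ARC plugin would drop this answer into a variable for the robot to speak.
            System.out.println(response.body());
        }
    }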

There have been huge changes to what is possible since I first started working on EZ-AI. This shift in technology has made it necessary to rework EZ-AI so that it can continue to grow and mature.

We are also toying with the idea of allowing programmers to write their own business logic layer within Rafiki. This would allow a programmer to be able to use the core services of Rafiki/EZ-AI and write their own applications with the data that is returned. It will probably be a while before this is implemented, but it is something that we are trying to make happen.

I have probably said too much, but I wanted to give you all a picture of what is happening and why EZ-AI isn't being worked on directly. We hope to have our new AI available around the end of the year. There are multiple developers working on this while I come up with solutions to other problems that arise.

As far as Rafiki goes, the pods are functioning great, and additional code/abilities are being added almost daily. The models for the pods are being tweaked to expose the HDMI, USB, and network ports outside the case. This will allow someone to connect a mouse, keyboard, and monitor to the pod and use it as a computer if they would like. The Rafiki Bot is about 1/3 of the way printed. I am making modifications to the models and reprinting some of the parts as needed. There will be 6 subsystems on this robot; 3 of them have been written and are ready to use. The other 3 can't be worked on until more of the Rafiki Bot has been printed. The 3 that are not complete are all very similar and handle motor control. I hope to have these ready in a couple of weeks. I should be able to show a demo of the Rafiki Bot in about a month, and then all of the robot programming starts. I will work on the charger base shortly after the robot has been completed and the pods are working with all of their functionality.

One more thing on EZ-AI... As a part of this rewrite, you will just need Java installed on your computer to install and use EZ-AI. The days of the huge install will be behind us, which should make things far better in the long run. The other thing this allows is robot platform independence. I will be working on modules in ARC to make the features of EZ-AI far more easily accessible. This probably won't be worked on until December at the earliest.

Okay, going back to my robot cave. Have a great day all.



#202  

David,

I wish you all the best for your future projects, and especially for your real life!

Quote:

There are laws against recording youth without their parents' consent. This means that if our platform were to be used by anyone under 18 years of age without their parents knowing about it ...

is an interesting topic ...

https://www.indiegogo.com/projects/jibo-the-world-s-first-social-robot-for-the-home

Quote:

What data does JIBO store? JIBO stores certain information about you and backs it up to the cloud. It is encrypted via SSL when being uploaded or downloaded from the cloud. While stored in the cloud it is encrypted via 256-bit AES. The information stored may include your name, information required to connect to your WiFi network, various preferences, and data that is entered or acquired through one of the JIBO applications. Such data includes photos, videos, lists.

The robot will interact with the kids, family, and friends of Jibo's owners, so the question is how they handle that law.

If my kid goes to a friend's house and their robot, e.g. a Jibo, records videos or photos and uploads them to the cloud, and then their Asia support center guru downloads them to his laptop for work purposes and the laptop ends up on the black market... that is a serious issue. But I think most people gave up their privacy when they started using FB, G+, Twitter, Snapchat, Instagram, etc.

Another example: the Nest Camera https://nest.com/support/article/How-does-Nest-Cam-store-my-recorded-video

Quote:

Nest Cam doesn’t use memory cards to store your video on the camera, it uploads your video continuously to the cloud if you’ve subscribed to the Nest Aware service. This allows smooth, uninterrupted video in case there are network connection issues. Nest Aware subscription service provides a simple and secure way to automatically store video up to 30 days in the cloud.

I've been to a friend's house several times; he uses Nest cameras to monitor the house, and one of them is in the kids' playroom. I didn't know until he grabbed a few screenshots and sent the pictures to me (my kids and his kids). So the question is how Nest handles that, especially when you can hide cameras for security protection...

Do you think I can sue Nest? :) I'm joking... but it is a grey area.

#203  

Yeah, it's a grey area. What can you record in your own home? What is the software manufacturer liable for? If the device was bought by someone to monitor their own house, there may not be issues, but storing the information in the cloud then becomes a different matter.

Really, I only see this getting worse, because people are willing to forfeit their right to privacy and governments are willing to take more of these rights away. The law in the US passed without a vote of the House or Senate; it passed by them ignoring it. The UK actually voted on it, and it passed. In any event, with more and more going to the cloud, I really think these types of laws are going to do one of two things. One is that they will prevent people like me from offering anything stored in the cloud, which kills production. The second is that they will slow down people's acceptance of the cloud. Neither might be bad. I don't know. I just don't have the energy to investigate what it would take to keep me from being liable, so it's not worth it to me to take a chance. Others can get beat up by this until it is cleared up through litigation.

If you are a huge company that can afford the litigation and outlast those who are suing, great. Many people can outlast me.

#204  

In other words, if you are a small company the law will be effective; if you are a big company, there is no law.

I don't think people are aware of these issues when they buy/own the technology, and I believe 99.9% don't care.

Check the Arkansas case: http://www.cnn.com/2016/12/28/tech/amazon-echo-alexa-bentonville-arkansas-murder-case-trnd/

It's only a question of time... in the end they will surrender the information. Where is the line that justifies crossing privacy rights?

MSFT had plans for an Xbox console with a built-in Kinect, always on, internet connection required. Can you imagine a nice sensor with an infrared camera, microphone array, skeleton tracking, etc. in your house, accessible (under legal pressure)?

Another point: how does the law apply to a foreign company, e.g. EZ-Robot in Canada or Buddy Robot in France?

#205  

Hey David, it's been a long time; I've tried to get a hold of you several times. Sorry to hear of your troubles with family. I know the feeling, and I know you know what I'm talking about. Anyway, some things have changed on my end and I would love to chat, so just throwing it out there: I'm available to chat whenever. Reading your post about your wife hit home. On November 17th I had an artificial disc put in at C5-C6, on top of all the other issues. Anyway, this is not the place, so I hope to talk to you soon.

    Chris
#206  

@ptp, I think if the case happened after Jan 1, 2017 it probably wouldn't even be fought by Amazon. If the case went up the chain of courts, it would probably not go Amazon's way now.

I have started looking at the packets that the Echo Dot is sending to Amazon. So far I see nothing coming from the IP that the Echo is on unless it is activated. I might leave this sniffer on for a while to see if the IP sends anything in the middle of the night. I know this will be a subject on some podcasts that I watch next year. More and more people are getting their hands on these devices (especially with the Dot selling for $40.00), and people will be doing a more thorough examination of what it is doing.

@Kamaroman68, send me a text and I would be happy to talk. I am off work this week. I saw you texted a while ago. I forgot to get back to you. Sorry man. Yes, I definitely know that you understand the road I have traveled. Will talk soon.

#207  

Hi Dave,

Regarding the Echo, I assume this will not allow your EZ-AI to continue as planned. Do you think the Echo will become an issue, as some think? If not, do you think an Echo and an EZ-B will be able to be interconnected?

Ron

PS, email sent

#208  

I saw the email. I will reply to it shortly, but I wanted to share my thoughts here on the Echo and how it could be hacked to work with an EZ-Robot. I haven't tried this yet, but I think it would work...

First, a little information about the general consensus on the Echo vs. the Google Home. The Google Home is better at general information that isn't "real world information". For example, the question "What is the Millennium Falcon?" would be better answered by Google Home right now. Questions like "Who is Harrison Ford?" would return similar results on both, as would questions like "What is the weather?". Tasks like reading Gmail or setting up lists are better on the Echo right now, simply because it is older and has had more development done on it. IFTTT lets you set up a lot of "when X happens, do Y" connections between different systems, and the Echo has more built for it in IFTTT for the same reason. Buying things is better through the Echo right now, and probably forever if you purchase things through Amazon.

Again, I haven't tried this yet... The Echo has a switch on top that allows you to bypass the wake words. Currently the wake words are "Echo", "Amazon", and "Alexa". There isn't a way to change these, but by triggering the switch, you are able to speak and have the Echo hear what you are asking. This could allow the EZ-B to be attached to the Echo (after some hacking) so that the EZ-B starts the listening, with the keywords handled through the EZ-B instead of through the Echo.
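I haven't built this, so here is only a minimal sketch of the control flow, assuming the EZ-B drives a transistor or small relay wired across the Echo's switch contacts. The DigitalPort interface below is a made-up stand-in for whatever I/O API is actually used, and the 250 ms hold time is a guess:

    public class EchoTriggerSketch {
        // Hypothetical abstraction for one EZ-B digital pin; the real API will differ.
        interface DigitalPort {
            void set(boolean high);
        }

        // Simulate a "button press" across the Echo's switch contacts:
        // close the contact long enough to register, then release.
        static void pressEchoButton(DigitalPort port) throws InterruptedException {
            port.set(true);
            Thread.sleep(250);   // hold for ~250 ms (assumed; tune on real hardware)
            port.set(false);
            // The Echo is now listening, so the robot (or user) can speak
            // without saying a wake word.
        }

        public static void main(String[] args) throws InterruptedException {
            // Dummy implementation that just logs; replace with real EZ-B I/O.
            DigitalPort d0 = high -> System.out.println("D0 -> " + (high ? "HIGH" : "LOW"));
            pressEchoButton(d0);
        }
    }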

With that said, the voice coming from the Echo will be the Amazon Echo voice and will not exactly match your robot's other statements. Some may see this as problematic. One of the advantages of EZ-AI is that the voices would match, because everything would be passed back to ARC to speak.

Both the Echo and the Google Home go to a single source for their information. The main complaint about EZ-AI was that it was slow. I have to describe the paths these devices take to return information for you to see why EZ-AI was slower than the Echo, Siri, Cortana, or Google Home.

EZ-AI's path:

1. The recording of the question happened in the EZ-AI plugin.
2. The recording was then sent to the EZ-AI server.
3. The EZ-AI server started a thread that sent a message to the EZ-AI Authentication server.
4. The EZ-AI Authentication server validated that this was a valid user and determined which services the user had paid for.
5. The EZ-AI Authentication server sent a response back to the EZ-AI server saying it was okay to process the request.
6. While steps 3-5 were executing, a separate thread sent the request off to API.AI to see if it could convert the speech to text and then process the request (this was successful about 80% of the time; a sketch of this threading pattern follows the list).
7. If API.AI could process the request, it classified the text, ran it through its logic, and returned the resulting text to the EZ-AI server.
8. If the Authentication server checks from the other thread showed a valid user, the text was returned to the plugin, which placed it into a variable to be spoken.
9. If API.AI couldn't process the request, it returned a failed attempt back to the EZ-AI server.
10. If the Authentication server checks showed a valid user who had paid for Nuance services, the recorded audio was sent to Nuance, which performed the STT (speech-to-text) conversion (this had a 99.97% success rate in the beta). The resulting text was returned to the EZ-AI server, which then sent it to API.AI.
11. If API.AI determined that this was a question it didn't have the answer to, it returned a failure to the EZ-AI server. The EZ-AI server checked whether the user had access to Wolfram|Alpha (from its earlier checks) and, if so, submitted the text to Wolfram|Alpha. This happened for about 25% of the requests in the beta. The Wolfram|Alpha engine would run, gather a lot of information, and return it to the EZ-AI server.
12. The EZ-AI server grabbed the spoken text data and passed it back to the EZ-AI client.
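To make the threading in steps 3-6 concrete, here is a minimal sketch of the pattern in Java (which the new EZ-AI targets); the authenticate and askApiAi bodies are placeholders I've made up, not the real EZ-AI code. The auth check and the API.AI call run in parallel, and the answer is only released once both complete:

    import java.util.concurrent.CompletableFuture;

    public class ParallelRequestSketch {
        // Placeholder: stands in for the round trip to the EZ-AI Authentication server.
        static boolean authenticate(String userId) {
            return true; // pretend the user is valid and has paid for the service
        }

        // Placeholder: stands in for sending the recording to API.AI.
        static String askApiAi(byte[] recording) {
            return "It is about 384,400 km from the Earth to the Moon.";
        }

        public static void main(String[] args) {
            byte[] recording = new byte[0]; // the captured audio would go here

            // Kick off both calls at once instead of one after the other.
            CompletableFuture<Boolean> auth =
                    CompletableFuture.supplyAsync(() -> authenticate("user-123"));
            CompletableFuture<String> answer =
                    CompletableFuture.supplyAsync(() -> askApiAi(recording));

            // Only hand the text back to the plugin if the auth thread said OK.
            String result = auth.thenCombine(answer,
                    (ok, text) -> ok ? text : "(not authorized)").join();
            System.out.println(result);
        }
    }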

As you can see, there was a lot of hopping around in the attempt to provide the most accurate results possible. Sometimes (if a request went through the entire chain of events) it could take up to 20 seconds to return the final results. This was due to transmission times and the massive amount of data that Wolfram|Alpha provided; it could take 15 seconds for Wolfram to retrieve the information. That feels like a very long time, but it returned accurate information. It could answer things like "What is Myotonia Congenita?", which is amazing, but very few people would ask that type of question. It does make it somewhat useful for medical professionals, but what is the market?

A question to the Echo of "How far is the Earth from the Moon?" sent or received 214 packets to and from the same IP address on different ports and took ~10 seconds to complete from the first packet to the last. The Echo doesn't wait until you are finished speaking before it starts sending packets to its servers for processing: asking the question took about 8 of those seconds, and finishing the request took only about 2 more. This is because it had already figured out and classified most of the text before the statement was completed. I had no way to do this with the technologies we were using. The downside is that you can't ask things like "What is Invokana?", making this really more of a digital assistant, or an Amazon sales point in your home, than anything.
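The difference in shape is easy to see side by side. This sketch only illustrates the two pipeline shapes; it is not Amazon's actual protocol, and the chunk size, sendToServer, and captureChunk calls are invented stubs:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;

    public class StreamingVsBatchSketch {
        // Placeholder for a network send; in reality a socket or HTTP chunk write.
        static void sendToServer(byte[] audio) { /* ... */ }

        // Pretend the microphone hands us ~250 ms of audio per call.
        static byte[] captureChunk() { return new byte[4000]; }

        static boolean userStillSpeaking() { return false; } // stub

        // Batch (roughly what EZ-AI did): buffer the whole utterance, then send.
        // Nothing reaches the server until the user stops talking.
        static void recordThenSend() throws IOException {
            ByteArrayOutputStream utterance = new ByteArrayOutputStream();
            while (userStillSpeaking()) utterance.write(captureChunk());
            sendToServer(utterance.toByteArray());
        }

        // Streaming (what the Echo appears to do): ship each chunk as captured,
        // so the server is already classifying text while the user is speaking.
        static void streamWhileSpeaking() {
            while (userStillSpeaking()) sendToServer(captureChunk());
        }

        public static void main(String[] args) throws IOException {
            recordThenSend();       // both return immediately here; the stubs report silence
            streamWhileSpeaking();
        }
    }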

So, for speed, the Echo is better than anything I could ever develop, simply because it goes to one location and can do so before the question is completed. It provides the number one thing requested in our testing and in the conversations I had with various users: digital assistant features. It covers about 80% of what we could do from a knowledge-engine perspective, and it has a huge community of developers improving it daily. The only thing left is to get the robot to trigger it, which could be done by hacking the Echo so the EZ-B can electronically or mechanically activate the switch on top. What you would still be missing is really accurate data on more difficult subjects, a consistent voice, controlling the robot by voice (i.e. "move forward 25 cm"), and data being returned to the ARC application itself.

I will keep working on EZ-AI in my spare time, but there just isn't a market for this product outside of the robot DIY community, and that community isn't large enough to provide the funding required to make this cost-effective. So I will just keep working on it for myself and see where we are later on.