EZ-AI development is on hold right now, well kind of...
We are in the process of working with some services that will make EZ-AI's capabilities far better than they currently are. These include Wolfram|Alpha and IBM Bluemix/Watson. Speech recognition will be performed through Nuance cloud services, and advanced vision features will be available through OpenCV. A quick search of these services will show you the end goal of what we are doing. These will be part of the Rafiki project, which is the primary focus for CochranRobotics at this time. We will release a limited-use version for free which will replace EZ-AI. All of the current features of the EZ-AI database will be available through the new version. All of the services provided by EZ-AI will be exposed through REST queries, which will allow ARC plugins to use them.
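To make that concrete, here is a minimal sketch of how an ARC plugin (or any client) might call one of those REST services. The host, path, and JSON field names are my own placeholders for illustration, not a published EZ-AI API.

```python
# Minimal sketch of a client calling a hypothetical EZ-AI REST endpoint.
# The URL and JSON schema below are assumptions, not the real EZ-AI API.
import requests

def ask_ezai(question: str, api_key: str) -> str:
    """Send a natural-language question to a hypothetical EZ-AI REST service
    and return the text the robot should speak."""
    response = requests.post(
        "http://localhost:8080/ezai/query",          # assumed local EZ-AI server
        json={"apiKey": api_key, "text": question},  # assumed request schema
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("speech", "")         # assumed response field

if __name__ == "__main__":
    print(ask_ezai("What is the weather today?", "demo-key"))
```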
There have been a huge number of changes in what is possible since I first started working on EZ-AI. This shift in technology has made it necessary to rework EZ-AI so that it can continue to grow and mature.
We are also toying with the idea of allowing programmers to write their own business-logic layer within Rafiki. This would let a programmer use the core services of Rafiki/EZ-AI and write their own applications with the data that is returned. It will probably be a while before this is implemented, but it is something that we are trying to make happen.
I have probably said too much, but I wanted to give you all a picture of what is happening and why EZ-AI isn't being worked on directly. We hope to have our new AI available around the end of the year. There are multiple developers working on this while I come up with solutions to other problems that arise.
As far as Rafiki goes, the pods are functioning great and additional code/abilities are being added almost daily. The models for the pods are being tweaked to expose the HDMI, USB and network ports to the outside of the case. This will allow someone to connect a mouse, keyboard and monitor to the pod and use it as a computer if they would like. The Rafiki Bot is about 1/3 of the way printed. I am making modifications to the models and reprinting some of the parts as needed. There will be 6 subsystems on this robot. 3 of these subsystems have been written and are ready to use. The other 3 subsystems can't be worked on until more of the Rafiki Bot has been printed; they are all very similar and handle motor control. I hope to have these ready in a couple of weeks. I should be able to show a demo of the Rafiki Bot in about a month, and then all of the robot programming starts. I will work on the charger base shortly after the robot has been completed and the pods are all working with their full functionality.
One more thing on EZ-AI... As a part of this rewrite, you will just need to have Java installed on your computer to install and use EZ-AI. The days of the huge install will be behind us, which should make things far better in the long run. The other thing this allows is robot platform independence. I will be working on modules in ARC to make the features of EZ-AI far more easily accessible. This will probably not be worked on until December at the earliest.
Okay, going back to my robot cave. Have a great day all.
A question about latency: will response times (from when user voice input ends to when reply begins) change depending on whether the installation is cloud, local install connected to cloud ecosystem or local install/local ecosystem?
We spent some time at a conference last weekend demonstrating our AI to get feedback from people, and also promoting our new school. The crowd wasn't as large as the promoters had promised but we still got plenty of feedback along with some good promotion.
School link
EZ-AI presi
Rafiki was also shown off a bit. We didn't have him doing a lot because I wanted to focus on the school and EZ-AI information gathering. I was able to offer some assistance to the R2 Builders group and a few other groups at the show who want to make their robot props into actual robots. Anyway, enjoy the presi.
I was happy to hear you made it to the conference.
I enjoyed the presentation. It really showed the structure and features EZ-AI will offer. Will the first full release of EZ-AI be available soon? I assume the "POD" will be a device offered also. Any projected release date window?
Ron
Typo on the school link. In the last paragraph you have "...continued support your school." Should be "...continued support for your school."
Alan
Ron, I hope so... I have had to step away from the EZ-AI development and let Nick take it over so I can get everything lined out with the school. I plan on going to Dallas on Thursday for work but hope to find some time to spend with him there catching up on where we are at... I will let you know.
Personal stuff first...
2 years ago, my daughter was injured during a cheerleading performance. In and of itself, this wouldn't have been a major thing to recover from, but it led us down a rabbit hole of situations that I will describe. My daughter tore her long thoracic nerve in her right shoulder. This caused her scapula to wing out and her shoulder to drop. There are literally 2 doctors in the US who have a clue about this injury, but we didn't discover who they are until after two years of therapy and pain for my daughter. This injury caused us to have to home-school our daughter, which is another topic, but the pain meant she couldn't sit in a classroom for more than about 15 minutes without being in severe pain. While doing EMG nerve tests early on to discover the extent of the injury, it was discovered that she has a rare disease called Myotonia Congenita (Thomsen disease), a neurological condition which causes her muscles to lock up after tightening. This is a very rare disease (one other person in Oklahoma City has been diagnosed with it). This sent us down another trail of genetic testing and a lot of research to see how it will affect her life. Because of this she will not be able to have children; it also makes her react badly to different types of medications and affects her in other ways. We found a doctor in Houston, TX who performs surgery to solve the issue with the long thoracic nerve, and he performed the surgery last weekend, which fixed the pain that she was having. Needless to say, all of these medical situations were expensive and time consuming, so my focus has been taken away from EZ-AI.
My grandson was diagnosed with autism. We have been helping his parents with finding help to learn how to handle his outbursts and what causes them. I am one of the few people that he reacts to on any sort of human level. This has consumed a lot of our financial resources and my time. I expect that it will continue to do so, but we are doing what we can to try to assist his parents and siblings.
My wife has had many medical conditions over the past few years, and had a three-level neck fusion earlier this year. She is recovering, but it hasn't been easy for her, and she is not able to help as much with the grandchildren or our daughter. This too has consumed a lot of time and money.
All of this has left me trying to recover from these issues. I have been trying to come up with something that my son (my grandson's father) can do that he will excel at and that will also allow him to be home more. We have decided to start a robotics school directed toward home-schooled students. There is a large market for this and really nothing exists that fits well enough, so we have developed a curriculum and a way to work with the students over the internet. At first we were going to focus on a brick-and-mortar school, but we have decided that with technology being what it is now, we can do this virtually instead, which allows us to reach many more students worldwide and also reduces costs drastically. This is where my robot time has been focused over the past 3 months or so.
EZ-AI was my other son's project (the programmer son), and we had a short meeting last weekend about the issues that we see with this product. EZ-AI is a good idea, but we lack the funding to make it a really cost-effective product. The reasons I say that are as follows.
1. Many other products have become available over the last 2 years which make a lot of the features that we were including in EZ-AI publicly available. These include Cortana, OK Google and Siri from the desktop, and Google Home and Amazon Echo as hardware solutions. Every one of these has one thing in common that we can't match: you pay a one-time charge (either for a computer OS or a hardware device) and then you can use these devices as much as you want without being charged for their use. This kills EZ-AI because we will never be able to get the product anywhere near this price point. We have to pay for services, and as such have to pass this along to the users of our services.
2. There are some laws that were passed (one in the US and one in England) that make me not want to be in the world of cloud services. We were not going to store what someone does on these devices at all on our authentication server. These laws require online service providers to retain the activity of their users for a minimum of one year. I don't like this in the slightest, as it would require me to do so and then possibly have to turn over a user's activity to the authorities. This prevents me from pursuing this type of business.
3. There are laws against recording youth without their parents' consent. This means that if our platform were to be used by anyone under 18 years of age without their parents knowing about it, and we identified the person using facial recognition (which requires recording), then we could be liable. I really don't like this either. It might never happen, but it could open me up to litigation that I wouldn't be able to afford. I believe this is why the devices that are currently out don't contain cameras and don't do facial recognition. That was the biggest thing that made us different from these other commercially available products, and this kills our advantage.
4. API.AI got bought out by Google. I am not a fan of Google. I like my privacy a bit too much, and after learning more about the harvesting of personal data that Google does, I have quit using their services. Their purchase of API.AI also leads back to point 2. If anyone is going to be asked to provide information to a government source, it is going to be Google, and the use of services that they provide will then force those who base products on those services, by default, to also have to provide this information.
5. The market for this type of product has become difficult because of the huge players in it now. Apple, Google, Microsoft, Amazon and IBM would be my main competition, as well as the main services that I would use. This becomes a losing fight quickly, simply because there is no way that I can compete against all of these players. Add to this that almost all of these companies are now cooperating with each other to further these types of functions, and I seriously can no longer compete.
It would be one thing for me to put together something that I can use and handle the costs associated with my own use. It becomes something quite different to make a product used by others. Developers can normally establish relationships that allow the APIs I used to be used without cost, but once you publish these for others to use, there are costs. The DIY community tries to keep costs as low as possible, which I totally understand and do myself. There are not a lot of people willing to pay monthly for an Alexa-type device that can be put into a robot. The cost would be about $25.00 a month, when you can now buy an Amazon Echo Dot for $40.00 and have unlimited use of it (if you have a Prime account, which carries other advantages with it). I don't see the market being as open as it was even 6 months ago.
Because of all of these reasons, I have turned my attention toward a virtual school that allows anyone to enjoy live broadcasts teaching EZ-Robot-based topics initially. This will allow people to participate in the live shows via IRC, Mumble, Skype or phone. I have worked with Noah from altaspeed to get the streaming to the internet set up in a way that will allow people to enjoy the stream on Twitch, YouTube Live, RTMP and others. I have two entertaining hosts for the class and have the streaming figured out. I do have some equipment to purchase so that I have redundancy and very high-quality sound for the videos. Really, sound is the most difficult part of the entire setup. I should have the studio set up by February and we will start producing some test shows at that time. We will be housing our content for 1 year on a cloud server at digitalocean.com. We will keep archives locally also. I am currently working on getting a Kodi channel up and running and testing other streaming services like ScaleEngine and Ustream to see what they can offer that I can't do myself.
Students who buy the classes would be able to participate in the classroom conversations and also get access to previously recorded classes. We will also start a show which will delve into non-EZ-Robot-specific robotics topics. There would be news segments, product reviews, answers to questions and other things. We would have guest speakers on to discuss topics as we found people willing to participate. There are a few other ideas floating around in my head, but this is a good enough start. From there, we would do other classes on other topics used in robotics, like 3D design and manufacturing, programming and artificial intelligence.
We plan on doing shows on what we have done with EZ-AI and how it works which will allow others to do the same thing that we did. It will probably be about 6 months before we will start broadcasting. We will be setting up a patreon page for those who want to assist in getting this off the ground.
I was using another free service whose link just went dead, I assume for much the same reasons. I was unaware of the retention laws you spoke of. That is scary.
I hope and pray everything turns out well with your family, out of my 10 grandchildren, I have 2 that are autistic so I understand the challenges.
Will talk more off line.
RichardZ
You could also check out https://github.com/itsabot/abot/wiki/Getting-Started
Just a suggestion for anyone interested.
I hope all the best for your future projects and especially for your real life!
This is an interesting topic ...
https://www.indiegogo.com/projects/jibo-the-world-s-first-social-robot-for-the-home
The robot will interact with kids, family and friends of Jibo's owners, so the question is how they handle that law.
If my kid goes to a friend's house and their robot, e.g. a Jibo, records videos or photos and uploads them to the cloud, and then their Asia support center guru downloads them to his laptop for work purposes and the laptop ends up on the black market... that is a serious issue, but I think most people gave up their privacy when they started using FB, G+, Twitter, Snapchat, Instagram, etc.
Another example: the Nest Camera.
https://nest.com/support/article/How-does-Nest-Cam-store-my-recorded-video
I've been to a friend's house several times; he uses Nest cameras to monitor the house, and one of them is in the kids' playroom. I didn't know until he grabbed a few screenshots and sent the pictures to me (my kids and his kids). So the question is how Nest handles that, especially when you can hide cameras for security protection...
Do you think I can sue Nest?
Really, I only see this getting worse, because people are willing to forfeit their right to privacy and governments are willing to take more of these rights away. The law in the US passed without a vote of the House or Senate; it passed by them ignoring it. The UK actually voted on it and it passed. In any event, with more and more going to the cloud, I really think these types of laws are going to do one of two things. One is that they will prevent people like me from offering anything stored on the cloud, which kills production. The second is that they will slow down people's acceptance of the cloud. Both might not be bad, I don't know. I just don't have the energy to investigate what it would take to keep myself not liable, so it's not worth it to me to take a chance. Others can get beat up by this until it is cleared up through litigation.
If you are a huge company that can afford the litigation and outlast those who are suing, great. Many people can outlast me.
I don't think people are aware of these issues when they buy/own the technology, and I believe 99.9% don't care.
Check the Arkansas case:
http://www.cnn.com/2016/12/28/tech/amazon-echo-alexa-bentonville-arkansas-murder-case-trnd/
It is only a question of time... in the end they will surrender the information.
Where is the justification borderline for crossing privacy rights?
MSFT had plans for an Xbox console with Kinect built in, always on, with an internet connection required. Can you imagine a nice sensor with an infrared camera, microphone array, skeleton tracking, etc. in your house, accessible (under legal pressure)?
Another point: how does the law apply to a foreign company, e.g. EZ-Robot in Canada or Buddy Robot in France?
Chris
I have started looking at the packets that the Echo Dot is sending to Amazon. So far I see nothing coming from the IP that the Echo is on unless it is activated. I might leave this sniffer on for a while to see if the IP sends anything in the middle of the night. I know this is going to be a subject of some podcasts I watch next year. I think more and more people are getting their hands on these devices (especially with the Dot selling for $40.00), and people will be doing a more thorough examination of what it is doing.
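For anyone who wants to try the same kind of passive monitoring, here is a rough sketch using scapy that logs any traffic to or from the Echo Dot's LAN address. The IP address is a placeholder for whatever your Echo gets from DHCP, and it needs to run with root privileges on a machine that can see that traffic (e.g. via a mirrored switch port).

```python
# Rough sketch: log every packet to/from the Echo Dot's (assumed) LAN address.
# Requires root and a capture point that actually sees the Echo's traffic.
from datetime import datetime
from scapy.all import sniff   # pip install scapy

ECHO_IP = "192.168.1.50"      # placeholder: your Echo Dot's LAN address

def log_packet(pkt):
    # Print a timestamped one-line summary of each packet seen.
    print(f"{datetime.now().isoformat()}  {pkt.summary()}")

# The BPF filter limits capture to packets involving the Echo's IP only.
sniff(filter=f"host {ECHO_IP}", prn=log_packet, store=False)
```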
@Kamaroman68, send me a text and I would be happy to talk. I am off work this week. I saw you texted a while ago. I forgot to get back to you. Sorry man. Yes, I definitely know that you understand the road I have traveled. Will talk soon.
Regarding the Echo, I assume this will not allow your EZ-AI to continue as planned. Do you think the Echo will become an issue, as is thought? If not, do you think an Echo and an EZ-B will be able to be interconnected?
Ron
PS, email sent
First, a little information about the general consensus on the Echo vs. the Google Home. The Google Home will be better at general information that isn't "real world information". For example, the question "What is the Millennium Falcon?" would be better answered by Google Home right now. Questions like "Who is Harrison Ford?" would return similar results on both, as would questions like "What is the weather?". Tasks like reading Gmail or setting up lists and such are better on the Echo right now, simply because it is older and has had more development done for it. IFTTT allows you to set up a lot of things like "when X happens, do Y" between different systems, and the Echo has more built for it in IFTTT for the same reason. Buying things would be better through the Echo right now, and probably forever if you purchase things through Amazon.
Again, I haven't tried this yet...
The Echo has a switch on top of it that allows you to bypass the wake-up words. Currently the wake-up words are "Echo", "Amazon" and "Alexa". There isn't a way to change these, but by triggering the switch, you are able to speak and have the Echo hear what you are asking. This could allow the EZ-B to be attached to the Echo (after some hacking) so that it starts the listening, with the keywords handled through the EZ-B instead of through the Echo.
With that said, the voice coming from the Echo will be the Amazon Echo voice and will not exactly match your robot's other statements. Some may see this as problematic. One of the advantages of EZ-AI is that the voices would match, because everything would have been passed back to ARC to speak.
Both the Echo and Google Home go to a single source for their information. The main complaint about EZ-AI was that it was slow. I have to describe the paths that these devices take to finally return the information for you to see why EZ-AI was slower than the Echo, Siri, Cortana or Google Home.
EZ-AI
1. The recording of the question happened in the EZ-AI plugin.
2. The recording was then sent to the EZ-AI server.
3. The EZ-AI server would start a thread that sent a message to the EZ-AI Authentication server. The Authentication server would validate that this was a valid user, determine which services were paid for by this user, and send a response back to the EZ-AI server saying it was okay to process the request.
4. While that was happening, a separate thread sent the request off to API.AI to see if it could convert the speech to text and then process the request (this was successful about 80% of the time). If the request could be processed, API.AI would classify the text, run through its logic, and return the resulting text to the EZ-AI server.
5. If the Authentication server checks from the other thread showed a valid user, the text would be returned to the plugin, which would place it into a variable to be spoken.
6. If API.AI couldn't process the request, it would return a failed attempt back to the EZ-AI server.
7. If the Authentication server checks showed a valid user who had paid for Nuance services, the recorded audio would be sent to Nuance, which would perform the STT (speech-to-text) conversion (this had a 99.97% success rate in the beta). The resulting text would be returned to the EZ-AI server, which would then send it to API.AI.
8. If API.AI determined that this was a question it didn't have the answer to, it would return a failure to the EZ-AI server.
9. The EZ-AI server would then check whether the user had access to Wolfram|Alpha from the checks it did earlier. If so, it would submit the text to Wolfram|Alpha. This was about 25% of the requests from the beta.
10. The Wolfram|Alpha engine would run, gather a lot of information, and return it to the EZ-AI server.
11. The EZ-AI server grabbed the spoken-text data and passed it back to the EZ-AI client.
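To show how much hopping that is, here is a rough sketch of the fallback chain in Python. Every URL, field name, and helper function below is a placeholder of my own, not the actual EZ-AI code, and the real server ran the authentication check on a separate thread while the API.AI call was in flight; this sketch collapses that into sequential calls for readability.

```python
# Simplified, hypothetical sketch of the EZ-AI request flow described above.
# All URLs, field names, and stub functions are placeholders, not the real API.
from typing import Optional
import requests

def authenticate(user_id: str) -> dict:
    """Ask a (hypothetical) EZ-AI Authentication server which services the
    user has paid for, e.g. {"valid": True, "nuance": True, "wolfram": True}."""
    r = requests.post("https://auth.example.com/check", json={"user": user_id})
    r.raise_for_status()
    return r.json()

def apiai_from_audio(audio: bytes) -> Optional[str]:
    """Send raw audio to API.AI; return answer text, or None if it failed
    (this path succeeded about 80% of the time in the beta)."""
    ...  # placeholder stub

def nuance_stt(audio: bytes) -> str:
    """Speech-to-text via Nuance (about a 99.97% success rate in the beta)."""
    ...  # placeholder stub

def apiai_from_text(text: str) -> Optional[str]:
    """Ask API.AI to classify and answer already-transcribed text."""
    ...  # placeholder stub

def wolfram_alpha(text: str) -> str:
    """Fall back to Wolfram|Alpha (about 25% of beta requests ended up here)."""
    ...  # placeholder stub

def handle_request(user_id: str, audio: bytes) -> str:
    """Orchestrate the fallback chain and return the text the robot speaks."""
    services = authenticate(user_id)
    if not services.get("valid"):
        return "Not authorized."

    # First attempt: hand the audio straight to API.AI.
    answer = apiai_from_audio(audio)
    if answer:
        return answer

    # Fallback: Nuance STT, then API.AI on the text, then Wolfram|Alpha.
    if services.get("nuance"):
        text = nuance_stt(audio)
        answer = apiai_from_text(text)
        if answer:
            return answer
        if services.get("wolfram"):
            return wolfram_alpha(text)

    return "Sorry, I couldn't process that request."
```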
As you can see, there was a lot of hopping around due to trying to provide the most accurate results possible. Sometimes the results (if the request went through the entire chain of events) could take up to 20 seconds to come back. This was due to transmission times and the massive amount of data that Wolfram|Alpha provided; it could take 15 seconds for Wolfram to retrieve the information. That feels like a very long time, but it returned accurate information. It could answer things like "What is Myotonia Congenita?", which is amazing, but very few people would have asked that type of question. It does make it somewhat useful for medical professionals, but what is the market?
A question to the Echo of "How far is the earth from the moon?" sent or received 214 packets to and from the same IP address on different ports, and took ~10 seconds to complete from the first packet to the last. The Echo doesn't wait until you are finished speaking before it starts sending packets to its servers for processing. Of those ~10 seconds, about 8 were me asking the question and roughly 2 more were spent finishing the processing of the request. This is because it had already figured out and classified most of the text before the statement was completed. I had no way to do this with the technologies that we were using. The downside is that you can't ask things like "What is Invokana?", making this really more of a digital assistant, or an Amazon sales point in your home, than anything else.
So, for speed, the Echo is better than anything I could ever develop, simply because it goes to one location and can start processing before the question is even completed. It provides the number one thing requested in our testing and in the conversations I had with various users, which was digital assistant features. It covers about 80% of what we could do from a knowledge-engine perspective, and it has a huge community of developers working to improve it daily. The only thing left is to get the robot to trigger it, which could be done by hacking the Echo to allow the EZ-B to electronically or mechanically activate the switch on top of the Echo. What you would be missing is really accurate data on more difficult subjects, a consistent voice, controlling the robot by voice (i.e. "move forward 25 cm") and data being returned to the ARC application itself.
I will keep working on EZ-AI in my spare time, but there just isn't a market for this product outside of the robot DIY community, and the robot DIY community isn't large enough to provide the funding required to make it cost effective. So I will just keep working on it for myself and see where things stand later on.
I'm so sorry both your family and your personal challenges are causing you all such trouble. Many here have been on that same road in our own lives, so we totally empathize with you. Not that I am comparing, but I also have a grandson with autism and have also had to stop more than one business venture because of economic issues or competition. It's hard to walk down a different path after you've put so much into something or someone. My personal wishes and strength go out to you and your family to move through these times. I hope the next year sees better times for your family and your ventures. It sounds like you have a good plan and lots of the right kind of help to get you all there. Peace.