EZ-AI development is on hold right now, well kind of...
We are in the process of working with some services that will make the capabilities of EZ-AI far better than they currently are. These include Wolfram|Alpha and IBM Bluemix/Watson. Speech recognition will be performed through Nuance Cloud services. Advanced vision features will be available through OpenCV. A quick search of these services will show you the end goal of what we are doing. They will be part of the Rafiki project, which is the primary focus at this time for CochranRobotics. We will release a limited-use version for free which will replace EZ-AI. All of the current features of the EZ-AI database will be available through the new version. All of the services provided by EZ-AI will be available through REST queries and exposed services, which will allow ARC plugins to use them.
A huge amount has changed in what is possible since I first started working on EZ-AI. This shift in technology has made it necessary to rework EZ-AI so that it can continue to grow and mature.
We are also toying with the idea of allowing programmers to write their own business logic layer within Rafiki. This would allow a programmer to use the core services of Rafiki/EZ-AI and write their own applications with the data that is returned. It will probably be a while before this is implemented, but it is something that we are trying to make happen.
I have probably said too much, but wanted to give you all a picture of what is happening and why EZ-AI isn't being worked on directly. We hope to have our new AI available around the end of the year. There are multiple developers working on this while I come up with solutions to other problems that arise.
As far as Rafiki goes, the pods are functioning great and additional code/abilities are being added almost daily. The models for the pods are being tweaked to expose the HDMI, USB, and network ports to the outside of the case. This will allow someone to connect a mouse, keyboard, and monitor to the pod and use it as a computer if they would like. The Rafiki Bot is about 1/3 of the way printed. I am making modifications to the models and reprinting some of the parts as needed. There will be 6 subsystems on this robot. 3 of these subsystems have been written and are ready to use. The other 3 can't be worked on until more of the Rafiki Bot has been printed; they are all very similar and handle motor control, so I hope to have them ready in a couple of weeks. I should be able to show a demo of the Rafiki Bot in about a month, and then all of the robot programming starts. I will work on the charger base shortly after the robot has been completed and the pods are working with all of their functionality.
One more thing on EZ-AI... As a part of this rewrite, you will just need Java installed on your computer to install and use EZ-AI. The days of the huge install will be behind us, which should make things far better in the long run. The other thing this allows is robot platform independence. I will be working on modules in ARC to make the features of EZ-AI far more easily accessible. This probably won't be worked on until December at the earliest.
Okay, going back to my robot cave. Have a great day all.
I just ran a test of running the server for EZ-AI on a Beaglebone Black. It was a bit slow compared to what I have been testing on, but it was definitely still quick enough for a couple of robots in a home. That is good news for sure.
We haven't had a lot of time to optimize the code yet, so this is very promising from a cost perspective.
We conducted tests on the Beaglebone Black today as the EZ-AI server. This little single-board computer did pretty well with multiple devices attaching to it. We asked question after question from 4 different devices (robots), and there was an average of about a 5-second delay from when the user finished speaking to when the results were returned. We tested this with a tablet mic, a headset mic, and a dedicated desktop USB mic.
The server was using only the 4GB 8-bit eMMC on-board flash storage. This is really cool. Not having removable storage helps us by reducing cost and providing a board that just has to be flashed, which allows for a drop-in solution.
To install the client, you simply install Java and then launch the jar file, e.g. java -jar c:\rafikipod\rafiki_pod_pi.jar on Windows. The jar can be anywhere your machine has rights to see. You may have to change permissions depending on how we deliver the jar file. This has been tested on Mac, Windows 8.1, Windows 10, Ubuntu, and Debian. The client searches your network (based on the first 3 octets of your IP address) for the server, which takes about 1 minute to complete. If you have a more advanced network configuration, you can specify the IP address of the server in the config file. We are working on ways to identify the IP address for you more easily, but this is not our primary focus.
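For anyone curious what that network search looks like, here is a minimal sketch of scanning a /24 subnet for a server. This is an assumption about how such discovery could work, not the actual EZ-AI client code; the class name, port, and timeout are made up for illustration.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class SubnetScan {
    // Build a candidate address from the first three octets of the
    // local IP (e.g. "192.168.1") plus a host number.
    static String candidate(String prefix, int host) {
        return prefix + "." + host;
    }

    // Try to open a TCP connection with a short timeout; true on success.
    static boolean isReachable(String ip, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(ip, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    // Walk hosts 1..254 and return the first responder, or null if none.
    static String findServer(String prefix, int port) {
        for (int host = 1; host <= 254; host++) {
            String ip = candidate(prefix, host);
            if (isReachable(ip, port, 200)) return ip;
        }
        return null;
    }
}
```

With a 200 ms timeout per address, a full scan of 254 hosts tops out around 50 seconds when nothing answers, which lines up with the roughly one-minute search time mentioned above.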
We will be working on an ARC specific client shortly. We will build a plugin for ARC which will allow ARC to drive the use of this client. I expect this to be finished in the next month or so.
The client will notify you when you can ask questions and when it is working on getting an answer to the question you asked. On our system, this is done in the form of a light ring.
We will be working on the billing piece of EZ-AI shortly. We are spending this week finishing up code optimization and some user friendly type things.
EZ-AI uses the Nuance cloud for speech recognition and speech-to-text. The resulting text is either processed locally for commands like "Remind me to buy eggs" or, if the classification isn't something that we expect, passed off to Watson and Wolfram|Alpha. The results are returned and, at this time, spoken through the client. The ARC skill plugin will just place this information into a variable that can be monitored for changes, and when changes happen the text can be spoken.
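The local-first routing described above can be sketched in a few lines. This is only an illustration of the idea; the class names, categories, and keyword-based classifier here are stand-ins, since the real system classifies with an NLU service rather than string matching.

```java
public class QueryRouter {
    // Toy stand-in classifier: a real system would use an NLU service.
    static String classify(String text) {
        String t = text.toLowerCase();
        if (t.startsWith("remind me")) return "REMINDER";
        if (t.startsWith("what time")) return "CLOCK";
        return "UNKNOWN";
    }

    // Known commands are handled locally; anything else is handed off
    // to the external knowledge services (Watson / Wolfram|Alpha).
    static String route(String text) {
        switch (classify(text)) {
            case "REMINDER": return "local:reminder";
            case "CLOCK":    return "local:clock";
            default:         return "remote:knowledge";
        }
    }
}
```

The point of the pattern is that cheap, predictable commands never leave the house, and only open-ended questions incur a round trip to the paid cloud services.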
We will be adding features over time. This is the really cool part about the EZ-AI server now... Updates to add new features to EZ-AI all happen on the server. You will be notified if there are new updates to be downloaded. If there are, you click a link and the updates will be downloaded and be applied. The server will reboot and when it comes back online, you are ready to use EZ-AI again.
This is a huge step forward for this product. By doing things this way, we are able to provide a product that we are confident will work. If you would like to provide more substantial hardware for larger installations, this can be done. If you have a medium-sized installation (5-20 robots all running EZ-AI at the same time), we will offer a different board (still a single-board computer) with a quad-core ~1.6 GHz processor, 2GB RAM, and 8GB flash storage for a little over double the cost of the Beaglebone Black. Larger installations are available but would require some conversation to ensure that we size the server right for the situation.
The server will come with a CochranRobotics case to protect it from damage. We will be assessing the cost of the monthly fee very shortly. This will be based on the number of transactions you would be making per month. I will be posting videos in the next month or so of the new EZ-AI working with and without ARC.
If someone wanted to run this on other equipment, such as computers or devices other than robots, that is definitely possible. The clients will be downloadable. The server will be distributed on a flashed single-board computer for the most part.
There is still some development to do, but the core is in place and functioning great.
Just an update to EZ-AI here...
We have discovered a new software product that looks promising from the outside. We had meetings with their team and it seems to be a great fit for EZ-AI. They are excited about what we are doing and are partnering with us. The new software product promises to reduce our dependence on some other products, which will lower the cost if successful. We are going to be working on a Java-based client for this vendor and will share our work with them. [EDIT - I forgot to mention this:] In return, they have offered to have their linguists convert our services to use multiple languages. A win on both fronts, I think.
This product is a language processor and classifier which will allow us to reduce our dependence on Nuance and Watson. It understands conversations so saying "I want to hear some Classic Rock" and then saying "Tell me about this artist" is possible. There is a decay period for conversations which allows you to move from one conversation to another pretty easily. We will still use our knowledge system to process the request but this would allow us to not have to track the conversations locally as the conversation would be tracked by this service. This would also allow us to link knowledge. For example, showing a picture of Abraham Lincoln to the robot could then spawn a conversation about Abraham Lincoln. Conversations are a huge addition to what EZ-AI is able to do.
We also met with a local home builder who wants to put our products in new houses that he builds. He has some really cool ideas about features that he wants added to make this happen. This only improves our product and its capabilities. There have been some others who have offered up some ideas that are really cool that we never thought of. One of these is a feature that would alert the user when they got an email from a particular person, or an email with a specific subject. We have the ability to check emails developed, but this type of feature makes the platform much smarter. I personally never used our email feature due to the huge number of emails that I get on a daily basis, but there are some from certain people that are always viewed due to their level of importance to me. I suspect that this is the situation for most people running a business.
We are continuing our tests using very small single-board computers as the main brain for EZ-AI. These tests have been going great. We are also fine-tuning the logic used to tell when someone has stopped speaking. This is a bit more difficult on Linux than it is on Windows, but the goal is to keep the cost as low as possible. Linux uses a much smaller volume scale than Windows, and that larger scale is what makes it easier to tell on Windows when someone has finished speaking. We looked at using Windows IoT for our pods for Rafiki, but not all of the devices that we use in our pods are supported by it. Ugh... Work is continuing here.
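The end-of-speech logic above boils down to counting quiet audio frames. Here is a minimal sketch of that idea, with made-up threshold numbers; the per-platform tuning problem is exactly that a workable threshold on Windows's wide volume scale doesn't transfer to the narrower levels reported on Linux.

```java
public class SilenceDetector {
    final double threshold;        // frames below this level count as silence
    final int quietFramesNeeded;   // how many quiet frames end the utterance
    int quietFrames = 0;

    SilenceDetector(double threshold, int quietFramesNeeded) {
        this.threshold = threshold;
        this.quietFramesNeeded = quietFramesNeeded;
    }

    // Feed one frame's volume level; returns true once speech has ended.
    boolean feed(double level) {
        if (level < threshold) {
            quietFrames++;
        } else {
            quietFrames = 0;  // speech resumed, reset the counter
        }
        return quietFrames >= quietFramesNeeded;
    }
}
```

Porting between platforms then becomes a matter of picking a new threshold (and possibly frame count) per audio backend, rather than rewriting the detection logic.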
An update on Rafiki: The trip to talk to the investor was okay. I spent my time working with my developers to identify any issues that we were having and possible solutions. My lead developer (my son) is in his final weeks of college. He has had to focus on school and work more lately. Once school is complete, he will be pushing development on EZ-AI and Rafiki. I found a couple of things that I wanted to change on the robot from this trip, mainly strengthening the neck motor and the mounting system for the wheels. Both of these changes have been made to the prototype. I also adjusted the arms a bit so that they are more pocketed when closed.
Another change that we made was that all of the subsystems now report back what they are when sent a "Q0" serial command. Serial ports can change on a computer. What was COM9 can become COM10, which is problematic for software that isn't smart enough to handle it. The test app that I created had this issue, so now I query each com port when the application starts up and use the results to set the port identification parameters. While doing this, I also added the ability to query the position of the motors. Sending a "Q1" now gets the pot reading from the first motor attached to the specific subsystem; "Q2" gets the other motor's position. This lets me know each position and calculate things like "move your left arm up 12 degrees". I will be adding this functionality to the client shortly.
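The startup scan described above is simple to sketch: ask every port "Q0" and remember which subsystem answered on which port. The serial I/O is stubbed out behind an interface here since I'm not showing the actual library; a real client would wire this to something like the jSerialComm library, and the names are illustrative only.

```java
import java.util.HashMap;
import java.util.Map;

public class SubsystemMap {
    // Stand-in for writing "Q0" to a port and reading back the reply
    // (the subsystem's name), or null if nothing answers.
    interface PortQuery {
        String queryQ0(String portName);
    }

    // Probe every port and build a subsystem-name -> port-name map, so
    // the software no longer cares whether COM9 has become COM10.
    static Map<String, String> identify(String[] ports, PortQuery q) {
        Map<String, String> bySubsystem = new HashMap<>();
        for (String port : ports) {
            String reply = q.queryQ0(port);
            if (reply != null && !reply.isEmpty()) {
                bySubsystem.put(reply, port);
            }
        }
        return bySubsystem;
    }
}
```

After the scan, code addresses subsystems by name and looks up the current port at the moment it needs to talk, so a renumbered port only costs one re-scan.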
We have developed the test software in a way that allows us to easily build the production software (using the same classes in production as in test). The client will eventually become invisible and will allow the user to simply pass information into a variable in ARC. The robot is looking for this variable and will process the command as soon as it sees it. The commands in ARC would be things like "Move left arm up 12 degrees". The goal of this system is to make it as easy as possible to program the robot to do anything that you want it to do. I think simple language that everyone can understand is the best way to accomplish this. The test software has worked great so far.
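As a taste of what handling a plain-language command involves, here is a tiny sketch that pulls the degree value out of a command like the one above. This is not the actual parser, just an assumption about one small piece of it, using the example commands from this post.

```java
public class CommandParse {
    // Return the first whole number found in a command string,
    // e.g. 12 from "Move left arm up 12 degrees"; -1 if none.
    static int degrees(String command) {
        for (String tok : command.split("\\s+")) {
            if (tok.matches("\\d+")) return Integer.parseInt(tok);
        }
        return -1;
    }
}
```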
We have started development on an EZ-AI client plugin for ARC. This will have all of the same features as the pod for Rafiki, but will be run through ARC. We haven't focused much on this because it will be so simple, and it really isn't needed until we get done with paragraph 2 in this novel.
We had a rather severe ice storm this week. I think Richard R and many others are now getting this storm (or will be shortly). This storm caused major power outages and a lot of damage. This has caused some delays along with the Thanksgiving holiday, but it was also a good time for us to take a break. I got to play lumberjack the past couple of days and there is more cleanup to do in my back yard. I should be back to working on Rafiki and EZ-AI this week and it will be nice to get back to it.
Okay, I'm done giving an update for now.
Great update David, thank you. Lots of information there, but the language processor update got my attention, and sounds like that will be a great upgrade/addition.
Just an update of sorts...
I put in about 1500 hours of labor in 5 months on this project. It turns out that doing this while working my normal daily job isn't very healthy. I visited my doctor a couple of weeks ago and found out that my health has gone downhill over the past 5 months. He told me that if I kept up this pace, I was risking a stroke. I decided to take a few weeks off to allow my blood pressure and other issues to get back in line. I have allowed myself to get a lot of rest over the past couple of weeks and am feeling a lot better, so hopefully the damage I caused myself wasn't permanent.
Holidays and end-of-year management responsibilities at my normal job have also cut into my time. My son has finished college (except for one easy class that he needs to complete online), so he has more time to focus on wrapping things up.
We have brought in another developer who is looking at adding another application to the project. I have mentioned it before but he is able to focus on this while others complete other parts of the project.
Right now, I put in only a couple of hours at most a day on the robot. I plan on spending more time on him this weekend. I have one issue to get figured out and then I can start focusing on SLAM for this robot. The code used to drive the robot got messed up somehow. I think I will probably just rewrite this code this weekend and go forward from there. The robot is almost all built again so after I get this piece done again, I will be able to hook up all of the subsystems again and go forward. Hopefully it won't be too hard to get SLAM implemented and I will be able to show a video of it all working together.
Have a great day all!
Dave
As much as I look forward to your AI for our robots, it's certainly not as important as your health. Life already passes us by very quickly, before you know it we are in our 60's (lol), tough to get going in the mornings and aches and pains that take forever to go away, if they ever do! No reason to push it any faster along. I myself want to be around to see my grandsons grow up. They all need a grandpa you know. So, kick back a little and smell the roses.
@David.... As Ted mentioned... We kinda' like you so take it easy so you'll stick around longer... We can wait for Rafiki... Besides, ez roboters are pros at waiting...
The past couple of weeks rest helped a lot. No huge worries, just have to pace myself. It's easy to forget how important sleep is. I have slowed down and let the young guys who can handle it do the major pushing. I have also spread things out more so not so much is on any single person.
I never slept more than about 6 hours a night until the last couple of weeks, when it increased to about 10. I know you can't catch up on sleep, but I sure have tried.