EZ-AI development is on hold right now, well kind of...
We are in the process of working with some services that will make the capabilities of EZ-AI far better than they currently are. These include Wolfram|Alpha and IBM BlueMix/Watson. Speech recognition will be performed through Nuance Cloud services, and advanced vision features will be available through OpenCV. A quick search of these services will show you the end goal of what we are doing. These will be in the Rafiki project, which is the primary focus at this time for CochranRobotics. We will release a limited-use version for free which will replace EZ-AI. All of the current features of the EZ-AI database will be available through the new version, and all of the services provided by EZ-AI will be available through REST queries and exposed services. This will allow ARC plugins to use these services.
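The actual REST API hasn't been published yet, so purely as a sketch of what "exposed services an ARC plugin can query" could look like: a minimal Java client that builds and issues a GET request. The host, port, endpoint path (`/ezai/ask`), and parameter name (`q`) are all hypothetical, not the real EZ-AI interface.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class EzAiRestSketch {

    // Build the query URL; the endpoint and parameter name are invented for illustration.
    public static String buildQueryUrl(String host, String question) throws Exception {
        return "http://" + host + "/ezai/ask?q=" + URLEncoder.encode(question, "UTF-8");
    }

    // Perform a plain GET and return the response body as a string.
    public static String ask(String host, String question) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(buildQueryUrl(host, question)).openConnection();
        conn.setRequestMethod("GET");
        StringBuilder body = new StringBuilder();
        try (BufferedReader in =
                new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }
        return body.toString();
    }

    public static void main(String[] args) throws Exception {
        // No server to talk to here, so just show the URL a plugin would request.
        System.out.println(buildQueryUrl("rafiki-pod.local:8080", "what is the weather"));
    }
}
```

Any language that can make an HTTP request could do the same, which is what makes the service platform-independent.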
There have been huge changes in what is possible since I first started working on EZ-AI. This shift toward improved technologies has made it necessary to rework EZ-AI so that it can continue to grow and mature.
We are also toying with the idea of allowing programmers to write their own business logic layer within Rafiki. This would let a programmer use the core services of Rafiki/EZ-AI and write their own applications with the data that is returned. It will probably be a while before this is implemented, but it is something we are trying to make happen.
I have probably said too much, but wanted to give you all a picture of what is happening and why EZ-AI isn't being worked on directly. We hope to have our new AI available around the end of the year. There are multiple developers working on this while I come up with solutions to other problems that arise.
As far as Rafiki goes, the pods are functioning great and additional code/abilities are being added almost daily. The models for the pods are being tweaked to expose the HDMI, USB, and network ports to the outside of the case. This will allow someone to connect a mouse, keyboard, and monitor to the pod and use it as a computer if they would like. The Rafiki Bot is about 1/3 of the way printed. I am making modifications to the models and reprinting some of the parts as needed. There will be 6 subsystems on this robot. 3 of these subsystems have been written and are ready to use. The other 3 can't be worked on until more of the Rafiki Bot has been printed; they are all very similar and handle motor control. I hope to have these ready in a couple of weeks. I should be able to show a demo of the Rafiki Bot in about a month, and then all of the robot programming starts. I will work on the charger base shortly after the robot has been completed and the pods are all working with their full functionality.
One more thing on EZ-AI... As a part of this rewrite, you will just need to have Java installed on your computer to install and use EZ-AI. The days of the huge install will be behind us so it should make it far better in the long run. The other thing that this allows is robot platform independence. I will be working on modules in ARC to allow the features of EZ-AI to be accessible far more easily. This will probably not be worked on until December at the earliest.
Okay, going back to my robot cave. Have a great day all.
That is sort of what he said. More like: the initial release will be $30/month. There is a framework built in that would allow sophisticated users to swap out the components for free ones that do the same thing, perhaps with less accuracy, as long as they have APIs.
In another thread (about Google opening up their voice recognition API) David mentioned that a future release of EZ-AI might take advantage of that and a new API that Wikipedia is releasing to reduce or eliminate the monthly cost, but that would be a future thing after EZ-AI is released with its current speech and knowledge engines.
Alan
EZ-AI uses services for its information. Some of these services cost money to use; some are free, especially if you are signed up as a developer.
api.ai costs money to use if you make your service public for people to use. api.ai does allow a developer license for your own use, but if you make your service public for others, they will start to charge you or shut down your service.
Nuance Cloud costs money to use if you are doing REST calls (basically using it from a computer). You can use this service for free for a limited time as long as you don't make your service public.
Wolfram|Alpha is free to use if you sign up to be a developer. Once you publish your project for others to use, this costs money.
Google's speech stuff is free to use right now if you sign up as a developer. It will cost money at some point, but who knows when.
There are cookbook APIs, Bible APIs, map APIs, email APIs, and many, many other APIs out there that are either free for a developer or free for a limited time.
EZ-AI uses CLASS files to interface with these APIs. If someone needed an AI that associates dog tail length with the type of dog, this could be done in a CLASS file, which EZ-AI could then use. It is possible for someone to publish this CLASS file, which would then allow you to use these features from within EZ-AI. It might cost money to use, or it could be free. The billing for the use of this CLASS file would have to be handled by the person making it. EZ-AI is simply a way to leverage these CLASS files through an application.
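The actual CLASS file contract hasn't been published, but the idea above could be sketched as a plain Java interface that EZ-AI dispatches questions to. The interface and method names here are my own invention for illustration, not the real EZ-AI API; the dog-tail module is a stub with a canned answer rather than a real data source.

```java
// Hypothetical contract a CLASS file might implement; the real EZ-AI
// interface has not been published, so these names are illustrative only.
interface KnowledgeModule {
    boolean canHandle(String question);   // does this module understand the question?
    String answer(String question);       // produce a reply
}

// The dog-tail example from the post, as a self-contained stub module.
class DogTailModule implements KnowledgeModule {
    @Override
    public boolean canHandle(String question) {
        return question.toLowerCase().contains("tail");
    }

    @Override
    public String answer(String question) {
        // A real module would query a data source; this returns a canned reply.
        return "A tail around 12 inches long is typical of a Labrador Retriever.";
    }
}

public class ClassFileSketch {
    public static void main(String[] args) {
        KnowledgeModule module = new DogTailModule();
        String q = "What breed has a 12 inch tail?";
        if (module.canHandle(q)) {
            System.out.println(module.answer(q));
        }
    }
}
```

Under this kind of design, publishing a new capability is just compiling and distributing another class that implements the same interface.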
To get EZ-AI, you will be charged $30.00 for the first month's service, plus the cost of the server and shipping. This all comes to roughly $120.00. You would then be able to modify your instance of EZ-AI to use only the services you want, or to use someone else's services by adding their CLASS files and turning off the default ones that come with EZ-AI.
If someone writes a CLASS file that uses Wikipedia (an example that is free), they could use it instead of the Wolfram CLASS file. Because that API doesn't do speech-to-text, it wouldn't replace the Nuance Cloud API, so you would still be charged for that service. If someone writes a CLASS file that uses Google's speech stuff, and if it is still free by the time that happens, you could use that CLASS file to replace Nuance Cloud and would no longer be charged for its API. There is still the $10 charge for api.ai; this is the core service that makes EZ-AI work. That isn't to say there isn't something else available that could do some of these things, and if someone wrote a CLASS file pointing to a different API, this too could be replaced, reducing the cost you would incur for using our api.ai instance.
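Assuming the kind of module contract sketched earlier, swapping a paid service for a free one amounts to replacing one implementation with another behind the same interface. Again, the class and method names below are illustrative stand-ins, not actual EZ-AI code, and the two engines are stubs rather than real API clients.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ModuleSwapSketch {

    // A minimal stand-in for a knowledge-engine contract; hypothetical names.
    interface KnowledgeEngine {
        String lookup(String question);
    }

    // Stubs for the two engines discussed above; real ones would call the APIs.
    static class WolframEngine implements KnowledgeEngine {
        public String lookup(String q) { return "[Wolfram|Alpha answer for: " + q + "]"; }
    }

    static class WikipediaEngine implements KnowledgeEngine {
        public String lookup(String q) { return "[Wikipedia answer for: " + q + "]"; }
    }

    public static void main(String[] args) {
        // Default configuration: the paid Wolfram|Alpha engine.
        Map<String, KnowledgeEngine> engines = new LinkedHashMap<>();
        engines.put("knowledge", new WolframEngine());

        // Swapping in a free CLASS file is just replacing the entry.
        engines.put("knowledge", new WikipediaEngine());

        System.out.println(engines.get("knowledge").lookup("height of Mount Everest"));
    }
}
```

The rest of the system never needs to know which engine is behind the "knowledge" slot, which is what keeps the billing a per-service decision.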
I believe it is reasonable to charge $5.00 for the use of EZ-AI even if you don't use any of the other services, but right now we are making it free to use if you don't use any of the APIs that come with EZ-AI. You would still have to pay roughly $120.00 for the hardware, shipping, and first month's use of these services. Even if on the first day you decided to use other CLASS files instead, you would still be charged the same roughly $120.00.
So, why was the other version of EZ-AI free to use? Simply, I ate the costs associated with using the APIs, or found the lowest-cost (sometimes free) versions of them. We proved that what we wanted to do was possible, even though we didn't do it in the most efficient manner (as was evident from the brutal install process). We have now done things far more efficiently, focusing on the end-user experience. If we had a choice between API A and API B, and API B offered a better user experience, we went with API B. This could mean that the data was more reliable, the speech recognition was better, the install was far simpler, or whatever. Using best of breed costs money. Designing EZ-AI so that someone has choices to make it what they want costs money. Ultimately, if you choose to strip it down to its bare nuts and bolts, that is your call and you have the ability to do so. If you decide you want to use $100 a month in APIs, that too is your call. We provide the services we believe are best for what people want, and give the option to the end user.
By providing a framework, we have allowed this product to be used in professional settings like hospitals or businesses, as well as by hobbyists. This is like your cell phone: you can use it as a phone (which I rarely do anymore), or as a calculator, web browser, game machine, or anything else. EZ-AI has the same kind of flexibility depending on which CLASS files you add to it. If someone wrote a CLASS file to make phone calls and you use it, you too can make phone calls...
I hope this is logical. Let me know if you have any questions.
Developers are lucky guys; in some workplaces they eat for free too ...
i h**e those guys
!
I've been developing since I was born, so I must be a developer?
Only if you sign up to use various APIs.
With the Beta test starting soon, and because we are ahead of schedule, I would like to pose the question here that I posed in the Google Group for EZ-AI. What features would you like to see in an AI?
Examples might be:
- Home automation
- Launch applications upon request
- Remind me to do things at specific times
- Tell me the weather, sports, news...
- Answer questions when asked
There are a ton of things on our list to program in time. Some of the things listed are already finished, and there are a lot more to come, but we would like your input.
Thank you David
I would like to see choosing a music track or artist/album from your music library, similar to what I did with ARIEL (see time 4:20 in the video).
Tony
Leave messages for others. When people check in with EZ-AI, it plays message back.
Thank you both for your comments. The messages piece is written, and we have media playback on the list. Thank you for the video; seeing how yours works gives us some ideas.