EZ-AI development is on hold right now, well kind of...
We are in the process of working with some services that will allow the capabilities of EZ-AI to be far better than they currently are. These include Wolfram|Alpha and IBM BlueMix/Watson. Speech recognition will be performed through Nuance Cloud services, and advanced vision features will be available through OpenCV. A quick search of these services will show you the end goal of what we are doing. They will be part of the Rafiki project, which is the primary focus at this time for CochranRobotics. We will release a limited-use version for free, which will replace EZ-AI. All of the current features of the EZ-AI database will be available through the new version, and all of the services provided by EZ-AI will be exposed through REST queries and exposed services. This will allow ARC plugins to use these services.
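To give a rough idea of what "REST queries" means for a plugin author, here is a minimal sketch in Java. The host name, port, and /query path are all invented for illustration; the real EZ-AI endpoints have not been published yet.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of how an ARC plugin might build a REST query
// against an EZ-AI server. Endpoint and parameter names are assumptions.
public class EzAiQuerySketch {

    // Build a GET request against an assumed EZ-AI server endpoint.
    static HttpRequest buildQuery(String host, int port, String question) {
        String encoded = URLEncoder.encode(question, StandardCharsets.UTF_8);
        URI uri = URI.create("http://" + host + ":" + port + "/query?q=" + encoded);
        return HttpRequest.newBuilder(uri).GET().build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildQuery("ezai-server.local", 8080, "what is the weather");
        System.out.println(req.uri());
        // A real client would then send it with HttpClient.newHttpClient().send(...)
    }
}
```

The nice part of a plain HTTP interface like this is that any client that can make a web request can talk to the server, which is exactly what makes the platform-independent clients possible.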
There have been huge changes in what is possible since I first started working on EZ-AI. This shift toward improved technologies has made it necessary to rework EZ-AI so that it can continue to grow and mature.
We are also toying with the idea of allowing programmers to write their own business logic layer within Rafiki. This would let a programmer use the core services of Rafiki/EZ-AI and write their own applications with the data that is returned. It will probably be a while before this is implemented, but it is something we are trying to make happen.
I have probably said too much, but I wanted to give you all a picture of what is happening and why EZ-AI isn't being worked on directly. We hope to have our new AI available around the end of the year. There are multiple developers working on this while I come up with solutions to other problems that arise.
As far as Rafiki goes, the pods are functioning great and additional code/abilities are being added almost daily. The models for the pods are being tweaked to expose the HDMI, USB and network ports to the outside of the case. This will allow someone to connect a mouse, keyboard and monitor to the pod and use it as a computer if they would like. The Rafiki Bot is about 1/3 of the way printed. I am making modifications to the models and reprinting some of the parts as needed. There will be 6 subsystems on this robot. 3 of these subsystems have been written and are ready to use. The other 3 can't be worked on until more of the Rafiki Bot has been printed; they are all very similar and handle motor control. I hope to have these ready in a couple of weeks. I should be able to show a demo of the Rafiki Bot in about a month, and then all of the robot programming starts. I will work on the charger base shortly after the robot has been completed and the pods are all working with their full functionality.
One more thing on EZ-AI... As part of this rewrite, you will just need to have Java installed on your computer to install and use EZ-AI. The days of the huge install will be behind us, so it should be far better in the long run. The other thing this allows is robot platform independence. I will be working on modules in ARC to make the features of EZ-AI far more easily accessible. This will probably not be worked on until December at the earliest.
Okay, going back to my robot cave. Have a great day all.
@David,
Different profiles and a secure (i.e. non-trivial) mechanism to switch the profile.
@PTP,
By profiles, do you mean users? Right now the way that people switch users is by one of three methods.
Either the user variable is passed from the EZ-AI ARC client (set through facial recognition or whatever means you deem necessary inside of ARC), facial recognition is performed on an image taken by the camera, or voice recognition is used. If the user can't be identified by any of these, EZ-AI will ask who the user is.
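The fallback order described above could be sketched like this in Java. The three lookup methods here are stand-ins for the real EZ-AI calls, which haven't been published; the point is only the order of the chain: ARC variable first, then face, then voice, then ask.

```java
import java.util.Optional;

// Illustrative sketch of the user-identification fallback chain.
// Identifier is an invented interface standing in for the real lookups.
public class UserIdentification {

    interface Identifier { Optional<String> identify(); }

    // Try each method in order; if all fail, signal that EZ-AI should ask.
    static String resolveUser(Identifier arcVariable, Identifier face, Identifier voice) {
        return arcVariable.identify()
                .or(face::identify)
                .or(voice::identify)
                .orElse("unknown"); // EZ-AI would then ask who the user is
    }

    public static void main(String[] args) {
        // Example: ARC did not pass a name, but face recognition found "David".
        String user = resolveUser(
                Optional::empty,
                () -> Optional.of("David"),
                Optional::empty);
        System.out.println(user);
    }
}
```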
There are three ways to run EZ-AI:
Using the CochranRobotics Ecosystem - We have contacted all external services and have paid keys for use. To use these keys, you pay CochranRobotics a monthly fee and the EZ-AI server is registered with our Authentication server. The Authentication server provides the necessary external API keys for the EZ-AI server to run properly, and the EZ-AI server reports its usage back to the Authentication server for billing. You purchase the EZ-AI server and hardware/software from us. You can purchase and use pods for this installation. Also, you can use any of our free and open source clients or program your own for your own products.
Completely Local Installation - You provide your own keys to the EZ-AI server. You maintain and pay for any usage yourself. This is useful for individual developers who want to integrate the EZ-AI server into their own projects. This is NOT recommended for production, though. The keys are stored locally in the EZ-AI server's database, and if a key changes, it needs to be changed manually. You purchase the EZ-AI server and hardware/software from us. You can purchase and use pods for this installation. Also, you can use any of our free and open source clients or program your own for your own products.
Using your own Ecosystem - This means that you are responsible for running and maintaining your own Authentication server, as well as your own external product keys. This is useful if you want to move your EZ-AI enabled product into production. The Authentication server will be free to download, and we are discussing making the Authentication server open source like the clients. You purchase the EZ-AI server hardware and software from us, register the EZ-AI servers with your Authentication server, and distribute them with your product.
Quick Points:
1) Any EZ-AI client will work with any EZ-AI server on any Ecosystem.
2) After you purchase the EZ-AI hardware/software from us, you are free to change your external API keys and not pay us a monthly fee. The only required external API is free to use for personal projects, and has a paid version for distributing in your own products.
2.B) This means that after you purchase the EZ-AI server hardware from us, you can run the hardware without a monthly fee while you develop your product.
3) Developers will be able to create plugins for the EZ-AI server to extend functionality. As of now, these plugins are written in Java.
4) We are not planning on distributing licenses for the EZ-AI server to run on non-approved hardware. This is to ensure that every EZ-AI server is of the highest quality, is fully set up/configured, and is easy for everybody to use.
5) We are not planning on releasing the EZ-AI server as an open source project.
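Since the points above mention Java plugins for the EZ-AI server, here is a rough sketch of what a plugin contract and dispatch loop could look like. The interface name and methods are invented; the actual plugin API has not been published.

```java
import java.util.List;

// Hypothetical sketch of a Java plugin contract for the EZ-AI server.
// EzAiPlugin and its methods are assumptions, not the real API.
public class PluginSketch {

    // A plugin declares whether it can handle a request, then handles it.
    interface EzAiPlugin {
        String name();
        boolean canHandle(String request);
        String handle(String request);
    }

    // Example plugin: answers a hard-coded greeting.
    static class GreetingPlugin implements EzAiPlugin {
        public String name() { return "greeting"; }
        public boolean canHandle(String request) { return request.toLowerCase().contains("hello"); }
        public String handle(String request) { return "Hello from EZ-AI!"; }
    }

    // The server would iterate its registered plugins and dispatch
    // to the first one that claims the request.
    static String dispatch(List<EzAiPlugin> plugins, String request) {
        for (EzAiPlugin p : plugins) {
            if (p.canHandle(request)) return p.handle(request);
        }
        return "no plugin handled: " + request;
    }

    public static void main(String[] args) {
        System.out.println(dispatch(List.of(new GreetingPlugin()), "hello there"));
    }
}
```

A first-match dispatch like this keeps the server core small: new capabilities come from dropping in new plugin classes rather than modifying the server itself.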
@CochranRobotics
Hail POD!\m/
LOL! great talking to you. I'll get the files to you in the morning. Thanks again for your help!
We are extending the EZ-AI ARC skill plugin beta test for an additional 30 days. There were a lot of things that happened outside of this project that affected our ability to complete some of the new features that we want to add. The test group has provided some great information and some really good feature requests that we are working on getting added.
We have focused on making the plugin more bulletproof and more reliable. We just published a new plugin which should handle the unexpected errors that we were seeing. I want to give this a good test to make sure that it solves any issues.
We are working on the pod clients. We have decided to make the pods open source as they are a client. We will be publishing the hardware needed, STL files and a disk image for you to download and use. This will allow you to extend your AI to multiple devices and allow the features of EZ-AI to be used throughout your home or office. I am building 3 pods this weekend and hope to have all but the disk image posted. Once the pod disk image is complete and tested, we will publish it.
Thanks David
A couple of things mentioned above... The plugin has been running non-stop for over 24 hours and I have not had any issues yet.
On the EZ-AI pods, as promised, here is a list of the hardware needed to build one EZ-AI Pod. You can get all of this from Amazon or many other places...
You should get these from Adafruit just so you know that you are getting the right ones.
The STL files https://github.com/cochranrobotics/Public/blob/master/Podball%20STL%20Files.zip
Nick will be working on the pod client soon (since we added a screen, there are a few changes needed to the one that was already written).
Thanks David