EZ-AI development is on hold right now, well kind of...
We are in the process of working with some services that will make the capabilities of EZ-AI far better than they currently are. These include Wolfram|Alpha and IBM Bluemix/Watson. Speech recognition will be performed through Nuance Cloud services. Advanced vision features will be available through OpenCV. A quick search of these services will give you a sense of the end goal of what we are doing. These will be in the Rafiki project, which is the primary focus at this time for CochranRobotics. We will release a limited-use version for free which will replace EZ-AI. All of the current features of the EZ-AI database will be available through the new version. All of the services provided by EZ-AI will be available through REST queries and exposed services, which will allow plugins for ARC to use them.
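To give a rough idea of what a REST query could look like, here is a minimal sketch in Java. The host, path, and parameter names are placeholders of my own, not the final API:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class EzAiRestExample {
        public static void main(String[] args) throws Exception {
            // Hypothetical EZ-AI REST endpoint; the real host and path may differ.
            URL url = new URL("http://localhost:8080/ezai/api/query?q=" +
                    java.net.URLEncoder.encode("what is the weather", "UTF-8"));
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");

            // Read the response body (assumed here to be JSON) line by line.
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
            conn.disconnect();
        }
    }

A plugin would parse whatever comes back and hand the results to your scripts.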
There have been a huge number of changes to what is possible since I first started working on EZ-AI. This shift in technology has made it necessary to rework EZ-AI so that it can continue to grow and mature.
We are also toying with the idea of allowing programmers to write their own business logic layer within Rafiki. This would let a programmer use the core services of Rafiki/EZ-AI and write their own applications with the data that is returned. It will probably be a while before this is implemented, but it is something that we are trying to make happen.
I have probably said too much, but I wanted to give you all a picture of what is happening and why EZ-AI isn't being worked on directly. We hope to have our new AI available around the end of the year. There are multiple developers working on this while I come up with solutions to other problems that arise.
As far as Rafiki goes, the pods are functioning great and additional code/abilities are being added almost daily. The models for the pods are being tweaked to expose the HDMI, USB, and network ports to the outside of the case. This will allow someone to connect a mouse, keyboard, and monitor to the pod and use it as a computer if they would like. The Rafiki Bot is about 1/3 of the way printed. I am making modifications to the models and reprinting some of the parts as needed. There will be 6 subsystems on this robot. 3 of these subsystems have been written and are ready to use. The other 3 can't be worked on until more of the Rafiki Bot has been printed; they are all very similar and handle motor control. I hope to have these ready in a couple of weeks. I should be able to show a demo of the Rafiki Bot in about a month, and then all of the robot programming starts. I will work on the charger base shortly after the robot has been completed and the pods are working with all of their functionality.
One more thing on EZ-AI... As a part of this rewrite, you will just need to have Java installed on your computer to install and use EZ-AI. The days of the huge install will be behind us, which should make things far better in the long run. The other thing this allows is robot platform independence. I will be working on modules in ARC to make the features of EZ-AI far more easily accessible. This will probably not be worked on until December at the earliest.
Okay, going back to my robot cave. Have a great day all.
We are in the final stages of releasing an update for EZ-AI to the beta testers that will allow you to use dictated text to tell your robot to do things. This would be something like "Robot, move forward 10 feet". The speech is converted to text and returned in variables containing the action (move forward), the unit (feet), and the value (10). From there, you would write a script to catch these variables and use them to perform the desired action.
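As a sketch of what catching those variables might look like (the names and routines here are placeholders of mine; in ARC you would do the equivalent in a script):

    public class CommandDispatcher {
        // Placeholder for your robot's own drive routine.
        static void moveForward(double value, String unit) {
            System.out.println("Moving forward " + value + " " + unit);
        }

        static void handle(String action, String unit, double value) {
            // Branch on the action EZ-AI returned from the dictated phrase.
            switch (action) {
                case "move forward":
                    moveForward(value, unit);
                    break;
                case "turn left":
                    System.out.println("Turning left " + value + " " + unit);
                    break;
                default:
                    System.out.println("Unknown action: " + action);
            }
        }

        public static void main(String[] args) {
            // "Robot, move forward 10 feet" would come back roughly as:
            handle("move forward", "feet", 10);
        }
    }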
In this example, someone could have encoders with different click counts per revolution, or no encoders at all. The robots could have different wheel sizes and motor speeds. It could even be a walking robot. It is up to the user to determine what needs to be done to move forward 10 feet, but you would know what the request was without a lot of speech recognition objects.
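For a wheeled robot with encoders, the conversion from the returned distance to encoder ticks is simple arithmetic. A sketch with made-up example numbers (a 6 inch wheel and 360 ticks per revolution; substitute your own):

    public class DistanceToTicks {
        public static void main(String[] args) {
            double distanceFeet = 10.0;            // from the returned value variable
            double wheelDiameterInches = 6.0;      // example wheel size
            int ticksPerRevolution = 360;          // example encoder resolution

            // One wheel revolution covers the wheel's circumference.
            double circumferenceInches = Math.PI * wheelDiameterInches;
            double revolutions = (distanceFeet * 12.0) / circumferenceInches;
            long ticks = Math.round(revolutions * ticksPerRevolution);

            // About 6.37 revolutions, about 2292 ticks for 10 feet on a 6" wheel.
            System.out.println(ticks + " encoder ticks to travel " + distanceFeet + " feet");
        }
    }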
This is cool on its own, but it also paves the way for other things we are adding to EZ-AI. For example, one of the beta users asked for his robot to be able to dance when his favorite team scores a touchdown in American Football. Through IFTTT this is entirely possible. We will be implementing IFTTT so that the robot can control other things, or other events can trigger actions in your robot. This command structure is the first step of that and should be available for testing soon.
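For anyone curious what the IFTTT hookup could look like from code: IFTTT's Maker channel exposes a web request URL per event, so a trigger is just an HTTP call. The event name and key below are placeholders:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class IftttTrigger {
        public static void main(String[] args) throws Exception {
            // Event name and key are placeholders; use your own from IFTTT.
            String event = "touchdown_scored";
            String key = "YOUR_IFTTT_KEY";
            URL url = new URL("https://maker.ifttt.com/trigger/" + event + "/with/key/" + key);

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            System.out.println("IFTTT responded: " + conn.getResponseCode());
            conn.disconnect();
        }
    }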
Awesome, great work as usual. Can't wait to give it a whirl!
Here is a list of example commands:

come here: "robot come here"
find [person]: "robot find David"
find [thing]: "robot find my keys"
flash light: "robot flash light"
flash lights: "robot flash lights"
go to [person]: "robot go to David"
go to [place]: "robot go to the office"
look [direction] [distance]: "look down ten degrees", "robot look down one radian", "look up ten degrees", "robot look up one radian"

Arm commands:
lower left arm [distance]: "lower left arm ten degrees", "robot lower left arm one radian"
raise arms [distance]: "raise arms ten degrees", "robot raise arms pi radians"
raise left arm [distance]: "raise left arm ten degrees", "robot raise left arm pi radians"
raise right arm [distance]: "raise right arm ten degrees", "robot raise right arm pi radians"

Move/walk commands:
move backward(s) [distance]: "move backward(s) ten feet", "robot move backward(s) 2 inches"
walk backward(s) [distance]: "walk backward(s) ten miles", "robot walk backward(s) pi meters"
move forward [distance]: "move forward ten feet", "robot move forward 2 inches"
walk forward [distance]: "walk forward ten miles", "robot walk forward pi meters"

Turn commands:
turn left [distance]: "turn left ten degrees", "robot turn left pi radians"
turn right [distance]: "turn right ten degrees", "robot turn right pi radians"
On the move and turn commands, we may need to add a duration option...
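For anyone who wants to experiment ahead of the release, here is a small sketch of how a [direction] [distance] phrase could be picked apart. The number-word table only covers the examples above, and this is my own illustration, not the actual EZ-AI parser:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class LookCommandParser {
        // Matches phrases like "look down ten degrees" or "robot look up pi radians".
        static final Pattern LOOK = Pattern.compile(
                "(?:robot )?look (up|down) (\\w+) (degrees?|radians?)");

        static double wordToNumber(String w) {
            switch (w) {
                case "one": return 1.0;
                case "ten": return 10.0;
                case "pi":  return Math.PI;
                default:    return Double.parseDouble(w); // e.g. "10"
            }
        }

        public static void main(String[] args) {
            Matcher m = LOOK.matcher("robot look down one radian");
            if (m.matches()) {
                double amount = wordToNumber(m.group(2));
                // Normalize radians to degrees so the servo code sees one unit.
                if (m.group(3).startsWith("radian")) {
                    amount = Math.toDegrees(amount);
                }
                System.out.println("look " + m.group(1) + " " + amount + " degrees");
            }
        }
    }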
I'm jealous. Your SR is so awesome and works so well.
I can say that with the beta test users so far, I have had zero complaints about the SR piece. This was one of my major concerns going in. This is completely untrained dictation and it works really well.
There have been a couple of issues where the settings in the plugin and the mic needed to be adjusted, but that is all pretty self-explanatory. I have used this with everything from $2.50 mics up to about $75 mics and, with slight adjustments, gotten the SR to work great.
Thanks Dave. It won't be much longer before we will be selling EZ-AI.
The SR goes back to the many requests and efforts around "Can we use Nuance/Dragon/DNS within ARC?" This allows that to happen, and a lot more. I promised that we would work on it.
There is a cost associated with doing this, but it is a much cheaper entry point per individual than the total license cost. It is slower and does require an internet connection, but it is definitely an option. If nothing else, I believe that EZ-AI is worth the cost for this feature alone, much less all of the other things that you will get with it. Just my thoughts...
This is so amazing! I have been using Lucy, a free online AI API. But this is over the top. Are you taking preorders yet? I'm in.