
A Note on EZ-AI/Rafiki

EZ-AI development is on hold right now... well, kind of.

We are in the process of working with some services that will make EZ-AI's capabilities far better than they currently are. These include Wolfram|Alpha and IBM Bluemix/Watson. Speech recognition will be performed through Nuance Cloud services, and advanced vision features will be available through OpenCV. A quick search of these services will show you the end goal of what we are doing. These will be part of the Rafiki project, which is the primary focus at this time for CochranRobotics. We will release a limited-use version for free, which will replace EZ-AI. All of the current features of the EZ-AI database will be available through the new version, and all of the services provided by EZ-AI will be exposed through REST queries. This will allow ARC plugins to use these services.
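To give a rough idea of what querying EZ-AI over REST might look like, here is a minimal Java sketch. The host, port, route, and parameter name are all made up for illustration; the real endpoints haven't been published yet.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class EzAiRestExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint -- the real EZ-AI routes may differ.
        URL url = new URL("http://localhost:8080/ezai/api/query?text=weather+today");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");

        // Print the JSON response body returned by the service.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
        conn.disconnect();
    }
}
```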

There have been huge changes in what is possible since I first started working on EZ-AI. This shift toward better technologies has made it necessary to rework EZ-AI so that it can continue to grow and mature.

We are also toying with the idea of allowing programmers to write their own business logic layer within Rafiki. This would let a programmer use the core services of Rafiki/EZ-AI and write their own applications with the data that is returned. It will probably be a while before this is implemented, but it is something that we are trying to make happen.
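As a rough idea of what such a business logic layer might look like, here is a hypothetical Java interface. None of these names are real; this is just a sketch of the concept.

```java
// Hypothetical interface -- the actual Rafiki business-logic API
// has not been published; these names are illustrative only.
public interface RafikiBusinessLogic {
    /**
     * Called with the raw data a core service (for example, a knowledge
     * query) returned, so custom code can decide what to do with it.
     */
    String handleResult(String serviceName, String jsonResult);
}

// A trivial implementation that just tags each result with its source.
class LoggingLogic implements RafikiBusinessLogic {
    @Override
    public String handleResult(String serviceName, String jsonResult) {
        return "[" + serviceName + "] " + jsonResult;
    }
}
```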

I have probably said too much, but I wanted to give you all a picture of what is happening and why EZ-AI isn't being worked on directly. We hope to have our new AI available around the end of the year. There are multiple developers working on this while I come up with solutions to other problems that arise.

As far as Rafiki goes, the pods are functioning great and additional code/abilities are being added almost daily. The models for the pods are being tweaked to expose the HDMI, USB, and network ports to the outside of the case. This will allow someone to connect a mouse, keyboard, and monitor to the pod and use it as a computer if they would like. The Rafiki Bot is about 1/3 of the way printed. I am making modifications to the models and reprinting some of the parts as needed. There will be 6 subsystems on this robot. 3 of these subsystems have been written and are ready to use. The other 3 can't be worked on until more of the Rafiki Bot has been printed; they are all very similar and handle motor control. I hope to have these ready in a couple of weeks. I should be able to show a demo of the Rafiki Bot in about a month, and then all of the robot programming starts. I will work on the charger base shortly after the robot has been completed and the pods are working with all of their functionality.

One more thing on EZ-AI... As a part of this rewrite, you will only need to have Java installed on your computer to install and use EZ-AI. The days of the huge install will be behind us, which should make things far better in the long run. The other thing this allows is robot platform independence. I will be working on modules in ARC to make the features of EZ-AI much easier to access. This will probably not be worked on until December at the earliest.

Okay, going back to my robot cave. Have a great day all.



#185  

We will be taking orders by the end of the summer. The hope is that we will go live Aug 1, but you know how hopes go. There are a lot of things in the works, so I probably won't know until the middle of July whether we will hit the Aug 1 deadline or not.

#187  

The ARC EZ-AI plugin test is over. We are breaking things now and will have a new version that includes the ability to add plugins to the server, IFTTT integration through the Maker channel, Toodledo integration, and integration with OpenHAB for home automation. These will be available in the production release.
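For anyone curious about the IFTTT piece, the Maker channel is driven by a simple webhook: you POST JSON to a trigger URL that includes your event name and key. Here is a minimal Java sketch; the event name and key are placeholders you would replace with your own.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class IftttMakerExample {
    public static void main(String[] args) throws Exception {
        String event = "ezai_event";          // your Maker channel event name
        String key = "YOUR_IFTTT_MAKER_KEY";  // from your Maker channel settings
        URL url = new URL("https://maker.ifttt.com/trigger/" + event
                + "/with/key/" + key);

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        // The Maker channel accepts up to three optional values.
        String body = "{\"value1\":\"hello from EZ-AI\"}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("IFTTT responded: " + conn.getResponseCode());
        conn.disconnect();
    }
}
```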

Along with this, there is a client for the pods that is pretty well wrapped up, which extends the AI throughout the dwelling.

We also have an extensive list of features that we will be adding going forward. There are now 3 programmers working on EZ-AI, along with a QC person who also specializes in home automation. Then there is me... the face and voice of EZ-AI. I don't get to play with the cool stuff anymore :)

We have a few meetings this month which will determine when the release is. There are many interesting possibilities that I can't speak to right now.

#188  

Plugins are complete, along with the Java client. This allows the cost to be reduced, as shown in the video. Nuance is a costly service to use, and you are now able to choose whether or not to use it. Basically, this allows you to have a monthly cost of $22 instead of $30 with the current pricing model. We are working on adding other services that will reduce the cost of using EZ-AI while still letting you add the services that you want to use. If you want to use the more costly service, you have that option. Because ARC has its own TTS engine, the voice isn't a concern. The only negative impact of not using the Nuance service for STT is that you get slightly less impressive results.
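To illustrate the kind of choice involved, here is a hypothetical Java sketch of a provider toggle. The actual EZ-AI configuration isn't public, so everything here is made up.

```java
// Illustrative only -- the real EZ-AI configuration format is not public.
public class SttConfigExample {
    enum SttProvider { NUANCE, FREE_ALTERNATIVE }

    public static void main(String[] args) {
        // Opting out of Nuance trades some recognition accuracy for a
        // lower monthly cost ($22 instead of $30 under the current model).
        SttProvider provider = SttProvider.FREE_ALTERNATIVE;

        switch (provider) {
            case NUANCE:
                System.out.println("Using Nuance cloud STT (paid, best results).");
                break;
            case FREE_ALTERNATIVE:
                System.out.println("Using the no-cost STT path (slightly less accurate).");
                break;
        }
    }
}
```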

Anyway, here is a rather lengthy video. The end of the video discusses some things we are working on currently before releasing EZ-AI.

#189  

Awesome demo David! Can't wait to have all my bots using EZ-AI. The list of ways this can be implemented is virtually endless. All those movies where a person walks into the house and asks the house ("...any messages?"). This brings sci-fi to reality. "Gort! Klaatu barada nikto!"

#190  

Thank you for the kind words, Richard. I want robots to tie into other areas of life than just toys or education. I see so many possibilities that it kind of keeps me up at night. We have identified what will be in version 1 and are working toward completing that as soon as possible. There are still things like documentation and making things pretty that need to be done, but we are focusing on finishing the core pieces. From there we will put a lot of effort into decreasing response times, and then start adding more capabilities. We will also open things up so others can add more capabilities and share them if they want to do so.

We will be reviewing code from others before making the plugins they submit public. This is for a few reasons...

  1. To make sure that everything will work right.
  2. To make sure there isn't any malicious code.
  3. To make sure that the standards needed for the plugin to work are in place.

I have one guy who already does this for us and will do this type of review for submitted plugins as well. I don't know of any other way to make sure that submitted plugins will work correctly. Once verified good, the developer can make the plugin public or keep it private.

#191  

David, a question about latency: will response times (from when the user's voice input ends to when the reply begins) change depending on whether the installation is cloud, a local install connected to the cloud ecosystem, or a local install with a local ecosystem?

#192  

The information is retrieved from the cloud, and depending on which plugins you choose to use, there could be multiple actions accessing the cloud. If none of the cloud services you choose have the information, a local chat bot serves as the catch-all. So, if you choose not to use any cloud-based services, the local chat bot would reply quickly.
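To illustrate that lookup order, here is a hypothetical Java sketch: each enabled cloud plugin is tried in turn, and the local chat bot answers only when none of them returns a result. All the names here are made up.

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the fallback order described above; names are hypothetical.
public class FallbackChainExample {

    interface AnswerSource {
        String tryAnswer(String question); // returns null when it has no answer
    }

    public static void main(String[] args) {
        // Stand-ins for enabled cloud plugins; each real call would
        // involve a network round trip and add to the overall latency.
        List<AnswerSource> cloudPlugins = Arrays.asList(
                q -> null,   // e.g. a knowledge plugin that found nothing
                q -> null);  // e.g. a second plugin that found nothing

        String question = "Any messages?";
        String answer = null;
        for (AnswerSource plugin : cloudPlugins) {
            answer = plugin.tryAnswer(question);
            if (answer != null) break;
        }
        if (answer == null) {
            // Local catch-all: fast because there is no network round trip.
            answer = "Local chat bot reply to: " + question;
        }
        System.out.println(answer);
    }
}
```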