EZ-AI development is on hold right now. Well, kind of...
We are in the process of integrating services that will make EZ-AI's capabilities far better than they currently are. These include Wolfram|Alpha and IBM Bluemix/Watson. Speech recognition will be performed through Nuance Cloud services, and advanced vision features will be available through OpenCV. A quick search of these services will show you the end goal of what we are doing. They will be part of the Rafiki project, which is the primary focus at this time for CochranRobotics. We will release a limited-use version for free, which will replace EZ-AI. All of the current features of the EZ-AI database will be available through the new version, and all of the services provided by EZ-AI will be exposed through REST queries. This will allow ARC plugins to use these services.
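To make the REST idea concrete, here is a minimal sketch of what querying such a service could look like. The endpoint path, the `q` parameter, and the JSON response shape are all assumptions for illustration only; the actual EZ-AI API was not published in this post.

```python
import json
from urllib.parse import urlencode

# Hypothetical sketch: the endpoint path, "q" parameter, and JSON response
# shape are assumptions for illustration; the real EZ-AI REST API may differ.
def build_query_url(base_url, service, question):
    """Build a GET URL for a hypothetical EZ-AI service endpoint."""
    return f"{base_url}/{service}?" + urlencode({"q": question})

def parse_response(raw_json):
    """Pull the answer text out of a hypothetical JSON reply."""
    return json.loads(raw_json).get("answer", "")

url = build_query_url("http://localhost:8080/ezai", "knowledge", "capital of France")
print(url)  # http://localhost:8080/ezai/knowledge?q=capital+of+France

sample_reply = '{"answer": "Paris", "source": "Wolfram|Alpha"}'
print(parse_response(sample_reply))  # Paris
```

Because the services are exposed over plain HTTP, any client that can issue a GET request and parse JSON (an ARC plugin, a script, another robot platform) could consume them, which is what makes the plugin integration described above possible.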
There have been a huge number of changes to what is possible since I first started working on EZ-AI. This shift in technology has made it necessary to rework EZ-AI so that it can continue to grow and mature.
We are also toying with the idea of allowing programmers to write their own business logic layer within Rafiki. This would let a programmer use the core services of Rafiki/EZ-AI and write their own applications with the data that is returned. It will probably be a while before this is implemented, but it is something we are trying to make happen.
I have probably said too much, but I wanted to give you all a picture of what is happening and why EZ-AI isn't being worked on directly. We hope to have our new AI available around the end of the year. There are multiple developers working on this while I come up with solutions to other problems that arise.
As far as Rafiki goes, the pods are functioning great, and additional code/abilities are being added almost daily. The models for the pods are being tweaked to expose the HDMI, USB, and network ports to the outside of the case. This will allow someone to connect a mouse, keyboard, and monitor to the pod and use it as a computer if they would like. The Rafiki Bot is about 1/3 of the way printed. I am making modifications to the models and reprinting some of the parts as needed. There will be 6 subsystems on this robot. 3 of these subsystems have been written and are ready to use. The other 3 can't be worked on until more of the Rafiki Bot has been printed; they are all very similar and handle motor control. I hope to have these ready in a couple of weeks. I should be able to show a demo of the Rafiki Bot in about a month, and then all of the robot programming starts. I will work on the charger base shortly after the robot has been completed and the pods are working with all of their functionality.
One more thing on EZ-AI... As a part of this rewrite, you will just need to have Java installed on your computer to install and use EZ-AI. The days of the huge install will be behind us, which should make things far better in the long run. The other thing this allows is robot platform independence. I will be working on modules in ARC to make the features of EZ-AI far more easily accessible. This will probably not be worked on until December at the earliest.
Okay, going back to my robot cave. Have a great day all.
One more thing to mention...
This timeline should allow the new components for the V4 to be available based on DJ's stated timeline. There are other components that I use that should also be updated by the time you are working on the areas that would use them. Basically, the timeline is based on a lot of different things, some of which I have no control over, but they fit nicely into my timeline.
David,
I was thinking that I would like to build a Rafiki Bot with the powered base but realized I can't make good use of a roving bot. We live in a 2 story, split level home. I don't think a bot that can go up and down stairs would be economical or practical. But the Rafiki Pod is of great interest. Are you planning to release the Rafiki Pod as a kit?
I am a 79-year-old ex-computer engineer (not a software developer). I have a lot of experience with hardware and software package integration.
Some thoughts on Rafiki interaction:
We would not want to enter the initial database information (People/Places/Important Dates, etc.) via keyboard.
Building the different databases as part of a voice interaction would be OK.
We would also like reminders of doctor appointments. Most of ours are written on an appointment card. It would be great to have a scanning capability to enter the doctor, date, and time from the card.
Scanning from a network printer or a Rafiki Pod attachment for other entries into the system would be a big help.
I would like to be a beta tester for many of the Rafiki capabilities if you thought that would be of any help. Buying a Pod in kit form has a lot of appeal.
Thank you.
I just got home from a trip to Dallas. I will give some thought to how I would go about doing this. There are some logistics to work out, specifically with going into production mode with some of the vendors that I use. Turning this on costs me a pretty penny, so I have to make sure everything is ready. Losing money on it due to failures would not be good.
The public release of Rafiki has begun. The base has been published on CochranRobotics.com for download. The document is incomplete but has the following available.
http://cochranrobotics.com/RafikiDIY
- STL files for the base
- Component list with recommended vendors for this part of the build and expected cost (except for kit cost)
- Slicer settings for printing the STL files
- First part of the instructions
This layout requires that you have a 3D printer with a build envelope of 225 x 145 x 150 mm.
I will update the build instructions as I build this Rafiki. If you have any questions, please send me an email or make posts on the CochranRobotics.com forum.
Kit pricing will come shortly, but I didn't see a reason not to release these STL files as it is going to take some time to print these anyway.
Hi CochranRobotics, I think the project is very interesting and the Rafiki design is very appealing. But one major problem with all mobile robots is localization and for me it is crucial that a robot knows where it is and where to go. It is a great challenge to do an efficient SLAM. Environment interaction is also important, like to grasp objects, door opening, etc, or else it's just a computer on wheels. Just my 2 cents. Regards
It has SLAM (localization), and there have been many posts on this topic specific to Rafiki. It isn't designed to be able to open doors and such; for what it was designed to do, there was no reason for this. Thanks for your input though.
Let me rephrase something - I am working on adding a SLAM module to ARC for Rafiki. While I am building the second one, I am using the first one to finish the programming on it.
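For readers unfamiliar with what a SLAM module does, here is a toy sketch of the mapping half of the problem: a 1-D occupancy grid updated with log-odds evidence from range-sensor readings. This is a generic textbook technique, not Rafiki's actual SLAM implementation, and the sensor model probabilities are made-up values for illustration.

```python
import math

# Toy occupancy-grid mapping: each cell stores log-odds of being occupied.
# These sensor-model probabilities are illustrative assumptions, not values
# from Rafiki's SLAM module.
L_HIT = math.log(0.7 / 0.3)    # evidence weight when a beam stops in a cell
L_MISS = math.log(0.4 / 0.6)   # evidence weight when a beam passes through

def update_grid(log_odds, robot_cell, measured_range):
    """Cells the beam passes through are likely free; the hit cell is likely occupied."""
    for i in range(robot_cell, robot_cell + measured_range):
        log_odds[i] += L_MISS                        # beam passed through: free space
    log_odds[robot_cell + measured_range] += L_HIT   # beam stopped: obstacle

def occupancy(log_odds):
    """Convert log-odds back to occupancy probabilities."""
    return [1 - 1 / (1 + math.exp(l)) for l in log_odds]

grid = [0.0] * 10         # log-odds 0 everywhere = 0.5 probability (unknown)
update_grid(grid, 0, 4)   # robot at cell 0 sees an obstacle 4 cells away
update_grid(grid, 0, 4)   # a second identical reading reinforces the map
probs = occupancy(grid)
print(probs[4] > 0.7, probs[1] < 0.4)  # obstacle cell confident, free cell confident
```

Real SLAM also has to estimate the robot's own pose while building the map (the "localization" half), which is what makes it hard; the log-odds update above is just the map-side bookkeeping.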
New base with modifications and body shell have been posted. If I run into anything while printing and building, I will update the zip files containing these STL files.
The Base, Body, Wings, Neck, internal pieces and head are available for download.
http://cochranrobotics.com/RafikiDIY
I think all of the STL files are out there. There are no instructions to speak of. I will be writing the instructions as I go through the build. I will also make modifications as I find that they are needed when I come to them.
[edit] I have also opened this location up to all users, registered or not. I am working on the instructions for the base now, and should have it completed by tomorrow.
There is an STL file missing for the head motor mount, and another for the part that holds the linear actuator block in place on the rods. I will find these and update the zip files.