
A Note on EZ-AI/Rafiki

EZ-AI development is on hold right now, well kind of...

We are in the process of working with some services that will make EZ-AI's capabilities far better than they currently are. These include Wolfram|Alpha and IBM Bluemix/Watson. Speech recognition will be performed through Nuance Cloud services. Advanced vision features will be available through OpenCV. A quick search on these services will show you the end goal of what we are doing. They will be part of the Rafiki project, which is the primary focus for CochranRobotics at this time. We will release a free, limited-use version which will replace EZ-AI. All of the current features of the EZ-AI database will be available through the new version. All of the services provided by EZ-AI will be available through REST queries and exposed services, which will allow ARC plugins to use them.

There have been huge changes in what is possible since I first started working on EZ-AI. This shift in technology has made it necessary to rework EZ-AI so that it can continue to grow and mature.

We are also toying with the idea of allowing programmers to write their own business logic layer within Rafiki. This would allow a programmer to use the core services of Rafiki/EZ-AI and write their own applications with the data that is returned. It will probably be a while before this is implemented, but it is something that we are trying to make happen.

I have probably said too much, but I wanted to give you all a picture of what is happening and why EZ-AI isn't being worked on directly. We hope to have our new AI available around the end of the year. There are multiple developers working on this while I come up with solutions to other problems that arise.

As far as Rafiki goes, the pods are functioning great and additional code/abilities are being added almost daily. The models for the pods are being tweaked to expose the HDMI, USB, and network ports on the outside of the case. This will allow someone to connect a mouse, keyboard, and monitor to the pod and use it as a computer if they would like. The Rafiki Bot is about 1/3 of the way printed. I am modifying the models and reprinting parts as needed. There will be 6 subsystems on this robot. 3 of these subsystems have been written and are ready to use. The other 3 can't be worked on until more of the Rafiki Bot has been printed; they are all very similar and handle motor control. I hope to have these ready in a couple of weeks. I should be able to show a demo of the Rafiki Bot in about a month, and then all of the robot programming starts. I will work on the charger base shortly after the robot has been completed and the pods are all working with all of their functionality.

One more thing on EZ-AI... As a part of this rewrite, you will only need Java installed on your computer to install and use EZ-AI. The days of the huge install will be behind us, which should make things far better in the long run. The other thing this allows is robot platform independence. I will be working on modules in ARC to make the features of EZ-AI far more easily accessible. This will probably not be worked on until December at the earliest.

Okay, going back to my robot cave. Have a great day all.



PRO
USA
#1  

All very interesting. Glad to see that EZ-AI will be available about the same time I start shipping robots. If you are going to show a demo in a month, any chance we get to take a look at the design? I'm curious as to what type of robot you are constructing. I've even seen on Kickstarter where they have these kinds of square units that magnetically connect and become "pods" of sorts; is it something like that?

#2  

It is a wheeled robot platform designed to have a very rounded and non-intimidating look and feel. I don't want to share the design yet, but I will as soon as it is finished printing. :) I am sure the design will change some with time. My daughter says it looks like Baymax from Big Hero 6 or EVE from WALL-E.

#4  

@David.... This will be great.... I will be following this with interest.... I look forward to a "packaged" EZ AI and superior integration with ARC...

Cheers

United Kingdom
#7  

This sounds fantastic David. I like the sound of a lighter EZ-AI install. I'll be keeping watch on your progress with everything you're doing, especially the robot build.

PRO
USA
#8  

I didn't mean to gloss over the impact of the evolving EZ-AI. I'm super excited about the improvements and additions. Hopefully we will all benefit from your determination in keeping EZ-AI available to all. Thanks, and looking forward to your demo!

#9  

@David.... I went to your website (haven't been in a while) and was reading up on Rafiki (especially the mobile robot platform) and its planned abilities.... Dude, if you can deliver this my Mastercard is all yours....:)... Outstanding!

#10  

Thanks Richard. We have some lofty goals. Luckily I found someone with deep pockets to fund everything. I also found a lot of people who are not only very talented, but also are excited to work on Rafiki.

There is also a lot of new technology that has come out at the right time for us.

We tested our server today with a lot of load and it performed great. This was a concern for me, but it performed far better than I expected. It worked so well for the hardware that it was on (sub $300 total for hardware and software) that I had to question whether it was even really working. It was passing requests to our Bluemix application where most of the work was happening, but the code is really good and the traffic is really light and fast. I was blown away.

This is a fun project to work on. I really look forward to being able to share this. The work is part of the reward for me. When advances are made I really enjoy it. It makes all of the little frustrations worth it.

#11  

Here are a few pics of the control box inside the prototype Rafiki bot.

This is the brain of the Rafiki Bot. It allows simple commands to be passed from the EZ-B to the Rafiki control box to do things like drive the NeoPixel, move motors, get sensor readings, and get the positions of the servos. When we go live, the goal is to have a single board handling all of this.

The feeds to the distant parts of Rafiki are handled through USB and HDMI cables. One of these parts was a failed print, but I used it anyway; the way the print failed really didn't matter for what I was doing.

The power comes in the back of the box and is fed to all of the devices in the box. Keeping all of these wires enclosed makes for a much cleaner setup.

Enjoy:)

Inside the control box

User-inserted image

Top of the control box

User-inserted image

Connections out of the control box

User-inserted image

Side of the control box

User-inserted image

PRO
USA
#12  

Looking cool. What's the size of that control box?

#13  

its about 12" long, 8 inches wide and 4 inches tall.

The purpose of this box is to have all motor commands and sensor readings handled in a very simple, usable plain-language form. Here are some sample commands:

Move forward 10 inches
Drive forward 2 feet
Move forward 4 yards
Move backward 7 cm
Drive backward 2 meters
Turn right 19 degrees
Turn left 18 degrees
Raise right arm 5 degrees
Lower right arm 5 degrees
Move arms in or out
Raise both arms 60 degrees
Look up 30 degrees
Look left 4 degrees

I am working on light ring commands but there are so many that making these "common speech" commands is probably not going to happen.

The sensors return text like "Object detected 18 inches sensor B" or "Drop off detected front/back/right wheel".
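For illustration only, here is a rough Java sketch of how one of these plain-English motion commands could be broken into an action, direction, amount, and unit before being turned into motor moves. This is not the actual control box firmware; it only handles the simple drive/turn style commands from the list above.

```java
// Illustrative parser for the simple plain-English motion commands above.
// Not the real Rafiki firmware; just a sketch of the idea.
public class CommandParser {

    /** Holder for one parsed command, e.g. "turn right 19 degrees". */
    static class MotionCommand {
        final String action;     // move, drive, turn, look, ...
        final String direction;  // forward, backward, left, right, up, down
        final double amount;     // 10, 19, 60, ...
        final String unit;       // inches, feet, cm, meters, degrees

        MotionCommand(String action, String direction, double amount, String unit) {
            this.action = action;
            this.direction = direction;
            this.amount = amount;
            this.unit = unit;
        }
    }

    /** Parse commands of the form "<action> <direction> <amount> <unit>". */
    static MotionCommand parse(String text) {
        String[] words = text.trim().toLowerCase().split("\\s+");
        if (words.length != 4) {
            throw new IllegalArgumentException("Unsupported command: " + text);
        }
        return new MotionCommand(words[0], words[1],
                Double.parseDouble(words[2]), words[3]);
    }

    public static void main(String[] args) {
        MotionCommand cmd = parse("Turn right 19 degrees");
        System.out.println(cmd.action + " " + cmd.direction + " "
                + cmd.amount + " " + cmd.unit);
    }
}
```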

All "servo" motors can be queried to know the current position of the pot in the motor.

When a request is sent to this box, it will report back when the action is completed or if there was an issue completing the action. This is done by querying the position of the motor's pot or encoder. It is also smart enough to stop an action if a sensor detects a drop off, and then recalculate a route to get to the location that you requested. If a route can't be found, the EZ-B will be notified via a serial command.

This box also runs the code for the room mapping and path finding that the robot uses. You will be able to say "Hey Rafiki, come here." The pod that is closest to you will pick up this speech, and the robot will know which room it is in and which room it needs to go to. All of these features are outside of ARC, so customizing them will not be possible. You will simply send serial commands to this device in a simple language from the EZ-B. The EZ-B will be notified when the task is complete. All sensor readings will come back to the EZ-B in plain English with the information from the sensors.

The communication back to the EZ-B will be very light and will not be repeated multiple times. This should keep the EZ-B very expandable with any other sensors that the user of Rafiki wants to add to it. These sensors can be programmed through ARC without having to worry about how someone would then tie actions of a robot back into these sensor readings. Creating scripts for use with this control module will be very easy to do.
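To make the handshake concrete, here is a hedged sketch of what sending one of these plain-English commands from a PC could look like, using the jSerialComm library. The port name, baud rate, and reply wording are assumptions; the actual protocol of the control box has not been published.

```java
// Hypothetical example: send a plain-English command to the control box and
// block until it reports completion or a problem (e.g. a drop-off detection).
// jSerialComm, the baud rate, and the reply wording are assumptions.
import com.fazecast.jSerialComm.SerialPort;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;

public class ControlBoxLink {

    public static String sendAndWait(String portName, String command) {
        SerialPort port = SerialPort.getCommPort(portName);
        port.setBaudRate(115200);                                        // assumed
        port.setComPortTimeouts(SerialPort.TIMEOUT_READ_BLOCKING, 30000, 0);
        if (!port.openPort()) {
            throw new IllegalStateException("Could not open " + portName);
        }
        try (PrintWriter out = new PrintWriter(port.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(
                     port.getInputStream(), StandardCharsets.US_ASCII))) {
            out.println(command);              // e.g. "move forward 10 inches"
            // The box answers in plain English when the action finishes or fails,
            // e.g. "done" or "drop off detected front wheel" (wording assumed).
            return in.readLine();
        } catch (Exception e) {
            throw new RuntimeException("Control box communication failed", e);
        } finally {
            port.closePort();
        }
    }

    public static void main(String[] args) {
        System.out.println(sendAndWait("COM9", "turn right 19 degrees"));
    }
}
```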

PRO
USA
#14  

Very cool. That's a lot of progress!

#15  

I for one am looking forward to your release. It sounds incredible!

:-)

#16  

David, I went to your site. It let me register, but did not let me sign in.

Mel

P.S. glad you got funding. You Really deserve it.

#17  

Hey Mel, I authorized your user. You should be able to log in now.

#19  

Here is an update just to prepare people for a change in the way that EZ-AI will work with the new release...

Going forward, EZ-AI will work with a small server placed on your network. It will contain all of the code that connects to the outside world for information and will do the majority of the work for EZ-AI. The client will be installed on the machine that runs ARC. This not only protects the code, but allows a much smaller client computer to be used in the robot that is running EZ-AI.

We have tested clients on machines as small as a Raspberry Pi and they have performed extremely well, which shows how lightweight the client is. This is because the work is done on the server, which is also a small single-board computer. Depending on the size of the installation (how many clients are used at the same time) the server might need to be larger. We are going to be conducting tests to get an idea of how the server should be sized for a given client count.

Using a CochranRobotics-configured EZ-AI server will also remove a lot of the headaches associated with installing EZ-AI. Right now, the client listens for the words "Hey Rafiki" (which we will make configurable), captures the spoken words that follow, and sends them to the server for processing. The client waits for the response and then decides what to do with the information that is returned. If it is a command, such as "Bring this to David.", it will pass it along to ARC. If it is information or an audio stream, it will play that audio stream or speak the information. Most of what EZ-AI does is informative. We will be working on an option to return either the audio or the text for the audio to the client. I can't spring this new wrinkle on my dev guys until after the 22nd, but it will be one of the final things that we need to add to the server.
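A minimal sketch of that client-side decision, assuming the server hands back a small result object; the field names and the ARC hand-off are illustrative stand-ins, not the actual EZ-AI client code.

```java
// Minimal sketch of the dispatch step described above. The response fields
// (type, text, audio) and the ARC hand-off method are assumptions.
public class EzAiClientDispatch {

    /** Hypothetical shape of a server reply. */
    static class ServerResult {
        String type;     // "command", "audio" or "text" (assumed labels)
        String text;     // spoken text or command phrase
        byte[] audio;    // optional audio stream
    }

    interface ArcBridge { void sendCommand(String phrase); }
    interface AudioOut  { void play(byte[] audio); void speak(String text); }

    /** Decide what to do with whatever the EZ-AI server sent back. */
    static void dispatch(ServerResult result, ArcBridge arc, AudioOut out) {
        switch (result.type) {
            case "command":                 // e.g. "Bring this to David."
                arc.sendCommand(result.text);
                break;
            case "audio":                   // server already rendered speech/music
                out.play(result.audio);
                break;
            default:                        // plain information: speak it locally
                out.speak(result.text);
        }
    }
}
```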

The server will allow multiple clients to access it. The costs associated with the services used by EZ-AI all depend on the number of requests to those services. The server will track the number of uses per month for your installation and will not allow you to go beyond the number of uses you have paid for on the features that cost us money to use. The first X number of uses each month will be free of charge, which will allow you to run a limited number of EZ-AI external-service requests a month. This is about the only way to still have a free version of EZ-AI, because of the advanced services that we are now using for speech and for information from external sources.

Costs will be determined shortly, but it is dependent on the number of requests not only by you but by everyone using these services. The more requests, the lower the cost per request goes... This makes it very difficult to estimate.

There will be an initial cost for the server, which will be sold on CochranRobotics.com. This will cover the cost of the server hardware, which depends on the size of the installation. The question will be asked: what if we want to run this on our own hardware? At this time, we won't be offering that option. By allowing us to do the hardware installation and ship you a working product, you will have fewer headaches and a much better end-user experience. Also, we have to protect our code because we are using services that cost us money. If we simply posted the code, the billing couldn't be set up and we would take a huge hit on pirated software. We are working on ways to stop the software from running in the event of theft, which is something that we didn't have to worry about with previous versions of the software.

Welcome to the world of online authentication of license keys... The server will access a license server over the internet, which will allow your key to be validated against a database of keys. This will happen once a day, and the EZ-AI server will work for that day. The local server will report back to the verification server the number of uses of the paid-for services for that day. This will allow us to do billing from a separate secured server. This is all a pain, but it is really the only way to offer the advanced features that we are offering and protect ourselves financially.
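As a rough illustration of that once-a-day check (the license server URL, payload, and response handling below are hypothetical; only the flow of validating a key and reporting the day's usage count comes from the description), it might look something like this in Java 11+:

```java
// Hedged sketch of a daily license check: validate the key and report usage.
// The URL, JSON payload and "200 = valid" rule are placeholders, not the
// actual CochranRobotics license protocol.
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DailyLicenseCheck {
    private static final URI LICENSE_SERVER =
            URI.create("https://example.invalid/ezai/validate"); // placeholder URL

    /** Returns true if the key validated and the server may run today. */
    static boolean validate(String licenseKey, long usesYesterday)
            throws IOException, InterruptedException {
        String body = String.format("{\"key\":\"%s\",\"uses\":%d}", licenseKey, usesYesterday);
        HttpRequest request = HttpRequest.newBuilder(LICENSE_SERVER)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.statusCode() == 200;     // assumed: 200 means the key is valid
    }
}
```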

A lot said, but I just wanted to prepare people for a completely different approach to this product.

United Kingdom
#20  

This sounds great David. Thanks for the very informative update, very interesting. With the new setup making it easier to install, I will definitely look into having a go at this for future projects.

It sounds like you have assembled a great team, and I look forward to seeing the new EZ-AI in action. Great work.

#21  

David

I tried to log on to your website to get more information on Rafiki but could not register; I'm not sure if you are trying to limit the e-mail traffic. I am very interested in this project and hopefully you will be able to bring it to fruition, so count me in.

Regards

Hector

#22  

Hector,

I will check the website when I get to a location that I can. The admin side of the site isn't very smart phone friendly.

#24  

David, that's a lot of work you've been doing. Please keep us informed on both.

#25  

I just ran a test running the EZ-AI server on a BeagleBone Black. It was a bit slow compared to what I have been testing on, but it was definitely still quick enough for a couple of robots in a home. That is good news for sure.

We haven't had a lot of time to optimize the code yet, so this is very promising from a cost perspective.

#26  

We conducted tests today on the BeagleBone Black as the EZ-AI server. This little single-board computer did pretty well with multiple devices attaching to it. We asked question after question from 4 different devices (robots), and there was an average delay of about 5 seconds from when the user finished speaking to when the results were returned. We tested this with a tablet mic, a headset mic, and a dedicated desktop USB mic.

The server was using only the 4GB 8-bit eMMC on-board flash storage. This is really cool. Not having removable storage helps us by reducing cost and providing a board that just has to be flashed. This allows for a drop-in-place solution.

To install the client, you simply install Java, then launch the jar file (java -jar c:\rafikipod\rafiki_pod_pi.jar on Windows). It can be anywhere your machine has rights to see. You may have to change permissions depending on how we deliver the jar file. This has been tested on Mac, Windows 8.1, Windows 10, Ubuntu, and Debian. The client searches your network (based on the first 3 octets of your IP address) for the server. This takes about 1 minute to complete. If you have a more advanced network configuration, you can specify the IP address of the server in the config file. We are working on ways to identify the IP address for you more easily, but this is not our primary focus.
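A simple way to picture that discovery step: scan the 254 hosts of the local /24 subnet and keep the first one that accepts a connection on the EZ-AI port. With a ~200 ms timeout per host this takes roughly the minute described above. The port number and timeout are assumptions, not the shipped client's values.

```java
// Rough sketch of the /24 scan described above: try every host on the local
// subnet and keep the first one that accepts a connection on the EZ-AI port.
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ServerDiscovery {
    static final int EZ_AI_PORT = 9000;      // assumed service port
    static final int TIMEOUT_MS = 200;       // assumed per-host timeout

    /** Scan x.y.z.1 .. x.y.z.254 and return the first responding host, or null. */
    static String findServer() throws Exception {
        String local = InetAddress.getLocalHost().getHostAddress();
        String prefix = local.substring(0, local.lastIndexOf('.') + 1); // first 3 octets
        for (int host = 1; host <= 254; host++) {
            String candidate = prefix + host;
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(candidate, EZ_AI_PORT), TIMEOUT_MS);
                return candidate;            // something answered on the EZ-AI port
            } catch (Exception ignored) {
                // no server here, keep scanning
            }
        }
        return null;
    }
}
```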

We will be working on an ARC specific client shortly. We will build a plugin for ARC which will allow ARC to drive the use of this client. I expect this to be finished in the next month or so.

The client will notify you when you can ask questions and when it is working on getting an answer to the question you asked. On our system, this is done with a light ring.

We will be working on the billing piece of EZ-AI shortly. We are spending this week finishing up code optimization and some user friendly type things.

EZ-AI uses the Nuance cloud for speech recognition and speech-to-text. The resulting text is processed locally for commands like "Remind me to buy eggs". If the classification isn't something that we expect, the text is passed off to Watson and to Wolfram|Alpha. The results are returned and, at this time, spoken through the client. The ARC plugin will just place this information into a variable that can be monitored for changes and spoken when it changes.
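A bare-bones sketch of that routing decision, with the classifier, service wrappers, and ARC variable name as illustrative stand-ins rather than the actual EZ-AI code:

```java
// Sketch of the routing described above: handle utterances we classify locally
// (reminders, simple commands) and forward everything else to the external
// knowledge services. All interfaces and the variable name are hypothetical.
public class RequestRouter {

    interface LocalSkills   { boolean canHandle(String text); String handle(String text); }
    interface KnowledgeApis { String askWatsonThenWolfram(String text); }
    interface ArcVariables  { void set(String name, String value); }

    static void route(String transcribedText, LocalSkills local,
                      KnowledgeApis external, ArcVariables arc) {
        String answer;
        if (local.canHandle(transcribedText)) {
            // e.g. "Remind me to buy eggs" stays on the local server
            answer = local.handle(transcribedText);
        } else {
            // anything unexpected goes out to Watson / Wolfram|Alpha
            answer = external.askWatsonThenWolfram(transcribedText);
        }
        // The ARC plugin only drops the result into a variable that scripts watch.
        arc.set("$EzAiResponse", answer);    // variable name is an assumption
    }
}
```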

We will be adding features over time. This is the really cool part about the EZ-AI server now... Updates that add new features to EZ-AI all happen on the server. You will be notified if there are new updates to be downloaded. If there are, you click a link and the updates will be downloaded and applied. The server will reboot, and when it comes back online, you are ready to use EZ-AI again.

This is a huge step forward for this product. By doing things this way, we are able to provide a product that we are confident will work. If you would like to provide more substantial hardware for larger installations, that can be done. If you have a medium-sized installation (5-20 robots all running EZ-AI at the same time), we will offer a different board (still a single-board computer) with a quad-core ~1.6 GHz processor, 2 GB of RAM, and 8 GB of flash storage for a little over double the cost of the BeagleBone Black. Larger installations are possible but would require some conversation to ensure that we size the server right for the situation.

The server will come with a CochranRobotics case to protect it from damage. We will be assessing the cost of the monthly fee very shortly. This will be based on the number of transactions you would be making per month. I will be posting videos in the next month or so of the new EZ-AI working with and without ARC.

If someone wanted to run this on other equipment, such as computers or devices other than robots, that is definitely possible. The clients will be downloadable. The server will, for the most part, be distributed on a flashed single-board computer.

There is still some development to do, but the core is in place and functioning great.

#27  

Just an update to EZ-AI here...

We have discovered a new software product that looks promising from the outside. We had meetings with their team and it seems to be a great fit for EZ-AI. They are excited about what we are doing and are partnering with us. The new software product promises to reduce our dependence on some other products which will lower the cost if successful. We are going to be working on a Java based client for this vendor and will share our work with them. [EDIT as I forgot to mention this -] In return, they have offered to have their linguists convert our services to use multiple languages. Win on both fronts I think.

This product is a language processor and classifier which will allow us to reduce our dependence on Nuance and Watson. It understands conversations, so saying "I want to hear some Classic Rock" and then saying "Tell me about this artist" is possible. There is a decay period for conversations which allows you to move from one conversation to another pretty easily. We will still use our knowledge system to process the request, but this would let us avoid tracking conversations locally, since the conversation would be tracked by this service. This would also allow us to link knowledge: for example, showing a picture of Abraham Lincoln to the robot could then spawn a conversation about Abraham Lincoln. Conversations are a huge addition to what EZ-AI is able to do.
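Purely to illustrate the decay idea (the vendor's API is not named here, so this is an assumed mechanism, not their implementation), a conversation context that goes stale after a period of inactivity could be as simple as:

```java
// Illustrative sketch of a conversation context with a decay period: follow-up
// questions ("Tell me about this artist") reuse the topic until the context
// goes stale. The timeout value and structure are assumptions.
import java.time.Duration;
import java.time.Instant;

public class ConversationContext {
    private static final Duration DECAY = Duration.ofMinutes(2);  // assumed decay period

    private String topic;            // e.g. "Classic Rock" or "Abraham Lincoln"
    private Instant lastTouched = Instant.EPOCH;

    /** Record the topic of the latest utterance. */
    public void setTopic(String topic) {
        this.topic = topic;
        this.lastTouched = Instant.now();
    }

    /** Return the current topic, or null if the conversation has decayed. */
    public String currentTopic() {
        if (topic == null || Instant.now().isAfter(lastTouched.plus(DECAY))) {
            return null;             // stale: treat the next utterance as a new conversation
        }
        lastTouched = Instant.now(); // follow-ups keep the conversation alive
        return topic;
    }
}
```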

We also met with a local home builder who wants to put our products in the new houses that he builds. He has some really cool ideas about features that he wants added to make this happen. This only improves our product and its capabilities. There have been some others who have offered up some really cool ideas that we never thought of. One of these is a feature that would alert the user when they get an email from a particular person, or an email with a specific subject. We have already developed the ability to check email, but this type of feature makes the platform much smarter. I personally never used our email feature due to the huge number of emails that I get on a daily basis, but there are some from certain people that are always viewed due to their level of importance to me. I suspect that this is the situation for most people running a business.

We are continuing our tests using very small single-board computers as the main brain for EZ-AI. These tests have been going great. We are also fine-tuning the logic used to tell when someone has stopped speaking. This is a bit more difficult on Linux than it is on Windows, but the goal is to keep the cost as low as possible. Linux uses a much smaller scale for volume levels than Windows, and the larger scale makes it much easier to know when someone has finished speaking on Windows. We looked at using Windows IoT for the Rafiki pods, but not all of the devices that we use in the pods are supported by Windows IoT. Ugh... Work continues here.

An update on Rafiki: The trip to talk to the investor was okay. I spent my time working with my developers to identify any issues that we were having and identify possible solutions. My lead developer (my son) is in his final weeks of college, so he has had to focus on school and work more lately. Once school is complete, he will be pushing development on EZ-AI and Rafiki. I found a couple of things from this trip that I wanted to change on the robot, mainly the strength of the neck motor and the mounting system for the wheels. Both of these changes have been made to the prototype. I also adjusted the arms a bit so that they are more pocketed when closed.

Another change we made is that all of the subsystems now report back what they are when sent a "Q0" serial command. Serial ports can change on a computer; what was COM9 can become COM10. This is problematic for software that isn't smart enough to handle it. The test app that I created had this issue, so now I query each COM port when the application starts up and use the results to set the port identification parameters. While doing this, I also added the ability to query the positions of the motors. Sending a "Q1" now gets the pot reading from the first motor attached to the specific subsystem, and "Q2" gets the other motor's position. This lets me know the position and calculate things like "move your left arm up 12 degrees". I will be adding this functionality to the client shortly.
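Here is a hedged sketch of that start-up probe: ask every serial port "Q0" and remember which subsystem answered on which port, so it no longer matters whether a board enumerates as COM9 or COM10. The jSerialComm library, baud rate, and reply format are assumptions; only the Q0/Q1/Q2 commands come from the description above.

```java
// Sketch of the start-up probe: send "Q0" to every serial port and map each
// answering subsystem to its port name. Library, baud rate and reply format
// are assumptions, not the actual Rafiki test app.
import com.fazecast.jSerialComm.SerialPort;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.util.HashMap;
import java.util.Map;

public class SubsystemProbe {

    /** Map of subsystem name (whatever the board reports to Q0) -> port name. */
    static Map<String, String> identifySubsystems() {
        Map<String, String> found = new HashMap<>();
        for (SerialPort port : SerialPort.getCommPorts()) {
            port.setBaudRate(115200);                                   // assumed
            port.setComPortTimeouts(SerialPort.TIMEOUT_READ_BLOCKING, 2000, 0);
            if (!port.openPort()) continue;
            try (PrintWriter out = new PrintWriter(port.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(port.getInputStream()))) {
                out.println("Q0");                                      // "who are you?"
                String identity = in.readLine();
                if (identity != null && !identity.isEmpty()) {
                    found.put(identity.trim(), port.getSystemPortName());
                }
            } catch (Exception ignored) {
                // not a Rafiki subsystem, or it didn't answer in time
            } finally {
                port.closePort();
            }
        }
        return found;
    }
}
```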

We have developed the test software in a way that allows us to easily build the production software (using the same classes in production as in test). The client will eventually become invisible and will allow the user to simply pass information into a variable in ARC. The robot is looking for this variable and will process the command as soon as it sees it. The commands in ARC would be things like "Move left arm up 12 degrees". The goal of this system is to make it as easy as possible to program the robot to do anything that you want it to do. I think simple language that everyone can understand is the best way to accomplish this. The test software has worked great so far.

We have started development on an EZ-AI client plugin for ARC. This will have all of the same features as the pod for Rafiki, but will be run through ARC. We haven't focused much on this because it will be so simple, and it really isn't needed until we get done with paragraph 2 in this novel.

We had a rather severe ice storm this week. I think Richard R and many others are now getting this storm (or will be shortly). This storm caused major power outages and a lot of damage. This has caused some delays, along with the Thanksgiving holiday, but it was also a good time for us to take a break. I got to play lumberjack the past couple of days and there is more cleanup to do in my back yard. I should be back to working on Rafiki and EZ-AI this week and it will be nice to get back to it.

Okay, I'm done giving an update for now.

United Kingdom
#28  

Great update David, thank you. Lots of information there, but the language processor update got my attention, and sounds like that will be a great upgrade/addition.

#29  

Just an update of sorts...

I put in about 1500 hours of labor in 5 months on this project. It turns out that doing this while working my normal daily job isn't very healthy. I visited my doctor a couple of weeks ago and found out that my health has gone downhill over the past 5 months. He told me that if I kept up this pace, I was risking a stroke. I decided to take a few weeks off to allow my blood pressure and other issues to get back in line. I have allowed myself to get a lot of rest over the past couple of weeks and am feeling a lot better, so hopefully the damage I caused to myself wasn't too severe.

Holidays and end-of-year management responsibilities at my normal job have also cut into my time. My son has finished college (except for one easy class that he needs to complete online), so he now has more time to focus on wrapping things up.

We have brought in another developer who is looking at adding another application to the project. I have mentioned it before but he is able to focus on this while others complete other parts of the project.

Right now, I put in at most a couple of hours a day on the robot. I plan on spending more time on him this weekend. I have one issue to figure out and then I can start focusing on SLAM for this robot. The code used to drive the robot got messed up somehow; I will probably just rewrite it this weekend and go forward from there. The robot is almost all built again, so after I get this piece done, I will be able to hook up all of the subsystems and move forward. Hopefully it won't be too hard to get SLAM implemented, and I will be able to show a video of it all working together.

Have a great day all!

#30  

Dave

As much as I look forward to your AI for our robots, it's certainly not as important as your health. Life already passes us by very quickly, before you know it we are in our 60's (lol), tough to get going in the mornings and aches and pains that take forever to go away, if they ever do! No reason to push it any faster along. I myself want to be around to see my grandsons grow up. They all need a grandpa you know. So, kick back a little and smell the roses.

#31  

@David.... As Ted mentioned... We kinda' like you so take it easy so you'll stick around longer... We can wait for Rafiki... Besides, ez roboters are pros at waiting...:D

#32  

The past couple of weeks' rest helped a lot. No huge worries, I just have to pace myself. It's easy to forget how important sleep is. I have slowed down and let the young guys who can handle it do the major pushing. I have also spread things out more so that not so much is on any single person.

I never slept more than about 6 hours until the last couple of weeks, when that increased to about 10 hours. I know you can't catch up on sleep, but I sure have tried:)

#33  

Hey! It's only Robotics. Healthy and happy life is much more important. The world will be here and ready to accept your work whenever it's finished. Take care of yourself first, then finish the platform when you can. No David, No new platform. It's a simple script. ;) Sounds like you've already realized that. Just don't fall back into your passion too much.

#34  

@David, I echo what Richard, Ted and Dave have said, take care of yourself. You and your family come first. BTW, Merry Christmas.

#35  

Rest, relax and then compute! I have found that my robot is always willing to wait for me. I look forward to EZ-AI. Thanks for all of your hard work!

PRO
USA
#36  

Well heck! I've managed to miss about the last month of updates as AOL has decided to blacklist all EZ-Robot emails and dump them in the spam box!

First, holy crap, I'm excited about the EZ-AI development and eagerly await testing. :) (Although I'm gone for work the next six months... maybe I should take ALAN with me?!) Anyway, I'm with everyone else on this. Take it from me, a guy who works 100+ hour weeks on films: take it easy. My blood pressure is barely contained by the daily pills, and my sleep patterns are all crazy from working nutty hours over the past 25 years. Drink a lot of water (64 oz a day) to drop your blood pressure, rest when you can, and try to get consistent sleep patterns! Once you get these things off the ground you can oversee the work and get the young ones to burn the midnight oil!

#37  

My hope is that I will have an EZ-AI plugin for ARC by next week.

There are some changes we are putting in that are only possible because this is designed as a plugin.

The initial "Hey Rafiki" command can now be done through the Speech Recognition control. It would also allow the user to change this command to anything they want like "YO Adrian" or "I pity the fool!".

The facial recognition can now be done through the camera control which can set a variable that we would use.

This helps us to focus on what EZ-AI does well and get rid of some of the trigger events that can now be handled through scripts.

We have to redesign some things on the client to be more of a service based architecture, and those changes should be complete this weekend. From there I will work on making it into a plugin.

The pods would still run the Java client.

Portugal
#38  

Can't wait to test it. Great work!

#39  

Looking for some advice here...

As is the case with most things like this, it seems that others I was depending on to complete certain parts of the Rafiki project have not been able to come through as promised. This has slowed progress on Rafiki a lot. I end up doing everything that was promised by others, and I have limited time to complete it all. Something as simple as needing to design a circuit board ends up pulling me away from other areas, which slows things down.

I can either continue toward a mass-produced solution and sell it much later than I had hoped, or I can sell Rafiki as a kit of parts with the STL files. The kit approach would let me focus on the code to make Rafiki work within ARC as I want it to while fulfilling orders. The good thing about this option is that the manufacturing costs become far lower for me. The kit would include printed circuit boards, which greatly reduce wiring requirements and allow someone to simply plug in some USB-type cables to make the huge majority of the connections. For DIY-type people, this would be a great option, as you would be able to take the Rafiki products and incorporate them into other designs. For the majority of the public, though, this isn't what they are looking for.

I am really divided here but also know that there is a need to get some time to focus on the programming side to complete a lot of the features that Rafiki has been promised to have.

The thought is that the kit would come with everything needed (excluding the 3D printer and the filament to print him). The plugins would be downloadable, along with the code for each of the subsystems, allowing someone to modify them to their own liking if they wanted to. This would still require me to stock some items, which I really don't want to do, but it would be a way to get Rafiki out to DIYers much more quickly.

I appreciate your thoughts on this. I am struggling to make a decision.

#40  

I am a DIY kinda' guy and I have a couple of 3D printers, so I would love a kit version... provided the kit version had adequate assembly and setup instruction support...

Is it possible for you to make a kit version first and then release a fully assembled consumer version down the road? Is this what you are kind of thinking of, David?

#41  

Yea, those are the thoughts swimming in my head.

There are a couple of other benefits to this like having a group of people put the robot through its paces before coming up with a manufactured product and all.

Actually, you were one of the people I first thought about when the thought popped into my head. This would be a perfect fit for you.

#42  

Well, I am all for it... I kinda' feel like a Guinea pig here but I would be willing to buy a kit from you and take the plunge.... You also know you can trust the stls with me...

When you can (if this is the direction you want to go), can you get an approximate price for the kit and, of course, when it might be available? I have to budget now... LOL... Long gone are the days when I made good money from my own business...

You have my email (it's also in my profile) if you need anything off the forum... And there is no rush... I have to save my money anyway.... especially since the CAD is in the toilet right now (so be kind)... $1 US = 1.44 CAN (approx)...

#43  

I understand about finances for sure. Let me do some thinking about it. The STLs will take some sanding and fitting right now, and there are a lot of them. I used the same FlashForge that you have for printing them, so that would be fine. I did do quite a bit of bondo/sanding to get the prototype looking good.

Almost all of the other parts are consumer products. It would be possible to buy them over time from different distributors. Also, I think you already have the motors that are used for the shoulders.

Let me sleep on it and I will make up my mind as to the direction that I want to go.

Thanks Richard

#44  

@David... I have built 2 InMoovs so I am an artist with sanding and filing to get parts to fit LOL... I work pretty much exclusively with ABS, which makes gluing parts together with acetone a breeze... I am more than willing to do lots of work with the 3D parts...

No worries.... I would love to help you get Rafiki to the market...

PRO
USA
#45  

David,

I feel your pain. Where are those cloning machines? The physical stuff takes a lot of time. As for kits, you are definitely going to limit your market, but you do gain some beta testers as you stated. How does this work out with your investors? Are they willing to let the product they were going to mass produce out onto the market? Your software will really be the most valuable asset, and the sooner you get to it the better positioned you'll be. I still think EZ-AI should be a paid product and that additional income could help you move forward as well. A lot of folks like myself don't mind paying for something when we know the money goes back into its development.

#46  

I have a meeting with the investor on Wednesday. I will discuss this idea with him and get his input on it. I agree about limiting the marketplace. It is possibly a strategic move though...

A good idea is going to be copied. If I assume that this is going to happen, provide the ability to do so, and keep the code that makes the real magic happen locked up, then it could be a win.

I am going to have to step away from this for a bit, but it won't be too long hopefully. I have something non-robot based that I need to deal with and focus on for a bit.

PRO
USA
#47  

I like the idea of you tucking that key away deeply somewhere:) it is the fuel that makes the machine go!

Hopefully everything is ok ... As far as stepping away is concerned ....

PRO
USA
#48  

@David,

Kit Idea:

How will you manage the printed parts?

  1. The kit contains the printed parts?
  2. The buyer downloads the STLs and prints them. What printer size is needed, and how many parts are there?
  3. There's a 3D printing service (where you deposit the STLs) and the buyer orders the prints from them?

Electronics, motors, hardware?

  1. The kit contains all the pieces.
  2. The buyer gets a parts list and a recommendation for a supplier?

Firmware, software?

  1. The kit contains the microcontrollers with the firmware loaded.
  2. The buyer needs to purchase the microcontrollers, and you provide a tool to upload the firmware (bins) to each controller.

I think builders and consumers have different mindsets and different goals.

For example, a builder will more likely be interested in hacking or changing the initial idea, adding or removing parts, and asking questions, and will push you to share more about how the solution works, which will eventually lead to an open architecture or modular building.

Consumers, on the other hand, will have less patience for changes, adaptations, delays, or issues, will be less savvy, and will want a working solution.

For me, it's a business decision.

#49  

Thanks for the replies. For the printed parts, I am leaning toward option 2: the buyer downloads the STLs and prints them.

For the electronics, motors, and hardware: the buyer gets a parts list and a recommendation for suppliers. The unique parts would be in the kit.

For the firmware/software: the kit would contain the microcontrollers with the firmware loaded.

I am not sure at this time.

As to the issue that I am dealing with... My son lost his mother yesterday. She wasn't married so all of the legalities are falling to him. He isn't equipped to handle this right now so I am helping him out.

PRO
USA
#50  

I'm sorry to hear that Dave. I'm glad you are there to help him out. It can be a quagmire; I've had to deal with it once, with my father's passing. My condolences to your son.

#51  

@David... sorry man... family first... My brother died 3 weeks ago from cancer... I kind of know how your son feels right now.... We're always here so no rush for anything except family....

#52  

@David, sorry to hear that. I've lost a couple of people in the last year and a half myself. It's not fun; it's part of life, but it's hard. My condolences to your son and your family.

#53  

@David, sorry to hear about your son's loss, I recently lost my Dad. My condolences.

#54  

Thank you for the kind words and emails. I have passed them along. My hope is that I will be able to get back to work this weekend. I have my doubts as sleep isn't something that is happening a lot around here right now between this and early morning issues with work, but my hope is that things will be calming down by the weekend.

#55  

Relax and take a breather, get some rest, we'll all be here when you get back or if you need us.

United Kingdom
#56  

Hey David.

Sorry to hear about your recent troubles and your son's loss (and to the others who have lost someone, especially Richard. Sorry to hear that, dude). I think you should take a little time out and deal with family first, not forgetting your recent health scare. Just my humble opinion.

In regards to Rafiki, I think option 2 may be a better option for you for now. I would tend to go with option 1 for those of us who don't have printers, but option 2 will give you a good chance to get some third party beta testing done like has already been mentioned. The software/programming element of Rafiki and EZ-AI is a vital part of the project, and I think this needs more of your attention when you are ready. Those who are serious future buyers of the complete Rafiki kit, and who know you, I'm sure would be happy to wait a while. I certainly would be.

Anyway, just my 2 cents worth. Take it easy.

#57  

Here is the main difference between something like Rafiki and Alan. Both have their strong points for sure, and both have different things that make them valuable in different areas. This is mainly due to the makers' skill sets.

Alan - The value is in the STL files. The look of Alan is what makes it desirable. Protecting the STL files = profits.

Rafiki - There was a lot of work that went into the design, but nowhere near what went into Alan. The appearance isn't even in the same ballpark. Rafiki's value is in the programming, as Will has mentioned. This includes the subsystem controllers and the plugins that are being written. There is also some value in the electronic boards (the first batch of which should be here today) that tie everything together and consolidate the wiring.

The cost to build one would probably be higher than for a mass-produced version. I think that I went through about 10 kg of ABS to print him out. I am sure that the body of a mass-produced version could be much cheaper. There are also cosmetic things that are required after printing, but this isn't where the value is. Alan is a completely different thing altogether. It is because of this that I really want and need to focus on the programming of the plugins. Without them, the capabilities of the robot are lessened.

Almost all of the parts (except for the subsystem controller board, the communications board, and the signal splitters that are used to take one USB cable to 2 cables) are purchasable as consumer products. Some are automotive-grade components, and they are not cheap. Because of this, one-off purchasing is expensive, but every part was chosen because of its ratings and ability to withstand abuse. The only exception to this is the LIDAR that is currently being used, but in this area I decided to see what could be done with this sensor. The issue is that the cost will drive some away unless it is mass produced, but if I go this route initially there are benefits, as mentioned.

Programming is an interesting part of this. There are things that should be easy that end up being complicated. There are things that should be complicated that someone has already completed, which then makes them easy. It all takes time and focus to either find an existing solution (like the licensing solution I found yesterday) or build something. Focus is really the key, as interruptions shoot a timeline right out of the water. It is these interruptions that I need to get control of. Many of them are impossible to predict, like the situation that I am in now, but many can be avoided. I am focusing on getting the boards completed so that I can then go focus on the programming. My brain multitasks at work all day. It doesn't like to do it on this project, as I think each part requires my full attention. I don't want to overlook things and end up with something that people are not happy with. Once something is released, it is hard to make adjustments.

This is where the DIY-type people would be beneficial. They are much more willing to take something that might require updates than those that buy a finished product. This type of customer would also be able to make improvements and share those improvements with others in the community if they chose to do so. It is such an interesting time for these types of products. It is also a challenge in that if I went this route, I would be competing with InMoov-type products. Looks have a lot to do with people's decisions on where they want to spend their money, and many people are fascinated with human-looking robots; I totally understand it. I have built one and look forward to getting back to it. This does, however, reduce the selling appeal of a kit.

I will take care of myself and my family as that comes first. I will take my time to get things right on Rafiki as much as possible. It is an interesting balancing act for sure, and I am just looking for a way to focus on what needs to be focused on to be able to make a profit from this venture. The good news is that the expenditures are very low so far and it wouldn't take much to be able to make a profit.

#58  

I just spoke to my investor about releasing Rafiki as a kit initially. There are still some things to iron out, and I have a meeting with him Tuesday to do that.

Right now the kit would include:

  1. Built and ready-to-use circuit boards
  2. Programmed subsystem controller boards
  3. Any special wires needed to connect the communications board to the system controller board
  4. A list of the components needed for the build, along with recommended vendors
  5. All of the STL files needed to 3D print the robot, along with any special instructions

I am going to be building another Rafiki shortly. I will make video tutorials of how to assemble it.

While I am doing this, I will continue to program the plugins for ARC that will make everything work in one application.

The main reasons for this decision are

  1. Much of my support from other people has moved on to other things, so this has all fallen back on my shoulders.
  2. The goal was to get a product to market by the end of the year and mass production will take a long time to get up and going.
  3. There are a lot of robots being released this year. We will watch how these do in the general marketplace and use that to determine whether now is the time to release something like this.
  4. The availability of inexpensive 3D printers has become far more prevalent.
  5. The 3D printed parts have proven to be very rugged and reliable so far due to the dome-type structure of Rafiki; they don't break easily at all.
  6. If the investor backs out in the future (no sign of this at this time), this is still a method that I can get Rafiki to market.
  7. It allows me to focus on getting the code complete instead of worrying about manufacturing processes.
  8. Sold as a kit, it removes a lot of the legalities associated with selling a completed robot.
  9. It lets us test the market before diving in and hitting our heads on the bottom of the pool.
  10. The technology and abilities of the different components that I use (including the EZ-B) have changed in the past 6 months. This approach allows us to adapt to these changes as they are made.
  11. This allows people to replace components with things that they already have and allows the cost to be spread over an extended period of time. I don't have financing options, so this is a way to finance the build over time.
  12. It removes the supply chain issues associated with this large of a build.

This won't be a Kickstarter or anything like that. I will be setting up a store on the CochranRobotics.com website. When a purchase is made (I don't know the cost yet), you would receive all of the downloadable content and then I would start manufacturing the boards. I will have plenty of the controller boards and other custom-type products on hand, but I suspect it would take me about a week to get the boards built if needed, tested, and shipped out. There will be very little, if any, soldering needed from the purchaser. Most, if not all, of the wiring would be as simple as plugging wires into the appropriate sockets.

I realize that I will be doing a lot more documentation doing it this way, but this is all good in the long run. Instructional videos for a build like this should easily translate into maintenance and repair instructions if this were to become a produced product. Also, they would be good for manufacturing anyway.

Anyway, just an update. I would suspect that this will reduce the timeline to have this out to somewhere closer to the summer instead of around Christmas.

I am excited about this new direction as I feel I am much more capable of providing this solution at this time. It takes a lot of stress off of my shoulders and allows me to focus on making the best robot that I can. This will be far less lucrative initially. This isn't really my goal at this time anyway as I have a good paying job and can more than take care of my responsibilities with it. I believe that it will pay dividends in the long run if we were to take it to the next level.

This direction allows me to start working with the maker community instead of the general public. As seen with the Revolution robots recently and with Anthony's experience with one of his customers, I believe this will allow me to "interview" potential customers to figure out whether they would be wasting their money on this type of build, offer advice to them prior to the purchase, and fine-tune the platform, instead of answering a lot of questions from people who don't understand it or shouldn't have purchased it in the first place. This lack of communication and research before purchase is where I see the issues that we have seen recently. It is also why I made the post "Is this for everyone", or something to that effect. I am really concerned about the company needing to hire a huge support staff to help people who bought this as a toy. I am concerned about people who have bought this and think "I just plug it in and it goes, without any maintenance or repairs being done by me." It is a different mindset, and I hope to set expectations prior to the purchase. This is a huge concern for me. I understand that it will cost me money to run a business like this, but I genuinely care that someone doesn't waste hard-earned money on something that they will never understand or use. It keeps me up at night...

Thanks David

PRO
USA
#59  

Lots of great points, many of which I have come across myself with the ALAN project. I'll be interested to see how this goes for you. It's a similar idea to the business plan we had, minus the crowdfunding.

#60  

It is an interesting thing for sure Will. Manufacturing moving to the home brings in so many more possibilities for something like this. It also requires some redesign of a few of my parts but not many.

There is something about the "this is a robot that I made" feeling when you see it running around your house, even if someone else designed it. It puts me into the same space as InMoov, I suppose, but for a totally different type of robot. From my InMoov build, I learned a lot about making durable 3D printed parts. I also learned a lot about people's reactions to InMoov. There were some things from the InMoov build that I wished I could have used, but I decided to stay away from those types of design decisions and opted for the stronger option where possible. For example, I could have used the shoulder gear design that the InMoov has for moving some components with a rotational servo. I opted to use a linear actuator instead because of the strength it provides for a minimal cost difference, and at that point I lost all concerns about the gear breaking. It also allowed me to keep more of the robot running at 12 volts instead of having to step some of this voltage down to something like 6 volts for different components.

DJ mentioned this before and the longer I am in the community, the more I understand what he was talking about. It would be awesome to have a consolidated effort by many of the members to come up with a robot platform that really made people stand up and take notice of ARC. So many members are so talented in different areas and each is working on their own robot. A consolidated effort by many of us on one platform would result in a simply amazing robot. That is so tough to do and I haven't thought of a way to make that happen with everyone working their own jobs and having different likes/dislikes and all. It is an interesting thought though...

PRO
USA
#61  

Yes, Tony and I had discussed the same thing. A combined effort would be a great achievement for sure. Many talented people here. I've tried many, many times in the past to start companies or be a part of a start-up. It takes a very special group of people to successfully combine strengths and share the same vision without egos and personalities clashing and getting in the way of progress. That is probably why so many try to do it on their own.

It still would be interesting to start a thread with people throwing out ideas, from potential robot concepts to the sensors they would have, the tasks they would perform, the software/hardware they would run, etc. We know from watching successful robots on crowdfunding what's popular with consumers now (social robots), and from consumer robots that have been out for a while, like the Roomba and lawn mower robots. The balance is always cost versus performance.

You never know what could develop out of talks like that!

#62  

I have ordered the parts needed to build a second Rafiki. I am documenting its build process as I build it so that the documentation can be used in a kit type build. The structure of the documentation will be in the following format...

  1. Image to reference showing the STL file locations
  2. Summary of the build steps for the current part you are on
  3. List of the parts and the recommended vendor, along with expected cost
  4. Detailed step-by-step instructions
  5. Testing procedures to make sure that this series of steps was completed correctly

There will be videos available for the build process showing the step by step instructions.

While the lengthy print process is completing, I will work on finishing up the code for Rafiki to tie it in as plugins in ARC.

Is there anything else that anyone can think of for documentation purposes on the build process? I would like input because so many kits have limited instructions that frustrate those who buy them. I am trying to make this as complete a build kit as possible without causing frustration for the buyer.

#63  

David,

I know it would be a great pain in the ---, but detailed, dimensioned mechanical drawings would be a big help to those of us who would like to build a Rafiki.

Are you planning to make the printed parts available through someplace like Shapeways?

When do you think you will be able to release a parts/price list?

I have followed your Forum entries and answers and learned very much from them.

Thank you.

#64  

Initially, I think I will just make them downloadable through my website, but I suspect they will find their way to other sites eventually.

I agree that mechanical type drawings would be good. I will look into doing that. Good advice.

The price list for the first part (the base) is pretty much ready. I just need to identify what the kit would sell for. I am also making adjustments to the STL files as I print them. As you know, the first print gets things about 90% correct and then adjustments are made. I made adjustments to the STL files from lessons learned during the first build, and will make further adjustments with the second build before releasing them for download.

I am also identifying what is easily purchasable from vendors and what I need to include in the kit. Things such as wiring that is the correct length and such would be included in the kit along with the things that are specific to this build like the subsystem controllers, circuit boards and such.

I will probably release everything in chunks (like the InMoov project) so that someone can start the build. The difference between this and InMoov is that all of the design and STL files already exist when someone starts the build. This is nice, and it will help with the build process since changes don't have to be made to fix something in order for it to fit with the next component developed. Also, by doing another complete build, I will be able to document it as it goes along, which should make the documentation more complete.

I expect to release the first part of the build process in about a month, with the final parts released by July or so. I think that is a reasonable timeline. The large majority of the cost is in the base; the motors and battery make up a large portion of it. Unfortunately, to test things out at that point you would need a 12V battery plus the motors, motor controllers, motor driver, and kit. The base is about half of the cost of the total build. I don't want people to think that every section of the build costs anywhere near what the base costs to complete, so I am trying to figure out a way to convey this.

To set expectations, this build will cost around $3,000 to complete. Mass producing or mass purchasing some parts would reduce the cost, but it would require me to stock items and make a much larger kit, which I am not equipped to do at this time. I wish I were, but unfortunately I can't. I believe the completed robot is well worth this cost when looking at what it will be capable of doing as the other plugins in ARC are completed. This path allows me to do those things that make the cost of the platform much more understandable.

Also, the cost of the parts is higher because they are either industrial-grade components, automotive-grade components, or just very robust components. The parts that normally fail in a robot are the moving mechanical pieces. All of these are really high-grade components, so the hope and goal is that once built, you won't have a need to replace them in the future. Really, nothing is hobby grade where motors or sensors are concerned, except for components that have been used for many years and have proven to be stable and reliable. The battery that is used is very lightweight for the power that it delivers. It also costs more money initially but is really a cost savings in many areas over time.

#65  

One more thing to mention...

This timeline should allow the new components for the V4 to be available based on DJ's stated timeline. There are other components that I use that should also be updated by the time you are working on the areas that you would use them in. Basically, the timeline is based on a lot of different things, some of which I have no control over, but they fit nicely into my schedule.

#66  

David,

I was thinking that I would like to build a Rafiki Bot with the powered base but realized I can't make good use of a roving bot. We live in a 2 story, split level home. I don't think a bot that can go up and down stairs would be economical or practical. But the Rafiki Pod is of great interest. Are you planning to release the Rafiki Pod as a kit?

I am a 79 year old, ex computer engineer. (Not a software developer.) I have a lot of experience with hardware and software package integration.

Some thoughts on Rafiki interaction:

We would not want to enter the initial database information (People/Places/Important Dates, etc.) via keyboard.

Building the different databases as part of a voice interaction would be OK.

We also would like reminders of Dr. appointments. Most of ours are written on an appointment card. It would be great to have a scanning capability to enter the appointment's doctor, date, and time.

Scanning from a network printer or a Rafiki Pod attachment for other entries into the system would be a big help.

I would like to be a beta tester for many of the Rafiki capabilities if you thought that would be of any help. Buying a Pod in kit form has a lot of appeal.

Thank you.

#67  

I just got home from a trip to Dallas. I will give some thought to how I would go about doing this. There are some logistics to work out specifically with going into production mode with some of the vendors that I use that I need to look into. Turning this on costs me a pretty penny, so I have to make sure everything is ready. Not making money with it due to any failures would not be good.

#68  

The public release of Rafiki has begun. The base has been published on CochranRobotics.com for download. The document is incomplete but has the following available.

http://cochranrobotics.com/RafikiDIY

  1. STL files for the base
  2. Component list with recommended vendors for this part of the build and expected cost (except for kit cost)
  3. Slicer settings for printing the STL files
  4. First part of the instructions

This layout requires that you have a 3D printer with a build envelope of 225 x 145 x 150 mm.

I will update the build instructions as I build this Rafiki. If you have any questions, please send me an email or make posts on the CochranRobotics.com forum.

Kit pricing will come shortly, but I didn't see a reason not to release these STL files as it is going to take some time to print these anyway.

Portugal
#69  

Hi CochranRobotics, I think the project is very interesting and the Rafiki design is very appealing. But one major problem with all mobile robots is localization and for me it is crucial that a robot knows where it is and where to go. It is a great challenge to do an efficient SLAM. Environment interaction is also important, like to grasp objects, door opening, etc, or else it's just a computer on wheels. Just my 2 cents. Regards

#70  

It has SLAM (localization), and there have been many posts on this topic specific to Rafiki. It isn't designed to be able to open doors and such. For what it was designed to do, there was no reason for this. Thanks for your input though.

Let me rephrase something - I am working on adding a SLAM module to ARC for Rafiki. While I am building the second one, I am using the first one to finish the programming on it.

#71  

New base with modifications and body shell have been posted. If I run into anything while printing and building, I will update the zip files containing these STL files.

The Base, Body, Wings, Neck, internal pieces and head are available for download.

http://cochranrobotics.com/RafikiDIY

#72  

I think all of the STL files are out there. There are no instructions to speak of. I will be writing the instructions as I go through the build. I will also make modifications as I find that they are needed when I come to them.

[edit] I have also opened this location up to all users, registered or not. I am working on the instructions for the base now, and should have it completed by tomorrow.

There is an STL file missing for the head motor mount, and another for the part that holds the linear actuator block in place on the rods. I will find these and update the zip files.

#73  

I have decided to keep the instructions outside of the STL file download. This is so that I can make updates to the instruction documents without having to regenerate the zip file containing the STL files each time.

I have posted an update to the STL files for the base, and there is a build instructions document for the base. This makes sense to me, but I would be interested in having someone look at it and give me their thoughts so far. The document can be found at this location if anyone would be willing to review it.

#74  

@dcochran, hey David, I'll take a look at them tonight and get back to you after I read it. I'm sure others will do the same and probably have more insight than I do.

Portugal
#76  

Hi David, I think you just made me buy a 3D printer. Would a Prusa I3 printer do the job?

#77  

It would. I would strongly suggest buying one that is in kit form. The reason that I say this is that the kit will have good starting settings to build from. I built a Prusa I3 from scratch and eventually scratched it because the initial settings were very hard for me to get right. A kit will take you a couple of weeks closer to printing.

I personally would get something like a FlashForge Creator Pro. It made 3D printing much easier from the get-go. There are a couple of issues that I found with them, but if you do a lot of printing while still under warranty, these will be discovered and replacement parts will be shipped quickly. Here are a couple of threads that I had on this topic here and at Amazon.

https://synthiam.com/Community/Questions/5725 http://www.amazon.com/gp/customer-reviews/R3IX2PUIBDGM2E/ref=cm_cr_pr_rvw_ttl?ie=UTF8&ASIN=B00I8NM6JO

#78  

I have heard really good things about the Wanhao Duplicator i3... When I get some extra money that will be my next printer down the road...

Portugal
#79  

My budget only allows me to go with the Prusa. Fighting cancer leaves me no choice, lol. I will buy a kit and assemble it nice and slow.

#80  

@proteusy ahhh, sorry to hear that man... Have as much fun as you can no matter what...

#81  

@David, wow! Nice work so far; the document looks good. I am still wondering how you will recoup the R&D and the time spent writing the code for Rafiki after releasing the STL files? You can e-mail me if you don't want to reply on this forum.

I like the fact that you allow people to use cheaper parts if they can find them; maybe you should have a disclaimer about this part? All links worked. I did not see a price on your kits on your website. Did I miss this, or are you still figuring that out?

I saw a few minor errors while reading your documentation. Line 4, you have "super clue"; I think you meant glue? Line 18, you ask us to attach wires to the motor; what gauge/length, or is this in your kit? Line 21, you have "sunk holes"; maybe "sunken holes" would be better?

I would like to speak with you on the phone when you are not so busy, you have my number and e-mail, let me know.

Thanks, Merne

#82  

Thanks for the comments and reading this. I will make updates and repost the document.

Portugal
#83  

@Richard, thanks man, I will. @David, in step 18 you suggest super gluing the wires to the motors. I would not do that, since the super glue affects the insulation of the wires over time and you could have a short circuit and a nasty fire. I think a bit of hot glue would be better.

#84  

Loctite silicone works too. Less affected by heat.

Ron R

#86  

Hi Dave,

I am really excited about Rafiki. The complexity of him makes me a bit apprehensive. My programming skills are very minimal, though I am willing to learn. I am a good candidate for the "less experienced builder".

I am reviewing the 3D .stl files. I am using Cura, so I will have to review your settings and convert them to settings I can use.

A humble remark/question: I saw a few minor spelling errors. I will re-read and make notes so that when you do your update you can fix them, if you would like me to.

Looking forward to the next steps.

Ron R

#87  

The build of the structure is much more difficult than the wiring and the programming parts. I won't have time until probably Thursday to make a good reply, but I do believe that you would be a great candidate. I spent a couple of hours on the phone last night with Merne just to try to explain why I have done things the way that I have. It is pretty hard to put into written words, so I think I will make a video explaining the why and how and so on that I discussed with him. You will not need to be a programmer. Scripting is probably the most you will have to program for anything custom that you want Rafiki to do. There is so much to explain, and I think a video is the best way to do it. Depending on how something goes tonight, I might go ahead and make the video from my hotel room. If not, it will be Thursday at the earliest.

#88  

Thanks for the quick reply Dave.

When you have an opportunity, a video will be great. It will allow someone to better understand your thoughts and reasons for doing things. This way we will all "get on the same page".

I assume for starts, I should look over the base files and see how they match up to my printer build area. Next look at the required parts and costs.

Once I get approval from your site, I will continue any feedback I can offer there.

Regards,

Ron R

#90  

Back from my trip to Dallas. I met with my investor, who is as committed as ever, dealt with work stuff, became a member of the 2000+ member Dallas Makerspace, had a great conversation with the GetSurreal creator and found out about his plans for code changes, met with the Dallas Personal Robotics Group, introduced EZ Robot and the idea behind Rafiki to about 5 people, and overslept this morning. It was a productive 2-day trip. I will do my best to make the promised video tomorrow or this weekend.

#91  

Glad to hear you made it home safe, David. Sounds like you had a very productive time in Dallas. Interesting about the GetSurreal creator who is changing his code.

Great to hear your investor is 100% behind you. Can't wait to see the video. By the way, check your other email.

PRO
Belgium
#94  

Very educational video.

#95  

Dave,

Thanks for taking the time to explain Rafiki's background, concept and design points. It shows that this is the real thing, not a toy. The heavy duty components, and modular controllers will allow years of use, flexibility and expansion.

Regards, Ron

#96  

I have the base documentation updated and mostly completed for the DIY build. There are a couple more things to add at the end of the document. I made a couple of changes on the board after the first build of the board. These changes allow me to use locking headers for the wires from the communications board to the main sensor controller board. I am also changing a couple of the USB headers on one of the boards and giving an easier way to attach the encoders from the motors to the main sensor controller board. There are also small changes that will allow the USB connectors to fit better and small changes to the way the BEC mounts to the communications board. Also, there is a change to the power connectors from the BEC port on the communications board and the main sensor controller board. I also changed the board layout to not include the flip flop boards as I think I will have plenty of these to fulfill any orders for quite some time.

My hope is to have the kit available for sale this summer. The printed parts can be completed prior to that so that then the build is simply adding the electronics and such. If you choose to go down this path, don't assemble anything above the base until you have the electronics. It will be much easier to add the electronics as you build the robot. I am working on adding access panels under the arms so that you have more access to the inside of the robot. I will update the STL files when this is complete.

One other note... The first Rafiki was printed when I had my office setup. I could keep the office pretty hot. This allowed the tall body pieces to be printed well without layer separation as the print got to its taller sections. My office has moved to the living room to allow my son to move in, so I don't have the climate control abilities that I did when in an office. It was also printed in the summer which just really meant that I had to close the door to the office and block the vent to make sure heat stayed high in that room. I have had to order a lexan box to sit over my 3D printer due to the location of the 3D printer now. This box could have been built but I figured that I would rather pay for something that will work instead of going down the path of building something that might work for this. This should help the prints to complete without the separation issue. I mention this because you might experience the same thing. It has been my experience that even though the printer is enclosed, there can still be issues with this due to heat.

Another effect that I hope to gain from this is to make the printer more quiet while printing. The printer isn't loud, but being that it is now in the living room, I like for it to be quiet so that others in the family are not disturbed while watching a movie or TV or whatever.

Anyway, all of this is said because I am now waiting for the box to arrive. Once it gets here, I will start printing the body again, and then start making modifications to these parts as needed and such...

Another note on EZ-AI... My son is planning on starting work on this again in the coming weeks to complete the plugin for ARC. I hope to have this completed shortly and go forward from there.

Thanks David

PRO
Belgium
#97  

Where does the name Rafiki come from?

#98  

It is Swahili for Friend.

There are some reasons that I chose it beyond this, though. When the son of mine who is helping to program some things was a young child, we would watch "The Lion King" over and over. Rafiki was a wise character in that movie. Also, there was a tree in the movie that was important. In math, a lot of different trees are used to demonstrate relationships between things, because of the branches that a tree has and the branching structures that show up in math. We have something special planned for the tree with Rafiki.

PRO
Belgium
#99  

Great thinking. I never saw The Lion King; maybe I should see it. I am curious to see what you have planned.

PRO
USA
#100  

Thanks for the update, David. Looking forward to EZ-AI updates! I'm fully into production on Guardians 2, so there is not much I can do in my apartment on weekends, but I am looking forward to playing with the plugin when available.

As a note, I printed some large parts for the BB8 and never used PLA, always ABS. Now I'll never use ABS again; no warping or split layers.

#101  

Yeah, I just wish PLA would bond as strongly as ABS without the mess of epoxy. I make huge messes with epoxy...

BTW, our online store is... NOW OPEN!

There are only a few items available for purchase though and it looks rough. I will be updating it with more as the week goes along.

#102  

I print both... I don't have any problems printing with ABS... It depends on your printer, settings and size of print... PLA is definitely easier to print, but if you leave anything printed in PLA in the sun or anywhere hot, it will melt on you. ABS is awesome for strength, longevity and, as David pointed out, bonding... acetone bonds ABS stronger than the actual part is...

PRO
USA
#103  

When building things big with multiple intersecting pieces, I find ABS warps too much, even on a heated bed in an enclosure. Assembling the parts leaves gaps and parts that don't connect easily. My personal experience. I'm sure there are many factors, such as the filament manufacturer, the settings and the machine, that affect the outcome.

#104  

I was looking at many of the body parts. I was wondering: if I lay them down and use support to the build plate, will I have fewer problems with separation and warping than standing them up? They will need sanding anyway. I also assume the robot needs a skim coat of Bondo for the finish.

#105  

@Richard R, do you use acetone vapor for outside finish and strength? Have you used epoxy paint ?

#106  

@Dave, what is the total estimated amount of ABS needed to build Rafiki? I plan on placing an order and want to be sure I have enough.

Ron

#107  

12 rolls is what the first one took. I would estimate about 10.

I print them standing up to prevent a lot of the support material needed.

#108  

Then it is best to print them as is, in the .stl?

#109  

You will have to rotate them to fit on your build envelope correctly. I use clips that hold the build plate in place. I account for the location of my clips. Some will need to be flipped upside down to give a flat surface on the bottom. The lower layer of the body has a point on it. These will need to be rotated to be upside down. Once you get past this row you should be able to print them as they are.

#110  

Thanks Dave,

I am looking forward to the build. The order is in and my two machines are set to go. I will also add a little heat to the build area to help prevent separation during printing.

Ron

#111  

One more thing that I am doing differently on this build: the last one was printed with a layer height of .2 mm, while this one I am trying to print at a .3 mm layer height. If the box doesn't fix the separation issue, I may have to go back to the .2 mm height.

#112  

@David, fantastic to hear your son will be getting this completed soon. I can't wait until it is complete and out to users. While I guess I will have to wait, lol.

I sure hope you will let us use this for other robots too. I also want you to know I do not expect anything for free, just hope I/we, will be able to afford it. :)

So if needed, maybe this community can open their robotic hearts and maybe we could come up with some prize money, not only for EZ-AI but for the Lidar as well.

Just thinking out loud. If anyone else wants to jump in, please do.

Thanks :)

#113  

Simplified my website a lot. There isn't a message board right now. I don't know if I will add it back in as it was rarely used. Also, no login required, but if you want to register, you can. The information is now a bit limited on the site, but my goal is to keep the site just to what it needs to have on it. It is easier to manage and maintain and is more like the current layout of most websites. I will add more to it as needed, but this is where today's time went.

In the store, we only ship in the US, Canada and Mexico right now. Shipping overseas is very expensive and I haven't researched the other possibilities for doing this. Sending a small box to England using UPS costs $86+ USD. Until I find a better shipping solution, or need to find a better shipping solution, I am not going to be selling items overseas.

Let me know what you think of the new layout. Thanks David

PRO
USA
#114  

Clean layout, sometimes simple & less is the best approach.

#115  

Thanks ptp. There is a lot behind the scenes. I have tied in Google Apps for my company communication stuff like Email, Calendar, Groups and Storage. It is nice to be able to go to one place for all of this. It makes a geographically dispersed company much easier to manage.

Simple is good. I just wasted a day to get simple done and really didn't want to spend a ton of time on it.

#116  

@David, I like the new look on your web site!

#118  

Hi Dave, Will you be marking .stl file updates with rev notations ? I have begun printing the base .stl files and want to make sure I have the latest versions, if changes are made.

Ron

#119  

Will do. All of the base files that you have are complete except for part 17. There is a small change that I will get updated for download in the GitHub repository and in the Google Drive share.

Thanks David

#120  

On the first layer of the shell there are a couple of modifications to 2 parts (under the wing) that I won't publish until I get a chance to print them out and verify that they work as expected. I will be adding access holes with the second layer of the shell that will be in the area that is covered by the wings. The covers for these holes will be held in place by the top and bottom sections of the shell and will use magnets to hold them in place. Until I can print these sections, I would advise against printing the shell. Hopefully this week I will have that redesign completed.

#121  

Thanks Dave. Please do not take my enthusiasm as pushing. I was going to hold off on 17 and 20 anyway because you mentioned there could be modifications.

My only other concern is: is the base hardware (motors, wheels, hubs, etc.) finalized? I would like to begin ordering the parts. I want to budget the cost over time, so I assume the motors would be the most critical parts for the base build (fit and finish). I would order them first.

Ron

#122  

The electronics (non cochranrobotics pieces) are finalized for sure. The latest published STL files will be the final ones for the base. You will be fine ordering the parts in the instructions for sure and if there are any additional ones (I seriously doubt that there will be) I will post an update.

#123  

I wasn't planning on any of the shell until you announced a release. I want to take my time on the base to be sure I get a good fit.

My last request is, upon the finalization of the base assembly instructions, can you give critical dimensions of the base? I just want to be sure the completed section is correct for the shell fit / next assembly. No great detail is needed, just what is important. (you can even use the drawing on page 1 of the instructions)

Ron

#124  

One other thing...

I am glad that you are working on the build. It helps to keep me focused on something. There is so much to doing this that it is very easy to get distracted by many different things. By you doing this, it helps to put a priority on some parts that I may have let slide until later.

Thank you Ron!

#125  

The cool thing about ABS and the way that the Rafiki DIY is built is that you will have a gap at the back of the bottom of the shell. This is by design because it allows you to fit your printed pieces correctly based on the mm of difference between your 3D printed parts and mine. This allows you to fit your pieces of the shell to your base correctly. Excluding an adjustment by you to your printer settings after the printing process has started, I can see no reason why your parts wouldn't fit well.

ABS allows you to fill any gaps. I use a thick slurry (about the thickness of a milk shake) to fill parts if there are any gaps. I would wear old clothes as this is a messy process for sure, but it sure works well.

I will come up with something that has the dimensions though. I should have it out to you this week.

When building the shell, I use the wing pockets to align one layer to the next. This will help you to be able to align things as you build upward.

#126  

Please remember this is a stress free environment ! .... LOL ..... Health first, fun next.

Ron

#127  

Hey Ron,

I just finished up the side access panel modifications. Give me a couple of days to get them printed and checked out to make sure things work as designed. I am not sure if I should use a top or bottom magnet at this point, though. I don't want to use both, as that basically blocks the access panel with the supports that hold it. I will let you know how I make out with this. Once this part is complete, I think the shell will be ready to print.

Also, I owe you the layout with measurements. I will work on that soon and publish it with the instruction manual (replacing the top image and all).

Thanks David

#128  

Hi Dave, thanks for the update. Can you glue in a small plate of steel, like a small washer, and have the magnets grab that? Then have a tab and pocket for the other side. These tab things can get glued in where needed. I have been looking for alternatives for this type of latching for an unrelated project. Velcro never seems to work really well. Magnets need locator tabs for alignment. I was thinking about using rubber rivets. My last resort will be pins and a dab of automotive clear silicone (Loctite brand) in 3 places. I won't be going inside the case that often (only for problems). The silicone can be removed and re-sealed easily. The brain keeps looking. Regards, Ron

#129  

The design is to use 2 rare earth magnets, one in the door and one in the support. I think I will include both the top and bottom supports and let the builder decide if they want to use the support on the top, bottom, both or neither.

These access doors were not in the original design. During the build they are not needed. They are very good if there is an issue though. It is a difficult thing to try to do things from the back door. Also, if nothing else, it helps to have more light in there.

#130  

I had an issue on my 3D printer (clogged hot end) that I will need to address. It is about time to replace the hot end anyway, so I will do that today and then keep printing. The good news is that the Plexiglas box fixed the layer separation issue that I had. Unfortunately the hot end clogged. I should have that fixed tonight and will start printing again.

Thanks David

#131  

LOL... I just went through replacing my hot end too. I have the same separation and warping issue even with a box. I just kicked up the heated bed and hot end temps 5 degrees and hope it will help. Even though there is some warping and a couple of cracks in a couple of the parts, I will check the alignment from the top surface. If it looks OK I will assemble the parts anyway; if not, I will do a re-print. You are looking at a production process for sale; I can get away with minor flaws. The strength should not be affected. Sounds good about the added doors. If the side doors have a lip or pocket where they fit, I assume they can be held in by all the ways you mention. I may try the bathtub caulk idea and see what happens. It is white and Rafiki is white too. The caulk will release more easily than the silicone sealant if access is needed. (LOL... prevents mold too!)

Ron

#132  

@Ron, I just posted new STL files for the shell. These are on the GitHub repository. I am only going to be updating it there because it will show you the date that the files were updated. This should solve the version issue you mentioned.

There is a lip added to parts 1, 2, 3, 4, 5, 6, 7 and 8 in Shell1.zip. This lip will allow you to align the shell to the base a little bit easier. The magnet holes have been added to part 11 in Shell1.zip. The magnet holes have been added to parts 18 and 19 in Shell2.zip. The door supports have also been added to Shell3.zip.

The door supports are separate pieces that allow you to decide which you would like to use if any. I would probably just use the bottom ones. This should still allow you room to access the inside of the robot from these doors. It will also allow you to mount these (bond them to the inside of the shell) earlier. I didn't attach them to the body shell so that you could choose what you want to do.

Small layer separation is no issue. Fill it with Slurry and go on. A lot of layer separation isn't good and will need to be reprinted.

As far as the doors go, they have a gap on the sides of them. This is by design to allow some venting. I haven't needed venting but figured it was a good place to add them. The magnets and the amount of support should be fine. I haven't physically tested it out yet though. The holes for the magnets are designed to fit these magnets (http://www.amazon.com/Creative-Hobbies%C2%AE-Ceramic-Magnets-Science/dp/B014Q44HOK/ref=sr_1_4?ie=UTF8&qid=1456177695&sr=8-4&keywords=magnets+rare+earth)

The supports have the same curve as the doors so with the magnets, they should be held well. The top and bottom of the door are snug to the rest of the shell.

One other thing... Part 17 in the base has been updated to allow a little gap for the wires from the encoders to feed to the board more freely.

I still owe you the measurements. I won't have time to do this tonight, probably. I need to get the lidar plugin to quit locking up for some people, and I need to get the 3D printer going again. I haven't forgotten about this though.

#133  

Sounds good on the shell updates. I will see the door fit when I get there and figure out your magnet assembly.

I will check out the fit and finish of all the base pieces and if any are N.G I will re-print. Otherwise I will do a slurry fill and bond them all together on final base assembly. They are small cracks. I will probably wait for the motors to come in to be sure of alignments. Ok on base part 17. I will continue to complete all files.

As far as the dimensions, no problem, I'm still printing the base pieces. I won't need it until all are finished. It was really just a suggestion, just for confirmation of fit.

#134  

@David, you have mentioned you have your printer in the living room. You are using acetone and the slurry to hold the parts down. My question is: how do you keep the smell down so it doesn't bother your wife or your kids? I ask because my wife kicked my printer out to the garage, which is too cold even though it's enclosed, so I won't be able to start printing Rafiki until it gets warmer out; there is no heat in the garage.

Thanks

#135  

I use an acrylic cover over the printer for heat control but it also protects against sound and smell for the most part. My daughter woke up the other day and said "Nothin' like the smell of ABS in the mornin'" so I guess she can smell it. My nose has become pretty numb to the smell I guess. Also, just move an adult child of yours back into your house and give up your office. You can get by with a lot then without them complaining because they like seeing the child:)

Actually, I will probably move the printer to the garage soon. I have a nice work area set up, and the box should allow the printer to maintain heat even when it is a bit cold outside. If I need to, I will put a blanket over it to help hold more heat. I am further south than you, so 70 degrees in the winter isn't unheard of. I guess 0 degrees is also not unheard of, but it seems this year 70 has been more the norm.

The box I bought cost about $200. The shipping on it was crazy high ($175 I believe) but I didn't feel like failing over and over again to get a box right. The one I got is 24"x24"x24" and gives a lot of room inside the printer for spools and such. It is a bit too deep for what I need, but too big is better than too small in this area.

User-inserted image

#136  

Thanks, David. I think I'll get a cover for mine, even though it is enclosed already, and try the blanket. It gets frustrating when you print for eight hours and it doesn't come out right.

#137  

I understand. I had a part printing for about 8 hours and the hot end clogged on me the other day. I spent this morning replacing the hot end assembly on my printer to get things going again. It is a pain to do but should last through the build of another Rafiki again. I need to soak the other hot end in acetone for a while to see if I can free the clog. Hopefully it isn't trash.

#138  

Hello Dave, I will be placing an order for EZ Robot parts soon. Do you have a rough list of EZ Robot components needed for Rafiki yet? I assume the new version of the EZB, once it is released, will be needed. I have some items now and want to see if I can use them (compass, cables, servos).

Ron

#139  

I show part 2 in the shell file fitting on my heated bed. It will take about 19 hrs 15 minutes with a brim, 1.2 mm layer height and 15% fill. Sound OK? (I changed the wall and infill. Will it be all right, or does it really need to be heavier?) It does fit on my Hictop (200 x 270) but not on my Borlee (200 x 200). I will try to see what fits on the smaller print area because it is the most common size bed, and let you know. I will begin tomorrow or Monday.

#140  

I use a power adapter, a power base, a 4 in 1 sensor and the EZ B.

When the new 1/2 comes out I will use it with one of the new com boards.

On part 2, it is probably the largest piece and will give you a good idea if your printer is going to have issues with printing the parts for Rafiki. I printed my last one at .2 mm layer height 15% infill. On this one I will print all of the parts except for 2 of them at .3 mm layer height.

The reason that I don't think it will matter is that after the parts are fitted, you will have sanding and filling to do anyway, so layer height really isn't going to do much in the end.

#141  

I am using Cura ver 15.04 on both of my machines. Temps are 225/80, and I will use .2mm 15% also. The top and bottom are 1.2. I think this will be strong enough. My only fear is layer cracking due to the size of the shell parts. If I get any cracks, I hope they stay small. I will increase the brim from what I am using now to keep from having any first layer issues and good adhesion to the glass.

I am near the end of the Base files and so far they came out ok. My slicer screwed up the top of the hook on one of the parts but it is only the hook. Once they are done I will sand and fit them and wait for the motors. I may have to modify the battery tray if my battery doesn't fit, but not by much. I also want to add a battery strap in case it ends up loose. The 20 ah sealed batteries are 7.1" x 3" x about the same height, which may be a future option. The cost is considerably lower than your production spec. battery. I will let you know how I make out.

#142  

Sounds good. I have also thought about a strap but haven't needed it with this battery due to its very tight fit in the base.

If you have any issues with printing the shell, let me know and I will cut down the large/tall pieces to fit more printers. I have been wanting to do this anyway.

Let me know how part 2 in shell1.zip goes. It will let me know if I need to do this or not.

Also, I really don't sand down the base. It is completely hidden so I don't worry about it much. I have thought hard about making cold cast parts after this one is completed which would be based off of one that is very smooth. It is a ways down the road though and it might not happen. It just all depends on if I can get all of the other things finished and this one printed and assembled.

I really need about 4 more 3D printers for the next 6 months. That sure would go a long ways toward getting everything done that I want to get done.

#143  

I used brims to try to eliminate losing the bond to the glass, so I just need to clean them up. I would still consider a strap for the battery in the future, just in case. My plan was to make a small notch in parts 9 and 12 and slide a piece of 3/8" or 1/2" Velcro, hook side, under the base and up and around the ends of the battery, then close it with a 5" piece of the Velcro "fuzz" side to strap it in. I may use the sealed battery down the road, which will require a modification to 9 & 12 and the strap anyway.

I will print part 2 first and let you know how it comes out.

#144  

Velcro is a good option. Let me think on this a bit. I may need to modify a couple of parts in the base to allow a Velcro strap to be used.

#145  

Parts 10 and 11 were modified for a Velcro strap for the battery. The thought is to use this as a front hole, and then the caster wheel hole, for a Velcro strap to hold the battery down.

#146  

I have hated relying on the EZ-Robot community forum to provide updates on my projects. There are so many ways now to deliver information, and while I love this community, I don't like consuming resources at EZ-Robot as my own communications channel. There are some benefits to using this community, as this is probably the online community that I am the most familiar with, but I also feel bad every time that I have to make a post about Rafiki or EZ-AI.

Because of this, there are now some other avenues available to get information about EZ-AI and Rafiki. They are:

EZ-AI Community group page

Rafiki Community group page

and http://www.cochranrobotics.com

I will try to keep these updated with information as it can be released. With the community group pages, you are able to turn notifications on or off for new posts. This does require that you have a free Google+ account, which isn't a bad thing anyway. I will also reply to any questions posted here about either of these two products. Hopefully this will let those who don't care to see these updates avoid them, and will help to keep this community focused on the products that it was intended to support.

Thanks David

#147  

David,

The EZ-AI Community Group page is saying I am required to sign in with a Google+ account if I want to join. Is there another way?

Are we still allowed to post EZ-AI questions and discussion here?

Thanks!

#148  

Yes, it is just much easier to do in Google and all of the information is in one place. Nobody asked me to move it to the Google+ page. I just feel bad about doing things through here. I will be happy to answer questions if posted here.

To use that page, you would need to sign up for a free Google+ account.

On a side note, it also allows me to see the level of interest in EZ-AI.

#149  

@Cochran Robotics

Dude, I just purchased a bunch of mold making materials for Archetype, so I can reproduce the shells in whatever material I'd want. Right now fiberglass/cold cast aluminum is the strongest contender, but other resins and even carbon fiber are possible. If you need molds made for Rafiki for mass production, let me know.

#150  

That would be great! I will be in touch when I get this one printed out and built. Your help would be greatly appreciated.

Email me if you would so I can figure out what any limitations are. My email address is in my profile here.

#151  

@Cochran Sent you an email.

#152  

Thanks, Doom. I will be in contact in the next couple of days. You being around Houston makes it not too bad of a drive for me when needed. I am in Dallas all of the time anyway, so what's another 8 or so hours of driving if needed?

I will be in touch.

#153  

@Cochran Cool bro. I'm all the way north of Houston so I'm only about 5 hours away from Dallas

#154  

David,

Question about the new EZ-AI: you noted that users will be able to choose and subscribe to the features that they want. If a subscription lapses, or the user decides they are satisfied with the current "level of intelligence" and don't renew, what happens? Does their bot go dark? Or does it continue to function as it did, but it would no longer get updates and improvements?

Thanks!

#155  

It goes dark, as the information is not on your network. The knowledge base resides behind an API. The speech recognition is an API. The service wouldn't do anything because the speech wouldn't be parsed and the knowledge base couldn't be accessed.

In order to offer the level of product that we wanted to offer, we leverage some third party tools which are mentioned in this thread. These cost money to offer and each transaction costs money.

To use EZ-AI will cost a monthly fee based on the level of the knowledge and the speech recognition engine you choose to use along with the number of requests you want to make per month. The advanced services will be better and cost more.

This thread also discusses what happens when you go beyond the number of requests that you have purchased, but basically we shut you off until you have either increased your total number of requests available by purchasing more, or you go dark for knowledge and speech recognition access in EZ-AI.

#156  

The Implemented section had to be completed, for the most part, before the other things could be handled. We should have the Implementing list done by May 1. I am seriously considering using May the 4th as the release date for the Beta simply on principle, as it will be Star Wars Day... Anyway, here is a list of the things completed and the things planned.

IMPLEMENTED:

  1. Natural Language Processing
     a. Deciding what action is to be taken for audio/text output
  2. Entity Recognition
     a. Detecting names/places/dates/times/addresses/colors/currencies and more
  3. Detection of a person through an image
  4. Tracking that person from multiple devices
  5. Offering contextual information
     a. Based on detecting a particular person at a particular time
  6. Detecting patterns in requests and offering to do, or automatically doing, a particular action for a particular user when they are detected
  7. Commands:
     a. Reminders
     b. Calculator
     c. Basic QnA
     d. Small Talk
     e. News
  8. Undoing a particular action by saying "Nevermind" or similar
  9. Java client for interacting through microphone, speakers and camera
  10. Web interface for interacting through text
  11. Option for using multiple speech-to-text, text-to-speech and knowledge engines
  12. Sessions on a particular device (requests can use information you provided in previous requests and take action)

IMPLEMENTING:

  1. ARC native plugin for interacting through microphone, speakers and camera
  2. Detection of a person by voice
  3. Adding multipart upload to the web interface to send images and audio to the server for testing purposes
  4. Learning the context of your queries (if you ask about HTML, JavaScript, and CSS, the AI will know that you are interested in Web Development Tools)
  5. Getting the AI to know a person through a brief (5 minute) 'tutorial' period
  6. Key authentication server
  7. Sending logs of requests the AI cannot handle (optional) so that we can make the AI better

TO BE IMPLEMENTED:

  1. Commands:
     a. Booking a reservation
     b. Contact book
     c. Finding social events
     d. Finance (stock prices and information)
     e. Flight information
     f. Notes
     g. Maps, points of interest and navigation
     h. Media playback
     i. Taking notes
     j. Social media reading and posting
     k. Sports standings, scores, statistics, schedules, etc.
     l. Setting an alarm
     m. Translating a phrase
     n. Getting the weather
     o. Web searches
  2. Device control through AllJoyn and Z-Wave
  3. Learning relationships (my dad is David. Remind me to call my dad.)
#157  

David,

Any estimate of subscription costs?

#158  

I am waiting on one cost to be finalized. I hope to have this finalized this week. Once I have this cost I will post a breakdown of the subscription cost.

#159  

The subscription cost will be $30.00 per month for 1000 requests but read this through and you will see how the costs can be limited and why there is a charge. A request would be something like "Tell me to wash the car tomorrow" or "What is Myotonia Congenita?"

If you have a reminder setup and EZ-AI reminds you of this, this is not a transaction. If EZ-AI recognizes you and says hello, this isn't a transaction.

We are working on options to allow you to reduce your cost for the information that is returned. Here is a scenario of a paid-for service vs. a free service. The paid-for service has its information vetted by professionals in that area (in this example, a doctor specializing in neurological muscle disorders). It would return a phrase that is meant to be spoken and display more detailed information on the computer screen if you have chosen this option. The free option would return the information from Wikipedia. This could be quite large and could be riddled with errors, because Wikipedia information is not vetted by a neurological muscle disorder doctor in this case.

Another of the paid-for services is the best speech recognition service available. The odds of a normal user correctly pronouncing Myotonia Congenita (unless they are somewhat familiar with this disease) are pretty small. The speech recognition engine that we use will handle small mispronunciations of words, and the knowledge system that we use does a great job of finding matches even with misspellings. This makes the experience far more enjoyable. Without these paid-for services, we know that the experience will not be as good.

I haven't made this publicly known but now is as good of a time as any...

EZ-AI is more than just an application. It is an AI framework. This allows programmers (mainly Java) to extend the features of EZ-AI for their own purposes. By simply dropping a class file in a directory and making a simple modification or addition via a webpage, you can modify EZ-AI to use whatever services you would like it to use. This is the real power behind EZ-AI and where we see its potential. Additionally, you can customize the actions performed based on the main service that we use. You can set up your own API.AI account, download our environment to yours via a zip file, and then modify actions to be specific to your environment. These do require some level of programming experience, but you have the ability to change EZ-AI into whatever you want it to be.
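To make that idea concrete, here is a minimal sketch of what such a drop-in class might look like. The package, interface name and method signature below are only placeholders for illustration; the actual EZ-AI plugin contract isn't published in this thread.

// Hypothetical sketch only: the real EZ-AI plugin contract isn't shown in this thread,
// so the package, interface name and method signature here are assumptions.
package com.example.ezai;

/** One pluggable EZ-AI service: takes the parsed request text and returns a spoken-style reply. */
public interface KnowledgeProvider {
    /** Return a short answer for the question, or null if this provider can't handle it. */
    String answer(String question);
}

/** A trivial drop-in class that answers one in-house question without any paid API. */
class OfficeHoursProvider implements KnowledgeProvider {
    @Override
    public String answer(String question) {
        if (question != null && question.toLowerCase().contains("office hours")) {
            return "The office is open from nine to five, Monday through Friday.";
        }
        return null; // not ours; let another class (Wolfram, Wikipedia, ...) handle it
    }
}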

What I am basically saying is that if someone (us or anyone else) develops a class that uses other, free information services, then you would be able to use that class to bypass some of the cost associated with EZ-AI. If someone wrote a different class that accessed a different API on the internet and it didn't cost any money, you would be able to download and start using this class for actions pertaining to that type of request. You would make a change in your account, and your billing the next month would not include the charges for the paid-for service that was replaced by this new class.

This is all confusing for people who don't work with this type of product, and I understand that, but there are options to reduce or even practically eliminate the monthly cost of running EZ-AI. What we are delivering is the fully functioning, paid-for EZ-AI at $30.00 per month. There will be documentation on how to go about changing out services and such. I don't have a timeline on when this would happen yet, but the goal is to allow you to make EZ-AI into whatever you want or need it to be.

#160  

David,

My use for EZ-AI would be purely recreational: have some interesting conversations, have him remind me of appointments, dates, etc. Considering that I don't use half the functionality on my smartphone (a Luddite working in the cyberage!), many of the features are not important to me. I don't use social media (Facebook, Instagram, Twitter), I'm afraid to fly and I don't invest in the stock market. :)

If I understand you correctly, EZ-AI "basic" will start at $30/month, but the user will be able to opt-out of certain features, which will reduce or eliminate the cost? Or at least bring it down drastically. If that's the case, it's very good news!

#161  

Quote:

If I understand you correctly, EZ-AI "basic" will start at $30/month, but the user will be able to opt-out of certain features, which will reduce or eliminate the cost? Or at least bring it down drastically. If that's the case, it's very good news!

That is sort of what he said. More like, the initial release will be $30/month. There is a framework built in that would allow sophisticated users to swap out the components with free ones that do the same function with perhaps less accuracy as long as they have APIs.

In another thread (about Google opening up their voice recognition API) David mentioned that a future release of EZ-AI might take advantage of that and a new API that Wikipedia is releasing to reduce or eliminate the monthly cost, but that would be a future thing after EZ-AI is released with its current speech and knowledge engines.

Alan

#162  

EZ-AI uses services for its information. Some of these services cost money to use. Some are free to use especially if you are signed up as a developer to use these services.

api.ai costs money to use if you make it public for people to use. api.ai also allows you to use a developer license for your own use, but if you make your service public for others, they will start to charge you or shut down your service.

Nuance Cloud costs money to use if you are doing REST calls (basically using it from a computer). You can use this service for free for a limited time as long as you don't make your service public.

Wolfram|Alpha is free to use if you sign up to be a developer. Once you publish your project for others to use, this costs money.

Google speech stuff is free to use right now if you sign up to be a developer. It will cost money at some point to use this service but who knows when.

There are cookbook APIs, Bible APIs, map APIs, email APIs and many, many other APIs out there that are either free for a developer or free for a limited time.

EZ-AI uses CLASS files to interface with these APIs. If someone needed an AI that associates dog tail length with the type of dog, this could be done in a CLASS file. The CLASS file could then be used by EZ-AI. It is possible for someone to publish this CLASS file, which would then allow you to use these features from within EZ-AI. It might be that this costs money to use, or it could be free. The billing for the use of this CLASS file would have to be handled by the person making it. EZ-AI is simply a way to leverage these CLASS files through an application.

To get EZ-AI, you will be charged $30.00 for the first month's service, and you will be charged the cost of the server and shipping. This all comes to roughly $120.00. You would then be able to modify your instance of EZ-AI to only use the services you want to use, or to use someone else's services by adding their CLASS files and turning off the default ones that come with EZ-AI.

If someone gets around to writing a CLASS file that uses Wikipedia (which is free, for example), then they could use that CLASS file instead of the Wolfram class file. Because this API doesn't do speech-to-text, it wouldn't replace the Nuance Cloud API, so you would still get charged for that service. If someone gets around to writing a CLASS file that uses the Google speech stuff, and if it is still free by the time this happens, you could use that CLASS file to replace the use of Nuance Cloud. You would then not be charged for the use of Nuance Cloud's API. There is still the $10 charge for api.ai. This is the core service that makes EZ-AI work. That isn't to say that there isn't something else available that could do some of these things, and if someone wrote a CLASS file to point to a different API, then this could also be replaced, reducing the cost that you would incur for using our api.ai instance.
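As an illustration of that swap, here is a rough sketch of a free, Wikipedia-backed class. It assumes the hypothetical KnowledgeProvider interface from the earlier sketch and uses the public Wikimedia page-summary REST endpoint; it is not EZ-AI's actual Wolfram or Wikipedia code, and whether a class like this could really be registered this way depends on the plugin contract that gets published.

// Hypothetical sketch: a free Wikipedia-backed provider standing in for a paid knowledge class.
// Assumes the illustrative KnowledgeProvider interface above (not EZ-AI's real contract).
package com.example.ezai;

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class WikipediaProvider implements KnowledgeProvider {
    private final HttpClient http = HttpClient.newHttpClient();

    @Override
    public String answer(String question) {
        try {
            // Treat the question text as a page title; the Wikimedia REST API returns a JSON summary.
            String title = URLEncoder.encode(question.trim().replace(' ', '_'), StandardCharsets.UTF_8);
            HttpRequest req = HttpRequest.newBuilder(
                    URI.create("https://en.wikipedia.org/api/rest_v1/page/summary/" + title)).build();
            String body = http.send(req, HttpResponse.BodyHandlers.ofString()).body();

            // Crude extraction of the "extract" field; a real class would use a proper JSON parser.
            int start = body.indexOf("\"extract\":\"");
            if (start < 0) return null;                 // no usable summary, so report "unhandled"
            start += "\"extract\":\"".length();
            int end = body.indexOf("\",\"", start);
            return end > start ? body.substring(start, end) : null;
        } catch (Exception e) {
            return null; // on any failure, let EZ-AI fall back to another provider
        }
    }
}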

I believe that it is reasonable to charge $5.00 for the use of EZ-AI even if you don't use any of the other services, but right now we are making it free to use if you don't use any of the APIs that come with EZ-AI. You would still have to pay roughly $120.00 for the hardware, shipping and first month's use of these services. Even if on the first day you decided that you have other CLASS files that you want to use, you would still be charged roughly the same $120.00.

So, why was the other version of EZ-AI free to use? Simply, I ate the costs associated with using the APIs or found the least-cost versions (sometimes free) of these APIs. We proved that what we wanted to do was possible, even though we didn't do it in the most efficient manner (as was evident from the brutal install process). We have done things far more efficiently now, focusing on the end user experience. If we had a choice between API A and API B, and API B offered a better user experience, we went with API B. This could mean that the data was more reliable, or that the speech recognition was better, or that the install was far simpler, or whatever. Using best of breed costs money. Designing EZ-AI so that someone has choices to make it what they want costs money. Ultimately, if you choose to strip it down to its bare nuts and bolts, that is your call and you have the ability to do this. If you decide that you want to use $100 a month in APIs, that too is your call. We provide the services that we believe are the best for what people want, and give the option to the end user.

By providing a framework, we have allowed this product to be used in professional settings like hospitals or businesses, and allowed it to be used by hobbyists. This is like your cell phone. You can use it as a phone (which I rarely do anymore) or you can use it as a calculator, web browser, game machine or anything else. EZ-AI also has this type of flexibility depending on which CLASS files you add to it. If someone wrote a CLASS file to make phone calls and you use it, you too can make phone calls...

I hope this is logical. Let me know if you have any questions.

PRO
USA
#163  

Developers are lucky guys; in some workplaces they eat for free too...

I h**e those guys:) !

I've been developing since I was born, so I must be a developer?

#165  

With the Beta test starting soon, and because we are ahead of schedule, I would like to pose the question here that I posed in the Google Group for EZ-AI. What features would you like to see in an AI?

Examples might be:

  1. Home automation
  2. Launch applications upon request
  3. Remind me to do things at specific times
  4. Tell me the weather, sports, news...
  5. Answer questions when asked

There are a ton of things on our list to program in time. Some of the things listed above are already finished, along with a lot more, but we would like your input.

Thank you David

PRO
United Kingdom
#166  

I would like to see choosing a music track or artist/album from your music library, similar to what I did with ARIEL; see time 4:20 in the video.

Tony

#167  

Leave messages for others. When people check in with EZ-AI, it plays the message back.

#168  

Thank you both for your comments. The messages piece is written. We have media playback on the list. Thank you for the video as it helps to see the way that yours works to give us some ideas.

PRO
USA
#169  

@David,

Different Profiles and a secure (i.e. non trivial) mechanism to switch the profile.

#170  

@PTP,

By profiles, do you mean users? Right now the way that people switch users is by one of three methods.

Either the variable is passed from the EZ-AI ARC client (through facial recognition or whatever means you deem necessary inside of ARC), facial recognition is done by passing an image taken by the camera, or voice recognition is used. If the user can't be identified by any of these, EZ-AI will ask who the user is.
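Roughly, that fallback order could look like the sketch below. Every type and method name here is hypothetical and is only meant to illustrate the order just described, not the real EZ-AI code.

// Illustrative sketch of the identification fallback described above; all names are hypothetical.
import java.util.List;
import java.util.Optional;
import java.util.function.Supplier;

class UserResolver {
    /** Try each identification source in order; if none match, ask the person who they are. */
    static String resolveUser(Supplier<Optional<String>> arcVariable,  // passed from the ARC client
                              Supplier<Optional<String>> faceMatch,    // facial recognition on a camera image
                              Supplier<Optional<String>> voiceMatch) { // voice recognition
        for (Supplier<Optional<String>> source : List.of(arcVariable, faceMatch, voiceMatch)) {
            Optional<String> user = source.get();
            if (user.isPresent()) {
                return user.get();
            }
        }
        return askWhoIsThere(); // none of the three methods identified anyone
    }

    static String askWhoIsThere() {
        // In EZ-AI this would be a spoken prompt; here it is just a placeholder.
        return "unknown";
    }
}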

#171  

User-inserted image

There are three ways to run EZ-AI:

  1. Using the CochranRobotics Ecosystem - We have contacted all external services and have paid keys for use. To use these keys, you pay CochranRobotics a monthly fee and the EZ-AI server is registered with our Authentication server. The Authentication server provides the necessary external API keys for the EZ-AI server to run properly, and the EZ-AI server reports its usage back to the Authentication server for billing. You purchase the EZ-AI server and hardware/software from us. You can purchase and use pods for this installation. Also, you can use any or our free and open source clients or program your own for your own products.

  2. Completely Local Installation - You provide your own keys to the EZ-AI server. You maintain and pay for any usage yourself. This is useful for individual developers who want to integrate the EZ-AI server into their own projects. This is NOT recommended for production, though. The keys are stored locally in the EZ-AI server's database, and if a key changes, it needs to be manually changed. You purchase the EZ-AI server and hardware/software from us. You can purchase and use pods for this installation. Also, you can use any of our free and open source clients or program your own for your own products.

  3. Using your own Ecosystem - This means that you are responsible for running and maintaining your own Authentication server, as well as your own external product keys. This is useful if you want to move your EZ-AI enabled product into production. The Authentication server will be free to download, and we are discussing making the Authentication server open source like the clients. You purchase the EZ-AI server hardware and software from us, register the EZ-AI servers with your Authentication server, and distribute with your product.

Quick Points:

  1. Any EZ-AI client will work with any EZ-AI server on any Ecosystem.

  2. After you purchase the EZ-AI hardware/software from us, you are free to change your external API keys and not pay us a monthly fee. The only required external API is free to use for personal projects, and has a paid version for distributing in your own products.

  2.B) This means that, after you purchase the EZ-AI server hardware from us, you can run the hardware without a monthly fee while you develop your product.

  3. Developers will be able to create plugins for the EZ-AI server to extend functionality. As of now, these plugins are written in Java (a minimal illustrative sketch follows this list).

  4. We are not planning on distributing licenses for the EZ-AI server to be run on non-approved hardware. This is to ensure that every EZ-AI server is of the highest quality, is fully set up/configured and is easy to use for everybody.

  5. We are not planning on releasing the EZ-AI server as an open source project.
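Since the server plugins are written in Java, here is a minimal sketch of what one could look like. The interface and method names are hypothetical, just to show the general shape; the actual EZ-AI plugin API may differ.

```java
// Hypothetical sketch only -- not the published EZ-AI plugin API.
// It shows the general shape of a Java plugin the EZ-AI server could
// load to extend its functionality.
public interface EzAiPlugin {

    /** Short name the server could use to register and route requests to this plugin. */
    String getName();

    /** Return true if this plugin believes it can handle the recognized text. */
    boolean canHandle(String recognizedText);

    /** Process the request and return the text the client should speak. */
    String handle(String recognizedText, String userId);
}

// Example plugin that answers a simple request locally, without any cloud call.
class GreetingPlugin implements EzAiPlugin {
    @Override public String getName() { return "greeting"; }
    @Override public boolean canHandle(String text) { return text.toLowerCase().contains("hello"); }
    @Override public String handle(String text, String userId) { return "Hello, " + userId + "!"; }
}
```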

#173  

@CochranRobotics

Hail POD!\m/

#174  

LOL! Great talking to you. I'll get the files to you in the morning. Thanks again for your help!

#175  

We are extending the EZ-AI ARC skill plugin beta test for an additional 30 days. There were a lot of things that happened outside of this project that affected our ability to complete some of the new features that we want to add. The test group has provided some great information and some really good feature requests that we are working on getting added.

We have focused on making the plugin more "bullet proof" and more reliable. We just published a new plugin which should handle the unexpected errors that we were seeing. I want to give this a good test to make sure that it solves any issues.

We are working on the pod clients. We have decided to make the pods open source as they are a client. We will be publishing the hardware needed, STL files and a disk image for you to download and use. This will allow you to extend your AI to multiple devices and allow the features of EZ-AI to be used throughout your home or office. I am building 3 pods this weekend and hope to have all but the disk image posted. Once the pod disk image is complete and tested, we will publish it.

Thanks David

#176  

A couple of updates on things mentioned above... The plugin has been running non-stop for over 24 hours and I have not had any issues yet.

On the EZ-AI pods, as promised, here is a list of the hardware needed to build one EZ-AI pod. You can get all of this from Amazon or many other places...


Raspberry Pi 3

Raspberry PI 5MP Camera Board Module

Addicore Raspberry Pi Heatsink Set for B B+ 2 and 3 

CanaKit Raspberry Pi Micro USB Power Supply / Adapter / Charger

Patriot LX Series 32GB High Speed Micro SDHC - Class 10 UHS-I

3.5mm Right Angle Mono Plug to Bare Wire

Gikfun 2" 8Ohm 5W Full Range Audio Speaker Stereo Woofer Loudspeaker

uxcell® Super Mini PAM8403 2*3W D Class Digital Amplifier Board 2.5-5V USB Power

Kinobo - Mini "AKIRO" USB Microphone for Desktops

2 male-to-male 6 inch jumper wires to power the uxcell super mini amp from the screen

You should get these from Adafruit just so you know that you are getting the right ones:


PiTFT Plus 320x240 2.8" TFT + Capacitive Touchscreen - Assembled - Pi 2, Model A+ / B+ https://www.adafruit.com/products/2423

40 pin GPIO Ribbon cable for Raspberry Pi
https://www.adafruit.com/products/1988

The STL files https://github.com/cochranrobotics/Public/blob/master/Podball%20STL%20Files.zip

Nick will be working on the pod client soon (since we added a screen, there are a few changes needed to the one that was already written).

Thanks David

#177  

We are in the final stages of releasing an update for EZ-AI to the beta testers that will allow you to use dictated text to tell your robot to do things. This would be something like "Robot, move forward 10 feet". This speech is converted to text and returned in variables containing the action (move forward), the unit (feet) and the value (10). From there, you would write a script to catch these variables and use them to perform the desired action.

In this example, someone could have encoders with different click counts per revolution, or no encoders at all. The robots could have different wheel sizes and motor speeds. It could even be a walking robot. It is up to the user to determine what needs to be done to move forward 10 feet, but you would know what the request was without a lot of speech recognition objects.
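As a worked example of the translation that is left to the user, here is a small Java sketch that turns the returned action/unit/value into encoder ticks for one particular wheeled robot. The wheel diameter, encoder resolution and variable names are made-up illustrations, not anything that ships with EZ-AI.

```java
// Hypothetical example: convert the (action, unit, value) handed back by the
// plugin into encoder ticks for one specific robot. Wheel diameter and ticks
// per revolution are example numbers -- every robot will be different.
public class MoveForwardExample {

    static final double WHEEL_DIAMETER_M = 0.10;   // 10 cm wheels (example)
    static final double TICKS_PER_REV    = 360.0;  // encoder resolution (example)

    public static void main(String[] args) {
        String action = "move forward";   // action returned by the plugin
        String unit   = "feet";           // unit returned by the plugin
        double value  = 10.0;             // value returned by the plugin

        double meters = toMeters(value, unit);
        double wheelCircumference = Math.PI * WHEEL_DIAMETER_M;
        long ticks = Math.round(meters / wheelCircumference * TICKS_PER_REV);

        System.out.println(action + " " + value + " " + unit
                + " -> " + meters + " m -> " + ticks + " encoder ticks");
    }

    static double toMeters(double value, String unit) {
        switch (unit.toLowerCase()) {
            case "feet":   return value * 0.3048;
            case "inches": return value * 0.0254;
            case "miles":  return value * 1609.344;
            case "meters": return value;
            default: throw new IllegalArgumentException("Unknown unit: " + unit);
        }
    }
}
```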

This is cool, but it also paves the way for other things that we are adding to EZ-AI. For example, one of the beta users asked for his robot to be able to dance when his favorite team scores a touchdown in American Football. Through IFTTT this is entirely possible. We will be implementing IFTTT so that the robot can control other things, or other events can trigger actions in your robot. This command structure is the first step of that and should be available for testing soon.
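For anyone curious how the IFTTT piece generally works, the Maker channel is driven by a simple webhook: you POST to a per-account URL with an event name and an optional JSON payload, and IFTTT runs whatever applet you tied to that event. The event name and key below are placeholders; this is only a generic Java sketch of firing a Maker event, not something shipped with EZ-AI.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Generic sketch of triggering an IFTTT Maker (webhook) event from Java.
// EVENT and KEY are placeholders you would create in your own IFTTT account.
public class IftttMakerExample {

    private static final String EVENT = "touchdown";        // placeholder event name
    private static final String KEY   = "YOUR_MAKER_KEY";   // placeholder account key

    public static void main(String[] args) throws Exception {
        String url = "https://maker.ifttt.com/trigger/" + EVENT + "/with/key/" + KEY;
        String body = "{\"value1\": \"home team scored\"}";  // optional payload for the applet

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("IFTTT responded with status " + response.statusCode());
    }
}
```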

#178  

Awesome, great work, as usual can't wait to give it a whirl!

#180  

Here is a list of example commands:

come here
"come here", "robot come here"

find [person]
"robot find David"

find [thing]
"robot find my keys"

flash light / flash lights
"robot flash light", "robot flash lights"

go to [person]
"robot go to David"

go to [place]
"robot go to the office"

look [direction] [distance]
"look down ten degrees", "robot look down one radian", "look up ten degrees", "robot look up one radian"

Arm commands
lower left arm [distance]: "lower left arm ten degrees", "robot lower left arm one radian"
raise arms [distance]: "raise arms ten degrees", "robot raise arms pi radians"
raise left arm [distance]: "raise left arm ten degrees", "robot raise left arm pi radians"
raise right arm [distance]: "raise right arm ten degrees", "robot raise right arm pi radians"

Move/Walk commands
move backward(s) [distance]: "move backward(s) ten feet", "robot move backward(s) 2 inches"
walk backward(s) [distance]: "walk backward(s) ten miles", "robot walk backward(s) pi meters"
move forward [distance]: "move forward ten feet", "robot move forward 2 inches"
walk forward [distance]: "walk forward ten miles", "robot walk forward pi meters"

Turn commands
turn left [angle]: "turn left ten degrees", "robot turn left pi radians"
turn right [angle]: "turn right ten degrees", "robot turn right pi radians"

On the move and turn commands, we may need to add a duration option...
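To show how phrases like these could be broken back down into the action, value and unit pieces mentioned earlier, here is a rough Java sketch using a regular expression. The pattern and class names are hypothetical and only cover a slice of the commands above; spelled-out numbers like "ten" would also need a word-to-number step.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical illustration: parse a dictated movement phrase into
// action / value / unit, roughly like the variables EZ-AI hands back.
public class CommandParser {

    // Matches phrases such as "robot move forward 10 feet" or "turn left 90 degrees".
    private static final Pattern MOVE = Pattern.compile(
            "(?:robot\\s+)?(move|walk|turn|look)\\s+(forward|backwards?|left|right|up|down)\\s+([\\d.]+|pi)\\s+(feet|inches|meters|miles|degrees|radians?)",
            Pattern.CASE_INSENSITIVE);

    public static void main(String[] args) {
        Matcher m = MOVE.matcher("robot move forward 10 feet");
        if (m.matches()) {
            String action = m.group(1) + " " + m.group(2);  // e.g. "move forward"
            String value  = m.group(3);                      // e.g. "10"
            String unit   = m.group(4);                      // e.g. "feet"
            System.out.println(action + " | " + value + " | " + unit);
        }
    }
}
```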

#181  

I'm jealous. Your SR is so awesome and works so well. :)

#182  

I can say that with the beta test users, so far I have had 0 complaints about the SR piece. This was one of my major concerns going in. This is completely untrained dictation and works really well.

There have been a couple of issues where the settings in the plugin and the mic needed to be adjusted, but that is all pretty self-explanatory. I have used this with everything from literally $2.50 mics up to about $75.00 mics and, with slight adjustments, gotten the SR to work great.

Thanks Dave. It won't be much longer before we will be selling EZ-AI.

#183  

The SR goes back to the many requests and efforts around "Can we use Nuance\Dragon\DNS within ARC?". This allows that to happen, and a lot more. I promised that we would work on it :) There is a cost associated with doing this, but it is a much cheaper entry point per individual than the total license cost. It is slower and does require an internet connection, but it is definitely an option. If nothing else, I believe that EZ-AI is worth the cost for this feature alone, much less all of the other things that you will get with it. Just my thoughts...

PRO
USA
#184  

This is so amazing! I have been using Lucy, a free online AI API, but this is over the top. Are you taking preorders yet? I'm in.

#185  

We will be taking orders by the end of the summer. The hope is that we will go live Aug 1, but you know how hopes go. There are a lot of things in the works, so I probably won't know until the middle of July whether we will hit the Aug 1 deadline or not.

#187  

The ARC EZ-AI plugin test is over. We are breaking things now and will have a new version that will include the ability to add plugins to the server, IFTTT integration through the maker channel, Toodledo integration and integration with OpenHAB for home automation. These will be available in the production release.

Along with this, there is a client for the pods that is pretty well wrapped up, which extends the AI throughout the dwelling.

We also have an extensive list of features that we will be adding going forward. There are now 3 programmers working on EZ-AI along with a QC person who also specializes in Home Automation. Then there is me... the face and voice of EZ-AI. I don't get to play with the cool stuff anymore :)

We have a few meetings this month which will determine when the release is. There are many interesting possibilities that I can't speak to now.

#188  

Plugins are complete, along with the Java client. This allows the cost to be reduced as shown in the video. Nuance is a costly service to use, and you are now able to choose whether to use the Nuance service or not. Basically, this allows you to have a monthly cost of $22 instead of $30 with the current pricing model. We are working on adding other services that will reduce the cost to use EZ-AI, but still allow you to add the services that you want to use. If you want to use the more costly service, you have that option. Because ARC has its own TTS engine, the voice isn't a concern. The only negative impact of not using the Nuance service for STT (speech to text) is that you get slightly less impressive results.

Anyway, here is a rather lengthy video. The end of the video discusses some things we are working on currently before releasing EZ-AI.

PRO
USA
#189  

Awesome demo David! Can't wait to have all my bots using EZ-AI. The list of ways this can be implemented is virtually endless. All those movies where a person walks into the house and asks the house ("... Any messages?"). This brings sci-fi to reality. "Gort, Clatu, Barada, Nictoe"

#190  

Thank you for the kind words Richard. I want robots to tie into other areas of life than just toys or education. I see there being so many possibilities that it kind of keeps me up at night. We have identified what will be in version 1 and are working toward completing that as soon as possible. There are things like documentation and making things pretty that still need to be done, but we are focusing on finishing up the core pieces. From there we will put a lot of effort into decreasing the time it takes for responses to return and then start adding more capabilities. We will also open things up for others to add more capabilities and share them if they want to do so.

We will be reviewing the code from others before making the plugins that they submit public. This is for a few reasons...

  1. To make sure that everything will work right.
  2. To make sure there isn't any malicious code, and
  3. To make sure that the standards that are needed are in place for the plugin to work.
I have one guy who is already doing this for us, and he will do this type of review for submitted plugins as well. I don't know of any other way to make sure that submitted plugins will work correctly other than doing this. Once verified good, the developer can make the plugin public or keep it private.

#191  

David, A question about latency: will response times (from when user voice input ends to when reply begins) change depending on whether the installation is cloud, local install connected to cloud ecosystem or local install/local ecosystem?

#192  

The information is retrieved from the cloud, and depending on what plugins you choose to use, there could be multiple actions accessing the cloud. If none of the cloud services you choose to use have the information, a local chat bot serves as the catchall. So, if you choose not to use any cloud-based services, the local chat bot would reply quickly.

#193  

Just so everyone knows, we are still working on EZ-AI. We have published a presi showing what the release will have in it.

We spent some time at a conference last weekend demonstrating our AI to get feedback from people, and also promoting our new school. The crowd wasn't as large as the promoters had promised but we still got plenty of feedback along with some good promotion.

School link

EZ-AI presi

Rafiki was also shown off a bit. We didn't have him doing a lot because I wanted to focus on the school and EZ-AI information gathering. I was able to offer some assistance to the R2 Builders group and a few other groups at the show who want to make their robot props into actual robots. Anyway, enjoy the presi.

#194  

Hi David,

I was happy to hear you made it to the conference.

I enjoyed the presentation. It really showed the structure and features EZ-AI will offer. Will the first full release of EZ-AI be available soon? I assume the "POD" will be a device offered also. Any projected release date window?

Ron

#195  

Hey Dave....

Typo on the school link. In the last paragraph you have "...continued support your school." Should be "...continued support for your school."

Alan

#196  

Thank you!

Ron, I hope so... I have had to step away from the EZ-AI development and let Nick take it over so I can get everything lined out with the school. I plan on going to Dallas on Thursday for work but hope to find some time to spend with him there catching up on where we are at... I will let you know.

#197  

Do you have any updates on the Rafiki robot itself? Will it be available soon?

#198  

I had a few family emergencies that have prevented me from working on anything robot related for a bit. I plan on getting back to this after Thanksgiving.

#199  

The gist of the post is that there are a lot of things that have changed, personally and with technology, over the past 2 years that have caused me to stop this project and focus my attention on another area. Here is a list of these things.

Personal matters first

2 years ago, my daughter was injured during a cheerleading performance. In and of itself, this wouldn't have been a major thing to recover from, but it led us down a rabbit hole of situations that I will describe. My daughter tore her long thoracic nerve in her right shoulder. This caused her scapula to wing out and her shoulder to drop. There are literally 2 doctors in the US that have a clue about this injury, but we didn't discover who they are until after two years of therapy and pain for my daughter. This injury caused us to have to home-school our daughter, which is another topic, but the pain meant she was not able to sit in a classroom for more than about 15 minutes without being in severe pain. While doing EMG nerve tests early on to discover the extent of the injury, it was discovered that she has a rare disease called Myotonia Congenita (Thomsen disease), which is a neurological condition that causes her muscles to lock up after tightening. This is a very rare disease (one other person in Oklahoma City has been diagnosed with it). This sent us down another trail of genetic testing and a lot of research to see how this will affect her life. Because of this she will not be able to have children, is greatly affected by different types of medications, and is affected in other ways as well. We found a doctor in Houston, TX who performs surgery to solve the issue with the long thoracic nerve, and he performed the surgery last weekend, which fixed the pain that she was having. Needless to say, all of these medical situations were expensive and time-consuming, so my focus has been taken away from the EZ-AI project.

My grandson was diagnosed with autism. We have been helping his parents find resources to learn how to handle his outbursts and what causes them. I am one of the few people that he reacts to on any sort of human level. This has consumed a lot of our financial resources and my time. I expect that it will continue to do so, but we are doing what we can to try to assist his parents and siblings.

My wife has had many medical conditions over the past few years, and had a 3-level neck fusion earlier this year. She is recovering, but it hasn't been easy for her, and she is not able to help as much with the grandchildren or our daughter. This too has consumed a lot of resources, time and money.

All of this has left me trying to recover from these issues. I have been trying to come up with something that my son (my grandson's father) can do that he will excel at and that will also allow him to be home more. We have decided to start a robotics school directed toward home-schooled students. There is a large market for this and really nothing exists that fits quite well enough, so we have developed a curriculum and a way to work with the students over the internet. At first we were going to focus on a brick-and-mortar school, but we have decided that with technology being what it is now, we can do this virtually instead, which allows us to reach many more students worldwide and also reduces costs drastically. This is where my focus has been for my robot time over the past 3 months or so.

EZ-AI was my other son's project (the programmer son), and we had a short meeting last weekend about the issues that we see with this product. EZ-AI is a good idea, but we lack the funding to make it a really cost-effective product. The reasons that I say that are as follows.

  1. Many other products have become available over the last 2 years which make a lot of the features that we were including in EZ-AI publicly available. These include Cortana, OK Google and Siri from the desktop, and Google Home and Amazon Echo for hardware solutions. Every one of these has one thing in common that we can't do. You pay a one-time charge for each of these (either a computer OS or a hardware device) and then you can use them as much as you want without being charged by these companies for their use. This kills EZ-AI because we will never be able to get the product anywhere near this price point. We have to pay for services, and as such have to pass this along to the users of our services.

  2. There are some laws that were passed (one in the US and one in England) that make me not want to be in the world of cloud services. We were not going to store what someone does on these devices at all on our authentication server. These laws require online service providers to house the activity of their users for a minimum of one year. I don't like this in the slightest, as it would require me to do so and then possibly have to turn over a user's activity to the authorities. This prevents me from pursuing this type of business.

  3. There are laws against recording youth without their parents' consent. This means that if our platform were to be used by anyone under 18 years of age without their parents knowing about it, and we identified the person using facial recognition (which requires recording), then we could be liable. I really don't like this either; it might never happen, but it could open me up to litigation that I wouldn't be able to afford. I believe that this is why the devices that are currently out don't contain cameras and don't do facial recognition. That was the biggest thing that made us different from these other commercially available products, which kills our advantage.

  4. API.AI got bought out by Google. I am not a fan of Google. I like my privacy a bit too much, and after learning more about the harvesting of personal data that is done by Google, I have quit using their services. Their purchase of API.AI also leads back to point 2. If anyone is going to be requested to provide information to a government source, it is going to be Google, and the use of services that they provide will then force those who base products off of their services to, by default, also have to provide this information.

  5. The market for this type of product has become difficult because of the huge players in this market now. Apple, Google, Microsoft, Amazon and IBM would be my main competition, along with the main services that I would use. This becomes a losing fight quickly, simply because there is no way that I can compete against all of these players. Add to this that almost all of these companies are now cooperating with each other to further these types of functions, and I seriously can no longer compete.

It would be one thing for me to put together something that I can use and handle the costs associated with my own use. It becomes something quite different to make a product used by others. Developers normally can establish relationships that allow the APIs that I used to be used without cost. Once you publish these for others to use, there are costs. The DIY community tries to keep costs as low as possible, which I totally understand and do the same thing. There are not a lot of people willing to pay monthly for an Alexa-type device that can be put into a robot. The cost would be about $25.00 a month, when you can now buy an Amazon Echo Dot for $40.00 and have unlimited use of it (if you have a Prime account, which carries with it other advantages). I don't see the market being open like it was even 6 months ago.

Because of all of these reasons, I have turned my attention toward a virtual school that will initially allow anyone to enjoy live broadcasts teaching EZ-Robot-based topics. This will allow people to participate in the live shows via IRC, Mumble, Skype or phone. I have worked with Noah from altaspeed to get the streaming to the internet set up in a way that will allow people to enjoy the stream from Twitch, YouTube Live, RTMP and others. I have two entertaining people to serve as hosts for the class and have the streaming figured out. I do have some equipment to purchase so that I have redundancy and very high quality sound for the videos. Really, sound is the most difficult part of the entire setup. I should have the studio set up by February, and we will start producing some test shows at that time. We will be housing our content for 1 year on a cloud server at digitalocean.com. We will keep archives locally also. I am currently working on getting a Kodi channel up and running and testing other streaming services like ScaleEngine and Ustream to see what they can offer that I can't do myself.

Students who buy the classes would be able to participate in the classroom conversations and also get access to previously recorded classes. We will also start a show which will delve more into non-EZ-Robot-specific robotics topics. There would be news segments, product reviews, answers to questions and other things. We would have guest speakers on to discuss topics as we found those who would be willing to participate. There are a few other ideas floating around in my head, but this is a good enough start. From there, we would do other classes on other topics that are used in robotics, like 3D design and manufacturing, programming and artificial intelligence.

We plan on doing shows on what we have done with EZ-AI and how it works, which will allow others to do the same thing that we did. It will probably be about 6 months before we start broadcasting. We will be setting up a Patreon page for those who want to assist in getting this off the ground.

PRO
USA
#200  

I had a feeling, when I saw the commercials for Google and Amazon's products and did not see anything from you by Christmas, that things were not going well. I was using another free service whose link just went dead, I assume for much the same reasons. I was unaware of the retention laws you spoke of. That is scary. I hope and pray everything turns out well with your family. Out of my 10 grandchildren, I have 2 that are autistic, so I understand the challenges. Will talk more offline.

RichardZ

PRO
USA
#202  

David,

I hope all the best for your future projects and especially for your real life!

Quote:

There are laws against recording youth without their parents consent. This means that if our platform were to be used by anyone under 18 years of age without their parents knowing about it ...

is an interesting topic ...

https://www.indiegogo.com/projects/jibo-the-world-s-first-social-robot-for-the-home

Quote:

What data does JIBO store? JIBO stores certain information about you and backs it up to the cloud. It is encrypted via SSL when being uploaded or downloaded from the cloud. While stored in the cloud it is encrypted via 256-bit AES. The information stored may include your name, information required to connect to your WiFi network, various preferences, and data that is entered or acquired through one of the JIBO applications. Such data includes photos, videos, lists.

The robot will interact with kids, family and friends of Jibo's owners, so the question is how they handle that law.

If my kid goes to a friend's house and their robot (e.g. Jibo) records videos or photos and uploads them to the cloud, and then their Asia support center guru downloads them to his laptop for work purposes and the laptop ends up on the black market... that is a serious issue. But I think most people gave up their privacy when they started using FB, G+, Twitter, Snapchat, Instagram, etc.

Another example Nest Camera https://nest.com/support/article/How-does-Nest-Cam-store-my-recorded-video

Quote:

Nest Cam doesn’t use memory cards to store your video on the camera, it uploads your video continuously to the cloud if you’ve subscribed to the Nest Aware service. This allows smooth, uninterrupted video in case there are network connection issues. Nest Aware subscription service provides a simple and secure way to automatically store video up to 30 days in the cloud.

I've been in a friend's house several times; he uses Nest cameras to monitor the house, and one of them is in the kids' playroom. I didn't know until he grabbed a few screenshots and sent the pictures to me (of my kids and his kids), so the question is how Nest handles that, especially when you can hide cameras for security protection...

Do you think I can sue Nest? :) I'm joking... but it is a grey area.

#203  

Yea, it's a grey area. What can you record in your own home? What is the software manufacturer liable for? If it was bought by someone to monitor their house, there may not be issues, but storing the information in the cloud then becomes a different issue.

Really, I only see this getting worse, because people are willing to forfeit their right to privacy and governments are willing to take more of these rights away. The law in the US passed without a vote of the House or Senate; it passed by them ignoring it. The UK actually voted on it and it passed. In any event, with more and more going to the cloud, I really think that these types of laws are going to do one of two things. One is that they will prevent people like me from offering anything that is stored in the cloud, which kills production. The second is that they will slow down people's acceptance of the cloud. Both might not be bad. IDK. I just don't have the energy to investigate what it is going to take to keep me from being liable, so it's not worth it to me to take a chance. Others can get beat up by this until it is cleared up through litigation.

If you are a huge company that can afford the litigation and outlast those who are suing, great. Many people can outlast me.

PRO
USA
#204  

In other words, if you are a small company the law will be effective; if you are a big company, there is no law.

I don't think people are aware of these issues when they buy/own the technology, and I believe 99.9% don't care.

Check the Arkansas case: http://www.cnn.com/2016/12/28/tech/amazon-echo-alexa-bentonville-arkansas-murder-case-trnd/

It is only a question of time... in the end they will surrender the information. Where is the justification borderline for crossing privacy rights?

MSFT had plans for an Xbox console with Kinect built in, always on, with an internet connection required. Can you imagine a nice sensor with an infrared camera, microphone array, skeleton tracking, etc. in your house being accessible (under legal pressure)?

Another point: how does the law apply to a foreign company, e.g. EZ-Robot in Canada or Buddy Robot in France?

#205  

Hey David, been a long time; I tried to get a hold of you several times. Sorry to hear of your troubles with family. I know the feeling, and I know you know what I'm talking about. Anyways, some things have changed on my end and I would love to chat, so just throwing it out there: I'm available to chat whenever. Reading your post about your wife hit home. On November 17th I had an artificial disc put in at C5-C6, on top of all the other issues. Anyways, this is not the place, so I hope to talk to you soon.

Chris

#206  

@ptp, I think if the case happened after Jan 1, 2017 it probably wouldn't even be fought by Amazon. If the case went up the chain of courts, it would probably not go Amazon's way now.

I have started looking at the packets that the Echo Dot is sending to Amazon. So far I see nothing coming from the IP that the Echo is on unless it is activated. I might leave this sniffer on for a while to see if the IP sends anything in the middle of the night. I know that this will be a subject on some podcasts that I watch next year. I think that more and more people are getting their hands on these devices (especially with the Dot selling for $40.00), and people will be doing a more thorough examination of what it is doing.

@Kamaroman68, send me a text and I would be happy to talk. I am off work this week. I saw you texted a while ago. I forgot to get back to you. Sorry man. Yes, I definitely know that you understand the road I have traveled. Will talk soon.

#207  

Hi Dave,

Regarding the Echo, I assume this will not allow your EZ-AI to continue as planned. Do you think the Echo will become an issue, as is thought? If not, do you think an Echo and an EZ-B will be able to be interconnected?

Ron

PS, email sent

#208  

I saw the email. I will reply to it shortly, but I wanted to share my thoughts here on the Echo and how it could be hacked to work with an EZ-Robot. I haven't tried this yet, but I think it would work...

First, a little information about the general consensus on the Echo vs the Google Home. The Google Home will be better at general information that isn't "real world information". For example, the question "What is the Millennium Falcon?" would be better answered by Google Home right now. Questions like "Who is Harrison Ford?" would return similar results. Questions like "What is the weather?" would have similar results. Tasks like reading Gmail or setting up lists and such are better on the Echo right now, simply because it is older and has had more development done on it. IFTTT allows you to set up a lot of things (like "when X happens, do Y") between different systems, and the Echo has more built for it in IFTTT for the same reason. Buying things would be better through the Echo right now, and probably forever if you purchase things through Amazon.

Again, I haven't tried this yet... The Echo has a switch on top of it that allows you to bypass the wake-up words. Currently the wake-up words are "Echo", "Amazon" and "Alexa". There isn't a way to change these, but by triggering the switch, you are able to speak and have the Echo hear what you are asking. This could allow the EZ-B to be attached to the Echo (after some hacking) so that the EZ-B starts the listening and the keywords are handled through the EZ-B instead of through the Echo.

With that said, the voice coming from the Echo will be the Amazon Echo voice and will not exactly match your robot's other statements. Some may see this as problematic. One of the advantages of EZ-AI is that the voices would match, because everything would have been passed back to ARC to speak.

Both the Echo and Google Home go to a single source for their information. The main complaint about EZ-AI was that it was slow. I have to describe the paths that these devices take to finally return the information for you to see why EZ-AI was slower than the Echo, Siri, Cortana or Google Home.

EZ-AI: The recording of the question happened in the EZ-AI plugin. The recording would then be sent to the EZ-AI server. The EZ-AI server would:

  1. Start a thread that would send a message to the EZ-AI Authentication server.
  2. The EZ-AI Authentication server would validate that this was a valid user and check which services were paid for by this user.
  3. The EZ-AI Authentication server would send a response back to the EZ-AI server saying it was okay to process the request.

While 1, 2 and 3 were being executed, a separate thread would send the request off to API.AI to see if it could convert the speech to text and then process the request (this was successful about 80% of the time). If the request could be processed, API.AI would classify the text, run through its logic, and return the resulting text to the EZ-AI server. If the user was valid according to the Authentication server checks from the other thread, the text would be returned to the plugin, which would then take this text and place it into a variable to be spoken.

If API.AI couldn't process the request, it would return a failed attempt back to the EZ-AI server. If the Authentication server checks showed a valid user who had paid for Nuance services, the recorded audio would be sent to Nuance, which would then perform the STT (speech to text) conversion (this had a 99.97% success rate in the beta). This text would be returned to the EZ-AI server, which would then send it to API.AI. If API.AI determined that this was a question that it didn't have the answer to, it would return a failure to the EZ-AI server. The EZ-AI server would then check whether the user had access to Wolfram|Alpha (from the checks it did earlier) and, if so, submit the text to Wolfram|Alpha. This happened for about 25% of the requests in the beta. The Wolfram|Alpha engine would run, gather a lot of information, and return it to the EZ-AI server.

The EZ-AI server grabbed the spoken text data and passed it back to the EZ-AI client.

As you can see, there was a lot of hopping around due to trying to provide the most accurate results possible. Sometimes the results (if a request went through the entire chain of events) could take up to 20 seconds to return. This was due to transmission times and the massive amount of data that Wolfram|Alpha provided; it could take 15 seconds for Wolfram to retrieve the information. This feels like a very long time, but it returned accurate information. It could answer things like "What is Myotonia Congenita?", which is amazing, but very few people would ask this type of question. It does make it somewhat useful for medical professionals, but what is the market?
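To make the hops easier to follow, here is a simplified Java sketch of that chain: the auth check in one thread, the API.AI attempt in parallel, then Nuance STT and Wolfram|Alpha as fallbacks. Every class and method name here is invented purely for illustration; this is not the actual EZ-AI server code.

```java
import java.util.concurrent.CompletableFuture;

// Simplified, hypothetical sketch of the chain described above: the auth check
// runs in one thread while API.AI tries the request in another, with Nuance STT
// and Wolfram|Alpha as fallbacks. All names are invented for illustration.
public class RequestPipeline {

    private final AuthServer authServer;
    private final ApiAi apiAi;
    private final Nuance nuance;
    private final Wolfram wolfram;

    RequestPipeline(AuthServer a, ApiAi p, Nuance n, Wolfram w) {
        this.authServer = a; this.apiAi = p; this.nuance = n; this.wolfram = w;
    }

    String process(byte[] recordedAudio, String userId) throws Exception {
        // Thread 1: validate the user and look up which paid services they have.
        CompletableFuture<Entitlements> auth =
                CompletableFuture.supplyAsync(() -> authServer.validate(userId));

        // Thread 2: let API.AI try to transcribe and answer directly (~80% of the time).
        CompletableFuture<String> direct =
                CompletableFuture.supplyAsync(() -> apiAi.tryAnswer(recordedAudio));

        Entitlements user = auth.get();      // request only proceeds for a valid user
        String answer = direct.get();
        if (answer != null) {
            return answer;                   // handed back to the plugin to be spoken
        }

        // Fallback 1: Nuance speech-to-text, then resubmit the text to API.AI.
        if (user.hasNuance()) {
            String text = nuance.speechToText(recordedAudio);
            answer = apiAi.tryAnswerText(text);
            if (answer != null) {
                return answer;
            }
            // Fallback 2: Wolfram|Alpha for the hard questions (~25% of beta requests).
            if (user.hasWolframAlpha()) {
                return wolfram.query(text);
            }
        }
        return "Sorry, I could not find an answer.";
    }

    // Placeholder collaborators so the sketch stays self-contained.
    interface Entitlements { boolean hasNuance(); boolean hasWolframAlpha(); }
    interface AuthServer { Entitlements validate(String userId); }
    interface ApiAi { String tryAnswer(byte[] audio); String tryAnswerText(String text); }
    interface Nuance { String speechToText(byte[] audio); }
    interface Wolfram { String query(String text); }
}
```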

A question to the Echo of "how far is the Earth from the Moon?" sent or received 214 packets to and from the same IP address on different ports and took ~10 seconds to complete from the first packet to the last. The Echo doesn't wait until you are finished speaking before it starts sending packets to its servers for processing. The question took 2 seconds to complete, and about 8 seconds for me to ask the question and for it to finish processing the request. This is because it had already figured out and classified most of the text before the statement was completed. I had no way to do this with the technologies that we were using. The downside to this is that you can't ask things like "What is Invokana?", making this really more of a digital assistant or an Amazon sales point in your home than anything.

So, for speed, the Echo is better than anything I could ever develop, simply because it goes to one location and can do so prior to the question being completed. It provides the number one requested thing from our testing and from the conversations that I had with various users, which was digital assistant features. It covers about 80% of what we could do from a knowledge engine perspective, and it has a huge community of developers working to improve it daily. The only thing left is to get the robot to trigger it, which could be done by hacking the Echo to allow the EZ-B to electronically or mechanically activate the switch on the top of the Echo. The only things you are missing are really accurate data on more difficult subjects, a consistent voice, controlling the robot by voice (i.e. move forward 25cm) and data being returned to the ARC application itself.

I will keep working on EZ-AI in my spare time, but there just isn't a market for this product outside of the robot DIY community. The robot DIY community isn't large enough to support the funding required to make this cost-effective, so I will just keep working on it for myself and see where we are later on.

#209  

I did a quick review of the Echo Dot. It does seem to be hackable to allow its use in a robot. The price makes it much more realistic to be torn apart and made installable. There are videos available which show the disassembly and its internals.

#210  

@Dave_C,

I'm so sorry both your family and your personal challenges are causing you all such trouble. Many here have been on that same road in our own lives, so we totally empathize with you. Not that I am comparing, but I also have a grandson with autism and have also had to stop more than one business venture because of economic issues or competition. It's hard to walk down a different path after you've put so much into something or someone. My personal wishes and strength go out to you and your family to move through these times. I hope the next year sees better times for your family and your ventures. It sounds like you have a good plan and lots of the right kind of help to get you all there. Peace. :)