This thread contains the most recent information about EZ-AI. For information about EZ-AI, visit http://www.ez-ai.net and see the plugin page at https://www.ez-robot.com/EZ-Builder/Plugins/view/123
EZ-AI version 1.0.0.6 has been posted for download. I am working on the patch for upgrading from 1.0.0.5 to 1.0.0.6. This version has the following updates. I am updating the first post.
EZ-AI Product Enhancements

General
- If ARC isn't started, EZ-AI warns the user instead of erroring out.
- Handles all errors more gracefully.
- Made all screens resizable.
- Made fields adjust in size based on the size of the window.

RSS Feed
- EZ-AI has a new table that allows the user to put in phonetically correct pronunciations of words.
- Added a screen that allows the user to enter this data.

Outdoor Navigation Mode
- Called by using the /NAV switch.
- A navigation module that allows a user to get directions and then navigate to the location based on Google API walking directions.

Outdoor Mode
- The map gets updated in a different thread, making the screen more responsive.

Skype
- I am using VSee, as it is free and works great. Skype works also. If you set up VSee on your computers, you can set the robot to auto-answer when a specific user calls it. This is how I am using it currently.

Cookbook
- The number of recipes returned is now a user-definable variable in the EZ-AI.exe.config file.
The 1.0.0.6 patch has been posted. It is downloadable from cochranrobotics.com.
To install the patch, unzip the Install and Patch folders of the download to your C:\EZ-AI folder. Navigate to C:\EZ-AI\Patch and run the SetupDatabase.Bat file. The new application files will be copied over the old ones, and the database will be updated with the new objects needed for this version of EZ-AI. You will also see an EZ-AI1006.exe.config file that contains any new keys that need to be added to the EZ-AI.exe.config file for your installation.
New keys added with 1.0.0.6 are:

<add key="LatitudeShift" value="-.00018"/>
<add key="LongitudeShift" value=".000334"/>
<add key="DefaultMapZoom" value="17"/>
<add key="RecipesReturned" value="20"/>
All of these can be adjusted for your install. RecipesReturned determines how many recipes are returned when using the Cookbook module. DefaultMapZoom ranges from 1 to 20, with 1 being the widest zoom and 20 the narrowest. LatitudeShift and LongitudeShift should be adjusted depending on where you are in the world; if you live at the intersection of the prime meridian and the equator, both of these values will be 0. The values in the file are for Oklahoma City, Oklahoma in the USA. Use the /O option and say Current Address (if you are at home) to see what address these values have placed you at, then adjust them to fit your needs.
Also, there is a value in the config that you should set to 0, or your application will break. This is something I am working on.
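For reference, here is a sketch of how the new keys sit inside the appSettings section of EZ-AI.exe.config. The configuration/appSettings wrapper is just the standard .NET config layout and the comments are mine; your actual file will contain other keys besides these.

<configuration>
  <appSettings>
    <!-- GPS offsets; tune these for your location -->
    <add key="LatitudeShift" value="-.00018"/>
    <add key="LongitudeShift" value=".000334"/>
    <!-- 1 = widest zoom, 20 = narrowest -->
    <add key="DefaultMapZoom" value="17"/>
    <!-- how many recipes the Cookbook module returns -->
    <add key="RecipesReturned" value="20"/>
  </appSettings>
</configuration>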
Are you using the EZ-robot camera with RoboRealm or an external camera? Fantastic work BTW.
I have one robot that is using the v4 camera. Another is using USB cameras. Both work, but the v4 won't read barcodes; it isn't high-def enough. The v4 camera works great for facial recognition.
This video shows how to setup RoboRealm to work with the V4 camera.
I am thinking about checking out RoboRealm and seeing what other features it provides. I know it has an OCR feature that handles small text and would allow the robot to read things like signs. RoboRealm could also be used to flatten round images, like Coke cans, which would allow it to recognize things more easily. It has a pretty cool indoor navigation feature that recognizes objects and knows where it is based on those objects. I am sure there are other things RoboRealm could be used for that I just haven't discovered yet.
I could do this, or continue work on my associative learning module. It would take questions that were asked of the robot through the Wikipedia engine, parse the results for nouns, and then search for those words across multiple internet sources; the new results would in turn be parsed for nouns, and the process would continue. It would run these searches when the robot isn't actively being used. I am not sure yet how this information would be implemented into EZ-AI, but getting this part completed with a decent parser would take some time. A rough sketch of the idea is below.
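To make the idea concrete, here is a minimal Python sketch of that noun-chaining loop, assuming nltk for part-of-speech tagging. fetch_answer is a hypothetical stand-in for whatever lookup (Wikipedia or another source) actually answers the query; none of these names come from EZ-AI itself.

import nltk  # assumes nltk data: nltk.download('punkt'), nltk.download('averaged_perceptron_tagger')
from collections import deque

def extract_nouns(text):
    # Keep any token tagged as a noun (NN, NNS, NNP, NNPS).
    tokens = nltk.word_tokenize(text)
    return [word for word, tag in nltk.pos_tag(tokens) if tag.startswith("NN")]

def associative_crawl(seed_term, fetch_answer, max_terms=50):
    # Breadth-first expansion: parse nouns out of each answer, then queue
    # those nouns for their own lookups while the robot is idle.
    seen, pending = set(), deque([seed_term])
    while pending and len(seen) < max_terms:
        term = pending.popleft()
        if term in seen:
            continue
        seen.add(term)
        for noun in extract_nouns(fetch_answer(term)):
            if noun not in seen:
                pending.append(noun)
    return seen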
What do you think? Should I put adding new features on hold to make a learning robot, or are there some other features that you would like added?
I think I am going to go down the path of measuring distance using one or two cameras. I will attempt two different methods of getting this distance. The first is the stereo vision approach, where I will take images from both cameras and compare where an object appears in each image to figure out how far away it is. The second is to use a couple of features of RoboRealm to find where the base of an object meets the floor, and then use the angle the camera is looking at, along with some trigonometry, to figure out how far away the object is.
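For what it's worth, the math behind both approaches reduces to two small formulas. This is only a sketch under stated assumptions (a calibrated focal length in pixels and camera baseline for the stereo case, and a known camera height plus downward tilt for the ground-plane case); the function names are mine, not anything from EZ-AI or RoboRealm.

import math

def depth_from_stereo(focal_px, baseline_m, disparity_px):
    # Classic stereo relation: depth = focal length * baseline / disparity,
    # where disparity is the pixel shift of the object between the two images.
    return focal_px * baseline_m / disparity_px

def depth_from_tilt(camera_height_m, tilt_deg):
    # Ground-plane method: if the camera is tilted down by tilt_deg and the
    # object's base sits on the floor, the horizontal distance follows from
    # right-triangle trigonometry: d = h / tan(tilt).
    return camera_height_m / math.tan(math.radians(tilt_deg))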
This goes back to an old thread where someone asked if a camera could be used to measure distance.
From there, I will explore the RoboRealm API a bit more to see what is possible.
By the way, the outdoor navigation with GPS is in the last version I posted. You would need an onboard compass, because I pass back the heading your robot should take to reach the next waypoint on the route gathered from Google Maps. I haven't taken my robot out and tried it yet, simply because my robot isn't built yet, but the directions passed back to ARC look correct.
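For anyone curious how that kind of heading is typically derived, below is the standard initial-bearing formula between two GPS coordinates, in Python. This is the textbook calculation, not necessarily the exact code EZ-AI runs, and the sample coordinates in the comment are made up.

import math

def initial_bearing(lat1, lon1, lat2, lon2):
    # Initial great-circle bearing from point 1 to point 2,
    # in compass degrees (0 = north, 90 = east).
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

# e.g. initial_bearing(35.4676, -97.5164, 35.4700, -97.5100) gives the
# compass heading the robot should turn to for the next waypoint.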