
I've Started to Work on Mapping Using the XV-11 LIDAR

The boards have been pretty quiet so I thought I would share what I am working on now.

This is for robots using an on-board computer. I have started making a plugin that is designed to use the XV-11 sensor, the wheel radius, and the encoder counts per revolution of the wheels to build a map of the environment. It also uses the compass portion of the 4-in-1 sensor sold on this site.

The map is currently stored in a SQLite3 database, which lives in the plugin directory. I don't know if this will be the final design or not, but it is what I have working right now. There is a map table that is created by the user through the plugin. It contains X coord, Y coord, Tile, and Counter fields, and is used to store the map as it is being updated. The tile will be marked when the LIDAR recognizes something in that location at a certain intensity.
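
In case it is useful to anyone following along, here is a minimal sketch of how a table like that could be created with System.Data.SQLite. The table name, database file name, and column types are my assumptions; the post above only names the fields.

    // Minimal sketch, assuming the System.Data.SQLite NuGet package.
    // Table/column names follow the fields described above (X coord, Y coord,
    // Tile, Counter); the file name and types are assumptions, not
    // necessarily the plugin's final design.
    using System.Data.SQLite;

    class MapStore
    {
        public static void CreateMapTable(string dbFile = "map.db")
        {
            using (var conn = new SQLiteConnection("Data Source=" + dbFile))
            {
                conn.Open();
                string sql = @"CREATE TABLE IF NOT EXISTS Map (
                                   X       INTEGER NOT NULL,
                                   Y       INTEGER NOT NULL,
                                   Tile    INTEGER NOT NULL DEFAULT 0,
                                   Counter INTEGER NOT NULL DEFAULT 0,
                                   PRIMARY KEY (X, Y))";
                using (var cmd = new SQLiteCommand(sql, conn))
                    cmd.ExecuteNonQuery();
            }
        }
    }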

Quote:

The tile size is determined by the wheel diameter. If you have small wheels, you have small tiles; if you have large wheels, you have large tiles. The current environment that I am testing in has tiles that are about 4.19 inches by 4.19 inches. This is because I have wheels that are 4 inches in diameter, and if you take the wheel diameter * pi / 3, you come up with 4.188790266.... I round this to 2 decimal places. If you had wheels that were 2 inches in diameter, you would have tiles that are 2.09 inches. If you had wheels that were 12 inches in diameter, the tiles would be 12.57 inches. The logic is that the wheels would be much smaller for robots in smaller environments and much larger for robots in larger environments. Larger wheels mean faster-moving robots, and thus the updating of the environment has to account for them. The number of tiles in the map is determined on the configuration screen by setting the size you want your map to be. In the test, the map is 50 feet x 50 feet. Using a robot with 12 inch diameter wheels indoors in a 50x50 foot house could become problematic. These are all subject to change depending on testing.

Well, the information quoted above has changed. I am in the US and as such am more comfortable using inches and feet, so I am making 1-inch tiles for everything. The wheel diameter is still important, but not as important in laying out the grid. I am converting the mm readings from the LIDAR to inches and marking the squares. We will see how this works out and go from there. This, along with everything else, is subject to change as I go through it all.

The map on the screen is loaded from the SQLite3 database initially. As things are seen by the LIDAR, the map table is updated and the display is updated by marking the corresponding tile on the map.

Eventually my goal is to take this logic and use it in SLAM. I plan on starting with some simple SLAM using the RANSAC algorithm, which is best suited to indoor environments because it estimates and creates landmarks based on straight lines. From there I will use the Extended Kalman Filter for data association. This allows the robot to recognize landmarks and then adjust its current position on the map based on those landmarks.
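
To make the RANSAC idea concrete, here is a rough sketch of extracting straight-line landmarks from a set of scan points. This is my own illustration, not the plugin's code; the thresholds, the PointF type, and the slope/intercept line form are all assumptions (a real version would also need to handle near-vertical walls, which this line form cannot represent).

    // Illustrative RANSAC line extraction. Assumes: using System;
    // using System.Collections.Generic; using System.Drawing;
    // Scan points are already converted to (x, y) map coordinates.
    public static List<Tuple<double, double>> ExtractLines(List<PointF> points)
    {
        var lines = new List<Tuple<double, double>>(); // (slope, intercept) pairs
        var remaining = new List<PointF>(points);
        var rand = new Random();
        const int maxTrials = 100;     // placeholder values, tune by testing
        const double inlierDist = 2.0; // max distance (inches) from the candidate line
        const int minInliers = 15;     // points required to accept a landmark

        for (int trial = 0; trial < maxTrials && remaining.Count >= minInliers; trial++)
        {
            // Pick two distinct random points and fit a line y = m*x + c through them.
            PointF a = remaining[rand.Next(remaining.Count)];
            PointF b = remaining[rand.Next(remaining.Count)];
            if (a == b || Math.Abs(b.X - a.X) < 1e-6)
                continue; // same point or a near-vertical pair; skip this trial

            double m = (b.Y - a.Y) / (b.X - a.X);
            double c = a.Y - m * a.X;

            // Count the points that lie close to the candidate line.
            var inliers = remaining.FindAll(p =>
                Math.Abs(m * p.X - p.Y + c) / Math.Sqrt(m * m + 1) < inlierDist);

            if (inliers.Count >= minInliers)
            {
                lines.Add(Tuple.Create(m, c));                 // accept as a line landmark
                remaining.RemoveAll(p => inliers.Contains(p)); // consume its points
            }
        }
        return lines;
    }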

One of the reasons that I want to store this information in a SQLite3 database is that it would allow me to have multiple maps housed in different tables. The configuration screen could be modified to allow the user to specify which environment the robot is in (Office 1, Office 2, home, or Mom's house, for example). These maps would be stored in different tables, and the user would just switch to the map that pertains to the current environment. Another thing these multiple maps could be used for is handling different floors of an office building, one map per floor.

The test map is about 13 MB in size. This isn't too large, but it is only based on a 50x50 foot house and a robot with 4 inch diameter wheels. If you were in a warehouse or large office building with a robot with small wheels, I would imagine the database could get really large. The goal is to get this to work in a smaller environment, and then see what needs to be done to handle larger environments.

Eventually, I plan on incorporating a path finding algorithm. This shouldn't be too hard to do because it is done in video games like crazy. There is plenty of sample code to build from.

Anyway, that is what I am working on currently. I suspect it will take some time before I have something to share. This is a pretty ambitious project and I will post updates as I accomplish different things with it.

I am not sure if I will sell this plugin or make it freely available. This is something that I will decide after I know how it works in multiple environments. If it turns out to be simply amazing, I might sell it. If it just works, I will give it away for free and continue working on a final solution.



#1  

Dude, if you were a girl I'd kiss you.... simply awesome man! :)

#2  

Ha ha ... this is transformational in terms of its potential so "man-hugs" from me!

Have a wonderful New Year everyone.

Cheers

Chris

#3  

Talk about timing. I just got my early-version GetSurreal Teensy flashed with the latest version of the XV Lidar Controller code, which flashes an LED when powered up over a USB connection. Now on to building a Lidar mount on top of a Roomba.

#4  

How are you doing on the schedule and timeline for getting your robot to market? You've been a busy boy! You are incorporating a lot of technology into this robot!

#5  

I decided to use ARC as the interface for Rafiki. I have made a few parts into plugins so far for it and am working on this one. I have only shared one of them so far. The thought is to leverage what is available in ARC and add the controls needed for Rafiki. I want people to be able to customize Rafiki to their liking and be able to add anything that they want to add. EZ-AI will be a plugin for the client and a piece of hardware for the server.

As far as the schedule, this new direction sets me back a bit, but I think it helps out a lot in the long run. Gains will be had by not having to maintain an interface to ARC using the SDK.

Writing plugins takes a bit more time than just writing the code, but it works out in the long run, and I believe in the product (EZ-Builder). I have been impressed with how DJ has done a lot of things. It isn't always easy to understand initially, but once you grasp how it works you realize the genius behind it. It also lets Rafiki use other people's plugins over time. I wouldn't feel right about using other people's plugins if I didn't share mine, so as I complete the others, I will share them too.

The others that I am working on currently are used for ground-height sensors, car bumper sensors, and the Volvo motors that I am using (which don't have onboard controllers), and eventually I will get to the Omron B5T HVC. It just takes time to get things right, and SLAM with path finding is what has my attention at the moment.

I had an issue a while back that blew all of the 5V devices on my prototype. I have discovered more damage as time has gone on, and just found that a Kangaroo and one of my motor encoders also got taken out. I just ordered a replacement encoder and another Kangaroo. A lot of my devices could handle 12 volts, so most weren't damaged, but unfortunately what was damaged was very difficult to get to. This caused me to have to disassemble a lot of the robot that I didn't want to disassemble, but it also helped me to decide on a couple of design changes that will allow for easier access to the parts inside of the robot. Hard lessons to learn, but I just keep pushing forward with it.

Anyway, that is a lot more information than you asked for, but I hope it gives you an idea of all of the things going on. Focusing on SLAM is nice for me. It is a fun project for sure.

#6  

The toughest part of getting this working is the timing. If the timing is off, the marking of the tiles is off. Because of this, the robot will move to a tile, then stop, then get the readings from that tile. This will be done for the map-building process. Once the map is built, I shouldn't need to stop before taking readings from the sensor, as SLAM will kick in at that point and do the location adjustments as needed. I think that this is the best way to start with an accurate map.

Also, the compass will determine when a turn is complete, and will be used to establish the heading and thus the location of what the robot is seeing.

Documenting more for my own benefit, but also sharing so that others can understand what is happening when they use this.

#7  

This is another note to myself. I figure documenting here is as good as anywhere. Here are the functions used to calculate angles and distances and to decide which direction I need to turn the robot.


        // Fragments from the plugin class. Assumes using System; and
        // using System.Drawing; (for Point), plus the ARC scripting API.

        public String GetDirection(double currentangle, double desiredangle)
        {
            // Delta in [0..2PI): 0 = straight ahead, PI = directly behind,
            // (0..PI) = target is to the left, (PI..2PI) = to the right.
            double delta = NormalizeAngle(desiredangle - currentangle);
            const double eps = 1e-6; // doubles rarely compare exactly equal

            if (delta < eps || delta > 2 * Math.PI - eps)
                return "Straight";
            else if (Math.Abs(delta - Math.PI) < eps)
                return "Backwards";
            else if (delta < Math.PI)
                return "Left";
            else
                return "Right";
        }

        private Double NormalizeAngle(Double angle)
        {
            // Wrap any angle into [0..2PI)
            angle = angle % (2 * Math.PI);
            return angle < 0 ? angle + 2 * Math.PI : angle;
        }

        public string GetDirectionPoints(Point a, Point b, Point c)
        {
            // Direction of the current leg a->b compared to the desired leg b->c.
            double theta1 = GetAngle(a, b);
            double theta2 = GetAngle(b, c);
            return GetDirection(theta1, theta2);
        }

        private Double GetAngle(Point p1, Point p2)
        {
            // Atan2 handles all four quadrants (and the vertical p2.X == p1.X
            // case) correctly, unlike Atan(dy / dx) with a manual quadrant fix.
            return NormalizeAngle(Math.Atan2(p2.Y - p1.Y, p2.X - p1.X));
        }

        public static double GetDistanceBetweenPoints(Point p, Point q)
        {
            double a = p.X - q.X;
            double b = p.Y - q.Y;
            return Math.Sqrt(a * a + b * b);
        }

        private void MoveRobot(Point comefrom, Point atnow, Point goingto)
        {
            //add code here to move the robot the requested distance

            string command = GetDirectionPoints(comefrom, atnow, goingto);
            double turnangle = GetAngle(atnow, goingto);
            double targetdegrees = turnangle * 180.0 / Math.PI; // CompassHeading is in degrees
            double distance = GetDistanceBetweenPoints(atnow, goingto);

            switch (command)
            {
                case "Straight":
                    {
                        //distance will end up being divided by something to get inches... idk right now
                        break;
                    }
                case "Backwards":
                    {
                        //distance will end up being divided by something to get inches... idk right now
                        break;
                    }
                case "Right":
                    {
                        do
                        {
                            //turn right here
                            //distance will end up being divided by something to get inches... idk right now
                        } while (Math.Abs(Convert.ToInt32(EZ_Builder.Scripting.VariableManager.GetVariable("CompassHeading")) - targetdegrees) > 1); // keep turning until within 1 degree (0/360 wrap not handled yet)
                        break;
                    }
                case "Left":
                    {
                        do
                        {
                            //turn left here
                            //distance will end up being divided by something to get inches... idk right now
                        } while (Math.Abs(Convert.ToInt32(EZ_Builder.Scripting.VariableManager.GetVariable("CompassHeading")) - targetdegrees) > 1);
                        break;
                    }
            }

            serialPort_GetData();
        }

#8  

David,

Have you utilized the magnetometer (compass) in 'real life' over an extended run yet?

I ask because these compasses are very sensitive to all magnetic sources, not just magnetic north. In addition, the magnetic lines of flux can be disturbed by many outside influences, such as metal structure and/or fasteners near the sensor and the effect of motors or wires carrying large currents.

My experience with these sensors is from several years developing multi-rotor UAVs (drones). In those applications the compass was a critical part of the autonomous flight control. To assure proper operation, the sensors are typically placed away from the motors, power leads, and any metallic objects, often on stalks atop the airframe in nonmetallic cases.

For accurate directional indications there is usually some sort of calibration process. The drone is rotated 360 degrees about each axis. The sensor readings are stored and compared to null out any static disturbances. Then an offset is introduced to compensate for the local magnetic declination. After that the directional data is good as long as you don't run across any buried metallic objects (like rebar) or nearby flux distorting objects like structural metal.
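
To illustrate the calibration process described above, a minimal hard-iron correction could look like the sketch below. This is my own illustration; it only nulls out static offsets (soft-iron distortion and the heading convention of a particular sensor are not handled), and the declination value is location-specific.

    // Minimal hard-iron calibration sketch. Rotate the robot through a full
    // circle while feeding raw magnetometer X/Y readings into AddSample(),
    // then call GetHeading() with live readings.
    public class CompassCalibrator
    {
        double minX = double.MaxValue, maxX = double.MinValue;
        double minY = double.MaxValue, maxY = double.MinValue;
        public double DeclinationDeg = 0.0; // set for your location

        public void AddSample(double rawX, double rawY)
        {
            minX = Math.Min(minX, rawX); maxX = Math.Max(maxX, rawX);
            minY = Math.Min(minY, rawY); maxY = Math.Max(maxY, rawY);
        }

        public double GetHeading(double rawX, double rawY)
        {
            // Subtract the static (hard-iron) offsets found during the rotation.
            double x = rawX - (minX + maxX) / 2.0;
            double y = rawY - (minY + maxY) / 2.0;
            double deg = Math.Atan2(y, x) * 180.0 / Math.PI + DeclinationDeg;
            return (deg + 360.0) % 360.0; // normalize to [0, 360)
        }
    }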

Your application may not need absolute directional accuracy, but it will need static repeatability.

I hope this information is helpful.

#9  

The beauty of SLAM is that it handles a certain degree of error. Right now, the code above is more for the calculations. Good high-resolution encoders should prevent the compass from being needed. It is going to be used more for checking for wheel slippage.

The starting point for the compass when turning in place should have a similar level of error as the stopping point of a turn.

Also, there will be external devices available to validate the location of the robot, using cameras doing object recognition in black-and-white mode to limit distortions based on lighting conditions. These will be stationary devices.

Really, the compass is the smallest and least used component of the system.

#10  

Thanks for the advice though. I do plan on having some sort of initialization process to calibrate the robot.

#11  

More notes. Completed:

  • The user can specify the size of the environment, which will build a blank map.
  • Map units are in inches. One pixel = 1 inch.
  • The user can specify a point on a map as a location that will be used when the path finding is complete to allow the user to say "go to the kitchen" and the robot will know what point on the map represents the kitchen. These labels are visible on the map.
  • Math has been programmed to know the angle that the robot should turn and how far it should travel.
  • LIDAR is updating the map correctly when objects are detected.
  • The user can specify where the robot currently is by double clicking the map. This would be used when a map is initialized the first time or if the robot is moved.
  • Made Landmark class
    • RemoveBadLandmarks
    • UpdateAndAddLineLandmarks
    • UpdateAndAddLandmarksUsingEKFResults
    • UpdateLandmark
    • UpdateLineLandmark
    • ExtractLineLandmarks
    • LeastSquaresLineEstimate
    • DistanceToLine
    • ExtractSpikeLandmarks
    • GetLandmark
    • GetLineLandmark
    • GetLine
    • GetOrigin
    • GetClosestAssociation
    • GetAssociation
    • RemoveDoubles
    • AlignLandmarkData
    • AddToDB
    • GetDBSize
    • Distance
    • Distance (between landmarks)

working on:

  • Add A* type path finding (see the sketch after this list)
  • Functions in place for path finding
  • Use the MapArray which is set when the map is reloaded to set the squares that are either passable or not passable.
  • Make sure the path calculations can handle larger maps.
  • Draw decided route on the map
  • Get new route if the path is blocked.
  • Get route based on destination locations ("go to bedroom" ).
  • Use the Landmark class
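
As a starting point for the A* item above, here is a bare-bones grid A* sketch. It is my own illustration under a couple of assumptions: MapArray has been reduced to a bool[,] where true marks an impassable tile, and movement is 4-way with uniform cost.

    // Bare-bones grid A*. blocked[x, y] == true marks an impassable tile
    // (e.g. derived from MapArray). Returns the path from start to goal as
    // grid points, or null if no route exists. Manhattan-distance heuristic.
    public static List<Point> FindPath(bool[,] blocked, Point start, Point goal)
    {
        int w = blocked.GetLength(0), h = blocked.GetLength(1);
        var g = new Dictionary<Point, int>();        // best known cost so far
        var cameFrom = new Dictionary<Point, Point>(); // backtracking links
        var open = new List<Point>();
        g[start] = 0;
        open.Add(start);
        Func<Point, int> heur = p => Math.Abs(p.X - goal.X) + Math.Abs(p.Y - goal.Y);

        while (open.Count > 0)
        {
            // Pick the open tile with the lowest f = g + h.
            Point cur = open[0];
            foreach (var p in open)
                if (g[p] + heur(p) < g[cur] + heur(cur))
                    cur = p;
            if (cur == goal)
                break;
            open.Remove(cur);

            foreach (var d in new[] { new Point(1, 0), new Point(-1, 0), new Point(0, 1), new Point(0, -1) })
            {
                var next = new Point(cur.X + d.X, cur.Y + d.Y);
                if (next.X < 0 || next.Y < 0 || next.X >= w || next.Y >= h || blocked[next.X, next.Y])
                    continue;
                int cost = g[cur] + 1;
                if (!g.ContainsKey(next) || cost < g[next])
                {
                    g[next] = cost;
                    cameFrom[next] = cur;
                    if (!open.Contains(next))
                        open.Add(next);
                }
            }
        }

        if (start != goal && !cameFrom.ContainsKey(goal))
            return null; // goal unreachable
        var path = new List<Point> { goal };
        for (Point p = goal; p != start; p = cameFrom[p])
            path.Insert(0, cameFrom[p]);
        return path;
    }

The linear scan of the open list is fine for a sketch; for the "larger maps" item above, a priority queue would be the first thing to swap in.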

todo:

  • Adjust the points returned from the LIDAR based on the direction that the robot is looking. (compass or calculation)
  • Put back in the code that will account for robots that do not have a full 360 degree view from the LIDAR.
  • Put in code to move the robot - make configurable via ConfigDictionary values (thinking of using scripts for this, which would allow the user to make a Move Forward x distance, Move Backward x distance, Turn Right x degrees, and Turn Left x degrees script. This should allow the plugin to be used with pretty much any robot, with different motor configurations and controllers. The config screen would have four variables, one for each script name (1 per direction). Also, there would be a variable that would be updated by the script when the movement has completed. I would wait for this variable to change, and then do the next step; see the sketch after this list.)
    • move forward 12 inches
    • move backward 12 inches
    • turn right 1 degree
    • turn left 1 degree
  • Add method to move the robot to a location on the map from some sort of a click event. (move robot button and then single click event I think).
  • Add method to turn the robot to face a specific direction based on user input.
  • Rewrite subsystem controller to work with the Kangaroo.
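
For the script-driven movement item above, the plugin side could wait on the completion variable roughly like this. A sketch only: the variable name is a placeholder I made up, and it assumes VariableManager also exposes a SetVariable call alongside the GetVariable used earlier in this thread.

    // Sketch of waiting for a user movement script to finish. Assumes the
    // user's script sets AutoNavMoveComplete to 1 when the move is done;
    // the variable name is a placeholder, not the plugin's final name.
    private void WaitForMoveComplete()
    {
        EZ_Builder.Scripting.VariableManager.SetVariable("AutoNavMoveComplete", 0);

        // ... start the user's movement script here ...

        while (Convert.ToInt32(EZ_Builder.Scripting.VariableManager.GetVariable("AutoNavMoveComplete")) == 0)
            System.Threading.Thread.Sleep(50); // poll until the script reports done
    }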

Waiting on: Replacement encoder - Received and installed... Replacement Kangaroo - Should receive 1/7/2016

Brain is getting worn out. Taking a break. Also have laundry to do and Christmas stuff to take down.

#12  

With the addition of the LIDAR that I am working with to RoboRealm, and with some of the other features that are already in RoboRealm, I am now torn...

I could continue to write my own SLAM module in either RoboRealm or ARC, or both. There are some things in RoboRealm that could be helpful, like path finding, floor finding for the camera, object recognition, and a lot of video-processing features that could be useful in using a camera feed and the LIDAR together to accomplish SLAM. I just don't know until I dig into it pretty far.

I am trying to decide if I continue just making a SLAM module for ARC as a plugin, or if I stop this and try to use RoboRealm for it. I may end up doing both, IDK at this point. I guess I could also use the RoboRealm API in an ARC skill plugin, but that limits how many people would be able to use the plugin.

#13  

I'll answer in a purely selfish way... RoboRealm charges per computer (they used to allow a license to be used on 2 computers, but not anymore). If you were going to release your plugin for free, or with a multi-computer-for-one-cost model (maybe based on EZ-Robot login ID to prevent abuse), then I would encourage you to continue your efforts, because I have several computers that I want to be able to use depending on where I am and how many robots I currently have running.

On a non-selfish note: if using RoboRealm makes it easier for you to achieve your goals, and allows you to move on to other functions that you have unique capabilities to deliver, then having two different applications which provide the exact same capability seems a bit silly.

Alan

#14  

I am leaning toward just building the ARC skill plugin. It will teach me a lot along the way and I am all for that.

I am going to be focusing on EZ-AI this weekend, and then on this after the plugin for EZ-AI is complete. I do like the idea of having everything work as plugins, and I also like the thought of promoting EZ-Robot. I also like RoboRealm, but I am not nearly as comfortable with VBScript and how RoboRealm works, so I don't know that that is the path I want to go down first.

#15  

Interesting discussions ;)

@OldBotBuilder: Regarding the compass, I agree; I have mixed feelings. I had success with small robots in a single room, but with indoor navigation and/or big robots (more metal, more current) it is difficult to obtain reliable results. Some IMU devices have a proprietary fusion algorithm combining gyro and compass, and the results are more stable; I have a few IMUs to try but no time yet. I use ROS for SLAM and navigation; there is a node which combines different IMUs plus the odometry and provides a corrected/combined output.

@David: Regarding encoders, I agree too; it is very important to have high resolution. I have a 6" wheel robot with 144 CPR encoders, and that is not enough for PID control when driving at slow speed.

What is your encoder resolution? Is it built into the motor?

RoboRealm, do they have SLAM/mapping functionality built in? If you can identify the missing functionality, you can challenge DJ Sures to build it. If EZB is not powerful enough or does not provide the building blocks, it is time to check the EZR roadmap; otherwise you can end up overlapping efforts.

#16  

I'm also running several different PCs using EZB. The RoboRealm license I purchased will not allow me to upgrade to use the new Lidar module unless I shell out another $29.95, and as Alan mentioned, I can only use RoboRealm on a single computer.

#17  

I don't mind paying for value, and I think RoboRealm is a lot of value for the price. I would be happy if it was limited to two machines, one development and one production, like the old license (or like Microsoft used to do with Office: one desktop and one laptop, with only one actively in use at a time. It wasn't enforceable, but it was in the EULA.)

I have been spoiled by Android where the apps are based on the user id, not the device. I have the same apps on two tablets, two phones, and an emulator under Windows all with one purchase, but if someone else wants to use them, they need my Google credentials, which is NOT happening...

Alan

#18  

The encoders I use are about 20K per rev.

ROS SLAM is good, but the goal here is to add SLAM to ARC. DJ allows plugins, and I am working on one that uses specific sensors and allows customization without programming. The plugin allows me to leverage what is already in ARC. Basically, the gist is that the user would build scripts to move their bot a specific distance. They can specify which script to use to move the robot in any direction. This allows the plugin to work with a wide range of robots.

I'll write more when I'm not on a phone.

#19  

My bad, the encoder/motor combo does 63,500 pulses per rev. No, it is external to the motor, driven off of the back shaft of the motor. RoboRealm has the ability to generate an image from an array of data, and then analyse that image using a lot of different filters. You can also update the image. It also has path finding built in. There isn't a SLAM module in there yet.

I am just not comfortable in their environment at all compared to C# and ARC. I understand what is going on in ARC. I get lost in RoboRealm but it is probably because I haven't spent the time trying to understand it like I have ARC.

There are some cool features in RoboRealm that could be leveraged, but their pricing model isn't free like ARC's.

I also just can't bring myself to write in VBScript. It is way too unstructured for me, and I always have to go back and correct code to remove semicolons or other silly things. Old habits die hard. There are other scripting languages available in RoboRealm, but I am far more comfortable with C#.

My problem right now is that the encoder produces too many pulses for the Kangaroo. I have to figure out how to make a flip-flop circuit now. https://synthiam.com/Community/Questions/6225&page=3

Parts have been ordered to make this flip-flop circuit. I will post the results when I have them.

#20  

I do have to say that I am a programmer. I try to look at the overall picture. If I jumped to a different platform every time that the one that I am on didn't have something that I wanted, I would have to question my first statement.

I could live off of what others have done, but then I wouldn't be a programmer.

I could leverage what is done and build onto it, then I would be a smart programmer.

I could build something open ended enough to allow others to use it in different configurations, which would make me a good programmer.

I could bang my head up against something that will never work, which would make me a dumb programmer.

I had started to go down the ROS path. It is a good path if that is what the rest of your stuff is built on. I decided that I didn't want everything built on ROS, and DJ opened up ARC to allow plugins to be developed to do anything you want it to do. ROS is the wild west of robot programming. There is a lot of half working code that only works in a specific configuration without ever considering that someone else might use it. It is good in that there are a lot of things being developed there, but nothing that I have found is a working solution outside of the specific config that it was designed for.

EZ-Builder's road map became whatever we want to make it as programmers when Plugins were added. This allows you to use the controls that someone else wrote, along with the controls that are in ARC by default. It allows you to write your own controls for things that are not there, and share them if you choose to with others. The real difference that I now see with ARC and ROS is ARC is more controlled and runs on Windows. ROS is more wild west and runs mainly on Linux. Either is a good option in my opinion, but I like to leverage controls that work for a wider audience than those that work only for a specific configuration.

#21  

@David... I would be willing to pay for your plugin... I haven't bought the lidar yet but I definitely plan to... Autonomous navigation is a big deal in my opinion and so I would be happy to pay to get it....

Cheers, and thanks for your efforts in this... Richard

#22  

Thanks Richard. I think I can make this open-ended enough to allow multiple configs. I don't want to say you have to use this sensor, with these motors, spaced this distance apart, and only on this type of surface.

The key to SLAM is accurate movement. I can't control whether someone has accurate movement in their robot, nor do I want to. If I put the movement equation on the user of the plugin and assume that their movement is accurate, I can then focus on the mapping, landmark, and path-finding portions of the plugin. I would love to take this to the point that it could work with multiple sensor configurations at some point, but I have to start somewhere.

There is no reason that SLAM can't work with ping sensors, PIR sensors, IR sensors, or really anything that can detect the distance to an object. The difficult thing about using these types of sensors is that they have a very limited field of view, so figuring out whether the robot is seeing a straight line or a point takes much more movement by the robot. The LIDAR can make this determination very quickly from the data that is returned, which drastically reduces the movements required by the robot. By taking the measurements from mm to inches, I am able to reduce the map size by a factor of 25.4 and reduce the LIDAR's level of error, since every object within the same inch maps to the same pixel.

By using a scale of 1 inch = 1 pixel as a standard, the calculations for how far to move become pretty easy to do. The concern that I have with this is the requirement to use inches, which is not the universal standard. I could go with 1 pixel = 1 cm, but the issue there is that the map becomes too large to display reasonably inside of ARC, or in any application for that matter. This may need to be revisited after everything is working, but I don't have a better unit of measure that isn't too large or too small at this time.
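
The 1 inch = 1 pixel bookkeeping really is simple; something like the sketch below (my own illustration, with made-up names) is all that is needed to place a LIDAR reading on the map.

    // Convert one LIDAR reading (bearing in radians relative to the robot,
    // range in mm) into a map pixel at 1 inch per pixel. robotX/robotY are
    // the robot's current pixel position and heading is its direction in
    // radians. Names are illustrative, not the plugin's actual ones.
    public static Point ReadingToPixel(double robotX, double robotY,
                                       double heading, double bearing, double rangeMm)
    {
        double rangeIn = rangeMm / 25.4;       // mm -> inches (= pixels)
        double worldAngle = heading + bearing; // scan angle in the map frame
        int px = (int)Math.Round(robotX + rangeIn * Math.Cos(worldAngle));
        int py = (int)Math.Round(robotY + rangeIn * Math.Sin(worldAngle));
        return new Point(px, py);
    }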

The compass would only be used to account for obvious wheel slippage. It has been pointed out that the compass may suffer interference and thus not produce accurate results. I still feel that it is a valuable tool to have, but I may also incorporate a camera to account for movement. For example, I could take a picture prior to a turn and one after the turn, and compare them to see how much variance there is between the pictures. If a threshold has been met, then I could assume that there was no wheel slippage. This would have to be combined with the compass, I would think, as it would be possible to miscalculate with walls: two walls in a house could look very similar in a corner situation if the environment's decorations were not done by my lovely wife :) Every wall here has things on it, so in this environment it would be reasonable to use the camera. In an office environment this might not be the case.
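
One cheap way to compare the before/after pictures would be a grayscale frame difference, along the lines of the sketch below. This is illustrative only; the approach and threshold are my own guesses, it assumes both frames are the same size, and a real version would want something more robust than a per-pixel comparison (GetPixel is also slow).

    // Rough before/after turn comparison: the fraction of pixels whose
    // grayscale value changed by more than a tolerance. A high fraction
    // suggests the camera view actually changed. Assumes using System.Drawing;
    public static double FrameDifference(Bitmap before, Bitmap after, int tolerance)
    {
        int changed = 0, total = before.Width * before.Height;
        for (int x = 0; x < before.Width; x++)
            for (int y = 0; y < before.Height; y++)
            {
                Color a = before.GetPixel(x, y), b = after.GetPixel(x, y);
                int ga = (a.R + a.G + a.B) / 3, gb = (b.R + b.G + b.B) / 3;
                if (Math.Abs(ga - gb) > tolerance)
                    changed++;
            }
        return (double)changed / total; // 0 = identical, 1 = completely different
    }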

The EZ-Robot 4-in-1 sensor also provides a gyroscope. This could be used, I suppose, but I would have to understand its readings more to make a call on whether it should be used or not.

Another option is to use another sensor, like a ping sensor or car bumper sensor, along with the known points on the map, to know what you expect to see change with these sensors as the robot turns. The same could be done with the LIDAR, I suppose, but I think I want to complete a move and then get the LIDAR data, as the timing between these two actions (and the resulting odometry data or grid estimation) is important in order to get accurate results. The coarsening of the data from mm to inches might make this less of an issue though. Also, this will become a non-issue once the map is built and the landmarks are populated, as the position and direction of the robot will be adjusted based on the previously seen landmarks...

Okay, done thinking about this right now. Work calls.

#23  

Hi David, I think a small list of recommended components would be the best way to go. If anyone is serious about adding this function to a robot, they would have to use the components that best suit the project. I know you want to have the option of using many different sensors, but trying to work them all into the project means a lot of work. List the optimal components, and one or two options if they exist, and that's it. The costs for this system will not be cheap and everyone is aware of it. I look forward to hearing more and seeing the end result of your project. I just wanted to present my opinion!

Ron Rh

#24  

Thanks Ron. You are probably right.

What I do normally is spew ideas as they come into my head. They are quickly forgotten if I don't write them down, due to other things getting in the way and confusing things. It's funny how age, and my office moving from the spare bedroom to the living room, cause this to happen. My son needed a place to stay while he was in college, so I gave up my office. I have a lot of distractions now that I didn't have before, so when it is quiet and I can think, I like to document my thoughts. It helps me to go back and read them before I start the next day of working on things.

I also go back and update posts with newer information on these thoughts quite a lot. What is funny is that I could do this in Notepad or Word or with pen and paper, but I find myself much more likely to go back and read these if they are posted here. I guess this is my second home or something now.

I also think that seeing what other people are doing or think about my ideas is important to me and I get more motivated to figure out the next issue.

Thanks for your thoughts, they are appreciated.

#25  

Also, if I write my thoughts to others, I am less likely to leave out details that I will forget later. Getting old stinks :)

#26  

Keep em spewing... LOL.... I look forward to seeing more of this project.

Ron R

#27  

David, I've gathered together a Lidar and a Teensy controller, and ordered a 4-in-1 sensor, which is presently out of stock but will eventually arrive, I'm sure. I want to use the Lidar for navigation using your plugin and control a Roomba using the EZB controls that are already built. I'm also in the market for a quality compass sensor that could help with additional positioning control for better autonomous movement.

#28  

Hey Robot Doc,

I am initially going to set this up with the components you have ordered or already have. The spewing of ideas is only that: me thinking "out loud", and most of it is just ideas for further improving what I will build.

The ramblings that I do for Rafiki behind the scenes are quite long, and it is interesting to go back and read some of the early thoughts that I had. For some of them I think, "What was I even talking about?" and for others I think, "Wow, I had completely forgotten about that. That's a good idea." My son got me started on this. It's a form of storyboarding, kind of, I guess.

Those young ones have some good ideas sometimes :)

#29  

Just be aware the EZ Robot 4 in 1 sensor has issues with the compass... Both Alan (thetechguru) and I have noticed it ... The thread about the issues is floating around here somewhere... Anyway, last I heard Jeremie was looking into it...

#30  

Cool, I am sure it is a thread that I missed while sanding Rafiki :) Thanks for the heads up, Richard!

#31  

Quote:

Just be aware the EZ Robot 4 in 1 sensor has issues with the compass... Both Alan (thetechguru) and I have noticed it ... The thread about the issues is floating around here somewhere... Anyway, last I heard Jeremie was looking into it...

This is the thread. It started as a somewhat off-topic post in another thread and just continued there, so not obvious if browsing, but pops up if you search for 4in1 compass.

https://synthiam.com/Community/Questions/8604

Alan

#33  

Hmm, interesting read. So, from what I gather, the compass part won't even come close to telling me if a turn is complete (with a 5 degree variance from start to finish of a turn, for example).

I hope Jeremie isn't too busy with other things to address it right now.

#34  

The way it sits now, the compass is unusable... I have two 4-in-1 sensors that I can't currently use...

#35  

Just saw in another thread that the current 4-in-1 is only available from Brookstone, and that they are re-designing a Rev 2 board at this time. I hope that isn't going to be required to get the compass to work, but it does mean you may be waiting a while before you get the one you ordered (and, without trying to be mean, EZ has never met an expected ship date on a new product yet, so I wouldn't hold my breath too much).

The thread I linked to has information on how to get a 3rd party sensor working correctly with EZ-B. You might want to go that route if you are in a hurry.

Alan

#36  

Thanks Alan,

I am going to focus on getting the encoder issue (too many counts for the Kangaroo with the two motors in my config) resolved first, I think. This is where I am with SLAM right now anyway, so it is important to get it sorted. It may be possible to not use the compass, which will probably make me code the other stuff better anyway :)

If I come to a time that I need it, I will either make a subsystem for it or get the 4-in-1 v2 when it is available.

#37  

I am also in the same disappointing situation. None of the compass values are consistent or usable as a reference. Has anyone been able to use the data, or get it to work? It seems like a wasted function. I bought it thinking it would give me a 0 to 360 degree reference. Got nothing.

Ron R

#38  

I got sidetracked from this by making the circuit boards for the divide-by-2 logic circuits that I needed for my motor encoder/motor combo. I had planned on getting back to this right after completing the prototypes for these, but that didn't happen.

My brain was on circuit board design, and I found it really hard to get it off of that. The cost of the first 64 of these little logic divider boards would be really high for me if all I did was produce the 2 that I needed for my prototype, so I toyed with the idea of selling the others to recover the cost. I gave it some thought and decided that I needed to build a subsystem controller board anyway, so I could just include these two divide-by-2 circuits on that same board, making their cost go away. I had a far easier time convincing myself that the cost of the subsystem controller boards was justified, as there are a lot of benefits to having one in a robot. The good news is that there was enough room on the board to include 13 additional divide-by-2 circuits per board, meaning that I will still be able to recover the cost of producing the subsystem controller boards if I sell these. There will be a total of 52 of these divide-by-2 circuits up for sale with the first order.

I mention this just because it changes, quite drastically, how the robot is wired. It will take some time to get the boards, then to build them, and then to rewire the robot to use them. This is all good, as in the long run it will be a much cleaner robot inside. It does mean that I lost my test platform for working on the navigation part. I estimate that it will be a couple of weeks before I have everything assembled again to the point that I can work on navigation.

#39  

Hey David, I know you are working on getting this going. I was wondering if you had seen this video: http://www.bing.com/videos/search?q=lidar-vx+11+windows&&view=detail&mid=AC1AF37427DC4D4B7403AC1AF37427DC4D4B7403&FORM=VRDGAR

This guy does not seem to reply publicly, but maybe you could ping him offline to see if you can get any tips to complete this awesome app.

Thanks

#40  

Really, the only tip I need right now is time. I have all of the parts working. I just have to have time to focus on it.

#41  

Sweet! Thanks David. Who out there is working on a cloning machine? Haha

#42  

@d.cochran,

I made a script to use your Lidar plugin and was hoping there is a way to shorten it. Example:

    $LIDARclose = 800 # Change for minimum distance

    # Lidardistance[] Base Right: $LIDARDistance[0] $LIDARDistance[1] $LIDARDistance[2] thru $LIDARDistance[80]
    # Lidardistance[] Base Left: $LIDARDistance[354] $LIDARDistance[353] thru $LIDARDistance[269]

    # Right side lidar
    if($LIDARDistance[0] < $LIDARclose or $LIDARDistance[1] < $LIDARclose or ..... and $LIDARDistance[325] > $LIDARclose and $LIDARDistance[324] > $LIDARclose and $LIDARDistance[323] > $LIDARclose and $LIDARDistance[322] > $LIDARclose and $LIDARDistance[321] > $LIDARclose)
      goto(Left)
    sleep(3000)

My current script sets every variable individually ($LIDARDistance[0], $LIDARDistance[1], etc.). Is there a way to define a range, e.g., $LIDARDistance[0] thru $LIDARDistance[359]?

Hope this post makes sense.

Thanks

#43

Hey Mike,

Sorry, I just saw this. I have been really busy lately. I won't have time until maybe tomorrow night to look at this. I have to go to Dallas this evening to make sure that a data center move for this weekend is planned well, and the weekend will be busy with the move itself. I should have time tomorrow night, depending on some other things, but I will do my best to look into how you could make the script shorter.

Thanks for understanding.
David

#44  

Thanks David, no rush. I was hoping not to have to define each Lidar degree variable; the full script I have, with every variable, takes about 500 ms to run through.

Thanks again, Mike

#45  

Hey Mike,

Without writing everything up for you, I wanted to give you the direction that I would go...

The gist is that you would use a counter variable that is updated as you cycle through the array to get and evaluate the values from the LIDAR. When you get to 360, you drop out. You could also evaluate the other values returned from the LIDAR to judge how reflective the item is at those degrees.


# You would only define this test array when experimenting; the real script
# would read the plugin's $LIDARDistance array instead.
DefineArray($test, 360)
$ArrayPosition = 0

:TheLoop
# This is where you would put in your ranges
if($ArrayPosition > 10)
  if($ArrayPosition < 50)
    # This is where your logic would go to do something with the value from the LIDAR
    if($test[$ArrayPosition] < 25)
      say("There is something close to me at " + $ArrayPosition + " degrees")
    endif
  endif
endif
# Move to the next degree and check whether we are through all of them
$ArrayPosition = $ArrayPosition + 1
if($ArrayPosition >= 360)
  # If so, get out
  goto(OUT)
endif
goto(TheLoop)

:OUT

This isn't tested code, as I am on a Linux laptop right now, but it should be close to working. The # lines are just comments explaining each step; fill in your own ranges and logic and you should be close to a solution.

#46  

I connected the GetSurreal.com Lidar and Teensy to a Raspberry Pi 3 Model B.

I installed VPython (with some help from the VPython forum, as it is not 100% easy) and pyserial (in fact, it was already installed), ran the visual Python 3D script from GetSurreal, and hey presto, I have a portable lidar.

I might get the Raspberry Pi to talk to my EZB at some point, which would be a neat additional thing to do.

You need a Raspberry Pi 3 to do this and earlier Raspberry models will not work.

I think this is a good alternative to the impressive and really easy to use EZ-Robot plugin described in this thread, as there is no need for a PC.

Cheers

Chris

#47  

I don't use the Pi, but it is an option. I connect it directly to the tablet that is part of my robot. The EZ-B V5 will take care of the need for an additional computer when it comes out.

#48  

Hi There

Just to finish off my comment: I found a way to put TeamViewer on the Raspberry Pi, so now I have remote access from my master PC to the 3D Neato lidar output on my Raspberry.

As noted this could perhaps more easily be done with a compact PC but my way is probably a bit cheaper ...

This is what you do:

  1. use a Raspberry Pi 3 Model B (or better) or VPython will not run

  2. use a large SD card in your raspberry as the software is large in size. I used a 32GB card after smaller ones were not big enough.

  3. use raspi-config in terminal to enable the experimental GL Driver graphics

  4. install vpython (sudo apt-get install python-visual)

  5. install ExaGear (which costs money) and TeamViewer following these instructions: https://eltechs.com/run-teamviewer-on-raspberry-pi/

  6. Download lidar.py from: http://www.getsurreal.com/products/xv-lidar-controller/xv-lidar-controller-visual-test

  7. Identify the correct COM port for the Teensy and edit lidar.py

  8. run python lidar.py from terminal

  9. use TeamViewer and enjoy lidar graphics on your master PC

Let me know if you have any problems, but once you know the pitfalls it really is quite easy (ha!)

Cheers

Chris

#49  

Hey Chris,

Are you using the information for anything other than the display from lidar.py?

Thanks David

#50  

Hi David

If I had time I would want to do more but progress for me is slow due to other less fun commitments like work.

I think there is a real market possibility for someone to write mapping and pathfinding software for the lidar if it were written in a flexible way to:

a) support the multiple lidars which exist
b) run on multiple machines (Raspberry Pi, PC, etc.)

Perhaps in Python, which seems quite efficient.

In the short term, at the very least I intend to make a serial connection between the EZB and the raspberry to do basic things like turn it on and off but this is so weak compared to the possibilities.

Cheers

Chris

#51  

Yeah, I was just wondering. It becomes difficult because different LIDARs return data differently; they spin at different speeds and such. Python scripts could possibly be used to determine which LIDAR it is and adjust things as needed... I totally understand about work... I have been working on many different things right now, so my experimentation and learning have been slow...

#52  

I think it is only a matter of time before someone comes up with a good lidar script, because the demand is out there. I would certainly be happy to pay for one. It seems daft for a whole bunch of people to write their own half-hearted scripts when a single, more professional approach would be more sensible.

Anyway, for me now, I am going to progress my plans to put a whole bunch of Arduino-driven modular additions onto the Roli, including a Geiger counter (just for fun). After that, I plan to find a second-hand Wild Thumper, or build my own, and work out how to auto-charge it. Then get it to climb up and down stairs, which would be a fabulous challenge.

Once all that is done, then I will return to the lidar software issue. Probably several years at my rate of progress!