
Andy Roid
I have read about many ways to develop indoor navigation. The hardware recommended varies from an IPS system to a lidar-based system, even straight scripts, the use of cameras, encoders, and maybe even a compass. There are simple methods given in tutorials, but they are not the "real" navigation I want. I end up in total confusion. I can see the validity of each device, and even of combinations of devices with added safety sensors such as drop-off sensors, bumper sensors, etc. Sophistication is needed, but I end up with mystification.
My main question is: what works, what is needed, and what makes sense? Anyone want to discuss this subject?
I know there are many who want to build the base for an InMoov or their favorite robot build, or just get an Adventure Bot to travel throughout the house using a single verbal command. I would.
If DJ wants to do a Hack Night on this subject, I think many would be interested.
I am happy to eat a crow sandwich if EZ-Robot proves me wrong, but I am not holding my breath.
I will look further into its functionality to see how it works and what it uses (lidar, etc.). I am not ready to send the $1,500.00 yet.
What else can be done using more familiar devices like I mentioned? Even if it is less sophisticated.
Synthiam could do this easily.
The Oculus ROV uses this camera. It is worth the $149:
http://shop.orbbec3d.com/Astra-Pro_p_35.html
I can't speak for EZ-Robot, but I imagine the lidar and IPS do not align with their target market. That being said, all Synthiam reference design hardware is open source, and there are a number of companies looking to manufacture them.
The main challenge with a lidar configuration is that the robot needs to be designed for a very specialized case. For instance, that expensive robot Richard posted is incredibly useless for any practical robot application because it's designed to demonstrate pre-installed navigation features. It doesn't align with the creative aspect of DIY robots.
My initiative is always to make solutions modular. I've done that with the Synthiam hardware reference designs. You can make it yourself, or wait until our partner team figures out who should make it for sale.
In the meantime, don't let Richard's crudeness affect your creative efforts. An industry that requires you to feel outcast because you're not a programmer would not be a sustainable industry. Fortunately, like all technologies, our efforts will prevail by removing the barriers of engineering requirements. You know, like HTML did for the internet or Windows did for the PC...
Back to the original question: the IPS is what I believe to be the most reliable solution for indoor navigation.
I understand Richard's response. The slow advancement and release of products that will do the job, without someone having to be an engineer, programmer, designer, etc. to make them work, is frustrating.
I see the advantages of a modular device which can be added to projects which are already a work in progress. You have been working on this for quite a while. I hope to see it develop and happen soon.
I thank you for the advice to continue to look into the IPS.
Ron
Thanks,
Ron
AFAIK, the IPS is an IR beacon with an IR camera, so each IPS device will tell you if the bot/beacon is in the room and visible (not hidden behind an obstacle), and only then do you have the X/Y beacon position in a picture.
So you will need multiple IPS devices hung on the walls, furniture, or ceiling, spread around the house/apartment, plus power cables.
You will need to explain to your family members why there are cameras everywhere (privacy issues), and then you need to calibrate the sensor X/Y values to distances.
IR beacons are prone to daylight interference, so you need to close some blinds to keep out daylight, e.g., windows, balconies, etc.
And only after you wire all the sensors can you script all the positions/distances.
So basically you give up your privacy and live like a vampire (no daylight)... (exaggerating)
Is this simple and reliable, or am I missing something?
Why IR beacons? How is it different from a camera with robot/object tracking?
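For what it's worth, the calibration step (mapping the beacon's X/Y pixel position to real distances) doesn't have to be painful. Here is a minimal sketch, assuming the camera looks roughly straight down so each axis can be calibrated linearly from two measured reference points; all of the pixel and distance numbers below are made up for illustration.

```python
# Sketch: calibrating an overhead IR camera's pixel coordinates to floor
# positions. Assumes a near-vertical camera so each axis maps linearly;
# a tilted camera would need a full homography instead.

def make_axis_map(pixel_a, meters_a, pixel_b, meters_b):
    """Return a function mapping a pixel coordinate to meters by
    linear interpolation between two measured reference points."""
    scale = (meters_b - meters_a) / (pixel_b - pixel_a)
    return lambda pixel: meters_a + (pixel - pixel_a) * scale

# Calibration: beacon placed at two known floor positions per axis,
# pixel coordinates read off the IR camera image (made-up numbers).
x_map = make_axis_map(120, 0.0, 540, 3.0)   # pixels 120..540 span 0..3 m
y_map = make_axis_map(80, 0.0, 400, 2.5)    # pixels 80..400 span 0..2.5 m

def beacon_position(px, py):
    """Convert a beacon's pixel position to floor coordinates in meters."""
    return (x_map(px), y_map(py))

print(beacon_position(330, 240))  # midpoint of both spans, about (1.5, 1.25)
```

With one such mapping per IPS unit, each camera only needs two reference measurements per axis during setup.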
ptp, you have the skills to develop such a plugin.
https://www.youtube.com/watch?v=NonBlt-wKCI
I know there are a few different indoor navigation products out there now that can be purchased at RobotShop. Some use ultrasonic sensors that communicate with each other. Others use infrared patterns on the ceiling. These systems don't use ROS, but they are involved to set up in their own right. By contrast, with the Synthiam IPS reference design it's as simple as plugging in a USB device and clicking a mouse. I sincerely hope the project will be made by EZ-Robot or another partner company, as it is so easy to use. While the technology behind those systems I mentioned is very cool, the cost is the biggest detriment. If produced, the IPS reference design would likely be the most cost-effective solution on the market.
@Ptp
I can speak a bit on how it works in the practical sense. Since it's infrared-based, it can be disrupted by direct sunlight, but it isn't affected much by reflected sunlight. The image contrast can be adjusted manually to block false positives. It also has a pretty dark daylight filter on the infrared camera side to block as much daylight as possible. The IPS itself can be placed on a wall and aimed toward the floor, which will cut out the majority of any infrared interference that could possibly get picked up. Using it with the curtains open shouldn't be a big deal as long as you don't have many shiny reflective items at ground level. Yay! You don't have to be a vampire!
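A rough sketch of the contrast/threshold idea described above: if the beacon is simply the brightest blob in the IR frame, raising the threshold rejects ambient light without losing the beacon. The frame and threshold values here are made up for illustration.

```python
# Sketch: locating an IR beacon as the centroid of pixels above a
# brightness threshold. Raising the threshold blocks false positives
# from ambient IR, as described in the post above.

def find_beacon(frame, threshold=200):
    """Return the centroid (col, row) of pixels at or above the
    brightness threshold, or None if nothing exceeds it."""
    hits = [(x, y)
            for y, row in enumerate(frame)
            for x, value in enumerate(row)
            if value >= threshold]
    if not hits:
        return None
    cx = sum(x for x, _ in hits) / len(hits)
    cy = sum(y for _, y in hits) / len(hits)
    return (cx, cy)

# A made-up 4x4 grayscale IR frame (0-255) with a bright beacon blob.
frame = [
    [10, 12, 11, 10],
    [11, 250, 255, 12],
    [10, 248, 252, 11],
    [12, 10, 11, 10],
]
print(find_beacon(frame))       # -> (1.5, 1.5)
print(find_beacon(frame, 300))  # -> None (threshold set above any pixel)
```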
You are correct in saying that you would need one IPS per area if you would like to have full coverage. Full coverage would likely only be feasible with an on-board system.
In terms of getting trapped under obstacles, you would just have to map out your robot's path ahead of time with the app to avoid those kinds of areas.
As for privacy, I am certain the IPS could be used without the real-time camera if needed. Really, once set up in a permanent location, the IPS could simply use a screenshot for the real-time image; the navigation is all done with the infrared camera, so essentially the real-time camera could be turned off. I don't believe that functionality exists in the app, but I'm sure that this kind of feature could be explored.
In the case of the IPS, the IR beacon is used because it is tiny and can still be picked up from upward of 16 feet away. Object/glyph/color recognition is much harder to do at this kind of range, unless you have a huge glyph/color sample.
For example, a waypoint could be the kitchen or a doorway to the kitchen. You can then use a ControlCommand to have the robot navigate to the waypoint.
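As a sketch of what the scripting side might look like: the skill name and command string below are assumptions for illustration only, not the actual ARC syntax, so check the navigation skill's documentation for the real ControlCommand parameters.

```python
# Sketch: translating a recognized spoken phrase into a (hypothetical)
# ControlCommand string for a navigation skill. The skill name
# "Indoor Positioning System" and command "NavigateToWayPoint" are
# placeholders, not confirmed ARC identifiers.

WAYPOINTS = {
    "kitchen": "Kitchen Doorway",
    "front door": "Front Door",
    "play room": "Play Room",
}

def command_for(phrase):
    """Return the command string to send for a recognized phrase,
    or None if no waypoint keyword matches."""
    for keyword, waypoint in WAYPOINTS.items():
        if keyword in phrase.lower():
            return (f'ControlCommand("Indoor Positioning System", '
                    f'"NavigateToWayPoint", "{waypoint}")')
    return None

print(command_for("Robot, go to the kitchen"))
```

The same lookup-table idea works regardless of which navigation skill ends up issuing the movement.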
You only require one IPS per room, because the visual lens of an IPS covers 170 or so degrees. The higher it is mounted, the better. The IPS also has USB or serial output, so it can connect to an IoTiny for WiFi connectivity, or directly to a PC over USB.
The positive experience comes when comparing it to lidar, because the IPS operates far more successfully for indoor placement in a home. You will have to experience it to believe it.
Also, the IPS combines highly reliable navigation with swarm arrangement, meaning you can have hundreds of robots (or two) navigating and participating in the environment. Sharing is caring!
The suggestion of buying a robot with ROS navigation, blah blah, is silly. If you want a DIY navigation robot that is custom-built to your requirements, go to school for 10 years and learn ROS and C++, lol. Or wait for the alternative that Synthiam will provide shortly, which will be far more amazing.
I used to keep the proven template of standard industry economics, related to the maturity of this industry, to myself because I feared others would copy us. But at this stage it seems others are far too determined to prove economics wrong and keep making robotics more complicated rather than accessible, even at the cost of Jibo, Anki, Kuri, Baxter, etc. So that's where I come in: I wait patiently for a technology to be proven, then I adapt it and make it accessible to people with ideas who lack programming skills. Hey, quote me now... that paragraph will belong in history books :). I predicted the bankruptcy of Kuri, Baxter, Anki AND Jibo on many, many occasions. There's one other approaching that will make you say, "Wow, I thought warehousing robots were all successful." But I won't say who it is. Keep you guessing.
Back on topic...
We're developing something that leverages existing low-cost navigation products which can be hacked for development, such as the cost-effective old Neato lidar-equipped vacuums. There are also a number of RobotShop lidars supported by our new SLAM device driver, which you'll see in about a month (sooner in beta). It lets many popular lidars connect to ARC's new navigation subsystem, using a standardized protocol that can be adapted to any lidar hardware.
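For anyone curious what "hacking a Neato lidar" involves: the XV-11 style lidar streams 22-byte packets over serial, four distance readings per packet, in a format that the hobbyist community reverse-engineered and documented. Below is a minimal decoder sketch based on those commonly published notes; verify the field layout (and add checksum validation) against your own unit before relying on it.

```python
# Sketch: decoding one 22-byte data packet from a Neato XV-11 style lidar.
# Layout per the community notes: 0xFA, index, speed (2 bytes), then four
# 4-byte readings, then a 2-byte checksum (ignored in this sketch).

def parse_packet(packet):
    """Return (base_angle_deg, rpm, [(distance_mm, valid), ...])."""
    assert len(packet) == 22 and packet[0] == 0xFA
    index = packet[1] - 0xA0              # 0..89; each packet covers 4 deg
    rpm = (packet[2] | (packet[3] << 8)) / 64.0
    readings = []
    for i in range(4):
        b0, b1 = packet[4 + i * 4], packet[5 + i * 4]
        invalid = bool(b1 & 0x80)         # bit 7: invalid-data flag
        distance_mm = ((b1 & 0x3F) << 8) | b0
        readings.append((distance_mm, not invalid))
    return (index * 4, rpm, readings)

# A made-up packet: index 0xA5 (angles 20-23), 300 rpm, one valid reading
# of 1000 mm; the other three readings carry the invalid flag.
pkt = bytes([0xFA, 0xA5, 0x00, 0x4B,
             0xE8, 0x03, 0x10, 0x00,
             0x00, 0x80, 0x00, 0x00,
             0x00, 0x80, 0x00, 0x00,
             0x00, 0x80, 0x00, 0x00,
             0x00, 0x00])
angle, rpm, readings = parse_packet(pkt)
print(angle, rpm, readings[0])  # -> 20 300.0 (1000, True)
```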
Remember, Synthiam doesn't have the overhead of hardware manufacturing. This means high-speed implementation of software features... stay tuned!
Needless to say, ROS and others pave the groundwork for Synthiam to identify what's useful and what isn't, so we can make it accessible to everyone.
Richard, it's never been about who's first. Which is why coming in last is a good thing for this business.
In my opinion, if the component prices are realistic, a combined navigation system using both optical and lidar sensors will work well.
The only issue I have is that my need is now. I have three platforms ready to go.
What I was trying to say is that SLAM (the Oculus ROV's, anyway) can handle a constantly changing environment and still navigate to its destination or next waypoint... It is way too complicated and expensive, IMHO, however. Another reliable, simpler, and cheaper solution is needed...
The visual SLAM stuff I find most intriguing, because the math must be nuts!
If someone stands in front of an IPS camera sensor, the robot stops and waits for the beacon to be detected again.
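That stop-and-wait behavior is easy to express as a tiny state machine. A sketch, where "drive", "stop", and "wait" stand in for whatever movement commands your robot actually uses:

```python
# Sketch: stop when the beacon disappears (someone blocked the camera),
# wait until it reappears, then resume driving.

class NavigationGate:
    def __init__(self):
        self.moving = False

    def update(self, beacon_visible):
        """Return the action for this cycle: 'drive', 'stop', or 'wait'."""
        if beacon_visible:
            self.moving = True
            return "drive"
        if self.moving:
            self.moving = False
            return "stop"   # beacon just vanished: halt immediately
        return "wait"       # already stopped; keep waiting for the beacon

gate = NavigationGate()
print([gate.update(v) for v in (True, True, False, False, True)])
# -> ['drive', 'drive', 'stop', 'wait', 'drive']
```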
I don't mind building one, although a PCB/kit would reduce the build time...
@DJ:
I see there is a BOM and firmware. Can we start? Is the plugin ready to use? Otherwise the challenge does not make sense.
The only snag I can foresee is that EZ-Robot has a custom-sized ribbon PCB for their camera, so you won't be able to order the camera directly; it is much shorter than the standard camera you can buy off the shelf. That being said, if you had 2 x EZ-Robot cameras (the second generation), you could transplant the camera units themselves onto the IPS. Oh, and the cameras have a 100-degree wide-angle lens installed; those could be a bit tricky to source. One of those lenses also needs the IR filter removed. I will upload some more documentation for the cameras to GitHub today.
I can let @DJ answer your other question. I believe there is a bit more work to be done on the plugin.
Q1) Can you confirm the wide-angle lens degree?
Q2) Is the regular camera (non-wide, with IR filter) required?
Q1. It is a 100-degree wide-angle lens, which gives a much wider field of view than a standard lens would.
Q2. The documentation that I will be uploading later today will make it clearer.
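For a rough sense of what the 100-degree lens buys you: the width of the area visible at a given distance is 2·d·tan(FOV/2). A quick calculation, where the mounting heights are made-up examples:

```python
# Sketch: floor coverage of a wall/ceiling-mounted camera from its field
# of view. The 100-degree figure comes from the discussion above.
import math

def coverage_width(fov_degrees, distance_m):
    """Width of the viewed area at a given distance:
    w = 2 * d * tan(fov / 2)."""
    return 2 * distance_m * math.tan(math.radians(fov_degrees) / 2)

for height in (2.0, 2.5, 3.0):
    print(f"{height} m: {coverage_width(100, height):.2f} m across")
```

At a 2.5 m mounting height, a 100-degree lens sees roughly a 6 m wide patch of floor, which is consistent with the one-unit-per-room claim earlier in the thread.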
Are the JTAG pins available on the PCB?
Can you generate/upload the Gerber files?
@ptp, no, the JTAG pins aren't available, but the SWD programming pins are broken out. SWD is an STMicro-specific programming interface that only requires 4 pins. SWD works with the ST-Link V2 programmer (~$30), which is much more affordable than most JTAG programmers.
I will upload the gerbers for all the hardware projects to github today.
@ptp
the E-39 Indoor positioning system gerbers and more assembly instructions have been uploaded to Github:
https://github.com/synthiam/E-39_Indoor_Positioning_System
as well as the gerbers and more assembly instructions for the E-44 IPS Transmitter:
https://github.com/synthiam/E-44_IPS_Transmitter
Thanks, the gerber files worked. I ordered the PCBs.
Are you trying to build the IDP?
Regards.
Remember, a solution requires an actual problem.
Maybe the answer is "I don't know." I want the robot to move around, charge itself, make my breakfast, and walk my kids to school. If that's the case, the answer to your navigation needs is still in the exploration stage, and therefore you simply need a navigation device to experiment with.
My goal is to be able to have my robot travel to a specified location, perform a function, and return to the home position. There can be multiple locations determined by a verbal command.
Also, a second command would be the function, then an execute command. An example of a function is: speak a greeting, scan a face using an Omron camera for facial recognition, respond, then return to the home location.
(A greeter at the front door, or traveling to the play room to tell the kids to be quiet.)
This is just what comes to mind right now.
Does this help?
Ron
1) Robot wanders the house (or the first floor at least. Stair Climbing is probably outside my budget) without getting stuck or lost or banging into walls and furniture.
2) Robot reacts to family, guests, and intruders, but not to pets that it encounters (I'll leave the details of the specific reaction to a later conversation, but I think this is a use case that all users who have been interested in navigation share on some level).
3) When I call it (either by voice if within range, or with a request from my computer, phone or smartwatch) robot comes to me using the shortest possible unobstructed route.
3a) I want the robot to follow me without bumping into obstructions, even in areas it is not familiar with (I have discussed beacon following before and have received a bunch of suggestions, but haven't had time to experiment yet. The obstruction avoidance is why I am interested in lidar for this; I find the ultrasonic and IR distance sensors are not reliable enough.)
4) Robot self charges when battery begins to run low, resume wandering when charge is complete.
5) Robot retrieves recognized cat (or dog or child) toys that it has been trained to recognize and brings them to me upon request.
My Neato vacuum can do #1, #3, and #4, and any EZ-Robot can do #2. #5 I can currently do with Roli, but it takes too much human interaction to find the toy and get it picked up. It is more a remote arm than an autonomous tool, but I am sure if I spent more time with it I could probably make it happen with existing scripting and auto-actions. Maybe some hardware changes so the gripper doesn't need such precise positioning, and I might need specific toys with brighter colors.
So, I think this is all possible with technology that exists; the question is whether it can be done at DIY prices. (I don't want to spend more than about $400 on the navigation piece.)
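Use case #3 above (come to me by the shortest unobstructed route) is classic grid pathfinding, which is exactly what a Neato-style SLAM map feeds into. Here is a minimal breadth-first-search sketch over a made-up occupancy grid; a real system would run A* on a much finer map, but the idea is the same.

```python
# Sketch: shortest path on an occupancy grid (0 = free, 1 = obstacle)
# using breadth-first search, which guarantees a shortest route on an
# unweighted grid.
from collections import deque

def shortest_path(grid, start, goal):
    """Return the list of (row, col) cells from start to goal, or None
    if the goal is unreachable (e.g. blocked by a closed door)."""
    rows, cols = len(grid), len(grid[0])
    came_from = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:   # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [
    [0, 0, 0],
    [1, 1, 0],   # a wall with a gap on the right
    [0, 0, 0],
]
print(shortest_path(grid, (0, 0), (2, 0)))
# -> [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```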
Alan
I understand your question, but I am a bit confused why someone as creative and visionary as you would need a reason for localization in robotics. Well, what is the reason for having a hobby or home robot at this stage in the first place? For me, it is all about having fun, exploring new things in robotics, making the most of what exists in this field, and evolving new ideas and possibilities. If someone like me likes autonomous robots (I think all do), then having the possibility to give the robot that awareness is one big step toward autonomy and more.
Regards,
Paulo
Thanks for the compliment. My creativity and vision are influenced by your challenges and desired outcomes. I need to understand your true needs before I can provide a solution. As I mentioned, a solution requires a problem... right now there's just "I want navigation." I require more details on the expected outcome; it hasn't been very clear what the outcomes are.
So far it kind of looks like you'd want the robot to randomly navigate around the home and pick up things or scan faces?
The random navigation and exploring might be interesting, because it would be like a pet. And when it sees someone it knows, it can say hello, lol.
I wouldn't use the Omron camera, though. The Cognitive Face skill is super smart: https://synthiam.com/Software/Manual/Cognitive-Face-16210
My Neato app maps the room, and then I can simply use my finger in the app to mark off "do not enter" areas.
https://news.microsoft.com/innovation-stories/microsoft-build-autonomous-systems/
I'm excited!
With this in mind, I am beginning to modify my Adventure Bot, which already has an EZR camera mounted on it. I am adding 3 ultrasonic sensors around the base and a platform for a lidar. I will place EZ-Bits near the rear to allow for the IPS transmitter when it becomes available.
I look forward to updates when they become available.
I really don't want to see this thread go dry, and I understand this is not an easy task. This is a huge endeavor. I hope you can come up with a plan.