Asked — Edited

What Is Really Needed For Indoor Navigation?

I have read about many ways to develop indoor navigation. The hardware recommended varies from an IPS (indoor positioning system) to a lidar-based system, to plain scripts, cameras, encoders, and maybe even a compass. There are simple methods given in tutorials, but they are not the "real" navigation I want. I end up totally confused. I can see the validity of each device, and even of combinations of devices with added safety sensors such as drop-off sensors, bumper sensors, etc. Sophistication is needed, but I end up with mystification.

My main question is: what works, what is needed, and what makes sense? Does anyone want to discuss this subject?

I know there are many who want to build a base for an InMoov, or for their favorite robot build, or just get an Adventure Bot to travel throughout the house using a single verbal command. I would.

If DJ wants to do a Hack Night on this subject, I think many would be interested.


Related Hardware: EZ-B v4


PRO
Synthiam
#17   — Edited

Well said, Jeremie. I'll add a bit. The IPS uses a visible-light camera to map the areas the robot cannot drive through (obstacles like sofas and chairs). The system also maps waypoints, which are used for navigation.

For example, a waypoint could be the kitchen or a doorway to the kitchen. You can then use a ControlCommand to have the robot navigate to that waypoint.
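To make that concrete, here is a minimal sketch of how such a waypoint command might be issued from ARC's Python scripting, where ControlCommand() is available as a built-in. The skill name, command name, and waypoint label below are placeholders (assumptions, not the actual strings) - check your project's Cheat Sheet for the exact ControlCommand strings exposed by your navigation skill.

```python
# Hypothetical sketch: send the robot to a mapped waypoint from ARC's Python scripting.
# "Indoor Positioning System", "NavigateToWaypoint" and "Kitchen" are placeholder
# names - substitute the ControlCommand strings your own navigation skill exposes.

def go_to(waypoint):
    # ControlCommand(skillName, command, value) is ARC's built-in way of
    # telling another skill in the project what to do.
    ControlCommand("Indoor Positioning System", "NavigateToWaypoint", waypoint)

go_to("Kitchen")
```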

You only require one IPS per room, because the visual lens of an IPS covers roughly 170 degrees; the higher it is mounted, the better. The IPS also has USB or serial output, so it can connect to an IoTiny for Wi-Fi connectivity or directly to a PC over USB.

The positive experience comes from comparing it to lidar: the IPS operates far more successfully when placed indoors in a home. You will have to experience it to believe it.

Also, the IPS combines its stellar, 100%-successful navigation with swarm arrangements - meaning you can have hundreds of robots (or two) navigating and participating in the same environment. Sharing is caring!

The suggestion of buying a robot with ROS navigation, blah blah, is silly. If you want a DIY navigation robot that is custom built to your requirements - go to school for 10 years and learn ROS and C++, lol. Or the alternative that Synthiam will provide shortly will be far more amazing.

I used to keep the proven template of standard industry economics related to the maturity of this industry to myself, because I feared others would copy us. But at this stage it seems others are far too determined to prove economics wrong and keep making robotics more complicated rather than accessible - even at the cost of Jibo, Anki, Kuri, Baxter, etc. So that's where I come in: I wait patiently for a technology to be proven, then I adapt it and make it accessible to people who have ideas but lack programming skills. Hey, quote me now... that paragraph will belong in history books :). I predicted the bankruptcy of Kuri, Baxter, Anki AND Jibo on many, many occasions. There's one other approaching where you'll say, "Wow, I thought warehousing robots were all successful." But I won't say who it is. I'll keep you guessing.

Back on topic...

We're developing something to leverage existing low-cost navigation products that can be hacked for development - such as the cost-effective old lidar-equipped Neato vacuums - and also a number of RobotShop lidars with our new SLAM device driver, which you'll see in about a month (sooner in beta). It lets many popular lidars connect to ARC's new navigation subsystem using a standardized protocol that can be adapted to any lidar hardware.
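For anyone who wants to experiment with one of those hacked vacuum lidars before the driver ships, here is a rough sketch of reading a Neato XV-11 lidar over serial using the community reverse-engineered packet format (22-byte packets: 0xFA start byte, index 0xA0-0xF9, 2-byte motor speed, four 4-byte readings, 2-byte checksum). The port name and baud rate are assumptions, and the byte layout can vary by firmware, so verify against your own unit.

```python
# Rough sketch of parsing a hacked Neato XV-11 lidar's serial stream.
# Assumptions: pyserial installed, adapter on "/dev/ttyUSB0", 115200 baud,
# and the community reverse-engineered packet layout.
import serial

PORT = "/dev/ttyUSB0"   # placeholder; adjust to your serial adapter

def read_packets(ser):
    """Yield (angle_deg, distance_mm) pairs parsed from raw XV-11 packets."""
    while True:
        if ser.read(1) != b"\xfa":             # hunt for the start byte
            continue
        body = ser.read(21)
        if len(body) != 21 or not 0xA0 <= body[0] <= 0xF9:
            continue                            # resync on a malformed packet
        base_angle = (body[0] - 0xA0) * 4       # each packet covers 4 degrees
        for i in range(4):
            d = body[3 + 4 * i : 7 + 4 * i]     # one 4-byte distance reading
            if d[1] & 0x80:                     # invalid-data flag is set
                continue
            distance_mm = ((d[1] & 0x3F) << 8) | d[0]
            yield base_angle + i, distance_mm   # checksum bytes are ignored here

with serial.Serial(PORT, 115200, timeout=1) as ser:
    for angle, dist in read_packets(ser):
        print(f"{angle:3d} deg -> {dist} mm")
```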

Remember, Synthiam doesn't have the overhead of hardware manufacturing. This means high-speed implementation of software features... stay tuned!

Needless to say, ROS and others pave the groundwork for Synthiam to identify what's useful and what isn't, so we can make it accessible to everyone.

Richard - it's never been about who's first, which is why coming in last is a good thing in this business. ;)

PRO
Portugal
#18  

What if someone stands in front of the beacon? In my humble opinion the future is in cameras and robot vision, and the future is now. With this I do not want to contradict your sharp eye for the tech business; it's just my opinion.

#19   — Edited

What if someone stands in front of the camera or lidar? It's a similar issue. In my opinion, if the component prices are realistic, a combined navigation system using both optical and lidar sensors will work well. The only issue I have is that my need is now. I have three platforms ready to go.

#20  

@Andy Roid "What if someone stands in front of the Lidar? A similar issue." ROS on the Oculus ROV will navigate around obstacles even if they change position.... In other words moving furniture around won't affect SLAM navigation...

#21  

Again, I agree, but even my Arduino can work out such a situation.

#22   — Edited

@Andy Roid Well, if your Arduino can do it, then your indoor navigation problem seems solved, right?

What I was trying to say is that SLAM (the Oculus ROV can, anyway) can handle a constantly changing environment and still navigate to its destination or next waypoint... It is way too complicated and expensive, imho, however. Another "reliable", simpler, and cheaper solution is needed...
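To illustrate why a SLAM-style map copes with furniture being moved (a toy example, not the Oculus ROV or ROS implementation): an occupancy grid keeps a running score per cell, and every scan nudges the observed cells toward occupied or free, so an obstacle that disappears is forgotten after a few scans.

```python
# Toy occupancy-grid update showing why moved furniture doesn't break the map.
# The increment values and grid size here are illustrative, not from any real driver.
import numpy as np

HIT, MISS = 0.85, -0.4        # log-odds nudges for "seen occupied" / "seen free"
grid = np.zeros((50, 50))     # 0 = unknown, >0 leans occupied, <0 leans free

def integrate_scan(grid, occupied_cells, free_cells):
    """Fold one scan's ray-traced cells into the running map."""
    for x, y in occupied_cells:
        grid[x, y] = np.clip(grid[x, y] + HIT, -5, 5)
    for x, y in free_cells:
        grid[x, y] = np.clip(grid[x, y] + MISS, -5, 5)

# A chair is seen at cell (10, 10) for a few scans...
for _ in range(3):
    integrate_scan(grid, occupied_cells=[(10, 10)], free_cells=[])
# ...then it gets moved; the same cell now reads free and the map recovers.
for _ in range(10):
    integrate_scan(grid, occupied_cells=[], free_cells=[(10, 10)])
print("cell (10,10) occupied?", grid[10, 10] > 0)   # False after enough scans
```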

#23  

Agreed! Let's hope the NEAR future satisfies our needs.

PRO
Synthiam
#24  

There's a lot of work to be done in the SLAM space. I'm intrigued by it and hope it's a success, because what matters to me is that one of the solutions works - I don't care which one :). Beacon technology is used mostly in warehousing robots, or wherever reliability is required. People have attempted to use SLAM and lidar for outdoor navigation, and it's been getting better.

The visual SLAM stuff I find most intriguing, because the math must be nuts!

If someone stands in front of an IPS camera sensor, the robot stops and waits for the beacon to be detected again.
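As a minimal illustration of that stop-and-wait behaviour (an assumption-laden sketch, not Synthiam's actual IPS code), the control loop only needs to halt while the beacon is occluded and resume once it is seen again:

```python
# Sketch of a "pause while the beacon is blocked" loop. The three callables are
# placeholders you would wire to your own drive and sensor code.
import time

def navigate(step_toward_waypoint, beacon_visible, stop, poll_s=0.2):
    """Drive toward the waypoint, pausing whenever the beacon is occluded."""
    while not step_toward_waypoint():          # returns True once the waypoint is reached
        if not beacon_visible():
            stop()                             # hold position
            while not beacon_visible():        # wait for the beacon to reappear
                time.sleep(poll_s)
```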