
yatakitombi
Philippines
Asked
— Edited
Can the EZ-Camera read the range of an object? I need to know whether the camera can calculate the distance to an object so the robot can grab it...
Agree totally; it was suggested by Richard and myself early on. The funny thing is that I was discussing this specific issue with the headmaster of the school system I am helping out, to demonstrate the difference between what robotics students should be working on and what they actually are working on. The laser pointer would be on for a limited time, just long enough to get the distance to the first object and the size of that object from the camera. After that, all other distance measurements could be done using the first method of comparing sizes.
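The "comparing sizes" idea above can be sketched with the pinhole-camera similar-triangles relation: once an object's true size is known (e.g. measured once with the laser), its distance follows from how large it appears in the image. The focal length and sizes below are illustrative placeholders, not values from any EZ-Robot API.

```python
def distance_from_size(focal_length_px: float,
                       real_size_m: float,
                       apparent_size_px: float) -> float:
    """Estimate distance (metres) to an object of known real size.

    Similar triangles: distance = focal_length * real_size / apparent_size.
    """
    if apparent_size_px <= 0:
        raise ValueError("apparent size must be positive")
    return focal_length_px * real_size_m / apparent_size_px

# Example: a 0.30 m wide object spanning 60 px, focal length 600 px
print(distance_from_size(600.0, 0.30, 60.0))  # 3.0 metres
```

The focal length in pixels would come from a one-time camera calibration; after that, every re-detection of the object gives a fresh distance estimate with no extra hardware.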
I think a robust module that finds distances to objects in multiple ways would be really cool, with an interface that says something like: here are the options, which do you want to enable, and what ports are needed for each to work. Then you could give a weighting to each method of finding distance... Just thinking out loud, but that's the way programming normally goes for me. Something to ponder, as there are many methods and each has its drawbacks.
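A minimal sketch of that weighting idea: each enabled method reports a distance estimate plus a confidence weight, and the module fuses them with a weighted average. The method names and weights here are made up for illustration; a real module would tune weights per sensor and range.

```python
def fuse_distances(estimates: list[tuple[float, float]]) -> float:
    """Fuse (distance_m, weight) pairs from enabled methods into one value.

    Methods with zero or negative weight are treated as disabled.
    """
    total_weight = sum(w for _, w in estimates if w > 0)
    if total_weight == 0:
        raise ValueError("no usable estimates")
    return sum(d * w for d, w in estimates if w > 0) / total_weight

# e.g. ultrasonic reads 2.1 m (weight 0.6), camera-size method 2.4 m (0.3)
print(round(fuse_distances([(2.1, 0.6), (2.4, 0.3)]), 2))  # 2.2
```

A weighted average is the simplest possible fusion; it degrades gracefully when a method is disabled, which matches the "enable what you have" interface described above.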
Ultrasonic, Infra-red and Laser are all too easy, where's the challenge?
Evolve, improve, innovate, do something new, do something you can't do... That's why I jumped in
If you want simple, then yes, an ultrasonic, infra-red or laser measuring device can be used. If you want a sense of achievement, figure it out using the camera only.
Yeah, camera-only would be really nice.
I once saw a guy take two cameras and place them side by side. He focused one at a distance and the other up close, and programmed the software to know when each camera was in focus, thereby knowing whether something was near or far.
I don't know how he made the program detect when the camera was in focus. That would be hard for me to do personally. But it would be nice if you, Rich, would look into how that was done.
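One common way a program can "know when the camera is in focus" is a sharpness score: a focused image has strong local contrast, so the energy of a Laplacian (edge) filter over the frame is high. This is a pure-Python sketch on tiny hand-made grids to keep it self-contained; in practice a library such as OpenCV (`cv2.Laplacian`) would do the filtering over real frames.

```python
def focus_score(image: list[list[float]]) -> float:
    """Mean squared 4-neighbour Laplacian response over a grayscale image.

    Higher score = sharper (more in-focus) image content.
    """
    h, w = len(image), len(image[0])
    total, count = 0.0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (image[y - 1][x] + image[y + 1][x] +
                   image[y][x - 1] + image[y][x + 1] - 4 * image[y][x])
            total += lap * lap
            count += 1
    return total / count

sharp = [[0, 0, 0, 0], [0, 9, 9, 0], [0, 9, 9, 0], [0, 0, 0, 0]]   # hard edges
blurry = [[3, 3, 3, 3], [3, 4, 4, 3], [3, 4, 4, 3], [3, 3, 3, 3]]  # soft edges
print(focus_score(sharp) > focus_score(blurry))  # True
```

With two cameras fixed at near and far focus, comparing their scores for the same scene tells you which focal plane the object is closer to, which is roughly what the setup described above was doing.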
To be honest, the near/far focus example isn't something I would spend my time on, as there are much better ways using the same or less equipment, which makes that method redundant before it's even attempted.
If you're using two cameras, then one for depth and one for imaging (e.g. the Kinect) is the better choice rather than two for imaging.
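For completeness, here is how depth falls out of two side-by-side imaging cameras (the stereoscopic case): the same point appears shifted horizontally between the two images, and depth is inversely proportional to that shift (disparity). The baseline and focal length below are illustrative numbers, not values from any particular rig.

```python
def stereo_depth(focal_length_px: float, baseline_m: float,
                 disparity_px: float) -> float:
    """Depth (metres) of a point from its pixel disparity between two
    parallel, side-by-side cameras: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Cameras 0.10 m apart, 700 px focal length, point shifted 35 px
print(stereo_depth(700.0, 0.10, 35.0))  # 2.0 metres
```

The hard part in practice is not this formula but matching the same point in both images reliably, which is where dedicated depth cameras like the Kinect earn their keep.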
good advice.
Realistically, stereoscopic machine vision is something I would love to do, but it is truly outside my skill level at this time. Lol, an ultrasonic sensor above my camera is what I installed on JARVIS 2000, and I pretend I came up with an original idea.
The EZ-AI application (next release, v 1.0.0.7) will allow for stereoscopic distance sensing. It also allows distance sensing with one camera in a couple of ways. The first is to find the object, then find the base of the object; it then uses Distance = height of the camera * tan(angle of the camera, read from a gyroscope). This is the less accurate of the two measures. We are working on adding a third option that takes a picture containing a recognized point, turns the camera x degrees and takes another picture with a recognized object, then turns the camera x degrees again and takes another picture with a recognized object. It then calculates the distance to the object using the known variables.
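The height-and-tilt method above can be sketched in a few lines: with the camera at a known height on flat ground, tilted so the base of the object is centred in view, the ground distance is height times the tangent of the tilt angle measured from straight down (the gyroscope supplies the angle). The values here are illustrative, not from EZ-AI.

```python
import math

def ground_distance(camera_height_m: float,
                    tilt_from_vertical_deg: float) -> float:
    """Horizontal distance to an object's base on flat ground.

    distance = camera_height * tan(tilt angle from straight down)
    """
    return camera_height_m * math.tan(math.radians(tilt_from_vertical_deg))

# Camera 1.0 m up, tilted 60 degrees from straight down
print(round(ground_distance(1.0, 60.0), 2))  # 1.73 metres
```

As the thread notes, this is the less accurate option: small gyro errors grow quickly as the tilt approaches horizontal, since tan() blows up near 90 degrees.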
None of these will be as fast or as accurate as a dedicated sensor, but all will calculate distance much further out than a sensor can. The furthest I have seen a sensor measure is about 40 meters (roughly 130 feet). Using a camera will allow distances to known objects to be measured far beyond that, as long as neither the object nor the robot is moving.
I just had to revisit this thread because it is something that has been on my mind for six months.