### yatakitombi

Philippines

Asked
— Edited

Can the EZ-Camera read the range of an object? I need to know whether the camera can calculate the distance to an object so the robot can grab it...

#12 The example I used when I posted was explaining how to find the size of an object at a known distance rather than the distance of an object from its size, but it should all transpose easily enough.

The theory is that the further away an object is, the smaller it appears; therefore, the smaller the object appears, the further away it is. If it appears bigger, then it is closer than the known distance.

I'm waiting for my Eureka moment on this one, chances are it'll happen on the way to work tomorrow while I'm not thinking.

#13

#14

#15 With the ability to be multi-threaded, you can gather input from multiple sensors and make a determination based on multiple pieces of information. You could identify the object and then move toward it until an IR sensor reads the distance you know you need to be at to reach the object. If you can't add an IR sensor, use math to calculate the distance from the object's size. Say the object appears half the size you expect it to be at a known distance; then you know it is twice the known distance away from you. The other option is to look at the object and see whether it appears larger or smaller than the known size. If it is larger, move back. If it is smaller, move toward it until it is the known size.
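The half-size/double-distance rule above can be sketched in a few lines. This is a hypothetical helper, not an ARC or EZ-SDK function, and the calibration numbers are made up:

```python
# Hypothetical helper, not part of ARC/EZ-SDK: apparent width scales
# inversely with distance, so half the calibrated size means twice the
# calibrated distance.

def distance_from_size(known_width_px, known_distance_cm, observed_width_px):
    return known_distance_cm * known_width_px / observed_width_px

# Calibration: the object measured 100 px wide at 50 cm.
print(distance_from_size(100, 50, 50))    # -> 100.0 (half size, twice as far)
print(distance_from_size(100, 50, 200))   # -> 25.0  (double size, half as far)
```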

This is where having the processing power of a computer helps out your robot. The other controllers won't perform these calculations nearly as quickly as yours.

#16 d = h * tan(a)

where:

d is the distance

h is the height of the camera

a is the angle of the camera

The way this works is by taking a known value, the height of the camera, and a measurable value, the camera's angle when it is pointed at the spot where the bottom of the object touches the ground (or the table that the object and the robot are sitting on), and calculating the distance from those two.
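As a sketch of that formula, with the assumed convention that the angle is measured from straight down (so a = 0 means the camera points at the floor directly beneath it):

```python
import math

# Sketch of d = h * tan(a). Assumes "a" is the camera's tilt measured from
# vertical (straight down = 0), so a larger tilt means the camera's line of
# sight meets the ground further away.

def distance_from_tilt(camera_height_cm, tilt_deg):
    return camera_height_cm * math.tan(math.radians(tilt_deg))

# Camera 30 cm above the table, tilted 45 degrees: tan(45) = 1, so d = 30 cm.
print(round(distance_from_tilt(30, 45), 3))  # -> 30.0
```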

Maybe using this formula and the formula Rich provided would give you a decent chance of getting a close to accurate distance.

The further away the object is, the more uneven ground will skew the calculations. Add a ping sensor and an IR sensor for close range and you should have about everything you need to compare multiple values and get a good distance. I might have to work on this; it's an interesting possibility. It could definitely be used to help with navigation, and the two formulas together could be used to train each other.

#17 This looks like the most accurate solution. Using some trig, a camera, and a laser pointer mounted at known distances from each other, you can make a laser range finder. I may have to do this. It would be cool for mapping a room along with a ping sensor, and it would give me something to put in Wall-E's other eye.
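For the laser range finder idea, here is a minimal triangulation sketch. It assumes the laser is mounted parallel to the camera's optical axis at a known baseline, and that the lens's focal length in pixels has been calibrated beforehand; all the numbers below are invented:

```python
# Laser triangulation sketch (hypothetical values, no real hardware API).
# With the laser parallel to the optical axis, similar triangles give
# distance = baseline * focal_length_px / dot_offset_px, where dot_offset_px
# is how far the laser dot appears from the image center.

def laser_range_cm(baseline_cm, focal_length_px, dot_offset_px):
    if dot_offset_px <= 0:
        raise ValueError("dot at image center: target out of range")
    return baseline_cm * focal_length_px / dot_offset_px

# 5 cm baseline, 700 px focal length, dot seen 70 px from center -> 50 cm.
print(laser_range_cm(5, 700, 70))  # -> 50.0
```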

#18 Three solutions have been provided, including the formulas you would use and, to some extent, explanations of those formulas. The short answer is that there is nothing built into the camera or ARC at this time, but it is doable with some scripting or programming, depending on what you choose to do.

Let us know if you have any other questions about this. If not, please mark as answered and give credit to whoever helped you the most.

I will be working on these in my limited spare time. I do think this is a feature that should either be added to ARC or be worth someone's time to develop outside of ARC. If anyone else wants to tackle it, I'm good sitting back and watching. If not, with some specific information about the camera lens, I could start working on it. @yatakitombi, since you need this, have expressed an interest in the ez-sdk, and have stated that you have a mentor, maybe this is something you could develop with the information provided here. Then you could share your code and solution with the community. It's what we do here.

#19

#20 To avoid needing camera angles:

1) Calculate two distances, one from the object's height and one from its width.

2) Use the smaller of the two as the distance.

^ That will give you the most accurate distance with the easiest calculation. The apparent size of an object can only shrink when it is viewed at an angle, never grow, and a shrunken dimension inflates its distance estimate, so the smaller of the two estimates comes from the less-distorted dimension and tells you how far away the object is.
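A sketch of that two-estimate approach, assuming the object was calibrated square-on at a known distance (all numbers hypothetical):

```python
# Two-estimate sketch (hypothetical calibration values): compute one distance
# from apparent width and one from apparent height, then keep the smaller.
# A dimension shrunk by viewing angle overestimates the distance, so the
# smaller estimate comes from the less-distorted dimension.

def estimate_cm(cal_px, cal_distance_cm, observed_px):
    return cal_distance_cm * cal_px / observed_px

def distance_without_angles(cal_w_px, cal_h_px, cal_distance_cm,
                            obs_w_px, obs_h_px):
    d_w = estimate_cm(cal_w_px, cal_distance_cm, obs_w_px)
    d_h = estimate_cm(cal_h_px, cal_distance_cm, obs_h_px)
    return min(d_w, d_h)

# Square object, 100 px by 100 px at 100 cm. Seen rotated: width 25 px
# (foreshortened), height 50 px (true). Width alone would say 400 cm;
# the height estimate gives the real 200 cm.
print(distance_without_angles(100, 100, 100, 25, 50))  # -> 200.0
```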

#21 If you know the object's size at a known distance, the equation Rich provided works. I'm still very interested in the laser range finder...

Each of these solutions has its own downfall:

Solution 1 - you don't know the object's size at a known distance.

Solution 2 - unlevel or uneven ground, or not being able to determine where the base of the object is.

Solution 3 - lighting that affects the color of the laser reflecting back, or dark objects.

Ping sensors also have their own issues; some materials will not reflect the signal back accurately.

IR has its own issues as well.

This is why multiple types of sensors are used to gather data. Also, if the size at a known distance weren't known, but another sensor could measure the distance while the camera measured the apparent size, you could then switch to using solution 1. In other words, you would need one of the other solutions in place to bootstrap solution 1 if you didn't know the specifics.

Using these methods to train each other would give you the best results, but it would require the most code.
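One way to sketch that "train each other" idea: use a one-off reading from another sensor (ping or IR) to learn the size-at-distance calibration, then run solution 1 camera-only from then on. Everything here is hypothetical glue code, not an ARC feature:

```python
# Hypothetical bootstrap for solution 1: a single sonar/IR reading supplies
# the missing "size at a known distance" calibration; afterwards the camera
# alone can estimate distance from apparent size.

class SizeCalibrator:
    def __init__(self):
        self.cal_px = None
        self.cal_cm = None

    def calibrate(self, sensor_distance_cm, observed_px):
        # Record how big the object looked at the sensor-measured distance.
        self.cal_px = observed_px
        self.cal_cm = sensor_distance_cm

    def distance_cm(self, observed_px):
        if self.cal_px is None:
            raise RuntimeError("no calibration yet: take a sensor reading first")
        return self.cal_cm * self.cal_px / observed_px

cal = SizeCalibrator()
cal.calibrate(80, 120)      # sonar reported 80 cm when the object was 120 px wide
print(cal.distance_cm(60))  # -> 160.0 (half the size, twice the distance)
```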

Sounds like a project for an advanced student in my robotics class next year.

#22

X_k = \sum_{n=0}^{N-1} x_n e^{-\frac{2\pi i}{N} nk}, \qquad k = 0, \dots, N-1.

For each n (n = 0, ..., N-1), let

\omega_N^n = e^{-\frac{2\pi i}{N} n}

and define the polynomial x(z) whose coefficients are the x_n:

x(z) = \sum_{n=0}^{N-1} x_n z^n.

Then

X_k = x(\omega_N^k) = x(z) \mod (z - \omega_N^k).

z^{2M} - 1 = (z^M - 1)(z^M + 1)

z^{4M} + a z^{2M} + 1 = (z^{2M} + \sqrt{2-a}\, z^M + 1)(z^{2M} - \sqrt{2-a}\, z^M + 1)

\begin{align}
p_{s,0}(z) &= p(z) \mod \left(z^{2^{n-s}} - 1\right) &&\text{and}\\
p_{s,m}(z) &= p(z) \mod \left(z^{2^{n-s}} - 2\cos\left(\tfrac{m}{2^s}\pi\right) z^{2^{n-1-s}} + 1\right) & m &= 1, 2, \dots, 2^s - 1
\end{align}

For m = 0 the covered indices are k = 0, 2^s, 2·2^s, 3·2^s, ..., (2^{n-s} - 1)·2^s; for m > 0 the covered indices are k = m, 2^{s+1} - m, 2^{s+1} + m, 2·2^{s+1} - m, 2·2^{s+1} + m, ..., 2^n - m.

\phi_{N,\alpha}(z) =
\begin{cases}
z^{2N} - 2\cos(2\pi\alpha) z^N + 1 & \text{if } 0 < \alpha < 1 \\
z^{2N} - 1 & \text{if } \alpha = 0
\end{cases}

\phi_{rM,\alpha}(z) =
\begin{cases}
\prod_{\ell=0}^{r-1} \phi_{M,(\alpha+\ell)/r} & \text{if } 0 < \alpha \leq 0.5 \\
\prod_{\ell=0}^{r-1} \phi_{M,(1-\alpha+\ell)/r} & \text{if } 0.5 < \alpha < 1 \\
\prod_{\ell=0}^{r-1} \phi_{M,\ell/(2r)} & \text{if } \alpha = 0
\end{cases}

= 2.61949494946494946168989119

Done:

#23Thanks for that. I needed a good laugh this morning.

#24

#25 I think a robust module that finds distances to objects in multiple ways would be really cool. I imagine an interface that says something like: here are the options, which do you want to enable, and which ports are needed for each to work. Then you could give a weighting system to the different methods of finding distance... Just thinking out loud, but that's the way programming normally goes for me. Something to ponder, as there are many methods and each has its drawbacks.
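The weighting idea could be sketched like this; the readings and weights are invented for illustration, and nothing here is an existing ARC interface:

```python
# Rough sketch of a weighted-estimate module: each enabled method returns a
# (distance_cm, weight) pair, and the module blends them. A method that is
# disabled or out of range reports None for its distance.

def fuse(estimates):
    """Weighted average of (distance, weight) pairs, skipping None readings."""
    live = [(d, w) for d, w in estimates if d is not None and w > 0]
    if not live:
        return None
    total = sum(w for _, w in live)
    return sum(d * w for d, w in live) / total

readings = [
    (102.0, 3.0),   # sonar: trusted most at this range
    (110.0, 1.0),   # size-based camera estimate
    (None,  2.0),   # IR: out of range, ignored
]
print(fuse(readings))  # -> 104.0
```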

#26 Evolve, improve, innovate, do something new, do something you can't do... That's why I jumped in.

If you want simple, then yes, an ultrasonic, infra-red, or laser measuring device can be used. If you want a sense of achievement, figure it out using the camera only.

#27 I once saw where a guy took TWO cameras and placed them side by side. He focused one at a distance and the other up close, and programmed the software to know when each camera was in focus, thereby knowing whether something was near or far.

I don't know HOW he made the program detect when a camera was in focus. That would be hard for me to do personally. But it would be nice if you, Rich, would look into how that was done.

#28 If you're using two cameras, then one for depth and one for imaging (i.e. the Kinect) is a better choice than two for imaging.

#29

#30

#31 None of these will be as fast or as accurate as a sensor, but all will calculate distance much further out than a sensor can. The furthest I have seen a sensor measure is about 40 meters (about 130 feet). Using a camera will allow distances to known objects to be measured far further, as long as neither the object nor the robot is moving.

I just had to revisit this thread because it is something that has been on my mind for six months.