### yatakitombi

Philippines

Asked
— Edited

can EZ-Camera read the range of the object? because i need to know if the camera can calculate the distance of the object then grab it...

#1 No. Not accurately. You would need at least 2 cameras and some complex scripting.

#2 The issue is the same as if you only had one eye. Your brain estimates distance using both eyes and what it has learned is the distance between them. It also bases this information on the size of the object and the size of the objects around it. Ping sensors and IR sensors can do this based on the time it takes for a signal to be sent out and then returned to the bot. The combination of these two methods will get you a more accurate measure of how far you are from an object.

Many advanced robots now also place cameras on the hands of the robot. This allows the hand to guide itself into position from a perspective that is far better for grabbing an object. This, with an IR sensor on the arm and a switch to know when you are in contact with the object, would be your best bet. It would be more expensive.

#3 I use a ping sensor beside the camera to calculate distance to an object or wall...

#4 If you knew the size of the object, then you could calculate the distance using ez-script.

#5 @DJ, might you have an example of that, or perhaps could one be added in a future release of ARC? I totally follow what you are saying in theory, but I can't wrap my head around how to apply a known-size calculation in a script to determine range via vision.

#6 I think this will be moot if DJ pulls yet another rabbit out of his hat and introduces the camera-based indoor navigation system he has mentioned... DJ, if you're listening, I am keeping my credit card warm and cozy for that...

#7 You'll need to know three things to determine the distance of an object based on its apparent size.

First you need to know the size of the object at a known distance. Let's call them sX and dX.

We then need to know the apparent size; let's call it sY. From this we can work out the apparent object distance, dY.

The formula to work out dY (or any of the other three) is

sX * dX = sY * dY

or

dY = (sX * dX) / sY

So, if we know we have an object that is 25" wide at 30" from the camera and we know it is 17" wide according to the camera at an unknown distance which we want to work out...

sX = 25, dX = 30, sY = 17, dY = ?

As always, I push people to think a little for themselves... Let's see who can do the calculation for the above example, or better yet, write the script.

Note: this assumes the object is directly ahead at 0 degrees.

Edit: Changed the example values to make it a little more fun; a size and distance of 1" was boring...
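Under the assumption that apparent size is inversely proportional to distance (i.e. sX * dX = sY * dY), the calculation can be sketched in Python. The function name here is mine for illustration, not an ARC or EZ-Script API:

```python
def distance_from_size(s_known, d_known, s_apparent):
    """Estimate distance from apparent size, assuming apparent size
    is inversely proportional to distance: s_known * d_known = s_apparent * d."""
    return s_known * d_known / s_apparent

# The example above: 25" wide at 30" away, now appearing 17" wide.
dY = distance_from_size(25, 30, 17)
print(round(dY, 2))  # ~44.12 inches under this model
```

The same function answers any of the four variables by rearrangement, since the product size * distance stays constant under this model.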

#8 LOL, Rich saves the day.

And Richard R, you're right, the localized positioning is in the works. Once we're comfortable with the logistics of the existing SKUs, we will start manufacturing the new ez-bit components such as positioning.

#9 Going with 50.4. Don't know for sure. Just helped my daughter with history, algebra and science homework.

#10 (1 + (((30 * 17) / 25) / 30)) * 30

That is: 1 plus the known distance times the perceived size, divided by the known size, divided by the known distance, all times the known distance.

My best guess assuming the object is round or you are looking straight at the side of a square object.

#11 Although, if you were closer than the known distance, I don't think the formula works. You would have to use an if-statement and a comparison to know that the object is closer than the known distance, and then script to account for it. I would have to sit down with some paper to figure out the formula for that. It may be as simple as subtracting 1 instead of adding 1 in the formula, but I am on a cellphone now and watching American football. Brain shutting off for this evening.

#12 The reality of it is I'm learning this one too, and I haven't yet sat down to work it all out (my brain shut off about 2 hours ago; it's gone midnight now, I should be getting some sleep).

The example I used when I posted was explaining how to find the size of an object at a distance rather than the distance of an object from the size but it should all transpose easily enough.

The theory is, the further away an object is, the smaller it becomes; therefore, the smaller the object appears, the further away it is. If it gets bigger, then it is closer than the known distance.

I'm waiting for my Eureka moment on this one, chances are it'll happen on the way to work tomorrow while I'm not thinking.

#13 You would have to know the number of pixels per unit of measure. It would probably be better to use pixels as the unit of measure. Dang it. Brain won't stop thinking about this...

#14 More food for thought here...

#15 Honestly, in thinking about this, if you moved forward toward the object until it is the size you expect, and then picked up the object, I think you would be fine. Go to the largest object, pick it up, and then go to the next largest object and pick it up. Do you have to know how far you are from an object, or are you trying to use this information for something else?

With the ability to be multi-threaded, you can gather input from multiple sensors and make a determination based on multiple pieces of information. You could identify the object and then go toward it until an IR sensor reads a measurement that you know is the distance you need to be at to reach the object. If you can't add an IR sensor, use math to calculate the distance based on the object size. Say the object is 1/2 the size that you expect it to be at a known distance; then you know it is 2 times the known distance away from you. The other option is to look at the object and see if it is larger or smaller than the known size. If it is larger, move back. If it is smaller, move toward it until it is the known size. This is where having the processing power of a computer helps out your robot. The other controllers won't perform these calculations nearly as quickly as yours.
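The move-until-it-looks-right approach can be sketched as a simple decision function. This is a hypothetical helper, not an ARC API; in practice the apparent size would come from the camera tracking data:

```python
def approach_step(apparent_size, target_size, tolerance=0.05):
    """Decide one movement step from the object's apparent size.

    Returns 'forward' if the object still looks too small (too far away),
    'backward' if it looks too big (too close), or 'grab' when the
    apparent size is within the tolerance band of the target size."""
    ratio = apparent_size / target_size
    if ratio < 1 - tolerance:
        return "forward"   # object looks too small: we are too far away
    if ratio > 1 + tolerance:
        return "backward"  # object looks too big: we are too close
    return "grab"

print(approach_step(50, 100))   # forward
print(approach_step(100, 100))  # grab
```

Run in a loop against live camera readings, this converges on the known-size distance without ever computing an absolute range.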

#16 Still digging on this for a solution that doesn't require some of the known variables that you might not know...

d = h * tan(a)

where d is the distance, h is the height of the camera, and a is the angle of the camera.

The way this works is by taking a known value, the height of the camera, and a value that can be measured: the angle when the camera is pointed at the spot where the bottom of the object touches the ground or table that the object and the robot are sitting on. From those two, the distance can be calculated.

Maybe using this formula and the formula Rich provided would give you a decent chance of getting a close to accurate distance.

The further away the object is, the more uneven ground will skew the calculations. Add a ping sensor and an IR sensor for close up, and you should have about everything you need to compare multiple values and get a good distance. I might have to work on this. It's an interesting possibility. It could definitely be used to help with navigation. The two formulas together could be used to train each other.
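The camera-angle formula above can be sketched as follows, assuming the angle a is measured from straight down (a gyroscope or servo position would supply it in practice; the function name is hypothetical):

```python
import math

def floor_distance(camera_height, angle_from_vertical_deg):
    """Distance along the floor to the point the camera is aimed at,
    using d = h * tan(a), with a measured from straight down (vertical).

    Assumes level ground and that the camera is aimed exactly at the
    point where the object's base touches the floor."""
    return camera_height * math.tan(math.radians(angle_from_vertical_deg))

# Camera mounted 12" up, tilted 60 degrees away from straight down:
print(round(floor_distance(12, 60), 2))  # ~20.78 inches
```

Note the angle convention matters: if the angle were instead measured down from horizontal, the relation would be d = h / tan(a).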

#17 Laser range finder with camera and laser pointer

This looks like the most accurate solution. Using some trig, a camera, and a laser pointer mounted at known distances from each other, you can make a laser range finder. I may have to do this. It would be cool for mapping a room along with a ping sensor, and it would give me something to put in Wall-E's other eye.

#18 @yatakitombi,

Three solutions have been provided including the formulas that you would use and explanations of the formulas to some extent. The short answer is that there is nothing built into the camera or ARC at this time, but it is doable with some scripting or programming, depending on what you choose to do.

Let us know if you have any other questions about this. If not, please mark as answered and give credit to whoever helped you the most.

I will be working on these in my limited spare time. I do think this is a feature that should either be added to ARC or something worth someone's time to develop outside of ARC. If anyone else wants to tackle it, I'm good sitting back and watching. If not, with some specific information about the camera lens, I could start working on it. @yatakitombi, since you have a need for this and have expressed an interest in ez-sdk, and have stated that you have a mentor, maybe this is something you could develop with the information provided here. Then you could share your code and solution with the community. It's what we do here.

#19 All this math reminds me of Clue:

#20 If you have the known sizes at two known distances, then that's enough. A simple equation can give you the factor between the two distances, which can therefore be used as a multiplier to find the distance in the future.

To avoid needing camera angles:

^ That will give you the most accurate distance with the easiest calculation, because the size of the object can only get smaller if displayed at an angle, not larger. So taking the largest of the two will tell you how far away the object is.
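The original formula image is lost here, but one way this two-point calibration could look, assuming size times distance is roughly constant (helper names are hypothetical, not from ARC):

```python
def calibrate(s1, d1, s2, d2):
    """Two measurements of apparent size at known distances give the
    constant k in: apparent_size * distance ~= k.
    Averaging the two products smooths out measurement noise."""
    return (s1 * d1 + s2 * d2) / 2

def estimate_distance(k, apparent_size):
    """Invert the relation to recover distance from apparent size."""
    return k / apparent_size

# Calibrate with 25 units apparent at 30", and 15 units apparent at 50":
k = calibrate(25, 30, 15, 50)
print(round(estimate_distance(k, 10), 1))  # 75.0 with these numbers
```

With the two calibration points taken at different ranges, the averaged constant also gives a sanity check: if the two products disagree badly, something (lens distortion, angle, measurement error) is off.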

#21 The other two solutions were for if you don't have a known size at a known distance.

If you know the size at a known distance, the equation that Rich provided works. Still very interested in the laser range finder...

Each of these solutions has its own downfall.

Solution 1 - you don't know the size at a distance.
Solution 2 - unlevel or uneven ground, or you can't determine where the base of the object is.
Solution 3 - light that affects the color of the laser reflecting back, or dark objects.

Ping sensors also have their own issues. Some materials will not reflect back the signal accurately.

IR also has its own issues.

This is why multiple types of sensors are used to get data. Also, if a size at a distance were not known, but another sensor were used to get the distance, and you were able to get the size from the camera, you could then switch to using solution 1. You would have to have one of the other solutions in place to allow you to use solution 1 if you didn't know the specifics.

Using these methods to train each other would give you the best results, but would require the most code.

Sounds like a project for an advanced student in my robotics class next year.

#22 Or, to put it quite simply:

dY = \sum_{n=0}^{N-1} x_n e^{-\frac{2\pi i}{N} nk}, \qquad k = 0, \dots, N-1

[several pages of pasted Fourier-transform algebra later...]

= 2.61949494946494946168989119

Done:

#23 Roll

Thanks for that. I needed a good laugh this morning.

#24 If you don't want to ruin anyone's vision with a laser, and don't have superpower math skills, you could put an ultrasonic sensor right above your camera.

#25 Agree totally. It was suggested by Richard and myself early on. The funny thing is that I was discussing this specific issue with the headmaster of the school system that I am helping out, to demonstrate the difference between what robotics students should be working on and what they are working on. The laser pointer would be on for a limited time, just to get the distance of the first object and get the size of that object from the camera. After that, all other measurements of distance could be done using the first method of comparing sizes.

I think a robust module that finds distances to objects in multiple ways would be really cool. I imagine an interface that says something like: here are the options, which do you want to enable, and what ports are needed for these to work. Then you could give a weighting to the different methods of finding distance... Just thinking out loud, but it's the way that programming normally goes for me. Something to ponder, as there are many methods and each has its drawbacks.
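The weighting idea might be sketched as a confidence-weighted average of whatever methods are enabled. The sensor names and weights below are invented for illustration:

```python
def fuse_distances(estimates):
    """Combine distance estimates from several methods using a
    confidence-weighted average.

    `estimates` maps a method name to a (distance, weight) pair;
    returns None if no method reported a usable weight."""
    total_weight = sum(w for _, w in estimates.values())
    if total_weight == 0:
        return None
    return sum(d * w for d, w in estimates.values()) / total_weight

# Hypothetical readings: each method's weight reflects how much we trust
# it in the current conditions (range, lighting, surface material).
readings = {
    "ultrasonic": (48.0, 0.5),
    "camera_size": (44.0, 0.3),
    "laser": (46.0, 0.2),
}
print(round(fuse_distances(readings), 1))  # 46.4
```

A natural refinement is to make the weights themselves conditional, e.g. drop the ultrasonic weight to zero beyond its rated range, which is exactly the "methods training each other" idea mentioned earlier in the thread.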

#26 Ultrasonic, infrared and laser are all too easy; where's the challenge?

Evolve, improve, innovate, do something new, do something you can't do... That's why I jumped in.

If you want simple, then yes, an ultrasonic, infrared or laser measuring device can be used. If you want a sense of achievement, figure it out using the camera only.

#27 Yeah, camera-only would be really nice.

I saw once where a guy took TWO cameras and placed them side by side. He then focused one at a distance and the other up close. He programmed his software to know when each camera was in focus, thereby knowing when something was near or far.

I don't know HOW he made the program know when the camera was in focus. That would be hard for me to do personally. But it would be nice if you, Rich, would look into how that was done.

#28 To be honest, the near/far focus example wouldn't be something I would waste my time with, as there are much better ways using the same or less equipment, which makes that method redundant before it's even attempted.

If you're using two cameras, then one for depth and one for imaging (i.e. the Kinect) is the better choice, rather than two for imaging.

#29 Good advice.

#30 Realistically, stereoscopic machine vision is something I would love to do, but it is truly outside my skill level at this time. LOL, an ultrasonic above my camera is what I installed on JARVIS 2000, and I pretend I came up with an original idea.

#31 The EZ-AI application (next release - v 1.0.0.7) will allow for stereoscopic distance sensing. It also allows for distance sensing using one camera in a couple of ways. The first is to find the object, then find the base of the object. It then uses distance = height of the camera * tan(angle of the camera, read from a gyroscope). This is the less accurate of the two measures. We are working on adding a third option that takes a picture with a recognized point, turns the camera x degrees and takes another picture with a recognized object, then turns the camera x degrees again and takes a third. It then calculates the distance to the object using the known variables.

None of these will be as fast or as accurate as a sensor, but all will calculate distance much further out than a sensor can. The furthest that I have seen a sensor measure is about 40 meters, or roughly 130 feet. Using a camera will allow distances to known objects to be measured at far greater distances, as long as neither the object nor the robot is moving.
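For reference, the standard depth relation for a rectified stereo pair is Z = f * B / disparity. This is the textbook formula, not the actual EZ-AI implementation, and the parameter values below are invented:

```python
def stereo_distance(focal_px, baseline, disparity_px):
    """Depth from a rectified stereo camera pair: Z = f * B / d.

    focal_px:     focal length expressed in pixels
    baseline:     separation between the two camera centers
    disparity_px: horizontal pixel shift of the same feature
                  between the left and right images"""
    if disparity_px <= 0:
        return float("inf")  # zero disparity: object effectively at infinity
    return focal_px * baseline / disparity_px

# 700 px focal length, 2.5" baseline, feature shifted 25 px between views:
print(stereo_distance(700, 2.5, 25))  # 70.0 inches
```

This also shows why stereo range degrades with distance: depth is inversely proportional to disparity, so at long range a one-pixel matching error swings the estimate by a large amount.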

I just had to revisit this thread because it is something that has been on my mind for six months.