Article 63: How is depth determined from a disparity image?

SUMMARY:
This article explains how to determine the depth of a pixel from a disparity image.

APPLICABLE PRODUCTS:
All Stereo Vision Products

ANSWER:
The short answer is to use one of the following functions:
  • triclopsRCD8ToXYZ()
  • triclopsRCD16ToXYZ()
  • triclopsRCDFloatToXYZ()
to determine the 3D position of a pixel in a disparity image.

triclopsRCD8ToXYZ() is used when subpixel interpolation is disabled and the disparity value is obtained from the 8-bit TriclopsImage structure.

triclopsRCD16ToXYZ() is used when subpixel interpolation is enabled and the disparity value is obtained from the 16-bit TriclopsImage16 structure.

triclopsRCDFloatToXYZ() is used when the disparity measure is obtained by other means, such as your own feature-based stereo matching.

The underlying equation that performs this conversion is:

Z = fB/d

where
Z = distance along the camera Z axis
f = focal length (in pixels)
B = baseline (in metres)
d = disparity (in pixels)
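As a worked sketch of this formula (in Python rather than the Triclops C API; the focal length, baseline, and disparity values below are hypothetical, not from any particular camera):

```python
def depth_from_disparity(f_pixels, baseline_m, disparity_pixels):
    """Z = fB/d: distance along the camera Z axis, in metres."""
    return f_pixels * baseline_m / disparity_pixels

# Hypothetical example: f = 400 pixels, B = 0.12 m, d = 12 pixels
z = depth_from_disparity(400.0, 0.12, 12.0)
# 400 * 0.12 / 12 = 4.0 m
```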

After Z is determined, X and Y can be calculated using the usual projective camera equations:

X = uZ/f
Y = vZ/f


where
u and v are the pixel coordinates in the 2D image
X, Y, and Z are the 3D position of the point

Note: u and v are not the same as row and column; you must account for the image center, which you can obtain with the triclopsGetImageCenter() function. Then u and v are found by:

u = col - centerCol
v = row - centerRow


Note: If u, v, f, and d are all in pixels and X, Y, and Z are all in metres, the units work out correctly, i.e. pixels/pixels is a unitless ratio, as is m/m.
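Putting the steps above together (disparity to Z, image-center correction, projective X and Y), a minimal Python sketch follows; the focal length, baseline, and image center are hypothetical placeholders for the values the Triclops calibration functions would return:

```python
def disparity_to_xyz(row, col, d, f, B, center_row, center_col):
    """Convert a pixel (row, col) with disparity d to a 3D point (X, Y, Z).

    f: focal length in pixels, B: baseline in metres, d: disparity in pixels.
    Returns X, Y, Z in metres.
    """
    z = f * B / d              # Z = fB/d
    u = col - center_col       # account for the image center
    v = row - center_row
    x = u * z / f              # X = uZ/f
    y = v * z / f              # Y = vZ/f
    return x, y, z

# Hypothetical values: f = 400 px, B = 0.12 m, image center at (240, 320)
x, y, z = disparity_to_xyz(row=240, col=420, d=12.0, f=400.0, B=0.12,
                           center_row=240.0, center_col=320.0)
# u = 100, v = 0  ->  X = 100*4.0/400 = 1.0 m, Y = 0.0 m, Z = 4.0 m
```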

How accurate is it?

Accuracy for stereo vision is difficult to quantify exactly, since it depends on the accuracy of the correlation match, which in turn depends on the image texture. However, a good rule of thumb is to expect a 0.2 pixel error in disparity matching. The PGR stereo vision calibration process minimizes the RMS pixel error between the observed and predicted locations of a set of calibration points, as dictated by the camera's intrinsic and extrinsic values. The calibration of Digiclops and Bumblebee systems is usually within 0.08 pixels RMS error; the rest of the error is attributed to the stereo matching algorithm.

To determine the error at a particular Z value, do the following:

Assume Z = 1.0 m. Rearranging the depth equation gives:

d = fB/Z

Since fB is a constant, you can determine the d value for this Z.

The error in Z would then be:

delta Z = |Z - Z'|

where Z' = fB/(d + e) and e is the matching error. The whole expression then works out to:

delta Z = | fB/d - fB/(d+e) |
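This error calculation can be sketched as follows, using the 0.2 pixel matching error mentioned above; the focal length and baseline are hypothetical example values:

```python
def depth_error(f, B, z, e=0.2):
    """Expected depth error at distance z (metres), given matching error e (pixels).

    delta Z = | fB/d - fB/(d + e) |, where d = fB/z.
    """
    d = f * B / z                          # disparity at this depth
    return abs(f * B / d - f * B / (d + e))

# Hypothetical camera: f = 400 px, B = 0.12 m
err = depth_error(400.0, 0.12, 1.0)        # error at Z = 1.0 m, roughly 4 mm
```

Note that because d shrinks as Z grows, the depth error grows roughly with the square of the distance.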

RELATED ARTICLES :
KB Article 70: No depth information computed at the boundaries of large depth discontinuities
KB Article 103: Accuracy of Point Grey Stereo Vision camera disparity (depth) calculations.
KB Article 85: Determining the focal length, image center, baseline and field of view measurements for PGR stereo vision cameras
KB Article 126: Extended disparity range available with new Triclops SDK
KB Article 150: Stereo accuracy and error modeling.


ARTICLE INFO:
Article ID:
63
Last Modified:
7/19/2010 4:16:40 PM
Keywords:
depth, disparity, 3D position, stereo accuracy, RMS error, calibration
Issue Type:
Normal Use


