Geometric Vision using Ladybug Cameras
Technical Application Note TAN2012009
KB Number: 10621
Revised: October 20, 2016
Subject: Geometric Vision using Ladybug® Cameras
Applicable Product(s)
Application Note Description

Effective warping and stitching of the images produced by the camera system's six sensors is achieved through accurate calibration of the physical location and orientation of the sensors and the distortion model of each lens. This calibration also enables photogrammetric analysis of image data. This application note describes the representation used to express the physical orientation of the sensors with respect to one another, and provides instructions for transforming 2D local points to 3D global points and vice versa.

Coordinate Systems on Ladybug Cameras

Each lens has its own right-handed 3D coordinate system. In addition, there is a Ladybug 3D coordinate system associated with the camera as a whole, making a total of seven 3D coordinate systems on every Ladybug camera. Each sensor also has its own 2D pixel-grid coordinate system.

Lens 3D Coordinate System

Each of the six lenses has its own 3D coordinate system.
Sensor 2D Coordinate System

Each sensor has its own 2D coordinate system.

Ladybug Camera Coordinate System

The Ladybug Camera coordinate system is centered within the Ladybug case and is determined by the positions of the six lens coordinate systems.
Relating Lens Coordinate Systems and the Ladybug Coordinate System

The position of each lens coordinate system relative to the Ladybug coordinate system is retrievable from the Ladybug API. First, use ladybugGetCameraUnitExtrinsics(), defined in ladybuggeom.h, to retrieve the 3D translation and the Euler-angle-defined rotation.
The function's header comments illustrate how to convert the provided Euler angles (Rx, Ry, Rz) and translation (Tx, Ty, Tz) into a 4x4 transform T, using the standard homogeneous transform formulation:

    T = | R(Rx,Ry,Rz)  t |
        | 0   0   0    1 |

where R(Rx,Ry,Rz) = Rz(Rz) * Ry(Ry) * Rx(Rx) is the combined rotation and t = (Tx, Ty, Tz)^T is the translation. A point P_local in the lens coordinate frame is then expressed in the Ladybug coordinate frame as P_ladybug = T * P_local.
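As a concrete illustration, the transform above can be built from the extrinsics values directly. This is a minimal sketch: it assumes the combined rotation is Rz * Ry * Rx (rotation about X applied first), which should be verified against the comments in ladybuggeom.h for your SDK version.

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>;

// Build the 4x4 homogeneous transform T from the Euler angles (radians)
// and translation returned by ladybugGetCameraUnitExtrinsics().
Mat4 extrinsicsToTransform(double rx, double ry, double rz,
                           double tx, double ty, double tz)
{
    const double cx = std::cos(rx), sx = std::sin(rx);
    const double cy = std::cos(ry), sy = std::sin(ry);
    const double cz = std::cos(rz), sz = std::sin(rz);

    // Rows of R = Rz * Ry * Rx, expanded analytically; last column is t.
    Mat4 T = {{
        { cz * cy, cz * sy * sx - sz * cx, cz * sy * cx + sz * sx, tx },
        { sz * cy, sz * sy * sx + cz * cx, sz * sy * cx - cz * sx, ty },
        { -sy,     cy * sx,                cy * cx,                tz },
        { 0.0,     0.0,                    0.0,                    1.0 }
    }};
    return T;
}
```

With zero angles the rotation block reduces to the identity and T simply translates the lens origin into the Ladybug frame.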
Converting a Pixel Location to a 3D Ray

A common task when using the Ladybug camera for geometric vision is to interpret a pixel location in a particular image as a 3D ray in the Ladybug Coordinate System. There are a variety of image spaces from which the pixel may be extracted – for example, spherical, cylindrical, rectified or raw. Users are encouraged to use raw images for this kind of application. Raw images are the only images that have not been resampled, and consequently should provide the best accuracy when finding or tracking image features.

To convert a pixel location in a raw image to a 3D ray in the Ladybug Coordinate System, take the following steps:

1. Obtain the focal length for the appropriate camera using ladybugGetCameraUnitFocalLength().
2. Obtain the image center for the camera using ladybugGetCameraUnitImageCenter().
3. Obtain the 6D extrinsics vector (Euler angles and translation) for the camera using ladybugGetCameraUnitExtrinsics().
4. Rectify the 2D pixel location using ladybugRectifyPixel().
5. Find the (u,v) pixel coordinate for this rectified image location.
6. Transform the rectified 2D pixel location into a 3D ray within the local camera coordinate system.
7. Transform the local 3D ray to a 3D ray in the Ladybug Coordinate System.

To find the (u,v) pixel location from a rectified (column, row) image position, the image center information must be taken into account:

    u = r - r_c
    v = c - c_c

where r equals the pixel row position, r_c equals the image center row position, c equals the pixel column position, and c_c equals the image center column position.

The rectified image position (u,v) can be transformed to a local 3D ray by interpreting the rectified image using the standard pinhole camera model. Note that the focal length f and image center obtained for the camera are in pixels and are valid only for the rectified image of the specified camera. To calculate the local 3D ray from the rectified 2D pixel location, Z is arbitrary and can be set to 1. Applying the pinhole model equations then yields:

    X = u / f
    Y = v / f
    Z = 1

The vector (X, Y, 1) is the ray direction in the local camera coordinate system.
To convert this vector to the Ladybug Coordinate System, apply the 3x3 rotational component of the homogeneous transform described in "Relating Lens Coordinate Systems and the Ladybug Coordinate System" above:

    X_ladybug = R * X_local

where R is the upper-left 3x3 submatrix of T. The origin of this vector is the origin of the local coordinate system transformed into the Ladybug Camera coordinate system, which is the translation component (Tx, Ty, Tz) of T.

Converting a Pixel Location to a 3D Ray Corrected for Lens Offset

Mapping a 2D raw pixel to a 3D ray is complicated by the fact that each lens center is offset from the center of the Ladybug Coordinate System. To get the most accurate results, the ray must have both a starting point and a direction – not just a direction from the center of the Ladybug Coordinate System. Mapping a raw pixel proceeds in two steps:

1. Map the raw pixel to its rectified coordinates – API: ladybugRectifyPixel()
2. Map the rectified coordinates to a ray location and direction – API: ladybugRCtoXYZ()

Example code for this mapping type, and for other mapping types that rely less heavily on the API, is available in the ladybugTranslate2dTo3d example included with the Ladybug SDK.

Converting a 3D Point to a Pixel Location

Reversing the pixel-to-3D problem described in the previous two sections is slightly complicated by the requirement to first determine which lens a 3D point will project into. Otherwise, it is straightforward using the function ladybugXYZtoRC(). To determine which lenses a 3D point will project into, this function can be called for each of the six lens locations (by setting the uiCamera parameter appropriately). If the return code is successful (LADYBUG_OK), then the 3D point defined by dLadybugX, Y and Z projects into camera uiCamera at the rectified (row, column) position provided by pdRectifiedRow and pdRectifiedCol. The location of the pixel in the raw image can then be determined using ladybugUnrectifyPixel().

Calibration Accuracy

The average accuracy of a Ladybug5 camera is 2 mm at a 10 m distance, or 0.0116 degrees.
Local to Ladybug Transform Error

There is an error associated with how well the mathematical model can match the actual position and orientation of the lens with respect to the Ladybug camera coordinate system. This is the difference between the real physical position and the position calculated during camera calibration.

Rectification Error

There is an error associated with how well the mathematical model can match the underlying lens distortion.

Parallax to Center of Camera Error

Parallax error is an additional source of error, not included in the 2 mm at 10 m accuracy stated above. Parallax error is the difference between two rays pointing at the same destination from two different starting positions – in this case, the lens center and the camera coordinate center. The following diagrams show the angular difference when observing the same point in 3D space with the lens center at its real position versus at the center of the Ladybug coordinate system.
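The accuracy figure above can be sanity-checked with a little trigonometry: a 2 mm offset seen at 10 m subtends atan(0.002 / 10) ≈ 0.0115 degrees, in line with the stated 0.0116 degrees. The same bound applies to the worst-case parallax error for a given lens offset and target distance; the sketch below computes it.

```cpp
#include <cmath>

// Worst-case angular parallax (degrees) between a ray from the true lens
// center and a ray from the Ladybug coordinate center, for a target at
// distanceMeters. Worst case occurs when the offset is perpendicular to
// the viewing direction.
double parallaxAngleDeg(double offsetMeters, double distanceMeters)
{
    const double pi = std::acos(-1.0);
    return std::atan2(offsetMeters, distanceMeters) * 180.0 / pi;
}
```

For example, parallaxAngleDeg(0.002, 10.0) reproduces the roughly 0.0115-degree figure above; plugging in an actual lens-to-center offset (obtainable from the extrinsics translation) gives the worst-case error incurred by ignoring the lens offset at a given range.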