Effective warping and stitching of the images produced by the camera system's six sensors depends on accurate calibration of each sensor's physical position and orientation, as well as of the lens distortion model. This section describes the representation used to relate the physical orientations of the sensors to one another. The Ladybug software manages the camera coordinate system as seven right-handed coordinate frames of two types: six independent image sensor coordinate frames and a single camera head coordinate frame.
Each of the six image sensors has its own independent coordinate frame. Each of these coordinate frames follows standard PGR camera coordinate conventions in that the origin is located at the optical center of the lens, the z-axis points out of the lens along the optical axis (the viewing direction), and the x- and y-axes are aligned with the image columns and rows, completing a right-handed frame.
The following image shows the sensor coordinates as they are oriented on a camera unit.
The Camera Head Coordinate Frame presents a unified coordinate frame for the device as a whole. It is set up roughly as follows:
It may be desirable to project a 3D point in space into the ideal image frame of any of the individual image sensors. The following API calls provide users with the necessary camera intrinsic and extrinsic parameters required to do so.
Combined with some basic geometry, these functions can be used to project a point into an image as follows:
1. Given a 3D point in the Ladybug camera head coordinate frame, use ladybugGetCameraUnitExtrinsics() to transform the point into the image sensor coordinate frame.
2. Using the focal length (provided by ladybugGetCameraUnitFocalLength()) and the image center (provided by ladybugGetCameraUnitImageCenter()), apply the normal projection equation to the 3D point (which must be in the local camera frame) to determine where it falls on the ideal image plane.
3. Having determined where the point falls on the rectified image, use a call to ladybugUnrectifyPixel() to determine where this point appears in the raw image plane.
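The geometry of steps 1 and 2 can be sketched in plain C++, independent of the Ladybug API. This is an illustrative sketch, not the SDK's implementation: it assumes the extrinsics are given as three Euler rotation angles followed by a translation describing the sensor's pose in the head frame, and that the rotation is composed as Rz·Ry·Rx. The function name and parameter layout are hypothetical.

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch of the projection pipeline described above.
// Assumptions (not taken from the Ladybug API itself):
//  - extrinsics[6] = {rx, ry, rz, tx, ty, tz} gives the sensor's pose in
//    the camera head frame (rotation composed as Rz * Ry * Rx, then
//    translation).
//  - The sensor frame follows the usual camera convention: z out of the
//    lens, x along image columns (u), y along image rows (v).
struct Pixel { double u, v; };

Pixel projectToIdealImage(const double pHead[3], const double extrinsics[6],
                          double focalLength, double centerU, double centerV)
{
    const double rx = extrinsics[0], ry = extrinsics[1], rz = extrinsics[2];
    const double tx = extrinsics[3], ty = extrinsics[4], tz = extrinsics[5];

    // Rotation matrix R = Rz(rz) * Ry(ry) * Rx(rx), mapping sensor -> head.
    const double cx = std::cos(rx), sx = std::sin(rx);
    const double cy = std::cos(ry), sy = std::sin(ry);
    const double cz = std::cos(rz), sz = std::sin(rz);
    const double R[3][3] = {
        { cz * cy, cz * sy * sx - sz * cx, cz * sy * cx + sz * sx },
        { sz * cy, sz * sy * sx + cz * cx, sz * sy * cx - cz * sx },
        { -sy,     cy * sx,                cy * cx                }
    };

    // Step 1: head -> sensor. Subtract the translation, then apply the
    // inverse (transpose) of the rotation.
    const double d[3] = { pHead[0] - tx, pHead[1] - ty, pHead[2] - tz };
    double pCam[3];
    for (int i = 0; i < 3; ++i)
        pCam[i] = R[0][i] * d[0] + R[1][i] * d[1] + R[2][i] * d[2];

    // Step 2: pinhole projection onto the ideal (rectified) image plane.
    // The resulting pixel would then be passed to ladybugUnrectifyPixel()
    // (step 3) to locate the point in the raw, distorted image.
    return { centerU + focalLength * pCam[0] / pCam[2],
             centerV + focalLength * pCam[1] / pCam[2] };
}
```

For example, with zero extrinsics (sensor frame coincident with the head frame), a point at (0.1, 0.2, 1.0) with a 400-pixel focal length and image center (500, 400) projects to rectified pixel (540, 480).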