Greetings from Point Grey Research
Season's Greetings to
you and your family from everyone at Point Grey Research.
Please take note of our office closure during the holiday season.
Our offices will be closed December 24, 2004 to December 31, 2004,
inclusive. Regular business hours will resume on Monday, January 3, 2005.
Here's to a healthy and enjoyable holiday season for everyone!
This edition of Point
Grey's Insights newsletter showcases how the Bumblebee®
stereo vision camera was used in the Bounded Hough Transform (BHT).
What is the Bounded Hough Transform?
The Bounded Hough Transform (BHT) is an algorithm
that tracks, in real time, the pose of objects in the data sequence
returned from a Bumblebee®
stereo vision sensor. Real-time tracking is a challenging problem due to
the large number of possible values (i.e., the pose space) that an
object's pose can have. The pose space of a 3D object is actually six
dimensional, containing 3 translational and 3 rotational dimensions,
which leads to a huge number of possible object poses in any given scene.
Click on the image
below to view the entire tracking sequence.
An example of the tracking sequence.
The BHT algorithm's speed rests on two factors:
1) Using the knowledge of the previous frame's pose to limit the search
in the current frame. Given the 30 fps maximum frame rate of the
Bumblebee®, and assuming reasonable limits to the interframe motion of
the object, the search for the object's pose in the current frame can
be limited to a small bounded region around the previous frame's pose.
2) Tracking is done in discrete rather than continuous space. Let's
consider just one translational dimension of the object's pose, e.g.,
the x-axis. If the x-axis is a discrete grid, then the current x-value
will have one of only 3 possible values relative to the previous pose,
i.e., the same grid element, the element to the left, or the element to
the right. If we extend this reasoning to all 6 dimensions, then at
each frame there are only 3^6 = 729 possible pose values to evaluate.
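The counting argument above can be illustrated with a short sketch (this is just the combinatorics, not Point Grey's implementation): if each of the six pose dimensions can stay put or move one grid element in either direction, the candidates are the Cartesian product {-1, 0, +1}^6.

```python
import itertools

# Each of the 6 pose dimensions (x, y, z, and 3 rotations) moves by
# -1, 0, or +1 grid elements between consecutive frames, so the
# candidate offsets form the product {-1, 0, +1}^6.
neighbor_offsets = list(itertools.product((-1, 0, +1), repeat=6))

print(len(neighbor_offsets))  # 3**6 = 729
```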
The end result is both fast (it runs at ~500 Hz on a 2 GHz Pentium) and
robust. The input to the algorithm is a surface model of the object to
be tracked and its initial pose. The output is an estimate of the
object's pose at each frame. The object does not require any geometric
features and can be freeform as long as it is rigid and has sufficient
texture to extract surface points.
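The interface just described — surface model and initial pose in, a pose estimate per frame out — can be sketched as follows. This is a minimal, self-contained illustration under assumed names: `model`, `frames`, and the scoring callback `fit` are placeholders for illustration, not Point Grey's actual API.

```python
import itertools

def track(model, initial_pose, frames, fit):
    """Return one 6-DOF pose estimate (x, y, z, rx, ry, rz) per frame.

    `fit(model, pose, frame)` is a hypothetical scoring function that
    measures how well the surface model, placed at `pose`, matches the
    stereo data in `frame` (higher is better).
    """
    poses = []
    pose = initial_pose
    for frame in frames:
        # Bounded search: only the 3**6 = 729 grid poses adjacent to
        # the previous estimate are evaluated in each frame.
        pose = max(
            (tuple(p + d for p, d in zip(pose, delta))
             for delta in itertools.product((-1, 0, +1), repeat=6)),
            key=lambda cand: fit(model, cand, frame),
        )
        poses.append(pose)
    return poses
```

For example, with a scoring function that rewards proximity to a target pose moving one grid element per frame along x, the tracker follows it step by step.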
The above work was co-authored by Michael Greenspan, Piotr Jasiobedzki and
Limin Shang. For more information
on the Bumblebee®, contact email@example.com.
We encourage you to email this newsletter to a friend or colleague.
Point Grey Research is always interested in pursuing new projects and welcomes your comments
and ideas. For information on Point Grey Research
and our products, please visit our website at www.ptgrey.com.
Dragonfly® EXPRESS™ in Production
Point Grey Research's
latest IEEE-1394b camera, the Dragonfly® EXPRESS™, is now in production.
Using Kodak's 1/3"
KAI-0340 CCD sensor and the IEEE-1394b interface, the camera can run at
speeds up to 200 fps.
As with other Point Grey IEEE-1394 imaging products, the Dragonfly® EXPRESS™ is supported by the FlyCapture SDK.
To order your
Dragonfly® EXPRESS Kit, contact firstname.lastname@example.org
Camera features:
- Custom Image Modes
- General Purpose Input/Output (GPIO)

The kit includes:
- OHCI PCI host adapter
- 9-pin to 9-pin IEEE-1394b cable
- Aluminum case with C/CS-mount lens holder
- 12-pin circular connector for GPIO
- FlyCapture C/C++ API and source code for quick integration within the programming environment