More than a Machine

Reprinted with permission from the Imaging and Machine Vision Europe article, "More than a machine", featured in the April / May 2010 issue.

 

Simplifying Automotive Assembly

by Greg Blackman

Robots crop up a lot in science fiction. Having your own personal robot to help you around the house could be what life’s like in the future. Of course, it’s when robots begin to think for themselves that things generally take a turn for the worse – need I remind you of The Terminator, Blade Runner or The Matrix? The word robot, from robota, meaning serf labour in Czech, was first introduced by the Czech writer Karel Capek in his play R.U.R. (Rossum’s Universal Robots), published in 1920. Science fiction writer Isaac Asimov is credited with coining the term robotics in his 1941 short story, Liar!. More recently, Pixar Animation’s WALL-E brought to the screen a loveable garbage-compactor robot programmed to clean up the mountains of rubbish littering planet Earth. WALL-E is particularly expressive as a robot, albeit a fictional computer-generated one, and we, the audience, get a pretty good idea of his emotions throughout the film.

Meanwhile, back in the real world, researchers at Carnegie Mellon University, Pittsburgh, US, are conducting studies on how humans interact with a real robot platform, through their Snackbot robot. Snackbot is a social mobile robot designed to deliver (you guessed it) snacks to people around the university campus. It is a test bed for studying social interaction between robots and humans, with the robot participating in dialogue and performing head and face gestural interactions to elicit natural human responses – instead of typing at a keyboard, one speaks to it.

Figure 1: Full-scale mock-up of the Snackbot robot.

 

The project is a collaboration between The Robotics Institute and the Human Computer Interaction Institute (HCII) and Design departments. Researchers at the latter (Jodi Forlizzi, HCII and Design; Min Kyung Lee, HCII; and Wayne Chung, Design) are interested in studying the human side – how people interact with the robot – while Dr Paul Rybski and his team at the Robotics Institute are interested in the technology development and in understanding what it will take to make the robot fully autonomous. ‘That’s not an easy task at all,’ Rybski says, ‘and we have to have a lot of perceptual work and situation awareness – where is the robot in the environment, where are the people, is the robot speaking to a person, does it understand what was said and how that corresponds to the current dialogue question.’

The concept of Snackbot originated in 2007/08, with the first end-to-end system deployed in semi-autonomous trials in autumn 2009. These trials were designed to gather data on human responses, so a human operator supplied the robot with high-level controls. ‘The autonomy is still at an early stage of development and is an independent research project from the human subject experiments at this time,’ comments Rybski. ‘The navigation was fully autonomous, but the human interaction component was carried out manually,’ he says. By autumn 2010, Rybski’s team hopes to have an improved version that can go out and interact with people for further trials and studies.

Snackbot has a Point Grey Bumblebee2 stereo camera mounted in its head, which is used both as a standard camera and to generate disparity data, providing the distance from the camera to each point in space. A Dragonfly2 camera from Point Grey, fitted with a wide-angle lens giving a 190° field of view, is also mounted in the top of the head to provide peripheral vision (a 360° field of view at the horizon).
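The relationship between the disparity a stereo camera reports and the distance to each point is the standard pinhole formula Z = f·B/d. The Python sketch below illustrates the idea with OpenCV’s block matcher; the focal length, baseline and file names are placeholders, and the use of StereoBM is an assumption for illustration rather than anything from Point Grey’s SDK or Snackbot’s code.

# Minimal sketch (not Point Grey's SDK): recovering depth from a rectified
# stereo pair, the same principle behind the Bumblebee2's disparity output.
import cv2
import numpy as np

def depth_from_stereo(left_gray, right_gray, focal_px=800.0, baseline_m=0.12):
    """Return a per-pixel depth map in metres. The default focal length and
    baseline are placeholders; real values come from the camera calibration."""
    # Block matching gives the disparity: how far each point shifts between
    # the left and right images (StereoBM returns fixed-point values x16).
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    # Depth is inversely proportional to disparity: Z = f * B / d.
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Example usage with a pair of rectified greyscale images (placeholder files):
# left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
# right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
# depth = depth_from_stereo(left, right)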

The Bumblebee2 stereo camera is used for object detection and 3D object learning. Here, a laser from the German company Sick is used to show the robot the position of the object. The robot can then drive up to the object – using the data from the stereo camera to gauge the distance – and circle it, learning the views from all sides. The robot can then recognise the object if it comes across it in the environment. By driving the robot around its environment, Snackbot builds a floor map of the area from the data it collects. During operation, it uses a stochastic state estimator known as a particle filter to localise itself in the map, based on its sensor readings.
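For readers unfamiliar with particle filters, the sketch below shows the generic predict–update–resample loop over a set of 2D pose hypotheses. It is an illustration of the general technique under assumed motion noise and a stand-in range-sensor model (expected_range_fn), not Snackbot’s implementation.

# Minimal particle-filter sketch for 2D localisation on a known map.
# Motion noise, sensor model and map interface are all assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 500  # number of particles

# Each particle is a pose hypothesis: (x, y, heading).
particles = rng.uniform(low=[0, 0, -np.pi], high=[10, 10, np.pi], size=(N, 3))
weights = np.ones(N) / N

def predict(particles, forward, turn, noise=(0.05, 0.02)):
    """Move every particle by the commanded motion plus random noise."""
    particles[:, 2] += turn + rng.normal(0, noise[1], N)
    particles[:, 0] += (forward + rng.normal(0, noise[0], N)) * np.cos(particles[:, 2])
    particles[:, 1] += (forward + rng.normal(0, noise[0], N)) * np.sin(particles[:, 2])

def update(weights, particles, measured_range, expected_range_fn, sigma=0.2):
    """Reweight particles by how well they explain a range measurement."""
    expected = expected_range_fn(particles)  # e.g. ray-cast into the floor map
    weights *= np.exp(-0.5 * ((measured_range - expected) / sigma) ** 2)
    weights += 1e-300                        # avoid an all-zero weight vector
    weights /= weights.sum()

def resample(particles, weights):
    """Draw a new particle set in proportion to the weights."""
    idx = rng.choice(N, size=N, p=weights)
    return particles[idx].copy(), np.ones(N) / N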

Figure 2: Sketches for the robot housing: a) machine-like, b) rounded and friendly, c) concepts combined.

 

Other work involved teaching the robot to recognise people. A point cloud of data from the stereo camera was used to localise the person’s arms, torso and head. A bank of different features – such as skin tone detection, colour histograms, the person’s height and other aspects of their size and shape in the image plane – was then fed to a learning system to generate a model that can be used to recognise the person at a later date. The concept of person recognition was taken a step further, with the same learning approach used to recognise, from body pose, whether the person was paying attention to the robot. ‘This is quite a challenge, as the person could be anywhere in the robot’s field of view, with changing distances and orientations,’ comments Rybski.
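As a rough illustration of that features-plus-learning pipeline, the sketch below builds a colour-histogram-and-height feature vector and trains an off-the-shelf SVM classifier. The feature details, the choice of classifier and the structure of the training data are assumptions for illustration, not the actual Snackbot code.

# Sketch of the person-recognition idea: hand-crafted appearance features
# fed to a learned model. Feature details and classifier are illustrative.
import cv2
import numpy as np
from sklearn.svm import SVC

def person_features(bgr_crop, height_m):
    """Feature vector from a cropped image of a person plus their estimated
    physical height (e.g. taken from the stereo point cloud)."""
    hsv = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2HSV)
    # A coarse hue/saturation histogram summarises clothing and skin colour.
    hist = cv2.calcHist([hsv], [0, 1], None, [8, 8], [0, 180, 0, 256]).flatten()
    hist /= hist.sum() + 1e-9
    return np.concatenate([hist, [height_m]])

def train_recogniser(sightings, labels):
    """Fit a classifier on labelled sightings: (image crop, height) pairs
    plus the identity of the person seen in each one."""
    X = np.array([person_features(img, h) for img, h in sightings])
    return SVC(probability=True).fit(X, np.array(labels))

def identify(model, crop, height_m):
    """Return the most likely identity for a newly detected person."""
    probs = model.predict_proba([person_features(crop, height_m)])[0]
    return model.classes_[np.argmax(probs)]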

Rybski remarks: ‘Integration for a real system is a huge challenge at all levels. Available computation, available network bandwidth, available throughput of the various data channels, all have to be taken into consideration.’ The vision software was originally designed to be standalone and to operate on log files, so compromises had to be made for it to run in real time: ‘There isn’t enough computation onboard the robot to have the luxury to operate at several seconds of computation per frame of video, for instance,’ he says.

Robots for inspection

The two most common scenarios for using robots with vision are pick-and-place and inspection. According to Williamson, using the camera to control what the robot does is typical of pick-and-place, in which the location and type of product are identified and coordinate instructions are sent to the robot. ‘For inspection tasks this would typically be reversed,’ he says. ‘The robot would control the vision system, instructing the vision system when to inspect and which inspection to run.’
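The two control directions Williamson describes could be sketched roughly as follows, using plain TCP text messages as a stand-in for whatever protocol a real robot and vision system actually share; the message formats, port and run_inspection callback are hypothetical.

# Illustration of the two control directions: vision-to-robot coordinates
# for pick-and-place, and robot-triggered inspections. The wire format here
# is a made-up example, not any real robot or vision protocol.
import socket

def send_pick_coordinates(robot_addr, x_mm, y_mm, angle_deg):
    """Pick-and-place: the vision system tells the robot where the part is."""
    with socket.create_connection(robot_addr) as s:
        s.sendall(f"PICK {x_mm:.2f} {y_mm:.2f} {angle_deg:.1f}\n".encode())

def serve_inspection_requests(listen_port, run_inspection):
    """Inspection: the robot tells the vision system when to inspect and
    which inspection routine to run; run_inspection is a user callback."""
    with socket.create_server(("", listen_port)) as srv:
        while True:
            conn, _ = srv.accept()
            with conn:
                request = conn.recv(1024).decode().strip()  # e.g. "INSPECT blade_edge"
                _, inspection_name = request.split(maxsplit=1)
                passed = run_inspection(inspection_name)
                conn.sendall(b"PASS\n" if passed else b"FAIL\n")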

One example of using robotics for inspection is a machine built by the Italian company SIR to inspect knives during mechanical resharpening. The robot positions the knife under the vision system in order to acquire its shape, and then each side of the blade is ground separately. The vision aspect of the robotics cell uses a Cognex VPM-8501 image acquisition card connected to a camera, with Cognex’s VisionPro software carrying out the image processing.
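As a rough idea of what acquiring a part’s shape can involve, the sketch below thresholds a backlit image and traces the blade’s outer contour with OpenCV. This is a generic illustration, not the VisionPro tools the SIR cell actually uses, and the file name is a placeholder.

# One common way to acquire the shape of a part from a backlit image:
# threshold the silhouette and trace its outer contour.
import cv2

def blade_profile(grey_image):
    """Return the outer contour of a dark blade against a bright background."""
    # Otsu's method picks the threshold separating silhouette from backlight.
    _, mask = cv2.threshold(grey_image, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # The largest external contour is taken to be the blade outline.
    return max(contours, key=cv2.contourArea)

# Example usage (placeholder file name):
# knife = cv2.imread("knife_backlit.png", cv2.IMREAD_GRAYSCALE)
# print("profile points:", len(blade_profile(knife)))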

Johan Hallenberg, senior applications engineer at Cognex, says: ‘There needs to be a common communication protocol used by both the robot and the vision system.’ Cognex’s solution is Cognex Connect, a package of commonly used communication protocols that, Hallenberg says, greatly simplifies the set-up of communication between the camera and the robot. Cognex Connect is integrated into the company’s In-Sight Micro smart camera, which can be mounted on a robot arm.

Piacentini of ImagingLab says that, currently, only a small percentage of robots use vision. ‘Robotics driven by vision can carry out more complex tasks and there is no need to palletise the parts or to put them in a specific place, as they can be identified at random,’ he says. ‘There is plenty of room for advanced applications of small accurate robots, not only in automated assembly and packaging, but also in emerging areas, such as biomedical applications and food processing.’ With the addition of vision, and as the technology for integrating vision with robotics matures, the variety of applications that employ robotics will only increase.

