
A clearer look at machine vision

Is your vision of a machine-vision application really feasible, or just a pipe dream? Claims of its proponents notwithstanding, the technology does have its limits. It helps to have a few basic guidelines from knowledgeable experts when deciding whether your particular inspection situation is worth pursuing. Here, from "Rules of Thumb for Evaluating Machine Vision Applications," by Nello Zuech, Vision Systems International, Yardley, PA, are a few tips on how to sort machine-vision fact from "illusion." VSI is an independent consulting firm providing educational and application-engineering services in machine-vision automation.

Pixel pointers. Determining feasibility begins with a fundamental understanding of how a computer operates on a television image to sample and quantize data. That image is cut up into a finite number of spatial, two-dimensional data points called pixels, and each is assigned an address in the computer and a quantized value, from 0 to 63 in some systems or 0 to 255 in others. The actual number of sampled data points is dictated by camera properties, analog-to-digital sampling rate, and the memory format of the picture buffer where the image is stored.
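
As an illustration of that sampling-and-quantization step, here is a minimal sketch in Python (the function and buffer names are ours; the 6-bit and 8-bit depths follow the 0-63 and 0-255 ranges mentioned above):

```python
# A minimal sketch of sampling and quantization, not any particular vendor's
# processing chain. An analog brightness in the range 0.0-1.0 is mapped to a
# quantized gray level and stored at a pixel address in a picture buffer.

def quantize(brightness, bits=8):
    """Map an analog brightness (0.0-1.0) to an integer gray level.

    bits=6 gives the 0-63 range cited above; bits=8 gives 0-255.
    """
    levels = 2 ** bits
    return min(int(brightness * levels), levels - 1)

# A 512 x 512 picture buffer, the format assumed throughout the article.
ROWS = COLS = 512
picture_buffer = [[0] * COLS for _ in range(ROWS)]

# Store one sampled data point: brightness 0.5 at pixel address (row 100, column 200).
picture_buffer[100][200] = quantize(0.5)   # 128 on an 8-bit scale
print(quantize(0.5, bits=6))               # 32 on a 6-bit scale
```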

Because the limiting factor is often the camera (typically 512 by 512 pixels), certain application judgments can be made (assuming each pixel is approximately square). If the object is viewed over a 1" field of view, for example, the smallest spatial region each pixel covers will be about a 2-mil square (1/512th of an inch). This is not necessarily the smallest detail your system can resolve; that depends on what you're trying to accomplish: verify an assembly operation, make a dimensional measurement, locate an object, detect flaws, read characters, or simply recognize an object.
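
To make the arithmetic explicit, the short sketch below computes the size of one pixel from the field of view and the camera format (the function name is illustrative; the 1" field of view and 512-pixel format reproduce the 2-mil figure above):

```python
def pixel_size(field_of_view_in, pixels_across=512):
    """Approximate size of one (square) pixel on the object, in inches."""
    return field_of_view_in / pixels_across

size = pixel_size(1.0)              # 1" field of view on a 512 x 512 camera
print(f"{size * 1000:.2f} mils")    # about 2 mils, as stated above
```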

Physical repeatability of the part is also important--how well will it be positioned in front of the camera each time? With precise positioning, the field of view can be opened up to view more of the part. Conversely, any vibration in the camera can change the size of each spatial data point by increasing or decreasing the viewing distance.

Contrast. Contrast is always a key factor--the gray-scale differences between part and background. To verify an assembly feature, you need high contrast, and the smallest feature you can expect to detect is an area of about 2 x 2 pixels. With relatively low contrast, that feature should cover at least 1% of the field of view, or about 2500 pixels.
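
Those two thresholds can be captured in a small sketch (the numbers are the rules of thumb above; the function and contrast labels are illustrative):

```python
def min_feature_pixels(contrast, rows=512, cols=512):
    """Smallest feature area, in pixels, you can expect to detect."""
    if contrast == "high":
        return 2 * 2                       # a 2 x 2-pixel area
    if contrast == "low":
        return round(0.01 * rows * cols)   # 1% of the field of view
    raise ValueError("contrast must be 'high' or 'low'")

print(min_feature_pixels("high"))   # 4
print(min_feature_pixels("low"))    # 2621 for 512 x 512 (~2500 for a 500 x 500 format)
```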

Measurement. For dimensional measurement, the machine-vision system can interpolate between pixels, although this ability is very application-dependent. While some vision vendors claim the ability to interpolate to 1/15 of a pixel, a more conservative rule of thumb is 1/10 of a pixel.

A current metrology rule of thumb is that the sum of an instrument's repeatability and accuracy should be no more than 1/3 of the tolerance to be measured. Given a sub-pixel capability of 1/10 and a 1"-square part, the discrimination (smallest detectable dimensional change) of that machine-vision system would be 0.0002". Repeatability will typically be plus or minus that value, and accuracy about the same. Hence, the sum of accuracy and repeatability in this example is 0.0004". Using the one-third rule, part tolerance should be no tighter than 0.0012" for machine vision to be a reliable metrology tool; i.e., part tolerance for this size part should be on the order of ±0.001" or greater.
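
Worked through in code, the same example looks like this (a sketch only; the 1/10 sub-pixel factor, 512-pixel camera, one-third rule, and the assumption that repeatability and accuracy each equal the discrimination all follow the reasoning above):

```python
def loosest_reliable_tolerance(field_of_view_in, pixels=512, subpixel=0.1, rule=3.0):
    """Tightest part tolerance machine vision can reliably check, per the one-third rule."""
    discrimination = (field_of_view_in / pixels) * subpixel  # smallest detectable change
    repeatability = discrimination   # assumed roughly equal to the discrimination
    accuracy = discrimination        # likewise
    return rule * (repeatability + accuracy)

tol = loosest_reliable_tolerance(1.0)                  # 1"-square part, 512 x 512 camera
print(f"discrimination ~ {(1.0 / 512) * 0.1:.4f} in")  # ~0.0002"
print(f"tolerance no tighter than ~ {tol:.4f} in")     # ~0.0012"
```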

Therefore, as parts become larger than 1" with the same tolerances (or tolerances tighten), machine vision with 512 x 512 pixel cameras may not be appropriate for dimensional checks.

Flaws. For flaw detection, contrast is especially critical. With high contrast--virtually white on black--flaws can be detected to 1/3 pixel. Significantly, you can detect these flaws, but not actually measure or classify them. Scratches or porosity, for example, can frequently be exaggerated by creative lighting and staging. Thus, for simply detecting such flaws, a rule of thumb is that they be greater than 1/3 pixel size.

As with assembly verification, when contrast is moderate, the rule of thumb is that the flaw cover an area of 2 x 2 pixels. Classifying the flaw with moderate contrast requires a larger area, 25 pixels or so. Similarly, with low contrast, the 1% or 2500-pixel rule should be used.
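
The flaw-detection rules above can be collected into one small lookup sketch (the thresholds are the article's; the table layout and a 512 x 512 field of view for the 1% rule are assumptions):

```python
# Minimum flaw size, in pixels, keyed by (contrast, task), per the rules of thumb above.
FLAW_RULES = {
    ("high", "detect"):       1.0 / 3.0,            # high contrast: down to 1/3 pixel
    ("moderate", "detect"):   2 * 2,                # moderate contrast: a 2 x 2-pixel area
    ("moderate", "classify"): 25,                   # classification needs ~25 pixels
    ("low", "detect"):        round(0.01 * 512**2), # low contrast: 1% of the field of view
}

for (contrast, task), pixels in FLAW_RULES.items():
    print(f"{contrast:>8} contrast, {task:>8}: at least {pixels} pixels")
```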

Character recognition. For optical character recognition (OCR) or verification, the rule of thumb is that the stroke width of the smallest character should be at least 3 pixels. With characters typically 20 pixels wide and 2 pixels of spacing between them, the total number of characters to be read becomes a limiting factor: only 22 characters will fit across a 500-pixel viewing area.

Another rule of thumb is that the best OCR systems have a correct read rate of 99.9%. That means one of every thousand characters will be misread or go unread. If, for example, 300 objects per minute are to be read, 0.1% of them (18 per hour) will have to be sorted out and read manually; you must ask whether this is acceptable.
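
Both OCR calculations--how many characters fit in the field of view and how many reads per hour will need manual handling--are simple enough to sketch (the figures are the example values above; the function names are ours):

```python
def characters_per_view(view_pixels=500, char_width=20, spacing=2):
    """How many characters fit across the field of view."""
    return view_pixels // (char_width + spacing)

def manual_reads_per_hour(objects_per_minute=300, correct_read_rate=0.999):
    """Objects per hour expected to be misread or unread at a given read rate."""
    return objects_per_minute * 60 * (1.0 - correct_read_rate)

print(characters_per_view())                 # 22, as stated above
print(f"{manual_reads_per_hour():.0f}/hr")   # 18 per hour at 300 objects/min
```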

Pattern recognition. A reasonable rule here is that the differences between patterns should be characterized by areas of at least 1% of the field of view, or 2500 pixels. Gray-shade analysis becomes a major factor when pattern differences smaller than 2500 pixels must be distinguished--for example, where both geometry and color differentiate the patterns.

More pixels? Where more than one of these requirements is involved, obviously, the worst-case scenario should be used to determine feasibility. There is always the choice of moving up to a camera with 1000 x 1000 pixels. These cameras, however, can be more expensive than the vision system itself. Furthermore, few commercial machine-vision systems can process that many pixels and make decisions in real time.

An alternative is a linear-array camera. Many are offered with up to 2000 pixels, giving four times the discrimination of a 500 x 500 area camera. However, the speed at which the object passes the camera must be well regulated--object speed and camera scan rate determine the size of the pixel in the direction of travel.

These vision systems typically operate at 2 MHz. For a 2000-element array, this means a scanning rate of 1000 lines/sec (2,000,000/2000) in the direction of travel. For example, an object moving at 10"/sec would have an effective pixel size in the direction of travel of 10 mils.
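
The line-scan arithmetic sketched in code (the 2-MHz pixel rate, 2000-element array, and 10"/sec object speed are the example figures above; the function names are illustrative):

```python
def line_rate(pixel_rate_hz=2_000_000, elements=2000):
    """Scan lines per second for a linear-array camera."""
    return pixel_rate_hz / elements

def pixel_size_along_travel(object_speed_in_per_sec, lines_per_sec):
    """Effective pixel size in the direction of travel, in inches."""
    return object_speed_in_per_sec / lines_per_sec

rate = line_rate()                           # 1000 lines/sec
size = pixel_size_along_travel(10.0, rate)   # 0.010" at 10"/sec
print(f"{rate:.0f} lines/sec, {size * 1000:.0f}-mil pixel in the direction of travel")
```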
