
Robot vision adds flexibility to finishing.

Industrial robots are well established as an effective means of increasing the automation and flexibility of manufacturing operations. But keeping a robot supplied with material or workpieces has often remained a monotonous, and sometimes dangerous, manual operation.

Adding expensive dedicated feeder mechanisms can quickly offset the economic advantages of the robot, particularly where batches are small or the part family contains many different parts. Sensors that mechanically feel for the workpiece to identify it have been used in the past, but a better approach is robot vision, which identifies the part and its relative position for transfer purposes--enabling the robot to "see" the part and adjust its response accordingly.

A good vision system, fully integrated with the robot, permits the robot to quickly identify the part and its orientation. This can eliminate or greatly reduce the capital investment required for workhandling equipment, reduce the use of floor space, and simplify the related engineering design work.

An application of vision to a robotic deburring and finishing station is an excellent example of the flexibility benefits of a "seeing-eye" robot. Although the developer of the vision system (and robot) is also the user--ASEA Robotics at their headquarters complex in Vasteras, Sweden--the plant manager assured us that his decision to add the vision system was purely economic: it was simply cheaper than adding separate part-handling tracks to expand the station's ability to handle up to 12 different relay/contactor bodies.

A high-accuracy part magazine means high cost. Because the vision system was designed for the robot, there was no extra cost to integrate the two control systems. ASEA engineers calculated that payback for the vision-system upgrading was one year. Thus this was a profitable move even though they knew at the time that a change in part material three years down the road would eliminate the need for the deburring station.

The parts are a family of relay/contactor bodies, compression molded of fiber-reinforced thermosetting plastic. The robotic operation deburrs flash and finish-bores the holes in the relay bodies. Previously, there were only three body styles, and they were nearly identical in geometry, so they could be fed to the robot in large batches from a single feeder track. Between batches, the robot program and grippers had to be changed manually.

Now, a simple belt conveyor feeds the parts to the robot. The parts are manually loaded on the conveyor twice each shift, and they can be randomly oriented. The only requirements are that the parts be spaced far enough apart for the robot grippers to pick them up individually, and that they be right side up (they can be turned at any angle in the horizontal plane).
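Because parts arrive in random order and orientation, each identified body must be mapped to its own gripper and machining program before pickup. A minimal dispatch sketch in Python, with entirely hypothetical part, gripper, and program names (the article does not list the actual identifiers):

```python
# Hypothetical lookup table mapping a recognized part id to the gripper
# and machining program for that body. All names here are invented.
PART_TABLE = {
    "body_A": ("gripper_1", "deburr_A"),
    "body_B": ("gripper_2", "deburr_B"),
    # ... one entry for each contactor body in the 12-part family
}

def handle_part(part_id, grasp_angle_deg):
    """Dispatch one identified part: look up its gripper and program,
    then pick it up vertically at the reported horizontal angle."""
    gripper, program = PART_TABLE[part_id]
    return {
        "gripper": gripper,
        "program": program,
        "approach": "vertical",      # parts must be right side up
        "angle_deg": grasp_angle_deg,
    }

print(handle_part("body_A", 37.5))
```

The point of the table is the economics described above: adding a 13th body style is one more table entry and robot program, not another feeder track.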

Two cameras are used, mounted about 3 m above the conveyor belt. (This is not for stereo or depth perception, but simply to cover the full width of the belt.) For each part, the system identifies it, chooses the proper robot gripper from a magazine, determines the angle in the horizontal plane at which to grip the part and pick it up vertically, and selects the correct deburring and hole-finishing program.

Vision-system features

The new ASEA vision system is a gray-scale system with 64 levels of sensitivity. This increases the robot's ability to work in a variety of industrial environments with normal lighting conditions, since the system is not affected by most light-intensity variations. Thus, no special lighting arrangements are required.
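One reason a gray-scale contour approach tolerates lighting variation is that a uniform change in scene brightness cancels out of intensity differences between neighboring pixels. A small illustration of that general principle (not ASEA's actual algorithm), using a toy scene built from plain Python lists:

```python
# Toy 16 x 16 "scene": a bright square part on a darker belt (8-bit counts).
scene = [[180.0 if 4 <= r < 12 and 4 <= c < 12 else 40.0
          for c in range(16)] for r in range(16)]

def edge_map(img):
    """Horizontal forward-difference edge strengths, row by row."""
    return [[abs(row[c + 1] - row[c]) for c in range(len(row) - 1)]
            for row in img]

# Simulate a uniform lighting change: every pixel 50 counts brighter.
brighter = [[p + 50.0 for p in row] for row in scene]

# The edge maps are identical: a global intensity offset cancels in the
# difference, so the extracted contour does not move when overall
# shop-floor lighting drifts.
assert edge_map(scene) == edge_map(brighter)
```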

Although the time required to process an image depends on its complexity, processing time under normal conditions is less than 1 sec. Positioning accuracy is better than ±0.2 percent of the scene width, and orientation accuracy is better than 2 degrees. The system automatically corrects for parallax errors, and both the image-processing program and the robot program can be stored simultaneously on the same computer disc.
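To put the positioning figure in concrete terms, here is the arithmetic for an assumed scene width; the 1 m width is an illustrative guess, not a number from the article:

```python
# The quoted accuracy spec, applied to an assumed scene width.
scene_width_mm = 1000.0           # assumption: camera covers a 1 m wide belt
positioning_spec = 0.002          # better than ±0.2 percent of scene width

positioning_error_mm = positioning_spec * scene_width_mm
print(f"worst-case position error: ±{positioning_error_mm:.1f} mm")
# → ±2.0 mm for this assumed 1 m scene
```

So a part located anywhere in a 1-m-wide scene would be reported to within about ±2 mm, with its orientation known to within ±2 degrees.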

The camera is a solid-state charge-coupled device (CCD) with very low geometric distortion, and it is insensitive to vibration, shock, and electromagnetic fields. The chip array is 256 × 240 pixels. Images of defined workpieces are stored in battery-backed memory. The robot program is immediately suspended if an error is detected in the image-processing system.

The first step in the teaching procedure is to place the object under the camera. The system processes the image and presents a contoured image on the monitor. The operator then points out up to 16 significant features using a joystick-controlled cursor. To collect statistical data, the training procedure is repeated approximately 10 times to assure that the system will reliably recognize the object. The object outline, view number, and elapsed identification time are displayed on the monitor each time.
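The repeat-and-collect-statistics step can be sketched as follows; the feature values and the jitter between presentations are invented for illustration:

```python
import statistics
from itertools import count

def train_object(capture_features, n_repeats=10):
    """Repeat the measurement pass and keep per-feature mean and spread,
    so later identifications can match within statistical tolerances."""
    samples = [capture_features() for _ in range(n_repeats)]
    return [(statistics.mean(values), statistics.pstdev(values))
            for values in zip(*samples)]

# Hypothetical capture pass: two features (say, a hole spacing and an
# outline dimension, in mm) with a little jitter between presentations.
_presentation = count()
def capture():
    j = (next(_presentation) % 2) * 0.2     # alternates 0.0 / 0.2 mm
    return [25.0 + j, 140.0 - j]

model = train_object(capture, n_repeats=10)
# model[0] holds (mean, spread) for the first feature, about (25.1, 0.1)
```

Repetition matters because a single presentation cannot tell the system how much each feature measurement naturally varies; the spread sets the matching tolerance.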

For accurate identification, the object cannot be smaller than 5 × 5 pixels, i.e., one side of a surface cannot be less than 2 percent of the total width of the scene.
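That 2 percent figure follows directly from the 256-pixel sensor width quoted above:

```python
# Minimum object side of 5 pixels against the 256-pixel sensor width:
sensor_width_px = 256
min_side_px = 5
fraction = min_side_px / sensor_width_px
print(f"{fraction:.1%}")    # 5/256 ≈ 1.95 percent, i.e. the 2 percent rule
```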

Up to four different cameras can be used in a single robotic system, with external synchronization generated by the image-processing system. The electronic image memory and 9-in. television monitor are integrated into the robot controller. The vision-programming unit is the same as the robot-programming unit (here, the ASEA SII for electric robots). The image-processing system is compact enough to be housed in the robot-control cabinet.

Vision example

The two photos above illustrate how well the vision system identifies each of the 12 contactor bodies it must discriminate among. On the left is a photo of seven of the parts. Yes, there are seven: in the middle of the first column of parts is one molded in black plastic that does not show up in the photo, taken against the dark background of the conveyor belt.

This dark part would be difficult for most vision systems to pick up and correctly identify. However, as the video-display screen at right shows, all seven parts are clearly outlined, including key internal locating holes and profiles. All are unique, despite the family similarities, and they can be identified regardless of how they are rotated on the belt, as long as they are right side up, so the robot gripper can be programmed to pick them up accurately.

For more information from ASEA Robotics Inc, New Berlin, WI, on this installation, circle E32.
COPYRIGHT 1984 Nelson Publishing

Author: Sprow, Eugene E.
Publication: Tooling & Production
Date: Apr 1, 1984