
Chips vs. tubes: a never-ending controversy.

A few years ago the controversy of the tube camera versus the chip camera was more heated than it is today. The proliferation of chip cameras over the years seemed to have all but put tube camera usage to rest. However, tube cameras continue to be offered in the security marketplace, though in ever decreasing numbers.

The road from tube to chip camera usage has not been a smooth one. Roadblocks have occurred in both price and performance. Many manufacturers did not commit to offering only charge-coupled device (CCD) cameras until as late as 1987, while others still offer tube cameras out of pricing considerations.

Understanding the differences between these devices is easier once the user understands how each creates an image. Take the tube camera. Its front surface consists of a light-absorbing chemical coating. This transparent layer is the target. A small positive electronic charge is applied to the target to maintain a positive polarity with respect to the cathode, an electron-emitting device located at the back of the tube.

The cathode beam charges the back side of the target to a near zero potential. When light hits the target surface, the electronic transfer increases in the areas subject to the brightest light, thereby creating the picture. The cathode device becomes heated, and a large amount of power is consumed.

The electron beam is then shot through a vacuum and projected on the back side of the target area. This requires the cathode to be at a distance from the target, thus accounting for the large size of the tube and the camera itself.

As the beam is projected, its travel through the tube vacuum is steered by a magnetic field. The field sweeps the beam across the tube surface, picking up all the elements of the video picture.

As the intensity of this magnetic field changes from the tube's center point to its outer edges, certain distortions are introduced. Picture resolution drops between the tube's center and its edges. In addition, a geometric distortion is introduced in the picture's corners.

Finally, since the light-gathering surface of the tube is composed of a chemical element, its longevity is limited. Tubes usually maintain their original specified ratings for only 1,000 hours. While the actual operating life of the tube is longer, it usually comes at the expense of picture degradation.

As the tube ages, it loses its ability to take in and give up light. As a result, images burn into the tube surface, picture noise increases, and the camera loses its sensitivity.

This entire image pickup process is the source of the problems threatening the tube's continued existence, as well as of its remaining opportunities for continued use.

The chip is not a new invention, having been first introduced in 1970. However, only in the last five years have technological advances and lower manufacturing costs enabled it to be used in a broader range of applications.

From its inception, the chip has gone through a number of development stages, each with its own method of producing a picture. Metal oxide semiconductors (MOSs), charge-injection devices (CIDs), charge-primed devices (CPDs), and CCDs have all been used in creating cameras for security applications. Presently, the CCD is the most widely used chip.

The actual structure of the chip consists of individual light collection points, known as pixels, arranged in a horizontal and vertical pattern across the faceplate of the device. The information contained in each pixel is read across a horizontal line and down a series of vertical pixels. The timing of the sequence is much the same as the beam scanning across a tube's target area.

The number of pixels along this XY axis determines both the resolution and sensitivity of the chip. The pixel numbers are expressed as, for example, 574H x 485V. The first number is the number of pixels on the horizontal line, and the second number is for the vertical plane.

No set formula can determine the exact ratio of pixels to a specific number of resolution lines. The most common approach holds that 70 to 75 percent of a chip's pixels will approximate the number of lines of resolution the device is capable of reproducing. However, no universally accepted method exists for converting pixel count to horizontal line resolution.
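The 70-to-75-percent rule of thumb can be sketched in a few lines. This is an illustration of the approximation only; as noted above, no universally accepted conversion exists:

```python
# Rule of thumb from the text: roughly 70 to 75 percent of a chip's
# horizontal pixel count approximates the lines of horizontal
# resolution the device can reproduce.
def estimate_resolution_lines(h_pixels, factor=0.75):
    """Approximate horizontal resolution lines for a given pixel count."""
    return round(h_pixels * factor)

# For the 574H x 485V chip mentioned above:
low = estimate_resolution_lines(574, 0.70)   # roughly 400 lines
high = estimate_resolution_lines(574, 0.75)  # roughly 430 lines
```

The estimate brackets the 570-line CCD figure cited later in the article only loosely, which is exactly why the text cautions that the ratio is not a set formula.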

The subject of resolution not only brings up the difference between tube and chip cameras but also raises some controversy. Camera resolution is measured by looking at a still image. As the speed of the image increases, camera resolution decreases. This measurement is known as dynamic resolution and can be greatly increased in a CCD camera by shuttering.

Shuttering works in this way: Under normal conditions, for either a tube or chip camera, a complete picture is created once every 1/60 of a second. This rate conforms to the requirements of the National Television Standards Committee (NTSC) system.

In CCD cameras containing an electronic shutter, the light-imaging surface is exposed to light for only a brief interval. No set time is required, though rates of 1/200, 1/500, 1/1,000, and 1/2,000 of a second are typical. After this time, the light is cut off.

This part of the system operates in the same manner as the shutter on a 35 mm camera. Because the signal can be held in the storage area of the CCD and released at a rate of one picture every 1/60 of a second, the system meets the NTSC requirement.

Shuttering can be used to overcome two problems found in tube cameras. First, it can dramatically increase the picture resolution of action happening faster than 60 times per second.

In reality, the resolution specification of a camera means nothing unless the speed of action occurs at a rate slower than 60 times per second. As shuttering captures the image at rapid rates, dynamic resolution greatly improves.

Shuttering also has an advantageous by-product. During shuttering, light received by the imaging surface is dramatically reduced. Extremely bright images can be made to fall into the normal camera contrast range.
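The light reduction falls directly out of the exposure arithmetic. A minimal sketch, assuming the standard NTSC field time of 1/60 second and the shutter speeds named above:

```python
# Fraction of available light collected at each shutter speed,
# relative to the standard NTSC field time of 1/60 second. The
# faster the shutter, the brighter a scene the camera can tolerate.
FIELD_TIME = 1 / 60  # seconds per complete picture under NTSC

def light_fraction(shutter_time):
    """Fraction of a full field's light gathered at a given shutter speed."""
    return shutter_time / FIELD_TIME

for denom in (200, 500, 1000, 2000):
    frac = light_fraction(1 / denom)
    print(f"1/{denom} s shutter -> {frac:.1%} of full-field light")
```

At 1/2,000 of a second, only 3 percent of the field's light reaches the imaging surface, which is how an extremely bright scene is pulled back into the normal contrast range.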

Unlike tubes, chips require no heaters, consume little power, and need no large housings. Since the signal read from each pixel remains consistent throughout the entire scan, resolution does not drop off, nor is there any geometric distortion from the center of the picture to the edges. Chips suffer no picture or specification degradation as they age.

While the chip can be viewed as the closest answer to perfection, it does have a number of drawbacks in resolution and sensitivity. From the rule of thumb mentioned earlier, it seems the simplest method for increasing resolution is to increase the number of pixels. Restrictions on this exist because of the spacing requirements of pixels.

This spacing requirement is determined by the need to prevent picture artifacts known as aliasing. Since the size of the chip must conform to standard camera formats, the number of pixels can be increased only by decreasing their individual size.

This leads to the next problem facing the chip camera: sensitivity. The smaller the individual pixel, the less sensitive it is to incoming light. This is a paradoxical situation, since increasing chip camera resolution means decreasing pixel size, and decreasing pixel size results in decreasing sensitivity. Tube cameras, on the other hand, gather light uniformly across the entire target surface, resulting in maximum sensitivity.
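The trade-off can be put in rough numbers. A minimal sketch; the 8.8 mm image width used here is the nominal figure for a 2/3-inch camera format and is an assumption for illustration, not a value from the article:

```python
# Resolution/sensitivity trade-off: on a fixed imaging area, adding
# pixels shrinks each pixel, and a smaller pixel collects less light.
# ASSUMPTION: 8.8 mm is the nominal image width of a 2/3-inch format.
def pixel_pitch_um(image_width_mm, h_pixels):
    """Approximate horizontal pixel pitch in micrometers."""
    return image_width_mm * 1000 / h_pixels

pitch_574 = pixel_pitch_um(8.8, 574)    # about 15.3 um per pixel
pitch_1148 = pixel_pitch_um(8.8, 1148)  # doubling the pixels halves the pitch
```

Halving the pitch quarters each pixel's light-collecting area, which is the sensitivity penalty the paragraph above describes.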

The amount of light required to produce a usable picture is a complex subject. The amount of light on a scene is expressed as its light level. A video camera requires not only that light be present but also that the light contain the correct color, or trichromatic, values.

Light is divided into different wavelengths in the color spectrum. The portion of light the eye can see, visible light, exists at wavelengths between 400 and 700 nanometers. The visible spectrum contains the primary colors (red, blue, and green) and the secondary colors (violet, yellow, and orange).

Of all the colors, our eyes are most sensitive to the frequencies that represent green. Just like our eyes, each image pickup device, tube and chip, responds best to a particular frequency of light. This factor is important in selecting the correct video camera, as each lighting source emits its own particular spectral light frequencies.

For example, mercury light emits frequencies that are blue-green. Sodium lighting produces a green-yellow light, and tungsten lamps emit light from the upper end of the visible red spectrum into the (to our eyes) invisible infrared (IR) regions. Spectral response thus becomes the key to getting the most out of the camera.

The ability to work in the invisible IR regions brings out another important difference between tubes and chips. The spectrum response of any given tube is fixed, whereas that of most chip cameras can be varied. CCD cameras have a wide-range spectrum response extending all the way into the high IR region.

Under normal lighting conditions the amount of available IR light can cause the tube to overload, washing out the picture. To prevent this, an optical IR filter can be placed in front of the image surface. This filter can greatly affect camera sensitivity, cutting light response by as much as 90 percent.

Many manufacturers were quick to recognize this potential and turned a disadvantage into a positive selling tool. By building cameras with removable IR filters, they greatly increase sensitivity under controlled lighting conditions while still producing acceptable pictures under IR illumination.

The differences in operating performance between tube and chip cameras have been dramatically reduced. The major differences between the two types of cameras are now price and operation under extremely low light conditions.

In tube cameras this equates to the low-cost, low-performance vidicon tube camera and the high-performance, high-cost intensified silicon intensified target (ISIT) camera. For many users, regardless of the manufacturers' approach, the lure of a vidicon camera under $200 is too much to resist. While CCD technology has made significant advances, the sensitivity of the chip is not great enough to recover images under starlight conditions. Here the ISIT camera is the only choice.

It is in the middle ground, the Newvicon/Ultricon level of performance, where CCDs have made their greatest advances. Tube resolution of 600 to 650 lines is being challenged by CCD resolution of 570 lines.

CCD resolution remains the greatest hurdle to overcome because of the design limitations of the chip. As a result, chip designers have had far greater success in increasing camera sensitivity. Removing the IR filter can produce sensitivity levels closely matching those of silicon intensified target (SIT) cameras.

These advances go a long way toward offsetting the pricing differences between tube and chip cameras. As tube camera production decreases, tube replacement costs will increase. During the past two years distributors' prices for replacement tubes have increased almost 33 percent. This trend will probably continue.

Fortunately, the competitive nature of the video security industry will result in further CCD technology advances and cost reduction long after the tube is only a memory.
COPYRIGHT 1990 American Society for Industrial Security

Article Details
Title Annotation: closed circuit TV
Author: Heller, Neil R.
Publication: Security Management
Date: May 1, 1990

