
Image processing in astronomy.

A FUZZY PICTURE of Jupiter scrolls onto the computer screen. After a few seconds a miraculous wave of sharpness advances down the image, and Jupiter becomes crisper. Belts and zones barely visible at first now show clearly. It looks as if a sheet of wax paper has been lifted from the screen to reveal the real image underneath.

A second wave of sharpening leaves delicate festoons and streamers in its wake. After a brief pause the brightest belts become pure white and the surrounding sky turns from dingy gray to deep black. It seems like magic, but it is not. Image processing has brought to amateur astronomers with home computers what 20 years ago was the latest in space-age technology.

The recent surge of interest in CCD cameras has heightened the importance of image processing. These cameras produce digital images; that is, the picture consists of an array of numbers. Each number represents the intensity of light at a given place. Image processing transforms this mass of numerical data into a form that the eye and brain can readily interpret.

My fascination with this technique dates back nearly 30 years to the Mariner 4 flyby of Mars. The 22 pictures returned by the spacecraft -- a mere 630 kilobytes of data -- were nearly featureless. Yet through image processing, abundant hidden detail was retrieved and shown to the world. As a high school student I was deeply impressed. Today's personal computers are more powerful than the mainframes that manipulated those images, and we have entered an age in which anyone can do similar work.

Image processing falls naturally into three broad areas: calibration, analysis, and alteration. Calibration attempts to produce a pure record of the intensity of light. Image analysis strives to extract as much data as possible from the calibrated image, such as positions and brightnesses. Finally, image alteration tries to reveal all there is to see.

DIGITAL-IMAGING BASICS

Digital imagery introduces yet another exotic vocabulary into amateur astronomy, though the underlying principles are very simple. A digital image is a picture divided into a large number of picture elements, or pixels. These are like the ceramic tiles in a mosaic: put enough of them side by side and you form a picture.

Each pixel in a digital image has a number representing its brightness. I call these numbers pixel values (PVs), but they have also been called ADUs (analog-to-digital units), DNs (data numbers), intensity values, gray levels, and other names. When the pixel value refers to brightness, as it usually does in astronomy, larger numbers mean more light in the image.

The greater the range of numbers used to represent the image, the more precisely you know the brightness of the pixel. If the computer uses one byte to represent each pixel, the image is called an 8-bit image (in computer jargon, eight bits equal one byte). An 8-bit number can represent 2^8, or 256, brightness levels. Images using 12-bit and 16-bit data allow 4,096 and 65,536 brightness levels per pixel, respectively.
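
The arithmetic is easy to check. In Python, the language used for all of the illustrative sketches in this article (a choice of convenience, not part of the original toolkit), it is a two-liner:

    # An n-bit pixel can take on 2**n distinct values.
    for bits in (8, 12, 16):
        print(f"{bits}-bit pixels allow {2**bits:,} brightness levels")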

Several CCD cameras popular with today's amateur astronomers are based on the Texas Instruments TC-211 CCD detector, which has 165 rows of 192 pixels for a total of 31,680. Another group of sensors, also by Texas Instruments, has arrays of 378 by 242 pixels. Although this is a small number of pixels relative to the 2048 by 2048 arrays used by professional astronomers, it is more than adequate for capturing diffraction-limited detail on the Moon and planets, searching for supernovae, measuring the positions of asteroids and comets, and making beautiful tricolor images of deep-sky objects.

IMAGE CALIBRATION

Image processing begins even before the first picture is taken. Raw CCD images contain considerable unwanted "baggage" that must be removed to produce a faithful record of the light that struck the CCD. In addition to the signal generated by the light itself, every pixel has a different zero point (bias), sensitivity, and response to temperature (dark current).

To remove these effects from the raw image, we generally take a dark frame (an exposure of the same duration as the raw image but with no light allowed to strike the CCD) and a flat-field frame (an exposure with the telescope aimed at a uniformly illuminated source). Subtracting each pixel in the dark frame from the corresponding pixel in the raw image removes the bias and thermal signals, leaving only the signal generated by light striking the CCD. Then, dividing each pixel by the value of the corresponding pixel in the flat-field frame removes the effects of sensitivity variations and vignetting in the optical system.

Ideally, a calibrated CCD image has the following properties: a pixel value of zero corresponds to zero light striking the CCD, the sensitivity to light of every pixel is the same, and the numerical value of every pixel is linearly proportional to the amount of light that struck the CCD during exposure. However, each pixel is subject to random variations in the number of photons striking the CCD, random variations in the number of thermal electrons created during the exposure, and random noise in the CCD camera's amplifier and analog-to-digital converter that "reads" the image out of the chip. In CCD cameras available to amateurs today, the random variations are sufficiently small that images approach the ideal.
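
Here is a minimal sketch of this recipe in Python with NumPy. The array names raw, dark, and flat are hypothetical, and the mean-normalized flat is one simple convention, not any particular camera's software:

    import numpy as np

    def calibrate(raw, dark, flat):
        """Subtract the dark frame, then divide by the flat field,
        as described above."""
        # Work in floating point so the division does not truncate.
        light = raw.astype(float) - dark.astype(float)
        # Normalizing the flat to a mean of 1.0 corrects pixel-to-pixel
        # sensitivity and vignetting without changing the brightness scale.
        flat = flat.astype(float)
        return light / (flat / flat.mean())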

IMAGE ANALYSIS

Any good image contains a wealth of information. However, the utility of an image depends on who views it, why they view it, and how they view it. If an expert on stellar photometry were to examine an image of Jupiter, he or she might entirely miss a dramatic change in the Great Red Spot. Conversely, a specialist in planetary atmospheres might fail to notice a 7th-magnitude supernova in M31 that a deep-sky observer would recognize instantly.

Image analysis deals with obtaining specific information, such as the precise positions and magnitudes of stars or the total light gathered from a galaxy. Such data cannot be obtained by inspection; software must extract them from the numerical values in the pixel array. Even a quick look at the data can tell us a lot. Is the sky background uniform? Is the range of pixel values reasonable? Is there a scattering of abnormally dark or light pixels that may signify poor calibration? Before investing time in a precise analysis of an image, it's nice to know that it appears normal.

This is one of the jobs that histograms perform. A histogram is a graph showing how many pixels in an image have a given pixel value. A typical deep-sky scene contains many pixels with the brightness of the sky background, while the pixels making up the object itself will be slightly brighter. There will also be a small number of relatively bright pixels representing stars in the image.
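
For such a quick look, a short Python/NumPy sketch finds the histogram peak. The image here is hypothetical stand-in data, sized like the 378-by-242 sensor mentioned earlier:

    import numpy as np

    # Stand-in for a calibrated 16-bit frame with a sky background near 500.
    image = np.random.normal(500, 20, (242, 378)).astype(np.uint16)

    counts, edges = np.histogram(image, bins=256, range=(0, 65535))
    sky = edges[np.argmax(counts)]
    print(f"Histogram peak (likely the sky background) near pixel value {sky:.0f}")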

Analysis is most effective with images that have been calibrated but not otherwise processed. Such an image consists of pixels whose values mirror the amount of light that fell on the CCD -- a digital simulacrum of the real sky. To derive the desired information, image-analysis software creates "virtual" diagnostic instruments.

Position. For nearly a century astronomers determined the positions of stars on photographs with a special device called a measuring engine. Positions are now easily obtained because the pixels on a CCD are spaced very precisely. To measure relative positions, you need only locate the exact center of a star's image in the array.

In this case the virtual instrument first identifies the approximate center of a star's image, perhaps by selecting the brightest pixel. The software then sums pixel values in the rows and columns around this center, taking one-dimensional cross-sections of the stellar image. Assuming the star image has a Gaussian profile (a bell-shaped curve), the software seeks the center position and width that best fit the star's actual profile. It is thus possible to determine the center of a star image to a tenth of a pixel or better.
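
Here is a minimal Python/NumPy sketch of the idea. It substitutes an intensity-weighted mean of the cross-sections for the full Gaussian fit described above; the fit is more precise, but the weighted mean captures the principle:

    import numpy as np

    def centroid(stamp):
        """Estimate a star's center from a small cutout ('stamp') around
        its brightest pixel, using the intensity-weighted mean of the
        row and column sums."""
        stamp = stamp.astype(float)
        stamp -= stamp.min()                 # crude background removal
        col_sums = stamp.sum(axis=0)         # cross-section along x
        row_sums = stamp.sum(axis=1)         # cross-section along y
        x = np.arange(col_sums.size)
        y = np.arange(row_sums.size)
        return ((col_sums * x).sum() / col_sums.sum(),
                (row_sums * y).sum() / row_sums.sum())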

Brightness. Early this century astronomers began to measure the brightness of stars with photoelectric photometers. It was a tedious process fraught with technical difficulties that limited precision.

With digital images a virtual photometer, used from a comfortable location, can largely eliminate photometric variations due to seeing, telescope tracking, and sky transparency. The software sums the values of all the pixels in the stellar image as well as those in an equally large starless region nearby. The star's contribution is the difference between the two measurements. From good images the brightness of a star can be ascertained to 0.01 magnitude or better.

Measuring the total brightness of an extended object such as a galaxy is more complicated because it is difficult to determine how much light comes from the sky background. However, once the sky brightness has been determined, the software can scan outward from the center of a galaxy until it has found every pixel brighter than that threshold. The pixel values are then summed, the inferred sky brightness is subtracted, and light from foreground stars is removed; what remains is the integrated magnitude of the galaxy.
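
A sketch of such a virtual photometer, assuming hypothetical star_patch and sky_patch cutouts of equal size and a comparison star of known magnitude on the same image:

    import numpy as np

    def star_flux(star_patch, sky_patch):
        """Sum the pixels covering the star, then subtract the sum over
        an equally large starless patch of nearby sky."""
        return star_patch.astype(float).sum() - sky_patch.astype(float).sum()

    def magnitude(flux, comparison_flux, comparison_mag):
        """Turn a sky-subtracted flux into a magnitude by comparing it
        with a star of known brightness measured the same way."""
        return comparison_mag - 2.5 * np.log10(flux / comparison_flux)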

SCALING

We have it easy on Earth. The scenes we view in our daily life generally span a factor of 100 in brightness, from the darkest shadows to the brightest highlights. When the range departs greatly from this, we feel uncomfortable. Dense fog reduces the world to a single shade of gray, and we find it hard to see. Harsh lighting pushes the contrast too high, and our world divides into inky shadows and glaring highlights.

Celestial scenes seldom have so "normal" a brightness range. Faint galaxies may shine only a few percent brighter than the night-sky background, while the extended halo of a bright galaxy may be thousands of times fainter than its core. To understand these objects, we need to see them in a way that does not stretch our vision or exceed the limits of display hardware.

Suppose we have taken an absolutely splendid 16-bit image of a galaxy cluster. The range of values that a pixel can have is 0 to 65,535. Let's say the average pixel value of the sky background is 500, while for the galaxy nuclei it is 1,200; for several hundred stars it is more than 2,000, and for a dozen stars it exceeds 5,000. If we attempted to display the full range of brightness at once, only the dozen brightest stars would be visible. The galaxies and fainter stars would fall in the bottom few percent of the brightness range, and the limitations of the eye and of the output device, such as a computer monitor, would render them invisible.

To be clearly seen, features must lie roughly between 15 and 85 percent of the total range displayed, though the precise values vary with the output device. To show faint deep-sky objects well, the background sky should appear dark gray. When the average sky pixel is 15 percent of full brightness, all the pixels in deep-sky objects will be above this level and thus readily perceived. If the brightest nebula pixels reach 85 percent of full brightness, you will still see structure in them; if they are much brighter, they'll appear as burned-out white areas.

In altering the brightness scale, it is best to render the most important details in shades of gray that are easily distinguished on your display. Here are some common methods for tweaking an image.

Linear scaling. When we "stretch" a chosen range of pixel values uniformly across the display's gray scale from low to high, we perform linear scaling. It does a fine job of displaying features that span a limited range of pixel values and is thus effective for enhancing low-contrast objects.
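
A minimal sketch, assuming a NumPy image array and a hand-chosen display window:

    import numpy as np

    def linear_scale(image, low, high):
        """Map pixel values in [low, high] onto the display range 0-255;
        values outside the window clip to pure black or pure white."""
        stretched = (image.astype(float) - low) / (high - low)
        return (np.clip(stretched, 0.0, 1.0) * 255).astype(np.uint8)

    # For the galaxy-cluster example above, linear_scale(image, 350, 1350)
    # puts the sky (500) at 15 percent of full brightness and the galaxy
    # nuclei (1,200) at 85 percent; stars brighter than 1,350 clip to white.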

Gamma scaling. When the brightness spans a wide range, linear scaling falls down. You are stuck with losing the dark end in murk or burning out the high end. Of course, we're used to seeing images of galaxies with a burned-out nucleus set against a jet-black sky, but this artificiality can be overcome with digital imaging. Pixel values in a CCD image of a galaxy contain information about the bright nuclear region, the bright disk, and the faint outer areas. With gamma scaling we can display all of that information at once. I like gamma scaling because it is such a flexible tool for fine-tuning middle tones without changing whites and blacks. Furthermore, when outputting images with a laser printer, gamma scaling can compensate for the machine's tendency to darken the middle grays. Laser prints can then be made to match what is seen on a computer monitor.
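
One common way to write gamma scaling (the exact curve varies from program to program) is a linear stretch followed by a power law:

    import numpy as np

    def gamma_scale(image, low, high, gamma=2.0):
        """Linear stretch raised to the power 1/gamma. Gammas above 1
        brighten the middle tones while leaving the endpoints -- pure
        black and pure white -- untouched."""
        stretched = np.clip((image.astype(float) - low) / (high - low), 0.0, 1.0)
        return (stretched ** (1.0 / gamma) * 255).astype(np.uint8)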

Logarithmic scaling. This method resembles gamma scaling in that it has a curved "transfer function," but here the curve rises very rapidly, providing an extreme enhancement of low-value pixels. This function is so powerful that it is seldom useful by itself; deep-sky objects always appear too bright. We can, however, moderate the logarithmic function by averaging it with linear scaling. I usually reserve logarithmic scaling for those "hard-case," deep-sky images that gamma scaling cannot handle. For example, I've used it to show the sprays of stars and debris around the Whirlpool Galaxy, M51, and the faint shells of gas that surround some planetary nebulae. Logarithmic scaling allows me to display these extraordinary features and still see the familiar, bright, inner parts of the objects.
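
A sketch of a moderated logarithmic curve, with an adjustable blend between log and linear scaling; the particular log function and its steepness parameter are illustrative:

    import numpy as np

    def log_scale(image, low, high, blend=0.5, strength=1000.0):
        """Logarithmic stretch averaged with a linear one. blend=1.0 gives
        pure log scaling (usually too strong by itself); blend=0.5
        moderates it as described above. 'strength' sets how steeply the
        curve rises from black."""
        lin = np.clip((image.astype(float) - low) / (high - low), 0.0, 1.0)
        log = np.log1p(strength * lin) / np.log1p(strength)
        return ((blend * log + (1.0 - blend) * lin) * 255).astype(np.uint8)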

Histogram scaling. It is possible to manipulate the shape of the histogram directly, to display the full range of information in virtually any image.

The simplest form of this scaling is called "histogram equalization." It produces a flat histogram with an equal number of pixels for each value of brightness. While doing this allows the full range of intensity to be displayed, the result is often aesthetically poor. It is far more pleasing and realistic to create a histogram with its peak around 20 percent of full scale and an exponential drop toward higher and lower pixel values. In this manner, virtually any deep-sky image can be displayed well.
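
Here is a sketch of plain histogram equalization via the cumulative distribution function; the more pleasing peaked-histogram variant described above would substitute an exponential target for the flat one:

    import numpy as np

    def equalize(image, bins=256):
        """Replace each pixel by the fraction of pixels at or below its
        value (the cumulative distribution), which flattens the histogram
        across the display range."""
        counts, edges = np.histogram(image, bins=bins)
        cdf = counts.cumsum().astype(float)
        cdf /= cdf[-1]                              # normalize to 0-1
        centers = 0.5 * (edges[:-1] + edges[1:])    # bin midpoints
        return (np.interp(image.astype(float), centers, cdf) * 255).astype(np.uint8)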

When I have a large number of pictures to process and not much time to spend on each one, I trust histogram scaling to yield satisfactory results. Later, I can apply gamma and logarithmic scalings to see if I can milk a little more visual information from the image.

It is important to remember that any form of scaling renders an image unusable for photometry. Not only does scaling toss away the zero point of the brightness scale, but it also distorts the relationship between the original pixel value and its brightness in the final product. If you plan to do quantitative analysis, do not alter your images; but if you want beautiful and exciting pictures, treasure these manipulative routines.

SPATIAL ENHANCEMENTS

Contrast often depends on the angular size of details in an image. For example, large-scale features, such as the gross distribution of light across a planetary disk, may be rendered faithfully while small-scale clouds or surface markings show greatly reduced contrast. Atmospheric turbulence, diffraction, and imperfect optics spread the light from small details over a larger region, making them difficult or impossible to see.

"Spatial processing" seeks to reverse this abnormality and increase the contrast of small-scale features without changing that of large-scale ones. Images that start out soft and fuzzy become crisp before your eyes. Tiny lunar craters, Jovian festoons, Martian dust storms, and solar granulation emerge from images that appear hopelessly blurred.

There are two broad approaches to spatial enhancement: convolution methods and Fourier methods. In general, convolution methods are fast and reasonably effective, and Fourier methods are slow and powerful.

The best-known convolution method is "unsharp masking," a technique for manipulating contrast. Generally we want to improve the appearance of small features while leaving large ones unchanged. If we make a copy of the image in which each pixel is an average of the group of pixels surrounding it, the large features will not be affected very much, but the small ones will be blurred. This blurred copy is an unsharp mask of the original.

Now, for example, we might multiply the pixel values of the original image by three and then subtract the pixel values of the unsharp copy twice. Large features are hardly changed, because for them the blurred copy is nearly identical to the original, so the subtraction undoes most of the multiplication. Small features, however, were blurred out of existence in the unsharp mask, so for them the subtraction removes only a nearly constant background. The process has tripled the contrast of small features!
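
A sketch of this triple-and-subtract-twice recipe, using SciPy's uniform_filter as the neighborhood average (a simple box blur; the kernel size and gain here are illustrative):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def unsharp_mask(image, size=5, gain=3):
        """Blur a copy of the image (the 'unsharp mask'), then form
        gain * original - (gain - 1) * blurred. With gain = 3 this is the
        example above, tripling the contrast of features smaller than the
        blur neighborhood while leaving large features nearly unchanged."""
        original = image.astype(float)
        blurred = uniform_filter(original, size=size)   # box-average blur
        return gain * original - (gain - 1) * blurred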

Like any powerful tool, convolution filters are easy to misapply. The trick is to match their strength to the amount of image softness. Applying too small a convolution "kernel" (the neighborhood around each original pixel) to a soft planetary image will enhance nonuniformities in the CCD detector and introduce spurious details without helping the image very much. We need to apply a kernel with a radius about 1 1/2 times that of the "blur circles" that form the image.

Frankly, unsharp masking sounds suspiciously like a free lunch -- we are getting something for nothing. This is not true, however; we pay a heavy price in the precision the original data must have. For an image to process well, we need to know the brightness of every pixel to better than 1 part in 1,000. (Uncalibrated 12-bit images have roughly 10 times this much error, and 8-bit images cannot be calibrated well enough to give acceptable results.) Data this good demand careful attention to every calibration step: a set of tricolor planetary images requires taking and averaging more than 100 dark and flat-field frames. Once these master calibration frames are derived, however, they serve for all the images taken during that observing session.

Fourier enhancement. Just as any sound can be broken down into some combination of pure (single-frequency) tones, any image can be reduced to a combination of sine-wave intensity patterns by a process called Fourier analysis. Small details correspond to high-frequency patterns, and large details to low-frequency patterns. The result is a spectrum of intensity values across the various frequencies -- the "Fourier transform." We can then enhance the frequencies that correspond to small details. Performing an inverse Fourier transform on the altered spectrum converts it back into an image with enhanced detail.

Another powerful enhancement technique is maximum-entropy processing. In any image with stars, we know exactly how an ideal point of light (a star) was blurred by atmospheric seeing, tracking errors, and the telescope's optical system. Maximum entropy seeks the most probable distribution of light in the original, unblurred scene. It is an iterative process that tries to recreate the "ideal" original image in a series of increasingly better approximations.
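
To make the Fourier approach concrete, here is a minimal high-frequency boost in Python/NumPy. The cutoff and boost factors are illustrative; practical software shapes the enhancement curve far more carefully:

    import numpy as np

    def fourier_boost(image, cutoff=0.1, boost=2.0):
        """Transform the image, amplify spatial frequencies above 'cutoff'
        (in cycles per pixel -- the small details), then transform back."""
        spectrum = np.fft.fft2(image.astype(float))
        fy = np.fft.fftfreq(image.shape[0])[:, None]
        fx = np.fft.fftfreq(image.shape[1])[None, :]
        high = np.hypot(fx, fy) > cutoff      # mask of high frequencies
        spectrum[high] *= boost
        return np.fft.ifft2(spectrum).real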

Image convolution and unsharp masking, Fourier enhancement, and maximum-entropy processing are fundamentally similar in that all three methods allow us to restore images that have been blurred.

COMPOSITING

CCDs record only small black-and-white fields, yet we value colorful, panoramic views. Image processing provides the means to combine individual CCD frames into larger mosaics, color renditions, or both. This work is fun and everybody loves the results.

Color composites require three images shot through color filters, with careful attention paid to the balance of exposures. The trio is then assembled in the computer to create a full-color view. A mosaic is made from slightly overlapping images that can be joined together.

Producing a color picture is difficult because of slight shifts and seeing changes among separate images. Furthermore, most computers are not able to display a full range of color, but this situation is rapidly changing because of demands by the electronic-publishing industry. The basic idea is to load three monochrome images into the computer's memory, and then adjust their sizes and orientations until they fit over each other perfectly. Once they conform, the computer can mix each pixel's red, green, and blue intensities to display the color version. Color CCD imaging was described in this magazine's May 1993 issue, page 34.
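
A bare-bones sketch of the compositing step, assuming three frames already scaled to the 0-255 display range. Registration in real software is far more sophisticated than the integer-pixel shift shown here:

    import numpy as np

    def shift(frame, dy, dx):
        """Crude integer-pixel registration by rolling the array; real
        programs also handle rotation, scale, and sub-pixel shifts."""
        return np.roll(frame, (dy, dx), axis=(0, 1))

    def tricolor(red, green, blue):
        """Stack three registered monochrome frames into one RGB image."""
        return np.dstack([red, green, blue]).astype(np.uint8)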

I started making mosaics almost by accident. With a Lynxx CCD camera attached to my 12-inch reflector, I would tour the Moon hoping to catch moments of perfect seeing. I mentioned my "moonwalking" to New York amateur Gregory Terrance, who was also taking CCD images of the Moon. He sent me a series of four pictures that happened to overlap. After processing Greg's images, I used a graphic-arts program called PhotoStyler to paste them together in the computer. I was amazed how nice the mosaic looked.

Good mosaics require careful flat-fielding and identical processing or the edges won't match. Even so, I've found it necessary to adjust the brightness and contrast slightly when I paste each new image into place.

I could never have done this work in my darkroom. This does not mean that image processing will supplant photography; what we will see is photography and image processing growing side by side. Kodak has recently announced its Photo CD system, which converts slides and negatives into digital files on a CD-ROM disc. Astrophotographers are already beginning to combine the advantages of conventional silver-based emulsions with the power of digital image processing.

For me, and I suspect for many other amateur astronomers, the CCD and computer at first seemed alien to amateur astronomy. Yet the longer I work with CCDs, the more convinced I become that the opposite is true. CCDs and digital image processing give us sharper eyes than ever before to see deeper into the universe.

The author of An Introduction to Astronomical Image Processing and The CCD Camera Cookbook, Richard Berry is now developing astronomical image processing software for Microsoft Windows.