
Understanding Video Compression

Compression is the engine of digital video. It is essential because, without it, transmitting full-motion 525-line video (used in the U.S.) would require a bandwidth of as much as 200 megabits per second. If this volume of data had to be transmitted over regular telephone lines using modems, it would be difficult to achieve more than 33.6 kilobits per second. At that speed, transmitting a single second of high-quality video would take some two hours.
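The arithmetic behind that "some two hours" figure can be checked directly:

```python
# Back-of-the-envelope check of the figures above: how long would one
# second of uncompressed 200 Mbit/s video take over a 33.6 kbit/s modem?

video_bits_per_second = 200_000_000   # uncompressed full-motion 525-line video
modem_bits_per_second = 33_600        # fastest common telephone modem of the era

seconds_per_video_second = video_bits_per_second / modem_bits_per_second
print(f"{seconds_per_video_second:.0f} s ≈ {seconds_per_video_second / 3600:.1f} hours")
```

The ratio works out to roughly 5,950 seconds, i.e. about an hour and three-quarters per second of video, which the article rounds to "some two hours."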

The MPEG Family

The heart of video compression is the MPEG standard. MPEG stands for Moving Picture Experts Group, a committee formed in 1988 under the leadership of Leonardo Chiariglione of Italy. Curiously, when two consecutive 1988 meetings were held in Turin and London, only two of the delegates attended both sessions. MPEG had a predecessor: the Photographic Experts Group of the International Organization for Standardization (ISO) began work on a data compression standard for color images in 1982. In 1986 it was joined by a study group from the International Telegraph and Telephone Consultative Committee (CCITT) and became the Joint Photographic Experts Group (JPEG).

The basic JPEG standard is called a "symmetrical" system and is mainly used for still images, Web sites and video conferencing, in which every conference station must be able to compress and decompress information equally. In contrast, broadcast television is characterized by a powerful compressor (at the studio) feeding a very large number of smaller decompressors (at the viewers' end). The MPEG broadcast standard was therefore designed as an "asymmetrical" system: the more that added complexity in the compressor reduces the data rate (bandwidth) or simplifies the decompressor, the greater the economic benefit at the viewer's end.

What Is Compression?

Compression acts on several levels. First, it removes irrelevant information. Some information is truly irrelevant in that the intended recipient cannot perceive that it is missing; there is no point, for example, in transmitting more resolution than the receiving device can use. Noise from film grain and film scratches likewise takes up bits while carrying no picture information, so eliminating it helps. Most importantly, many parts of an image (backgrounds, for example) do not change very often. By eliminating such redundant information, one reduces the size of the data stream (bandwidth) needed for transmission.
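The redundancy idea can be made concrete with a deliberately simplified sketch (not any actual MPEG syntax): instead of sending every frame whole, send only the pixels that changed since the previous frame.

```python
# Simplified sketch of temporal redundancy removal: transmit only the
# pixels that differ from the previous frame, plus their positions.

def frame_delta(prev, curr):
    """Return (index, new_value) pairs for changed pixels only."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev, curr)) if p != c]

def apply_delta(prev, delta):
    """Reconstruct the current frame from the previous frame plus the delta."""
    frame = list(prev)
    for i, value in delta:
        frame[i] = value
    return frame

prev = [10, 10, 10, 10, 10, 10]   # static background
curr = [10, 10, 99, 99, 10, 10]   # a small object has entered the scene
delta = frame_delta(prev, curr)   # only 2 of 6 pixels need transmitting
```

On a mostly static scene, the delta is a small fraction of the full frame, which is exactly why unchanging backgrounds compress so well.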

From cinematography we know that one frame of film is exposed (or sampled) 24 times a second to create flickerless motion. On video in the United States, the TV camera's photosensitive image area is sampled 30 times a second. Each of these two-dimensional images is thus a sample in time of the moving image. The fact that real-world scenes usually contain more (or faster) horizontal motion than vertical motion helps reduce compression complexity. By taking advantage of these characteristics and digitally encoding the video information with computer algorithms (sequences of steps), the MPEG standards can ultimately deliver compression ratios of up to 200:1. In contrast, the JPEG video compression algorithm achieves a ratio of less than 20:1.
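One way encoders exploit motion between those samples in time is block matching: find where a block of the previous frame moved to, then code only a motion vector and a small residual. The following is an illustrative one-dimensional sketch, not the actual MPEG search algorithm:

```python
# Illustrative block-matching sketch: find the horizontal offset in the
# previous frame's row that best matches a block from the current frame,
# using the sum of absolute differences (SAD) as the match criterion.

def best_match(prev_row, block):
    """Return (offset, SAD) of the best horizontal match for block."""
    def sad(off):
        return sum(abs(p - b) for p, b in zip(prev_row[off:off + len(block)], block))
    candidates = range(len(prev_row) - len(block) + 1)
    off = min(candidates, key=sad)
    return off, sad(off)

prev_row = [0, 0, 5, 9, 5, 0, 0, 0]   # a bright feature at offset 2
block    = [5, 9, 5]                  # the same feature in the current frame
off, err = best_match(prev_row, block)
# a perfect match (err == 0) means only a motion vector needs coding
```

Because horizontal motion dominates in real scenes, a search concentrated along the horizontal axis finds most matches cheaply, which is the complexity saving the paragraph above alludes to.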

4:2:2

The frequency chosen for sampling (the measurement of a signal at periodic intervals) digital video was 13.5 megahertz. As this is fairly close to four times the NTSC (the U.S. analog TV standard) subcarrier frequency, it acquired the descriptor "4" to represent the luminance (Y) sampling frequency. The notation 4:2:2 means that Y is sampled at 13.5 megahertz and that the blue and red color-difference signals are each sampled at half this frequency; hence, 4:2:2.
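What 4:2:2 buys can be tallied per picture line. Assuming the commonly cited figure of 720 active luminance samples per line at 13.5 megahertz, a quick sketch:

```python
# Sketch of the sample count per picture line under different Y:Cb:Cr
# sampling schemes. 4:4:4 carries full-rate color difference; 4:2:2
# halves the Cb and Cr rates relative to luminance.

def samples_per_line(width, scheme):
    """Total samples per line for a (Y, Cb, Cr) sampling-ratio scheme."""
    y, cb, cr = scheme
    return width + width * cb // y + width * cr // y

width = 720                                  # active Y samples per line
full = samples_per_line(width, (4, 4, 4))    # full-rate color
sub  = samples_per_line(width, (4, 2, 2))    # half-rate color difference
```

Halving the two color-difference channels cuts the total from 2,160 to 1,440 samples per line, a one-third saving with little visible loss, since the eye resolves color less sharply than brightness.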

The first of the MPEG standards, MPEG-1, delivers decompressed data in the range of 0.6 to 5.0 megabits per second, allowing CD-ROM players to play full-motion color movies at 30 frames per second. MPEG-1 is also optimized for the transfer rate of between 1 and 1.5 megabits per second utilized by the telephone communications links that usually connect to the Internet.

Interlaced vs. Progressive

But MPEG-1 includes no concept of interlaced scanning; every picture or frame contains all the lines that make up a complete image (the so-called sequential or progressive scanning used by computers to make text more readable). Captured from an NTSC source, MPEG-1 therefore operates at 30 frames per second; field and frame are effectively the same, because all the lines are drawn progressively.

Interlaced scanning was devised in the early days of television to reduce the flicker of a progressive scanning system without using a wider bandwidth. Film projectors eliminate flicker by using double- or triple-bladed shutters, so that each frame is projected two or three times; at a repetition rate of 48 or 72 images per second, there is no perceptible flicker. In the U.S., interlaced scanning scans alternate lines of a picture or frame (each set called a field) in one-sixtieth of a second; a full frame of 525 lines is thus built from two alternating fields of 262.5 lines each, united like two combs, in one-thirtieth of a second. It is then the field rate, rather than the frame rate, that determines whether there is any perceptible flicker. Since MPEG-1 had no tools to accommodate interlaced video, there was a need for a standard that would include broadcast-quality video.
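The "two combs" image can be sketched directly: each field holds alternate lines, and weaving them together yields one full frame.

```python
# Sketch of interlace: two fields, each carrying alternate lines and
# captured 1/60 s apart, are woven into one complete frame every 1/30 s.

def weave(odd_field, even_field):
    """Interleave two fields into a single full frame, line by line."""
    frame = []
    for odd_line, even_line in zip(odd_field, even_field):
        frame.append(odd_line)
        frame.append(even_line)
    return frame

odd  = ["line 1", "line 3", "line 5"]   # first field: odd-numbered lines
even = ["line 2", "line 4", "line 6"]   # second field, scanned 1/60 s later
frame = weave(odd, even)                # one complete frame per 1/30 s
```

The viewer sees 60 field refreshes per second, which is why flicker tracks the field rate even though complete frames arrive only 30 times a second.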

MPEG-1 was therefore "frozen" in 1991, and in the same year the MPEG-2 process was started; MPEG-2 became a standard in 1995. MPEG-1 had a very simple system layer designed to work with digital storage media that are nearly error-free (such as CD-ROMs). The requirements for MPEG-2, however, were different. It had to work as an asymmetrical system, in which more time and resources are spent encoding video than decoding or decompressing it. Indeed, MPEG-2 supports the higher decompressed data rates, ranging from six megabits per second to 15 megabits per second, needed for broadcast-quality signals.

Hooray for MPEG-2

Perhaps the success of MPEG-2 is best highlighted by the demise of MPEG-3, which was initiated with the objective of providing a compression system suitable for high definition television. It was soon abandoned when it became apparent that the versatility of MPEG-2 embraced this application with ease. MPEG-2 is now the basis for the Advanced Television Systems Committee's Digital Television Standard, which is being implemented for both standard and high definition transmissions in the U.S.

MPEG-2 can support displays of different aspect ratios. For example, a program may be transmitted with a 16:9 aspect ratio, and "pan and scan" information may be associated with the pictures to tell the decoder which part of the image to show on a 4:3 display. The mechanism supports both horizontal and vertical instructions (allowing it, for instance, to show 4:3 material on a 16:9 display).
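The geometry of pan and scan is simple to sketch. The function and parameter names below are hypothetical (the real standard carries the offsets as coded parameters in the bitstream); the sketch only shows how a 4:3 window is sized and slid across a 16:9 source:

```python
# Hypothetical pan-and-scan sketch: size a 4:3 window inside a wider
# source frame, then slide it horizontally by a pan offset, clamped so
# the window never leaves the picture.

from fractions import Fraction

def pan_scan_window(src_w, src_h, display_aspect=Fraction(4, 3), pan=0):
    """Return (left, width) of the horizontal crop for the display aspect."""
    win_w = int(src_h * display_aspect)        # window width matching 4:3
    centre = (src_w - win_w) // 2              # centred by default
    left = max(0, min(src_w - win_w, centre + pan))
    return left, win_w

left, win_w = pan_scan_window(1920, 1080)      # 16:9 frame -> 1440-wide window
```

A centred crop of a 1920-wide frame yields a 1440-wide window starting at x = 240; nonzero pan values shift the window toward the action, which is exactly what the transmitted pan-and-scan instructions direct the decoder to do.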

MPEG-2 offers a number of tools for coding interlaced pictures, thus allowing for 60 fields per second, as opposed to the 30 progressive frames per second allowed by MPEG-1. Viewing tests with non-expert consumers in Ottawa, Canada, showed that, for a given number of bits, interlaced scanning produced a "better" picture than progressive scanning. On the other hand, the David Sarnoff Labs have demonstrated a black-and-white camera test system capable of producing a 750-line progressively scanned picture that appeared to have resolution superior to that of a 1,125-line black-and-white interlaced picture. The arguments about the advantages and disadvantages of interlaced versus progressive scanning may never be resolved, however. One proposed solution is to broadcast an interlaced signal and convert it at the receiving end into a progressively scanned signal for display.

MPEG-2 also permits several forms of "scaling" (variable resolution). In each scalable mode, the bitstream is divided into a base layer, which must always be decoded, plus one or more enhancement layers whose decoding is optional. Commercially speaking, a video-on-demand service might deliver the base layer for one price and the enhancement for a supplemental charge.

Another standard, MPEG-4, was originally conceived as a system optimized for very low bit rates. However, it has evolved into a totally new structure for multimedia and now supports 3-D. Yet another, MPEG-7, is aimed at indexing video and other media by content.

Looking further into the future, scientists predict that instead of compressing an electronically scanned image, technology will make it possible to send parameters associated with, for example, the model of a human face. In any case, it seems that today's compression technology has only scratched the surface.

* Information partially excerpted from "Video Compression" (McGraw-Hill, 322 pp.) by Peter D. Symes and "The Dictionary of Multimedia" (Franklin, Beedle & Assoc. Inc., 346 pp.) by Brad Hansen.
COPYRIGHT 1998 TV Trade Media, Inc.

Publication:Video Age International
Date:Oct 1, 1998