Does Your Computer Security Measure Up?
"To the extent that the federal government establishes for itself more operational standards and baselines for its own activities, those become rebuttable presumptions for what industry should do," says Advisory Board Chairman Franklin S. Reeder of the Reeder Group.
The board, whose 12 members are drawn from both government and the private sector, has begun the process by examining measurement models already in use in both arenas. Recently, for example, the board met to hear from CitiGroup, First Union, and General Motors, among others, about the approaches or models they use to assess the effectiveness of their computer security programs. "There's an enormous amount that government can learn from private sector practices," says Reeder.
On the government side, NIST has been working on a security measurement model with the Federal Chief Information Officers (CIO) Council, which plays a lead role in helping the federal government develop information technology (IT) policies. That model, referred to as the SSE-CMM (Systems Security Engineering Capability Maturity Model), was also discussed by the board and was expected to be out for public comment by press time.
The SSE-CMM would categorize an agency's computer security programs based on their maturity--rating their progress along a continuum. The model consists of five levels, says Edward A. Roback, acting chief of NIST's Computer Security Division, who serves as the advisory board's secretariat.
The levels include the following:

1) Incomplete. The lowest level, signifying that an agency has developed computer security plans for most major systems, but not all.

2) Complete. At this level the agency has a well-documented security program that meets basic requirements, including plans for all general support systems and applications.

3) Implemented. The agency that meets this level's requirements has developed plans and implemented procedures across all components.

4) Measured. To qualify for this level, the agency must be able to show that the program in place is measurable, meaning the agency has developed some way of assessing how security has resulted in a reduction in vulnerability.

5) Pervasive. At this level, the program in place is continuously adapted to improve cost-effectiveness and respond to changes in the organization, in the threat profile, and in technology.
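The five levels form an ordered continuum, which can be sketched as a simple ordinal scale. The sketch below is illustrative only; the level names follow the article, while the comments paraphrase its descriptions:

```python
from enum import IntEnum

class MaturityLevel(IntEnum):
    """Ordered maturity levels of an agency's computer security program,
    following the five-level SSE-CMM-style model described in the article."""
    INCOMPLETE = 1   # plans exist for most, but not all, major systems
    COMPLETE = 2     # documented program covers all systems and applications
    IMPLEMENTED = 3  # plans and procedures implemented across all components
    MEASURED = 4     # effectiveness is assessed with some metric
    PERVASIVE = 5    # program continuously adapts to cost, threats, technology

# Because the levels are ordered, programs can be compared along the continuum.
assert MaturityLevel.MEASURED > MaturityLevel.IMPLEMENTED
print(MaturityLevel.MEASURED.name, int(MaturityLevel.MEASURED))
```

Using an ordered type (rather than bare labels) captures the article's point that the model rates progress along a continuum rather than as a pass/fail judgment.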
The model, notes Roback, is not meant to move everyone to the highest level. "We want cost-effective security," he says. That means it may be appropriate for many agencies (or private companies, if they were to adopt this model) to fall in the middle.
Roback also notes that use of this type of model assumes that a measurement method, or metric, exists for carrying out level four's requirements. Measurement might be based on event analysis, such as assessing the number of successful attacks a system has sustained over time or how long the system was down due to viruses over consecutive periods. Other options include "red teaming," in which attack scenarios are carried out to test the system, as well as benchmarking and audit processes.
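The event analysis Roback describes amounts to tracking security event counts across consecutive periods and checking whether they trend downward. The sketch below is a hypothetical illustration of that idea; the data and the function name are invented, not drawn from any agency's program:

```python
from statistics import mean

def security_trend(events_per_period):
    """Compare the second half of consecutive reporting periods against the
    first half; a negative result means the event rate is falling (improving)."""
    half = len(events_per_period) // 2
    return mean(events_per_period[half:]) - mean(events_per_period[:half])

# Hypothetical quarterly counts of successful attacks, and hours of
# virus-related downtime, over two years.
attacks = [14, 11, 9, 8, 6, 5, 5, 4]
downtime_hours = [30.0, 22.5, 18.0, 12.0, 9.5, 8.0, 6.5, 5.0]

print(security_trend(attacks))         # negative: fewer successful attacks
print(security_trend(downtime_hours))  # negative: less virus downtime
```

A falling trend in such counts is one concrete way an agency could demonstrate the level-four requirement that its security program measurably reduces vulnerability.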
There is no consensus as to what would be the right approach, but "there's a lot of interest in trying to come up with some kind of a scorecard" to assess the level and adequacy of an agency's computer security program, says Reeder. The board welcomes input from industry, he says.
Author: Sherry L. Harowitz
Article Type: Brief Article
Date: Sep 1, 2000