# Smart security through mathematical modeling.

THE FEDERAL GOVERNMENT AND many private enterprises have used
mathematical modeling and computer simulation techniques since the late
1950s as aids in developing policies, conducting research and
development, and engineering complex systems. During the early years,
the costs associated with using these techniques were beyond the means
of most managers and analysts. In fact, for most applications the cost
of modeling or simulating a system was greater than the cost of a bad
system.

Today the costs of modeling and computer simulation are within the reach of managers with modest resources. In addition, the cost of a bad system or bad decision (particularly in the security industry) is significantly greater than it was 30 years ago. Given these facts, modeling and simulation are emerging as viable, even necessary, tools for the modern manager.

What is a model? It is an idealized, usually simplified, representation of something that exists in the real world. Models have been used throughout history as people have tried to understand, describe, and influence the environment they live in. Maps and globes depicting the earth are models. Similarly, Cesare Lombroso's late 19th century theory of criminal anthropology was a model intent on explaining the cause of, and perhaps predicting, criminal behavior.

A good model stimulates thought, supports further research, and usually evolves until it is no longer needed or becomes invalid. Think about the differences between the navigational charts used by Christopher Columbus in 1492 and those used today. Lombroso's model became invalid long ago, but it is studied in criminal justice courses because it is the foundation for most positivist theory. Early 20th century work by other criminologists was stimulated by Lombroso's model to a large degree.

Although expressed in terms of mathematical relationships, a mathematical model is no different than a globe or sociological model. It is a language designed to describe some real phenomenon in terms that can improve our understanding of the phenomenon and assist us in making better decisions as we attempt to build or modify systems or respond to chance events.

The formula a = -32 ft/s² describes the acceleration an object falling toward earth experiences due to gravity. The term "normal(20, 5)" could describe an access control point that processes an average of 20 people per minute. It also tells you that the number of people processed varies normally (in a bell curve) and that the standard deviation around that average is five people per minute. Once the language of mathematics is understood, we have a powerful analytic tool for describing and manipulating complex systems.

The two basic categories of mathematical models are deterministic and stochastic (sometimes called probabilistic). The formula a = -32 ft/s² is an example of a deterministic model. Day, night, weekday, or weekend, this relationship is a physical fact. It does not change. Variations, or parametric analysis, can be built around that fact. Questions like "How big should a parachute be?" could not be reliably answered if these physical relationships were not completely predictable.
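As a quick sketch, the deterministic relationship can be coded directly; the same input always produces the same output (the function names here are illustrative, not from any standard library):

```python
G = 32.0  # magnitude of gravitational acceleration, in feet per second squared

def velocity_after(seconds: float) -> float:
    """Downward speed (ft/s) of an object after falling for a given time."""
    return G * seconds

def distance_fallen(seconds: float) -> float:
    """Feet fallen after a given time, ignoring air resistance: d = (1/2)gt^2."""
    return 0.5 * G * seconds ** 2

# Deterministic: these values never vary from run to run.
print(velocity_after(3.0))   # 96.0
print(distance_fallen(2.0))  # 64.0
```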

Conversely, a stochastic model has an element of uncertainty or risk associated with it. Using an access control point as an example, a random collection of one-minute counts at the point would certainly not consist entirely of 20s (that is, exactly 20 people passing the point). We can, however, make some reliable predictions about the most likely capacity range of the point based on the distribution information available. Couple that information with data on the size of the population you need to serve, and the foundation exists for an informed decision.
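To make the contrast concrete, here is a minimal sketch that draws a random hour of one-minute counts from the article's normal(20, 5) assumption (the variable names and the fixed seed are illustrative choices):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# One-minute throughput at a hypothetical access control point, modeled as
# normal with mean 20 and standard deviation 5 people per minute.
counts = [max(0, round(random.gauss(20, 5))) for _ in range(60)]  # one hour

average = sum(counts) / len(counts)
# Roughly 95 percent of one-minute counts should land within two standard
# deviations of the mean, i.e., between 10 and 30 people per minute.
share_in_range = sum(10 <= c <= 30 for c in counts) / len(counts)
print(round(average, 1), share_in_range)
```

No single minute is predictable, but the distribution as a whole is: that is the sense in which a stochastic model still supports reliable capacity decisions.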

Linear and nonlinear programming, goal programming, network analysis, and deterministic dynamic programming are examples of mathematical techniques appropriate for solving problems in which we are certain of the relationships that exist between the different factors that are important to us. These approaches can also be an important foundation for analyzing stochastic systems because deterministic models are normally simpler and easier to manipulate than their probabilistic counterparts.

By judiciously assuming away the uncertainties in a system, we can often identify the parts (attributes) of a system that have the greatest influence over its operation. This process focuses a problem and allows for more effective analysis of a stochastic system.

Markov chains, queuing theory, inventory theory, decision analysis, and simulation are examples of probabilistic models. A Markov chain consists of a sequence of stochastic events in which each event depends only on the outcome of the one immediately before it. An example of a Markov chain could be an alarm response. A specific alarm may or may not activate; if it does, it may or may not be false; if it is an actual alarm, force may or may not be required; if force is used, it may or may not be deadly physical force.

Based on this chain, the probability that deadly force will be required for any given alarm activation can be estimated. Queuing theory and inventory theory focus on moving things (entities) through a system. They answer questions like, "How long can I expect lines to be?" or "How many widgets do I need to keep in stock?"
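That estimate is just the product of the stage probabilities along the chain. In this sketch the numbers are purely illustrative assumptions, not figures from any real system:

```python
# Illustrative stage probabilities for the alarm-response chain (assumed values).
p_real = 0.10    # alarm is genuine rather than false
p_force = 0.20   # force is required, given a genuine alarm
p_deadly = 0.05  # deadly force is used, given that force is used

# Probability that any given activation ends in deadly force:
p_deadly_per_activation = p_real * p_force * p_deadly
print(round(p_deadly_per_activation, 6))  # 0.001, i.e., one activation in a thousand
```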

Most of these techniques are complementary. In fact, most complex systems and issues require a combination of these tools if planners are to understand the interrelationships fully and work them toward an optimal solution.

LINEAR PROGRAMMING IS A DETERMINISTIC mathematical problem-solving technique. It is designed to optimize a specific goal, such as minimizing cost or maximizing profit.

To look at this modeling technique from a security manager's perspective, assume we are installing an alarm system with a fixed budget of $40,000. For the sake of simplicity, we will use only a few types of sensors, and our objective is to cover the maximum area possible using these sensors while staying within the budget. The sensors and their relevant characteristics for this example are in Exhibit 1. Our specific protection objectives and technical considerations are listed in Exhibit 2.

Exhibit 3 shows the formulation of the model. It can be solved manually, using the principles of linear algebra, or entered into a commercial software package that computes the optimal mix.

This particular problem is small and could be solved intuitively. However, I'm sure you can imagine a scenario containing 25 or 30 different types and brands of sensors with 50 or more considerations that must be taken into account.

By the way, approximately 294,000 square feet can be covered using seven infrared sensors, 414 ultrasonic sensors, 10 exterior microwave sensors, and five proximity devices. In addition to the number and type of sensors that should be used, the sensitivity of the solution to changes in sensor costs and characteristics can also be determined easily.
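For this small instance, the optimal mix can even be sketched greedily in a few lines of Python: meet each coverage requirement at its minimum cost, then spend whatever budget remains on the sensor with the best coverage per dollar (ultrasonic, at roughly 7.8 sq. ft. per dollar). This shortcut works only because of the example's simple structure; a general problem calls for a real linear programming solver such as SciPy's `optimize.linprog`. Sensor data is taken from Exhibit 1:

```python
import math

BUDGET = 40_000
# (cost per unit in $, area protected per unit in sq. ft.)
INFRARED, ULTRASONIC, MICROWAVE, PROXIMITY = (45, 300), (90, 700), (200, 200), (75, 0)

# Mandatory minimums implied by the constraints in Exhibit 2:
x1 = math.ceil(2_000 / INFRARED[1])   # infrared covers the 2,000 sq. ft. poor-ultrasonic rooms
x3 = math.ceil(2_000 / MICROWAVE[1])  # microwave covers the 2,000 sq. ft. exterior area
x4 = 5                                # one proximity device per safe

committed = x1 * INFRARED[0] + x3 * MICROWAVE[0] + x4 * PROXIMITY[0]

# Spend the remainder on ultrasonic, the best coverage per dollar.
x2 = (BUDGET - committed) // ULTRASONIC[0]

coverage = x1 * INFRARED[1] + x2 * ULTRASONIC[1] + x3 * MICROWAVE[1] + x4 * PROXIMITY[1]
print(x1, x2, x3, x4, coverage)  # 7 414 10 5 293900
```

The result reproduces the mix reported above: seven infrared, 414 ultrasonic, 10 microwave, and five proximity sensors, covering 293,900 square feet for $39,950.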

Goal and nonlinear programming, two other techniques mentioned before, attack the problem of optimizing a deterministic set of circumstances in a fashion similar to that of linear programming. Goal programming solves problems that have more than one goal by using multiple iterations of linear programming techniques. Nonlinear programming relaxes the assumption that the relationships that exist are linear and uses the principles of calculus to arrive at an optimal solution.

HERE'S AN EXERCISE IN SIMULATION. Many of us employ central station alarms. We know alarms will activate. However, we do not know exactly when they will occur, whether someone will be immediately available to respond to them, how long it will take to respond, or how long it will take to resolve the alarm condition.

Let's assume we are planning to add 25 zones to a system that currently has 75 zones. We have two patrols on duty around the clock. They are busy but able to accomplish periodic security and safety checks in addition to responding to alarms. Our task is to determine what the impact to our system will be if we add the additional zones.

The first step is to visualize the system as it currently exists. The focus of this process is to identify the portions of the system that have the greatest effect on the overall operation and the problem being studied.

In this particular instance, the system is set in motion when an alarm activates. If a patrol is available, it will be dispatched to the alarm and will assess and resolve the alarm condition. If a patrol is not available, the alarm will be placed on hold until one of the patrols becomes available. A technique that can assist in visualization and simulation is a flowchart. A flowchart for this example is found in Exhibit 4.

The flowchart describes the system logically. A quantitative understanding would complete the picture of the system. One method for quantifying the system is to review alarm records and describe everything that occurs in terms of time. For example, the occurrence of 48 alarms every day, on average, could also be expressed as a mean time between alarms of 30 minutes.

The same approach can be used to estimate the average time it takes to respond to and resolve an alarm. For this example, let's assume the average time between alarms is 30 minutes. We will also assume that the average response and resolution times are 15 minutes each. (All distributions are normal, with a fixed standard deviation.)

When the system is understood in both logical and mathematical terms, its operation can be reproduced by a computer. Such a simulation allows us to assess the impact of changes in the system quickly and test possible responses to those changes easily.
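As a rough sketch of that logic, here is a tiny two-patrol alarm queue in Python. The function, its parameters, and the normal-distribution spread are all illustrative assumptions, not the article's actual code:

```python
import random

def simulate(hours=24, patrols=2, mean_gap=30.0, respond=15.0, work=15.0,
             spread=3.0, seed=1):
    """Crude alarm-response simulation; all times are in minutes."""
    random.seed(seed)
    end = hours * 60.0
    free_at = [0.0] * patrols  # when each patrol next becomes available
    clock, alarms, busy, max_delay = 0.0, 0, 0.0, 0.0
    while True:
        clock += max(0.1, random.gauss(mean_gap, spread))  # next alarm arrives
        if clock >= end:
            break
        alarms += 1
        free_at.sort()
        start = max(clock, free_at[0])  # alarm holds if every patrol is busy
        response = max(0.1, random.gauss(respond, spread))
        resolve = max(0.1, random.gauss(work, spread))
        max_delay = max(max_delay, start - clock + response)
        busy += response + resolve
        free_at[0] = start + response + resolve  # this patrol is tied up
    return alarms, busy / (patrols * end), max_delay

alarms, utilization, max_delay = simulate()
print(alarms, round(utilization * 100, 1), round(max_delay, 1))
```

With a 30-minute mean gap and 30 minutes of total service per alarm, utilization should come out near 50 percent, in the same neighborhood as the figures in Exhibit 5.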

A number of different computer languages and commercial software packages are suitable for conducting this kind of analysis. The actual selection of a computing environment is contingent on the hardware and software available, the types and quantity of data that need to be evaluated, and the preferences of the modeler. This particular simulation is written in SIMSCRIPT II.5 from CACI Products Company.

Once a simulation is written, it must be validated to ensure it is accurately describing the system. Validation is accomplished by running the model and examining whether the computer's output closely approximates the system being studied. In this particular instance, the simulation generates results that closely approximate what we would expect from the system we are modeling. (See Exhibit 5.)

In this example we are assessing the impact of adding additional zones to the system. It is logical to assume that adding alarm points will cause an increase in the number of alarms that occur. That means the average time between alarms will decrease.

Exhibit 6 shows the results of the simulation factoring in the additional alarms. This iteration of the simulation reports that the maximum delay will increase slightly and that alarms could back up. However, the magnitude of the changes is not statistically significant and could just be a chance difference between the systems. Not surprisingly, patrol use increases significantly. For this simple example, the percentage increase in use parallels the growth in the number of zones.

This is another example that is intuitively obvious. As with linear programming, you can imagine security-related systems (people and machines) that are much more complex. When these systems get larger, the relationships between components become more complex. The total impact of a change to one component of a system is not always obvious if it is addressed in isolation from the rest of the parts.

MODELING IS AN INTUITIVE PROCESS that requires the ability to express concrete phenomena in abstract terms. As the system or problem being modeled becomes more complex, the model usually becomes more abstract. The importance of certain fundamental modeling principles grows with the complexity of the model. Some of the more important principles are the following:

* Understand the problem under study and clearly define the desired objectives. Every model has strengths and weaknesses. Knowledge of the objectives facilitates the selection of the most appropriate model.

* Use the correct model. Think about the difference between a globe and a flat map of the world. A flat map would be perfectly suited to measuring the straight-line distance between two distant cities. A ruler and the scale for the map are all that is required. Measuring that way over a sphere would certainly be more difficult.

This concept carries over into mathematical models. Although there is usually more than one way to solve a problem, the approach chosen certainly influences the ease, and often accuracy, of solutions.

* Carefully select the level of resolution. Our simulation example is perfectly suited to describing alarm response in a small guard force. That level of detail with a 100-person guard force would probably not emulate the broader factors that are critical to that scale of operation.

* Make sure your data reflects the real world. Consider the guard force simulation. What if there were a policy that all alarms would be responded to in under seven minutes and the first-line supervisor's performance appraisal depended on meeting that requirement? The alarm records might not reflect what actually happened on the ground. If those alarm records were used to develop the model, a bad decision would probably be made about the best way to meet the demands of an additional 25 zones. There is no substitute for listening to first-line people. (Very often a good solution will come from that level, eliminating the need for a model.)

Modeling is not always the best approach to planning or problem solving. Sometimes it is more effective just to plunge in, make a decision, execute it, and see what happens. Other times, however, these tools and skills can be truly invaluable. Some possible guidelines for when to use modeling tools are as follows:

* When doing so is cost-effective. Although the costs of computing and modeling have become more reasonable in the past 20 years, the costs of collecting data, gleaning the critical information, and developing and operating a model are significant. The potential for a $500 mistake does not justify the effort. A potential savings of two or three man-years every year projected into the future would justify modeling, even in a large organization.

* When you are comfortable with either the tools themselves or the analysts who are supporting you. In addition, if your role is only to recommend solutions, make sure the decision maker is comfortable with the modeling approach to planning.

* When you are developing plans for critical situations that have no room for failure. Many situations, particularly in the security profession, involve one chance only. Emergency evacuation, disaster management, and VIP security are just a few of the duties security managers have to face. Modeling and simulation are well suited to supporting planning and training for those possibilities.

Mathematical modeling and computer simulation are limited only by the ingenuity of the person or team conducting the analysis. They have been used for such tasks as improving personnel scheduling for police services, allocating response areas for urban fire departments, and describing security systems to test their security posture.

However, these tools and techniques are not a panacea for security managers. The effectiveness of these approaches depends on the availability of accurate information and the user's ability to express and analyze a system in abstract terms.

Most important, one must never forget that these tools are only aids to the decision process. Hard work and a healthy dose of common sense are always the most valuable tools any manager, security or otherwise, can possess.

About the Author ... Daniel T. Maxwell is operations research project manager at the Defense Communications Agency in Washington, DC. He is a member of ASIS.

**Exhibit 1: Sensor Types**

| Sensor Type | Cost per Unit | Area Protected (sq. ft.) |
| --- | --- | --- |
| Passive infrared | $45 | 300 |
| Ultrasonic | $90 | 700 |
| Exterior microwave | $200 | 200 |
| Proximity | $75 | 0 |

**Exhibit 2: Technical Considerations and Objectives**

* Two rooms of 1,000 sq. ft. each are not good ultrasonic areas.
* A 2,000-sq.-ft. exterior area requires protection.
* Five safes require additional protection.

**Exhibit 3: Formulation of the Model**

Maximize 300 X1 + 700 X2 + 200 X3 + 0 X4 (note that each X is the number of sensors of one type)

Subject to:

* 45 X1 + 90 X2 + 200 X3 + 75 X4 ≤ 40,000 (the budget in $)
* 300 X1 ≥ 2,000 (the poor ultrasonic area in sq. ft.)
* 200 X3 ≥ 2,000 (the exterior sensor area in sq. ft.)
* X4 ≥ 5 (the five safes)

**Exhibit 4: Omitted**

**Exhibit 5: Simulation Output**

```
SIMULATION WAS RUN FOR 24 HOURS
THERE WERE 2 PATROL(S) ON DUTY
TIME BETWEEN ALARMS WAS 30 MINUTES
PATROLS RESPONDED IN 15 MINUTES AND WORKED FOR 15 MINUTES
PATROL UTILIZATION WAS 49.06 PERCENT
MAXIMUM DELAY WAS 17.06 MINUTES
MAXIMUM NUMBER OF ALARMS WAITING FOR PATROL WAS 0
THE NUMBER OF PATROLS REQUESTED WAS 49
```

**Exhibit 6: Simulation Output with Additional Alarms**

```
SIMULATION WAS RUN FOR 24 HOURS
THERE WERE 2 PATROL(S) ON DUTY
TIME BETWEEN ALARMS WAS 20 MINUTES
PATROLS RESPONDED IN 15 MINUTES AND WORKED FOR 15 MINUTES
PATROL UTILIZATION WAS 72.86 PERCENT
MAXIMUM DELAY WAS 17.15 MINUTES
MAXIMUM NUMBER OF ALARMS WAITING FOR PATROL WAS 1
THE NUMBER OF PATROLS REQUESTED WAS 72
```

Author: Maxwell, Daniel T.
Publication: Security Management
Date: Feb 1, 1991