Uncertainty and Deterrence

Strategic decisions of war or peace are surrounded by uncertainty arising from geopolitics, the adversary's intentions, third parties, doctrinal innovations, new technologies, and more. Due to surprising future developments, what looks like the better option may turn out much worse than a putatively worse choice. When one option looks better but is far more uncertain than another alternative, the planner may select the latter even though it is ostensibly less attractive. In making one's choices or when anticipating those of an adversary, one must consider both the estimated outcomes of the selections and the uncertainties of these estimates. Preference ranking of options may be reversed from their evident order due to uncertainty. This article develops an analytic framework for studying this reversal of preference in evaluating deterrence, without any probabilistic assumptions. It does not propose a theory of deterrence but a method for analyzing and choosing between options. The analysis is based on two concepts: the innovation dilemma and robustly satisfying critical requirements.

This study responds to two weaknesses in theories of deterrence. First, the treatment of uncertainty is inadequate. Deductive mathematical approaches from rational-choice theory often treat uncertainty as a probability that, even when labeled as subjective, is too structured to capture the richness of decision makers' ignorance. Inductive or case-based theories are vulnerable to Solomon's error of presuming that the rich diversity of the past characterizes the rich variability of the future. (1) In fact, the future is far more surprising and inventive than the past. Second, rational-choice theories of deterrence presume that the decision maker's goal is--and should be--to achieve the substantively best possible outcome. Doing so is not prescriptively feasible under severe uncertainty. Rather, decision makers should prioritize their options according to robustness against surprise in attempting to realize critical or essential (but not necessarily optimal) outcomes.

A decision maker faces an innovation dilemma when choosing between a new and innovative option and the current state of the art. (2) The innovation dilemma is a paradigm, a metaphor--originating in the technological realm--that is relevant to decisions under uncertainty in all domains, including the strategic decision to initiate or refrain from war. Returning to the metaphor, one finds that new technologies often yield better outcomes than standard technologies. However, what is new is usually less thoroughly studied and less well understood than what is old. Hence, the new technology may entail unexpected and severely adverse consequences that could make it much worse than the current state of the art. The strategic planner who must choose between an innovative strategy and a more standard policy faces an innovation dilemma. The latter is a paradigm for the dilemmas of uncertainty that face the strategic planner, even when the options do not entail innovative technology. In many situations, the innovation dilemma is an instance of the security dilemma due to the great uncertainty surrounding the intentions and capabilities of a state's potential adversaries, as discussed elsewhere. (3)

To "satisfice" means to satisfy a critical or essential outcome requirement. The decision strategy called robust satisficing is motivated by severe uncertainty. Good outcomes are better than bad ones, suggesting that the best outcome is best. However, when an option must be selected despite severe uncertainty in the outcomes of all available options, the logic of outcome optimization is not implementable: we don't know which option will have the best outcome. In a deterrent situation, the strategic decision maker can instead ask what outcome must be attained or, equivalently, what the worst tolerable outcome is. Under severe uncertainty, the robust-satisficing planner chooses the option that will produce the required outcome over the widest range of deviation of future reality from current anticipation. As we will see, the robust-satisficing choice may differ from the outcome-optimizing choice.

The article shows how robust satisficing is used to manage an innovation dilemma in the context of strategic decisions of war or peace. As mentioned above, it does not propose a theory of deterrence but a method for analyzing and choosing between options. After critiquing deterrence theory, the article then presents a formal analysis of info-gap robust satisficing, the innovation dilemma, and the loss of deterrence. (4) The following section discusses a historical example. The goal here is not descriptive but illustrative: to demonstrate how decision makers could have implemented robust satisficing as a decision strategy. The article then considers a theoretical application, stressing prescription over description.

Three caveats are necessary. First, the study examines binary choices between war and no war. However, strategic choices are rarely binary. Nonetheless, the analysis provides a conceptual framework for understanding and supporting real decision processes under uncertainty. Second, the discussion is limited (mostly) to conflict between two states. The world is never strictly bipolar although subconflicts, camps, or coalitions often emerge that resolve confrontations into bipolarity at some level. Extension to multistate conflict remains largely unexplored. Third, the article examines "the planner" or "the decision maker," while governments are rarely unitary actors. However, the analysis can be applied separately to individual protagonists, illuminating their positions. Moreover, the info-gap robust-satisficing methodology is a prescriptive tool for individual decision makers.

Critique of Deterrence Theories

This section addresses both quantitative and qualitative treatments of uncertainty and the optimization of outcomes.

Treatment of Uncertainty: Quantitative Probabilistic and Game-Theoretic Approaches

In a series of articles, R. Harrison Wagner develops an approach to deterrence based on game theory, with incomplete information represented by subjective probabilities, primarily in the context of bipolar nuclear competition. He argues that bargaining between competing powers exists solely because of "lack of complete information about each other's values." (5) The bargaining hinges on assessments of credibility or probability of threat rather than certainty of threat. Wagner subsequently asserts that "the use of nuclear counterforce strategies... is not necessarily inconsistent with rational behavior." (6)

These game-theoretical arguments depend on incomplete information represented, for instance, by probabilities of strikes, counterstrikes, and levels of damage. Putative values of these probabilities can lead to rational prioritization among the options (e.g., counterforce strike or not). Wagner continues the exploration of game theory for "formalization of theories of deterrence that incorporate incomplete information, learning and the development of reputations" and studies "how much misperception by foreign decision-makers is consistent with rationality in light of these new developments in game theory." (7) In a long section entitled "Deterrence and Uncertainty," he explains that each protagonist "must estimate the probability" that the other protagonist will act or not act in particular ways. Furthermore, "probability... can only be a subjective probability." (8) After characterizing subjective probability, Wagner comments on why it is not necessarily unique and shows that evaluation of probability can be quite involved. He also notes that there can be dispute over utility assignments. (9)

Responding to Wagner, Anatol Rapoport emphasizes the "formidable difficulty" in game theory of obtaining a "meaningful operational definition of subjective probability" and a "meaningful operational definition of utility," which "become insuperable in the context of nuclear deterrence." (10) Also responding to Wagner, Michael McGinnis criticizes the "implausibility of some of the underlying assumptions," especially the heavy computational burden required of decision makers to implement a game-theoretical analysis. He notes that "neither preferences nor beliefs can be directly observed, and yet knowledge of both is crucial to any determination of equilibrium conditions." (11)

Christopher Achen and Duncan Snidal reply, as would Wagner presumably, that rational deterrence theory does not suppose the agents actually implement the game-theory analysis--only that their behavior is consistent with its rational predictions. (12) This point suggests that game theory is useful as a descriptive tool for political scientists and historians, but it also indicates its limitations as a prescriptive tool for decision makers. McGinnis criticizes the use of game theory because it will "tightly constrain the range of uncertainty" while in fact agents may be uncertain about others' preferences and types and degrees of rationality. He adds that "crucial aspects of any empirical situation remain outside the formal structure" of game theory. (13)

Wagner's important contribution is the incorporation of incomplete information in a game-theoretic treatment of deterrence. This incorporation is limited, however, to probabilistic representation of uncertainty. Although strict Bayesians maintain that all uncertainty and ignorance can be represented with subjective probability, Wagner expresses some reservations about Bayesian learning despite its attractive features. (14)

D. Marc Kilgour and Frank Zagare study deterrence and credibility of threats given "lack of information about the preferences of one's opponent." (15) They consider the prisoners' dilemma and modifications of it, recognizing that "the real world, of course, is not so simple or transparent. It is characterized by, among other things, nuance, ambiguity, equivocation, duplicity, and ultimately uncertainty. Typically, policymakers are unable to acquire complete information about the intentions of their opponents; at best, they can hope to obtain probabilistic knowledge of these key determinants of interstate behavior." (16)

Probabilistic analyses supply valuable insights although these depend on strong assumptions. For example, Philipp Denter and Dana Sisak show how loss of deterrence may result from "incomplete information" modeled as parametric uncertainties represented by probability densities on bounded finite intervals and other more explicit assumptions. (17)

Not all deductive theories rely exclusively on probability. Barry Nalebuff, for instance, recognizes that "in the presence of incomplete or imperfect information... there is no longer any guarantee that the calculations will provide a unique answer." This "indeterminacy" may lead to "a multiplicity of equilibria" that is not resolved probabilistically. (18) Recognition that uncertainty may transcend probability is welcome and needed, a point to which we return subsequently. Nonetheless, Nalebuff uses strong probabilistic assumptions. His most important innovation is the study of expectations by each protagonist of other protagonists' behavior: "Each side starts with some expectation about the distribution of the other's parameter. For analytic convenience, we take the initial beliefs to be uniformly distributed between zero and one." (19) One wonders how such a model captures, for instance, Egyptian president Gamal Abdel Nasser's uncertain expectations in the mid-1960s about Saudi and Jordanian reaction to Egyptian involvement in the civil war in Yemen; Israeli reaction to Egypt's closing the Straits of Tiran; and Arab public reaction to forward deployment of Egyptian forces in Sinai.

The approach to uncertainty in the present article, based on info-gap theory, is nonprobabilistic and offers a potential supplement or alternative to probability. Info-gap models of uncertainty are well suited to representing ignorance--for instance, incomplete information about protagonists' preferences. (20) Similarly, the nonuniqueness of subjective probability or utility, mentioned by Wagner, can be captured with info-gap theory. (21) Most importantly, info-gap robust satisficing can be implemented conceptually without resorting to mathematics, as discussed subsequently. (22) The innovation dilemma developed here can lead to rational reversal of preference among the options in light of one's uncertainties, as we will explain. This study takes the view that ignorance and deception may preclude knowledge of probabilities or even full identification of the event space. Info-gap robust satisficing provides a response that is epistemically less demanding than probabilistic approaches.

Treatment of Uncertainty: Qualitative Historical Approaches

Studying the past refines one's judgment and insight about possible futures. The historical case-based school claims, rightly, that historical, political, psychological, and organizational factors underlie comprehensive understanding of human affairs. However, that very contextualization may limit the ability to anticipate and respond to surprising future contexts.

Authors from historical-inductive schools address uncertainties without employing probability. In case studies of deterrence in the Middle East, Janice Gross Stein emphasizes that success or failure of deterrence depends on leaders' judgments, suspicions, and fears regarding many diverse issues, including opportunities, vulnerabilities, challenges, political or psychological needs, balance of power, broad historic or intrinsic interests, and long-term strategic reputation. Furthermore, a leader's attention to issues may change over time as circumstances change. (23)

Similarly, Richard Ned Lebow and Stein claim that theories of deterrence based on rational choice theory "are incomplete and flawed" because judgments of "subjective expected utility will vary enormously depending on actors' risk propensity and relative emphasis on loss or gain" and because they ignore factors such as domestic politics. They also point out that leaders' preferences will alter over time, asserting that "misperception and miscalculation arise from faulty evaluation of available information. Studies of deterrence and intelligence failures find that error rarely results from inadequate information but is almost always due to theory and stress driven interpretations of evidence." (24) In a subsequent publication, Lebow and Stein point out that "existing theories of deterrence rely on technical, context-free definitions of deterrence, but deterrence--and any other strategy of conflict management--takes on meaning only within the broader political context in which it is situated" and that historical context is also important. Furthermore, deterrence may be intertwined with other goals such as compellence. (25) (Compellence is an action or threat intended to induce an adversary to take a specific action, unlike deterrence, which aims to prevent a specific action.)

They write that "deterrence encounters are embedded in two kinds of contexts. The first... concerns the specific situation in which a deterrence encounter arises. The second is historical; it consists of the expectations that the adversaries have of each other, themselves, and third parties." (26) Similarly, Alexander George and Richard Smoke maintain that "deterrence at the substrategic levels of conflict is highly context-dependent [and that] there is a critical need in policy making for good situational analysis." (27)

As understanding becomes more contextually detailed, it becomes more contextually contingent and potentially less pertinent to the future. Scholars are well aware of the trade-off between contingency and generality, as George and Smoke illustrate with the concept of "conditional" or "contingent generalizations." (28) The info-gap robust-satisficing strategy discussed in this article is a tool for managing that trade-off. Solomon's error could be rephrased as saying that the worst that was is as bad as it will ever get. Contextual understanding must support imaginative recognition that things will be different, possibly worse than before. One cannot know what is not yet devised or discovered, and guessing the future is usually infeasible. One can, however, prioritize one's options according to their robustness against future surprise while aiming to satisfy critical goals. This robust-satisficing approach can lead rationally to reversal of preference among options, especially when one faces an innovation dilemma, as we will see.

Optimization of Outcomes

Rational choice theory postulates that "actors... seek to optimize preferences in light of other actors' preferences and options." (29) Likewise, in discussing the role of expectations in rational deterrence theory, Nalebuff asserts the centrality of "maximizing behavior." (30) What is postulated is that actors attempt to optimize the substantive quality of the outcome. Paul Huth and Bruce Russett "assume that decision-makers are rational expected utility maximizers.... We use 'rational' in the sense of being able to order one's preferences, and to choose according to that ordering and perceptions of the likelihood of various outcomes.... [This] does not require that perceptions be accurate, or that a given decision-maker's preferences be the same as other people's." (31)

George Downs explains that "the calculus [of rational deterrence theory] thrives on optimization, but it is compatible with the addition of numerous constraints that collectively dull the effects of the optimization assumption to the point where they are unrecognizable and quite mild." (32) We claim, however, that constraints limit the domain on which solutions are sought but do not alter the logic of seeking the substantively best outcome. This search may be infeasible and undesirable under severe uncertainty, as demonstrated later.

Zagare focuses on the incomplete knowledge and limited analytical capability of the decision maker. Citing Duncan Luce and Howard Raiffa, he studies the instrumentally rational actor "who, when confronted with 'two alternatives which give rise to outcomes,... will choose the one which yields the more preferred outcome.'" (33) However, under severe uncertainty, what looks like the more preferred option may turn out worse than the alternative. We show that, when one faces an innovation dilemma, this possibility can rationally lead to choosing the evidently less preferred option.

Zagare explains that "only two axioms, associated with the logical structure of an actor's preference function, are implicit" in instrumental rationality: connectivity and transitivity. Connectivity, he points out, "simply means that an actor be able to make comparisons among the outcomes in the feasible set and evaluate them in a coherent way." Transitivity means that if option A is preferred to B, and B is preferred to C, then A is preferred to C. Zagare continues: "Surely these are minimal requirements for a definition of rationality. Without them, choice theory would be well-nigh impossible." (34)

Transitivity and connectivity imply that the agent chooses the option that is anticipated to yield the best outcome. However, sensible decision makers can hold nontransitive preferences, as illustrated by the Marquis de Condorcet's paradox in aggregating preferences over several voters. (35) Moreover, connectivity depends on identifying all relevant options--an often challenging or even infeasible task. Both axioms depend on the preferences being stable over time, which need not hold. James March has criticized the rigidity of such axioms as "strict morality," noting that "saints are a luxury to be encouraged only in small numbers." (36)

We claim that, prescriptively, it is better to optimize the reliability or confidence of achieving critical goals than to try to attain the highest possible goal. It is not optimization per se that is objectionable; we advocate optimizing the robustness. But robustness is an attribute of a decision, not a substantive "good" that is enjoyed at the outcome. It is unrealistic, as shown subsequently, to try optimizing the substantive outcome under severe uncertainty.

Deterrence and Uncertainty

We distinguish between two aspects of the relationship between deterrence and uncertainty. First, uncertainty deters. Second, deterrence is uncertain.

On the first point, that uncertainty deters, Yehoshafat Harkabi writes that "deterrence, one can suppose, results not from certainty that the threat [of massive nuclear response] would be realized, but from uncertainty that it would not be realized. Thus, it is not certainty, but rather doubt, that deters." (37) In a similar vein, but not limited to the nuclear context, Thomas Schelling observes that the threat of a limited war "is a threat that all-out war may occur, not that it certainly will occur, if the other party engages in certain actions" (emphasis in original). Hence, "the supreme objective [of limited war] may not be to assure that it stays limited, but rather to keep the risk of all-out war within moderate limits above zero" (emphasis in original). (38) The other party is deterred by the uncertain possibility of all-out war.

The second point, that deterrence is uncertain, derives from myriad uncertainties in planning, preparing, and executing war. The difficulty is the tremendous uncertainty in anticipating the development of the conflict. Herman Kahn notes that "history [meaning the future] has a habit of being richer and more ingenious than the limited imaginations of most scholars or laymen" (emphasis in original). (39) He analogizes to an engineer's design of a building that must perform "under stress, under hurricane, earthquake, snow load, fire, flood, thieves, fools, and vandals.... Deterrence is at least as important as a building, and we should have the same attitude toward our deterrent systems. We may not be able to predict the loads it will have to carry, but we can be certain there will be loads of unexpected or implausible severity." (40)

The calculations and estimates that underlie deterrence are mostly deliberative, not quantitative, and always highly uncertain. (41) For example, evaluating the balance of local military power underlies deterrence assessment. (42) However, evaluating the balance of future local power is highly uncertain because it depends on many case-specific factors: geopolitics, adversary capability and commitment, extent of forward-deployed friendly combat power, the adversary's unknown future access-denial technologies, and so forth. Similarly, deterrence depends in part on "deciding what targets are most valuable in the state to be deterred." (43) In the Persian Gulf War of 1991, the US threat to remove Saddam Hussein's regime was an effective choice of target that, David Szabo argues, deterred the use of weapons of mass destruction by the Iraqi regime. In other situations, identifying high-value targets may be much more difficult and uncertain because they depend on unfamiliar cultural values of the adversary, as in a conflict with the Taliban or the Vietcong.

Assessing or planning deterrence depends on judgments based on the best available knowledge. However, this knowledge is inevitably wrong--sometimes substantially wrong--and the error may have "unexpected or implausible severity." This article concentrates on the implications of this principle for deterrence and its failure. Stated differently, even though uncertainty deters, we will demonstrate that deterrence can be lost because of uncertainty--not in the sense that enemies miscalculate but in the sense that prioritization of options (e.g., use or don't use weapons of mass destruction) is fundamentally altered under severe uncertainty. This phenomenon may be important in explaining the paradoxical or seemingly irrational breakdown of deterrence. Our emphasis, though, is prescriptive--how planners can manage uncertainty in deterrent situations. Info-gap robust satisficing is a generic decision methodology applicable to any deterrent situation such as conventional or nuclear war, asymmetric war, or terror. (44)

Innovation Dilemma: Formal Analysis

This section formulates the concept of an innovation dilemma--a paradigm for decision under severe uncertainty, through which we identify situations in which deterrence may fail. Following the presentation of a formal analysis of the innovation dilemma is a discussion of the relationship between the prisoner's dilemma and the innovation dilemma and a summary of the formal characteristics of the info-gap analysis of the innovation dilemma.

The choice and its dilemma. An analyst must choose between two alternatives with severely uncertain outcomes. One alternative is purportedly better, but also much more uncertain, than the other. This analysis is based on info-gap decision theory in which uncertainty means substantial lack of knowledge or understanding about essential aspects of the problem.

Consider a conflict between two states. We reason from the position of one side, and we must choose between two highly stylized strategies: either initiate war (IW) or do not initiate war (NIW). In the hypothetical example, NIW is more attractive than IW because available knowledge and understanding indicate that the other side won't initiate war, so war would not ensue. (Our analysis is also applicable to the reverse situation in which IW is more attractive.)

However, the available knowledge and understanding--referred to as our "model"--are highly uncertain. This uncertainty is especially acute regarding our adoption of NIW. The adversary's reasoning may differ from ours, and we may not fathom his thinking. For instance, regarding the Strategic Defense Initiative in the mid-1980s, David Windmiller comments that "the Soviets are fundamentally different from Americans in their politics, ideology, social system, the way they think about peace and security, and in their world outlook." (45) Attitudes may also differ toward countervalue targeting (civilian and economic targets) as opposed to counterforce targeting (the adversary's retaliatory capability) as an IW strategy. Thus, NIW is accompanied by substantial uncertainty and might result in vastly greater damage than anticipated because the adversary might preempt with IW.

There is, of course, uncertainty about the outcome of adopting IW, for which the damage might be "severe" or perhaps "huge"--or maybe even "devastating." The range of this uncertainty, although significant, is much less than the uncertainty attendant upon NIW.

In summary, NIW is anticipated to have a better outcome than IW (based on our model in this stylized example), but the model is more uncertain regarding NIW; therefore, NIW could be worse than IW. This is an innovation dilemma: should we choose the purportedly better but more uncertain and hence potentially worse alternative (NIW), or should we choose the purportedly worse but more accurately predictable and potentially better alternative (IW)? The innovation dilemma induces a decision methodology based on robustness against uncertainty that is different, both normatively and prescriptively, from what is usually called optimization, as we now explain.

Models and outcomes. Here, the term model refers to our information, knowledge, and understanding--quantitative as well as contextual, subjective, or intuitive. We refer to outcomes and suppose that better ones have less of something (like destruction) while poorer ones have more.

Model-based optimization of the outcome. Our model--assuming it is correct--indicates that NIW will lead to a better outcome (less destruction) than IW. The model thus indicates preference for NIW over IW--called "model-based optimization of the outcome." This choice is sound when the model is reasonably accurate. However, info-gap theory provides a critique of this decision strategy when one faces severe uncertainty, as we will see.

As things go wrong. According to our best understanding, NIW is better than IW, but we have good reason to believe that our understanding is substantially wrong--that is, the model is accompanied by severe uncertainty. This situation produces a fundamental trade-off, central in info-gap theory, that will ultimately lead to a mechanism for loss of deterrence that supports decision making under severe uncertainty. We first explain the idea intuitively.

Suppose we err just a little: our information and understanding are just a little bit wrong. What is the worst outcome that could happen with NIW or with IW? If the worst were to happen (assuming we err just a little), which strategy would we prefer? Since NIW is purportedly better than IW, it is reasonable to suppose that the worst outcome with NIW (with very small error) is still better than the worst outcome with IW. We would probably still prefer NIW over IW.

Nonetheless, the magnitude of the advantage of worst NIW over worst IW, at small error, is probably less than that of putative NIW over putative IW--because NIW is much more uncertain than IW. NIW can go wrong in more ways, and more severely, than IW.

Now suppose we err a bit more. Worst NIW is perhaps still better than worst IW, but the advantage of NIW over IW is now even smaller.

At some horizon of uncertainty, the worst NIW just equals the worst IW. At an even greater horizon of uncertainty, the advantage switches over to IW: worst NIW is now worse than worst IW.

So which do we choose, NIW or IW? The choice depends on how wrong our model is, but this we don't know. Herein lies the innovation dilemma. A graphical metaphor and then a reinterpretation will lead us towards a solution that assists in choosing between alternatives.
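The intuition of the last few paragraphs can be sketched numerically. The linear form and all numbers below are purely illustrative assumptions, not part of the article's analysis: each option's worst-case damage is taken to grow linearly with the horizon of uncertainty, with NIW starting lower (better putative outcome) but growing faster (greater uncertainty) than IW.

```python
# Hypothetical worst-case damage as a function of the horizon of
# uncertainty h. Low intercept = good putative outcome; steep growth
# rate = high vulnerability to error in the model.

def worst_niw(h):
    # NIW: putatively better (damage 2) but highly uncertain (rate 3)
    return 2.0 + 3.0 * h

def worst_iw(h):
    # IW: putatively worse (damage 5) but less uncertain (rate 1)
    return 5.0 + 1.0 * h

# With no error, and with small error, NIW looks better...
assert worst_niw(0.0) < worst_iw(0.0)
assert worst_niw(1.0) < worst_iw(1.0)
# ...but beyond the horizon where the curves cross (2 + 3h = 5 + h,
# i.e. h = 1.5 in this toy model), the worst case of NIW exceeds the
# worst case of IW.
assert worst_niw(2.0) > worst_iw(2.0)
```

The crossing horizon is where the innovation dilemma bites: below it the worst-case comparison favors NIW, above it the comparison favors IW, and we do not know how large our error actually is.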

Graphical representation. We now consider a useful graphical metaphor for the trade-off between the horizon of uncertainty and the worst possible outcome of a strategy. The graphs do not represent quantitative analysis; instead, they support judgment and deliberation to reach a decision, as we will see.

First consider figure 1, dealing only with the NIW option. The vertical axis is the horizon of uncertainty in our model, so the lowest point on that axis is labeled "no uncertainty." Higher points on the vertical axis represent greater uncertainty, such as "small" or "large" uncertainty. The horizontal axis represents the worst possible outcome of NIW for each corresponding horizon of uncertainty. The point at which the curve intersects the horizontal axis--at no uncertainty--is the purported no-war estimate of damage, based on our model, of the outcome of NIW. The worst possible outcome gets worse (larger: more destruction) as the horizon of uncertainty increases. Thus, the curve slopes up and to the right. The positive slope represents an irrevocable trade-off: the worst that can happen gets progressively worse as the horizon of uncertainty increases.


Figure 2 shows uncertainty versus worst outcome for both strategies. We see that the purported outcome for NIW is better (smaller destruction) than the purported outcome for IW, as indicated by the relative positions of the horizontal intercepts of the two curves. We also see that, at small uncertainty, the worst NIW outcome is still better than the worst IW outcome: the short solid vertical line is to the left of the short dashed vertical line. However, because NIW is accompanied by greater uncertainty than IW, the IW curve is steeper than the NIW curve, and the two curves cross. Consequently, at large uncertainty, the worst NIW outcome is worse than the worst IW outcome: the long solid vertical line is to the right of the long dashed vertical line (the reverse of the short vertical lines at small uncertainty). In other words, at large uncertainty, IW is predicted to have a better worst outcome than NIW. Considering worst possible outcomes, we prefer IW over NIW at large uncertainty, although we preferred NIW over IW at small uncertainty. The preference between the strategies is not universal; it changes depending on the level of uncertainty we consider or, equivalently, the level of destruction we accept. Before continuing to explore the implications of this preference reversal, we should offer a different interpretation of the axes in figures 1 and 2.


The trade-off: robustness versus performance. The curve in figure 3 is the same as the one in figure 1: starting at any specified uncertainty on the vertical axis, the arrows across and down lead us to the corresponding worst possible outcome of NIW at that horizon of uncertainty. Figure 4 is the same as figure 3 except that we now reverse the direction of reasoning. Looking at the horizontal outcome axis, we ask, What is the worst outcome we can tolerate? What is the maximum tolerable damage? Let's denote the worst outcome that is still tolerable (the critical or greatest acceptable damage) by [D.sub.c]. Now we ask the robustness question: What is the greatest horizon of uncertainty up to which we are sure that the outcome will not be worse than [D.sub.c]? The arrows up and to the left in figure 4 lead us to the answer. The resulting point on the vertical axis is the greatest tolerable uncertainty of the NIW strategy for this outcome requirement. We will call this point the robustness to uncertainty of NIW for this choice of critical damage. The robustness is large when vast uncertainty is tolerable; small robustness implies great vulnerability to uncertainty.
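Under the same stylized linear model of worst-case damage (hypothetical, not from the article), the robustness question of figure 4 has a closed form: the greatest tolerable uncertainty is the horizon at which the worst case just reaches the critical damage [D.sub.c].

```python
# Minimal sketch of the robustness question in figure 4, assuming a linear
# worst-case model worst_case(h) = d0 + growth * h (illustrative only).

def robustness(d0, growth, d_crit):
    """Greatest horizon of uncertainty h such that d0 + growth * h <= d_crit."""
    if d_crit < d0:
        # The requirement is stricter than even the zero-uncertainty estimate.
        return 0.0
    return (d_crit - d0) / growth

# Hypothetical NIW parameters: purported damage 2.0 at zero uncertainty,
# worst case growing by 3.0 per unit of uncertainty.
assert robustness(2.0, 3.0, d_crit=8.0) == 2.0   # lax requirement: robust
assert robustness(2.0, 3.0, d_crit=3.5) == 0.5   # demanding: fragile
assert robustness(2.0, 3.0, d_crit=1.0) == 0.0   # unattainable even nominally
```

The function makes the trade-off of the next paragraph explicit: robustness increases as the critical damage [D.sub.c] is relaxed and falls to zero as the requirement approaches the purported (zero-uncertainty) estimate.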

We can now understand the trade-off mentioned earlier in this section. As the required outcome becomes less demanding (further to the right, accepting greater damage), the intervention is more robust to ignorance. Conversely, the positive slope of the robustness curve in figure 4 implies that robustness-against-uncertainty decreases as the outcome requirement becomes more demanding (further to the left, lower critical damage). This trade-off between robustness and the outcome requirement is intuitively obvious, but it has important implications for choosing a strategy, especially when one faces an innovation dilemma, as we now explain.




Preference reversal and the innovation dilemma. Figure 5 plots the robustness curves for IW and NIW. These curves cross each other, just as in figure 2. The intersection between the robustness curves in figure 5 expresses the innovation dilemma. Comparing NIW and IW in this figure, we see that NIW is purportedly better (horizontal intercept further left) but more uncertain (lower slope) than IW. The greater uncertainty of NIW causes the robustness of NIW to increase more slowly as the critical requirement is relaxed: the curve for NIW rises more slowly than the curve for IW as we move right on the horizontal axis (greater critical damage). Hence, the robustness curves intersect because NIW hits the horizontal axis to the left of IW. The graphical manifestation of the innovation dilemma, and of the resulting preference reversal, is that the robustness curves of the two alternatives intersect in figure 5. NIW is more robust--thus preferred over IW--if the outcome requirements are very demanding (the performance requirement is less than [D.sub.x] on the horizontal axis). For less demanding outcome requirements (the performance requirement exceeds [D.sub.x]), IW is more robust than NIW and therefore preferred.

The purported preference is for NIW over IW, implying that war would (purportedly) be avoided. However, if the acceptable level of damage is large enough (exceeding [D.sub.x]), then IW is preferred over NIW, implying that war occurs and deterrence has failed. In other words, uncertainty leads to the possible loss of deterrence even though the apparent preference of both parties is for the avoidance of war. This mechanism acts even though the robustness curves in figure 5 are schematic and cannot be evaluated quantitatively. The value of [D.sub.x] is unknown, but deterrence can fail as a result of uncertainty even though the best knowledge and understanding of both parties indicate that no war is preferred.

Robust satisficing: summary. The decision strategy described above entails two elements. The first is called "satisficing": the decision maker must satisfy an outcome requirement. Second, more robustness against uncertainty is preferred over less robustness. Taken together, robust satisficing is the decision strategy that chooses between alternatives to produce the required outcome as robustly as possible. Robust satisficing attempts to satisfice the requirements over the widest range of deviation of reality from the model.

Robust satisficing is conceptually different from outcome optimization and may lead to different decisions. The model-based, outcome-optimal choice is for NIW over IW, as illustrated by the horizontal intercepts in figure 5: NIW is predicted by our model to be better than IW. The robust-satisficing choice is the same if the critical requirement, [D.sub.c], is less than the crossing level [D.sub.x]. On the other hand, robust satisficing and outcome optimization disagree if [D.sub.c] exceeds [D.sub.x]. Outcome optimization and robust satisficing may, or may not, agree in managing an innovation dilemma. However, even when they agree on the decision, they disagree on the reason for the decision. That is, outcome optimization and robust satisficing are normatively and prescriptively different: the standard of what constitutes a good decision is different, and the actual decision that is made can be different. We illustrate how robust satisficing is operationalized in a subsequent historical example.
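The robust-satisficing rule can be sketched directly: choose the option whose robustness is greatest at the decision maker's critical damage [D.sub.c]. The parameters below are hypothetical linear robustness curves, not values from the article; they merely reproduce the agreement/disagreement pattern around the crossing level [D.sub.x].

```python
# Sketch of the robust-satisficing decision rule (illustrative numbers only).

def robustness(d0, growth, d_crit):
    """Greatest tolerable uncertainty for critical damage d_crit (linear model)."""
    return max(0.0, (d_crit - d0) / growth)

options = {
    "NIW": dict(d0=2.0, growth=3.0),  # purportedly better, more uncertain
    "IW":  dict(d0=5.0, growth=1.0),  # purportedly worse, less uncertain
}

def robust_satisfice(d_crit):
    """Pick the option with the greatest robustness at the required damage level."""
    return max(options, key=lambda k: robustness(**options[k], d_crit=d_crit))

# Outcome optimization always picks the smallest purported damage: NIW here.
# For these numbers the robustness curves cross at D_x = 6.5.
assert robust_satisfice(d_crit=4.0) == "NIW"   # demanding requirement: they agree
assert robust_satisfice(d_crit=9.0) == "IW"    # lax requirement: they disagree
```

Even when both rules select NIW, the reasons differ: outcome optimization points to the purported estimate, while robust satisficing points to immunity against error in that estimate.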

Summary of Formal Conclusions and Comparison to the Prisoner's Dilemma

The prisoner's dilemma has been fruitfully applied to deterrence and other military decisions under uncertainty. (46) The prisoner's dilemma and the innovation dilemma both deal with choice under uncertainty, but they illuminate different aspects of the challenge despite a superficial similarity.

In the prisoner's dilemma (see the table below), if both prisoners remain silent, they are both fined lightly. If they both testify, they are both fined heavily. If one testifies and the other remains silent, then the first goes free and the second is hanged. For each prisoner, testifying is the choice that minimizes the worst outcome, given the unknown choice of the other prisoner. This minimum-maximum (min-max) outcome (heavy fine) is worse than the light fine they would receive if both remain silent. The dilemma for each prisoner is that remaining silent is the most dangerous choice (one could get hanged), yet it would have a fairly good outcome if both remain silent (light fine).
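The min-max logic of the preceding paragraph can be checked mechanically. The encoding below uses an ordinal cost scale of our own devising (0 = freedom, 1 = light fine, 2 = heavy fine, 3 = hanged); only the ordering matters, and the payoffs come from the table at the end of this article.

```python
# Prisoner A's costs under each pair of choices (ordinal scale: 0 = freedom,
# 1 = light fine, 2 = heavy fine, 3 = hanged). Rows: A's choice; keys within:
# B's unknown choice. Values follow the prisoner's dilemma table below.
COST_A = {
    "testify": {"testify": 2, "silent": 0},
    "silent":  {"testify": 3, "silent": 1},
}

# Min-max: choose the action whose worst case (over B's choices) is least bad.
minmax_choice = min(COST_A, key=lambda a: max(COST_A[a].values()))
assert minmax_choice == "testify"   # worst case is the heavy fine (2), not hanging

# The dilemma: the min-max worst case (heavy fine, 2) is still worse than the
# outcome of mutual silence (light fine, 1).
assert max(COST_A["testify"].values()) > COST_A["silent"]["silent"]
```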

An innovation dilemma is the choice between two options, one of which--the purportedly better option--could be worse than the other one. This scenario is different from the prisoner's dilemma in which the potentially best option (testifying and going free) is not also the potentially worst option (remaining silent and being hanged). So the prisoner's dilemma is not an innovation dilemma, and an innovation dilemma is not a prisoner's dilemma--but the difference is not only structural.

The info-gap analysis of an innovation dilemma begins by recognizing the severe uncertainty of one's knowledge (what we have called the model), leading to three conclusions. First, model-based predictions are unreliable; hence, prioritization of the options according to model-based predictions is also unreliable (represented by the horizontal intercepts of the robustness curves in figure 5). Second, as a consequence, the decision maker must ask what is the worst acceptable outcome (different from asking what is the best outcome consistent with one's knowledge). Third, if the putatively better option is also more uncertain, then the other option will be more robust to uncertainty if the outcome requirement is not too demanding (the robustness curves in figure 5 cross one another). Again, the innovation dilemma is a choice between two options, one of which--the purportedly better option--is more uncertain and could be worse than the other one. The info-gap resolution of the dilemma is to select the option that most robustly satisfies the decision maker's outcome requirement.

The prisoner's dilemma highlights the difference between individual and collective rationality or between selfish and altruistic behavior. The prisoner's dilemma demonstrates how uncertainty can inhibit cooperation that otherwise would be mutually beneficial. The innovation dilemma reveals the reversal of preference between options resulting from the robust advantage of the suboptimal option for achieving some critical outcomes. The info-gap analysis of robustness shows that outcome optimization can be more vulnerable to uncertainty than suboptimal satisficing of a critical goal.

Both the prisoner's dilemma and the innovation dilemma highlight deficiencies of the min-max or worst-case decision strategy, but in different ways. In the prisoner's dilemma, the worst that can happen if one testifies is a heavy fine, which is less onerous than the worst that can happen if one keeps silent (being hanged). Testifying minimizes the maximum penalty and is the min-max choice. That choice, though, differs from the collaborative selection that would result in a better outcome for all (light fine). The altruistic or group-conscious decision maker should not use the min-max strategy, assuming all other decision makers are like-minded.

One can understand the min-max choice for an innovation dilemma by referring to figure 2. At large uncertainty, the worst that happens with IW is better than the worst that happens with NIW, so IW minimizes the worst outcome and is the min-max choice. However, that choice may not be the most robust one for attaining a specified outcome. For example, suppose that the analyst agrees that the uncertainty is large. But suppose that the decision maker must deliver an outcome that is better (smaller) than the value on the horizontal axis at which the curves intersect in figure 2. The min-max choice is IW while NIW is more robust against uncertainty for achieving the required quality of outcome. In this case, the min-max optimum and the robust optimum do not agree. The decision maker charged with producing an outcome better than the crossing value should not use the min-max strategy even though the uncertainty is large.

Historical Example of Uncertainty and Deterrence: Six-Day War (Israeli Decision to Attack)

A historical case demonstrates how the innovation dilemma provides a prescriptive model for decision making. Figure 5 is the graphical paradigm. NIW is the preferred option based on the best available but highly uncertain information and understanding. If this information and understanding (the model) were correct, then IW would entail greater loss. However, the model is uncertain--and more so for NIW than for IW. Thus, while NIW is purportedly better, it could lead to a worse outcome than IW. This dilemma is portrayed in the graphical metaphor of figure 5 by the crossing robustness curves. The dilemma is resolved by choosing the option that would lead to an acceptable outcome over the widest range of unknowns--by choosing the option that most robustly satisfies the critical outcome requirement.

The Israeli decision to initiate war on 5 June 1967 against an array of Arab states provides a good example of decision making in the face of an innovation dilemma. We analyze the options to initiate war (IW) or to not initiate war (NIW) from the perspective of Israeli decision makers. The claim is not that the decision makers actually reasoned this way but that this reasoning is implementable and instrumentally justifiable.

The Model in Support of IW

Tensions grew among Israel, Syria, Egypt, and subsequently Jordan and other Arab countries in mid-May 1967. Exchange of fire between Israel and Syria in the north had occurred repeatedly over the years and intensified in April due to Israeli and Syrian disputes over agricultural use of demilitarized zones, Syrian support of terrorist actions against Israel, and operations to divert sources of the Jordan River in Syrian and Lebanese territory to bypass Israel. Egyptian president Nasser responded, in support of Syria, with a series of threatening actions on Israel's southern border. On 18 May, Egypt demanded the immediate departure of the United Nations Emergency Force stationed in the Sinai desert since the end of the 1956 Sinai war, to which UN secretary-general U Thant acquiesced with little resistance. Egypt then began a massive buildup of infantry, armor, and airpower in the Sinai, partly deployed offensively and close to the Israel-Egypt border. Syria and Jordan mobilized massively, together with significant troop movements to the theater from Iraq and other Arab countries. On 22 May, Egypt announced the closure of the Straits of Tiran to Israeli shipping. The United States tried to enlist maritime countries in the Regatta Plan to sail through the Straits of Tiran, accompanied by destroyers from the Sixth Fleet, asserting international rights of free passage. Regatta never materialized. On 30 May, Egypt and Jordan signed a mutual defense pact allowing Egyptian forces to operate from Jordan and placing Jordanian forces under Egyptian command, creating a strategic threat close to the main Israeli population centers. (47) Arab public opinion and leadership pronouncements vigorously expressed the desire to change the status quo: to liberate Palestine and to eliminate the State of Israel. (48)

The Model in Support of NIW

The Egyptian force buildup in the Sinai during May 1967 seemed closely parallel to Operation Retama in February 1960. Egypt brought two divisions to the Sinai following escalation between Syria and Israel in early 1960. In response, Israel rushed reinforcements to the south, placed its air force on alert, and began diplomatic exchanges to assure all parties that Israel had no offensive intentions against Syria or Egypt. Tensions ran very high until Nasser removed Egyptian forces from Sinai late in February 1960 and declared that he had saved Syria from war with Israel, thereby strengthening his stature as a pan-Arab leader. The situation in May 1967 looked similar, and Egyptian moves were widely understood--at least at first--as demonstrative but not expressing actual desire for war. Furthermore, Israeli strategy in 1967 was predicated on big-power support. President Charles de Gaulle rebuffed Israeli approaches, and President Lyndon Johnson repeatedly asserted that "Israel will not be alone unless it decides to go it alone," clarifying US unwillingness to support an Israeli initiation of war either verbally or materially, thus supporting NIW. (49) Finally, Israeli strategy sought to maintain the status quo by virtue of the existence of a vast deterrent force. As long as Israeli deterrence is effective, NIW is preferred. (50)

Uncertainty, the Innovation Dilemma, and Analysis of Robustness

If the model of NIW is correct, then IW would initiate an unnecessary war, so NIW is seemingly preferable. In figure 5, the robustness curve for NIW intersects the horizontal axis at a lower (better) value of "maximum tolerable damage" than the curve for IW.

The uncertainty in the NIW model is prodigious, especially regarding Egyptian intentions in 1967 and the strength of the parallel to Operation Retama in 1960. The Arab military deployment in 1967 far exceeded that in 1960, so even if this reflected Nasser's hand being forced by circumstances in 1967, the possibility of an Arab IW was not negligible. An Arab IW on the small number of highly vulnerable Israeli airfields could be disastrous for Israeli ability to repulse a three-front invasion. (51) In other words, the actual damage resulting from even moderate error in the NIW model could far exceed the putative damage of the alternative--IW.

The uncertainty in the IW model hinges on uncertainty about foreign support and on the Israeli assessment that a surprise Israeli attack would succeed rapidly. Quick success would obviate the need for diplomatic support (primarily from the United States) and for materiel from foreign powers (primarily France and Britain) during the conflict. Confidence in such success eroded somewhat as time passed and the Arab military deployment strengthened. The Arab deployment was enormous, but deficiencies in Egyptian logistics, training, and command structure supported the Israeli assessment. Uncertainty about foreign, especially US, support was prominent.

The uncertainty in the IW model is substantially less than in the NIW model. In figure 5, the IW robustness curve is much steeper than the NIW curve, indicating that IW is less vulnerable to uncertainty. Thus, even at moderate uncertainty, the maximum damage that could result is less from IW than from NIW. The innovation dilemma is that NIW is ostensibly better but more uncertain and hence potentially worse than IW. Graphically, the dilemma is reflected in the robustness curves crossing one another in figure 5.

Israeli defense minister Moshe Dayan, Israeli chief of staff Yitzhak Rabin, and most members of the Israel Defense Force General Staff supported a preemptive strike against Egyptian forces to gain the advantage of air superiority by eliminating the Egyptian air force and crippling the massive offensive Arab deployment. On 4 June, 12 of 14 Israeli cabinet ministers voted to initiate war the following morning. (52) This decision follows the logic of robust satisficing: choosing the option that will lead to an acceptable outcome despite vast uncertainty. One can imagine, counterfactually, that the cabinet would have chosen NIW if it had felt that the model supporting this option was fairly certain. However, this putative first choice was rejected (after several agonizing weeks) in favor of the IW option that would lead (or so Israel hoped) to lower loss despite the surprises yet to be encountered. We are not imputing a specific mode of reasoning to the Israeli decision makers. Rather, the claim is that the robust-satisficing methodology was an implementable method of analysis that could have been used in this historical case.

Theoretical Application: Does Uncertainty Increase the Propensity for War?

In the previous section, we saw how decision makers could have used info-gap robust satisficing in a historical situation. We now consider how the info-gap robust-satisficing decision strategy can guide theoretical analysis in support of decision making.

A state must choose between two prototypical strategies: NIW (no initiation of war) or IW (initiation of war). The purported optimal preference--based on the available model--is for NIW over IW. The propensity for war increases if the propensity for preference reversal from NIW to IW is increased by uncertainty. Suppose that one or both protagonists' perceptions change between an overall low level to an overall high level of uncertainty. For example, when the United States discovered a small number of nuclear warheads in Cuba in 1962, US uncertainty regarding Soviet intentions increased greatly. We examine how an increase in uncertainty influences the robustness curves for these two strategies.

Consider, first, a special case: uncertainty of NIW increases without increased uncertainty of IW. New nuclear capability in unstable countries, for example, raises the possibility of IW on their part, so NIW by a traditional power has more uncertain outcome. (53) The uncertainty is severe because a small regional power with limited nuclear capability might argue that a superpower would "deter itself from using nuclear weapons because of the prospect of collateral damage to the region." (54) NIW can also become uncertain if new defensive technology is ambiguous and could be interpreted offensively by an adversary. For instance, President Ronald Reagan's Strategic Defense Initiative could protect US offensive missiles or US cities. The former is defensive: missiles are protected for use in retaliatory strikes. The latter is offensive: missiles are not defended because they are intended for first strikes and cities are defended against retaliation. The ambiguity could make adversarial IW more likely. (55) In these examples, uncertainty surrounding IW by the traditional power is not changed.

Figure 6 shows robustness curves for this situation. The robustness of IW is unchanged, but that of NIW is reduced because of the increased uncertainty of NIW. The maximum acceptable damage up to which NIW is robust-preferred is reduced: [D.sub.x.sup.hi] is smaller than [D.sub.x.sup.lo]. Consequently, the reversal of robust preference from the reputed optimum--NIW--to robust preference for IW occurs at a lower damage threshold due to increased uncertainty in NIW. This condition increases the tendency to select IW and therefore the propensity for war. The tendency is further strengthened because, over the range of critical damage for which NIW is more robust than IW, the robustness of NIW is lower because of high uncertainty.


Now consider the reverse special case: increased uncertainty of IW without increased uncertainty of NIW (fig. 7). Comparing this graph with figure 6 shows a reverse direction of change. In figure 7, the threshold for preference reversal from NIW to IW increases as uncertainty increases: [D.sub.x.sup.hi] is larger than [D.sub.x.sup.lo]--the reverse of figure 6. Furthermore, in figure 7, when NIW is more robust than IW, the robustness advantage increases because uncertainty in IW increased, unlike the depiction in figure 6.


In summary, we cannot conclude that greater uncertainty will necessarily strengthen, or necessarily weaken, the propensity for preference reversal from NIW to IW and thus increase the propensity for war, as is sometimes claimed. IW and NIW are influenced in opposite directions by uncertainty. However, the analysis identifies those aspects of a scenario that determine the impact of uncertainty. The analyst must assess the counteracting effects of uncertainty on the two options--NIW and IW.
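The opposing shifts of figures 6 and 7 can be illustrated with the same stylized linear robustness model used above (all parameters hypothetical, not from the article): raising an option's uncertainty is modeled as a faster-growing worst case, which lowers the slope of its robustness curve and moves the crossing point [D.sub.x].

```python
# Sketch of the two special cases in figures 6 and 7, assuming linear
# robustness curves h(D) = (D - d0) / growth (illustrative numbers only).

def crossing(niw, iw):
    """Critical damage D_x at which the two linear robustness curves cross."""
    return (iw["growth"] * niw["d0"] - niw["growth"] * iw["d0"]) / (
        iw["growth"] - niw["growth"]
    )

base_niw = dict(d0=2.0, growth=3.0)   # hypothetical baseline parameters
base_iw  = dict(d0=5.0, growth=1.0)

d_x_lo = crossing(base_niw, base_iw)                       # baseline threshold

# Figure 6: uncertainty of NIW increases (its worst case grows faster).
d_x_niw_up = crossing(dict(d0=2.0, growth=5.0), base_iw)
assert d_x_niw_up < d_x_lo   # reversal to IW occurs at lower damage: more war-prone

# Figure 7: uncertainty of IW increases instead.
d_x_iw_up = crossing(base_niw, dict(d0=5.0, growth=2.0))
assert d_x_iw_up > d_x_lo    # reversal threshold rises: less war-prone
```

The directions of the two shifts hold for any parameters with this structure, matching the qualitative conclusion that the net effect on the propensity for war depends on which option's uncertainty grows.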

Our discussion is purely qualitative; magnitudes of these tendencies cannot be deduced. Nonetheless, the monotonicity and shifts of the robustness curves, due to elevated uncertainty, are unambiguous. The tendencies that we deduced hold for all robustness curves.


We summarize our analysis by returning briefly to the stylized example discussed earlier. Consider conflict between two states. One state, initially deterred from war, must choose between NIW and IW when this choice is an innovation dilemma: NIW purportedly has a better outcome than IW, but NIW is far more uncertain. Reversal of preference to IW reflects loss of deterrence by the other state. The info-gap robust-satisficing decision methodology for managing an innovation dilemma is to choose the option that is more robust-to-uncertainty for realizing a specified goal.

Figure 5 is the schematic graphical representation of this dilemma in terms of robustness curves when NIW is purportedly the better choice, so that preference reversal constitutes loss of deterrence. The graphs do not represent quantitative analysis. They are a metaphor for the concept of preference reversal that can arise in response to uncertainty. The graphs support judgment and deliberation in reaching a decision.

If the analyst requires that the level of damage be less than the value at which the two options are equally vulnerable to uncertainty ([D.sub.x], where the robustness curves in fig. 5 cross one another), then NIW is preferred over IW. In this case, the purported optimum, NIW, is also the more robust option. However, if the analyst is willing to accept greater damage, then greater robustness to uncertainty is achieved with IW. The putatively better option is not necessarily the most robust--and hence preferred--option for reaching specified goals.

The robustness curves in figure 5 are schematic, and we don't expect analysts to be able to evaluate them numerically or to make a quantitative comparison between a numerical value of [D.sub.x] and an explicit value for maximum acceptable damage. The innovation dilemma is real even though it is not quantitative, and the choice of an option can rationally be made by qualitative verbal deliberation. We can expect that in some situations, an analyst will choose IW over NIW--as in the Israeli decision in 1967--by arguing that this action is the most reliable and responsible in light of the vast uncertainties, especially those associated with NIW. In such situations, deterrence has been lost due to the impact of uncertainty. In other situations, the purportedly better but more uncertain option, NIW, may be chosen. Finally, in some situations, no decision is made, or conflicting decisions are adopted for institutional or other reasons.

A theoretical paradigm, such as info-gap robust satisficing, is neither sufficient nor necessary for military success, as Harold Winton concludes in his study of Gen George Patton and Gen Ulysses Grant. (56) However, "to a mind that artfully combines discipline with intuition, theory offers the opportunity to roam freely back and forth between the general and the particular." (57) Theory supports deliberation and decision.

Deliberation takes place, conceptually, on both axes of schema such as figure 5. The analyst makes judgments about how bad an outcome one can accept (or, equivalently, how good an outcome is required) and about how much uncertainty is tolerable. These judgments use models: historical precedent, theoretical insight, contextual understanding, data, social or organizational values and goals, and decisions about reliability or uncertainty of the previous elements. This analysis is particularly useful when one faces an innovation dilemma: one option seems better than another (based on available models), but the apparently better option is also more uncertain. Like all dilemmas, an innovation dilemma has horns, but these can be managed systematically with an info-gap robust-satisficing analysis.


(1.) "That which was, will be; that which was done, will be done again; there is nothing new under the sun." Eccles. 1:9.

(2.) Yakov Ben-Haim, Craig D. Osteen, and L. Joe Moffitt, "Policy Dilemma of Innovation: An Info-Gap Approach," Ecological Economics 85 (2013): 130-38.

(3.) Robert Jervis, "Cooperation under the Security Dilemma," World Politics 30, no. 2 (1978): 167-214; and Yakov Ben-Haim, "Strategy Selection: An Info-Gap Methodology," Defense & Security Analysis 30, no. 2 (2014): 108-9.

(4.) Yakov Ben-Haim, Info-Gap Decision Theory: Decisions under Severe Uncertainty, 2nd ed. (London: Academic Press, 2006).

(5.) R. Harrison Wagner, "Deterrence and Bargaining," Journal of Conflict Resolution 26 (1982): 339, 343.

(6.) R. Harrison Wagner, "Nuclear Deterrence, Counterforce Strategies, and the Incentive to Strike First," American Political Science Review 85 (1991): 746.

(7.) R. Harrison Wagner, "Rationality and Misperception in Deterrence Theory," Journal of Theoretical Politics 4, no. 2 (1992): 115.

(8.) Ibid., 119.

(9.) Ibid., 119, 124.

(10.) Anatol Rapoport, "Deterrence Theory Discussion: III; Comments on 'Rationality and Misperceptions in Deterrence Theory,'" Journal of Theoretical Politics 4, no. 4 (1992): 481.

(11.) Michael D. McGinnis, "Deterrence Theory Discussion: I; Bridging or Broadening the Gap? A Comment on Wagner's 'Rationality and Misperception in Deterrence Theory,'" Journal of Theoretical Politics 4, no. 4 (1992): 446, 447.

(12.) Christopher H. Achen and Duncan Snidal, "Rational Deterrence Theory and Comparative Case Studies," World Politics 41, no. 2 (1989): 160, 164.

(13.) McGinnis, "Deterrence Theory Discussion," 448.

(14.) Wagner, "Rationality and Misperception," 138-39.

(15.) D. Marc Kilgour and Frank C. Zagare, "Credibility, Uncertainty, and Deterrence," American Journal of Political Science 35, no. 2 (May 1991): 306.

(16.) Ibid., 312.

(17.) Philipp Denter and Dana Sisak, "The Fragility of Deterrence in Conflicts," Journal of Theoretical Politics 27, no. 1 (2015): 43-57, doi: 10.1177/0951629813511712.

(18.) Barry Nalebuff, "Rational Deterrence in an Imperfect World," World Politics 43, no. 3 (1991): 313, 328.

(19.) Ibid., 320.

(20.) Yakov Ben-Haim and Keith W. Hipel, "The Graph Model for Conflict Resolution with Information-Gap Uncertainty in Preferences," Applied Mathematics and Computation 126 (2002): 319-40.

(21.) Wagner, "Rationality and Misperception," 119; and John K. Stranlund and Yakov Ben-Haim, "Price-Based vs. Quantity-Based Environmental Regulation under Knightian Uncertainty: An Info-Gap Robust Satisficing Perspective," Journal of Environmental Management 87 (2008): 443-49.

(22.) Ben-Haim, "Strategy Selection." See also Yakov Ben-Haim, "Dealing with Uncertainty in Strategic Decision-Making," Parameters 45, no. 3 (Autumn 2015): 63-73.

(23.) Janice Gross Stein, "Extended Deterrence in the Middle East: American Strategy Reconsidered," World Politics 39, no. 3 (1987): 329-31, 333.

(24.) Richard Ned Lebow and Janice Gross Stein, "Rational Deterrence Theory: I Think, Therefore I Deter," World Politics 41, no. 2 (1989): 208-9, 211, 214, 217.

(25.) Richard Ned Lebow and Janice Gross Stein, "Deterrence: The Elusive Dependent Variable," World Politics 42, no. 3 (1990): 351-52, 353, 355.

(26.) Ibid., 353, 355.

(27.) Alexander L. George and Richard Smoke, "Deterrence and Foreign Policy," World Politics 41, no. 2 (1989): 180.

(28.) Ibid., 171.

(29.) Achen and Snidal, "Rational Deterrence Theory," 150.

(30.) Nalebuff, "Rational Deterrence," 321.

(31.) Paul Huth and Bruce Russett, "What Makes Deterrence Work? Cases from 1900 to 1980," World Politics 36, no. 4 (1984): 499.

(32.) George W. Downs, "The Rational Deterrence Debate," World Politics 41, no. 2 (1989): 235.

(33.) Frank Zagare, "Rationality and Deterrence," World Politics 42 (1990): 240; and R. Duncan Luce and Howard Raiffa, Games and Decisions: Introduction and Critical Survey (New York: Wiley, 1957), 50.

(34.) Zagare, "Rationality and Deterrence," 241.

(35.) Ken Binmore, Rational Decisions (Princeton, NJ: Princeton University Press, 2009); and Steven J. Brams, Mathematics and Democracy: Designing Better Voting and Fair-Division Procedures (Princeton, NJ: Princeton University Press, 2008).

(36.) James G. March, "Bounded Rationality, Ambiguity, and the Engineering of Choice," in Decision Making: Descriptive, Normative, and Prescriptive Interactions, ed. David E. Bell, Howard Raiffa, and Amos Tversky (Cambridge, UK: Cambridge University Press, 1988), 51.

(37.) Yehoshafat Harkabi, War and Strategy (Tel Aviv: Maarachot Publishers, 1990), 221-22 (in Hebrew).

(38.) Thomas C. Schelling, Strategy of Conflict (Cambridge, MA: Harvard University Press, 1960), 191, 193.

(39.) Herman Kahn, On Thermonuclear War, 2nd ed. (Princeton, NJ: Princeton University Press, 1961), 137.

(40.) Ibid., 138.

(41.) Colin S. Gray, "Deterrence Resurrected: Revisiting Some Fundamentals," Parameters, Winter 2010-11, 100.

(42.) Michael S. Gerson, "Conventional Deterrence in the Second Nuclear Age," Parameters, Autumn 2009, 32-48.

(43.) David Szabo, "Disarming Rogues: Deterring First-Use of Weapons of Mass Destruction," Parameters, Winter 2007, 82.

(44.) Amir Lupovici, "The Emerging Fourth Wave of Deterrence Theory: Toward a New Research Agenda," International Studies Quarterly 54, no. 3 (2010): 718-20.

(45.) David E. Windmiller, "SDI: A Strategy for Peace and Stability or the End to Deterrence?," Parameters, Summer 1986, 21. See also John M. Weinstein, "Soviet Civil Defense and the US Deterrent," Parameters, March 1982, 70.

(46.) Frank C. Zagare, "Toward a Reformulation of the Theory of Mutual Deterrence," International Studies Quarterly 29, no. 2 (1985): 155-69; Harkabi, War and Strategy, 299; and Kilgour and Zagare, "Credibility, Uncertainty, and Deterrence."

(47.) Ami Gluska, Eshkol, Give the Order! Israel's Army Command and Political Leadership on the Road to the Six Day War, 1963-1967 (Tel Aviv: Maarachot Publishers, 2004), 47 (in Hebrew).

(48.) Ibid., 199.

(49.) Michael B. Oren, Six Days of War: June 1967 and the Making of the Modern Middle East (New York: Ballantine Books, 2003), 157.

(50.) Gluska, Eshkol, Give the Order!, 199, 401.

(51.) Ibid., 60.

(52.) Oren, Six Days of War, 157-58.

(53.) Frederick R. Strain, "Nuclear Proliferation and Deterrence: A Policy Conundrum," Parameters, Autumn 1993, 85-86.

(54.) Ibid., 87.

(55.) Windmiller, "SDI," 20.

(56.) Harold R. Winton, "An Imperfect Jewel: Military Theory and the Military Profession," Journal of Strategic Studies 34, no. 6 (December 2011): 870.

(57.) Ibid., 874.


The author initiated and developed info-gap decision theory for modeling and managing severe uncertainty. Scholars and practitioners around the world apply the theory to decision making and planning in engineering, biological conservation, economics, project management, climate change and response to natural hazards, national security, medicine, and other areas. Professor Ben-Haim has been a visiting scholar in many countries and has lectured at universities, technological and medical research institutions, public utilities, and central banks. He is a professor of mechanical engineering who has published more than 100 articles and 5 books, and he holds the Yitzhak Moda'i Chair in Technology and Economics at the Technion-Israel Institute of Technology.

The author is indebted for valuable comments and criticisms to Uri Bar-Joseph, Stuart Cohen, Eado Hecht, Avi Kober, Eitan Shamir, and Tal Tovy.

This article derives from a paper of the same title presented at the International Studies Association Annual Conference, New Orleans, 18-21 February 2015.
Table. Prisoner's dilemma

                            Prisoner B
                     Testify        Silent

            Testify  A: heavy fine  A: freedom
Prisoner A           B: heavy fine  B: hanged
            Silent   A: hanged      A: light fine
                     B: freedom     B: light fine
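The table's logic can be made explicit with a short sketch. The numeric scores below are illustrative assumptions that preserve only the ordering implied by the outcomes (freedom > light fine > heavy fine > hanged); the function names `best_response` and `nash_equilibria` are hypothetical helpers, not part of the article.

```python
# Ordinal payoffs read off the table: freedom=3, light fine=2,
# heavy fine=1, hanged=0. Only the ranking matters, not the magnitudes.
# Each entry maps (A's move, B's move) -> (A's payoff, B's payoff).
PAYOFFS = {
    ("testify", "testify"): (1, 1),  # both heavily fined
    ("testify", "silent"):  (3, 0),  # A freed, B hanged
    ("silent",  "testify"): (0, 3),  # A hanged, B freed
    ("silent",  "silent"):  (2, 2),  # both lightly fined
}

def best_response(player, opponent_move):
    """Return the move maximizing `player`'s payoff against a fixed opponent move."""
    idx = 0 if player == "A" else 1

    def payoff(move):
        profile = (move, opponent_move) if player == "A" else (opponent_move, move)
        return PAYOFFS[profile][idx]

    return max(("testify", "silent"), key=payoff)

def nash_equilibria():
    """Profiles in which each player's move is a best response to the other's."""
    return [
        (a, b)
        for (a, b) in PAYOFFS
        if best_response("A", b) == a and best_response("B", a) == b
    ]

print(nash_equilibria())  # [('testify', 'testify')]
```

Testifying dominates silence for each prisoner (it is the best response whatever the other does), so mutual testimony is the unique equilibrium even though mutual silence would leave both better off.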
COPYRIGHT 2016 Air Force Research Institute

Article Details
Author: Ben-Haim, Yakov
Publication: Air & Space Power Journal - Africa and Francophonie
Date: Sep 22, 2016