
The effect of corruption on intelligence cooperation.

Corruption and Intelligence Policy

In an intelligence agency, information typically falls into four general categories: information that can be shared; information that cannot be shared by law; information that cannot be shared by policy; and information that is not shared because the agency chooses to withhold it. There are myriad reasons why agencies elect to withhold information in a joint intelligence venture, including: inter-agency rivalries and jealousies; political differences between intelligence executives; misunderstandings of law and policy; potential for future bargaining leverage; inadequate or non-aligned technology; and lack of formal reciprocity agreements, among others. However, field research by this author has revealed that the most frequently cited and heavily weighted reason among European and U.S. intelligence executives in deciding to withhold information from their counterparts is the perception of corruption.

Where cooperation in information-sharing is formalized by a pact, such as the Europol Convention, intelligence executives may opt to ignore legislated sharing parameters in order to safeguard proprietary information. In a multilateral or joint intelligence effort against terrorism, reluctance to share information based on the perception of corruption could fatally hobble the combined effort. Mary Noel Pepys writes, "The perception of corruption is as insidious, and just as important to overcome, as corruption itself, as they both have the effect of undermining the public's trust ..." (1) Whenever an intelligence executive violates a formal agreement, whether internal or external to the agency's host government, and the purpose of the breach is to safeguard information, the perception of corruption is almost certainly at the root of the executive's decision-making process.

In fairness to the intelligence executives interviewed for this study, none of them used the term corruption when discussing security concerns regarding information; rather, they more often employed the industry term leakage of proprietary information, whether deliberate or unintentional. Once proprietary information has left the control of the originating agency, the perceived risk of an unauthorized release escalates relative to the importance of the information. The reason for this is apparent: information valuable to the originating agency will be perceived by the sender as equally or more valuable to the recipient, essentially consigning the worth of the information to the second agency and discounting its worth to the sender (the information having been shared).

The concept behind this perception is simple: an intelligence executive will view information and intelligence generated by his agency as more valuable than that received from any other agency. (2) The commonly held perception is that because the originator's intelligence will out-value the intelligence of the recipient by its origins alone, shared information becomes an attractive nuisance for theft. Intelligence executives perceive shared information as a temptation to any corrupt intelligence official who might care to profit by trading with a hostile (3) entity.

Although hostile entities surely place value on guarded or protected information, the values placed on government intelligence by hostile entities are not necessarily coequal with government valuations. For example, carefully guarded information such as bank records, which are useful in tracing the movement of illicit funds, may be valuable in tracking terrorist cells; the same information, however, is doubtless of little value to terrorists, who are certainly well aware of their own cash flow. Nevertheless, what is of greater interest to hostiles is to use government information to learn who or what is being targeted, or perhaps to develop a better understanding of how far along the government is in an inquiry. And so, as a counterintelligence measure, the intelligence executive must anticipate the risk of leakage of information valuable to hostiles and mitigate that threat. Often the result is sanitizing information or stovepiping it altogether. Other countermeasures include screening intelligence employees and taking technological steps to mitigate the risk of information leakage to hostiles. (4)

One method of sanitizing information is to strip it of its source references. Another sanitizing method is to strip information of name or ownership references and a third is to remove quantitative references such as numerical amounts, sequences, etc. The bank records example above illustrates the benefit of sanitization of name or ownership information before sharing, because corrupt officials will perceive this information as attractive for theft. This is not because officials would benefit directly from the information, but instead because hostiles may benefit by acquiring name and ownership information through a corrupt official.
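As a rough illustration only, the short Python sketch below applies the three sanitization steps just described (stripping source references, name or ownership references, and quantitative references) to a notional record before sharing. The field names, such as source, account_holder, and amount, are hypothetical placeholders and do not reflect any agency's actual schema or practice.

```python
# Illustrative sketch of the three sanitization steps described above.
# All field names are hypothetical; real records and sharing policies differ.

SOURCE_FIELDS = {"source", "collecting_office", "informant_id"}      # source references
NAME_FIELDS = {"account_holder", "subject_name", "owner"}            # name/ownership references
QUANT_FIELDS = {"amount", "account_number", "transaction_sequence"}  # quantitative references


def sanitize(record: dict) -> dict:
    """Return a copy of the record with source, name, and quantitative fields removed."""
    drop = SOURCE_FIELDS | NAME_FIELDS | QUANT_FIELDS
    return {key: value for key, value in record.items() if key not in drop}


if __name__ == "__main__":
    raw = {
        "source": "liaison office X",
        "account_holder": "J. Doe",
        "amount": 25000,
        "summary": "Funds moved between two accounts of interest.",
    }
    print(sanitize(raw))  # only the non-identifying summary survives
```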

Naming names in an open inquiry typifies information deliberately withheld in joint intelligence. Additionally, it is rare that an agency will allow unfettered access to open case information, specifically because revealing the contents of a current inquiry or operation may divulge names. Because open case information is potentially far more valuable to hostiles than to a recipient agency in a sharing agreement, it is information highly valued by all sides and frequently stovepiped.

This is important in understanding the depth of policy coordination between and among agencies engaged in information-sharing. Succinctly put, intelligence agency executives coordinating decentralized policies must, by definition, have authority and control over the negotiated policies. Therefore, the presence of withheld information, with perceived corruption as a variable in information-sharing, is direct evidence that intelligence executives must have independent control in manipulating the depth of coordination between agencies. The revelation that the perception of corruption is integral to the executive decision-making process (regarding whether or not to share), and consequently instrumental in agency cooperation and policy coordination, suggests that corruption in intelligence should be examined further.

Informal Cooperation and Corruption

One alternative to utilizing established regimes of cooperation in information-sharing is utilization of informal cooperation to acquire information. Whether the means are legitimate or not (and they could be either), informal cooperation has been a longstanding institution. (5) Ostensibly, informal information-sharing between agencies is most often used to expedite the collection process, cut red tape, or serve non-official purposes. However, in a setting wherein sensible discretion is a hallmark of good practice, (6) it should not be surprising when one intelligence official asks another to share information and advance a common cause, yet violate policy or domestic law in the process. When an illegal act is suborned through informal cooperation and a law is consequently violated in information-sharing, it is generally understood among the players that the illegal transaction will likely never be made public; hence, agents may employ illegal means to acquire information with the understanding that they can likely do so with impunity.

Not all corrupt acts in intelligence are law violations to further cooperation in a common cause, just as not all informal cooperation is illicit. To thwart the unauthorized flow of information, intelligence executives take steps to control information flow from the agency, including establishment of a review process as an internal requirement to access guarded information. Frequently, this is all that is necessary to stem unauthorized information leakage or, minimally, to send a message to agents engaged in informal coordination that sharing guarded information is officially discouraged. Still, most intelligence executives concede that some guarded information passes illicitly between agencies and, although the public would perceive this as a corrupt practice, little will come of it because it frequently benefits all players and, arguably, parties beyond the intelligence industry, including the general public.

However, informal interagency cooperation in information-sharing can foment corrupt practices. Although the players' motives are most often benign and the end product is often beneficial, what of those acts that are indeed corrupt? According to Richard Ward and Robert McCormack, "[Corrupt] activities can be generally classified into four categories:

* "Acts which are common throughout the whole [agency] and are generally accepted.

* Acts which are less common than those of the first category but which are generally overlooked.

* Acts which are common to particular units ... and which are accepted or overlooked by unit members.

* Acts which are not common, which involve a few individuals, and which would be reported if discovered." (7)

Any of the above acts described by Ward and McCormack can apply to informal cooperation in information-sharing. The first illustrates how one agent might contact another with an information request outside of normal channels to expedite its receipt. While this request is not necessarily unlawful, it may be perceived as corrupt if it violates internal policy (which it almost certainly would). However, requests such as these may be so common that acquiescing to them is also common within certain agency cultures. In the experience of this author while in government service, when contacted by agencies with requests for information, official channels were often foregone in favor of whatever was the most expedient method of information delivery. However, exceptions to secure practice such as these were accounted for in standard operating procedures and so remained within the domain of formal cooperation under tactical intelligence guidelines.

The second corrupt act described by Ward and McCormack may be illustrated as a so-called "favor" request for information from one agency to another that is infrequent but requires a policy or law violation to comply. When one agency asks another to violate its standards of practice, the request is usually not secret; therefore, the agency that complies with an illicit interagency request does so at its own peril. Nonetheless, compliance with such requests is not uncommon. A typical example would be an agency that could not locate an individual of interest and would resort to protected telephone, banking, or tax records held by another agency to learn, for example, a suspect's place of residence or employment. The source of this information likely would never come to light at trial.

The third corrupt act can be viewed as particularly insidious if found within an intelligence agency, as the opportunities for bribes, payoffs, and blackmail doubtless abound in the information collection field and, institutionally, could manifest themselves in a bureaucratic kleptocracy. Additionally, information could be bought and sold by analysts who sit at the command center of information flow every day. While occasions such as these are rare, they are certainly not unprecedented in the United States or Europe.

And finally, the fourth corrupt act as described by Ward and McCormack suggests that the culture of the agency or its agents is not out of the ordinary, but rather that the corrupt acts of a few are an anomaly to the whole. This would be illustrated by a blackmailer, information thief, or bribe taker who acts alone or with few accomplices in secret and whose acts, once discovered, would not be tolerated.

Cooperation, Discretion, and Suborning Corruption

The above descriptions demonstrate how the ethical culture within an agency may play a part in whether or not corrupt practices will be tolerated. The culture of discretion is embedded in government at many levels, and the breadth by which discretion is measured is often a reflection of agency culture. Writing about police discretion specifically, K. C. Davis stated that it is exercised "whenever the effective limits of his power leave him free to make a choice among courses of action or inaction." (8) In this context, discretion means sidestepping the often complex laws regulating information safeguards and distributing to another agency information it is not legally authorized to acquire independently. However, it is difficult to conclude that the exercise of legal discretion in information-sharing is always a corrupt practice. Exercising discretion in circumventing a formal system with informal cooperation, to acquire information more rapidly and with a lesser chance of error, is a common option. Many participants in this author's research have noted that the legal requirements for sharing information are cumbersome, often slowing delivery of guarded information beyond the point of usefulness; however, only very few have suggested that the laws protecting information are unjust.

Leakage through informal cooperation is often quite explainable: one party who stands to benefit (by whatever means) suborns another to violate law or policy to provide high-value information. It is important to note that usually no consideration or promise of consideration passes between the parties, nor is a quid pro quo offered.

Because the practice of officially encouraging informal coordination is a well-established institution in the fabric of multi-agency cooperation, executives doubtless view it as either helpful or benign. As the practice flourishes, the agencies themselves are beneficiaries. But informal coordination also serves as a source of leakage when an agent funnels away guarded information, whether or not the agent knowingly engages in a corrupt act. The hinge pin apparently lies not in whether an agent is corrupt or engages in corrupt practices, but in whether the request itself falls within the recipient's ability to exercise discretion. If the costs are low and the potential gains are high, the opportunities for informal coordination escalate. In this fashion, informal coordination has flourished for decades.

There is no doubt that from time to time the line is crossed in information-sharing, and an agency's practices traverse from murky shadows into dark corruption in its official dealings with other agencies. Indeed, either the public or the government may say the ends justify the means; but if successful ends require the means of government to be unlawful, then who is corrupt?

Optimizing Formal Cooperation to Minimize Corruption

The previous sections bring to light a global practice in government that promotes informal cooperation among agencies to build regimes where formal cooperation may be inadequate. It is important to recall that not all informal cooperation is illicit, and that most informal cooperation concerns lawfully traded information rather than suborned corruption. (9) But the previous sections also illuminate one of the greatest institutional maladies of informal cooperation, the loss of regulation: once the informal cooperation spigot is open, it is often difficult to close because regulation has fallen from the control of the intelligence executive. For generations, professional conventions of mutual interest and purpose have served as a breeding ground for informal cooperation opportunities, with the exchange of business cards among the myriad participants serving a high purpose. (10) If no working arrangement exists between two agencies, an informal contact gleaned through a business card exchange is an avenue by which preliminary communication to establish informal cooperation can be made.

Intelligence executives each articulate interagency policies governing share/no-share regulations within regimes. One byproduct of an ineffective or insufficient formal regime (i.e., one in which policy coordination has not, or cannot, successfully advance information-sharing) is informal cooperation that potentially yields corrupt practices. In other words, when formal channels are insufficient for information-sharing, informal channels are a viable alternative, even when intelligence executives consider them an undesirable option. Therefore, the formal cooperation decision-making process should be examined closely as a matter of counter-corruption. It is reasonable to expect that if a cooperative regime facilitates greater sharing capabilities, the motivation to defect to informal cooperation (and potentially corrupt practices) would diminish. In this context it becomes important to understand how a regime of cooperation best operates.

Robert Axelrod noted that a cooperative arrangement is identifiable as a simple Prisoner's Dilemma, writing: "Fortunately, the very simplicity of the framework makes it possible to avoid many restrictive assumptions that would otherwise limit the analysis:

* "The payoffs of the players need not be compatible at all.

* The payoffs certainly do not have to be symmetric ... One does not have to assume, for example, that the reward for mutual cooperation, or any of the other three payoff parameters, have the same magnitude for both players. ...

* The payoffs of a player do not have to be measured on an absolute scale. They need only be measured relative to each other.

* Cooperation need not be considered desirable from the viewpoint of the rest of the world ... In fact, most forms of corruption are welcome instances of cooperation for the participants but are unwelcome to everyone else." (11)

For the purposes of this work, we must assume that when cooperation is achieved in a formal cooperative regime, an iterative game of information-sharing begins. In the game, Agency A requests information from Agency B. Agency A values the information but has no knowledge of its value to Agency B (as is usually the case); Agency B likewise values the information but has no knowledge of its value to Agency A. In this game, the bargaining parties are on an equal footing. As Axelrod indicated, a typical Prisoner's Dilemma results, with Agency A valuing its information at 2 and Agency B also valuing its information at 2, since for ease of the game we shall assume the information is what it is, a small piece in a large puzzle and nothing more. Shared information loses proprietary value and is reduced by 1. Non-cooperation is valued at 0, and giving away information and getting nothing in return is valued at -1 for either player. Minimally, Agency B benefits by establishing a link with Agency A and learning Agency A's information, yielding 2 - 1 = 1. Agency B has information that Agency A wants, and so if Agency B cooperates (and shares), its information is reduced in value, yielding 2 - 1 = 1. (12)
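The payoff structure implied by these numbers can be written out explicitly. The following minimal sketch (in Python, with variable names of my own choosing) records the four outcomes: mutual sharing yields 1 to each agency, mutual withholding yields 0, and a one-sided share gives the withholding agency 2 and the sharing agency -1, which satisfies the standard Prisoner's Dilemma ordering of temptation > reward > punishment > sucker's payoff.

```python
# Payoff matrix implied by the values in the text; names are illustrative only.
# Each entry maps (Agency A's move, Agency B's move) -> (A's payoff, B's payoff).
SHARE, WITHHOLD = "share", "withhold"

PAYOFFS = {
    (SHARE, SHARE): (1, 1),        # both share: 2 - 1 = 1 each
    (WITHHOLD, WITHHOLD): (0, 0),  # neither shares: nothing gained or lost
    (SHARE, WITHHOLD): (-1, 2),    # A gives and gets nothing; B keeps its own 2
    (WITHHOLD, SHARE): (2, -1),    # the mirror case
}

T, R, P, S = 2, 1, 0, -1           # temptation, reward, punishment, sucker's payoff
assert T > R > P > S               # the defining Prisoner's Dilemma ordering
assert 2 * R > T + S               # mutual cooperation beats taking turns exploiting
```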

In a one-shot game, as might be the case in informal coordination, it is important to note that the Nash Equilibrium is found where neither party cooperates. Axelrod observed, "two egoists playing the game once will both choose their dominant choice, defection, and each will get less than they both could have gotten if they had cooperated." (13) However, in a repeating game, as would be found in a cooperative and coordinated relationship, the Pareto Optimal Equilibrium is found at cooperate, cooperate (i.e., share, share). And so, in a repeated game the cumulative benefit quickly outweighs the payoff of a single defection. The advantage of cooperation over either non-cooperation (i.e., both players do not cooperate) or what Axelrod calls the sucker's payoff (14) (i.e., defect, cooperate for either player) becomes clear once a cumulative benefit is realized. Non-cooperation repeated over a number of games will yield a benefit of zero, no matter what greater strategy may be in play. The remaining alternative in a repeating game is that players exchange turns exploiting one another. According to Axelrod, "This assumption means that an even chance of exploitation and being exploited is not as good an outcome for a player as mutual cooperation." (15) And so, in a repeating game cooperation between players is the Pareto Optimal Equilibrium, yielding maximum benefit.
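Under the same assumed payoffs, a few lines of arithmetic show why the repeated game favors mutual sharing: over ten rounds, mutual cooperation accumulates 10 per player, mutual non-cooperation accumulates 0, and taking turns exploiting one another accumulates only 5.

```python
# Cumulative payoffs over repeated rounds, using the hypothetical values above.
ROUNDS = 10
T, R, P, S = 2, 1, 0, -1

mutual_share = ROUNDS * R                         # share every round: 10
mutual_withhold = ROUNDS * P                      # withhold every round: 0
take_turns = (ROUNDS // 2) * (T + S)              # alternate exploiting each other: 5

print(mutual_share, mutual_withhold, take_turns)  # 10 0 5
```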

Axelrod also suggests that cooperation evolves in three stages: first, cooperation must be based on reciprocity; second, a strategy based on reciprocity can thrive where other strategies are tried; and third, a strategy of reciprocity, once established, can survive in an environment of competing strategies. (16) In non-zero-sum games the nature of other strategies with which the player's strategy interacts must be considered. And so, the history of interaction between strategies must be taken into account.

To develop an appropriate response within the Prisoner's Dilemma framework, Axelrod conducted a computer tournament for theorists in psychology, economics, political science, mathematics, and sociology in an attempt to discover the best strategy for playing the Prisoner's Dilemma game. The winner was a strategy called TIT FOR TAT. To begin, as would two agencies, the players agree to cooperate. Thereafter, the player with the TIT FOR TAT strategy chooses to do whatever the other player chose in the previous move. The strategy elicited the most cooperation in a Prisoner's Dilemma game when compared with the others in Axelrod's competition. Regarding the nature of the strategy, the property of being "nice" distinguishes TIT FOR TAT from other strategies and simply means that the player who utilizes this approach in a Prisoner's Dilemma is never the first to defect.
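A minimal sketch of TIT FOR TAT, again under the hypothetical payoffs used above: cooperate on the first move, then repeat whatever the other player did on the previous move. Played against itself it cooperates every round; played against an always-defect opponent it is exploited exactly once and never again.

```python
# Minimal TIT FOR TAT under the hypothetical payoffs used above.
T, R, P, S = 2, 1, 0, -1
PAYOFF = {("C", "C"): (R, R), ("C", "D"): (S, T),
          ("D", "C"): (T, S), ("D", "D"): (P, P)}


def tit_for_tat(own_history, other_history):
    """Cooperate first; thereafter mirror the opponent's previous move."""
    return "C" if not other_history else other_history[-1]


def always_defect(own_history, other_history):
    return "D"


def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        history_a.append(move_a)
        history_b.append(move_b)
        score_a += pay_a
        score_b += pay_b
    return score_a, score_b


print(play(tit_for_tat, tit_for_tat))    # (10, 10): mutual cooperation every round
print(play(tit_for_tat, always_defect))  # (-1, 2): exploited once, never again
```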

According to Axelrod, a strategy is collectively stable if no other strategy can invade it. Presuming for a moment that a number of players are involved in a Prisoner's Dilemma game and all are using TIT FOR TAT as a strategy (including the property of being "nice"), there is no incentive to defect. However, if one player is not likely to be in the game much longer, it may be better to defect and exploit the other side's cooperation (i.e., not cooperate, cooperate). This strategy is only workable when the game is not collectively stable, because the weaker player cannot ensure reciprocity. A practical example is found in World War I trench warfare, in which the French traded two shots from the trenches for every one unprovoked German shot, and the French never fired first. As long as the battlefield was collectively stable, either no shots were fired or, if the Germans shot once, the French would fire back twice. (17)
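The end-game incentive can be made explicit with the same assumed numbers: on a known final round, retaliation is impossible, so a player who will not be in the game much longer gains the difference between the temptation and reward payoffs by defecting.

```python
# One-shot end-game arithmetic, using the hypothetical payoffs above.
T, R = 2, 1                              # temptation and reward for mutual cooperation
gain_from_final_round_defection = T - R  # 1: an un-punishable gain, but only at the end
print(gain_from_final_round_defection)
```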

Donald Chisholm noted that, "When the norm of reciprocity is thoroughly internalized by members of an organizational system, it provides benefits beyond the actual changes in informal relationships by reducing the level of conflict in the system." (18) While this is illustrated well in the French-German scenario, where a stabilizing strategy developed in the midst of conflict, in information-sharing the norm offered by Chisholm illustrates how a culture of sharing must replace the culture of stovepiping to achieve a stable cooperative environment.

It is worth noting that the French-German strategy was not the result of negotiation in the common sense of the term. Instead, cooperation evolved. And regardless of the generals' prodding to do otherwise, while the battlefield was stable the strategy held fast. Axelrod noted, "This is a case of cooperation emerging despite great antagonism between the players." (19) In time, the strategy changed when the "raid" was introduced into warfare practice, and the so-called "live-and-let-live" system perished. Nonetheless, the system proved that antagonists could cooperate and reach a stable environment.

Conclusion

In the end, a rather unsettling conclusion is apparent, one that flies in the face of the longstanding traditions of information-sharing. If one agent offers to share with another agent one time, the one-shot game applies; therefore, the equilibrium endgame is not to share (!), however counterintuitive that may be and even when there is an apparent benefit. A formal arrangement involving reciprocity (in a repeating game) appears to be the better solution, at least in a game-theoretic sense, in which both players would benefit over time and multiple plays (sharing, in other words). In this case, sharing is controlled, informal sharing that could amount to corruption is avoided, and the likelihood of leakage to hostiles diminishes.

Endnotes

(1.) Mary Noel Pepys, "Justice System," Fighting Corruption in Developing Countries, Bertram I. Spector, ed. (Bloomfield, CT: Kumarian Press, 2005), 14.

(2.) Daniel P. Moynihan, Secrecy: The American Experience (New Haven: Yale University Press, 1998), 169.

(3.) U.S. Department of State, Counterintelligence ... Working Together, publication 9655 (Washington, D.C.: Bureau of Diplomatic Security, 1990).

(4.) Mark M. Lowenthal, Intelligence: From Secrets to Policy (Washington, D.C.: CQ Press, 2000), 99-101.

(5.) Richard H. Ward, Introduction to Police Investigation (London: Addison-Wesley Publishing, 1975), 122.

(6.) Thomas Barker and David L. Carter, Police Deviance (Cincinnati: Anderson Publishing, 1994), 17-18.

(7.) Richard H. Ward and Robert McCormack, "Anti-Corruption Manual for Administrators in Law Enforcement," Managing Police Corruption: International Perspectives (Chicago: Office of International Criminal Justice, 1987), 37.

(8.) K. C. Davis, Discretionary Justice: A Preliminary Inquiry (Baton Rouge: Louisiana State University Press, 1969), cited in: Laure Weber Brooks, "Police Discretionary Behavior: A Study of Style," Critical Issues in Policing: Contemporary Readings, Roger G. Dunham and G. Alpert, eds. (Prospect Heights, IL: Waveland Press, 1989), 122.

(9.) A typical informal exchange might begin with an agent from another jurisdiction offering, "I have information about an auto theft ring working in your venue. What do you have for me in mine?" More often than not, there was something worthwhile to trade.

(10.) In the author's experience, many an agent has been sent off to a convention with the admonition to "bring lots of business cards." The practice is global.

(11.) Robert Axelrod, The Evolution of Cooperation (New York: BasicBooks, 1984), 17-18.

(12.) Ibid., 8.

(13.) Ibid., 10.

(14.) Ibid., 8.

(15.) Ibid., 10.

(16.) Ibid., 20-21.

(17.) D. V. Kelley, 39 Months (London: Ernest Benn, 1930), 18, cited in Axelrod, 61.

(18.) Donald Chisholm, Coordination without Hierarchy: Informal Structures in Multiorganizational Systems (Berkeley: University of California Press, 1989), 118.

(19.) Axelrod, 74.

by Kenneth J. Ryan, California State University at Fresno
