GLOBAL POLITICS AND THE GOVERNANCE OF ARTIFICIAL INTELLIGENCE.

The Governance of Artificial Intelligence (AI) Program at the University of Oxford's Future of Humanity Institute focuses on the political challenges associated with the rapid development of artificial intelligence. The Journal of International Affairs spoke to Allan Dafoe, the Director of the AI Program, about the AI governance problem, the risks and challenges involved, and the role that governments and the private sector will have in establishing a comprehensive AI governance framework.

Journal of International Affairs (JIA): Your research focuses on the AI governance problem. What are the distinctive characteristics of artificial intelligence from a political or international governance point of view? How can AI be governed adequately?

Allan Dafoe (AD): AI is the study of machines capable of sophisticated information processing. AI systems enable us to automate, improve upon, or scale up critical human skills in prediction and decision-making. This makes AI a critically important and potent technology which--like electricity, the internal combustion engine, or the microprocessor--has the potential to transform the economy, society, and the military. In fact, even if there were no further technical AI developments from today onwards, already existing capabilities create major governance challenges. How we build and use advanced AI will be the defining development of the 21st century, and one of its chief governance challenges.

The emerging field of "AI governance," as my team at the Governance of AI Program understands it, explores how humanity can best navigate the transition to advanced AI systems. We expect that such governance will prove difficult given the strategic importance of the technology, its diverse applications, and the uncertainty associated with its developmental trajectory. While we may be able to draw important lessons from past attempts at governing powerful technologies such as nuclear technology, biotechnology, and aviation, examples of unambiguously successful technology governance are historically rare. Of course, it will also be valuable to examine and learn from historical failures of international cooperation, such as the 1946 Baruch Plan for international control of nuclear weapons. However, one should keep in mind that many of these historical examples may be of limited transferability to the context of AI, since the development of AI involves distinct sets of nations, private actors, and considerably broader stakes and interests.

As we have described in our research agenda, designing good AI governance will need to take into consideration the technical landscape of AI. It will have to consider the political impacts of advanced AI on inequality, international political economy, and international security in order to understand how distinct actors may compete or cooperate on powerful AI. Such research will also have to take up what we call "AI ideal governance": the exploration of cooperative possibilities, values, and institutional mechanisms.

At a minimum, governance approaches should try to develop good norms, as well as initiatives and policies to address present-day issues, such as those around transparency, fairness, privacy, and autonomous weapons. How we manage these near-term challenges could have lasting effects on the shape and vitality of our governance landscape, and determine how well-equipped we are to take on later issues, including powerful military capabilities (such as in cyberspace), pervasive labor displacement, concentration of market power, and the safety and alignment of increasingly advanced AI systems. However, such short-term interventions may not be sufficient if they cannot continuously track advancing AI capabilities. What we would ideally put in place moving forward is an operationalization of the "common good" principle--that is, mechanisms by which actors pursuing the development and deployment of advanced AI technologies are incentivized and rewarded for pursuing progress in AI in a safe manner that serves to benefit humanity.

JIA: How important will multilateral organizations and negotiations be for AI governance?

AD: Multilateral organizations could play a pivotal role in AI governance by providing a joint forum for the formulation, coordination, and dissemination of cooperative norms between actors, enabling participating parties to signal sincere commitment to beneficial and shared AI development. At present, no single multilateral organization has taken up this mantle, although the UN has developed a vision of pursuing "AI for Global Good" and of using it as a driver for achieving its Sustainable Development Goals by 2030. Bodies such as the International Organization for Standardization could also be critical in shaping technical standards for AI technologies in a manner that integrates goals of safety and ethics.

Other initiatives that are not strictly multilateral can also play a role. For example, the Partnership on AI has hosted a number of conversations about AI governance and ethics. These conversations have included for-profit and non-profit actors from several industries and countries, including Baidu of China, which recently became a partner. To offer another example, the Future of Life Institute hosted an event that led to the 2017 Asilomar AI Principles, which were adopted this summer by the California State Legislature.

JIA: You have spoken of extreme systemic risks, which could emerge alongside the benefits linked to superhuman capabilities in strategic domains. What are these risks and what are the most urgent questions to deal with in this field?

AD: While advanced AI capabilities offer large opportunities, there are indeed also distinct risks that emerge as a system achieves superhuman, or even merely human-equivalent, capability in relevant domains. Even near-human performance would allow AI to substitute for humans in a range of tasks, and this alone could lead to massive labor displacement, radically increased inequality, erosion of privacy, risks of nuclear instability, a reorganization and concentration of the global economy, and upsets to the military offense-defense balance that increase the risk of conflict.

Often, there are trade-offs between the short-term performance of a system and its longer-term safety and reliability. To offer a hypothetical example, imagine a future where autonomous cyber AI agents are defending our networks. For the purposes of safety, we could put a human in the loop who, whenever one of these AIs perceives a large cyberattack in progress, has several hours to deliberate before sanctioning a large-scale action in response. Such a policy might avoid unintentional escalation. However, if a quick response is critical to performance and defense, as it often is in cyberwarfare, there would be strong strategic pressures to empower the AI cyber system with a fully autonomous response. This would expose us to the risk of a rapid cycle of escalation to high levels of hostility, which we might think of as analogous to the flash crashes that frequently occur with trading bots.

In contrast, it may be easier to resist these automation pressures and reserve some roles for human actors in non-competitive or less time-sensitive contexts, such as hiring committees or hospitals that rely in part on algorithmic analysis. Even in the context of key strategic decisions, if there is less direct time pressure toward an immediate response, decision makers will be better able to retain some human judgment in certain decision processes, or to otherwise adopt measures to increase safety.
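[Editorial illustration] The trade-off described above can be made concrete with a minimal, purely hypothetical sketch in Python: a fully autonomous policy acts the moment an attack is detected, while a human-in-the-loop policy acts only if an operator approves the response within a deliberation window. All names, thresholds, and functions here are invented for illustration and are not drawn from any real system or from the interview.

from dataclasses import dataclass
import time

@dataclass
class DetectedAttack:
    source: str
    severity: float  # 0.0 (benign) to 1.0 (critical), as scored by the detector

def autonomous_response(attack: DetectedAttack) -> str:
    """Fully autonomous policy: act immediately, maximizing speed of defense
    at the cost of possible unintended escalation."""
    return f"counter-measures launched against {attack.source}"

def human_in_the_loop_response(attack: DetectedAttack, approve,
                               deliberation_seconds: float = 3 * 3600) -> str:
    """Gated policy: a large-scale response is taken only if a human reviewer
    approves it before a fixed deliberation deadline."""
    deadline = time.time() + deliberation_seconds
    if approve(attack, deadline):
        return f"counter-measures launched against {attack.source} (human-approved)"
    return "large-scale response withheld; escalation avoided"

if __name__ == "__main__":
    attack = DetectedAttack(source="unknown-botnet", severity=0.9)

    # Stand-in for a human operator; a real system would wait on an
    # operator's decision rather than returning instantly.
    def cautious_reviewer(atk: DetectedAttack, deadline: float) -> bool:
        return False  # decline to authorize a large-scale response

    print(autonomous_response(attack))
    print(human_in_the_loop_response(attack, approve=cautious_reviewer))

The design choice is exactly the one discussed above: the approval gate trades response speed for a check against unintended escalation, and competitive time pressure is what pushes actors to remove the gate.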

JIA: What are the risks of intellectual piracy of AI and knowledge transfer from governments and private organizations to non-state actors and individuals?

AD: Restricting access to a technology has always been difficult at best, and the risk of theft or dissemination is particularly severe for digital technologies, including AI. Whereas the military has more experience in constraining the diffusion of innovation, private-sector development and distribution may increase the 'attack surface' for prospective thieves. As such, certain AI tools may become relatively accessible to non-state actors, especially since some of the major players in AI are at least nominally committed to open-source development. For instance, in 2015, Google opened up TensorFlow, a formerly proprietary machine learning software library, to public use. Conversely, the barriers to proliferation may be higher for more advanced AI systems that require more computing hardware or large structured datasets to train and implement, or that require other forms of expertise in building large-scale systems.

JIA: You have previously cautioned that advanced AI is likely to massively increase the potential gains from cooperation and potential losses from non-cooperation, as well as the chance of AI nationalism or mercantilism. What lessons emerge from other general-purpose technologies and economy-wide transformations?

AD: As a generally enabling technology, advanced AI can raise the stakes for all parties, for better or for worse. On the one hand, if different states trust that they can share in the large bounty created by AI, they might be incentivized to cooperate more. But if they do not, the perception of advanced AI as a strategic asset might drive destabilizing arms races or nationalism. Kai-Fu Lee, formerly at Google China, has suggested that market capture by a few ultra-profitable AI companies might create new forms of dependency, as many states will be forced to negotiate with whichever country--China or the United States--supplies most of their AI services.

Past technological transformations have shown that the dissemination of a general-purpose technology throughout society can take some time, and that its full potential is not always evident or easy to predict at the earliest stages. The dissemination of technologies such as electricity or steam power shows that such technologies usually start off having only a small number of important uses, but then achieve a progressively larger impact through technical improvement, investments, knowledge diffusion, and adaptation by individuals and institutions. Another potential lesson from history is that states that do a particularly good job of adapting to a new general-purpose technology can gain important advantages. For instance, during the Second World War, the German military took advantage of the internal combustion engine, electricity, and radio in realizing its blitzkrieg strategy. The U.S. military similarly envisioned using its lead in advanced computer technologies to gain advantage over the Soviet Union in its second 'offset' strategy.

These examples should lead us to consider important new principles or goals for any policies that govern this technology. To underpin this, we should articulate clear and compelling visions that draw on and are responsive to the concerns and worldviews of people and elites from a range of backgrounds, so that these proposals are likely to resonate globally and be compatible with the incentives of key stakeholders.

JIA: Today, private firms are at the cutting edge of advanced AI research. Should these firms be involved in AI governance?

AD: Firms need to be involved in AI governance and may be better placed than governments to take the initial lead in forming the foundations of an AI governance framework. Even in the past, when government or state-sponsored research labs were the source of a larger portion of technological innovation, governments often found it hard to anticipate the course, impact, or regulatory demands of technological development in time. Moreover, elected legislatures may lack the requisite technological expertise. The mandate and disciplinary composition of specially appointed 'expert agencies' may become rapidly outdated, and courts, which tend to be reactive and inclined to focus on the particularities of a case, may overlook general societal trends and needs. More importantly, states are simply no longer the sole source of governance capacity, especially in areas such as AI, where the private sector plays an increasingly large role. Fast-developing technologies may also follow unpredictable trajectories and create problems for regulatory pacing and oversight. Governing such emerging technologies will require in-house knowledge of, and familiarity with, the fundamentals of a rapidly changing technology--a capacity that currently favors firms over states.

The key here will be striking a balance between private and public interests, and aligning firm incentives with the pursuit of the common benefit of humanity. Firms are not as averse to pursuing pro-social goals as some may expect. Indeed, leading AI labs have demonstrated a willingness to invest in safety and ethics. Nevertheless, governments, together with civil society and an independent research community, act as important counterweights to firm incentives and need to be integrated into the fabric of an AI governance framework.

An Interview with Allan Dafoe
COPYRIGHT 2018 Columbia University School of International and Public Affairs

Article Details
Author: Dafoe, Allan
Publication: Journal of International Affairs
Article Type: Interview
Date: Sep 22, 2018