How driverless cars think
Debates about autonomous vehicles (AVs) often become polarized, with neither side willing to acknowledge that the technology presents both risks and opportunities. In "We Need New Rules for Self-Driving Cars" (Issues, Spring 2018), Jack Stilgoe provides a much-needed, balanced overview of AVs, calling for increased public debate and governance of the development, not just the deployment, of AV technology. We agree with many of his recommendations, particularly about the importance of data sharing with the public and other parts of the industry. We also share his concern that AVs may exacerbate social inequalities, or shape the public environment in ways that disproportionately harm the worse off. At the same time, we raise two issues that are deeply important to our understanding and policy-making, but that receive only passing mention in his article.
First, "reliability" and "safety" are oft-used terms in these discussions, but almost always without reference to the contexts in which AVs are expected to be reliable or safe. Stilgoe notes challenges presented by "edge cases," but the importance of deployment context is not limited to unusual or anomalous cases. Rather, those contexts shape our whole understanding of AV safety and reliability. In particular, proposals for approval standards based on criteria such as "accident-free miles driven" or "number of human interventions per mile" are misguided. Such criteria are appropriate only if test conditions approximately mirror future deployment contexts, but many technologies used in AV development, such as deep networks, make it difficult to determine whether test and deployment conditions are relevantly similar. We have thus proposed that the approval process for AVs should include disclosure of the models used by developers to link AV testing and deployment scenarios, including the validation methodology for each model, along with credible evidence that the developer's test scenarios cover a sufficient range of likely deployment contexts and that the AV performs safely and reliably in those contexts.
Second, the development of AVs requires many design decisions for which multiple technological options are acceptable. As one simple example, an AV can be designed to always drive as safely as possible, or to always follow the law, but not to always maximize both values. Driving as safely as possible might involve breaking the law, as when other vehicles are themselves breaking the law (say, by speeding excessively). Moreover, the technology does not dictate which value to prioritize; the developer must decide. Importantly, there is no unique "right" answer: one could defensibly prioritize either value, or even try to balance them in some way. Nonetheless, some choice is required. Technology is not value-neutral, but rather encodes developer decisions about, for example, what counts as "success" at driving. Public discussion should thus go beyond the issues mentioned by Stilgoe, and further include debate about the values that we want AVs to embody and what AV developers should be required to tell us about the values in their particular technologies.
DAVID DANKS
L.L. Thurstone Professor of Philosophy and Psychology
ALEX JOHN LONDON
Clara L. West Professor of Ethics and Philosophy
Carnegie Mellon University
Authors: Danks, David; London, Alex John
Publication: Issues in Science and Technology
Date: Jun 22, 2018