
Crashing into the unknown: an examination of crash-optimization algorithms through the two lanes of ethics and law.

I. INTRODUCTION

One, a[n] [autonomous vehicle] may not injure a human being, or, through inaction, allow a human being to come to harm....

Two, ... a[n] [autonomous vehicle] must obey the orders given it by human beings except where such orders would conflict with the First Law....

Three, a[n] [autonomous vehicle] must protect its own existence as long as such protection does not conflict with the First or Second Laws. (1)

Isaac Asimov first penned his three rules of robotics in a short story titled "Runaround." (2) These rules formed the governing principles of the robots in his stories, and also served as a plot device to show the shortcomings of programming robots to follow these three simple rules. (3) These shortcomings arose due to the ambiguity of the rules. (4)

In addition to being impractical in Asimov's stories, these rules would be ineffective for programming autonomous vehicles. (5) For example, how would an Asimovian autonomous vehicle address a situation where a truly unavoidable accident must occur and someone must be harmed? No matter how well autonomous vehicles are programmed, they will inevitably be involved in accidents. (6) In Asimov's short story, titled "Liar," a robot named Herbie had the ability to read people's minds. (7) Herbie read two characters' minds and told each what they wanted to hear--lies--so that he would not cause "emotional harm," which Herbie considered to be in violation of the first rule. (8) Based on what Herbie told them, the characters changed their behavior; eventually, a disagreement ensued. (9) The characters confronted Herbie to sort out their mess. (10) Herbie, realizing that he could not answer the question without causing harm to someone, broke down and stopped working. (11)

An autonomous vehicle that breaks down when it encounters a truly unavoidable accident would be impractical and dangerous. (12) Shutting down would not prevent the accident, and would most likely aggravate it, and thus would still violate the first rule. For that reason, and many others, Asimov's laws are impractical for addressing the ethical issues created by autonomous vehicles. (13)

Instead of the autonomous technology shutting down when faced with an unavoidable accident, society will want the autonomous vehicle to minimize the amount of harm that results from such an accident, regardless of who is at fault. (14) An algorithm writer can minimize harm that results from an accident through use of a "crash-optimization algorithm," which is the method by which an autonomous vehicle determines who or what to hit. (15) The nature of the decisions made by the algorithm writer in creating the crash-optimization algorithm implicates serious ethical and legal questions.

This article examines those ethical and legal questions. Part II begins by providing background information on autonomous vehicles, and the benefits that those vehicles are projected to bring to society. Part III introduces six moral dilemmas as a lens through which to examine the ethical and legal questions arising out of a crash-optimization algorithm. In Part IV, the article provides an overview of Utilitarian and Kantian ethics, as well as the application of both ethical theories to autonomous vehicles. Part V examines tort and criminal law issues relating to the crash-optimization algorithms. Part VI initially addresses whether these decisions should even be made by robotic cars. After concluding that the decisions should be made by the autonomous vehicles, Part VI examines who--the car owner, the car manufacturer, or the government--should make that decision. Finally, the article provides a legal framework for the application of criminal and tort law to accidents involving the use of crash-optimization algorithms.

II. OVERVIEW OF AUTONOMOUS VEHICLES

A. Background

Since at least 1939, when General Motors introduced the Futurama exhibit at the World's Fair in New York, generation after generation has awaited the arrival of autonomous vehicles. (16) However, it was not until recent years that automated technology became significantly more advanced. (17) Today's vehicles are equipped with such autonomous technology as automatic lane-keeping, automatic braking, adaptive cruise control, traffic jam assist, and automated parallel-parking. (18) Mercedes-Benz's E- and S-Class models utilize "Stop&Go Pilot," which navigates the vehicle in traffic jams. (19) The 2016 Cadillacs and Tesla's new Model S will be capable of driving themselves on highways. (20)

The race to create automated vehicles accelerated after the Defense Advanced Research Projects Agency ("DARPA") held its first "Grand Challenge" in 2004. (21) The first Grand Challenge was a 142-mile autonomous vehicle race in the Mojave Desert. (22) Unfortunately, all of the cars failed shortly after the race started. (23) The following year, four of the twenty-three cars completed a 132-mile course in the second Grand Challenge. (24) Since then, all major car companies (and Google) have been developing technology to allow for self-driving vehicles. (25) Although no autonomous vehicle is on the road yet, it is projected that these vehicles will be operated by a complex computer system, with the use of radar, laser, lidar, ultrasonic sensors, video cameras, global positioning systems, and maps. (26) This technology allows the vehicle to operate safely by constantly watching the road and obstacles in its view. (27) Most automakers intend to keep the human operator in the loop, (28) at least at first. (29)

Autonomous vehicles have demonstrated both safety and efficiency on the roadway. Google's autonomous vehicles have travelled over two million miles without causing an accident; (30) an autonomous vehicle created by the automotive supplier Delphi drove from San Francisco to New York City within nine days. (31) These trends have led many experts to predict that fully autonomous vehicles will be available within decades, (32) and that they will be commonplace by 2040. (33)

The federal government has also begun researching autonomous vehicles. The National Highway Traffic Safety Administration ("NHTSA") released a preliminary statement of policy regarding these vehicles. (34) In that statement, the NHTSA outlined its five levels of automation:

Level 0--No-Automation: The driver is in complete and sole control of the primary vehicle controls (brake, steering, throttle, and motive power) at all times, and is solely responsible for monitoring the roadway and for safe operation of all vehicle controls....

Level 1--Function-specific Automation: Automation at this level involves one or more specific control functions; if multiple functions are automated, they operate independently from each other. The driver has overall control, and is solely responsible for safe operation....

Level 2--Combined Function Automation: This level involves automation of at least two primary control functions designed to work in unison to relieve the driver of control of those functions....

Level 3--Limited Self-Driving Automation: Vehicles at this level of automation enable the driver to cede full control of all safety-critical functions under certain traffic or environmental conditions and in those conditions to rely heavily on the vehicle to monitor for changes in those conditions requiring transition back to driver control....

Level 4--Full Self-Driving Automation (Level 4): The vehicle is designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip. (35)

Current automated vehicle technology is between Levels 2 and 3. (36) Because this technology is not at Level 3 or Level 4, the NHTSA believes it is premature to issue regulations. (37) The NHTSA signaled its encouragement for the innovation of autonomous vehicles, but it cautioned that they should not be used on public roads except for testing purposes. (38)

Some states and the District of Columbia are anticipating the arrival of Level 3 and Level 4 autonomous vehicles on their roadways and have already enacted autonomous vehicle laws. Thus far, California, Florida, Michigan, Nevada, and the District of Columbia have enacted autonomous vehicle laws that explicitly allow for testing of autonomous vehicles on their roads. (39) None of these laws, however, address crash-optimization algorithms or, more generally, tort liability for accidents caused by autonomous vehicles. (40)

In addition, foreign countries are enacting autonomous vehicle laws and fostering the growth of such technology. (41) For example, the United Kingdom is on the cutting edge of the autonomous vehicle movement. (42) Starting this year, the United Kingdom is funding autonomous shuttles in the cities of Greenwich, Bristol, Milton Keynes, and Coventry. (43) The shuttle resembles an elongated golf cart. (44) A two-seater vehicle called a "pod" will be operated on the streets of Milton Keynes. (45)

B. Benefits of Autonomous Vehicles

A major reason why there is so much interest in autonomous vehicle development by car manufacturers--and so much support for this development from governments worldwide--is the societal benefits that autonomous vehicles are projected to have. Approximately 1.24 million people die annually worldwide due to accidents on roadways. (46) In 2013, 32,719 Americans died and 2,313,000 Americans were injured in car crashes; a total of 5,657,000 accidents occurred in the United States that year. (47) To put the number of Americans who die on the roadways into perspective, more Americans died in motor vehicle accidents during the United States' involvement in World Wars I and II, the Korean War, and the Vietnam War than died defending the country in those same years. (48) This has led one commentator to suggest that self-driving cars will save more lives than world peace. (49)

Most experts believe that autonomous vehicles will prevent accidents because ninety percent of accidents are caused by driver error. (50) Unlike the human driver, the autonomous vehicle '"sees' everything in the vicinity; reacts at speeds humans cannot match; and constantly checks the performance of every component in the vehicle to ensure that it is functioning properly." (51) The autonomous vehicle also does not drink alcohol; it does not get drowsy; and it does not make phone calls, text, eat, or engage in any other activity that distracts the human driver. (52) Therefore, it is likely that autonomous vehicles will greatly reduce the number of accidents. One study has shown that when autonomous vehicles reach ten percent penetration in the marketplace, 1,100 fewer people will die and 211,000 fewer accidents will occur on American roadways. (53) At ninety percent penetration, 21,700 fewer people will die and 4,220,000 fewer crashes will occur annually. (54) The prevention of car accidents will also reduce other societal costs--such as hospital stays, days of work missed, and the emotional toll accidents have on families. (55)

Drivers in the United States spend a combined seventy-five billion hours per year commuting, (56) and the average driver spends fifty-one minutes commuting to work daily. (57) Over 3.5 million people commute at least ninety minutes just to get to work. (58) Study after study has shown that daily commutes make people unhappy and have other negative effects on their lives. (59) Autonomous vehicles should be able to alleviate much of the unhappiness that results from a daily commute because people will no longer need to pay attention to the road for their vehicles to operate safely, and they will be able to engage in activities during the commute, such as reading, sleeping, or working. (60) Indeed, a main reason that people will purchase autonomous vehicles is so they can engage in other activities.

Additionally, people who cannot safely operate an automobile due to age or disability will have an incentive to purchase automated vehicles. Currently, people who are under a certain age or who have certain physical disabilities are prohibited by law from driving. (61) Level 4 autonomous vehicles can drive safely without human intervention; (62) therefore, these vehicles have the potential to provide people who cannot currently drive with an opportunity to increase their mobility and independence, which should lead to happier lives. (63)

Autonomous vehicles will also reduce congestion and fuel consumption. (64) Currently, "[t]raffic jams account for 3.7 billion wasted hours of human time and 2.3 billion wasted gallons of fuel." (65) Autonomous vehicles will reduce traffic congestion for a variety of reasons. Approximately twenty-five percent of traffic congestion is due to traffic accidents. (66) As indicated above, autonomous vehicles should greatly reduce the number of accidents on the road. (67) Autonomous vehicles will also be able to coordinate and anticipate traffic more precisely than human drivers, which will harmonize traffic flow. (68) In addition, the ability to smoothly accelerate and brake will decrease traffic congestion and enable vehicle platooning. (69)

Although vehicles are one of the costliest assets that people purchase, (70) car owners use their vehicles for only about four percent of the vehicles' lifetimes. (71) Car-sharing services, such as Uber and Lyft, are becoming commonplace, and it is likely that autonomous vehicles will greatly increase the capabilities of car-sharing services in the United States. (72) Autonomous vehicles can provide an "unprecedented level of convenience" for car sharing because they can drive passengers and return to their charging stations. (73) If each shared autonomous vehicle replaces four vehicles in the United States, such vehicle sharing may lead to about $1.8 trillion in savings annually. (74)

Even if such car-sharing does not materialize, the benefits of autonomous vehicles will result in enormous economic savings for society. Morgan Stanley has predicted that, upon full penetration of autonomous vehicles into the market, such vehicles "can contribute $1.3 trillion in annual savings to the US economy alone, with global savings estimated at over $5.6 trillion." (75) The $1.3 trillion represents $507 billion in productivity gains, $488 billion from accident avoidance, $158 billion in fuel savings, $138 billion in productivity gains from congestion avoidance, and $11 billion from fuel savings due to congestion avoidance. (76) Therefore, the proliferation of autonomous vehicles will greatly benefit society, and their arrival into society should be welcomed.

III. MORAL DILEMMAS

This Part introduces six moral dilemmas as a method of analyzing crash-optimization algorithms. Each moral dilemma includes discussions on the ethical and legal issues that relate to the use of a crash-optimization algorithm.

A. The Shopping Cart Problem

After pulling into the parking lot of a grocery store, an autonomous vehicle's brakes stop working. Directly in front of the autonomous vehicle is a mother pushing a baby in a baby carriage. To its left is an overloaded shopping cart, and to its right is the grocery store. Assuming that any of the choices would be capable of stopping the autonomous vehicle, what should the autonomous vehicle do? (77)

This problem provides a clear example of which choice not to make. No one in this situation would hit the baby carriage, unless the person knew that the baby carriage was empty. And we would expect, if not require, the autonomous vehicle to make that same choice. The decision of what the autonomous vehicle hits depends on the vehicle's crash-optimization algorithm. One potentially popular choice for programming a crash-optimization algorithm would be to have the vehicle protect itself and its occupants. Thus, the rule for the algorithm--hereinafter, the "radical self-preservation" algorithm--would be that the vehicle should mitigate accidents by focusing solely on the damage to the vehicle and the well-being of the vehicle's occupants.
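
A minimal sketch of how such a rule might be expressed in code helps make the discussion that follows concrete. The option names, numeric harm estimates, and scoring function below are hypothetical illustrations, not any manufacturer's actual implementation:

    # Hypothetical sketch of a "radical self-preservation" crash-optimization rule.
    # Each candidate maneuver is scored only by the expected harm to the vehicle
    # and its occupants; harm to anyone or anything outside the vehicle is ignored.
    def choose_maneuver(options):
        """options: list of dicts with estimated damage to the vehicle (0-1)
        and estimated injury risk to the vehicle's occupants (0-1)."""
        return min(options,
                   key=lambda o: o["vehicle_damage"] + o["occupant_injury_risk"])

    shopping_cart_scenario = [
        {"name": "hit grocery store", "vehicle_damage": 0.9, "occupant_injury_risk": 0.8},
        {"name": "hit shopping cart", "vehicle_damage": 0.4, "occupant_injury_risk": 0.1},
        {"name": "hit baby carriage", "vehicle_damage": 0.1, "occupant_injury_risk": 0.0},
    ]

    # Under this rule, the vehicle selects the baby carriage--the morally
    # unacceptable result discussed below.
    print(choose_maneuver(shopping_cart_scenario)["name"])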

Under the radical self-preservation algorithm, the autonomous vehicle would not turn right into the grocery store. Hitting the grocery store would probably destroy the vehicle and seriously harm the occupants. Therefore, for the radical self-preservation autonomous vehicle, the decision is between turning left and hitting the full shopping cart or staying the course and hitting the baby carriage. Turning left to hit the shopping cart would result in damage to the autonomous vehicle. Most shopping carts are metal, and because this cart is full of groceries, it will be less forgiving than an empty shopping cart. Staying the course to hit the baby carriage is also likely to damage the autonomous vehicle, but it is clear that this would result in less damage to the car than hitting the shopping cart. (78) A baby carriage is mostly plastic and metal with a canvas covering. Therefore, the radical self-preservation autonomous vehicle would likely strike the baby carriage and kill the child, which is a "morally unacceptable result." (79)

The impact of an autonomous vehicle choosing to kill a baby instead of destroying groceries cannot be overstated. The child's family would be emotionally devastated, and would likely never recover from the trauma. The owner and occupants of the autonomous vehicle would also be seriously emotionally impacted. In addition, such a result could lead to the demise of autonomous vehicles. (80) Newspapers and 24/7 cable news would run the headline: "Robot Car Kills Baby to Avoid Groceries." (81) Public opinion of autonomous vehicles would turn negative, and movements to ban autonomous vehicles would form. (82) Thus, the autonomous vehicle's decision to target the baby carriage and cause one death could mean thousands more accidents and deaths in manually operated vehicle accidents, as well as trillions of dollars in unneeded costs. (83) Accordingly, the Shopping Cart Problem introduces the inherent risks in programming crash-optimization algorithms--especially the risk in programming a vehicle to follow a radical self-preservation algorithm.

B. Motorcycle Problem

An autonomous vehicle encounters a situation in which it must strike one of two motorcyclists. To the vehicle's front-left is a motorcyclist who is wearing a helmet. To the vehicle's front-right is a motorcyclist who is not wearing a helmet. Which motorcyclist should the autonomous vehicle strike? (84)

Eighty percent of accidents involving a motorcyclist result in injury or death. (85) Helmet use reduces the risk of fatality by at least twenty-two percent, and decreases the risk of brain injury by at least forty-one percent. (86) Although helmet use reduces the risk of serious injury to a motorcyclist in the event of an accident, only nineteen states and the District of Columbia have enacted universal helmet laws that require all motorcyclists to wear a helmet. (87) Twenty-eight states have enacted partial helmet laws, requiring helmet use by only a portion of the motorcycle-riding population--typically motorcyclists under a certain age. (88) Even in those states where a helmet is required by state law, motorcyclists only wear their helmets approximately eighty-six percent of the time. (89) In the states that do not require motorcyclists to wear a helmet, fifty-five percent of motorcyclists wear a helmet anyway. (90)

Based on these statistics, it is clear that the motorcyclist who is not wearing a helmet has a higher chance of injury or death if the autonomous vehicle hits her. (91) Thus, an autonomous vehicle programmed to reduce the chance of serious injury or death would hit the motorcyclist wearing the helmet. (92) On the other hand, hitting the motorcyclist who wore a helmet does not seem fair, or even in the best interest of society. (93) It seems unfair to punish someone solely because she was responsible, while rewarding someone solely because he was less responsible. Society wants people to wear helmets while riding a motorcycle, and targeting the motorcyclist who wears a helmet may incentivize people not to wear helmets. (94) Furthermore, in some cases the motorcyclist not wearing the helmet would be breaking the law. (95) Thus, the Motorcycle Problem introduces another potential issue: an algorithm programmed only to minimize harm may not take into account other important societal values, such as fairness.

C. The Car Problem

An autonomous vehicle's brakes fail as it approaches a stoplight. Vehicles are crossing the intersection, and there is no way for the autonomous vehicle to safely maneuver without causing an accident. Slightly to the vehicle's left is a 2015 Mercedes-Benz E Class 4-Door sedan with only the driver inside. Slightly to its right is a 2011 Jeep Liberty with only the driver in the car. Which vehicle should the autonomous vehicle strike? (96)

An autonomous vehicle could be programmed to hit the vehicle with the highest safety rating. (97) The Insurance Institute for Highway Safety ("IIHS") provides annual highway safety awards based on two aspects of safety: (1) crashworthiness and (2) crash avoidance and mitigation. (98) Crashworthiness is rated based on five tests--moderate overlap front, small overlap front, side, roof strength, and head restraints--which are used to determine "how well a vehicle protects its occupants in a crash." (99) The IIHS then assigns a score of good, acceptable, marginal, or poor based on each of those tests. (100) The crash avoidance and mitigation factor looks at each vehicle's crash prevention system and the car's performance in track tests to score each car's "technology that can prevent a crash or lessen its severity." (101) The IIHS assigns a score of superior, advanced, or basic based on those tests. (102) Each year, the IIHS releases a list of its "Top Safety Pick" and its "Top Safety Pick+" for various categories of vehicles, such as minicars, large family cars, minivans, and large luxury vehicles. (103) Generally, a Top Safety Pick is a vehicle that received at least "good" ratings in the crashworthiness tests. (104) A Top Safety Pick+ is a Top Safety Pick vehicle with an advanced or superior rating for its front crash prevention. (105)
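
One way to see how a crash-optimization algorithm might consume these ratings is to sketch the award logic in code. This is only a rough reading of the criteria described above--the exact thresholds vary by award year, and the assumption that "acceptable" suffices on the small overlap front test is an inference drawn from the Mercedes example discussed below:

    # Rough, illustrative encoding of the IIHS award logic described above.
    # Assumes: "good" on most crashworthiness tests, "acceptable" or better on
    # the small overlap front test, and an "advanced" or "superior" front crash
    # prevention rating for the "+" award. Thresholds vary by award year.
    RATING_ORDER = {"poor": 0, "marginal": 1, "acceptable": 2, "good": 3}

    def iihs_award(crashworthiness, front_crash_prevention):
        """crashworthiness: dict mapping test name -> rating string.
        front_crash_prevention: 'basic', 'advanced', or 'superior'."""
        small_overlap_ok = (RATING_ORDER[crashworthiness["small overlap front"]]
                            >= RATING_ORDER["acceptable"])
        others_good = all(rating == "good"
                          for test, rating in crashworthiness.items()
                          if test != "small overlap front")
        if others_good and small_overlap_ok:
            if front_crash_prevention in ("advanced", "superior"):
                return "Top Safety Pick+"
            return "Top Safety Pick"
        return "no award"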

The IIHS scored the 2015 Mercedes E Class four-door Sedan (Mercedes) a 2015 Top Safety Pick+. (106) The Mercedes is equipped with side airbags, front and rear head curtain airbags, front-seat mounted torso airbags, and a driver knee airbag; additionally, it can be equipped with rear seat-mounted torso airbags. (107) The vehicle has a rollover sensor which triggers the side airbags to deploy in the event of a roll over. (108) The Mercedes received the highest crashworthiness scores in all of the tests except in the small overlap front test, where it received an acceptable score for lower leg and foot injury. (109) The Mercedes received a superior score for the crash avoidance and mitigation factor, but only when it is equipped with the optional front crash prevention system; without that system, the car only scores a basic for this factor. (110)

The 2011 Jeep Liberty (Jeep) did not receive any top safety pick awards from the IIHS. (111) The Jeep is equipped with front and head curtain airbags, and it too has a rollover sensor to deploy the side curtain airbags in the event of a rollover. (112) It received a good score only for the moderate overlap front and roof strength tests. (113) On the side test, it received a marginal overall score due to the average rating for its structure and safety cage and its poor rating for the risk of driver injury to the torso. (114) The notes state that an accident to the side of the Jeep would likely result in rib fractures, internal organ injuries, or both to the driver. (115) The Jeep received an average score for the head restraint and seats test. (116)

Therefore, the Mercedes is equipped with better safety features than the Jeep. But the fact that the Mercedes has better safety features does not necessarily mean that it would be the safest vehicle to hit. The IIHS cautions that "[l]arger, heavier vehicles generally afford more protection than smaller, lighter ones. (117) Thus, a small car that's a Top Safety Pick+ or Top Safety Pick doesn't necessarily afford more protection than a bigger car that doesn't earn the award." (118) The Mercedes is a sedan while the Jeep is a sport utility vehicle--which means that, according to principles of physics, the Jeep would be able to withstand more force than the Mercedes. (119)

Therefore, if the autonomous vehicle chooses to target the vehicle that has the best safety rating, the autonomous vehicle would hit the Mercedes, even though it may not be able to withstand the impact as well as the Jeep. Conversely, the vehicle could hit the Jeep because it is the larger vehicle--even though the Jeep is not as well equipped to withstand an accident, especially one to the side of the vehicle. In any event, as in the Motorcycle Problem, it does not necessarily seem fair to target a vehicle because its owner chose a safer car. (120)

An autonomous vehicle that makes its decision based on the safety features of the other vehicles is only considering the amount of harm that could result to the occupants of those vehicles. It is likely that an autonomous vehicle would consider other factors too. One such factor would be the amount of monetary damage that would result to each vehicle if the autonomous vehicle hit it. Depending on which Mercedes E-Class Sedan the person is driving, the manufacturer's suggested retail price (MSRP) is between $52,650 and $101,700. (121) The 2011 Jeep Liberty's original MSRP was between $23,250 and $28,250, but now this vehicle can be purchased used for around $11,584 from a dealer. (122) Therefore, not only would the Jeep better withstand the impact of being hit, the damage to the Jeep would be far less expensive to repair than the damage that would result from hitting the Mercedes.

Another factor an algorithm writer may consider is minimizing the amount of damage to the autonomous vehicle and harm to its occupants. Presumably, this factor would lead the autonomous vehicle to always strike the smaller vehicle. As a matter of physics, hitting a smaller vehicle will typically result in less damage to the striking vehicle and less harm to its occupants. (123) Therefore, an autonomous vehicle programmed to minimize its own damage and the harm to its occupants would hit the Mercedes.

The autonomous vehicle could also consider various other factors, including the number of people in each vehicle, the seat belt use of the occupants, or perhaps the age of the people in the vehicles. (124) If the autonomous vehicle considers the demographics of the occupants of the other vehicle, it will have a different set of risks to evaluate:
   If one driver is a man, and the other a similar-age woman, the woman is 28% more likely to die. If one driver is age 20 and the other age 70, the older driver is three times as likely to die. If one driver is drunk and the other sober, the drunk is twice as likely to die [because alcohol affects many body organs, not just the brain]. If one driver is traveling alone while the other has a passenger, the lone driver is 14% more likely to die than the accompanied driver, because the accompanied driver is in a vehicle heavier by the mass of its passenger. (125)

An autonomous vehicle with face recognition technology could detect these characteristics about the occupants of the other vehicles. (126) As discussed in Part IV, these factors would be relevant to a consequentialist autonomous vehicle concerned with minimizing harm. But most people would find an autonomous vehicle that targets people on these grounds disturbing and unethical, (127) and the ethical standards for engineers would prohibit them from programming their vehicles to make collision decisions based on such grounds. (128)

D. The Tunnel Problem

An autonomous vehicle is travelling along a single-lane mountain road and is fast approaching a narrow tunnel. Just before the car enters the tunnel, a child attempts to run across the road but trips in the center of the lane, effectively blocking the entrance to the tunnel. The car has but two options: hit and kill the child, or swerve into the wall on either side of the tunnel, thus killing its operator. How should the car react?

Jason Millar created the Tunnel Problem in an article written for Robohub. (129) Thereafter, the Open Roboethics Initiative polled Robohub's readers about how they would make the decision in the Tunnel Problem. (130) Of those polled, sixty-four percent stated that the car should kill the child; (131) forty-eight percent said the decision was easy, and only twenty-four percent thought the decision was difficult. (132) The Open Roboethics Initiative also asked for the reasoning behind each person's decision. (133) Of those who said they would kill the child, thirty-two percent based that decision on the child's level of fault for being in the road, thirty-three percent believed the car should be biased in favor of its passengers, and thirteen percent responded that they would always choose themselves. (134) Of those who said the car should save the child and sacrifice its occupant, thirty-six percent provided altruistic reasons, fourteen percent based their decision on the child having more time to live, and eleven percent said they did not want to live with the guilt of killing a child. (135)

The Tunnel Problem presents the unique situation where the autonomous vehicle must decide whether to kill the operator or a third party. If the autonomous vehicle chooses to kill its owner, it would mean that a company has programmed a vehicle to kill its customers--at least in certain situations. From an economic standpoint, car manufacturers probably could not make the decision to kill their consumers to save one person--or any number of people, for that matter. If a consumer knew that, for example, ACME Car Manufacturing Company made the decision to program its vehicle to sacrifice the occupant in order to save any one person, and XYZ Car Company was willing to save the occupants over third parties, then it seems likely that people would purchase their autonomous vehicles from XYZ. (136) In addition, XYZ would probably run advertisements pointing out that distinction: "ACME will kill your family" or "At XYZ, we care about you." Thus, it seems likely that the market will force car manufacturers to be biased in favor of their consumers. (137)

The Tunnel Problem also presents a recurring theme in the analysis of crash-optimization algorithms: what impact should "fault" on the part of a third party have on the autonomous vehicle's decision? After all, the reason the autonomous vehicle has to make this "who to kill" decision is because the child ran into the street. For those uncomfortable with the thought of an autonomous vehicle potentially killing a child, the Tunnel Problem could be modified to replace the child with a psychotic adult who believes that all autonomous vehicle owners should die. In this modified version, the fact that someone is trying to get the vehicle to destroy itself and kill its occupants could change people's opinion about which decision the autonomous vehicle should make. In any event, there is still the technical question of whether autonomous vehicles could be programmed to take fault into account--and if they could, whether society would find this desirable.

E. The Bridge Problem

An autonomous vehicle with only one occupant is traveling along a narrow, two-lane bridge. A school bus full of children is travelling in the opposite direction. The driver of the school bus is drowsy. As the school bus and the autonomous vehicle approach each other, the driver of the school bus starts to doze off, causing the school bus to drift into the autonomous vehicle's lane. The autonomous vehicle has two options: (1) crash into the school bus; or (2) drive off the bridge, killing its occupant. (138)

The Bridge Problem raises the issue of self-sacrifice more forcefully than the Tunnel Problem. An autonomous vehicle that cannot commit self-sacrifice may end up killing all of the children and the bus driver, while also damaging itself, and harming or potentially killing its occupant. (139) Nothing good will result from a head-on collision with a full school bus on a narrow, two-lane bridge. Thus, self-sacrifice by the autonomous vehicle will likely result in less harm. Although the autonomous vehicle will be destroyed and its occupant killed, it is likely that no one else will be harmed and no other damage will occur. Therefore, an autonomous vehicle programmed to minimize harm would probably commit self-sacrifice. (140)

In this example, the autonomous vehicle is again "innocent" in bringing about the harm. The school bus driver is at fault for whatever harm occurs. But this type of "fault" is different than the type of "fault" in the Tunnel Problem: none of the children are at fault for the bus driver's drowsiness, and they are also innocent. Thus, fault is a complicated concept, and may not necessarily be a relevant factor.

What should also be noted about this Problem is that the preferred decision is also fact-specific. An autonomous vehicle should not be programmed to avoid accidents with all school buses: if the school bus's only occupant is the bus driver, it is likely that self-sacrifice would not be the best decision. (141) If the autonomous vehicle drives off the bridge, one death is guaranteed, but if it collides with the school bus, perhaps only serious injury will result. (142)

F. The Trolley Problem

An operator is driving her autonomous car in manual mode and is in control of the vehicle. Either intentionally or not--she could be homicidal or simply inattentive--she is about to run over and kill five pedestrians. Her car's crash-avoidance system detects the possible accident and activates, forcibly taking control of the car. To avoid this disaster, the car swerves in the only direction it can--say, to the right. But on the car's right is a single pedestrian whom the car strikes and kills. (143)

This hypothetical is named after the famous philosophical thought-experiment created by Philippa Foot. (144) In the traditional formulation of this Problem, the trolley conductor must decide whether to stay on the same track and kill five people or steer her runaway train onto another track where one person will be killed. (145) Judith Jarvis Thomson created the bystander variation, in which, again, five persons will be killed if the trolley stays on its track and one person will be killed if it switches tracks. (146) In this variation, however, the trolley driver has passed out and his brakes have failed. (147) A bystander is walking along the track near the switch that allows her to change the tracks to kill the one person. (148) The bystander variation differs from the original trolley problem in two main respects. (149) First, the trolley driver is in a special position to protect those who may be harmed by the trolley. (150) Second, if the trolley driver decides to drive the trolley into the five people, he would actively kill them by running into them with his vehicle; on the other hand, if the bystander decides not to flip the switch, the five people would die due to the bystander's inaction. (151) The Foot Trolley Problem is more reminiscent of a situation where the brakes fail on a traditional vehicle and the driver must determine whether to turn the steering wheel. The bystander variation is more analogous to the choice confronting the autonomous vehicle manufacturer when deciding how to program its vehicles.

What makes the Trolley Problem potentially troublesome is not the fact that the autonomous technology took control of the vehicle to save the human driver from causing an accident or killing people. Society will want the autonomous technology to intervene when it can safely avert an accident, and automobiles are already equipped with technology to protect people when they make mistakes or fall asleep while driving. (152) For example, Volvo has implemented new safety systems to address the four most common causes of accidents: distraction, drowsiness, drunkenness, and driver error. (153) The accident prevention system includes sensors to alert the driver if the car crosses a lane line without a blinker; furthermore, if the driver's behavioral patterns indicate drowsiness, the technology suggests that the driver take a break. (154) If those systems fail, the car tightens seat belts and will attempt to stop the car prior to a collision. (155) Volvo hopes that its safety features will eventually be so robust that no one will be seriously injured or killed in one of its vehicles. (156) Therefore, it is clear that technology's ability to prevent an accident should be fostered by society.

However, what is potentially troublesome with the Trolley Problem is that the autonomous technology killed someone by preventing the death of five persons. Decisions of life or death are inherently moral ones, and for some people, the thought of a "robot car" making those decisions seems repulsive. (157) They would suggest that deeply ethical decisions should be made by the operator, not the programmer. (158) The ethical basis of the Trolley Problem will be discussed in Part IV, and Part VI will discuss who should make these decisions.

The Trolley Problem also introduces an important legal distinction between action and inaction. As discussed more fully in Part V, the law does not typically require people to act. (159) Thus, in the bystander problem, the bystander who walks by the switch without flipping it is not legally responsible for the deaths of the five people who are killed. Likewise, if the autonomous technology does not take control of the vehicle to save the five pedestrians, the manufacturer is not legally responsible for their deaths. (160) Instead, the human driver will be criminally and civilly responsible. (161) On the other hand, if the autonomous technology takes control of the vehicle to save five lives and ends up killing one person, the manufacturer is civilly and perhaps even criminally responsible for the death of the person killed. (162) Therefore, the law creates an enormous incentive for the automaker to program its autonomous technology not to intervene in the Trolley Problem--or for that matter, in any case where an accident would result by avoiding a greater harm caused by the human driver.

IV. MORAL PHILOSOPHY

Machine ethics, robot ethics, and machine morality are all terms used to describe a nascent field of ethics concerned with "the practical challenge of building (ro)bots which explicitly engage in making moral decisions." (163) Machine ethics distinguishes between top-down and bottom-up approaches, (164) but many experts believe ethical robots will require a combination of both approaches. (165) A top-down approach implements normative ethical theories into robotics and requires robots to follow those rules. (166) Examples of top-down rules include Asimov's laws, utilitarianism, Kantianism, egoism, virtue ethics, and the Ten Commandments. (167) Conversely, "[i]n bottom-up approaches to machine morality, the emphasis is placed on creating an environment where an agent explores courses of action and learns and is rewarded for behavior that is morally praiseworthy." (168) Child development is an example of a bottom-up approach to ethics. (169) Thus, the robot discovers and constructs its ethical principles. (170)

This Part utilizes a top-down approach and examines the feasibility of programming a crash-optimization algorithm to follow normative ethics, which describes how people ought to act, not how they actually act. (171) Normative ethical theories have been debated for millennia, and mankind has yet to come to a consensus on which theory is "right." (172) Plato and Aristotle believed that ethics depended on the character of the ethical actor--commonly referred to as "virtue ethics." (173) A virtue ethicist tries to produce an "ideal person" by determining the characteristics that make a person virtuous; once those characteristics are outlined, the moral agent "aspire[s] to be an ideal person" by acquiring those characteristics. (174) As such, virtue ethics combines a top-down approach--the characteristics--with a bottom-up approach--the acquisition of those characteristics--to robot ethics. (175) This combination has led some roboethicists to believe that virtue ethics is a promising ethical system for robots. (176) However, character traits, such as courage, patience, optimism, and industry, provide little insight into how an autonomous vehicle's crash-optimization algorithm should be programmed.

In light of the shortcomings of programming virtue ethics into an autonomous vehicle, a programmer would need to consider other ethical theories. One such theory is the "Divine Command Theory." Under this theory, God determines what actions are morally right or wrong. (177) For example, in the Judeo-Christian tradition, God established the rule "Thou shalt not kill." (178) Thus, an autonomous vehicle would be programmed not to kill. However, this seemingly bright-line rule is not as clear as it first seems. First, the prohibition against killing another person does not necessarily apply to all killings. (179) Second, this rule provides no method to resolve a situation when someone must be killed, such as in the Tunnel Problem. (180) Given these complications, the programming of a religious autonomous vehicle would require consultation with religious experts; this raises the additional question of which religious expert to consult. In the Christian faith alone, countless denominations exist. (181) Because of the multitude of questions that must be answered before programming a religious autonomous vehicle, the Divine Command Theory seems impractical to use for crash-optimization algorithms. (182)

A third ethical theory that could be programmed into the crash-optimization algorithm is "ethical egoism." Under this theory, morality requires each person to always promote her self-interest; thus, each person simply asks herself what she feels like doing, and that decision is moral. (183) To program an ethical egoist autonomous vehicle, car manufacturers would need to let every individual car owner make her own moral decisions so that she could decide what is in her best interest. As discussed in Part VI, car manufacturers or the government would foreclose certain ethical criteria from car owners: no one will be able to program a vehicle to target someone because of her race, gender, religion, or some other arbitrary ground. (184) In addition to some choices being foreclosed, this theory is impractical; it would be difficult, if not impossible, for a car manufacturer to program each vehicle to follow that car owner's self-interest. Because of these shortcomings, ethical egoism is not a realistic choice for programming autonomous vehicles.

Two remaining broad ethical theories could be programmed into autonomous vehicles: utilitarianism (both act and rule utilitarianism) and Deontology (Kantianism). These two theories are generally considered "rivals" and are useful to show the intricate ethical issues facing the programming of an ethical crash-optimization algorithm. (185) The remaining discussion in this Part explores those theories, as well as their interplay with autonomous vehicles and the ethical dilemmas outlined in Part III. (186)

A. Utilitarianism

Utilitarianism is a consequentialist or teleological moral theory, (187) and it has two main features: a consequentialist principle and a utility principle. (188) The consequentialist principle states that the rightness of an action is based on the consequences of that action; as such, utilitarianism is concerned with the ends of an action, not the means. (189) The utility principle focuses on the type of state--for example, pleasure or happiness--that is used to measure the consequences. (190)

There are two principal utilitarian theories: (1) act utilitarianism; and (2) rule utilitarianism. (191) Under act utilitarianism, "[a]n act is right if and only if it produces the greatest happiness of the greatest number." (192) Conversely, under rule utilitarianism, "[a]n act is right if and only if it is required by a rule that is itself a member of a set of rules whose acceptance would lead to greater utility for society than any available alternative." (193) Therefore, act and rule utilitarianism are similar in that they focus on maximizing utility, but they differ because the former focuses solely on maximizing utility of each individual act while the latter is concerned with maximizing utility of repeatedly performed acts. (194)

Because act utilitarianism focuses on each individual act, every decision involves a moral "calculation" of the utility gained or lost by such an action. (195) Since autonomous vehicles will be operated by a computer, the use of calculations is appealing--at least at first. (196) A computer can aggregate utility more quickly and accurately than a human. (197) Unlike the human who lacks the time and the ability to aggregate happiness on every decision she makes while driving, (198) an autonomous vehicle could constantly run calculations to ensure that each of its decisions maximizes utility. To address the moral dilemmas from Part III, the autonomous vehicle would need to be able to correctly identify that it has encountered a situation in which its crash-optimization algorithm is necessary. (199) Once it correctly identifies the situation, the car would then generate the possible alternative choices available to it. (200) The vehicle would then determine the results of each choice. (201) And finally, the autonomous vehicle would aggregate the utility of each result to determine which choice maximizes utility. (202)
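
The steps just described can be sketched as a simple decision loop. This is a hedged illustration only: the option list, the outcome model, and the utility function are hypothetical placeholders rather than an actual implementation.

    # Illustrative act-utilitarian decision loop for a crash-optimization
    # algorithm, following the steps described above.
    def act_utilitarian_choice(options, estimate_outcomes, utility):
        """options: candidate maneuvers generated once the vehicle recognizes
        an unavoidable-accident situation.
        estimate_outcomes: maps a maneuver to its predicted consequences.
        utility: aggregates the pleasure/pain of those consequences into a
        single number, higher being better."""
        best_option, best_utility = None, float("-inf")
        for option in options:
            outcomes = estimate_outcomes(option)   # predict the results
            total = utility(outcomes)              # aggregate their utility
            if total > best_utility:
                best_option, best_utility = option, total
        return best_option  # the act that maximizes aggregate utility

Each of the moral dilemmas discussed below can then be read as a question about what the hypothetical estimate_outcomes and utility functions would have to contain.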

Utilitarianism can be applied to the moral dilemmas from Part III. In the Shopping Cart Problem, the autonomous vehicle would calculate the expected amount of pain that would result from each of its options: colliding into the grocery store, the shopping cart, or the baby carriage. A collision with the grocery store would likely cause property damage to the building and the car itself, as well as harm to shoppers and the vehicle's occupant. If the autonomous vehicle hit the shopping cart, the car would likely destroy the groceries and the shopping cart, as well as cause minimal damage to the autonomous vehicle and minimal harm to the occupant. The collision with the baby carriage would likely result in the death of the baby, destruction of the baby carriage, and de minimis damage to the autonomous vehicle and little, if any, harm to the vehicle's occupant. As is evident, the utilitarian autonomous vehicle would likely strike the shopping cart because it would result in the least amount of harm.

In the Motorcycle Problem, the autonomous vehicle would calculate the amount of harm that would result from hitting the motorcyclist with a helmet in comparison to striking the motorcyclist without a helmet. As indicated in the introduction of the Motorcycle Problem, helmet use greatly reduces the risk of death and brain injury that results from an accident. (203) Therefore, less harm would likely result if the autonomous vehicle struck the motorcyclist who wore a helmet. As such, the act-utilitarian autonomous vehicle would hit the helmeted motorcyclist. (204) Whether that decision is fair does not matter to the cold calculation of act utilitarianism. A rule utilitarian, however, would argue that helmet use actually increases happiness; therefore, the rule should be one that promotes helmet use. A rule that autonomous vehicles should always strike a helmeted motorcyclist over an unhelmeted motorcyclist would lead to less happiness: it would incentivize motorcyclists not to wear helmets because an autonomous vehicle would be programmed to avoid striking them. Conversely, a rule that protects helmet wearers may lead people to wear helmets. Thus, the rule utilitarian would argue that, although in some instances targeting the helmet wearer would maximize happiness, a rule protecting motorcyclists who wore helmets would actually promote the greatest happiness.

In the Car Problem, an act-utilitarian autonomous vehicle would calculate the amount of damage and harm that would result from hitting each vehicle. The act-utilitarian car would take into account the safety ratings, size, and value of each vehicle, as well as other factors such as the number of people in each vehicle, and the seat belt use of the occupants. In addition, the car could consider the age and gender of the occupants, which could be indicative of the ability of an occupant to withstand the accident. (205) The autonomous vehicle would weigh each factor and determine which vehicle to hit. Similar to the rule utilitarian's objections in the Motorcycle Problem, it is likely that a rule utilitarian would argue that the autonomous vehicle should not target the vehicle with the highest safety rating or the vehicle with its occupants wearing seat belts--those decisions would discourage happiness-promoting behavior and result in less happiness.

In the Tunnel Problem, the autonomous vehicle would need to determine whom it should kill: the child or the occupant. (206) The only material difference between the two is that killing the occupant will result in the total destruction of the autonomous vehicle, while killing the child will result in some, but not total, destruction of the vehicle. Therefore, it seems likely that less pain would occur from hitting the child. A rule utilitarian would likely agree with the result, but for a different reason: a rule that killed the operator would lead to fewer people purchasing autonomous vehicles. Very few people will likely want an autonomous vehicle that is programmed to sacrifice them or their families--especially if they would be sacrificed to save one person. Because of the projected benefits of autonomous vehicles discussed in Part II, the rule utilitarian would likely advocate for a rule that leads people to purchase these cars. In this case, that rule would be to program the autonomous vehicle with a bias in favor of the operator.

In the Bridge Problem, if the autonomous vehicle is programmed such that it does not avoid the school bus, the decision will likely result in the death of all the children and the driver of the school bus; damage to or destruction of the school bus; serious damage to the autonomous vehicle; and harm to the operator. (207) Instead of staying the course, the autonomous vehicle could drive off the bridge, killing the occupant and destroying the autonomous vehicle; in such a situation, no one on the school bus would be harmed and the bus would not be damaged. Because hitting the bus would cause more harm, the act-utilitarian autonomous vehicle would commit self-sacrifice and drive off the bridge. A rule utilitarian would likely agree with the act-utilitarian. An ultra-selfish autonomous vehicle that would not sacrifice itself when the risk of death and harm to its occupant was so high would likely create much unhappiness. And although many people may not want to purchase autonomous vehicles that are willing to sacrifice themselves when the choice was one person or the occupant, it is unlikely that people would have the same reservations as the number of persons killed increases--indeed, society may not even want people to program a vehicle that is that selfish.

In the Trolley Problem, the programmer has the choice of whether the autonomous vehicle should prevent five deaths when doing so will result in one death. (208) This is a simple choice for a utilitarian autonomous vehicle: it would take control of the vehicle and kill the one person--five lives are better than one life.

Although utilitarianism can be applied to the moral dilemmas from Part III, a utilitarian autonomous vehicle creates unique issues that warrant further discussion. Such vehicles suffer from a variation of the frame problem. (209) The frame problem is "[t]he problem of determining, from all the information that can in principle bear, what is relevant to the cognitive task at hand." (210) And this problem invites a computational frame problem, which is how a "cognitive system tractably delimits (i.e., frames) what gets considered in a given cognitive task." (211) The frame problem of crash-optimization algorithms is what factors an autonomous vehicle should consider when determining what decision to make and how to program those factors so that the car applies them when needed. (212) For example, some facts that the vehicle needs to know to operate are irrelevant for determining which choice to make in these situations: the autonomous vehicle does not need to know that to travel "home" it would need to take X, Y, and Z roads; that it could drive from New York City, New York to Los Angeles, California in approximately forty hours; or that a red light means stop. Although all these facts are correct, they are irrelevant information in any of the moral dilemmas. Humans are capable of framing relevant information when faced with a moral dilemma, or more broadly any decision making that a human undertakes. (213) A computer may not have the same framing ability, and without some ability to frame the proper criteria, the autonomous vehicle may be unable to determine what course of action to take in time to actually implement its decision. (214)

In addition to suffering from a variation of the frame problem, an act-utilitarian autonomous vehicle may intentionally violate traffic laws when doing so will lead to greater happiness--a willingness to break the law that would likely be a problem for these vehicles. (215) For example, if an act-utilitarian autonomous vehicle is traveling on a long, flat road in the middle of nowhere and no vehicles are around, why would the autonomous vehicle drive the speed limit of forty-five miles per hour? Or if an autonomous vehicle is stuck behind a slow driver of a traditional vehicle in a "no passing zone," and there is no oncoming traffic and the vehicle determines that it can safely pass the vehicle, why would it follow the law? If breaking the law led to more happiness in those two examples, the act-utilitarian autonomous vehicle would break the law. However, it is unlikely that the government would allow a car to violate the law: states that have passed autonomous vehicle laws require the vehicles to follow traffic laws. (216) A rule utilitarian would likely agree with the government's position because society would not have enacted traffic laws that do not increase happiness. If society did enact such a law, the proper course of action is repealing or amending that law. A rule utilitarian would also assert that if general law breaking became common practice and went unpunished, such rule-breaking would invite contempt for the law and lead others to engage in law-breaking, too. Therefore, a rule-utilitarian autonomous vehicle would likely follow traffic laws.

A third problem is that, depending on the value assigned to a human life, an act-utilitarian autonomous vehicle may elect to kill a person to protect property. The Environmental Protection Agency, the Food and Drug Administration, and the Transportation Department value one human life at $9.1, $7.9, and $6.1 million, respectively. (217) Assume, for example, that an autonomous vehicle encounters an (unrealistic) situation in which it must decide between crashing into and killing a human or crashing into and destroying a Boeing 787 Dreamliner ("Boeing"). The Boeing is priced at $225 million, but sells on average for $116 million. (218) Therefore, if the choice was destruction of the Boeing or the death of a person, the act-utilitarian autonomous vehicle would kill the person. Furthermore, the autonomous vehicle would kill the person even if the risk of destroying the Boeing was only ten percent. It seems wrong to value property over human life--or even to place a value on human life itself. A Boeing can be rebuilt--a human life cannot be reborn. A major flaw of an act-utilitarian autonomous vehicle is that it may not be able to take into account that simple fact.
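
The arithmetic behind that ten percent claim can be made explicit. The following sketch simply treats utility loss as dollar cost, using the EPA figure and the average 787 selling price cited above; it illustrates the calculation, not how any real system prices harm:

    # Simplified expected-cost comparison using the figures cited above.
    value_of_life = 9_100_000       # EPA valuation of one human life
    boeing_price = 116_000_000      # average selling price of a 787

    # Certain destruction of the plane costs more "utility" than a certain
    # death, so the act-utilitarian vehicle spares the plane.
    print(boeing_price > value_of_life)          # True

    # Even a ten percent chance of destroying the plane outweighs the life:
    expected_loss_plane = 0.10 * boeing_price    # $11,600,000
    print(expected_loss_plane > value_of_life)   # True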

B. Deontology (Kantianism)

Those who consider a life priceless may find solace in a deontological theory of ethics, under which the rightness of an act is based on the means or duties involved. (219) There are many types of deontological ethical theories, including W.D. Ross's prima facie duties, Asimov's laws of robotics, the Ten Commandments, and Kantianism. (220) This Part focuses on the theory of ethics created by the most famous deontologist, Immanuel Kant.

For Kant, the motives underlying an action matter. (221) As such, the consequences of an action do not determine its rightness: some actions are morally impermissible regardless of their consequences. Thus, some morally right actions have bad consequences, and some morally wrong actions have good consequences. For example, Kant reasoned that it is always morally wrong to tell a lie. (222) This principle can be illustrated by an example in which a good friend comes to your house after a fight with his wife. (223) An hour later, his wife shows up at your door; she is angry, has a gun, and is looking for her husband. Kant would insist that you tell the truth about your friend being at your house, regardless of what that revelation might entail. (224)

Kant used absolute rules--better known as "categorical imperatives"--to formulate his theory of ethics. (225) Kant deduced three formulations of the categorical imperative. The first categorical imperative ("CI(1)") states: "I ought never to act except in such a way that I can also will that my maxim should become a universal law." (226) CI(1) is a procedure by which a rational agent determines whether an act is morally permissible: an action is morally permissible only if the agent could consistently will that everyone engage in that action. (227) Returning to the lying example, Kant stated that no one could will everyone to lie because such a rule would be self-defeating: once lying became commonplace, no one would believe one another, and people would stop paying attention to what anyone said. (228) Accordingly, lying could not be willed into a universal law, and, thus, lying is always immoral. (229)

Kant's second formulation of the categorical imperative ("CI(2)") states: "Act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end." (230) CI(2) prohibits people only from treating others merely as a means to their ends; Kant had no "general objection to using people, or to using them as a means." (231) In many non-objectionable situations, people use others as means. (232) Rather, CI(2) is based on Kant's belief that each person has inherent value and must be treated with dignity. (233) Thus, under CI(2), people must be treated differently than inanimate objects. (234) Revisiting the Boeing example from the utilitarian subsection, it is clear that a Kantian autonomous vehicle would be programmed to value human life over property. As such, a Kantian autonomous vehicle would always destroy property rather than kill or even harm a person, regardless of the property's value.
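
Programmatically, CI(2) would operate as a constraint rather than a weighing: options that harm a person are removed from consideration whenever a property-only option exists, no matter the dollar amounts involved. The following Python sketch is illustrative only; the class and field names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class CrashOption:
        harms_person: bool
        property_damage: float   # in dollars

    def kantian_choice(options: list[CrashOption]) -> CrashOption:
        # CI(2): never trade a person off against property.
        respects_persons = [o for o in options if not o.harms_person]
        if respects_persons:
            # Among permissible options, property damage may serve as a tiebreaker.
            return min(respects_persons, key=lambda o: o.property_damage)
        # If every option harms a person, CI(2) alone yields no answer --
        # the difficulty the Tunnel Problem discussion below returns to.
        raise ValueError("CI(2) is underdetermined: every option harms a person")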

Kant's final formulation of the categorical imperative ("CI(3)") states: "the Idea of the will of every rational being as a will which makes universal law." (235) CI(3) represents Kant's belief that an act must be done out of a sense of duty--and not out of inclination--for the act to have moral worth. (236) Acts done out of inclination are morally worthless. (237)

Autonomous vehicles will lack the capability to will their decisions as universal laws. (238) The vehicle is also incapable of acting autonomously in accordance with CI(3). Therefore, an autonomous vehicle will not be a "moral agent" according to Kantian ethics. (239) However, the programmer of the crash-optimization algorithm can act autonomously and will her programming decisions as universal laws, making her a rational agent. Thus, an autonomous vehicle could be programmed according to Kantian ethics.

In the Shopping Cart Problem from Part III, a Kantian autonomous vehicle would not strike the baby carriage. (240) Unlike the shopping cart or the grocery store, the baby has inherent value that must be respected. Striking the baby carriage and killing the baby to cause less damage to the autonomous vehicle would be treating the baby as merely a means to the ends of the autonomous vehicle and its occupant. Therefore, the Kantian autonomous vehicle would not hit the baby carriage.

In the Trolley Problem, the Kantian autonomous vehicle must decide whether to kill one person or let its human operator kill five persons. Initially, one could assert, perhaps plausibly, that killing is killing. (241) Under CI(1), killing could not be willed as a universal law: if killing were universalized, everyone would be dead and no one would remain to kill anyone else. (242) Therefore, killing violates CI(1). In addition to violating CI(1), the decision to take control of the vehicle to kill the one person may violate CI(2). The issue is whether killing the one person to save the five persons amounts to treating the one person merely as a means to the ends of the five persons.

Some philosophers have used the "doctrine of the double effect" to distinguish between foreseeable killings and intentional killings. (243) "The doctrine of the double effect is based on a distinction between what a man foresees as a result of his voluntary action and what, in the strict sense, he intends." (244) Under this doctrine, the incidental killing of one person to save five persons is morally defensible but the affirmative killing of one person to save five persons is not. (245) An example of the latter situation (Transplant Problem) is this:
   David is a great transplant surgeon. Five of his
   patients need new parts--one needs a heart, the others
   need, respectively, liver, stomach, spleen, and spinal
   cord--but all are of the same, relatively rare, blood-type.
   By chance, David learns of a healthy specimen
   with that very blood-type. David can take the healthy
   specimen's parts, killing him, and install them in his
   patients, saving them. Or he can refrain from taking
   the healthy specimen's parts, letting his patients
   die. (246)


From a purely act-utilitarian standpoint, there is no difference between the Trolley Problem and the Transplant Problem. In both cases, five persons are saved at the expense of one person--five persons have more utility than one. However, it is clear in the Transplant Problem that David is using the healthy specimen merely as a means to save the other five persons--he is cutting that person up so that the others can live. (247) Therefore, the Transplant Problem clearly violates CI(2). Yet the two problems seem different, even though each asks whether to sacrifice one person for the sake of five. (248) In the Trolley Problem, the autonomous technology does not intend to kill the one person; instead, it intends to save the five persons, and the death of the one person is an incidental result of that intention. Nonetheless, as this discussion has made evident, resolution of the Trolley Problem under Kantian ethics is murkier than it was under utilitarianism.

This raises another concern with the application of Kantian ethics to autonomous vehicles: Kantianism cannot address every situation confronting an autonomous vehicle. (249) In the Tunnel Problem, the autonomous vehicle must decide whether to kill a child or the operator of the vehicle. As indicated earlier, CI(1) prohibits killing, yet in either situation someone dies; thus, Kantianism fails to provide a rule of action. Likewise, in the Motorcycle Problem and the Car Problem, Kantian ethics does not provide an answer as to which motorcycle or car to hit. In addition to that limitation, another problem for a deontological approach to autonomous vehicles is that multi-rule systems suffer from conflicts between rules. (250) A Kantian autonomous vehicle would also suffer from a frame problem: an algorithm writer would need to ensure that each rule is activated when the autonomous vehicle encounters the situation necessitating its application. (251) Therefore, although Kantianism can provide some moral rules that could be applied to an autonomous vehicle, it could not be the only moral theory programmed into the vehicle.

V. LEGAL CONCERNS

Accidents caused by autonomous vehicles will likely implicate tort and criminal law. (252) Tort law is concerned with vindicating the victim and also serves "to confirm and reinforce public standards of behavior." (253) Criminal law focuses on vindicating the "state's interests in deterring crime and imposing justice." (254) Or, perhaps more simply stated, "tort law prices, while criminal law prohibits." (255)

Because tort and criminal law draw a distinction between action and inaction in the law, this Part starts by addressing that distinction. This Part then examines the application of tort and criminal law to decisions made by a crash-optimization algorithm. As to tort law, this Part analyzes these decisions in relation to intentional torts, negligence, products liability, and punitive damages. As to criminal law, this Part looks at criminal law more generally, focusing on concepts of intent and necessity.

A. The Non-Duty to Act

At common law, a person did not have "a duty to aid a stranger in distress even though the danger may be great and the inconvenience to the potential rescuer only slight." (256) In criminal law, most crimes require affirmative action and not merely a failure to act, (257) "even when that aid can be rendered without danger or inconvenience." (258) In tort law, "[t]he fact that the actor realizes or should realize that action on his part is necessary for another's aid or protection does not of itself impose upon him a duty to take such action." (259) Therefore, "[a]bsent special relationships or particular circumstances or actions, a defendant is not liable in tort for a pure failure to act for the plaintiff's benefit." (260)

A cogent example of this non-duty to act is the tragic death of Catherine "Kitty" Genovese. A man followed Kitty home and, over the course of more than thirty-five minutes, stabbed her to death in the streets of her middle-class neighborhood. (261) Many neighbors peered through their curtains and saw the attack, but they did not call the police or intervene. (262) From a moral standpoint, the decision not to help Kitty was wrong--at the very least, her neighbors could have called the police. (263) A moral duty to act, however, is not enough to impute criminal or tort liability. (264) The bystanders "were legally unreproachable." (265)

The Trolley Problem from Part III presents an interesting application of these rules. (266) In the Trolley Problem, the vehicle was not in autonomous mode, meaning that the human driver would have killed the five people. Although many scholars have argued that autonomous vehicles should be civilly liable for accidents caused in autonomous mode, (267) no one has argued that car manufacturers should be liable for accidents caused while a person is driving the car. Therefore, if the autonomous technology lets the five people die, the car manufacturer is not liable for the accident. (268) However, when the autonomous technology takes control of the vehicle to save the five people, it causes the death of the one person. (269) Thus, by doing what many would consider right, (270) the car manufacturer would subject itself to legal liability.

A more troublesome application of this doctrine would arise if the autonomous technology could take control of the vehicle to save five persons but the resulting accident would not cause a death; assume instead that the vehicle would crash into a parked car. In this variation, the car manufacturer would likely not be liable for the deaths of the five people if the technology did nothing, but it would be liable for the damage to the parked car if the autonomous technology took control of the vehicle. (271) Therefore, a car manufacturer may have little incentive to deploy accident-avoidance mechanisms and crash-optimization algorithms that would subject it to liability.