
Artificial Intelligence: Discerning a Christian Response.

The movie Wall-E is an entertaining tale of a dystopian future of robots, automation, and humanity. A polluted earth is left abandoned except for robots like the charming title character Wall-E, who are left to clean up the mess. Humans have fled the planet, coddled aboard a massive ark-like spaceship where automated systems take care of their every need. It is striking that the most human-like characters in the movie are the two main robot characters while the human characters are portrayed as obese, feeble, and passive, shuttled about in reclining chairs, and consuming beverages while perpetually entertained by personal screens. At the climax of the movie, the ship's captain valiantly struggles to stand and, unaccustomed to walking, waddles over to the main control panel to wrestle control back from the automated ship. The tension in this climactic moment is driven by one question: will humanity take back control from technology?

For decades, optimistic predictions about the capabilities of Artificial Intelligence (AI) have consistently fallen short of expectations. In 1958 Frank Rosenblatt pioneered the modeling of neurons using simple networks called "perceptrons" which could be trained to classify data. Later, the pioneering AI researchers Marvin Minsky and Seymour Papert published an influential book titled Perceptrons which identified challenges with single-layer perceptrons and expressed skepticism about multilayer perceptrons. They wrote,
Perceptrons have been widely publicized as "pattern recognition" or
"learning machines" and as such have been discussed in a large number
of books, journal articles, and voluminous "reports." Most of this
writing... is without scientific value. (1)


As a result, work in this area diminished greatly through the 1970s, during an era sometimes referred to as an "AI winter." However, interest in multilayer perceptrons was reignited in the mid-1980s after various breakthrough papers were published demonstrating how they could be made effective by employing specialized training algorithms. (2) These techniques have since been further refined, and, combined with advances in computing power, have led to so-called "deep-learning" methods. (3)

Deep learning uses many layers of perceptron-like units which can be trained using techniques such as gradient descent with backpropagation. Deep learning is an approach to machine learning, a field which involves training computers to "learn" patterns without being explicitly programmed for those patterns. The training process will typically employ a labeled set of example training data in a process called "supervised learning." Alternatively, training can be performed using a set of unlabeled input data which is then processed to uncover patterns and structures, a process referred to as "unsupervised learning."
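To make the flavor of supervised learning concrete, the following minimal sketch trains a single Rosenblatt-style perceptron on a small labeled data set (the logical AND truth table). The data, learning rate, and epoch count here are illustrative choices for demonstration, not drawn from the sources cited in this essay:

```python
# A minimal sketch of the perceptron learning rule (supervised learning):
# a single unit adjusts its weights from labeled examples until its
# outputs match the labels. Illustrative only; real deep-learning systems
# stack many such units and train them with backpropagation.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # weights, one per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for x, target in samples:
            # step activation: "fire" if the weighted sum exceeds zero
            out = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            err = target - out
            # nudge weights and bias in the direction that reduces error
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Labeled training data: the AND truth table
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predictions = [1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
               for x, _ in data]
print(predictions)  # [0, 0, 0, 1] -- the perceptron has learned AND
```

Because AND is linearly separable, a single perceptron can learn it; Minsky and Papert's critique centered on functions (such as XOR) that no single-layer perceptron can represent, which is what multilayer networks later addressed.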

AI techniques employing deep learning have recently made remarkable strides in tackling more difficult problems. A research team at Google demonstrated these techniques by developing a system that was trained to play the game Go by playing games against itself, eventually surpassing even the best human players. (4) Google has recently released its machine learning library, TensorFlow, under an open source license, spurring applications in many new areas. (5) These tools are not just solving puzzles in the laboratory. They are now being directed toward a plethora of difficult practical problems that have traditionally been beyond the capabilities of prior AI systems. For instance, these systems are showing great promise in diagnosing certain diseases and analyzing medical images, even outperforming human doctors in some tasks. (6) AI is also making advances in diverse areas such as legal work, image recognition, and language translation. The rise of autonomous vehicles is another emerging area in which deep learning has made remarkable progress.

As a book review editor for PSCF on topics relating to technology, I have been astounded at the sheer number of books that have been released in recent years about issues surrounding AI and robotics (several of which have been reviewed in these pages). These books include titles such as Technology vs. Humanity: The Coming Clash between Man and Machine; In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence; and The Glass Cage: Automation and Us. Some of these books take an optimistic stance, some are more circumspect, while others paint a darker picture.

Some have suggested that the advance of technology and AI will eventually solve all our problems. The term technicism has been coined to refer to this faith in technology as the savior or rescuer of the human condition. (7) A recent book titled Infinite Progress includes the subtitle: "How the Internet and Technology Will End Ignorance, Disease, Poverty, Hunger, and War." (8) This is essentially a form of idolatry, replacing trust in the Creator with trust in technology. In fact, this trust in technology becomes explicit in the case of the "Way of the Future," a religious group founded by Anthony Levandowski, a former Google and Uber engineer, which seeks to "develop and promote the realization of a Godhead based on Artificial Intelligence" and, "through understanding and worship of the Godhead, [to] contribute to the betterment of society." (9) The transhumanist Zoltan Istvan suggests that this new AI deity "will actually exist and hopefully will do things for us." (10) These sentiments are explicit examples of an observation made by the writer David Noble that "the technological enterprise has been and remains suffused with religious belief." (11)

Everyone has a worldview which informs a set of beliefs that shape our conception of reality. Nicholas Wolterstorff suggests it is these "control beliefs" that enable us to commit to a particular theory. (12) These beliefs are also active in our technical work, including the theories related to research in AI, whether explicitly stated or not.

Some engineers and computer scientists believe that technology will even solve the problem of death. According to David Pearce, co-founder of an organization called Humanity+:
If we want to live in paradise, we will have to engineer it ourselves.
If we want eternal life, then we'll need to rewrite our bug-ridden
genetic code and become god-like... only hi-tech solutions can ever
eradicate suffering from the living world. (13)


Ray Kurzweil, an accomplished computer scientist and author of The Age of Spiritual Machines, has suggested that within the present century we will be able to upload our brains into computers and live forever, free from the limitations of our mortal bodies. This idea has been dubbed the "rapture of the geeks," and Kurzweil writes, "We don't always need real bodies. If we happen to be in a virtual environment, then a virtual body will do just fine." (14)

David F. Noble observes,
Artificial Intelligence advocates wax eloquent about the possibilities
of machine-based immortality and resurrection, and their disciples, the
architects of virtual reality and cyberspace, exult in their
expectation of God-like omnipresence and disembodied perfection. (15)


Psalm 115 states that the makers of idols will become like them, and in the case of the "rapture of the geeks," the end goal is to literally become software in a computer.

But not everyone shares an optimistic view of the future of AI, and warnings about the dark side of AI can be found in the recent headlines. Stephen Hawking warned that "the development of full artificial intelligence could spell the end of the human race," and Elon Musk has called AI "our biggest existential threat." In 2015, an open letter signed by many AI researchers, along with Musk and Hawking, urged that research priorities be set to ensure the beneficial use of AI. (16) The concerns over AI range from the short-term risks of putting people out of work to the more dystopian visions of a world in which machines turn on their human creators.

The pessimistic view of a dystopian future is frequently portrayed in science fiction. Films and series such as The Matrix, The Terminator, and Battlestar Galactica paint a picture of a dark future in which technology turns on humanity. Other movies and TV shows that have narratives based on the existential threat of AI and robotics include Ex Machina, Westworld, Blade Runner, and I, Robot. These stories portray different variations on the "Frankenstein narrative" in which technology turns on its human creators and threatens their existence. Many of these shows and movies, including the more recent sequel, Blade Runner 2049, raise profound questions about what it means to be human, exploring questions of identity, existence, free will, and how we are distinct from our machines. These cultural stories contribute to a social imaginary about the role and future of technology in our society.

While these threats may seem far-fetched, the more immediate concern is the loss of jobs due to AI, robots, and automation. In the early 2000s, I was doing my graduate studies in the area of computer vision. At the time, I recall thinking that self-driving cars were unlikely to be feasible due to the challenges of real-time vision systems in unstructured environments. However, within a decade, autonomous vehicles were successfully demonstrated. In the near term, autonomous vehicles are likely to disrupt the labor market, potentially displacing millions of jobs in driving professions.

One paper published by researchers from the University of Oxford predicts that 47 percent of U.S. jobs are at risk of being replaced by AI technologies and computerization. (17) Other sources, such as the Organization for Economic Cooperation and Development (OECD), predict that only 9 percent of jobs are at high risk of being completely displaced, while many others will change significantly due to automation. (18) The issue of job losses due to robots and automation was also the topic of a recent Christianity Today article titled "How to Find Hope in the Humanless Economy." (19)

Still, some dismiss the threats of a "jobless future," pointing back to automation in the early nineteenth century when the "Luddites," fearful of losing their jobs, smashed automated weaving machines. They point to the advance of technology throughout the twentieth century, and how employment continued to grow. But a growing number of voices are warning that the remarkable success of AI and deep learning threatens to automate many tasks, including many white-collar jobs.

Some might suggest that these technological changes are inevitable, and we must accept the mantra of the Borg on Star Trek: "resistance is futile." However, we must reject a sense of technological determinism, the notion that technology is an autonomous force beyond our control. The famous media theorist Marshall McLuhan suggested that the way to begin is to stand back and scrutinize what technology and media are doing. He likened the forces of media and technology to the swirling storm depicted in Edgar Allan Poe's "A Descent into the Maelstrom." In this story, a sailor caught in the swirling vortex of a storm saves himself by carefully observing the behavior of the winds and currents around him. Like the sailor, McLuhan suggested, we need to observe and discern the forces of a changing world, to ponder their effects and wisely chart a safe course. "Nothing is inevitable if we are willing to contemplate what is happening." (20)

In one of his talks, Neil Postman suggested six helpful questions one might ask when thinking about the impact of technology. (21) Adapting these questions to the area of AI yields the following questions:

1. What is the problem to which AI is a solution?

2. Whose problem is AI solving?

3. What problems will AI create even as it solves a problem?

4. What people or institutions will be hurt by AI?

5. What changes in language are being forced by AI?

6. What sort of people and institutions gain special economic and political power through AI?

These six questions are helpful because they force us to consider more than just technical issues, helping us uncover some of the biases embedded in a particular technology. Answering them makes it abundantly clear that AI is not just changing the economics of the labor market. The reality is that technology is not neutral: it has a bias and it changes things. (22) In his book, Technopoly, Postman argues that "embedded in every tool is an ideological bias, a predisposition to construct the world as one thing rather than another, to value one thing over another, to amplify one sense or skill or attitude more loudly than another." (23) A recent book titled Weapons of Math Destruction (previously reviewed in PSCF) makes the case that even our mathematical algorithms are not neutral. (24) As we develop AI, we must recognize that "we shape our tools and thereafter they shape us." (25)

One helpful way to contemplate what is happening is to carefully consider the philosophical issues. Many of the basic philosophical questions that arise in AI occupied the minds of philosophers long ago. In the seventeenth century, Thomas Hobbes suggested that "cognition is computation," and later Descartes described human beings as "thinking things." In the mid-twentieth century, the pioneering computer scientist, Alan Turing, thought about the notion of "thinking machines" and even proposed a test for them, now referred to as the "Turing test." (26) The questions that frequently arise in AI cover the range of philosophical questions: what is really real? (ontology), how do I know it? (epistemology), what is right and good? (ethics), and what does it mean to be human? (philosophical anthropology).

The approach we take to questions in AI is largely shaped by our philosophical presuppositions and worldview. For instance, it has been suggested that Japan's enthusiastic embrace of robotics can be traced to a culture influenced by Shintoism, a religion that accepts that all things, including inanimate objects, can possess living spirits. (27) Another worldview is materialism, the belief that the physical world is all there is. This worldview leads to physicalism, "the philosophy that the human mind is fully explainable with reference only to the biological brain and the laws of physics and chemistry." (28) A physicalist view of what it means to be human has a variety of significant implications. Matthew Dickerson has provided an insightful and comprehensive critique of a physicalist view in his book, The Mind and the Machine. In this book, he pushes physicalism to its logical conclusions and shows the troubling implications for free will, creativity, environmental care, and reason. (29)

Some materialists suggest that everything in the real world can be described in terms of computation. Stephen Wolfram, a computer scientist and mathematician, does this in a book titled A New Kind of Science. Wolfram introduces the "Principle of Computational Equivalence" which suggests that "all processes, whether they are produced by human effort or occur spontaneously in nature, can be viewed as computation." (30) Some have conjectured about the possibility of machine consciousness using neurocomputational models and high-level cognitive algorithms. (31) Others have gone even further, musing that the world is a simulation like the one portrayed in the movie The Matrix. In his article "God is the Machine," Kevin Kelly explores the idea that everything is essentially a simulation, citing those who would suggest that the universe is a computer and we are the "killer app." (32) Gnosticism, a heresy that once plagued the early church, becomes more fashionable as physical reality is reduced to information.

It has also been suggested that developments in AI will disrupt religions, including Christianity. The Atlantic recently published an article with the provocative title, "Is AI a Threat to Christianity?" (33) The article brings up a variety of challenges posed by AI by presupposing that intelligent artificial persons are, in fact, possible. Various questions are raised: Will machines have the ability to pray (and would God hear those prayers)? Would an AI have a soul? And should Christians seek to evangelize this new technology?

This leads to the question of how a Christian philosophical perspective and worldview might help inform and guide us as we navigate the world of AI. There are many epistemological issues relating to how knowledge is represented in a computer and to the techniques for machine learning. But perhaps it would be better to start with the ontological issues. In the words of theologian Craig Bartholomew,
We should start with ontology--this is our Father's world, and we are
creatures made in his image--and then move on to epistemology--as his
creatures, how do we go about knowing this world truly? (34)


I think this is helpful advice as we start to explore AI, since it is the ontological questions that will help us discern what separates humans from machines. (35) We are often captivated by what things can do, rather than asking what things are. A common tendency is to anthropomorphize our machines, thereby elevating the status of our machines and, in doing so, reducing the distinctiveness of human beings. Once we have established the ontological question of who we are and what machines are, we can start asking the questions about the best way to move forward, including questions about the appropriate use of AI.

A Christian worldview recognizes the ontological reality of creation and the value of physical reality. Christ, "the Word [who] became flesh" (John 1:14), reveals the value God places on physicality and humanity. In the new heavens and earth, we will not be disembodied spirits floating in the ether, but, in the words of the Apostles' Creed, we look forward to the "resurrection of the body and the life everlasting." (36) A Christian perspective recognizes that reality extends beyond the physical world to include a spiritual realm. This ontological starting point will reject the reductionistic notion that humans are simply complex biochemical machines, while still affirming the value of the physical world. The implications of AI have been raised in previous issues of PSCF. In 2008, Russell Bjork wrote an article in this same journal titled "Artificial Intelligence and the Soul" in which he identified three key issues: (37)

1. Is there a conflict between AI and biblical teaching about the origin of the human soul?

2. Is there a conflict between AI and biblical teaching about human worth and our being created in the image of God?

3. Does biblical teaching about personhood have any implications for our work in AI?

These are ontological questions that are just as relevant ten years after that article was written. Without a biblically informed ontological grounding, we are susceptible to all kinds of philosophical pitfalls such as physicalism, functionalism, reductionism, and gnosticism. But much more work remains to be done, exploring what 2,000 years of Christian social thought have to say about the responsible development of AI.

Once the ontological questions are addressed, we will be better equipped to wrestle with the vast array of ethical issues that arise. These include questions about appropriate applications of AI and its use in robotics. A small sample of these issues includes the following:

* When an autonomous vehicle crashes, who is responsible? This harkens back to the "trolley problem," a classic thought experiment in philosophy. (38)

* Should lethal autonomous robots be permitted in warfare? (39)

* How do we approach automation and possible job loss? (40)

* Should we support efforts to develop "artificial persons" or machines that mimic humans or animals?

* Are social robots appropriate, and if so, how ought they to be used? (41)

* Should we use robots for child and elder care? (42)

* How do we navigate the privacy, transparency, and justice issues that arise as AI is applied to big data? (43)

* How do we show care for those whose jobs are threatened by automation? (44)

These are just some of the areas in which ethical issues arise in the use of AI. We will find a responsible way forward not by asking what AI can do, but rather by starting with ontological questions and then determining what role AI ought to play. In the words of the early AI pioneer, Joseph Weizenbaum, "There are limits to what computers ought to be put to do." (45) In his book, Humans Are Underrated, Geoff Colvin suggests asking the following question: "What are the activities that we humans, driven by our deepest nature or by the realities of daily life, will simply insist be performed by other humans, regardless of what computers can do?" (46)

On the other side of the coin, can we imagine some possibilities that AI might open up which can lead to further flourishing? As a part of creation, AI can, in principle, be directed in God-honoring ways despite the possibility of sinful distortions. How can we employ AI responsibly in medicine, in research, and in environmental monitoring? In what ways can AI be harnessed to assist in Bible translation, to help in humanitarian relief, and to aid in search and rescue operations? What new assistive technologies might be possible to help people with disabilities? How might AI be directed toward helping the poor? (47) What other creational possibilities might be uncovered and applied in normative ways?

Fred Brooks, a respected computer scientist, wrote, "It is time to recognize that the original goals of AI were not merely extremely difficult, they were goals that, although glamorous and motivating, sent the discipline off in the wrong direction." (48) Our call is to help point the discipline in the right direction and to help discern a responsible road forward in obedience to God. Left on its own, AI will likely veer in the wrong direction, putting efficiency ahead of people. This approach is what Jacques Ellul called technique, the mindset that seeks "absolute efficiency in every field of human activity." (49) A related tendency is for technology and automation to concentrate power in the hands of fewer people, corporations, and nations. We should heed the warning of C. S. Lewis in The Abolition of Man in which he warns that "Man's power over Nature" can become "a power exercised by some men over other men with Nature as its instrument." (50)

In response to the many ethical issues that arise in AI, several organizations have been established to engage them. The Future of Humanity Institute at the University of Oxford is an example of one secular organization whose mission is to wrestle with some of the existential threats of machine intelligence. (51) Another group called the AI Now Institute was established "to explore how AI is affecting society at large... bridging the gap between data scientists, lawyers, sociologists, and economists studying the implementation of artificial intelligence." (52) Likewise, the MIT Media Lab and the Berkman Klein Center for Internet and Society at Harvard University are participating in a global initiative to fund and advance AI research for the public good. (53) The IEEE has also established a working group focused on ethically aligned design for autonomous systems and AI. (54) In 2016, the United Nations announced that it would establish a Centre for Artificial Intelligence and Robotics in The Hague, the Netherlands, to provide an international resource dealing with issues related to AI and robotics. (55)

As Christians who care about God's world, we must do more than wax eloquent about the issues or critique them from the sidelines. We need to answer the question, Knowing what we know, what will we do? (56) We need to actively join this conversation which has already begun, bringing insights from scripture and from Christian philosophy and theology to contribute to the common good. (57) In particular, as we wrestle with these new developments, we must remember what scripture teaches about what it means to be human, the meaning of work, and the kind of world God would have us unfold.

The third Lausanne Congress on World Evangelization took place in 2010 in Cape Town and highlighted the need for "taking the whole gospel to the whole world," including the area of technology. The Cape Town Commitment that came out of the Lausanne Congress includes a "call to action" section that specifically identifies technology (and specifically mentions emerging technologies such as AI) as having "deep implications for the Church and its mission, particularly in relation to the biblical truth of what it means to be human." It encourages us to "promote authentically Christian responses and practical action in the area of public policies, to ensure that technology is used not to manipulate, distort and destroy, but to preserve and better fulfil our humanness." (58) Among the recommendations is a call for "national or regional 'think tanks' or partnerships to engage with new technologies, and to speak to the shaping of public policy with a voice that is biblical and relevant." (59) The Christian faith shapes a worldview, one that points to norms that inform ethical considerations, which, in turn, can help give shape to policies and regulations. (60)

The rapid pace of change adds a degree of urgency to this call to engage. In the words of futurist Roy Amara, who coined Amara's law: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." (61) At the end of the movie Wall-E, the human captain wrestles in the control room to seize control back from the automated system. Likewise, the future of AI is neither inevitable nor unstoppable. However, Christians will need to join the dialogue and be prepared to carry out our responsibility as we unfold these powerful new technologies.

Notes

(1) Marvin Minsky and Seymour Papert, Perceptrons (Cambridge, MA: MIT Press, 1969), 4.

(2) One landmark paper in particular was D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning Representations by Back-Propagating Errors," Nature 323 (1986): 533-36.

(3) To explore a demonstration of how these techniques work, visit http://playground.tensorflow.org.

(4) David Silver et al., "Mastering the Game of Go without Human Knowledge," Nature 550 (October 19, 2017): 354-59.

(5) TensorFlow, "An Open Source Machine Learning Framework for Everyone," https://www.tensorflow.org/.

(6) Brandon Keim, "Dr. Watson Will See You... Someday," IEEE Spectrum 52, no. 6 (2015): 76-77; Andre Esteva et al., "Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks," Nature 542, no. 7639 (2017): 115-18; Matthieu Komorowski et al., "The Artificial Intelligence Clinician Learns Optimal Treatment Strategies for Sepsis in Intensive Care," Nature Medicine 24, no. 11 (2018): 1716-20; and Yiming Ding et al., "A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using [.sup.18]F-FDG PET of the Brain," Radiology 290, no. 2 (2018): 456-64.

(7) Egbert Schuurman, Faith and Hope in Technology, trans. John Vriend (Toronto, ON: Clements Publishing, 2003), 69.

(8) Byron Reese, Infinite Progress: How the Internet and Technology Will End Ignorance, Disease, Poverty, Hunger, and War (Austin, TX: Greenleaf Book Group, 2013).

(9) Mark Harris, "God Is a Bot, and Anthony Levandowski Is His Messenger," Wired (September 27, 2017).

(10) Olivia Solon, "Deus ex machina: Former Google Engineer Is Developing an AI God," The Guardian (September 28, 2017).

(11) David Noble, The Religion of Technology: The Divinity of Man and the Spirit of Invention (New York: Penguin Books, 1999), 5.

(12) Nicholas Wolterstorff, Reason within the Bounds of Religion (Grand Rapids, MI: Eerdmans, 1999), 67-68.

(13) See Andres Lomena, "Origins and Theory of the World Transhumanist Association," interview with Nick Bostrom and David Pearce, https://ieet.org/index.php/IEET2/print/2201. Accessed December 15, 2017.

(14) Ray Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence (New York: Penguin, 2000), 142.

(15) Noble, Religion of Technology, 5.

(16) An Open Letter, "Research Priorities for Robust and Beneficial Artificial Intelligence," Future of Life Institute, 2015, https://futureoflife.org/ai-open-letter/.

(17) Carl Benedikt Frey, Michael A. Osborne, and Craig Holmes, "Technology at Work v2.0: The Future Is Not What It Used to Be," Citi GPS: Global Perspectives & Solutions report produced by Citi and the Oxford Martin School at the University of Oxford, 2016, https://www.oxfordmartin.ox.ac.uk/publications/view/2092.

(18) OECD, "Automation and Independent Work in a Digital Economy," Policy Brief on the Future of Work (Paris, France: OECD Publishing, 2016), https://www.oecd.org/employment/Automation-and-independent-work-in-a-digital-economy-2016.pdf.

(19) Kevin Brown and Steven McMullen, "How to Find Hope in the Humanless Economy," Christianity Today 61, no. 6 (July/August 2017): 30, http://www.christianitytoday.com/ct/2017/july-august/how-to-have-hope-in-humanless-economy.html.

(20) This quote has been attributed to Marshall McLuhan.

(21) Neil Postman, "Questioning the Media." Talk given for the January Series at Calvin College, January 12, 1998, http://www.calvin.edu/january/1998/postman.htm.

(22) Derek C. Schuurman, "Technology Has a Message," Christian Educators Journal 51, no. 3 (February 2012): 4-7.

(23) Neil Postman, Technopoly: The Surrender of Culture to Technology (New York: Vintage Books, 1993), 13.

(24) Cathy O'Neil, Weapons of Math Destruction (New York: Broadway Books, 2016).

(25) John M. Culkin, "A Schoolman's Guide to Marshall McLuhan," Saturday Review (March 18, 1967), 70.

(26) Alan Turing, "Computing Machinery and Intelligence," Mind 59 (October 1950): 433-60.

(27) Lisa Thomas, "What's behind Japan's Love Affair with Robots?," Time Magazine (August 3, 2009).

(28) Matthew T. Dickerson, The Mind and the Machine: What It Means to Be Human and Why It Matters (Eugene, OR: Cascade Books, 2016), xxvi-xxvii.

(29) Ibid.

(30) Stephen Wolfram, A New Kind of Science (Champaign, IL: Wolfram Media Inc., 2002), 715.

(31) James A. Reggia, "Conscious Machines: The AI Perspective," Proceedings of the AAAI 2014 Fall Symposium Series (Menlo Park, CA: Association for the Advancement of Artificial Intelligence, 2014): 34-37.

(32) Kevin Kelly, "God Is the Machine," Wired (December 1, 2002), https://www.wired.com/2002/12/holytech/.

(33) Jonathan Merritt, "Is AI a Threat to Christianity? Are You There, God? It's I, Robot," The Atlantic (February 3, 2017), https://www.theatlantic.com/technology/archive/2017/02/artificial-intelligence-christianity/515463/.

(34) Craig G. Bartholomew, Contours of the Kuyperian Tradition: A Systematic Introduction (Downers Grove, IL: InterVarsity Press, 2017), 103.

(35) Steven H. VanderLeest and Derek C. Schuurman, "A Christian Perspective on Artificial Intelligence: How Should Christians Think about Thinking Machines?," Proceedings of the 2015 Christian Engineering Conference (CEC), Seattle Pacific University, Seattle, WA, June 2015, 91-107.

(36) See also Derek C. Schuurman, "The Rapture of the Geeks," In All Things (November 5, 2015), https://inallthings.org/the-rapture-of-the-geeks/.

(37) Russell Bjork, "Artificial Intelligence and the Soul," Perspectives on Science and Christian Faith 60, no. 2 (2008): 95-102.

(38) David Edmonds, Would You Kill the Fat Man? The Trolley Problem and What Your Answer Tells Us about Right and Wrong (Princeton, NJ: Princeton University Press, 2014).

(39) Lora G. Weiss, "Autonomous Robots in the Fog of War," IEEE Spectrum 48, no. 8 (August 2011): 31-34, 56-57.

(40) Derek C. Schuurman, "Responsible Automation: Faith and Work in an Age of Intelligent Machines," in The Wonder and Fear of Technology: Commissioned Essays on Faith and Technology, ed. David H. Kim (New York: Center for Faith & Work, 2016), 42-56.

(41) Sherry Turkle, Alone Together: Why We Expect More from Technology and Less from Each Other (New York: Basic Books, 2012).

(42) A. Sharkey and N. Sharkey, "Children, the Elderly, and Interactive Robots," IEEE Robotics & Automation Magazine 18, no. 1 (March 2011): 32-38.

(43) Solon Barocas and Danah Boyd, "Engaging the Ethics of Data Science in Practice," Communications of the ACM 60, no. 11 (2017): 23-25.

(44) Erik Brynjolfsson and Andrew McAfee, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (New York: W. W. Norton, 2014), 188-204.

(45) Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation (New York: W. H. Freeman, 1976), 5-6.

(46) Geoff Colvin, Humans Are Underrated: What High Achievers Know That Brilliant Machines Never Will (New York: Portfolio/Penguin, 2015), 42.

(47) Declan Butler, "AI Summit Aims to Help World's Poorest," Nature News 546, no. 7657 (June 6, 2017): 196-97.

(48) Frederick P. Brooks Jr., "The Computer Scientist as Toolsmith II," Communications of the ACM 39, no. 3 (March 1996): 64.

(49) Jacques Ellul, The Technological Society (New York: Vintage Books, 1964), xxv.

(50) C. S. Lewis, The Abolition of Man (New York: HarperOne, 1974), 55.

(51) Future of Humanity Institute at the University of Oxford, https://www.fhi.ox.ac.uk/.

(52) AI Now Institute at New York University, https://ainowinstitute.org/.

(53) Berkman Klein Center for Internet and Society at Harvard University, "Ethics and Governance of AI," https://cyber.harvard.edu/research/ai.

(54) IEEE Standards Association, "The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems," http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html.

(55) United Nations Interregional Crime and Justice Research Institute, "UNICRI Centre for Artificial Intelligence and Robotics," The Hague, The Netherlands, http://www.unicri.it/in_focus/on/UNICRI_Centre_Artificial_Robotics.

(56) Steven Garber, Visions of Vocation: Common Grace for the Common Good (Downers Grove, IL: InterVarsity Press, 2014), 222.

(57) Some are already joining the conversation. The Ethics and Religious Liberty Commission of the Southern Baptist Convention recently released a statement titled "Artificial Intelligence: An Evangelical Statement of Principles." See https://erlc.com/resource-library/statements/artificial-intelligence-an-evangelical-statement-of-principles.

(58) "The Cape Town Commitment," Lausanne Movement (2011), https://www.lausanne.org/content/ctc/ctcommitment#capetown.

(59) Ibid.

(60) For one possible list of norms, see Derek C. Schuurman, Shaping a Digital World: Faith, Culture and Computer Technology (Downers Grove, IL: InterVarsity Press, 2013), 77-106.

(61) Susan Ratcliffe, ed., "Roy Amara 1925-2007, American Futurologist," Oxford Essential Quotations, 4th ed. (Oxford, UK: Oxford University Press, 2016).

ASA Members: Submit comments and questions on this article at www.asa3.org → RESOURCES → Forums → PSCF Discussion.

Please note: A draft of this article was originally posted online in January 2018 as an invitational essay with a Call for Papers for a special issue on Artificial Intelligence. The two articles which follow were subsequently submitted and reviewed in response to this invitational essay.

Derek C. Schuurman worked as an electrical engineer for a number of years before returning to school to complete a PhD in the area of robotics and computer vision. He is now a professor of computer science at Calvin College in Grand Rapids, Michigan, where he currently holds the William Spoelhof Teacher-Scholar-in-Residence chair. He is the author of the book Shaping a Digital World: Faith, Culture and Computer Technology (InterVarsity Press).
COPYRIGHT 2019 American Scientific Affiliation
No portion of this article can be reproduced without the express written permission from the copyright holder.

Author: Schuurman, Derek C.
Publication: Perspectives on Science and Christian Faith
Article Type: Essay
Date: June 1, 2019