Ethical, Explainable Artificial Intelligence: Bias and Principles.

Artificial Intelligence (AI) is hot. Like a cat on a hot tin roof, it's hipping and hopping from one area to another. News headlines, giddy with promise, shout out all that AI can do: help cure cancer and other diseases that plague modern life, solve climate change, make renewable energy more efficient, transform urban life as we know it, protect us from nuclear weapons, and everything in between.

No doubt, some of these headlines read like science fiction, but AI really does impact our lives in ways that many of us might not even realize. Let me count them.

* Every time you use your favorite search engine, rest assured that a suite of AI techniques and applications is delivering to you a set of search results deemed best for your perusal.

* Travel and mapping tools such as Google Maps and Waze find the fastest routes for commuters traveling to and from work and school.

* Ride-sharing services such as Uber and Lyft use machine learning algorithms to determine how much to charge you or how long you will have to wait until your ride appears.

* Spam filters capture and block unwanted "Nigerian Prince" scams by constantly learning new spam campaigns and matching them against your personal preferences.

* Financial institutions employ AI to prevent credit card fraud by understanding your credit card history--where you shop, the frequency of your transactions, and how much you normally spend.

* Social media networks such as Facebook employ a variety of AI techniques that help you identify your friends through facial recognition apps on photos uploaded to Facebook, deliver news relevant to your interests, and connect you with advertisers.

* Amazon uses AI to deliver the products and services we wish to buy. Recommendation engines help us find the right products, movies, music, and books to buy, watch, listen to, and read--all delivered through AI techniques (a toy version of the idea appears after this list).
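
To make that last example concrete, below is a toy sketch of user-based collaborative filtering, one classic recommendation technique. The ratings matrix and every number in it are invented purely for illustration; production recommendation engines are vastly more sophisticated.

    import numpy as np

    # Rows = users, columns = items; 0 means "not yet rated."
    ratings = np.array([
        [5, 4, 0, 0],
        [4, 5, 5, 1],
        [1, 0, 2, 5],
    ], dtype=float)

    def cosine(u, v):
        # Cosine similarity between two users' rating vectors.
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

    # Score items for user 0 by weighting the other users' ratings
    # by how similar those users are to user 0.
    target = 0
    sims = np.array([cosine(ratings[target], ratings[u]) for u in range(len(ratings))])
    sims[target] = 0.0  # exclude the user's own row
    scores = sims @ ratings / sims.sum()

    # Suggest the highest-scoring items the user hasn't rated yet.
    for item in np.argsort(scores)[::-1]:
        if ratings[target, item] == 0:
            print(f"Suggest item {item} (predicted score {scores[item]:.2f})")

Because user 0's tastes track user 1's far more closely than user 2's, the sketch suggests item 2 (which user 1 rated highly) well ahead of item 3.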

AI DOMINATES OUR LIVES

Like it or not, AI will continue to dominate our lives now and into the future. Don't just take my word for it. According to Forrester, "across all businesses, there will be a greater than 300% increase in investment in artificial intelligence in 2017 compared with 2016" (Press, Gil. "Forrester Predicts Investment in Artificial Intelligence Will Grow 300% in 2017"; Nov. 1, 2016; forbes.com/sites/gilpress/2016/11/01/forrester-). In its 2016 market intelligence report, Tractica, a market research firm, forecast "that annual worldwide AI revenue will grow from $643.7 million in 2016 to $36.8 billion by 2025" ("Artificial Intelligence Revenue to Reach $36.8 Billion Worldwide by 2025," Aug. 25, 2016; tractica.com/newsroom/press-releases/artificial-intelligence-revenue-to-reach-36-8-billion-worldwide-by-2025). In 2016, IDC concluded that "widespread adoption of cognitive systems and artificial intelligence (AI) across a broad range of industries will drive worldwide revenues from nearly $8.0 billion in 2016 to more than $47 billion in 2020" ("Worldwide Cognitive Systems and Artificial Intelligence Revenues Forecast to Surge Past $47 Billion in 2020, According to New IDC Spending Guide," Oct. 26, 2016; idc.com/getdoc.jsp?containerId=prUS41878616). And Venture Scanner, an emerging technologies startup research firm, is tracking more than 2,000 AI startup companies that have raised more than $26 billion in venture capital funding (Venture Scanner, "Artificial Intelligence Market Report and Data"; venturescanner.com/artificial-intelligence). AI is hot and getting hotter.

Tech giants such as Google, Microsoft, IBM, Facebook, Amazon, and Apple have all committed to an AI-centric focus. (1) Microsoft, in early July 2017, announced the creation of an "artificial intelligence research unit that will oversee initiatives and products that will help transform industries including healthcare, environment and education" (Christian, Bonnie. "Microsoft Wants to Be a Major AI Player. Here's Its Master Plan," July 13, 2017; wired.co.uk/article/microsoft-ai-). Google's CEO Sundar Pichai, in April 2017, shared with investors the new vision for Google: "We continue to set the pace in machine learning and A.I. research," and "we're transitioning to an A.I.-first company" (Micu, Alexandru. "Google Is Shifting Their Focus From Search to Artificial Intelligence, CEO Says," April 28, 2017; zmescience.com/other/videos/google-pursues-ai).

AI AND SOCIAL ISSUES

The promise of AI is bookended by its perceived perils. According to the World Economic Forum, the most widely reported problem with AI is the threat to jobs. Alarming headlines report how AI will replace not just blue-collar workers, but highly educated, highly skilled professionals such as lawyers, doctors, accountants, and, dare we say, librarians (Bossmann, Julia. "Top 9 Ethical Issues in Artificial Intelligence," Oct. 21, 2016; weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence). In their 2013 study, Oxford University academics Carl Benedikt Frey and Michael A. Osborne shocked many by predicting that "about 47 percent of total U.S. employment is at risk" ("The Future of Employment: How Susceptible Are Jobs to Computerisation?" Sept. 17, 2013; oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf). Yikes!

A second issue, growing in importance, is bias and how techniques such as machine learning can exacerbate societal problems such as discrimination and gender exclusion. Machine learning, one of the most popular AI techniques, requires vast training datasets to identify and reveal hidden patterns in the data. However, many of these training sets suffer from a variety of biases: They can be incomplete, skewed, or non-representative, and data improperly labeled by humans can embed biases and cultural assumptions directly into the resulting models. According to the AI Now Institute, these biases are "difficult to find and understand, especially when systems are proprietary, treated as black boxes or taken at face value" (Campolo, Alex, Madelyn Sanfilippo, Meredith Whittaker, Kate Crawford, et al. "AI Now 2017 Report"; ainowinstitute.org/AI_Now_2017_Report.pdf).

The AI Now report maintains that we are drowning in big data, but good datasets--unbiased, readily available, and low-cost--are hard to come by. Social network, crowdsourced, and scraped data may be easy to obtain, but it can be biased because such datasets are not representative of the overall population. In other words, biased data yields biased, inaccurate results.
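
The mechanism is easy to demonstrate in miniature. Here is a minimal sketch (synthetic data and scikit-learn; my own illustration, not anything from the AI Now report) in which a single classifier is trained on data dominated by one group. The model fits the majority group's patterns, and its accuracy on the underrepresented group suffers accordingly.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        # Two-feature synthetic data; the true decision rule differs by group.
        X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
        y = (X[:, 0] + 0.5 * X[:, 1] > 1.5 * shift).astype(int)
        return X, y

    # Group A dominates the training data; group B is underrepresented.
    Xa, ya = make_group(2000, shift=0.0)
    Xb, yb = make_group(100, shift=1.5)
    model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

    # Evaluate on balanced, held-out samples from each group.
    for name, shift in [("A (well represented)", 0.0), ("B (underrepresented)", 1.5)]:
        X_test, y_test = make_group(1000, shift)
        print(f"Group {name}: accuracy = {model.score(X_test, y_test):.2f}")

The skewed training set is not "wrong" in any obvious way--it is simply unrepresentative, and the model quietly inherits that skew.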

Not surprisingly, more and more reports are surfacing that AI tools are being used in ways that are discriminatory, biased, and unfair to minority populations, the poor, and the disadvantaged. Governments, policymakers, academics, and even titans of industry are calling for increased oversight of AI tools and applications that deliver automated decisions that can negatively affect one's ability to go to college, get a job, or apply for a mortgage. Police departments are using automated decision systems to monitor and surveil communities and neighborhoods across the country, while judges employ automated systems to justify longer prison sentences.

AI IN COURT SENTENCING

The ProPublica report on Northpointe's COMPAS system examines how these automated systems are used by courts across the country to hand out widely disparate sentences to white and black defendants (Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. "Machine Bias: There's Software Used Across the Country to Predict Future Criminals. And It's Biased Against Blacks," May 23, 2016; propublica). A New York Times article found that the "widely used software that assessed the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk" (Crawford, Kate. "Artificial Intelligence's White Guy Problem," June 25, 2016; nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html?_r=0). In her article, Crawford notes that another problem surfaced when users "discovered that Google's photo app, which applies automatic labels to pictures in digital photo albums, was classifying images of black people as gorillas."
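
The disparity described above is, at bottom, a difference in false positive rates: the share of defendants who did not reoffend but were still flagged as high-risk, computed separately for each group. The short sketch below shows the calculation; the data is randomly generated for illustration only and bears no relation to the actual COMPAS figures.

    import numpy as np

    def false_positive_rate(flagged, reoffended):
        # FPR = P(flagged high-risk | did not reoffend)
        did_not_reoffend = ~reoffended
        return (flagged & did_not_reoffend).sum() / did_not_reoffend.sum()

    rng = np.random.default_rng(1)
    n = 10_000
    group = rng.choice(["A", "B"], size=n)
    reoffended = rng.random(n) < 0.35  # same base rate in both groups
    # Hypothetical risk flags that err more often against group B:
    flagged = rng.random(n) < np.where(group == "B", 0.45, 0.25)

    for g in ["A", "B"]:
        mask = group == g
        print(f"Group {g}: FPR = {false_positive_rate(flagged[mask], reoffended[mask]):.2f}")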

These are not isolated examples. Companies, courts, governments, and colleges are collecting data on all of us and then combining this data with artificially intelligent systems to develop new services, populate newsfeeds, target advertisements, determine healthcare treatment plans, and levy court rulings without our knowledge or consent.

Furthermore, reports Bianca Datta, we "don't really know how these tools work or how they arrive at their decisions, and neither do we know how to interpret or validate many of the algorithms that power these systems. Without that knowledge, vast swaths of our economy could become black boxes, unable to be scrutinized for legality or fairness" ("Can Government Keep Up With Artificial Intelligence?" NOVA Next, Aug. 10, 2017; pbs.org/wgbh/nova/next/tech/ai-government-policy). In an April 2017 article in Computerworld, Aki Ohashi, director of business development at PARC (Palo Alto Research Center), comments, "You don't really know why a system made a decision. AI cannot tell you that reason today. It cannot tell you why. It's a black box. It gives you an answer and that's it; you take it or leave it" (Nott, George. "'Explainable Artificial Intelligence': Cracking Open the Black Box of AI," April 10, 2017; computerworld.com.au/article/617359/explainable-artificial-intelligence-cracking-open-black-box-ai).

Mathematician, author, and blogger Cathy O'Neil calls these automated algorithms "weapons of math destruction," or WMDs, because they are widely used to make important, life-altering decisions, yet they are opaque and secretive. They do not reveal how a single metric determines whether you go to college, find a job, get a credit card or mortgage, or how long you are incarcerated if you run afoul of the law. You cannot appeal their decisions or request an explanation, and they are destructive because they are unfair to thousands of people (O'Neil, Cathy. TED Talk, "The Era of Blind Faith in Big Data Must End," April 2017; ted.com/talks/cathy_o_neil_the_era_of_blind_faith_in_big_data_must_end).

In her August 2016 Opinion piece for The New York Times, Julia Angwin quotes Crawford, who is principal researcher at Microsoft Research and co-founder of the AI Now Institute. Crawford argues, "We urgently need more due process with the algorithmic systems influencing our lives. If you are given a score that jeopardizes your ability to get a job, housing or education, you should have the right to see that data, know how it was generated, and be able to correct errors and contest the decision" ("Make Algorithms Accountable," Aug. 1, 2016; nytimes.com/2016/08/01/opinion/make-algorithms-accountable.html?_r=0).

In a nutshell, we need explainable, accountable AI.

EXPLAINABLE AI (XAI)

One important government agency leading the way on explainable AI (XAI) is DARPA (Defense Advanced Research Projects Agency). In August 2016, DARPA released a Broad Agency Announcement on Explainable Artificial Intelligence (DARPA-BAA-16-53; darpa.mil/program/explainable-artificial-intelligence). In it, David Gunning predicts:
New machine-learning systems will have the ability to explain their
rationale, characterize their strengths and weaknesses, and convey an
understanding of how they will behave in the future. The strategy for
achieving that goal is to develop new or modified machine-learning
techniques that will produce more explainable models. These models will
be combined with state-of-the-art human-computer interface techniques
capable of translating models into understandable and useful
explanation dialogues for the end user.


Why is DARPA involved? In the DARPA announcement, Gunning notes, "The Department of Defense is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI--especially explainable machine learning--will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners." DARPA recognizes both the importance AI plays in modern warfare and the ability to understand how the algorithm arrived at an assessment. If we are using these systems for national security and strategic warfare goals, we need to be darn sure we know how these systems work. The fate of the nation (the world) depends on it.
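
What might such an explanation look like in practice? One widely used post-hoc technique--my choice of illustration here; DARPA's program spans many approaches--is permutation importance: shuffle each input feature in turn and measure how much the model's accuracy drops, yielding a rough ranking of which features drive its decisions. A minimal scikit-learn sketch:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Rank features by how much the test score drops when each is shuffled.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for i in result.importances_mean.argsort()[::-1][:5]:
        print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")

A ranking like this is a long way from the "explanation dialogues" Gunning envisions, but it hints at what opening the black box can mean.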

GOVERNMENT REGULATIONS

The European Union, in April 2016, adopted the General Data Protection Regulation (GDPR), "a set of comprehensive regulations for the collection, storage, and use of personal information" (Goodman, Bryce, and Seth Flaxman. "European Union Regulations on Algorithmic Decision-Making and a 'Right to Explanation,'" Oxford Internet Institute, Aug. 31, 2016; arxiv.org/pdf/1606.08813.pdf). The regulation goes into effect in May 2018. As Goodman and Flaxman explain, the goal of this regulation is to close the "perceived gaps and inconsistencies in the EU's current approach to data protection." Article 22: Automated Individual Decision-Making specifically addresses the problems of algorithmic decision-making, and the regulation, they explain, could have the effect of "prohibiting a wide swath of algorithms currently used in recommendation systems, credit and insurance risk assessments, computational advertising, and social networks."

EU policymakers, they continue, wrote these regulations to enable "the right of citizens to receive an explanation for algorithmic decisions." However, it is not clear if this regulation will have the intended effect.

Scholars Sandra Wachter, Brent Mittelstadt, and Luciano Floridi, all at the Oxford Internet Institute, University of Oxford, are doubtful that the GDPR really does provide a right of explanation. In their paper, they suggest that the GDPR "lacks precise language as well as explicit and well-defined rights and safeguards against automated decision-making, and therefore runs the risk of being toothless" ("Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation," Sept. 20, 2017; papers.ssrn.com/sol3/papers.cfm?abstract_id=2903469).

We will have to wait until after May 2018 to find out if the GDPR protects against algorithmic bias and discrimination, or if it really is toothless.

INDUSTRY BODIES AND COMPANY LEADERS

Industry bodies, companies, and professional societies are releasing their own ethical principles, guidelines, and reports designed to show regulators and policymakers that industry will police its own, thereby eliminating the need for governmental regulation.

In March 2014, MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn launched the Future of Life Institute, an organization headquartered in Boston whose remit is to examine the existential risks that new technologies pose to human society. In January 2015, the institute released an open letter, signed by more than 8,000 people, including Peter Norvig, Elon Musk, Stephen Hawking, and others from Microsoft, Google, Facebook, and DeepMind, calling on scientists, researchers, companies, and governments working in the areas of AI to ensure that "AI systems are robust and beneficial: our AI systems must do what we want them to do" ("An Open Letter: Research Priorities for Robust and Beneficial Artificial Intelligence"; futureoflife.org/ai-open-letter).

In January 2017, the Future of Life Institute held its annual conference and released the Asilomar AI Principles (futureoflife.org/ai-principles)--a collection of 23 principles arranged into three broad categories: Research Issues, Ethics and Values, and Longer-Term Issues. In the section on ethics and values, Principles 6-9 deal with making AI tools accountable, transparent, verifiable, and auditable. Principle 9 covers who shares responsibility for AI systems that are used or misused.

In the same month, IBM's CEO Ginni Rometty, in a panel discussion at the World Economic Forum, acknowledged the profound ways AI touches "every facet of work and life--with the potential to radically transform them for the better. ... As with every prior world-changing technology, this technology carries major implications. Many of the questions it raises are unanswerable today and will require time, research, and open discussion to answer" (Dignan, Larry. "IBM's Rometty Lays Out AI Considerations, Ethical Principles," Jan. 17, 2017; zdnet.com/article/ibms-rometty-lays-out-ai-considerations-ethical-principles). In response to these concerns, Rometty announced three principles--purpose, transparency, and skills--noting that IBM's future products and services, and the data they use, must be transparent and must augment human skills.

ETHICAL AI

IBM is only the latest to enter this arena. Tech entrepreneur Reid Hoffman, eBay founder Pierre Omidyar, the Knight Foundation, MIT's Media Lab, and Harvard's Berkman Klein Center for Internet & Society have established the Ethics and Governance of Artificial Intelligence Fund, which "advances the development of ethical AI in the public interest" (Kanaracus, Chris. "MIT, Harvard, Tech Industry Luminaries Team Up on Fund for Ethical AI," Jan. 16, 2017; zdnet.com/article/mit-harvard-tech-industry-luminaries-team-up-on-fund-for-ethical-ai).

DeepMind's Verity Harding and Sean Legassick announced in an October 2017 blog post the creation of a new unit inside Google's DeepMind--the DeepMind Ethics and Society--saying, "[this] new unit will help us explore and understand the real-world impacts of AI. It has a dual aim: to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all" ("Why We Launched DeepMind Ethics & Society," Oct. 3, 2017; deepmind.com/blog/why-we-launched-deepmind-ethics-society).

Microsoft's Satya Nadella, in an article published in Slate, lays out six principles that industry and society need to consider in their approach to AI. He concludes by arguing that "the most critical next step in our pursuit of A.I. is to agree on an ethical and empathic framework for its design" ("The Partnership of the Future," June 26, 2016; slate.com/articles/technology/future_tense/2016/06/microsoft_ceo_satya_nadella_humans_and_a_i_can_work_together_to_solve_society.html).

In September 2016, Google, Facebook, Amazon, IBM, and Microsoft formed the Partnership on Artificial Intelligence to Benefit People and Society. The goal of the partnership is to "conduct research, recommend best practices, and publish research under an open license in areas such as ethics, fairness and inclusivity; transparency, privacy, and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability and robustness of the technology" (Hern, Alex. "'Partnership on AI' Formed by Google, Facebook, Amazon, IBM and Microsoft," Sept. 28, 2016; theguardian.com/technology/2016/sep/28/google-facebook-amazon-ibm-microsoft-partnership-on-ai-tech-firms).

Professional societies have also released AI principles, guidelines, and white papers. The Association for Computing Machinery (ACM), Institute of Electrical and Electronics Engineers (IEEE), Information Technology Industry Council (ITI), and the Software and Information Industry Association (SIIA) have all released individual sets of ethical principles designed to demonstrate to policymakers and regulators that there is no need for governments to regulate AI now or in the future. (2) The industry recognizes the social impacts of AI and can regulate itself--don't worry, we've got this covered, thank you very much.

AI NOW

In November 2017, Kate Crawford and Meredith Whittaker, founder of Open Research at Google, announced the creation of "the AI Now Institute, a research organization tasked with exploring how AI is affecting society at large. AI Now will be cross-disciplinary, bridging the gap between data scientists, lawyers, sociologists, and economists studying the implementation of artificial intelligence" (Gershgorn, Dave. "The Field of AI Research Is About to Get Way Bigger Than Code," Nov. 15, 2017; qz.com/1129307/the-field-of-ai-research-is-about-to-get-way-bigger-than-code). The institute's goal is to focus on "four core domains: labor and automation, bias and inclusion, rights and liberties, and safety and critical infrastructure" ("The AI Now Institute Launches at NYU to Examine the Social Effects of Artificial Intelligence," Nov. 15, 2017; ainowinstitute.org/press-release-ai-now-launch).

In the previously referenced "AI Now 2017 Report," Campolo et al. appreciate the "development of professional and ethical codes to govern the design and application of AI technologies." While seeing this as a positive step, they also see some real limitations. The key problem, they state, "is that they share an assumption that industry will voluntarily begin to adopt their approaches. ... While these efforts set moral precedents and start conversations, they provide little help to practitioners in navigating daily ethical problems in practice or diagnosing ethical harms and do little to directly change ethics in the design and use of AI."

The authors posit that more needs to be done:
In the face of rapid, distributed, and often proprietary AI development
and implementation, such forms of soft governance face real challenges.
Among these are problems of coordination among different ethical codes,
as well as questions around enforcement mechanisms that would go beyond
voluntary cooperation by individuals working in research and industry.
New ethical frameworks for AI need to move beyond individual
responsibility to hold powerful industrial, governmental, and military
interests accountable as they design and employ AI.


THE VIEW FROM ACADEMIA

Not surprisingly, regulation has its share of supportive academics. Ryan Calo, University of Washington School of Law; Matthew Scherer, Schar School of Policy and Government at George Mason University; Gary Marchant, Sandra Day O'Connor College of Law at Arizona State University; Wendell Wallach, Yale University's Interdisciplinary Center for Bioethics; and Ben Shneiderman, University of Maryland Department of Computer Science, all advocate for some sort of regulatory organization to oversee, certify, audit, and ensure that individuals are not harmed by artificial intelligence tools and applications. (3)

In their article "Coordinating Technology Governance," Marchant and Wallach conclude that "the governance of emerging technologies has generally proceeded in a fragmented fashion. Government agencies and developers of soft law programs propose new oversight initiatives one piece at a time, with little regard to how different initiatives affect the same technology." They posit, "Emerging technologies require a coordinated, holistic, and nimble approach. ... In short, emerging technologies need an issue manager to orchestrate and serve as the central hub for the various parts that contribute to the governance of that technology."

AI, although founded as a field more than 60 years ago, is still in its early stages. I think most would agree that AI has already changed society in profound ways and will continue to transform society as we know it. But as a society, we are just now grappling with how AI tools can and should be used in all aspects of our social, economic, political, and cultural lives.

KEY QUESTION OF TRUST

There is a key question we need to ask: Do we trust companies, governments, universities, courts, and their leadership to do the right thing? In WTF: What's the Future and Why It's Up to Us, Tim O'Reilly, CEO of O'Reilly Media, places automation into a larger social and economic context. In the article "The Great AI Paradox" (MIT Technology Review, Dec. 15, 2017; technologyreview.com/s/609318/the-great-ai-paradox), Brian Bergstein references O'Reilly's latest book, noting that O'Reilly argues "automation is fueling a short-sighted system of shareholder capitalism that rewards a tiny percentage of investors at the expense of nearly everyone else." Bergstein also shares O'Reilly's assertion that "the relentless imperative to maximize returns to shareholders makes companies more likely to use automation purely as a way to save money."

Unethical application of AI to social, economic, and legal issues, all with the goal of saving money, is short-sighted, impractical, and, ultimately, self-defeating. AI offers real opportunities to solve the most worrisome social problems of today with the technological solutions of tomorrow. This is the hopeful vision. The dark vision uses AI to exacerbate and expand social, economic, political, and cultural inequalities that benefit the few and the powerful at the expense of everyone else. Let's create and use AI tools not as "weapons of math destruction," but as tools that deliver on the hyped promise (curing cancer, solving climate change) in a manner that does no harm and benefits all.

By Laura Gordon-Murnane

Laura Gordon-Murnane (lgmurnane@gmail.com) is emerging technologies librarian at BNA.

Comments? Email the editor-in-chief (marydee@xmission.com).

Endnotes

(1.) Dignan, Larry. "Google Bets on AI-First as Computer Vision, Voice Recognition, Machine Learning Improve," May 17, 2017 (zdnet.com/article/google-bets-on-ai-first-as-computer-vision-voice-recognition-machine-learning-improve); Darrow, Barb. "Microsoft Bids Goodbye to 'Mobile First' Mantra in Favor of AI," Aug. 3, 2017 (fortune.com/2017/08/03/microsoft-cloud-ai-mobile); Darrow, Barb. "IBM Ponies Up $240 Million for Watson Artificial Intelligence Lab at MIT," Sept. 7, 2017 (fortune.com/2017/09/07/mit-ibm-watson); Perronnin, Florent, and Serkan Piantino. "Facebook AI Research Launches Partnership Program" (research.fb.com/facebook-ai-research-launches-partnership-program).

(2.) Association for Computing Machinery's "Statement on Algorithmic Transparency and Accountability," Jan. 12, 2017 (acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf); IEEE, Ethically Aligned Design, Versions 1 & 2, The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems (standards.ieee.org/develop/indconn/ec/autonomous_systems.html); Information Technology Industry Council (ITI) AI Policy Principles Executive Summary (itic.org/resources/AI-Policy-Principles-FullReport2.pdf); and SIIA Issue Brief, "Ethical Principles for Artificial Intelligence and Data Analytics," September 2017 (siia.net/Portals/0/pdf/Policy/Ethical%20Principles%20for%20Artificial%20Intelligence%20and%20Data%20Analytics%20SIIA%20Issue%20Brief.pdf?ver=2017-11-06-160346-990).

(3.) Calo, Ryan. "The Case for a Federal Robotics Commission," Sept. 15, 2014 (brookings.edu/research/the-case-for-a-federal-robotics-commission); Scherer, Matthew. "Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies," Spring 2016 (papers.ssrn.com/sol3/papers.cfm?abstract_id=2609777); Marchant, Gary E., Wendell Wallach. "Coordinating Technology Governance," Summer 2015 (issues.org/31-4/coordinating-technology-governance); Shneiderman, Ben. "Opinion: The Dangers of Faulty, Biased, or Malicious Algorithms Requires Independent Oversight," Nov. 29, 2016 (pnas.org/content/113/48/13538.long).
CHART: Principles on Artificial Intelligence--Professional Societies, Industry Groups and Associations, Tech Leaders, and Watchdogs

Each organization's published principles, marked by the themes they address (access and redress; accountability; auditability; data; diversity of perspectives; education and awareness; explanation; human benefit; privacy; responsibility; safety and controllability; security; transparency; validity and testing):

Professional Societies

* Association for Computing Machinery (acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf): access and redress; accountability; auditability; data; education and awareness; explanation; validity and testing

* IEEE (standards.ieee.org/develop/indconn/ec/autonomous_systems.html): diversity of perspectives; explanation; privacy; security

* Information Technology Industry Council (ITI) (itic.org/resources/AI-Policy-Principles-FullReport2.pdf): data; diversity of perspectives; education and awareness; explanation; privacy; responsibility; safety and controllability; security; transparency

* SIIA (Software and Information Industry Association) (siia.org): accountability; diversity of perspectives

Industry Associations and Partnerships

* Future of Life--Asilomar AI Principles (futureoflife.org/ai-principles): human benefit; privacy; responsibility; safety and controllability; transparency

* DeepMind Ethics & Society Principles (deepmind.com/applied/deepmind-ethics-society/principles): accountability; diversity of perspectives; human benefit; transparency

* Partnership on Artificial Intelligence to Benefit People (partnershiponai.org/thematic-pillars/): accountability; diversity of perspectives; privacy; safety and controllability; transparency

Tech Leaders

* IBM Principles (zdnet.com/article/ibms-rometty-lays-out-ai-considerations-ethical-principles): education and awareness; human benefit; transparency

* Microsoft Principles (tinyurl.com/y8zugdhy): accountability; data; diversity of perspectives; explanation; human benefit; privacy; transparency

Watchdogs

* AI Now Institute (ainowinstitute.org/AI_Now_2017_Report.pdf): access and redress; accountability; auditability; data; diversity of perspectives; education and awareness; explanation; human benefit; responsibility; safety and controllability; transparency; validity and testing