AI and Ethics.
Machine learning and AI follow a familiar pattern: inklings of their components have been around for decades, and suddenly they are headlines. The interesting thing about AI, though, is that many of today's headlines are less about the technical advances and more about the impact the invention will have on people and on society.
Information professionals are uniquely positioned to consider the ethical implications of AI and related technologies. We are concerned about privacy, bullying, meddling with elections, hacking of corporate and public systems, and algorithmic bias. We demand technology solutions, but perhaps no clear solutions are possible. Still more abstract are the effects that technology has on children, adults, and society at large. In a world where ethics and societal norms differ from one culture to another, can we come up with generally accepted societal norms for good behavior?
In May 2018, I joined Tom Wilde, CEO, Indico; Steve Cohen, EVP and COO, Basis Technology; and David Bayer, then-executive director of the Cognitive Computing Consortium, at Information Today, Inc.'s Data Summit conference to tackle the topic "Beyond Do No Harm--The Complex Considerations of Ethical AI" (dbta.com/datasummit/2019). Despite the panelists' technology backgrounds, the issues discussed were mainly legal and societal.
The question of trust--how far to trust the recommendations and predictions of "black box" AI and cognitive computing systems--is central to AI ethics because it raises the question of expectations. After years of watching both the software industry and buyers of software, the panel was convinced that vendors' and buyers' expectations of software perfection--or of the need for perfection--simply don't match. The archives of computing journals are rife with discussions about how to develop software without bugs; complaints by users are just as common. How do we reconcile technologists' and users' expectations? To avert widespread frustration and potential lawsuits, both sides need to work together to develop a commonly accepted approach.
The issue of trust touches both software use and development. Can we understand where and why recommendations are made by AI systems if we are unable to audit and validate algorithmic results or test their repeatability? Should we trust the content on which recommendations are based? Are recommendations driven merely by click popularity rather than by validity? How do we know that suggestions are based on sources that are authoritative and relatively unbiased?
Should we try to understand the motivation for recommendations? Is the vendor's profit motive well-aligned with what we are looking for? Often, a search for products is influenced more by the retailer's inventory than the searcher's needs. Recommendation engines can easily prioritize profit over utility, value, or truth. That's true, of course, in the physical world as well, but users often trust what comes out of a website somewhat blindly.
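To make the point concrete, here is a hypothetical sketch (the product names, scores, and weighting scheme are invented for illustration) of how a recommendation engine that blends relevance with the seller's profit margin can quietly reorder what the user sees:

```python
# Hypothetical illustration: a recommender that blends search relevance
# with the retailer's profit margin. All values are invented.
products = [
    {"name": "A", "relevance": 0.9, "margin": 0.1},
    {"name": "B", "relevance": 0.6, "margin": 0.8},
    {"name": "C", "relevance": 0.8, "margin": 0.3},
]

def rank(items, profit_weight):
    # score = (1 - w) * relevance + w * margin
    return sorted(
        items,
        key=lambda p: (1 - profit_weight) * p["relevance"]
                      + profit_weight * p["margin"],
        reverse=True,
    )

# With no profit weighting, the most relevant product wins.
print([p["name"] for p in rank(products, 0.0)])   # ['A', 'C', 'B']
# With heavy profit weighting, the high-margin product jumps to the top.
print([p["name"] for p in rank(products, 0.8)])   # ['B', 'C', 'A']
```

The user sees only the final ordering; the weight that produced it stays inside the black box, which is precisely the transparency problem.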
Bias is another pervasive concern. Training sets can easily contain misinformation, old information, or incomplete information that skews the results of a search. These sets can reflect bias in data and source selection. The content administrator may be unaware that other data exists outside the organization's control. There may be hidden assumptions in taxonomies, ontologies, or schemas.
As a result, algorithmic discrimination based on a rigid schema may make it difficult to present the "best" information for an individual's needs. The vastness of large collections can make it difficult to "stumble" on more pertinent results.
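A toy sketch (with invented data) shows how easily a skewed training set produces skewed results: a trivial model trained on an unrepresentative sample simply echoes the sample's majority, while looking deceptively accurate:

```python
from collections import Counter

# Hypothetical sketch: a trivial "classifier" that always predicts the
# most common label in its training set. The labels are invented.
training_labels = ["approved"] * 90 + ["denied"] * 10  # skewed sample

def majority_predict(labels):
    # Return the single most frequent label seen in training.
    return Counter(labels).most_common(1)[0][0]

# Every new case gets the majority label, regardless of its merits.
print(majority_predict(training_labels))  # approved

# On the same skewed sample, this "model" scores 90% accuracy
# despite having learned nothing about individual cases.
accuracy = training_labels.count("approved") / len(training_labels)
print(accuracy)  # 0.9
```

Real systems are far more sophisticated, but the underlying hazard is the same: a model can only reflect the data it was given.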
Algorithms and biased training sets are not the only culprits in our quest to develop better information systems. We see a growing reliance on accepting AI and cognitive computing recommendations without applying the human ability to test those recommendations against reality, given the context in which the user is seeking information.
Context is a new concept in developing cognitive applications, and we are still experimenting with how to filter results by the user's context without invading his or her privacy. Another danger in our interaction with information systems is the system's tendency to deprecate or eliminate exceptions, rather than highlight them.
COMPLEXITY AND CURATION
While human interaction with information systems has always been a problem for users, new quandaries have arisen because of the volume and availability of uncurated data. These stumbling blocks are societal as well as legal. Social media, for instance, coupled with a lack of governance, invites manipulation by users, by organizations, and by governments. The drive to regulate malefactors is admirable, but who will have the power to decide whether a social message is dishonest, inaccurate, or evil? The same governance questions arise in challenges as varied as the flawed design of Boeing's 737 MAX software, unquestioning acceptance of algorithmic bail and sentencing recommendations, election interference, and the spread of ideologies of hate and violence. In yesterday's more homogeneous societies, especially those with non-porous borders, it might have been possible to enforce social norms. This is no longer feasible.
The complexity of risks and choices has flummoxed technologists as well as regulators. What is the correct decision for a self-driving car to make when faced with a dilemma of injuring one person or a group of bystanders? Autonomous car designers tell us that there will be far fewer traffic deaths with more autonomous vehicles on the road, but this means ceding decisions of whom to injure to a vehicle. Do we want to do this?
PRIVACY (OR LACK THEREOF)
Discussions of privacy issues are perhaps the most prevalent in the media and press. Do we need to trade privacy for the results and benefits of AI and cognitive computing? This is anathema to information professionals. Will we resolve issues of data ownership, data (and device) access, and data control? Copyright ownership plays into this area as well, as do transparency and the right to know.
These are rarely technical problems, nor are they likely to spawn effective technical solutions that appeal to all interested parties. Rather, they invite legal or regulatory solutions. And yet, many technologists debate these issues fiercely as if they could arrive at purely technical answers.
Perfection eludes us all, every day. Instead, we must develop coping strategies for dealing with imperfection: selecting better training sets, understanding how search technology works, and designing systems that anticipate the possibility of disastrous consequences. The trick will be to achieve "good enough" solutions while giving humans the ultimate control to correct egregious errors.
Sue Feldman (email@example.com) is president, Synthexis, and cofounder and research director, Cognitive Computing Consortium.
Comments? Email the editor-in-chief (firstname.lastname@example.org).
The Searcher's Viewpoint, Jan. 1, 2020