
Auditing algorithms for bias.

In 1971 the philosopher John Rawls proposed a thought experiment to understand the idea of fairness: the 'veil of ignorance.' What if, he asked, we could erase our recollections so we had no memory of who we were: our race, our income level, anything that might influence our opinion? Can artificial intelligence provide the veil of ignorance that would lead us to objective and ideal outcomes?

The field of AI ethics draws an interdisciplinary group of lawyers, philosophers, social scientists, programmers and others. Influenced by this community, Accenture Applied Intelligence has developed a fairness tool to understand and address bias in both the data and the algorithmic models that are at the core of AI systems. (An early prototype of the fairness tool was developed at a data study group at the Alan Turing Institute. Accenture thanks the institute and the participating academics for their role.)

In its current form, the fairness evaluation tool works for classification models, which are used, for example, to determine whether or not to grant a loan to an applicant. Classification models group people or items by similar characteristics. The tool helps a user determine whether this grouping occurs in an unfair manner, and provides methods of correction.

There are three steps to the tool:

The first part examines the data for the hidden influence of user-defined 'sensitive' variables on other variables. The tool identifies and quantifies what impact each predictor variable has on the model's output.
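The article does not describe how this influence is measured, but a minimal sketch of the idea is to check each predictor for statistical association with the sensitive variable, flagging those that could act as proxies. Here we use Pearson correlation with an illustrative threshold of 0.3; the function name, threshold, and synthetic data are assumptions, not Accenture's method.

```python
import numpy as np

def proxy_influence(X, sensitive, threshold=0.3):
    """Flag predictors correlated with a sensitive attribute.

    X: (n_samples, n_features) array of predictor variables.
    sensitive: (n_samples,) binary sensitive attribute (e.g. group membership).
    Returns a dict mapping feature index -> |correlation| for flagged proxies.
    (The 0.3 cutoff is an illustrative choice, not taken from the article.)
    """
    flagged = {}
    for j in range(X.shape[1]):
        r = np.corrcoef(X[:, j], sensitive)[0, 1]
        if abs(r) >= threshold:
            flagged[j] = abs(r)
    return flagged

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=500)                # sensitive variable
income = 30 + 10 * group + rng.normal(0, 2, 500)    # proxy: depends on group
age = rng.normal(40, 5, 500)                        # independent of group
X = np.column_stack([income, age])
print(proxy_influence(X, group))  # expect: income (feature 0) flagged, age not
```

A production tool would use a richer dependence measure (mutual information, conditional tests), but the structure is the same: quantify each predictor's link to the sensitive variable before the model ever sees it.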

The second part of the tool investigates the distribution of model errors across the different classes of a sensitive variable. The tool then applies a statistical repair to the error term so that its distribution becomes more homogeneous across the different groups. The degree of repair is determined by the user.
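One simple way to realize a user-controlled "degree of repair" is to shift each group's errors partway toward the pooled mean. This is a simplified mean-shift repair written for illustration; the article does not specify the exact statistical distortion Accenture uses.

```python
import numpy as np

def partial_repair(errors, groups, repair=0.5):
    """Shift each group's errors toward the pooled mean error.

    repair=0 leaves the errors untouched; repair=1 makes every group's
    mean error equal to the pooled mean. (An assumed, simplified repair;
    not the article's exact method.)
    """
    errors = np.asarray(errors, dtype=float)
    groups = np.asarray(groups)
    pooled = errors.mean()
    repaired = errors.copy()
    for g in np.unique(groups):
        mask = groups == g
        repaired[mask] += repair * (pooled - errors[mask].mean())
    return repaired

errors = np.array([0.5, 1.0, 1.5, 2.5, 3.0, 3.5])  # group B's errors run larger
groups = np.array(["A", "A", "A", "B", "B", "B"])
half = partial_repair(errors, groups, repair=0.5)  # gap halved
full = partial_repair(errors, groups, repair=1.0)  # group means equalized
```

Exposing `repair` as a dial is what lets the user, rather than the tool, decide how much homogeneity to enforce.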

Finally, the tool turns to false positives, one particular form of model error: instances where the model said 'yes' when the answer should have been 'no.' It examines the false positive rate across the different groups and enforces a user-determined equal rate of false positives across all groups.
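Equalizing false positive rates can be sketched by giving each group its own score cutoff, placed at the (1 − target) quantile of that group's negative-class scores. Quantile thresholding is one common way to do this; the article does not name Accenture's exact procedure, and the data below is synthetic.

```python
import numpy as np

def group_fpr(y_true, y_pred, groups, g):
    """False positive rate within group g."""
    m = (groups == g) & (y_true == 0)
    return y_pred[m].mean()

def equalize_fpr(scores, y_true, groups, target=0.10):
    """Pick a per-group cutoff so each group's FPR lands near `target`.

    The cutoff is the (1 - target) quantile of that group's negative-class
    scores. (An assumed implementation for illustration.)
    """
    y_pred = np.zeros_like(y_true)
    for g in np.unique(groups):
        m = groups == g
        thr = np.quantile(scores[m & (y_true == 0)], 1 - target)
        y_pred[m] = (scores[m] > thr).astype(int)
    return y_pred

rng = np.random.default_rng(7)
n = 3000
groups = rng.integers(0, 2, n)
y_true = rng.integers(0, 2, n)
# Group 1's scores run higher overall, so one shared cutoff would hand it
# a higher false positive rate.
scores = y_true + 0.4 * groups + rng.normal(0, 0.5, n)

y_pred = equalize_fpr(scores, y_true, groups, target=0.10)
for g in (0, 1):
    print(f"group {g} FPR: {group_fpr(y_true, y_pred, groups, g):.3f}")
```

Because each group gets its own threshold, both false positive rates come out near the single user-chosen target, which is exactly the "user-determined equal rate" the article describes.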

Correcting for fairness may reduce the model's accuracy, and the tool illustrates any change in accuracy that results. Since the balance between accuracy and fairness depends on context, we rely on the user to determine the trade-off. Depending on the context in which the tool is deployed, ensuring equitable outcomes may be a higher priority than optimizing accuracy.
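The trade-off can be made concrete with a toy experiment: when two groups have genuinely different base rates, forcing both to be approved at the same rate moves each group's cutoff away from its accuracy-optimal value. Everything below is a hypothetical construction to illustrate the shape of the trade-off, not output from Accenture's tool.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4000
group = rng.integers(0, 2, n)
# Different base rates per group, so equal approval rates cost accuracy.
y = (rng.random(n) < np.where(group == 1, 0.7, 0.3)).astype(int)
scores = y + rng.normal(0, 0.5, n)

def evaluate(mix):
    """mix=0: one accuracy-optimal cutoff; mix=1: equal approval rates.

    Returns (accuracy, gap between the two groups' approval rates).
    """
    rate = (scores > 0.5).mean()  # overall approval rate at cutoff 0.5
    pred = np.zeros(n, dtype=int)
    for g in (0, 1):
        m = group == g
        fair_thr = np.quantile(scores[m], 1 - rate)
        thr = (1 - mix) * 0.5 + mix * fair_thr
        pred[m] = (scores[m] > thr).astype(int)
    acc = (pred == y).mean()
    gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
    return acc, gap

for mix in (0.0, 0.5, 1.0):
    acc, gap = evaluate(mix)
    print(f"repair={mix:.1f}  accuracy={acc:.3f}  approval-rate gap={gap:.3f}")
```

Sweeping `mix` and reporting both numbers is one plausible way a tool could "illustrate any change in accuracy," leaving the final setting to the user.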

Our tool does not simply dictate what is fair. Rather, it assesses and corrects bias within the parameters set by its users, who ultimately need to define sensitive variables, error terms and false positive rates.

COPYRIGHT 2018 Asianet-Pakistan

Publication: Business Mirror (Makati City, Philippines)
Date: Nov. 12, 2018
