
How bias in AI happens — and what IT pros can do about it


AI bias in criminal sentencing

A distressing real-world example of this indirect bias reinforcement comes from criminal justice, where an AI sentencing tool called COMPAS is currently used in several U.S. states, Stewart said. The system takes a defendant's profile and generates a risk score estimating how likely the defendant is to reoffend and pose a risk to the community. Judges then take these risk scores into account when sentencing.

A study of several thousand verdicts associated with the system found that African-American defendants were 77% more likely than white defendants to be incorrectly classified as high risk. Conversely, white defendants were 40% more likely to be misclassified as low risk and then go on to reoffend.

Even though race is not part of the underlying data set, COMPAS' predictions are highly correlated with it because weight is given to nonsensitive attributes that act as proxies for race, such as geography and education level.
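This kind of proxy effect can be surfaced before any model is trained by measuring how strongly each supposedly neutral feature is associated with the sensitive attribute. The sketch below is illustrative only: the file name and column names (defendants.csv, zip_code, education_level, prior_arrests, race) are hypothetical and are not drawn from the COMPAS data set.

```python
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Association strength between two categorical columns (0 = none, 1 = perfect)."""
    table = pd.crosstab(x, y)
    chi2, _, _, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    r, k = table.shape
    return (chi2 / (n * (min(r, k) - 1))) ** 0.5

# Hypothetical data set: 'race' is excluded from training, but the remaining
# "nonsensitive" columns may still encode it.
df = pd.read_csv("defendants.csv")
for proxy in ["zip_code", "education_level", "prior_arrests"]:
    print(f"{proxy}: association with race = {cramers_v(df[proxy], df['race']):.2f}")
```

A feature that scores near 1 here is effectively a stand-in for the sensitive attribute, even if the attribute itself never appears in the training data.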

“You’re kind of in a Catch-22,” Stewart said. “If you omit all of the sensitive attributes, yes, you’re eliminating direct bias, but you’re reintroducing and reinforcing indirect bias. And if you have separate classifiers for each of the sensitive attributes, then you’re reintroducing direct bias.”

One of the best ways IT pros can combat this, Stewart said, is to decide at the outset what the threshold of acceptable differentiation should be and then measure each attribute against it. If an attribute exceeds the threshold, it is excluded from the model; if it falls under the limit, it is included.
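One way to put that advice into practice is to score each candidate feature by how much it reveals about the sensitive attribute and drop anything above the agreed cutoff. The sketch below assumes a hypothetical cutoff of 0.2 and hypothetical column names; the specific measure (normalized mutual information) is one reasonable choice, not something prescribed here.

```python
import pandas as pd
from sklearn.metrics import normalized_mutual_info_score

THRESHOLD = 0.2  # hypothetical "acceptable differentiation" cutoff, agreed on up front

df = pd.read_csv("defendants.csv")              # hypothetical data set
sensitive = df["race"]                          # sensitive attribute, not a model input
candidates = ["zip_code", "education_level", "age_bracket", "prior_arrests"]

kept, dropped = [], []
for col in candidates:
    # How much does this feature reveal about the sensitive attribute?
    score = normalized_mutual_info_score(sensitive, df[col])
    (dropped if score > THRESHOLD else kept).append(col)

print("included in model:", kept)
print("excluded as likely proxies:", dropped)
```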

“You should use those thresholds, those measures of fairness, as constraints on the training process itself,” Stewart said.
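In code, "fairness as a constraint" usually means the training objective itself is penalized or constrained when group-level outcomes diverge. The sketch below is a minimal, from-scratch illustration: a logistic regression whose loss includes a demographic-parity penalty. The penalty form, its weight (lam), and the other hyperparameters are illustrative assumptions, not Stewart's method; libraries such as Fairlearn offer packaged versions of this constrained-training approach.

```python
import numpy as np

def train_fair_logreg(X, y, group, lam=5.0, lr=0.5, epochs=2000):
    """Logistic regression with a demographic-parity penalty: the loss grows
    when the average predicted score differs between the two groups."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
        grad_loss = X.T @ (p - y) / len(y)        # standard log-loss gradient

        # Fairness constraint: squared gap between the groups' mean scores.
        g1, g0 = group == 1, group == 0
        gap = p[g1].mean() - p[g0].mean()
        dp = p * (1.0 - p)                        # derivative of the sigmoid
        dgap = (X[g1] * dp[g1, None]).mean(axis=0) - (X[g0] * dp[g0, None]).mean(axis=0)

        w -= lr * (grad_loss + 2.0 * lam * gap * dgap)
    return w

# Hypothetical usage: X holds only nonsensitive features; `group` is the
# sensitive attribute, used solely to enforce the constraint during training.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
group = rng.integers(0, 2, size=1000)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=1000) > 0).astype(float)
weights = train_fair_logreg(X, y, group)
```

The design choice here is that the sensitive attribute is never fed to the model as an input; it is only consulted during training to keep the model's average scores from drifting apart across groups.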

If you are creating an AI system that is going to “materially impact someone’s life,” you also need to have a human in the loop who understands why decisions are being made, he added.