
Keeping AI Fair And Square


Knowing when to trust a model’s predictions is not always easy for workers who use machine-learning models to help them make decisions.

One method sometimes used is selective regression, in which the model estimates its confidence for each prediction and rejects a prediction when that confidence is too low. A human can then review those cases, gather additional information, and decide on each one manually.
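The abstain-or-accept step can be pictured with a short sketch. The Python below is only an illustration, not the researchers’ setup: it uses the width of a quantile-based prediction interval as a stand-in confidence score and routes wide-interval (low-confidence) cases to a human. The synthetic data, gradient-boosting models, and 70th-percentile threshold are all assumptions.

```python
# Minimal sketch of selective regression with a confidence threshold.
# The data, models, and threshold are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X[:, 0] + 0.5 * rng.normal(size=1000)

# Quantile regressors give a rough per-sample prediction interval.
lo = GradientBoostingRegressor(loss="quantile", alpha=0.1).fit(X, y)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.9).fit(X, y)
point = GradientBoostingRegressor().fit(X, y)

X_new = rng.normal(size=(10, 5))
width = hi.predict(X_new) - lo.predict(X_new)   # wider interval = less confident
threshold = np.quantile(width, 0.7)             # abstain on the widest 30%

for w, pred in zip(width, point.predict(X_new)):
    if w <= threshold:
        print(f"accept prediction {pred:.2f}")
    else:
        print("abstain: route this case to a human reviewer")
```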

But researchers at MIT and the MIT-IBM Watson AI Lab have found that, for groups of people who are underrepresented in a dataset, the method can have the opposite effect. With selective regression, the likelihood of a correct prediction should rise as the model’s confidence rises, yet this does not necessarily hold for every subgroup.
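One way to see the problem is to track error against coverage separately for each subgroup: if the overall error falls as the model abstains more but some subgroup’s error does not, the model is becoming more confident without becoming more correct for that group. The helper below is a hedged sketch of that diagnostic; the arrays `y_true`, `y_pred`, `confidence`, and `group` are assumed to come from a selective-regression model such as the one sketched above, and applying the coverage cut within each group is a simplification.

```python
# Illustrative per-subgroup risk-versus-coverage check (assumed setup).
# `confidence`: higher means more confident (e.g., the negative of an interval width).
import numpy as np

def risk_at_coverage(y_true, y_pred, confidence, coverage):
    """Mean squared error on the most-confident `coverage` fraction of samples."""
    k = max(1, int(round(coverage * len(y_true))))
    keep = np.argsort(-confidence)[:k]          # most confident samples first
    return float(np.mean((y_true[keep] - y_pred[keep]) ** 2))

def per_group_risks(y_true, y_pred, confidence, group,
                    coverages=(1.0, 0.8, 0.6, 0.4)):
    """Risk-versus-coverage curve computed separately for each subgroup."""
    curves = {}
    for g in np.unique(group):
        m = group == g
        curves[g] = [risk_at_coverage(y_true[m], y_pred[m], confidence[m], c)
                     for c in coverages]
    return curves

# A subgroup whose curve fails to decrease as coverage shrinks is one the
# model is confident about without actually being more accurate on.
```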


“Ultimately, this is about being more intelligent about which samples you hand off to a human to deal with. Rather than just minimizing some broad error rate for the model, we want to make sure the error rate across groups is taken into account in a smart way,” says Greg Wornell, the senior MIT author and a member of the MIT-IBM Watson AI Lab.

After identifying the issue, the MIT researchers created two algorithms to address it, and they demonstrate on real-world datasets that the algorithms reduce the performance disparities that affect underrepresented subgroups. Their goal was to guarantee that, as selective regression lowers the model’s overall error rate, the error rate for every subgroup declines as well. They call this fairness criterion monotonic selective risk.
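In loose notation (assumed here rather than quoted from the paper), the criterion can be written as follows, where R_g(c) denotes the error on subgroup g when the model answers only on its most confident fraction c of inputs:

```latex
% Hedged formalization; the notation is an assumption, not the paper's exact statement.
% R_g(c): risk (e.g., mean squared error) on subgroup g when the model answers
% only on the fraction c of inputs where it is most confident.
\[
  \text{Monotonic selective risk:}\qquad
  c' \le c \;\Longrightarrow\; R_g(c') \le R_g(c)
  \quad \text{for every subgroup } g .
\]
```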

“It was challenging to come up with the right notion of fairness for this particular problem. But by enforcing this criteria, monotonic selective risk, we can make sure the model performance is actually getting better across all subgroups when you reduce the coverage,” says co-lead author Abhin Shah, an EECS graduate student.

To meet this fairness criterion, the team created two neural-network algorithms. One approach ensures that the features the model uses to make predictions contain all the information about the sensitive attributes in the dataset, such as race and sex. Sensitive attributes are characteristics that, often because of laws or organizational policies, may not be used in decisions. The second approach uses a calibration technique to guarantee that the model makes the same prediction for an input regardless of whether any sensitive attributes are added to that input.
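As a loose illustration of the second idea (not the paper’s construction), the sketch below compares a model’s predictions with and without a sensitive attribute appended to the input; if appending the attribute changes the predictions, the original features did not already carry that information. The synthetic data, Ridge models, and 0/1 group label are all assumptions.

```python
# Hedged sketch of the prediction-invariance idea behind the second approach.
# Everything here is an assumed, simplified stand-in for the paper's method.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))               # non-sensitive features
s = rng.integers(0, 2, size=500)            # a sensitive attribute (e.g., a group label)
y = X @ np.array([1.0, -0.5, 0.3, 0.0]) + 0.1 * rng.normal(size=500)

base = Ridge().fit(X, y)                               # trained on features only
augmented = Ridge().fit(np.column_stack([X, s]), y)    # features + sensitive attribute

shift = np.abs(base.predict(X) - augmented.predict(np.column_stack([X, s])))

# A large shift means the sensitive attribute carries predictive information the
# original features lack, the kind of discrepancy the calibration-style
# criterion is meant to rule out.
print("max prediction shift:", float(shift.max()))
```

In the researchers’ approach this consistency is guaranteed by the method itself rather than merely measured afterward; the snippet above only checks the size of the gap.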

When they applied their algorithms on top of a standard machine-learning technique for selective regression, the researchers reduced the error rates for the minority subgroups in each dataset, lessening the disparities without significantly affecting the overall error rate.

