How bias crept into AI-powered technologies

Researchers have found that hate speech detectors are biased against African-American Vernacular English and that automated hiring decisions tend to favour upholding the status quo.

October 14, 2020 11:30 am | Updated 01:04 pm IST



Researchers at New York University (NYU) have identified how cultural stereotypes found their way into artificial intelligence (AI) models in the early years of their development.

The team’s findings help explain the factors that influence a search engine’s results page and other AI-powered tools, including translation systems, personal assistants, and resume screening software.

In recent years, advances in applied language understanding technology have primarily been driven by the use of language representation models that are trained by exposing them to huge amounts of internet text.

These models not only learn the language during training; they also pick up ideas about how the world works from what people write. This makes for systems that perform well on typical AI benchmarks, but it also causes problems, NYU said in a study presented at the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP).
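For a concrete sense of what such a model does, the sketch below is a minimal illustration, not part of the study: it assumes the open-source Hugging Face transformers library and uses bert-base-uncased as a representative masked language model (not necessarily one of the three the researchers examined). The model ranks candidate words for a blanked-out position by how typical they are of the text it was trained on, which is exactly how patterns from internet writing surface in its predictions.

```python
# Minimal illustration (assumption: Hugging Face `transformers` is installed;
# bert-base-uncased stands in for a generic masked language model).
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# The model scores candidate words for the [MASK] position by how typical
# they are of the text it was trained on.
for prediction in unmasker("The doctor said [MASK] would see the patient now."):
    print(f"{prediction['token_str']:>10}  {prediction['score']:.3f}")
```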

Models acquire the social biases reflected in their training data. This can be dangerous when they are used for decision making, especially when they are asked to act on a piece of text that describes people of colour or any other social group that faces widespread stereotyping.


To measure the extent of social biases in AI-powered models, the researchers asked a team of writers to note down sentences that express a stereotypical view of a specified social group, along with incongruous 'anti-stereotypical' sentences that express the same view about a different social group.

Using these sentence pairs, they created a metric to measure bias in three widely used language representation models, and applied it to show that each of the three masked language models (MLMs) readily rated the stereotyped sentences as more typical than the anti-stereotypical ones, demonstrating their knowledge and use of the stereotypes.
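The following is a minimal sketch of that kind of comparison, not the study's exact metric. Under the assumption of the Hugging Face transformers library and bert-base-uncased as the model, it scores each sentence of a pair by masking one token at a time and summing the log-probability the model assigns to the true token (a pseudo-log-likelihood), then checks which sentence of the pair the model treats as more typical. The example sentence pair is illustrative, not drawn from the study's dataset.

```python
# Sketch of a pairwise bias check with a masked language model.
# Assumptions: `transformers` and `torch` installed; bert-base-uncased
# stands in for the MLMs compared in the study.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "bert-base-uncased"  # illustrative choice of MLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum of log P(token | rest of sentence), masking each token in turn."""
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    # Positions 0 and -1 are the [CLS] and [SEP] special tokens; skip them.
    for i in range(1, len(input_ids) - 1):
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[input_ids[i]].item()
    return total

# Illustrative pair: identical wording except for the group reference.
sentence_a = "The nurse said she was tired."
sentence_b = "The nurse said he was tired."

if pseudo_log_likelihood(sentence_a) > pseudo_log_likelihood(sentence_b):
    print("The model treats the first sentence as more typical.")
else:
    print("The model treats the second sentence as more typical.")
```

The higher-scoring sentence is the one the model finds more plausible; across many such pairs, a consistent preference for stereotyped wording is what the researchers' metric is designed to surface.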

The state-of-the-art model among the three, the one that does best on typical applied benchmarks, also demonstrated the most extensive use of stereotypes.
