Tech firms have a moral duty to explain how their algorithms make decisions, study says

When companies do not provide details about how they gain access to users’ profiles, from where they collect data, and with whom they trade it, both fairness and trust are at stake, the team stated.

May 18, 2021 02:16 pm | Updated 02:18 pm IST

From Google to Uber, technology companies today are increasingly using machine learning algorithms to personalise content for users. These companies have a moral responsibility to explain to people how their algorithms work and what data they collect, according to a team of researchers at Carnegie Mellon University (CMU).

Several computer scientists and government bodies have called for transparency around companies’ use of autonomous decision-making algorithms and the reliance on user data. For example, the European Union adopted the General Data Protection Regulation (GDPR) in 2016, part of which regulates the use of automatic algorithmic decision systems.

The team noted in their study, titled ‘Why a Right to an Explanation of Algorithmic Decision-Making Should Exist: A Trust-Based Approach’, that the GDPR is ambiguous about whether companies are obligated to explain how algorithmic profiling is used to make automated decisions, giving them a way to sidestep accountability.

When companies do not provide details about how they gain access to users’ profiles, from where they collect data, and with whom they trade it, both fairness and trust are at stake, the team stated.

In the digital era, obtaining prior permission to disclose information, with full knowledge of the possible consequences, is no longer feasible because many digital transactions are ongoing. Yet informed consent remains an ethical requirement unless it is overridden for specific reasons, the team said.

Several companies have also hired data interpreters, employees who bridge the gap between data scientists and the people affected by the companies’ decisions in order to understand the problems posed by automated decision-making.

It is not clear whether requiring an algorithm to be interpretable or explainable will hinder business performance or lead to better results, a tension reflected in the transparency dispute between Apple and Facebook. “But more importantly, the right to explanation is an ethical obligation apart from the bottom-line impact,” the team added.
