Explained | What is Apple’s new child safety feature and why is it criticised?

Apple’s new child safety feature is a litmus test for the privacy flag-bearer

August 13, 2021 06:03 pm | Updated November 27, 2021 04:08 pm IST

Even though Apple’s intention to combat child pornography is laudable, the company’s latest feature has come under strong criticism because it could compromise the iPhone maker’s end-to-end encryption system.

On August 5, Apple announced a new feature to limit the spread of sexually explicit images involving children. It will soon be introduced in the iMessage app, in iOS and iPadOS, and in Siri. The tech giant notes that the feature will protect “children from predators”, and that it was developed in collaboration with child safety experts.

The Cupertino-based company noted that its sensitive image-limiting feature will help law enforcement agencies in criminal investigations. For a company that famously stood up to the FBI’s demand in 2016 to unlock a shooter’s iPhone, this is a big move.

Several experts and advocacy groups say that Apple’s new feature could potentially become a backdoor channel for government surveillance.

What is this feature?

Apple’s child protection feature is an on-device tool that will warn children and their parents whenever a child receives or sends sexually explicit images. The machine learning (ML)-based tool will be deployed in the iMessage app to scan photos and determine whether they are sexually explicit. The company noted that other private communication in the app will not be read by its algorithm.

Once a picture is identified as sensitive, the tool will blur it and warn the child about the content. As an additional layer of precaution, the child will also be told that their parents will get a text if they view the image. This feature can be switched on or off by parents.

How does the photo-scanning system work?

In the U.S., child pornographic content is classified as Child Sexual Abuse Material (CSAM) and is reported to the National Center for Missing & Exploited Children (NCMEC), which acts as the country’s reporting centre for such images. NCMEC works with law enforcement agencies in the U.S., and notes that sexually explicit images are shared on Internet platforms people use every day.

To limit CSAM content on its platform, Apple says it will scan photos on a user’s device and cross-reference them with NCMEC’s database. The tech giant will use a hashing technology in iOS and iPadOS to transform each image into a unique number. This process ensures that visually identical images produce the same hash even when they have been cropped, resized or colour-converted.
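Apple has not published the details of its hashing algorithm, but the general idea of a perceptual hash can be sketched in a few lines of Python. The example below uses a toy “average hash” and a made-up set of flagged fingerprints standing in for NCMEC’s database; it is an illustration of the concept, not Apple’s actual system.

```python
# A toy perceptual "average hash" -- an illustration of the idea,
# not Apple's actual algorithm. Requires the Pillow imaging library.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink the image to an 8x8 grayscale thumbnail and record, for each
    pixel, whether it is brighter than the average. Resizing or converting
    the colours of the original barely changes the resulting 64-bit number."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

# A made-up set of flagged fingerprints standing in for NCMEC's database.
FLAGGED_HASHES = {0x81C3E7FF00FF183C}

def matches_database(path: str) -> bool:
    return average_hash(path) in FLAGGED_HASHES
```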

Then, a cryptographic technique called private set intersection (PSI) powers the matching process without allowing Apple to view the contents of the image. But once a particular threshold for the number of CSAM images on a phone is breached, Apple will manually review the flagged pictures, disable the user’s account and send a report to NCMEC. The threshold is maintained to ensure that accounts are not incorrectly flagged.
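The cryptography behind PSI is beyond a short example, but the threshold logic described above can be sketched plainly: no human review happens until the number of matched photos crosses a limit. The threshold value below is a placeholder, not a figure Apple has published.

```python
# Simplified threshold logic. In the real design, private set intersection
# keeps individual match results hidden from Apple until the threshold is
# crossed; this sketch only shows the counting step.
MATCH_THRESHOLD = 30  # placeholder value, not a figure from Apple

def should_escalate(match_flags: list[bool]) -> bool:
    """match_flags[i] is True if photo i matched the CSAM hash database.
    Only when enough photos match is the account flagged for manual
    review and a report sent to NCMEC."""
    return sum(match_flags) >= MATCH_THRESHOLD

print(should_escalate([True] * 5 + [False] * 100))   # False: below the threshold
print(should_escalate([True] * 40 + [False] * 100))  # True: escalate for review
```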

Why is it being criticised?

Apple’s intention to combat child pornography may be laudable, but critics argue the feature could compromise the iPhone maker’s end-to-end encryption system.

Digital rights group Electronic Frontier Foundation (EFF) notes that “even a well-intentioned effort to build such a system will break key promises of the messenger’s encryption itself and open the door to broader abuses.”

The EFF points out that it will be difficult to audit how Apple’s ML tags an image as sexually explicit, since algorithms operating without human intervention routinely misclassify content. Another area of concern is the client-side scanning used in this process, which will look through a message, check it against a database of hashes, and only then send it. So, if a parent switches on Apple’s new feature, every message their child sends can effectively be inspected by a party other than the sender and recipient before it is sent.
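In plain terms, client-side scanning means the check runs on the device before the message is encrypted and handed to the network. The order of operations might look roughly like the sketch below; the function names and the parental-alert step are hypothetical stand-ins, not Apple’s API.

```python
# Hypothetical order of operations for client-side scanning.
# Names are illustrative; this is not Apple's implementation.

def toy_hash(data: bytes) -> int:
    return sum(data) % 2**16               # stand-in for a perceptual hash

def send_message(image_bytes: bytes, flagged_hashes: set[int], notify_parent):
    fingerprint = toy_hash(image_bytes)    # 1. the scan happens on the device
    if fingerprint in flagged_hashes:
        notify_parent(fingerprint)         # 2. which may trigger an alert
    encrypt_and_send(image_bytes)          # 3. before the message is sent

def encrypt_and_send(data: bytes) -> None:
    print(f"sending {len(data)} encrypted bytes")

send_message(b"holiday photo", set(), lambda h: print("parent alerted:", h))
```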

This means other government agencies could also start asking for access. Apple says it will not expand the feature at the request of any government.

“But even if you believe Apple won’t allow these tools to be misused there’s still a lot to be concerned about,” Matthew Green, a professor at Johns Hopkins University, tweeted. “These systems rely on a database of “problematic media hashes” that you, as a consumer, can’t review.”

Green also raises the problem of hash “collisions”, where two different files produce the same hash: “Imagine someone sends you a perfectly harmless political media file that you share with a friend. But that file shares a hash with some known child porn file?”
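A collision simply means two different files produce the same hash, so a system that compares only hashes cannot tell them apart. With a deliberately weak toy hash the effect is easy to demonstrate; real perceptual hashes make accidental collisions rarer, but the matcher still sees nothing except the fingerprint.

```python
# A deliberately weak toy hash with an obvious collision: two different
# inputs, one fingerprint. A matcher that sees only the fingerprint
# cannot distinguish them.
def toy_hash(data: bytes) -> int:
    return sum(data) % 256   # weak on purpose, for illustration only

file_a = b"ab"   # an innocuous file
file_b = b"ba"   # a different file that happens to share the same hash

assert file_a != file_b
assert toy_hash(file_a) == toy_hash(file_b)
print(toy_hash(file_a), toy_hash(file_b))  # 195 195
```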
