Explained | Will facial recognition AI tools help detect telecom fraud?

How does the ASTR tool locate fake SIM cards? What are the concerns regarding using AI-based facial recognition? Does India have a law governing such technology? What are some of the ethical problems with FRT?

May 28, 2023 02:47 am | Updated 11:47 pm IST

The Department of Telecommunications aims “to use facial recognition-based indigenous and NextGen platform called Artificial Intelligence and Facial Recognition powered Solution for Telecom SIM Subscriber Verification to analyse the whole subscriber base of all telecom service providers”. | Photo Credit: Getty Images/iStockphoto

The story so far: To weed out rampant cases of fraudulently procured SIM cards being used across the country for financial and other cyber scams, the Department of Telecommunications (DoT) has begun using an artificial intelligence-based facial recognition tool named ‘Artificial Intelligence and Facial Recognition powered Solution for Telecom SIM Subscriber Verification’ (or ASTR, to be pronounced astra, the Hindi word for weapon). ASTR has already been used in multiple States including Haryana, Gujarat, Maharashtra, Tamil Nadu, and Kerala. Notably, while the DoT has put out success stories of fake mobile connection busts using ASTR this year, there is no personal data protection regime or AI-specific regulation in India yet.

Why is artificial intelligence being used to detect telecom frauds?

On May 25, the Punjab police said it had blocked 1.8 lakh SIM cards allegedly activated using fake identities, out of which 500 connections were obtained using one person’s photo but different accompanying KYC (Know Your Customer) parameters like names, address proofs, and so on. Haryana’s Nuh district (formerly Mewat) was described as the ‘new Jamtara’ (the Jharkhand region infamous for such frauds) when the police arrested 66 accused for allegedly duping around 28,000 people across the country to the tune of ₹100 crore using 99 fake SIM cards. Meanwhile, Karnataka lost ₹363 crore to cyber fraud in 2022, which averages very close to ₹1 crore per day. As per the DoT, a large proportion of financial cyber frauds involve the use of fake mobile connections that leverage anonymity.


India is the second-largest telecom ecosystem in the world, with about 117 crore subscribers. Since manually identifying and comparing the vast number of subscriber verification documents, such as photographs and identity proofs, would be a massive exercise, the DoT says it aims to use the facial recognition-based “indigenous and NextGen platform” ASTR to analyse the whole subscriber base of all telecom service providers (TSPs).

Besides, it points out that the currently available conventional text-based analysis is limited to finding similarities between proofs of identity or address and verifying whether such information is accurate; it cannot trawl photographic data to detect similar faces.

What is ASTR and how will it detect fake SIM connections?

Facial recognition is an algorithm-based technology which creates a digital map of the face by identifying and mapping an individual’s facial features, which it then matches against the database to which it has access.

How does facial recognition technology work?
Step 1: Detection: Facial recognition technology (FRT) relies on the use of algorithms to detect the presence of a face in an image, video, or real-time footage.
Step 2: Analysis: FRT then analyses the image of the face, mapping the facial geometry and the individual’s facial features to create a faceprint (much like a fingerprint). Facial feature extraction converts distinctive features of individual faces into mathematical representations that serve as unique identifiers.
Step 3: Recognition: At this stage, the system automatically cross-references a person’s facial features with a pre-existing database of images called a gallery dataset.
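The recognition step above can be sketched in code. This is a minimal illustration, not ASTR’s actual implementation: it assumes the detection and analysis steps have already produced numeric faceprints, uses cosine similarity as a stand-in for whatever comparison metric a real system employs, and reuses the 97.5% figure the article attributes to ASTR purely as an example threshold.

```python
import math

# Hypothetical pre-computed faceprints: in a real FRT system these vectors
# come from a landmark or deep-learning model run on each detected face.
GALLERY = {
    "subscriber_001": [0.91, 0.10, 0.33, 0.05],
    "subscriber_002": [0.12, 0.88, 0.41, 0.30],
}

def cosine_similarity(a, b):
    """Compare two faceprints; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def recognise(probe, gallery, threshold=0.975):
    """Cross-reference a probe faceprint against the gallery dataset.

    Returns the best-matching identity only if its similarity clears
    the threshold; otherwise (None, best_score).
    """
    best_id, best_score = None, 0.0
    for identity, faceprint in gallery.items():
        score = cosine_similarity(probe, faceprint)
        if score > best_score:
            best_id, best_score = identity, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

probe = [0.90, 0.11, 0.34, 0.05]  # faceprint of a newly submitted photo
match, score = recognise(probe, GALLERY)
```

A probe that closely resembles a gallery entry is matched to it; one that falls below the threshold is reported as unrecognised rather than forced onto the nearest face.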

In 2012, the DoT directed all TSPs to share their subscriber databases, including pictures of users, with the department. ASTR analyses the subscriber images provided by TSPs and puts them into groups of similar-looking faces using facial recognition technology (FRT). In the next step, it compares the textual subscriber details associated with the grouped images, using a string-matching concept called “fuzzy logic” to identify approximately similar names or other KYC information and group them. The last step is determining whether the same face (person) has acquired SIMs under multiple names, dates of birth, bank accounts, address proofs, and other KYC documents. Alternatively, ASTR also identifies whether more than eight SIM connections have been obtained in one person’s name, which is not allowed as per DoT rules. ASTR’s facial recognition technology detects facial features by mapping 68 features of the frontal face, and it characterises two faces as similar if there is a 97.5% match.
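The fuzzy text-matching and SIM-count checks described above can be illustrated with Python’s standard library. This is a sketch under stated assumptions: `SequenceMatcher` stands in for whatever fuzzy-matching engine ASTR actually uses, the KYC records are invented, and the 0.85 name-similarity threshold is arbitrary.

```python
from difflib import SequenceMatcher

# Invented KYC records: (SIM identifier, subscriber name as written on the form).
RECORDS = [
    ("sim-01", "Ramesh Kumar"),
    ("sim-02", "Ramesh Kumaar"),   # near-duplicate spelling
    ("sim-03", "Rameshh Kumar"),   # near-duplicate spelling
    ("sim-04", "Anita Sharma"),
]

def similar(a, b, threshold=0.85):
    """Fuzzy string match on two names, case-insensitively."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def group_by_name(records):
    """Greedily group SIMs whose subscriber names are approximately equal."""
    groups = []  # list of (canonical name, [SIM ids])
    for sim, name in records:
        for group in groups:
            if similar(group[0], name):
                group[1].append(sim)
                break
        else:
            groups.append((name, [sim]))
    return groups

MAX_SIMS_PER_PERSON = 8  # the cap mentioned in the article

def flag_violations(groups):
    """Names holding more SIMs than DoT rules allow."""
    return [name for name, sims in groups if len(sims) > MAX_SIMS_PER_PERSON]

groups = group_by_name(RECORDS)
```

Here the three near-duplicate spellings of one name collapse into a single group, so all three SIMs would be reviewed against one person, while the unrelated name stays separate.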

What are the concerns associated with the use of facial recognition AI?

The use of FRT raises the issue of misidentification due to the inaccuracy of the technology. An FRT algorithm trained on specific datasets has limits to its knowledge: it might make technical errors due to occlusion (a partial or complete obstruction of the face in the image), poor lighting, facial expression, ageing, and so on. Errors in FRT also stem from the underrepresentation of certain groups of people in the training datasets. Studies of FRT systems in India indicate a disparity in error rates between the identification of Indian men and Indian women, and extensive research globally has shown that accuracy rates fall starkly along lines of race, gender, skin colour, and so on. This, in turn, can result in a false positive, where a person is misidentified as someone else, or a false negative, where a person is not verified as themselves.

Three concerns around using facial recognition AI
1. Inaccuracy (technical): Technical errors due to occlusion (a partial or complete obstruction of the image), bad lighting, facial expression, ageing and so on
2. Misidentification and Underrepresentation: Errors in FRT also occur due to the underrepresentation of certain groups of people in the datasets it uses for training. Studies on FRT systems in India indicate a disparity in error rates based on the identification of Indian men and Indian women. Globally, research has revealed that its accuracy rates fall starkly based on race, gender, skin colour and so on.
3. Privacy and Consent: Individuals may not have consented to their facial data being used (or may not even be aware of it), and may have no control over the extent of its processing by public and private players.

Other concerns about FRT are ethical, relating to privacy, consent, and mass surveillance. The Supreme Court, in the Puttaswamy case, recognised the right to informational autonomy as an important part of the right to privacy enshrined in Article 21. FRT systems consume and compute vast amounts of biometric facial data, both to train and to operate. In many cases, an individual may not be in control of the processing of their data, or may not even be aware of it. This could result, and has resulted, in wrongful arrests and exclusion from social security schemes.


In the case of ASTR, digital policy-watcher and news organisation Medianama pointed out in its findings that there was no public notification issued about the use of ASTR on user data provided to TSPs at the time of obtaining connections. An RTI filed by the publication with the DoT did not reveal any information about how ASTR safeguards data or for how long it retains customer data. The DoT also did not provide a copy of the contract signed with the developer of the technology. This raises questions about privacy and consent even if ASTR is being used on the principle of deemed consent mentioned in the now-withdrawn data protection bill.

What is the legal framework governing such technology in India?

In several jurisdictions around the world where FRT is used in governance or by private players, those deploying it must adhere to the local personal data protection regime where AI-specific regulation is not in place.

For instance, in the European Union, where a specific AI Act is currently being drafted, FRT tools have to adhere to the strict privacy and security rules in the General Data Protection Regulation of 2018. This is also the case in Canada.

In India, there is no data protection law in place: the government withdrew the Personal Data Protection Bill, 2019 last year after a Joint Parliamentary Committee recommended extensive changes. The Centre this year came up with a new draft of the Bill, but it has not been tabled in Parliament yet. India also does not have any FRT-specific regulation.

While there is a regulatory vacuum, according to the Internet Freedom Foundation’s Project Panoptic, which tracks the spread of FRT in India, there are more than 130 government-authorised FRT projects in the country, spanning use cases such as authentication for access to official schemes, airport check-in (the DigiYatra project), real-time use on suspects by various State police forces, identity authentication for access to education documents (used by the CBSE), and so on.

Meanwhile, NITI Aayog has published several papers enunciating India’s national strategy for harnessing the potential of AI responsibly. It says that the use of FRT should be voluntary and with due consent, and that at no time should FRT become mandatory. It should be limited to instances where both public interest and constitutional morality are in sync; enhanced efficiency of automation should not, per se, be deemed enough to justify its usage. It remains to be seen whether ASTR aligns with these principles and with the Puttaswamy judgment of 2017.
