Swooping down on algorithms

China’s draft rules on regulating recommendation algorithms address pressing issues but have a flavour of authoritarianism

September 22, 2021 12:15 am | Updated 12:15 am IST


China has pursued aggressive measures in its tech sector in the past few months, ranging from strong-arming IPOs to limiting gaming hours for children. A host of legislative instruments have recently been adopted or are in the works, including the Personal Information Protection Law, the Cybersecurity Law, and the draft Internet Information Service Algorithm Recommendation Management Provisions.

Providing user autonomy

The Management Provisions, released by the Cyberspace Administration of China, are possibly the most interesting and groundbreaking interventions among the new set of legislative instruments. The provisions lay down the processes and mandates for regulating recommendation algorithms, which are ubiquitous in e-commerce platforms, social media feeds and gig work platforms. They attempt to address concerns of individuals and society such as user autonomy, economic harm, discrimination, and the prevalence of false information.

Algorithmically curated feeds dominate most of our interactions on the Internet. For instance, this article could reach you on a social media platform like Twitter thanks to a recommendation algorithm. Such an algorithm helps a user navigate information overload by presenting the content it deems most relevant to her. These algorithms learn from user demographics, behavioural patterns, the user's location, the interests of other users who access similar content, and so on, to decide what to deliver. This limits user autonomy, as the user has little say in what content she is shown. Algorithms also tend to carry inherent biases, learned from their modelling or the data they encounter, which often leads to discriminatory outcomes for users.
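As a rough illustration of the mechanism described above, the sketch below ranks content by combining a user's own behavioural interests, her location, and the interests of similar users. It is a minimal, hypothetical example: the class names, signals and weights are assumptions made for illustration and do not reflect any platform's actual system.

```python
# Illustrative sketch only (not any platform's real recommender): rank content
# by combining the kinds of signals described above. All names and weights
# here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class UserProfile:
    interests: set[str]                                  # inferred from behavioural patterns
    location: str                                        # location signal
    similar_user_interests: set[str] = field(default_factory=set)  # collaborative signal


@dataclass
class ContentItem:
    title: str
    tags: set[str]
    region: str


def score(item: ContentItem, user: UserProfile) -> float:
    """Higher score means more likely to be shown; the user never sees these weights."""
    s = 2.0 * len(item.tags & user.interests)                 # the user's own behaviour
    s += 1.0 * len(item.tags & user.similar_user_interests)   # what similar users engage with
    s += 0.5 if item.region == user.location else 0.0         # location match
    return s


def recommend(items: list[ContentItem], user: UserProfile, k: int = 3) -> list[ContentItem]:
    """Return the top-k items for this user, sorted by descending score."""
    return sorted(items, key=lambda it: score(it, user), reverse=True)[:k]


if __name__ == "__main__":
    user = UserProfile(interests={"tech", "policy"}, location="IN",
                       similar_user_interests={"privacy"})
    feed = [
        ContentItem("Gadget review", {"tech"}, "US"),
        ContentItem("Data bill explainer", {"policy", "privacy"}, "IN"),
        ContentItem("Celebrity gossip", {"entertainment"}, "IN"),
    ]
    for item in recommend(feed, user):
        print(item.title)
```

The point of the sketch is that the ranking is driven entirely by inferences the system makes about the user, which is precisely why the draft's push for user-visible controls matters.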

China is aiming to mandate that recommendation algorithm providers share control with users. The draft says users should be allowed to audit and change the user tags the algorithms employ to filter the content presented to them. Through this, the draft aims to limit classifications that the user finds objectionable, thereby allowing the user to choose what she is shown. This also has ripple effects in platformised gig work, where the gig worker can understand the basis on which gigs are presented to her. Additionally, Article 17 of the draft strikes specifically at labour reform at the algorithmic level, by requiring compliance with rules on working hours, minimum wages, and other labour laws.
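To make the tag-auditing idea concrete, here is a minimal sketch assuming a simple registry of inferred user tags. The draft does not prescribe any particular interface; the class and method names below are hypothetical and purely illustrative.

```python
# Hypothetical sketch of the user-facing control the draft envisages: a user can
# inspect the tags inferred about her and delete the ones she finds
# objectionable, so they no longer drive content filtering.
class UserTagRegistry:
    def __init__(self) -> None:
        self._tags: dict[str, set[str]] = {}   # user_id -> tags inferred by the algorithm

    def assign(self, user_id: str, tag: str) -> None:
        """Called by the platform when it infers a new tag for a user."""
        self._tags.setdefault(user_id, set()).add(tag)

    def audit(self, user_id: str) -> set[str]:
        """Let the user see every tag currently used to profile her."""
        return set(self._tags.get(user_id, set()))

    def remove(self, user_id: str, tag: str) -> None:
        """Let the user delete a tag she objects to; recommendations stop using it."""
        self._tags.get(user_id, set()).discard(tag)


if __name__ == "__main__":
    registry = UserTagRegistry()
    registry.assign("u42", "late-night-shopper")
    registry.assign("u42", "political-news")
    print(registry.audit("u42"))        # the user reviews her tags
    registry.remove("u42", "political-news")
    print(registry.audit("u42"))        # the objectionable tag is no longer applied
```

The design point is simple: any tag the user deletes stops feeding the filter, which is the kind of user control the draft appears to envisage.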

The draft places a clear emphasis on active intervention by recommendation algorithm providers to limit and prevent information disorder, indicating how China is attempting to crack down on mis-, dis- and malinformation. It has to be read with the clear overtone of the draft requiring recommendation algorithm providers to “uphold mainstream value orientations”, “vigorously disseminate positive energy”, and “advance the use of algorithms in the direction of good.” Evidently, this is China’s attempt at dissuading any disaffection towards the Party and remaining in tight control of the social narrative.

Lessons for the present

Regulating algorithms is unavoidable and necessary. The world is lagging in such initiatives, and China is hoping to lead the pack. The draft addresses pressing issues and entrenches some normative ideals that should be pursued globally. The regulatory mechanism institutionalises algorithmic audits and supervision, probably a first in the world. However, a distinct Chinese flavour of authoritarianism looms large over the draft rules. China’s record on liberty is less than desirable, and it is not the ideal candidate to set standards through law. It would be best for liberal democracies to steer clear of these overtures and stick to technically sound regulation that is free from the ills of censorship and social control.

It is high time for India to invest more in, and speed up, legislative action on the regulation of data, and to initiate a conversation around the regulation of algorithms. India should strive to achieve this without emulating China, where this draft only complements a host of other laws. India must act fast to resolve the legal and social ills of algorithmic decision-making. Policymakers should ensure that freedoms, rights and social security, and not rhetoric, inform policy changes.

Algorithms are as fundamental to the modern economy as engines were to the industrial economy. One-size-fits-all regulation of algorithms fails to take into account the dynamic nature of markets. An ideal regime would rest on goals-based legislation that lays down the normative standards algorithmic decision-making must adhere to, complemented by sectoral regulation that accounts for the complexities of individual markets.

Sapni G.K. is a Research Analyst with the Takshashila Institution, Bengaluru
