Protecting children in the age of AI

We are now living among history’s very first “AI” generation. From the Alexas they converse with, to their robot playmates, to the YouTube wormholes they disappear into, the children and adolescents of today are born into a world increasingly powered by virtual reality and artificial intelligence (AI).

Like all fundamental technological change, AI is not only changing what humans can do, it is shaping our behaviours, our preferences, our perceptions of the world and of ourselves. Older people still remember life before AI and the digital world — our references, anchors and pole stars pre-date the fourth Industrial Revolution. Not so for the millions of children and adolescents who were born into it. What does this mean for them, and for us — their parents and guardians?

Double imperatives: getting all children online and creating child-safe digital spaces

One of the most pressing concerns is that not everyone can tap into the opportunities offered by this transformation. According to UNICEF and the International Telecommunication Union (ITU), as many as two-thirds of the world’s children do not have access to the Internet at home. In India, the divide between the digital haves and have-nots was tragically underscored last year by the suicide of a young Delhi University undergraduate whose parents could not afford a laptop or smartphone at home. Unless we take rapid and concerted action to close this digital divide, AI will radically amplify societal inequalities among children of different races, socio-economic backgrounds, genders, and regions.

In addition to closing the digital divide, we need to better protect children and adolescents online. As I argued in a previous column, the expansion and deployment of AI is far outpacing our ability to understand its implications, especially its impact on children. No group is more vulnerable and more deserving of our special care, yet how does one childproof AI? How do we encourage and support the tremendous good AI can do for children’s growth and development, while simultaneously mitigating the harm? As parents and guardians, societies and governments, how do we fulfil our responsibility towards our young charges in this AI world, when we hardly understand it ourselves? And how do we equip children and young people with the knowledge, tools and awareness to protect themselves?

In the old-fashioned physical world, we evolved norms and standards to protect children. For instance, there are policies and protocols for a child travelling alone as an unaccompanied minor. A parent will not allow a six-year-old to pack a bag and suddenly take off on a bus or train for some unknown destination. And trips to the nearby playground or park come with warnings to not talk to strangers and with a caregiver’s eagle eye over the proceedings. Parents are understandably reluctant to let their children be photographed by the media, and in many countries, news outlets blur children’s faces to protect them. Where are these protections online?

The virtual world is full of unsupervised “vacations” and “playgrounds” — with other children and, potentially, less-than-scrupulous adults, sometimes posing anonymously as children. While online games with built-in chat, such as Fortnite: Battle Royale, to name one popular example, offer a space for children to socialise with their friends, multiple reports identify such virtual playgrounds as “honeypots” for child predators. Short of banning screen time entirely, parents are hard-pressed to keep tabs on just what their kids are doing online, and with whom. With homework now done online as well, this has become even more difficult.

Children’s right to freedom of attention

It does not help that the AI systems driving many video games and social networks are designed to keep children hooked, both through algorithms and gimmicks like “streaks”, “likes”, infinite scroll, etc. Even if this is an ancillary consequence of the underlying business model, the damage is done — children, from a tender age through adolescence, are becoming digitally addicted. Right when they need to be learning concentration skills, emotional and social intelligence, their attention is being sliced ever thinner, and their social interactions increasingly virtualised.

Similarly, right when children and youth are forming their initial views of the world, they are being sucked into virtual deep space, including the universe of fake news, conspiracy theories, hype, hubris, online bullying, hate speech and the like. With every click and scroll, AI is sorting them into tribes, and feeding them a steady diet of specially customised tribal cuisine. All this is thrown at our children just when they are starting to try to make sense of who they are and the world they live in; right when it is so important to help them understand and appreciate different perspectives, preferences, beliefs and customs, to build bridges of understanding and empathy and goodwill.

Data harvesting and algorithmic bias

Other insidious pitfalls also lie in the path of the Generation AI child. Today many AI toys come pre-programmed with their own personality and voice. They can offer playful and creative opportunities for children, with some even promoting enhanced literacy, social skills and language development. However, they also listen to and observe our children, soaking up their data with no framework to govern its use. Some of these AI toys even perform facial recognition of children and toddlers. Germany banned Cayla, an Internet-connected doll, because of concerns it could be hacked and used to spy on children. Yet most countries do not yet have the legal framework in place to ban such toys.

Finally, in the field of education, AI can be and is being used in fabulous ways to tailor learning materials and pedagogical approaches to the child’s needs — such as intelligent tutoring systems, tailored curriculum plans, and imaginative virtual reality instruction, offering rich and engaging interactive learning experiences that can improve educational outcomes. But algorithms can also both amplify existing problems with education systems and introduce new challenges — when the pandemic forced the cancellation of the usual examinations in the United Kingdom and by the International Baccalaureate board, for instance, the algorithms used as a fallback to assign grades cost thousands of students college admissions and scholarships. And unless the educational and performance data on children is kept confidential and anonymous, it can inadvertently typecast or brand children, harming their future opportunities.

Operationalising child rights and protections

So, how do we simultaneously close the digital divide, and safeguard children’s rights in the age of AI? How do we balance the tremendous good AI can do for children, while keeping their unique vulnerabilities topmost in our preoccupations, mitigating inadvertent harm and misuse?

The next phase of the fourth Industrial Revolution must include an overwhelming push to extend Internet access to all children. Governments, private sector, civil society, parents and children must push hard for this now, before AI further deepens the pre-existing inequalities and creates its own disparities. And on mitigating online harms, we need a multi-pronged action plan: we need legal and technological safeguards; we need greater awareness among parents, guardians and children on how AI works behind the scenes; we need tools, like trustworthy certification and rating systems, to enable sound choices on safe AI apps; we need to ban anonymous accounts; we need enforceable ethical principles of non-discrimination and fairness embedded in the policy and design of AI systems — we need “do no harm” risk assessments for all algorithms that interact with children or their data. In short, we need safe online spaces for children, without algorithmic manipulation and with restricted profiling and data collection. And we need online tools (and an online culture) that help prevent addiction, that promote attention-building skills, that expand children’s horizons, understanding and appreciation for diverse perspectives, and that build their social emotional learning capabilities.

The Convention on the Rights of the Child urges all public and private actors to act in the best interests of the child, across all their developmental activities and provision of services. In February, in a landmark decision, the UN Committee on the Rights of the Child adopted General Comment 25, on implementing the Convention on the Rights of the Child and fulfilling all children’s rights in the digital environment. This is an important first step on the long road ahead.

UNICEF’s Generation AI initiative is currently working with the World Economic Forum’s Centre for the Fourth Industrial Revolution and other stakeholders to realise the potential of AI for children in a safe and transparent way. UNESCO consulted with young people, among other stakeholders, in drafting the Recommendation on the Ethics of Artificial Intelligence that will be adopted by member states this year, and is working to enhance AI literacy through MOOCs. UNESCO’s Mahatma Gandhi Institute of Education for Peace and Sustainable Development came together this year with UNICEF, UNV, and the United Nations in India for a nation-wide consultation to identify the ethical concerns around AI that matter most to young people in India.

The Government of India has put in place strong policies to protect the rights and well-being of children, including a legislative framework that includes the Right to Education. Laws and policies to prevent a range of abuses and violence, such as the National Policy for Children (2013), can be extended to protect children in digital spaces.

But much more needs to be done, here in India and around the world. And in this interconnected world, the more we can agree upon multilaterally and by multi-stakeholder groups, the easier it may be to implement nationally and locally. Just as India proactively helped shape the Universal Declaration of Human Rights and gave the world the principle of Ahimsa, this great country could also galvanise the international community around ensuring an ethical AI for Generation AI.

Renata Dessallien is United Nations Resident Coordinator, India


May 16, 2021 | https://www.thehindu.com/opinion/lead/protecting-children-in-the-age-of-ai/article34361913.ece
