Madhumita Murgia on her book Code Dependent: Living in the Shadow of AI

Madhumita Murgia talks with Vidushi Marda, a lawyer and researcher, about her debut book Code Dependent: Living in the Shadow of AI

Published - May 27, 2024 10:14 am IST

Madhumita Murgia (right) with Vidushi Marda | Photo Credit: Special Arrangement

It cannot be denied that, as a species, human beings have become increasingly dependent on machines and automation. With the advent of Artificial Intelligence (AI), our reliance on technology has deepened further, resulting in a diminished use of our mental faculties.

In her debut book Code Dependent: Living in the Shadow of AI (Pan Macmillan), Madhumita Murgia delves into the many dangers that await if we are not mindful. In a recent discussion with Vidushi Marda, a lawyer and researcher, at Bangalore International Centre, Madhumita touched on how human beings could be exploited by this innovation if it is not closely monitored.

“As a biologist and immunologist working on vaccine design, I realised I didn’t make a very good scientist in practice,” laughs Madhumita. She adds, “I loved the idea of it, but the doing of the science was quite a lonely, closed-off endeavour.”

“So I decided to try something different by communicating complex topics, including science, and that’s how I ended up as a journalist at a magazine about technology, innovation, change and entrepreneurship. That was my introduction to the world of tech.”

“I was fascinated by complexity: explaining how something worked and studying its implications once it was functional. Very early on, AI was the culmination of those things.”

Madhumita says that even in 2013, AI was considered “a fringe topic” by scientists, who saw it as something that would never work.

Madhumita Murgia | Photo Credit: Special Arrangement

According to Madhumita, AI was already changing the present and she wanted to look at how cultures and communities were being affected.

“I began writing in 2021 and I finished in 2023, just as ChatGPT came out, making the stories even more urgent and relevant, and showing the impact of AI on daily human lives.”

To Vidushi Marda’s question about her reasoning for penning Code Dependent, Madhumita says, “Most media coverage, both in India and abroad, focuses on either tech or the business of the technology. I wanted to write something that looked at the stories of ordinary people and the unexpected, unintended consequences of how AI changed their lives.”

Excerpts from their conversation:

Vidushi Marda (VM): Tell us about content moderators and marginalised people in this narrative. How do they play a part in all this?

Madhumita Murgia (MM): There’s a misconception due to the term artificial intelligence: it gives you the idea that there’s an intelligence with an ability to self-learn, which isn’t the case. Even the most advanced AI systems still have to be trained by humans. Most of the time the training is done by thousands of workers in factories, usually in the developing world, including India.

They spend hours teaching these systems and while many actions are easy for humans to execute, they are quite complex for AI systems to recognise. I have travelled to many places to talk to these people about how AI changed their lives, prospects and opportunities.

VM: Content moderators don’t really know who they are working for; they don’t see where their work ends up and they can’t advocate for themselves. How do the dynamics play out when you are checking for content and reviewing data?

MM: I met with data workers and found that, apart from learning digital skills, they are unaware of the role they play in the development of AI. Content moderation involves people looking at some of the worst content on the internet and filtering it for us, and this includes everything from terrorism and bombings to child sexual abuse and acts of violence. Apart from searching this content out, they also have to train AI systems to do it.

This leaves a deep psychological impact on them. Not only are they training systems that are going to make them obsolete, but in the process they are giving themselves long-term post-traumatic stress disorder. And as we build more sophisticated AI systems, these moderators need to handle ever larger amounts of data.

VM: Where do you think AI has made the furthest strides?

MM: Healthcare has been the brightest spot for AI so far. For instance, the Adivasi community didn’t have access to primary health care centres, but now thanks to AI that gap has been bridged. An upcoming expedition to Everest will be testing out an AI system to diagnose particular chest conditions in the population living there.

AI has revolutionised people’s access to diagnosis, and the pros outweigh the cons. Two other areas where AI is going places are fundamental science and education.

VM: Tell us about facial recognition systems and their controversial history.

MM: Most people don’t realise their faces are everywhere, from social media sites to CCTV footage, and this global web of faces is being used to train AI systems. Though this is done in the name of security, facial recognition systems are, ironically, inaccurate at identifying female faces and darker skin tones. The error rate is far higher for these groups than for Caucasian or male faces, resulting in innocent people being hauled up for things they didn’t do.

VM: If a system doesn’t work, should we keep at it? On the other hand, 15 years later we have near-perfect facial recognition systems. What do you think of humans versus automation?

MM: I don’t think any technology is ever going to be 100% perfect, particularly AI, which is a statistical system that works on predictions. But people have a tendency to trust the output of the machine even as errors creep in. Without guardrails in place, no one is held accountable for the outcomes. This is serious in many ways, not least when there is an error in identifying faces.

Incidents with deep fakes and malicious actors show how difficult it is to prove a crime or prevent it from recurring. Every time you take down a site, another one pops up and there is no regulatory body that can effectively monitor this.

VM: In your book, humans are at the centre of the narrative. What is your hope for how the conversation should shift?

MM: Technology is powerful and can change our lives for the better. In practice, though, I’ve found it is failing, especially those who need it the most. Predictive technology often ends up singling out members of immigrant communities and the disadvantaged. While researchers work on AI, there should also be a set of people working on how we can have a society that both uses AI and preserves human dignity.

Just as medical drugs are tested before being prescribed, we need to find ways to upgrade and regulate AI within existing industry regulations. We need to uncover cases where it has worked well and where it hasn’t, in order to design better systems.

AI in our everyday world
The session included an interaction with the audience too, and the pressing question was on the economic impact of AI. According to Madhumita, corporates believe AI will create wealth as productivity improves. “But from my perspective, there hasn’t been a huge economic shift yet. I think we’re still figuring out how these systems are going to be useful to us in our daily work. For large private corporations, all their money is riding on this being the next industrial shift. The question is, how do we ensure that it reaches everybody.”
One member of the audience wanted to know about the challenges of AI shaping society’s narrative with conversations increasingly happening in silos. “With the rise of social media we live in online worlds that look different to different people rather than sharing the same physical reality. With AI it might become even more individualised, since it speaks to us in a language we use. This makes it harder not to think of it as a conscious being that is rational and thoughtful. This is why you see so many people using it as a therapist, mentor or coach,” she says.
She adds, “One can only imagine what happens when it starts to influence you in any way. We all have to act before we lose the cultural nuance and diversity that we have around the world, since we are all interfacing through the same value-driven system.”
To a query on the public discourse around AI, Madhumita said, “As more journalists come to this conversation from different areas of coverage, we should be able to bring a more realistic perspective to this debate. We need to ask what should we be doing today to ensure that these systems work well. It is in our interest to warn people who think AI is an amazing superhuman thing that’s going to help us progress to the next stage of our evolution as a species, when it really is a flawed software that needs fixing.”
“While there are going to be innovative changes in the field of education, the good news is a technological tutor doesn’t have all of the advantages of learning with a real one, starting with the benefits of a one-on-one interaction. But AI will change how we assess a child’s learning and understanding, and even the area of school exams,” said the author, with regard to education in the context of AI.