A blight that did not really happen

The mistake of carrying a story on Facebook’s AI chatbots

August 07, 2017 12:15 am | Updated August 08, 2017 10:05 am IST

An illustration of heads connected online: the Internet/World Wide Web.

One of my primary concerns has been the impact of technology on the essential elements of journalism: truth, accuracy, and verification. From algorithms to filter bubbles, from viral content on social media to photoshopped images, from an overload of fake information generated by paid trolls to diabolical selective leaks — where the context is obfuscated and unrelated sentences from official documents are pulled out to create a wrong impression — the challenges confronting journalists have grown exponentially. However, not all dystopian stories are based on facts.

A nightmare that appeared to be true

A shiver went down my spine on August 1, 2017, when the Metroplus section of this newspaper carried an item filed by a news agency, “Facebook’s AI chatbots talk in their own language, get shut down”. It appeared as if all our nightmares about the dark side of Artificial Intelligence had indeed come true. The story claimed that while attempting to improve the conversational skills of their chatbots, researchers at the Facebook AI Research lab realised that the bots had abandoned English in favour of a language they had developed. It further said that the bots were apparently using advanced machine learning to their advantage and engaging in “negotiations”, and that this abandonment of English in favour of unscripted communication led Facebook researchers to shut them down. The Metroplus team thought the item deserved publication because it came in the wake of the public sparring between Tesla’s Elon Musk and Facebook’s Mark Zuckerberg. It appears that not just the Metroplus team but several mainstream media outlets fell for a story that was flawed in its fundamental assumption.

The editors of the main section of this newspaper decided not to carry this story. Their decision illustrates some of the crucial journalistic tools that ensure no space is given to scaremongering. They began by combing the copy: they wanted to know the primary source, the experts who had been cited or interviewed, and the nature of the technical development that looked so spine-chilling.

First, it was not an original report by the news agency but a report of a report that had appeared on a New York-based website called Tech Times. Second, its reading of the development — that “the AI did not start shutting down computers worldwide or something of the sort, but it stopped using English and started using a language that it created” — prompted the editors to ask what that new language actually was before publishing the story. The Hindu’s editors also wondered why no one had been interviewed for a story of this magnitude. These reasons were enough for them to reject the story despite its seductive dystopian charm.

Their decision to spike the story was vindicated within 12 hours. Some technology reporters started publishing the real story. A BBC report, “The ‘creepy Facebook AI’ story that captivated the media”, helped us understand the inherent flaws of the first story by providing the recent history of tech giants’ experiments with AI. For the sake of those who read the ‘creepy story’, here are the developments summarised in a paragraph.

Last June, Facebook announced AI research on its chatbots, in which it wanted them to have text-based conversations with humans and with other bots. According to the BBC, that was “an effort to understand how linguistics played a role in the way such discussions played out for negotiating parties, and crucially the bots were programmed to experiment with language in order to see how that affected their dominance in the discussion.”

Tom McKay, in his article for Gizmodo, spoke to researchers involved with FAIR (Facebook AI Research). What emerges from his interviews is that “Facebook did indeed shut down the conversation, but not because they were panicked they had untethered a potential Skynet.” The researchers had not incentivised the chatbots to communicate according to human-comprehensible rules of the English language, and the bots consequently began chatting back and forth in a derived shorthand. This may be a bit eerie, but to call it a new language is a stretch.

From E.M. Forster’s The Machine Stops to Margaret Atwood’s The Handmaid’s Tale, dystopian novels have helped us understand authoritarianism. But reports of dystopia in journalism often turn out to be false alarms.

readerseditor@thehindu.co.in
