Academics warn new science papers are being generated with AI chatbots

Social media users and academics have pointed out that phrases commonly used by chatbots in response to users have made their way into scientific literature

Updated - March 22, 2024 04:49 pm IST

Published - March 22, 2024 03:43 pm IST

Chatbots are prone to hallucination, or a phenomenon where they present illogical or nonsensical answers as facts [File] | Photo Credit: REUTERS

Social media users and academics are warning that more scientific papers show signs of being partially generated with the help of AI chatbots.

For example, a paper published on ScienceDirect in March 2024, titled ‘The three-dimensional porous mesh structure of Cu-based metal-organic-framework - aramid cellulose separator enhances the electrochemical performance of lithium metal anode batteries,’ opened its introduction with the phrase: “Certainly, here is a possible introduction for your topic:” before covering the subject at hand.

The phrase is commonly used by chatbots such as OpenAI’s ChatGPT when they provide responses to user queries.


As of Friday, the paper was still accessible through ScienceDirect and the introduction had not been edited or updated. ScienceDirect allows users to digitally access the works of the Dutch academic publisher Elsevier.

The introduction of the paper shows a stock phrase commonly used by chatbots | Photo Credit: ScienceDirect

Elsevier is known for its stringent stance against digital piracy and has pursued legal cases against shadow libraries providing free downloads of its publications.

Meanwhile, the stock phrase “as of my last knowledge update,” most commonly used by ChatGPT, was identified in several academic journals through a Google Scholar search, tech outlet 404 Media reported on March 18.

Most of these phrases were found in papers that investigated ChatGPT and its responses, but others were present in research papers that covered non-AI subjects.

“As of my last knowledge update in 2021, approximately 65% of India’s population is under the age of 35,” was one sentence in a paper titled ‘Youth at Risk: Understanding Vulnerabilities and Promoting Resilience’ by Dr. Priyanka Beniwal and Dr. C.K. Singh (2023).

The publisher, Weser Books, appeared to be a self-publishing service.

Novelists and journalists have accused the tech companies working on chatbots of misusing their copyrighted works without consent for the sake of AI training. Some companies hit with lawsuits include OpenAI, Microsoft, Google, and Meta.

Chatbots are also prone to hallucination, or a phenomenon where they present illogical or nonsensical answers as facts. This raises the risk of incorrect data making it into costly, peer-reviewed scientific publications used by scholars worldwide.

