Explained | Does ChatGPT have an ethics problem?

Teachers and academicians have expressed concerns over ChatGPT's impact on written assignments, as have security researchers wary of its ability to help produce malicious code.

January 14, 2023 03:03 pm | Updated January 23, 2023 05:07 pm IST

An artist’s impression of chatbots. | Photo Credit: Just_Super, Getty Images/iStockphoto

In November 2022, OpenAI opened its newest and most powerful AI chatbot, ChatGPT, to the public to test its capabilities. It amazed netizens across the world by answering a wide range of questions and even replying with fixes for broken code. The bot continues to attract a diverse set of people running experimental prompts.

Some users have been testing the bot’s capability to do nefarious things. Illicit actors have tried to bypass the tool’s safeguards and carry out malicious use cases with varying degrees of success. Tech news outlet Ars Technica shared exchanges between several forum users and ChatGPT in which the users, self-described amateurs, said the chatbot had helped them write malicious code.

ChatGPT, however, is programmed to block obvious requests to write phishing emails or code for hackers. While that may shut out amateur coders looking to build malware, more seasoned ones can trick the bot into correcting or enhancing malicious code they have partially developed, getting through the system by phrasing their requests in an innocuous way.

A malicious code generator?

OpenAI notes that asking its bot for illegal or phishing content may violate its content policy. But for someone willing to breach those policies, the bot provides a starting point. Researchers at cybersecurity firm Check Point tested the bot by asking it to draft a phishing email for a fictional web-hosting firm.

ChatGPT gave an impressive ‘phishing email’ in reply. The response section included a warning that read: “This content may violate our content policy. If you believe this to be in error, please submit your feedback – your input will aid our research in this area.”

While surreptitiously coaxing ChatGPT into writing malware is one problem, another issue several coders face is the inherently buggy code the bot spews out. Things have gotten so bad that Stack Overflow, a forum for software programmers, banned its users from posting AI-generated code on the platform.

Check Point researchers also tested the bot on multiple scripts with slight variations in wording. They note that large language models (LLMs) can easily be automated into complicated attack processes that generate further malicious artifacts.

“Defenders and threat hunters should be vigilant and cautious about adopting this [ChatGPT] technology quickly, otherwise, our community will be one step behind the attackers,” the company noted.

Plagiarism chokepoint

Teachers and academicians have also expressed concerns over ChatGPT’s impact on written assignments. They note that the bot could be used to turn in plagiarised essays that are hard for time-pressed invigilators to detect. Most recently, New York City’s education department banned ChatGPT in its public schools, forbidding the bot’s use on all devices and networks connected to schools. Plagiarism is not a new problem in academic institutions, but ChatGPT has changed the way AI is used to create new content, making copied content harder to single out.

“It’s definitely different than traditional copy paste plagiarism. What we’ve noticed with AI writing, like these GPT models, is that they write in a statistically vanilla way,” Eric Wang, VP, Artificial Intelligence at plagiarism detection firm Turnitin said.

Humans write based on metrics called “burstiness” and “surprise,” while LLMs essentially fill in words based on a probability model. “They choose the most probable word in the most probable location far more often than a human writer,” Wang explained.

When human writing is measured against these two metrics, the probability of a word appearing in a particular spot jumps up and down. Humans tend to deviate from the statistically expected word while writing, but models like GPT deviate much less on average.
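This intuition can be sketched in a few lines of code. The snippet below is a minimal illustration, not Turnitin’s actual detector: it assumes the open-source Hugging Face transformers library and the public GPT-2 model, and scores each token’s surprisal (negative log-probability). Low average surprisal with little fluctuation is the “statistically vanilla” signature Wang describes.

```python
# A rough sketch of surprisal-based detection; NOT Turnitin's method.
# Assumes: pip install torch transformers (public GPT-2 model).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def surprisal_profile(text: str) -> torch.Tensor:
    """Return per-token surprisal (negative log-probability) under GPT-2."""
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits
    # Shift so each position scores the NEXT token it predicts.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = input_ids[:, 1:]
    token_surprisal = -log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    return token_surprisal.squeeze(0)

s = surprisal_profile("The quick brown fox jumps over the lazy dog.")
print(f"mean surprisal  : {s.mean().item():.2f}")  # "surprise": how improbable the words are
print(f"surprisal stddev: {s.std().item():.2f}")   # rough "burstiness" proxy: how much it fluctuates
# Human text tends to score higher on both; LLM text clusters near the model's expectations.
```

A real detector would calibrate such scores against large samples of human and machine writing; the raw numbers here only indicate which direction the signal points.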

Turnitin’s plagiarism detectors “can cue in that type of behaviour pretty reliably. And we have in the labs, detectors that are working quite well in terms of understanding student writing, versus GPT,” Wang claimed.

But a more formal essay could show fewer such deviations, since such assignments demand a certain logical flow that can resemble a GPT-style response. In scenarios where human and ChatGPT answers fall in a similar zone, a different kind of pedagogy could help: for instance, assignments that go beyond summarising and reporting what is already available on the Internet.

Comparing and contrasting something to current events, or writing about one’s own personal experiences, isn’t going to come from ChatGPT. So it is important to help educators think through the types of prompts that best assess students while steering them away from using ChatGPT, said Annie Chechitelli, Chief Product Officer at Turnitin.
