OpenAI, the company behind the AI chatbot ChatGPT, has released a new classification tool to help users identify AI-written text, but said the classifier is not “fully reliable.”
In a blog post on January 31, OpenAI explained the abilities and limitations of the new classifier it had trained. The classifier is meant to address rising concerns that the currently free version of ChatGPT could be exploited to cheat on exams, impersonate humans, or spread misinformation.
However, the new classifier works best on English-language text longer than 1,000 characters. Even so, it sometimes misidentifies human-written text as AI-written.
“In our evaluations on a ‘challenge set’ of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as ‘likely AI-written,’ while incorrectly labeling human-written text as AI-written 9% of the time (false positives),” OpenAI said in the blog post.
The company also said that it was engaging with U.S.-based educators to enhance its outreach programme and learn more about ChatGPT in educational settings.
The news comes as researchers and academics have claimed that ChatGPT can pass university-level or even professional exams in law, business, and medicine. This has triggered fears that AI tools like ChatGPT could be used to submit AI-generated work as one’s own or to unfairly help students pass qualifying examinations.