A U.S.-based lawyer who used ChatGPT in a case against the airline Avianca may face sanctions after it emerged that the chatbot had supplied him with citations to non-existent cases, The New York Times reported.
A court filing said that one cited case, Varghese v. China Southern Airlines Co., Ltd., did not exist, while other cited cases were also imaginary or riddled with errors.
Lawyer Steven A. Schwartz admitted in a separate filing that he had used OpenAI’s AI chatbot to supplement his legal research, but said he greatly regretted doing so.
Calling the incident an “unprecedented circumstance,” the May 4 filing signed by U.S. District Judge P. Kevin Castel said that “six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”
AI chatbots are prone to a phenomenon known as “hallucination,” in which they generate output that sounds realistic and accurate but turns out to be fiction upon verification.
Earlier in the year, ChatGPT falsely named a scholar as a sexual harasser, citing a Washington Post article that did not exist.