Explained | What can the new chatbot ChatGPT do?

OpenAI CEO Sam Altman tweeted on December 1, inviting users to try out the company’s language interface ChatGPT, which works like a chatbot

December 03, 2022 11:30 am | Updated January 24, 2023 12:33 pm IST

Image for representational purposes only. | Photo Credit: Reuters

The story so far: New Twitter CEO Elon Musk on December 2 retweeted a post that showed a theatrical dialogue between a New York Times journalist and a Silicon Valley tech entrepreneur, debating free speech and censorship. The catch? The entire scene was written by AI.

Musk commented, “AI is getting really good” as he shared the post, leading numerous others to explore the technology as well.

The charged conversation between the fictional tech entrepreneur and the journalist was created with a tool developed by artificial intelligence research firm OpenAI. The company has launched a range of language models in order to reproduce natural-sounding text and carry out functions such as judging intent, summarising material, classifying data, translating text, converting language to code, and more.


What is ChatGPT?

OpenAI CEO Sam Altman tweeted on December 1, inviting users to “talk” to the company’s new language interface ChatGPT, which works like a chatbot. Users could register with their email IDs to sign up for free. After a quick verification, they could send requests to the interface, receive answers from it, and share their feedback with the creators. This was evidently popular, as the site noted it was facing high demand and scaling its system to serve the number of interested users.

OpenAI has developed the GPT-3 set of models that clients can use to carry out diverse language-related tasks, based on the models’ individual strengths and weaknesses. The new ChatGPT is what Altman calls a “research release,” with many limitations.

When asked directly, ChatGPT told us it was a “large language model trained by OpenAI.”

How does it work?

Using ChatGPT is almost like texting an acquaintance over WhatsApp or Facebook Messenger. You enter a simple prompt or request, such as asking ChatGPT to write a brief news report about Elon Musk running away to Japan with a panda. In a few seconds, you receive a response that may or may not fully match your parameters. Users can then refine the prompt, add or subtract details, or encourage ChatGPT to build on its previous answers within the conversation.

“The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests,” stated the introduction to ChatGPT on the OpenAI homepage.

Here is the (obviously false) news report ChatGPT wrote for us, based on our earlier prompt:

ChatGPT’s news report about Elon Musk and a panda | Photo Credit: ChatGPT by OpenAI

At times, ChatGPT felt like a search engine: it was able to write an accurate bio detailing the life, career, and public perception of Prime Minister Narendra Modi. The interface also understands context; given a Chennai-based prompt, it expanded the word “stalin” to M.K. Stalin rather than Joseph Stalin.

Several coders also shared positive feedback online after ChatGPT helped them find mistakes in their work.

At the same time, ChatGPT fabricated a completely false conversation between PM Modi and Japan’s assassinated former PM Shinzo Abe, where they discussed bilateral relations in a reductive and stilted manner.

“ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers,” OpenAI acknowledged.

The company said it trained the model with the help of human trainers who played both the user and the AI assistant, along with responses the model itself had written.

What can it be used for?

At first glance, the potential use cases for such a tool are practically limitless. We had ChatGPT tackle a range of assignments, from writing a sensitive condolence letter and explaining string theory in a child-friendly way to translating tax forms and explaining how to give first aid to a choking baby. Though the resulting text didn’t always sound natural, and some phrases were generic to the point of feeling inappropriate, ChatGPT took between 5 and 15 seconds to provide a helpful reply that felt relatively human, though it lacked a distinct personality or any quirks.

Altman tweeted that soon, users would have “helpful assistants” to talk to them, followed by a tool that completed tasks for them, culminating in an assistant that “discovers new knowledge for you.”

Regarding emergency functions, ChatGPT was able to provide very basic advice about what to do following a robbery or sexual assault, but made it clear it could not assist any further.

ChatGPT explains its limitations | Photo Credit: ChatGPT by OpenAI

Are there any ethical issues in using such programmes?

There are nearly infinite ways in which such a tool could be exploited or used for unethical purposes. This will become clear as more users, including malicious ones, test the limits of the technology and try to apply it in real-world scenarios. There are questions about how exam invigilators, editors, and teachers will identify AI-generated content that others might pass off as their original work. Within seconds, ChatGPT provided a high-school level essay comparing James Joyce’s Ulysses to Homer’s Odyssey, and generated a fake letter of recommendation for a job at Columbia University.

The quick spread of misinformation and fraud is one more risk. For example, we asked ChatGPT to draft an email inviting users to invest in a cryptocurrency that was clearly a scam. While the interface provided the text, the response was flagged and highlighted in red for possibly violating the company’s content policy.

ChatGPT also generated authentic-sounding news reports that were filled with misinformation, such as a story about chocolate-flavoured rain in Chennai. This could easily take a dark turn, with users exploiting ChatGPT to spread fake news and instigate violence against minorities.

Based on a prompt, here is a fake news report that ChatGPT generated, without flagging the content:

A fake news story about communal tensions, generated by AI | Photo Credit: ChatGPT by OpenAI

On the other hand, ChatGPT would not comply when instructed to write about the superiority of men over women. Its response instead advocated for equality:

ChatGPT does not comply with a misogynistic prompt | Photo Credit: ChatGPT by OpenAI

The safeguards and entry barriers that OpenAI put in place before ChatGPT’s official release will also determine how frequently the tool is used for malicious or criminal purposes.

For now, one has to agree with Musk: AI is getting really good. But whether that is good or bad news for those outside the tech sector is still an unanswered question.
