(This article is part of Today’s Cache, The Hindu’s newsletter on emerging themes at the intersection of technology, innovation and policy. To get it in your inbox, subscribe here.)
A little over two decades ago, two Stanford graduates set out to organise information posted on the world wide web. They built an algorithm that ranked web links, helping users find content on the internet. Browsing the web with search engines like Google opened a new way for people to access content online just by typing keywords.
The tool was the need of the hour for those who complained of information overload, as digitised content made publishing far easier than in the Gutenberg era. But this was still only the era of email. The internet's distribution channels enabled content to spread far and wide, opening the floodgates of information, and search engines helped people find content easily.
Social media platforms like Twitter and Facebook later joined the data flow, inundating the online world with a lot more text, image and video data. According to some estimates, the amount of data generated on the Internet each day is expected to reach 463 exabytes globally by 2025. (An exabyte is 10 to the power of 18 bytes.)
Making sense of such a vast amount of content has become impossible, so a better way to organise and retrieve information would help. Conversational AI could play a crucial role here, much as Google's search did in the email era.
OpenAI recently introduced ChatGPT, a chatbot built on its large language model (LLM), for people to play with. If there is one thing you should try before the end of this year, it is this giant chatbot. It will give you more than a glimpse of what artificial intelligence could do in the coming years.
The chatbot doesn’t just answer questions effortlessly; it can also fix bugs in computer code. Several users have pointed out the chatbot’s limitations, and OpenAI too has accepted that the bot gives out nonsensical responses at times. But those limitations, in my opinion, may not stop ChatGPT from fundamentally altering the way we search the web.
OpenAI’s chatbot is built on the GPT-3.5 architecture, which allows people to interact with it using natural language. The bot gets its computational power from Microsoft’s cloud. The Windows software maker holds an exclusive licence to OpenAI’s GPT-3.
On a basic level, I found the chatbot quite useful. It opens a new paradigm in search and retrieval that could make information overload manageable. Understandably, several users pointed out that school and college students could use the bot to get assignments done, given its ability to produce apt responses. Stack Overflow, a site where developers ask and answer coding questions, temporarily banned answers generated by ChatGPT.
Perhaps OpenAI can think of a watermarking solution here. That could help forums such as Stack Overflow, and school and college educators, identify solutions developed through conversations with the chatbot. But it is not clear whether OpenAI will keep ChatGPT a free-to-use service for long.
When the bot’s beta version was launched, Sam Altman, CEO of OpenAI, noted that it costs a lot to run such energy-intensive language models. He also suggested that the model could be monetised, at an average cost of a few cents per query.
That plan could change if, like GPT-3, the ChatGPT licence goes to Microsoft. The Silicon Valley firm could look at a different use case. Until then, each query keyed in costs OpenAI money, and the company could soon have to take a call on the monetisation dilemma.
(Updated with additional inputs)
Published - December 07, 2022 12:37 pm IST