Amid fears of AI misuse in upcoming poll, OpenAI executives met Election Commission officials in February

OpenAI offered to discuss any ECI concerns about ChatGPT misuse in poll season, and to collaborate on voter outreach

March 08, 2024 08:40 pm | Updated 09:08 pm IST - NEW DELHI

Rishi Jaitly, an OpenAI advisor and former India head at Twitter (now X). File | Photo Credit: The Hindu

Representatives from OpenAI, the artificial intelligence firm that developed ChatGPT, met with officials from the Election Commission of India in February to ensure that its popular platform is not misused in the upcoming Lok Sabha election, and to find ways to collaborate with the ECI.

The ECI confirmed the meeting in a Right to Information response to The Hindu. Rishi Jaitly, an OpenAI advisor and former India head at Twitter (now X), had reached out to the ECI to request the meeting.

His emailed request sheds some light on what the executives may have discussed at the meeting. “It goes without saying that we [OpenAI] want to ensure our platforms are not misused in the coming general elections and, in this meeting, would like to discuss any concerns ECI may have as well as explore opportunities for collaboration to ensure more voters are able to exercise their franchise,” Mr. Jaitly wrote.

India and the AI story

In his email, Mr. Jaitly added that, as OpenAI’s senior advisor, he is focussed “on ensuring the company’s artificial intelligence mission advances the India story, and that India becomes a global leader in the AI story”.

OpenAI’s chief strategy officer Jason Kwon, its global public policy head James Hairston, and its global elections head Becky Waite attended the meeting from the company’s side. The ECI declined to disclose which officials represented the Commission in the meeting. Anuj Chandak, an ECI joint director to whom Mr. Jaitly had addressed the meeting request, declined to discuss the meeting when contacted. The ECI’s spokesperson did not respond to queries from The Hindu.

On the day of the ECI meeting, OpenAI executives also held a roundtable discussion with civil society representatives in India on the upcoming election.

Combating misinformation

During and after that closed-door meeting in Delhi, as The Hindu reported last month, former top Information Technology Ministry officials and tech scholars in attendance said the ECI could be doing much more, in coordination with major tech platforms such as OpenAI, to combat misinformation and disinformation at a time of heightened sensitivities ahead of election season.

During that meeting with civil society representatives, OpenAI officials also emphasised that they were in the country to get the lay of the land and understand the most pressing issues surrounding AI, such as synthetic media (deepfakes) and misinformation. The outreach takes on added significance as India emerges as the firm’s second largest user base after the United States.

OpenAI scopes out India

OpenAI does not yet have an office in India or any full-time employees based in the country. The company is expanding internationally, and is reportedly seeking trillions of dollars in investments to build out the computing infrastructure needed to run its resource-intensive systems, even as more and more industries look to incorporate generative AI applications in their businesses.

The tech giant is also conducting some small-scale research within India, focussed on assessing the country’s approach to AI risk and policy, semiconductor supply chains, and public-private partnerships, according to an OpenAI employee familiar with the matter. Some of OpenAI’s research initiatives in India will remain private, for internal use only, while others may eventually be made public.

The research is being conducted through surveys and expert interviews with a few dozen people in India, including those within the Indian government and civil society as well as AI scholars in academia, with a focus on career officials within the Ministries and on non-governmental entities in the country.

Defining AI risk

The San Francisco-based company’s research into AI risk perceptions and semiconductor supply chains in India is focussed on understanding how the country defines AI risk. These efforts draw on AI strategy documents and ethical guidelines from the government and civil society, which will then be used to gauge the country’s perceptions and prioritisation of AI risk.

Some of the specific elements of AI risk and policymaking in India that OpenAI is looking into include risks within the education sector, as well as the growing optimism and trust that India has placed in public-private cooperation mechanisms, especially in comparison to other countries in the region.
