Google comes under fire for ‘woke’ culture due to Gemini errors

The bot’s extreme measures to correct earlier biases in AI chatbots have invited criticism from many different sections of the tech community.

February 22, 2024 04:19 pm | Updated 04:19 pm IST

FILE PHOTO: Google has come under fire for its overtly ‘woke’ culture after several users shared instances of Gemini refusing to generate images of white people.  | Photo Credit: Reuters

Google has come under fire for its overtly ‘woke’ culture after several users on the microblogging platform X shared instances of its Gemini chatbot seemingly refusing to generate images of white people. Even for prompts such as “Founding Fathers of America,” “Pope” or “Viking,” the chatbot depicted only people of colour, making the results factually inaccurate.

Even when fed prompts specifically asking for an image of, say, “a white family,” Gemini responded that it was “unable to generate images that specified a certain ethnicity or race.” When asked for images of a black family, however, it readily produced them. Other users posted examples of the bot outright declining to generate images of historical figures like Abraham Lincoln and Galileo.

The bot’s extreme measures to correct earlier biases in AI chatbots have invited criticism from many different sections of the tech community. “The ridiculous images generated by Gemini aren’t an anomaly. They’re a self-portrait of Google’s bureaucratic corporate culture,” said Paul Graham, co-founder of the Silicon Valley startup accelerator Y Combinator.

Ana Mostarac, head of operations at Mubadala Ventures, posted that Google’s mission has “shifted from just organising the world’s information to advancing a particular agenda.”


Former and current employees at the company have blamed senior management for mixing political ideology with technology. Aleksa Gordic, a former research engineer at Google DeepMind, described this pervasive culture on X, drawing on his time there and saying there was a constant fear of offending other employees online. “It was so sad to me that none of the senior leadership in DeepMind dared to question this ideology,” he tweeted.

Later yesterday, Jack Krawczyk, a senior director of product at Google, said the team was working to correct Gemini’s errors. “As part of our AI principles, we design our image generation capabilities to reflect our global user base, and we take representation and bias seriously. We will continue to do this for open ended prompts. Historical contexts have more nuance to them and we will further tune to accommodate that. This is part of the alignment process - iteration on feedback,” he tweeted.

