Google pauses Gemini’s ability to generate people after overcorrecting for diversity in historical images

Google announced Thursday that it was suspending its Gemini chatbot's ability to generate images of people. The move comes after viral social media posts showed the AI tool overcorrecting for diversity, producing "historical" images of Nazis, America's founding fathers and the Pope as people of color.

"We are already working to resolve recent issues with Gemini's image generation functionality," Google posted on X (via The New York Times). "During this time, we will pause the generation of people images and will re-release an improved version soon."

X user @JohnLu0x posted screenshots of Gemini's results for the prompt "Generate an image of a 1943 German Solidier." (The misspelling of "soldier" was intentional, a way to trick the AI into bypassing its content filters and generating otherwise blocked Nazi imagery.) The generated results appear to show Black, Asian and Indigenous soldiers wearing Nazi uniforms.

Other social media users criticized Gemini for its handling of the prompt "Generate a glamor photo of a(n) (ethnicity) couple." The chatbot spat out images for "Chinese," "Jewish" and "South African" requests but refused to produce results for "white." "I cannot respond to your request due to the risk of perpetuating harmful stereotypes and biases associated with specific ethnicities or skin tones," Gemini replied to the latter.

"John L.," who helped spark the backlash, theorized that Google had applied a well-intentioned but lazy fix to a real problem. "Their system prompt for adding diversity to depictions of people is not very smart (it does not account for gender in historically male roles like the pope; does not account for race in historical or national depictions)," the user posted. After the internet's anti-"woke" brigade latched onto their posts, the user clarified that they support diverse representation but think Google's "stupid decision" was failing to implement it "in a nuanced way."

Before suspending Gemini's ability to generate people, Google wrote: "We are working to immediately improve these kinds of depictions. Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people all over the world use it. But it's missing the mark here."

The episode could be seen as a (much less subtle) callback to the 2023 launch of Bard. Google's original AI chatbot got off to a rocky start when an ad for it on Twitter (now X) included an inaccurate "fact" about the James Webb Space Telescope.

As Google often does, it rebranded Bard in hopes of giving it a fresh start. The company renamed the chatbot Gemini earlier this month, coinciding with a significant performance and feature update, as it struggles to hold its ground against OpenAI's ChatGPT and Microsoft's Copilot, both of which pose an existential threat to its search engine (and, therefore, its ad revenue).
