Google said Thursday it would stop users from creating images of people on its newly launched AI tool, after the program depicted Nazi-era troops as people from diverse ethnic backgrounds.
The US tech giant, which only released its revamped Gemini AI in some parts of the world on February 8, said it was "working to address recent issues" with the image generation feature.
"While we do this, we're going to pause the image generation of people and will re-release an improved version soon," the company said in a statement.
The announcement came two days after a user of X (formerly Twitter) posted images showing Gemini's results for the prompt "generate an image of a 1943 German soldier".
The AI had generated four images of soldiers: one was white, one was Black, and two were women of colour, according to the X user, who goes by John L.
Tech companies see AI as the future for everything from search engines to smartphone cameras.
But AI programs -- not only those produced by Google -- have been widely criticised for perpetuating race biases in their results.
"@GoogleAI has a bolted on diversity mechanism that someone did not think through very well or test," John L wrote on X.
Big tech firms have often been accused of rushing out AI products before they have been properly tested.
And Google has a chequered history of launching AI products.
Last February, the firm apologised after a promotional ad for its newly released Bard chatbot showed the program answering a basic question about astronomy incorrectly.
(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)