"Isolated Examples": Google After AI Search Tells Users To Glue Pizza, Eat Rocks

Google says these answers aren't representative of how the tool is working in general. According to a Google spokesperson, these were "isolated examples".


Google's new search feature that uses artificial intelligence (AI) to answer users' questions is facing criticism for providing inaccurate responses, including telling users to eat rocks and mix pizza cheese with glue. According to the BBC, Google's experimental "AI Overviews" rolled out across the United States last week and became available to some users in the UK last month. It is designed to make searching for information simpler. However, since the rollout, examples of erratic behaviour by the feature have flooded social media.

The BBC reported that in one instance, the AI appeared to tell users to mix "non-toxic glue" with cheese to make it stick to pizza. In another, it said geologists recommend humans eat one rock per day. Another response told users that only 17 of the 42 US presidents were white. The feature also falsely claimed that former US President Barack Obama is Muslim.

Some of the answers appeared to be based on Reddit comments or articles written by the satirical site The Onion.

However, Google says these answers aren't representative of how the tool is working in general. Speaking to the outlet, a Google spokesperson said that these were "isolated examples". 

"The examples we've seen are generally very uncommon queries, and aren't representative of most people's experiences," Google said in a statement. "The vast majority of AI overviews provide high-quality information, with links to dig deeper on the web. We conducted extensive testing before launching this new experience to ensure AI overviews meet our high bar for quality," it continued. 


The tech giant also said it had taken action where "policy violations" were identified and was using them to refine its system. "Where there have been violations of our policies, we've taken action - and we're also using these isolated examples as we continue to refine our systems overall," it added. 

Meanwhile, this is not the first time a company has run into problems with its AI-powered products. In one notable example, ChatGPT fabricated a sexual harassment scandal and named a real law professor as the perpetrator, citing fictitious newspaper reports as evidence. In a more recent incident, ChatGPT-maker OpenAI was called out by Hollywood actress Scarlett Johansson for using a voice that resembled her own, after she had turned down its request to voice the popular chatbot.
