Google is playing a game with disastrous AI overviews that have gone viral

Google decided earlier this month to roll out its AI Overviews feature broadly in the US, offering AI-generated summaries for various searches. Unfortunately, many of the responses were inaccurate, weird, or downright dangerous.

Now, in a response to Android Authority, Google has confirmed that it is taking "swift action" on the offending results:

The vast majority of AI overviews provide high-quality information, with links to dig deeper on the web. Many of the examples we saw were unusual queries, and we also saw examples that were modified or that we couldn't reproduce. We conducted extensive testing before launching this new experience, and as with other features we've launched in Search, we appreciate the feedback. We're taking swift action where necessary under our content policies and using these examples to develop broader improvements to our systems, some of which have already been rolled out.

Some of the most notable AI overviews that went viral include a recommendation to eat a small pebble every day, a suggestion to use non-toxic glue to help cheese stick to pizza, and advice to drink at least two liters of urine every day to help pass a kidney stone. One of the most disturbing apparent blunders was a recommendation that users jump off a bridge if they were feeling depressed. It appears that at least some of these answers originated from satirical articles or troll posts on forums (e.g. Reddit) that the AI overviews linked to.

This is just the latest in a series of AI blunders we've seen as Google seemingly rushes to bring AI into everything. More recently, Gemini's image generator came under fire for creating images of racially diverse World War II Nazis instead of historically accurate ones. Earlier versions of Bard (now Gemini) also made headlines for hallucinations and incorrect answers.