NAIROBI, Kenya - Google recently launched its AI-generated search results overview tool with a promise to “do the work for you” and streamline online searches.
However, just days after its debut, the feature is already under scrutiny for delivering factually incorrect information.
Earlier this month, Google unveiled the tool, which summarizes search results so users don’t have to click through multiple links.
But it quickly faced backlash after users found it serving false or misleading answers.
For example, one AI summary repeated the common misconception that former President Barack Obama is a Muslim; in reality, Obama is a Christian.
Another user highlighted a summary stating that “none of Africa’s 54 recognized countries start with the letter ‘K’”, overlooking Kenya.
Google confirmed that the AI overviews for these queries were removed for violating company policies.
“The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web,” said Google spokesperson Colette Garcia.
She also noted that some viral examples of AI mistakes appeared to be manipulated images.
“We conducted extensive testing before launching this new experience, and as with other features we’ve launched in Search, we appreciate the feedback. We’re taking swift action where appropriate under our content policies,” she said.
The bottom of each AI search overview includes a disclaimer stating that “generative AI is experimental.”
Google claims to have conducted tests simulating potential misuse to prevent false or low-quality results from surfacing.
This AI tool is part of Google’s broader strategy to integrate its Gemini AI technology across all its products, aiming to stay competitive with rivals like OpenAI and Meta.
However, the current issues highlight the risk that AI, prone to confidently delivering false information, could harm Google’s reputation as a reliable information source.
Even for less serious queries, the AI overview sometimes provides incorrect or confusing information.
For instance, when users asked about the sodium content in pickle juice, the AI overview gave conflicting numbers.
Additionally, a query about what data Google uses to train its AI produced an unclear response that acknowledged uncertainty about whether copyrighted materials are included.
This isn’t Google’s first AI blunder. In February, the company paused its AI photo generator’s ability to create images of people after it produced historically inaccurate and racially insensitive images.
Users in areas where AI search overviews are available can toggle the feature on and off via Google’s Search Labs webpage.
Social media has been buzzing with examples of Google’s AI overview giving bizarre advice, from suggesting users put glue on pizza to recommending eating rocks.
These odd outputs have forced Google to manually disable AI overviews for certain searches as memes spread.
This is surprising given that Google has been testing AI Overviews since its beta launch in May 2023, serving over a billion queries during that time.
CEO Sundar Pichai stated that Google has reduced the cost of delivering AI answers by 80 percent through hardware, engineering, and technical breakthroughs. However, this optimization might have been premature, resulting in lower-quality outputs.
“A company once known for being at the cutting edge and shipping high-quality stuff is now known for low-quality output that’s getting meme’d,” an anonymous AI founder told The Verge.
Google maintains that its AI Overview product generally provides “high-quality information.”
“Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce,” said Google spokesperson Meghann Farnsworth.
She confirmed that the company is taking swift action to remove problematic AI Overviews and is using these incidents to improve its systems.
AI expert Gary Marcus and Meta’s AI chief Yann LeCun agree that achieving near-perfect AI reliability is challenging.
Marcus explained that while reaching 80 percent accuracy is straightforward, the final 20 percent requires advanced reasoning akin to human fact-checking, potentially needing artificial general intelligence (AGI).
Google is in a tough spot. With competitors like Bing, OpenAI, and emerging AI search startups, Google feels pressured to deliver, and that pressure may be contributing to hasty AI releases.
In 2022, Meta had to retract its Galactica AI system after it advised people to eat glass, a scenario similar to Google’s current predicament.
Google has ambitious plans for AI Overviews, including multi-step reasoning for complex queries, AI-organized results pages, and video search in Google Lens.
But for now, Google’s reputation hinges on getting the basics right, and it’s clear that there’s still work to be done.