Google’s AI Overviews, the Gemini-powered feature designed to give quick answers atop search results, has come under fire for spreading false and sometimes dangerous advice, such as recommending glue as a pizza-sauce ingredient and confidently explaining nonexistent idioms. The feature has also been blamed for driving click-through rates to reputable sources down by a reported 40%–60%.
Despite Google’s claims that the feature broadens the information users see, testing has found hallucination rates as high as 1.8%. Recent evaluations of OpenAI’s newest models suggest the problem can run far higher, with up to 48% of responses containing false information.
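For context on what figures like 1.8% or 48% actually measure, here is a minimal Python sketch of how a hallucination rate is commonly computed: the share of responses judged to contain at least one false claim. The `LabeledResponse` type and the fact-check labels are hypothetical stand-ins for whatever human or automated grading a given benchmark uses.

```python
from dataclasses import dataclass


@dataclass
class LabeledResponse:
    # Hypothetical record of one graded model output.
    prompt: str
    response: str
    contains_false_claim: bool  # verdict from a human or automated fact-checker


def hallucination_rate(results: list[LabeledResponse]) -> float:
    """Fraction of responses flagged as containing false information."""
    if not results:
        return 0.0
    flagged = sum(r.contains_false_claim for r in results)
    return flagged / len(results)


# Illustration only: 48 flagged responses out of 100 yields 0.48,
# i.e. the kind of 48% figure reported in recent model testing.
sample = (
    [LabeledResponse("q", "a", True)] * 48
    + [LabeledResponse("q", "a", False)] * 52
)
print(f"hallucination rate: {hallucination_rate(sample):.0%}")  # 48%
```

The denominator matters when comparing such numbers: a rate over all queries (as with the 1.8% figure) and a rate over a deliberately hard benchmark (as with the 48% figure) are not directly comparable.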