An AI-powered Google feature that blended amateur health opinions into search results has been shut down, the company confirmed. “What People Suggest” gathered user-generated health content from online discussions and displayed it in response to medical queries. Its removal, described by insiders as long overdue, was confirmed by Google without meaningful public explanation.
The feature was announced with enthusiasm at Google’s health event in New York as a tool designed to humanize health search. Karen DeSalvo, who was then leading health efforts at Google, wrote a blog post championing the tool’s ability to connect users with community wisdom. She highlighted how people with conditions like arthritis could find tailored tips from others navigating the same challenges.
In defending the removal, Google cited structural changes to the search results page rather than any concern about content quality or user safety. Yet when asked for a public record of the decision, the company pointed to a blog post that made no mention of the feature’s discontinuation. Critics found this explanation insufficient and called for more forthcoming communication.
The removal comes amid broader concerns about Google’s approach to AI-generated health content. An investigation earlier this year found that AI Overviews on Google Search had been circulating false medical information to billions of users worldwide. Although some AI Overviews were subsequently taken down, the underlying systemic issues have not been fully addressed.
Google’s next health event is expected to showcase a new wave of AI-driven medical tools and research partnerships. As the company navigates criticism from health professionals and digital safety advocates, the discontinuation of “What People Suggest” will likely be scrutinized as a test case in how responsibly Google handles problematic AI products. Greater transparency and slower, more careful rollouts may be essential to rebuilding trust.