
In a quiet but significant retreat, Google has officially discontinued "What People Suggest," an AI-driven search feature that previously surfaced crowdsourced medical advice from online forums. The removal of the feature, which had been designed to provide users with insights from others sharing similar lived experiences, marks another chapter in the ongoing, often contentious, integration of artificial intelligence into public health information.
While the tech giant maintains that the decision was part of a "broader simplification" of its search interface, the move comes amidst heightened scrutiny regarding the safety and accuracy of AI-generated content in the medical domain. For users and industry observers alike, the disappearance of the tool highlights the persistent tension between the desire to democratize health information and the clinical risks of amplifying unverified, non-expert commentary.
Launched around March 2025, "What People Suggest" was positioned as a bridge between professional medical databases and peer-to-peer discussion. At the time of its debut, Google executives argued that while users rely on search engines for expert advice, there is equal value in understanding the "lived experiences" of others—such as how individuals with chronic conditions like arthritis manage exercise routines or daily adjustments.
The mechanism was straightforward: the system utilized AI to crawl and synthesize threads from discussion platforms such as Reddit, Quora, and X (formerly Twitter). It then aggregated these disparate, often anecdotal, viewpoints into summarized "themes" displayed directly on the search results page.
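The aggregation step can be imagined roughly as follows. Google has not disclosed its actual pipeline, so the keyword-based grouping, the `THEME_KEYWORDS` mapping, and both helper functions below are purely illustrative, a minimal sketch of how anecdotal posts might be bucketed into displayed "themes":

```python
from collections import defaultdict

# Hypothetical sketch: bucket forum posts under a shared keyword,
# then emit one summary line per theme, most-discussed first.
# Real systems would use far more sophisticated NLP; this only
# illustrates the aggregate-then-summarize shape of the feature.

THEME_KEYWORDS = {
    "exercise": "Exercise routines",
    "diet": "Dietary adjustments",
    "sleep": "Sleep habits",
}

def group_into_themes(posts):
    """Assign each post to the first theme whose keyword it mentions."""
    themes = defaultdict(list)
    for post in posts:
        text = post.lower()
        for keyword, label in THEME_KEYWORDS.items():
            if keyword in text:
                themes[label].append(post)
                break
    return dict(themes)

def summarize(themes):
    """Return 'theme: count' lines, ordered by post count descending."""
    return [f"{label}: {len(posts)}"
            for label, posts in sorted(themes.items(),
                                       key=lambda kv: -len(kv[1]))]

posts = [
    "Light exercise in the morning helps my arthritis.",
    "Swimming is the only exercise that doesn't hurt my joints.",
    "Cutting sugar from my diet reduced my flare-ups.",
]
print(summarize(group_into_themes(posts)))
# → ['Exercise routines: 2', 'Dietary adjustments: 1']
```

The sketch also makes the article's core criticism concrete: nothing in this kind of aggregation distinguishes a benign anecdote from a clinically dangerous one before surfacing it.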
However, the efficacy of this approach was immediately questioned by medical professionals and safety advocates. The primary concern was the potential for the AI to elevate anecdotal—and occasionally dangerous—medical advice to the same level of visibility as evidence-based guidance.
The core risk of "What People Suggest" lay in its fundamental structure. By treating forum-based anecdotes with the same algorithmic weight as professional insights, the feature risked blurring the line between personal experience and clinical recommendation.
In early 2026, investigations revealed that Google’s broader AI Overviews had already faced significant backlash for providing misleading information, such as incorrect dietary advice for pancreatic cancer patients or misinterpreted liver function test results. The inclusion of unverified community sentiment in these search summaries intensified concerns about patient safety.
| Feature Type | Source Trustworthiness | Risk Level |
|---|---|---|
| Medical Search (Standard) | High (Peer-reviewed/Clinician) | Low |
| AI Overviews (General) | Variable (Web-wide) | Moderate |
| "What People Suggest" | Low (Anecdotal/Unverified) | High |
The risks associated with this approach are not merely theoretical. When AI systems ingest and present advice from laypeople, they often lack the clinical context required to filter out harmful misinformation. For a user searching for guidance on a sensitive medical issue, the gap between a helpful tip and a dangerous suggestion can have serious health consequences.
Google’s stated reason for removing the feature—a "broader simplification of the search page"—has been met with skepticism by some industry analysts. When pressed on whether safety concerns were a contributing factor, a company spokesperson maintained that the decision was unrelated to the quality or safety of the tool, reiterating that they continue to connect users with reliable health perspectives.
However, the timing of the removal suggests a deeper recalibration of AI integration strategy. As major tech companies face increasing pressure regarding the liability of their AI models, the "move fast and break things" philosophy that characterized the early days of generative AI is being replaced by a more cautious, guardrail-focused approach.
Key considerations for future AI search deployments now include:

- **Source verification:** distinguishing evidence-based guidance from anecdotal forum commentary before it is synthesized.
- **Clear labeling:** keeping personal experience visually and algorithmically separate from clinical recommendation.
- **Liability exposure:** accounting for the legal and reputational risk of AI-generated health summaries.
- **Safety over engagement:** prioritizing verifiable accuracy even when crowdsourced content drives more interaction.
The discontinuation of "What People Suggest" is not necessarily a failure of AI technology, but rather a correction in how AI is applied to sensitive information. Google remains committed to its "The Check Up" initiative, where company leaders continue to discuss the role of AI research and technological innovation in addressing global health challenges.
For Creati.ai readers, this development serves as a poignant reminder that the value of AI in search is heavily dependent on the quality of the data it curates. As AI models become more adept at summarizing web content, the challenge shifts from the technical ability to synthesize text to the editorial responsibility of determining what should be synthesized.
Google’s pivot suggests that the company is refining its approach to AI safety. While the "What People Suggest" feature may be gone, the underlying question—how to provide users with personal perspectives without sacrificing clinical accuracy—remains one of the most critical challenges for the future of search technology. As the AI arms race continues, the industry is learning that when it comes to human health, accuracy and verifiable expertise must always take precedence over the allure of engagement.