Something changed in healthcare search over the past eighteen months, and most medical practices didn’t notice. By December 2025, 88% of healthcare queries triggered a Google AI Overview — up from 59% just two years earlier. Treatment and procedure queries hit 100%. That means when a patient types “what are the side effects of metformin” or “how long does a root canal take,” Google now answers the question directly at the top of the page, and more than half of those searchers never scroll down to click anything.
Meanwhile, 40 million people ask ChatGPT a health question every single day. One in four of ChatGPT’s 800 million regular users submits a medical prompt weekly. Seven out of ten of those conversations happen outside clinic hours — at midnight, over the weekend, during a moment of anxiety when calling a doctor isn’t an option.
The shift from “Dr. Google” to “Dr. AI” isn’t coming. It already happened. And it introduces a problem that no other industry faces quite the same way: healthcare content lives under YMYL rules — Your Money, Your Life — which means AI engines hold it to the strictest citation standards on the internet.
“When AI Overviews appear on health queries, organic click-through rates drop to 0.6%. If your practice isn’t cited in the AI answer, you are functionally invisible.”
Who actually gets cited in health AI answers?
Not who you’d expect. The top four domains cited in Google AI Overviews for health queries are Healthline (113,728 mentions), Cleveland Clinic (99,680), Mayo Clinic (89,103), and WebMD (87,025). These aren’t the largest hospitals or the most famous doctors. They’re the organizations that invested earliest in structured, clear, evidence-based patient education content — written for humans, formatted for machines.
That last part matters. Cleveland Clinic didn’t earn nearly 100,000 AI citations by writing brilliant prose. They earned them by doing something most healthcare organizations still haven’t done: they structured their content so AI engines could extract clean answers. Every condition page has a consistent format — definition, symptoms, causes, treatment, when to see a doctor — with proper headings, schema markup, and a “Frequently Asked Questions” section at the bottom. That FAQ section alone, marked up with FAQPage schema, accounts for a disproportionate share of their citations.
Perplexity, interestingly, handles health differently. In comparative studies across medical conditions, Perplexity consistently scored highest for quality and reliability because it surfaces peer-reviewed journal articles and current clinical protocols. If your practice publishes content that references specific studies with citations, Perplexity notices.
How YMYL became the AI gatekeeper
YMYL started as a Google quality rater guideline. It told human evaluators to hold health and financial content to a higher standard. But over the past two years, it evolved into something bigger: YMYL is now the gatekeeping standard that every AI platform applies to medical content before deciding whether to cite it.
The practical effect is that AI engines require three trust layers before they’ll cite healthcare content:
Author credentials
The content must be attributed to a named medical professional with verifiable credentials — an MD, DO, NP, or similar. "Written by Staff" doesn't pass. AI engines cross-reference author names against medical board listings, LinkedIn, and publication records.
Clinical evidence
Claims must link to peer-reviewed research, clinical guidelines, or recognized medical databases (PubMed, UpToDate, CDC). Unsubstantiated statements — even accurate ones — are deprioritized because AI systems can't independently verify medical accuracy. They rely on citation chains instead.
Institutional trust signals
The publishing organization needs established authority: accreditation, years of operation, consistent NAP (name, address, phone) across platforms, and reviews from real patients. This is where small practices can compete — a 15-year-old family practice with 200 Google reviews and board-certified providers has genuine authority that AI can verify.
This is the core tension in healthcare AEO: the content that patients want (simple, reassuring, accessible) and the content that AI trusts (evidence-backed, professionally authored, institutionally verified) are often created by different teams with different goals. The organizations that win AI citations are the ones that bridge both.
What small practices consistently get wrong
After analyzing hundreds of healthcare websites — from solo dental practices to multi-location urgent care chains — the same five gaps appear over and over. They’re not exotic problems. They’re oversights that exist because most healthcare marketers optimized for Google’s blue links, not AI’s citation engine.
The “About Us” page is the only page with provider credentials
AI engines look for author attribution on the content itself, not on a separate bio page. If Dr. Sarah Chen wrote your guide to managing type 2 diabetes, her name, credentials, and a link to her profile need to appear on that specific page — ideally near the top, reinforced with Physician schema. Most practices bury provider information three clicks deep, where no AI crawler connects it to the content.
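Here is a minimal sketch of what that on-page attribution could look like as JSON-LD. The practice name, provider, URL, and condition are illustrative placeholders, not real entities; the types and properties (MedicalWebPage, Physician, medicalSpecialty, lastReviewed) come from the schema.org vocabulary.

```json
{
  "@context": "https://schema.org",
  "@type": "MedicalWebPage",
  "name": "Managing Type 2 Diabetes: A Patient Guide",
  "about": {
    "@type": "MedicalCondition",
    "name": "Type 2 diabetes"
  },
  "author": {
    "@type": "Physician",
    "name": "Sarah Chen, MD",
    "medicalSpecialty": "Endocrinology",
    "url": "https://www.example-practice.com/providers/sarah-chen"
  },
  "lastReviewed": "2026-01-15"
}
```

The key detail is the author's `url` pointing to a crawlable profile page on the same domain — that is the link an AI engine can follow to verify the credentials the page claims.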
FAQ sections exist but lack schema markup
Plenty of healthcare sites have FAQ pages. Almost none mark them up with FAQPage structured data. The FAQ text is visible to humans who visit the page, but invisible to AI engines scanning for structured question-and-answer pairs. This is the single fastest fix in healthcare AEO — often implemented in under an hour — and it has an outsized effect because FAQ schema snippets achieve an 87% click-through rate when surfaced.
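A minimal FAQPage block looks like this; the question echoes the patient query from earlier, and the answer text here is placeholder copy — on a real page it should be the clinically reviewed answer that already appears in your visible FAQ.

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How long does a root canal take?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Placeholder: replace with the reviewed answer shown on the page, e.g. typical visit length and number of appointments."
      }
    }
  ]
}
```

Each additional question-and-answer pair is simply another object in the `mainEntity` array. The markup must match the visible page content — structured data that differs from what users see can be treated as spam.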
Medical content uses marketing language instead of clinical language
“Our state-of-the-art facility provides world-class orthopedic care” tells AI nothing. “We treat ACL tears, meniscus injuries, and rotator cuff tears using both arthroscopic and open surgical techniques” tells AI exactly which queries to match your page against. AI engines can’t assess quality claims (“world-class”), but they can map specific conditions and procedures to user queries with high confidence.
No MedicalEntity schema anywhere
Most healthcare websites have basic Organization or WebPage schema at best. They’re missing the entire medical schema vocabulary: MedicalClinic, Physician, MedicalCondition, MedicalProcedure. This vocabulary exists specifically so AI engines can understand healthcare entities with precision. Without it, your orthopedic practice looks the same as a shoe store to a machine.
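As a sketch, a site-wide MedicalClinic block for the hypothetical orthopedic practice from the example above might look like this — every name, address, and phone number is illustrative:

```json
{
  "@context": "https://schema.org",
  "@type": "MedicalClinic",
  "name": "Example Orthopedic Associates",
  "medicalSpecialty": "Orthopedic",
  "availableService": [
    {
      "@type": "MedicalProcedure",
      "name": "Arthroscopic ACL reconstruction"
    },
    {
      "@type": "MedicalProcedure",
      "name": "Rotator cuff repair"
    }
  ],
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Example Ave",
    "addressLocality": "Springfield",
    "addressRegion": "IL",
    "postalCode": "62701"
  },
  "telephone": "+1-555-0100"
}
```

Note that the `address` and `telephone` fields should match your NAP exactly as it appears on Google Business Profile and directory listings — this is the same consistency signal described under institutional trust.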
Content hasn’t been updated in years
Healthcare AI citations have an unusually strict freshness requirement. 95% of ChatGPT citations come from content updated within the last 10 months. A treatment guide published in 2022 — even if still medically accurate — is unlikely to be cited because AI can’t verify whether guidelines have changed since publication. Adding a lastReviewed date and updating content quarterly is table stakes.
The schema stack that healthcare sites actually need
Healthcare schema is more complex than most industries because medical entities have their own dedicated vocabulary in schema.org. Here’s the practical hierarchy, ordered by implementation priority.
