AI search is not a future problem. It is already rewriting what a good content strategy looks like, and the latest warning sign came from a brand most marketers know well. In a recent BBC report on businesses scrambling to get noticed by AI search, HubSpot said it lost 140 million visits in a single year as AI changed how people discover information. HubSpot CMO Kipp Bodnar also said the click-through rate on searches with AI Overviews is about 60% to 70% lower.
That should end the last lazy debate about whether AI search is just another SEO trend. It is not. If your content model still depends on long, broad articles that rank, pull a click, and walk the visitor into a conversion path, you are exposed.
The hard part is that many brands are reacting the wrong way. A recent Verge investigation into AI SEO tactics showed how companies are publishing self-serving comparison pages designed to get cited by AI systems. Some of those pages are working, at least for now. That does not mean they are a smart long-term strategy.
The better move is to build content that an AI system can trust, extract, and cite without it reading like spam to a human. That is the shift this post is built around.
If you need the broader framework behind answer engine optimization, start there. If you need execution, the rest of this post is the practical version.
HubSpot’s traffic drop is a strategy signal, not just a scary stat
The headline number, 140 million visits gone in a year, matters because it happened to a company that built one of the best-known content engines in B2B marketing. This is not a weak brand getting filtered out. It is a mature publisher being hit by a structural change in discovery.
According to the BBC interview, HubSpot is already restructuring its content around smaller chunks of information that AI tools can extract more easily. That is the part many marketers should pay attention to. The implication is clear: the old comprehensive-guide-first, extraction-second model is breaking down.
AI systems do not reward content simply because it is long. They reward content that is easy to parse, specific enough to answer a narrow question, and credible enough to cite. Those are different things.
That is also why the zero-click conversation needs an update. Zero-click used to mean a featured snippet stole some traffic at the top of the SERP. In AI search, the answer itself is becoming the destination. BCG’s view of discoverability frames this well: people are moving from searching for information to receiving direct answers. When that happens, visibility shifts from rank position to citation presence.
For agencies and in-house teams, the real KPI is no longer just, “Did we rank?” It is, “Did we make the answer?”

Why comparison spam is the wrong lesson to learn from AI search
The Verge piece is useful because it names the temptation out loud. If AI systems are citing structured comparison pages, then brands will naturally try to publish more of them. That is exactly what is happening across software, ecommerce, and service categories.
The problem is that this tactic confuses short-term exploitability with durable authority.
A self-serving “best of” page can sometimes win citations because it is formatted cleanly, uses direct language, and covers multiple options in one place. But if the page is obviously tilted, thin on evidence, or inconsistent with what other trusted sources say, it becomes fragile. It may work until the platform adjusts its quality filters, or until stronger sources crowd it out.
Google already said in that Verge report that it is aware of low-quality listicle manipulation and is working against it. Marketers who build their AI visibility strategy around loopholes should expect those loopholes to close.
AEO is stronger when it borrows the right lesson from those pages instead of copying the worst part of them. The right lesson is structure:
- clear question targeting
- easy-to-scan sections
- direct answers near the top
- transparent comparisons when a comparison is actually useful
- evidence and citations that support claims
The wrong lesson is making your own brand look like the winner in every list and hoping the model does not notice.
This matters even more in high-trust industries. Healthcare brands, treatment centers, and B2B companies with long sales cycles cannot afford visibility built on content that feels slippery. If the page wins a citation but loses credibility with the buyer, it was a bad trade.
What citation-ready content looks like in 2026
Most teams do not need more content. They need content that matches how AI retrieval works.
HubSpot’s restructuring points in the right direction. So does what healthcare marketers are saying publicly. A recent Swaay.Health interview with Envision Health put it bluntly: organizations need clear, accurate, patient-friendly content with no fluff, and a deep understanding of what they actually do. That is good advice well beyond healthcare.
Citation-ready content usually has five traits.
1. It answers one thing clearly
A page should have a primary job. Not five jobs. If a page is trying to define a concept, compare vendors, explain a process, sell a service, and rank for every adjacent keyword, the answer quality gets muddy.
AI systems seem to prefer content that can be extracted into a direct response without cleanup. That means tighter scope and cleaner intent.
2. It exposes the answer early
Burying the answer after a long introduction is bad for readers and worse for citation potential. Put the plain-language answer in the first paragraph or two. Then support it.
This is one reason many old-school SEO posts are losing ground. They were written for dwell time and scroll depth. AI search favors extractable clarity.
3. It shows why the source should be trusted
Specific numbers, named methodologies, expert attribution, and external validation all matter. If you reference a market shift, link to the source. If you make a claim about performance, tie it to a real result.
One reason Emarketed’s work has translated into AI visibility is that the outputs are measurable. For example, Seasons in Malibu holds 4,200+ keyword rankings, 814,230 monthly social impressions, and averages 5 patient admits per month driven directly through Emarketed’s marketing. That kind of specificity creates a stronger trust footprint than vague claims about “improved visibility.”
4. It is chunked for retrieval, not padded for volume
Marketers spent years stretching pages to hit a target word count. AI search pushes in the other direction. A long article can still work, but each section needs to function as a self-contained answer block.
Think in terms of modules: definition, use case, risk, example, checklist, FAQ. Each block should make sense on its own.
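If you want a quick way to check this at the draft stage, a small script can flag blocks that are unlikely to stand on their own. This is a minimal sketch that assumes your drafts are markdown files with `##` subheads marking each block; the word-count thresholds are arbitrary starting points, not rules.

```python
# Rough QA pass for "answer block" structure in a markdown draft.
# Assumes H2 subheads ("## ") start each block; thresholds are arbitrary.
import re
import sys

def audit_blocks(markdown_text, max_words=300, min_words=40):
    # Split the draft into blocks at each H2 subhead.
    blocks = re.split(r"\n(?=## )", markdown_text)
    report = []
    for block in blocks:
        lines = block.strip().splitlines()
        if not lines:
            continue
        heading = lines[0].lstrip("# ").strip()
        body = " ".join(lines[1:])
        words = len(body.split())
        flags = []
        if words > max_words:
            flags.append("long: may need splitting into smaller blocks")
        if words < min_words:
            flags.append("thin: may not stand on its own")
        report.append((heading, words, flags))
    return report

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8") as f:
        for heading, words, flags in audit_blocks(f.read()):
            note = " | " + "; ".join(flags) if flags else ""
            print(f"{heading}: {words} words{note}")
```

The numbers are not the point. The point is forcing a block-by-block review instead of judging the article as one long piece.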
5. It is consistent across the site
One strong post is not enough. AI systems compare your page against your wider entity footprint. Service pages, FAQs, blog posts, about pages, and third-party mentions all help shape whether the model sees you as a reliable source.
That is why AEO services work best when technical cleanup, content structure, and off-site credibility are handled together.

The content format that is losing ground
A lot of traffic-dependent brands still publish articles built like this:
- broad keyword target
- 2,000 words of generic context
- one weak section that answers the real query
- a CTA at the end
That format was already tiring. AI search makes it less useful.
If a model can answer the user’s question before the visitor lands on the page, only two kinds of content tend to keep earning value:
- pages that become cited sources inside the answer
- pages that solve a deeper follow-up once the user wants specifics
The middle layer, generic explainers with no original insight, is getting squeezed hard.
This is where many agency content programs need an honest audit. If the editorial calendar is full of interchangeable top-of-funnel posts that any competitor could publish, the content may still get indexed, but it is less likely to be cited and less likely to convert. The same logic applies to healthcare organizations trying to win patient trust. If your pages sound like everyone else’s, AI has no reason to prefer you.
EMARKETER made a useful point in its recent FAQ on GEO and AEO: GEO is about getting mentioned in an AI-generated answer, not just ranking in a list of links. That sounds simple, but it changes how you evaluate almost every content decision.
What agencies should do this quarter
If I were auditing an agency or in-house team right now, I would push five immediate changes.
Rebuild the page template around extraction
Your standard blog template should force clarity early: direct answer, concise definitions, strong subheads, and short paragraphs. If the first real answer appears halfway down the page, fix the template.
Split giant guides into tighter source pages
A giant pillar page can still exist, but it should be supported by narrower pages that answer specific questions cleanly. HubSpot’s own move toward smaller content chunks is the strongest public signal of this shift.
Audit any comparison page for credibility risk
If you publish alternatives pages, software comparisons, or “best” lists, make them fair, transparent, and evidence-backed. If they read like disguised ad copy, they may hurt you later even if they help now.
Track citations, not just rankings
If your reporting still treats Google rankings as the lead indicator of discoverability, it is incomplete. You need recurring test prompts across ChatGPT, Google AI Overviews, Gemini, and Perplexity, plus a way to track who gets cited.
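There is no standard tool for this yet, but the core loop is easy to sketch. The example below assumes the `openai` Python package and an API key; the prompts, brand terms, and model name are placeholders, and a real setup would run the same idea against each assistant you care about and log results over time.

```python
# Minimal citation-tracking loop: run recurring test prompts and check
# whether the brand or domain shows up in the answer. Sketch only;
# prompts, brand terms, and model are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TEST_PROMPTS = [
    "What is answer engine optimization?",
    "Best digital marketing agencies for healthcare",
]
BRAND_TERMS = ["emarketed", "emarketed.com"]

def check_prompt(prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content.lower()
    mentioned = [term for term in BRAND_TERMS if term in answer]
    return {"prompt": prompt, "mentioned": mentioned}

if __name__ == "__main__":
    for prompt in TEST_PROMPTS:
        result = check_prompt(prompt)
        status = "CITED" if result["mentioned"] else "not cited"
        print(f"{status}: {result['prompt']}")
```

The specific model call matters less than the habit: citation checks become a recurring, logged test rather than an occasional manual search.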
Publish with point of view
AI systems pull from pages that say something clearly. Opinion without evidence is weak, but evidence without a position is forgettable. Strong posts combine both.
If you want another example of how this shift is changing measurement, Emarketed’s piece on AI citations and what traffic misses is worth reading after this one.
The healthcare lesson is even sharper
Healthcare marketers should read the current shift with more urgency than most industries. Patients are using AI to ask symptom, treatment, provider, and comparison questions before they ever visit a site. And unlike many retail categories, bad answers in healthcare are not just annoying. They are trust-destroying.
That is why the advice from Envision Health matters. Clear language, clinician-informed content, and pages built around what patients actually ask are not just conversion improvements. They are visibility requirements.
It also lines up with what we see in practice. Seasons in Malibu’s visibility gains did not come from chasing one clever AI trick. They came from sustained authority across SEO, AEO, paid search, social, and web, with measurable outcomes across each layer. In AI search, that kind of depth compounds.
Healthcare brands that keep publishing dense, compliance-heavy copy with no plain-language answer near the top are making themselves harder for both people and AI to trust.

FAQ: What marketers should do about AI search right now
Is AI search killing organic traffic for everyone?
Not evenly, but it is reducing clicks for many informational queries. The BBC report quoting HubSpot’s experience is one of the clearest public signs yet that major brands are already feeling the shift.
Should brands create more comparison pages to get cited?
Only when they can make those pages genuinely useful and credible. The Verge reporting shows why self-serving comparison spam is spreading, but that does not make it durable. Structure helps, bias hurts.
What kind of content gets cited most often?
Content with a narrow purpose, direct answers near the top, clear structure, trustworthy sourcing, and strong topical consistency across the site has the best shot.
Does long-form content still matter?
Yes, but only if it is organized into extractable sections. Length alone is not a moat anymore. A long post needs multiple answer blocks that can stand on their own.
How should healthcare marketers adapt first?
Start by rewriting high-intent pages in plain language, involve clinicians in the review process, and build content around real patient questions rather than internal jargon.
What should agencies measure besides rankings?
Citation frequency, cited pages, AI referral traffic quality, and the downstream conversion value of visitors influenced by AI answers all matter more than rankings alone.
What to do Monday morning
Pick your top 20 informational pages and read only the first 150 words of each. If the answer is buried, rewrite it. If the page tries to do too much, split it. If the claims are vague, add evidence. If the structure is messy, clean it.
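If reading 20 pages by hand feels slow, a short script can pull the openings for you. This sketch assumes the `requests` and `beautifulsoup4` packages and a hand-picked URL list; it is a convenience for the audit, not a substitute for judgment.

```python
# Pull the first ~150 words of each page so the answer-first audit
# can be done in one pass. URL list is a placeholder.
import requests
from bs4 import BeautifulSoup

PAGES = [
    "https://www.example.com/blog/post-one",
    "https://www.example.com/blog/post-two",
]

def first_words(url, word_count=150):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Drop navigation, scripts, and styles before extracting text.
    for tag in soup(["script", "style", "nav", "header", "footer"]):
        tag.decompose()
    words = soup.get_text(" ", strip=True).split()
    return " ".join(words[:word_count])

if __name__ == "__main__":
    for url in PAGES:
        print(f"--- {url}\n{first_words(url)}\n")
```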
Then review every comparison-style post on the site and ask one uncomfortable question: would this still deserve to rank if the brand name were removed? If the answer is no, fix it before the platforms do it for you.
HubSpot’s 140 million lost visits should not push marketers into panic. It should push them into precision. The brands that win the next phase of search will not be the ones publishing the most content. They will be the ones publishing the clearest, most citable, and most trustworthy answers.