
Share of Model: The AI Search Metric Your Agency Isn't Tracking Yet

Share of Model (SOM) measures how often AI systems cite your brand when answering buyer questions. Here's what it is, how to measure it, and what's actually moving it in 2026.

Your organic traffic looks stable. Your keyword rankings are holding. But your biggest competitor just got named three times in a Perplexity answer to a question your best prospect was asking this morning. You weren’t mentioned at all.

That’s the gap that Share of Model (SOM) was designed to expose.

SOM is an emerging measurement framework that tracks how often your brand gets cited, mentioned, or recommended by AI systems when users ask questions relevant to your category. It’s not a tool yet, exactly. It’s a metric concept that the industry converged on this week as practitioners realized traditional share of voice no longer captures what matters in AI search. And agencies that start tracking it now will have a meaningful competitive advantage within the year.

What Share of Model Actually Measures

The traditional share of voice question was: of all the ads and content in my category, what percentage is mine?

The SOM question is: of all the AI responses given to high-intent questions in my category, what percentage mention my brand?

The practical difference is significant. A brand can dominate paid search, rank #1 for a dozen keywords, and still score near zero on SOM because AI systems draw from a different set of signals when constructing answers. According to research published this week by DecodesFuture, top-tier enterprise brands in competitive categories are capturing up to 30% of AI response volume. A score of 20% or higher is considered strong. Most brands don’t know where they stand.

The metric splits into two related scores: Share of Model (SOM), which measures citations across AI systems like ChatGPT and Gemini, and Share of LLM (SoLLM), which focuses on language model responses specifically. Both point at the same underlying question: is your brand in the conversation when AI answers questions your buyers are asking?

[Image: AI brand visibility measurement across platforms]

Why This Matters More Than Rankings Right Now

Google’s traditional results page is increasingly competing with itself. AI Overviews appear above organic results. AI Mode, now widely rolled out with Gemini 3 Flash integration, provides synthesized responses before any click happens. ChatGPT crossed 500 million monthly users. Perplexity is growing fast among professional and technical audiences.

The pattern is consistent: users who would have clicked to your site are now reading AI-generated summaries. In healthcare, this shift is stark. Patients asking ChatGPT “what’s the best rehab near me” or “how do I find a behavioral health program” receive structured AI answers that may or may not include your facility. A Boston Globe investigation from last month found that physicians now take for granted that patients arrive with AI-generated medical information. “It used to be Google searches, WebMD. Now it’s ChatGPT, Perplexity, Claude,” one psychiatrist told the paper.

For service businesses, professional firms, healthcare providers, and agencies, the brands that show up in AI-generated answers have a first-mover advantage over those that don’t. SOM makes that advantage measurable.

The issue isn’t that rankings don’t matter. They still do. But rankings don’t tell you whether ChatGPT mentioned you this morning.

The Reddit Signal Nobody Saw Coming

Here’s the finding that should change how you think about content distribution.

Reddit is currently the most heavily cited domain across leading AI models, outranking LinkedIn, Wikipedia, and YouTube in terms of the percentage of AI responses that reference the platform. That finding, published this week by Everso Media and covered in marketing circles immediately, isn’t just an interesting data point. It’s a strategic signal with immediate implications.

Why does Reddit dominate AI citations? Because AI systems are trained to weight authentic, community-driven discussion over branded content. Reddit threads are conversational, contain genuine expertise, include corrections and counterarguments, and lack the promotional layer that makes most brand content less useful as an AI source. When an AI model encounters a Reddit discussion where a community expert explains something clearly, it’s high-signal, low-spin content. Exactly what these models are optimized to surface.

The implication for agencies: if your clients aren’t participating authentically in Reddit communities relevant to their industry, they are structurally absent from the most AI-cited content source on the internet. That’s not an accident. That’s a gap you can fix.

This doesn’t mean astroturfing subreddits or creating fake accounts to post product mentions. It means legitimate participation: answering questions in industry subreddits, sharing expert insights without the pitch, building the kind of genuine reputation that communities reward and AI models subsequently cite.

What Else Moves Your SOM Score

Reddit is the most surprising signal, but it’s not the only one. Based on current patterns, these factors drive AI citation frequency:

Content depth and completeness. AI systems prefer sources that answer questions fully rather than partially. A 2,000-word piece that addresses a question from multiple angles, anticipates follow-ups, and includes concrete examples is far more likely to be cited than a 400-word post optimized around a keyword. “Authority-first content” is the phrase the industry has landed on. Fewer pieces, more depth, more trust.

Third-party mentions and press coverage. When multiple independent sources reference your brand in context, AI models treat that as an authority signal. A brand that appears in industry publications, gets quoted in expert roundups, and earns mentions in other authoritative content builds what some researchers are calling “synthetic authority” — the kind of credibility that AI systems recognize and cite.

Structured data and machine-readable signals. Schema markup, properly configured FAQs, well-structured headers, and llms.txt files all help AI systems parse your content correctly. An AI model that can’t reliably identify what your page is about won’t cite it. One that can extract a clear answer from a structured format will. The llms.txt generator tool makes it straightforward to add these signals to any site.
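As a concrete illustration, here is a minimal FAQPage snippet in schema.org JSON-LD; the question and answer text are placeholders, and a real page would list one entry per FAQ:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is Share of Model (SOM)?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Share of Model measures how often AI systems cite your brand when answering questions in your category."
      }
    }
  ]
}
```

Embedded in a `<script type="application/ld+json">` tag, this gives crawlers an unambiguous question-and-answer pair to extract, rather than forcing them to infer structure from your page layout.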

Brand entity consistency. Your brand name, core offerings, and category signals should appear consistently across your website, your Google Business Profile, your social profiles, and any third-party directory listings. AI models build entity models of brands from multiple sources. Inconsistency creates ambiguity that reduces citation probability.

Topic authority breadth. A site with 40 deep, interlinked pieces on one topic will outperform a site with 400 thin pieces spread across 20 topics when it comes to AI citations. Topic authority builder tools, like the one at Emarketed, help map the content coverage needed to own a subject area in AI responses.

[Image: Content authority pyramid showing depth over volume]

How to Start Measuring Your Own SOM

You have two approaches: tool-assisted and manual.

Tool-assisted tracking is getting more accessible. Platforms designed specifically for AI visibility monitoring now include Scrunch (which monitors brand presence across LLMs and tracks AI bot crawling behavior), Profound, Nightwatch, Birdeye Search AI, Otterly AI, and Ahrefs Brand Radar. Most are enterprise-priced for now, but pricing is dropping as the category matures.

Manual prompt testing is free and surprisingly informative. Choose 10 to 20 high-intent questions your ideal customer might ask an AI about your category. Run them through ChatGPT, Perplexity, and Google AI Overviews. Record whether your brand appears, where it appears, and how it’s described. Do the same for your top three competitors. Run this test monthly and track changes over time.

This gives you a basic SOM proxy: of the 20 prompts you tested, your brand appeared in how many responses? Your competitor appeared in how many? That gap is your priority list.
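The tally above is simple enough to script. This is a minimal sketch of that proxy calculation; the prompts, platforms, and brand names are hypothetical placeholders, and in practice you would load your recorded test results from a spreadsheet:

```python
# Compute a basic Share of Model proxy from manual prompt tests.
# Each record is (prompt, platform, set of brands the response mentioned).
from collections import defaultdict

def som_proxy(results, brands):
    """Return {brand: share}, where share = fraction of tested
    responses that mentioned the brand."""
    counts = defaultdict(int)
    total = len(results)
    for _prompt, _platform, mentioned in results:
        for brand in brands:
            if brand in mentioned:
                counts[brand] += 1
    return {brand: counts[brand] / total for brand in brands} if total else {}

# Four recorded responses across two platforms (hypothetical data).
results = [
    ("best rehab near me", "ChatGPT", {"Acme Health"}),
    ("best rehab near me", "Perplexity", {"Acme Health", "Rival Care"}),
    ("how to find a behavioral health program", "ChatGPT", set()),
    ("how to find a behavioral health program", "Perplexity", {"Rival Care"}),
]
shares = som_proxy(results, ["Acme Health", "Rival Care"])
print(shares)  # {'Acme Health': 0.5, 'Rival Care': 0.5}
```

Rerun the same prompt set monthly and the month-over-month change in each brand's share becomes your trend line.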

For keyword-level research into what your prospects are actually asking AI systems, the AI keyword researcher tool surfaces the question-format queries that drive AI answer responses, which is different from the head terms that drive traditional search.

The Healthcare Angle Is Urgent

For healthcare marketers, particularly those working with behavioral health facilities, rehab centers, and mental health providers, this isn’t a future concern. It’s a current patient acquisition issue.

OpenAI launched ChatGPT Health in February 2026, a dedicated hub specifically for medical inquiries. Patients are using it now to ask about treatment options, compare facilities, and research conditions before making calls. The ECRI Institute named misuse of AI chatbots in healthcare the most significant health technology hazard of 2026, which tells you how pervasive AI health queries have become.

A behavioral health facility that ranks well in Google but doesn’t appear in AI answers is invisible to a growing segment of patients who never reach the organic results. For high-intent, time-sensitive searches (“I need help for my son,” “how to find alcohol rehab near me”), the AI answer is increasingly the first and sometimes only stop.

The strategy here is the same as for any other industry: deep, authoritative content, third-party mentions, schema markup, and llms.txt — but the urgency is higher because the audience is making time-sensitive decisions and the competition to appear in AI responses is less developed.

Talking to Clients About SOM

Most clients are still asking about keyword rankings. That’s fine. But agencies that can explain SOM — that 25% of their search volume may never reach a traditional results page, and that AI responses are making brand recommendations right now — are positioned as strategic partners rather than tacticians.

The conversation doesn’t have to be complex. Ask your client: “When someone asks ChatGPT about [your category], do you show up?” Most will have no idea. Run the test in front of them. That conversation changes priorities.

[Image: Agency team presenting AI visibility metrics to client]

Frequently Asked Questions

What is Share of Model (SOM) in AI search? Share of Model measures the percentage of AI-generated responses, across platforms like ChatGPT, Perplexity, and Google AI Overviews, that mention or cite your brand when users ask questions relevant to your category. A higher SOM means your brand appears more consistently in the answers your prospects are already receiving.

How is SOM different from traditional share of voice? Traditional share of voice measures your presence in paid or earned media relative to competitors. SOM measures your presence in AI-generated answer content specifically. A brand can have strong traditional share of voice and near-zero SOM if its content and citation signals haven’t been optimized for AI systems.

What’s the fastest way to improve my brand’s SOM score? Start with content depth: audit your existing content to ensure your most important topics are covered comprehensively rather than shallowly. Add schema markup and structured FAQ sections. Build legitimate third-party mentions through PR and industry participation. And, counterintuitively, consider authentic Reddit participation in relevant subreddits, since Reddit is currently the most cited domain across leading AI models.

Can small businesses compete with large brands for AI citations? Yes, particularly in niche or local categories. AI systems are topic-specific rather than domain-authority-focused. A local healthcare provider that owns its topic area with deep, authoritative content can appear in AI responses ahead of large national brands that treat that topic superficially.

How often should I measure my SOM? Monthly is a useful cadence for manual prompt testing. Tool-based monitoring can run continuously. The key is consistency: track the same prompts over time to measure trend, not just snapshot position.

Is optimizing for SOM the same as AEO? SOM is a measurement framework within the broader discipline of AEO (Answer Engine Optimization). AEO describes the practices used to improve how AI systems understand, trust, and cite your content. SOM is one way to measure whether those practices are working. You can use the AI search optimizer tool to audit your current AEO readiness and identify gaps.

Where This Goes Next

The agencies tracking SOM in early 2026 are doing what the early SEO practitioners did with keyword rankings in 2003: building a measurement muscle before the market standardizes around the metric. Within 12 to 18 months, expect clients to ask for SOM reports the same way they ask for ranking reports today.

The underlying technology is moving faster than most marketing teams can track. Google’s AI Mode is expanding. ChatGPT Health is launching dedicated tools for medical queries. Perplexity is adding more commercial integrations. Every major AI assistant is increasing the frequency and specificity of brand citations in responses.

The brands that start building synthetic authority now, through depth of content, breadth of third-party mentions, community presence, and technical structure, will have a citation footprint that compounds. The ones that wait for clearer proof of ROI will spend 2027 explaining to clients why they’re invisible in AI answers.

Start with the prompt test. Run 20 buyer questions through three AI platforms. See where you stand. That’s the first SOM measurement your agency has probably ever done. It won’t be the last.

About the Author

Matt Ramage

Founder of Emarketed with over 25 years of digital marketing experience. Matt has helped hundreds of small businesses grow their online presence, from local startups to national brands. He's passionate about making enterprise-level marketing strategies accessible to businesses of all sizes.