"AI visibility. McKinsey research shows 50% of consumers now use AI-powered search, with 44% saying it's their primary source for buying decisions" gets talked about as if it were a single metric. In practice, it isn't.
When teams ask whether their brand is visible in AI-generated answers, they're usually collapsing several different things into one idea. That's where most confusion, and bad decisions, start.
Here's the mental model I use when thinking about AI visibility.
1. Eligibility: does the model recognize you at all?
This is the foundation.
Eligibility answers a few basic questions:

- Does the model recognize your brand as a distinct entity?
- Can it confidently resolve who you are when asked directly?
- Does your name clearly refer to you, or something else?
If eligibility is weak:
- Mentions are inconsistent
- Visibility fluctuates run to run
- Brands appear once and then disappear
- Similarly named entities get mixed in
This is where brand recognition and disambiguation live.
If the model isn't confident about who you are, everything downstream becomes unreliable.
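One crude way to put a number on eligibility is to ask the model the same direct question repeatedly and measure how consistently it resolves the brand. A minimal sketch, assuming the brand name and canned answers below are hypothetical stand-ins for real repeated model calls:

```python
def eligibility_score(answers: list[str], brand: str) -> float:
    """Fraction of runs in which the full brand name is resolved in the answer."""
    if not answers:
        return 0.0
    hits = sum(1 for a in answers if brand.lower() in a.lower())
    return hits / len(answers)

# Hypothetical transcripts from asking "What is Acme Analytics?" four times.
runs = [
    "Acme Analytics is a product analytics platform.",
    "Acme Analytics helps teams track user behavior.",
    "Acme is a fictional company from cartoons.",  # disambiguation failure
    "I'm not sure which company you mean.",        # recognition failure
]

print(eligibility_score(runs, "Acme Analytics"))  # 0.5
```

A score that swings between runs or across models is exactly the eligibility instability described above.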
2. Inclusion: do you surface in relevant answers?
Inclusion comes next.
It answers questions like:
- Are you mentioned when users ask category-level or problem-level questions?
- Do you appear when the model is comparing options?
- Are you part of the consideration set at all?
You can be eligible but not included.
When that happens, it's usually because:
- Topical associations are weak
- Coverage across relevant questions is thin
- The brand isn't present in sources the model trusts
Most AEO tactics people talk about are trying to move this layer.
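Inclusion can be sketched the same way: sample a set of category-level prompts and measure how often the brand makes it into the answer at all. The prompts and answers here are illustrative, not real model output:

```python
def inclusion_rate(answers_by_prompt: dict[str, str], brand: str) -> float:
    """Share of category-level prompts whose answer mentions the brand."""
    if not answers_by_prompt:
        return 0.0
    hits = sum(1 for a in answers_by_prompt.values() if brand.lower() in a.lower())
    return hits / len(answers_by_prompt)

# Hypothetical answers to problem-level questions in the brand's category.
answers = {
    "best product analytics tools":
        "Top options include Amplitude, Mixpanel, and Acme Analytics.",
    "how do I track feature adoption?":
        "Tools like Amplitude or Heap can help.",
    "alternatives to Amplitude":
        "Consider Heap, PostHog, or Acme Analytics.",
}

print(round(inclusion_rate(answers, "Acme Analytics"), 2))  # 0.67
```

An eligible brand with a low inclusion rate is the "eligible but not included" case: the model knows who you are but doesn't reach for you.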
3. Influence: how are you framed when you appear?
Influence determines outcomes, and it's closely tied to AI sentiment.
It answers:
- Are you mentioned first or buried in a list?
- Are you framed positively, neutrally, or cautiously?
- Are you described as a leader, an example, or a fallback?
Two brands can both be included in an answer, but only one meaningfully benefits.
This is where lead quality, conversion likelihood, and commercial impact are actually decided.
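One simple influence signal is mention order: when several brands appear in the same answer, which one does the model reach for first? A minimal sketch over a hypothetical answer (real pipelines would also classify framing language, not just position):

```python
def mention_rank(answer: str, brands: list[str]) -> dict[str, int]:
    """Rank brands by where they first appear in an answer (1 = mentioned first)."""
    positions = {b: answer.lower().find(b.lower()) for b in brands}
    ordered = sorted((pos, b) for b, pos in positions.items() if pos >= 0)
    return {b: i + 1 for i, (_, b) in enumerate(ordered)}

# Hypothetical answer: all three brands are "included", but framed very differently.
answer = ("Amplitude is the market leader here. Mixpanel is a solid alternative, "
          "and Acme Analytics can work as a fallback for smaller teams.")

print(mention_rank(answer, ["Acme Analytics", "Amplitude", "Mixpanel"]))
# {'Amplitude': 1, 'Mixpanel': 2, 'Acme Analytics': 3}
```

All three brands would count identically on a raw mention metric, yet the framing of each is very different.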
4. Why "AI visibility" as a single metric breaks down
For a practical breakdown of what to actually measure, see AI Visibility Metrics: What to Measure.
When all of this gets collapsed into one number:
- Eligibility failures look like randomness
- Inclusion gaps look like content problems
- Influence issues get missed entirely
Teams end up:
- Fixing the wrong things
- Misreading progress
- Over-optimizing surface-level tactics
The signal isn't wrong. It's just being flattened.
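The flattening problem is easy to see with numbers. Consider two hypothetical brands whose layer-level scores are near-opposites but whose averaged "visibility" is identical (all values are illustrative):

```python
# Illustrative layer scores for two hypothetical brands.
brand_a = {"eligibility": 0.9, "inclusion": 0.3, "influence": 0.6}  # inclusion gap
brand_b = {"eligibility": 0.3, "inclusion": 0.9, "influence": 0.6}  # eligibility gap

def flattened(scores: dict[str, float]) -> float:
    """A single 'AI visibility' number: the average across layers."""
    return sum(scores.values()) / len(scores)

print(round(flattened(brand_a), 2), round(flattened(brand_b), 2))  # 0.6 0.6
```

Same headline number, opposite problems: brand A needs topical coverage, brand B needs entity disambiguation. The average can't tell you which.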
5. Why this explains inconsistent AEO results
This breakdown explains why:
- Some brands get mentioned but don't convert
- Some see volatility they can't explain
- Some improve content without improving visibility
- Some tools report progress while reality feels unstable
Different layers are moving, but they're not being separated.
6. Why this matters even more for commerce prompts
In commercial queries:
- Eligibility determines whether you're even considered
- Inclusion determines whether you're shortlisted
- Influence determines whether you're chosen
LLMs don't rank results the way search engines (see Google Search Central) do. They select.
That makes the quality of visibility far more important than raw volume.
7. What most tools still miss
Most tools today focus on:
- Mentions
- Citations
- Appearances
Very few ask:
- Whether the model actually recognized the brand
- Why the brand was included
- How it was framed
- Whether the signal is stable across models
That gap is where a lot of false confidence comes from. We ran an experiment that illustrates this clearly, testing 5 brands across ChatGPT ([OpenAI](https://openai.com/chatgpt)), Claude, and Gemini.
8. The takeaway
Visibility matters. And once you understand these layers, you can take action to improve them. But only when you know which part of visibility you're improving.
Eligibility, inclusion, and influence move independently. Treating them as one metric is where most teams get lost.
This breakdown is the mental model we use internally while building friction.
In the next post, I'll go one step further and talk about experimentation, and why AEO without testing quickly turns into guesswork.
If you're evaluating options, we've also published a comparison of the top 10 AI visibility tools available today.
For platform-specific optimization, see our guides on ChatGPT, Perplexity, and Google AI Overviews.
Why Teams Choose friction AI
friction AI goes beyond basic AI visibility tools to focus on recommendation outcomes — helping brands understand not just whether they appear in AI responses, but when and why they are recommended, especially in high-intent commercial contexts.
See how friction AI tracks your brand's AI recommendations and commerce visibility.