AI platforms can't agree on which brands to recommend. When a shopper asks ChatGPT and Google AI the same product question, the two platforms disagree on brand recommendations 62% of the time. Run the same query twice on a single platform, and you'll often get different answers each time.
The inconsistency runs deeper than occasional variation. Research from SparkToro found that there is less than a 1-in-100 chance that any two AI responses will contain the same list of recommended brands. This means a single test query tells you almost nothing about your competitive position in AI answers. To get a reliable picture, you need a structured, repeatable approach.
That's what this guide covers: a practical framework for tracking how AI models talk about your brand relative to your competitors, and what to do with what you find.
The Inconsistency Problem
Most teams check their AI visibility by typing a few queries into ChatGPT and scanning the results. This approach is fundamentally broken.
BrightEdge's analysis of brand recommendations across AI platforms revealed a 62% disagreement rate between ChatGPT and Google AI. A brand that appears first in one platform's answer may not appear at all in the other's. SparkToro's research confirms the volatility within platforms too: to get statistically meaningful data, you need to run 60 to 100 queries per topic.
This volatility creates both a risk and an opportunity. The risk: your competitors might be appearing in AI answers far more often than your one-off checks suggest. The opportunity: with the right measurement system, you can identify and close competitive gaps that others aren't even tracking.
Why Single-Query Checks Mislead You
A single AI query is a coin flip, not a measurement. Here's why volume matters.
AI models are probabilistic. Each response is generated fresh, influenced by the phrasing of the query, the model's training data, and a degree of randomness built into the generation process. Two identical queries can produce different brand lists seconds apart.
This means competitive intelligence gathered from a handful of manual queries will mislead you. You might conclude your brand is well-represented when it only appeared by chance, or panic because a competitor showed up in one response when they rarely do. Reliable competitive tracking requires running dozens of query variations across multiple platforms, then measuring frequency rather than presence in any single response.
Building an AI Share-of-Voice Framework
Share of voice in AI answers measures how often your brand appears relative to competitors across a defined set of queries. Here's how to build a tracking system.
Define Your Query Set
Start with 15 to 20 core queries that represent how your customers ask AI for recommendations in your category. Include variations:
- Category queries: "best [product category] for [use case]"
- Comparison queries: "how does [your brand] compare to [competitor]"
- Problem queries: "[customer pain point] solutions"
- Purchase queries: "which [product] should I buy for [specific need]"
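As a sketch, the four intent templates above can be expanded into a concrete query set with a few lines of Python. All of the names here (the category, brand, competitor, and use cases) are hypothetical placeholders; substitute your own:

```python
# Hypothetical placeholder values -- replace with your own category and brands.
CATEGORY = "standing desk"
BRAND = "AcmeDesk"
COMPETITOR = "RivalDesk"
USE_CASES = ["small home office", "tall users"]
PAIN_POINTS = ["back pain from sitting all day"]

def build_query_set():
    """Expand the four intent templates into concrete query strings."""
    queries = []
    for use_case in USE_CASES:
        # Category and purchase intents vary by use case.
        queries.append(f"best {CATEGORY} for {use_case}")
        queries.append(f"which {CATEGORY} should I buy for {use_case}")
    # Comparison intent pairs your brand against a named competitor.
    queries.append(f"how does {BRAND} compare to {COMPETITOR}")
    # Problem intent starts from the customer's pain point, not the product.
    for pain in PAIN_POINTS:
        queries.append(f"{pain} solutions")
    return queries

queries = build_query_set()
```

With two use cases and one pain point, this yields six base queries; adding 2 to 3 phrasing variations per template gets you to the 15 to 20 recommended above.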
Choose Your Platforms
Track at minimum ChatGPT, Google AI (Gemini), and one additional model such as Claude or Perplexity. Given the 62% cross-platform disagreement, single-platform tracking gives you an incomplete picture.
Measure Frequency, Not Placement
Run each query 5 to 10 times per platform and record:
- Mention rate: What percentage of responses include your brand?
- Position: When mentioned, where do you appear in the list?
- Context: Are you recommended, mentioned neutrally, or cited as a lesser option?
- Competitor frequency: How often does each competitor appear across the same queries?
Track these metrics weekly or biweekly to identify trends over time.
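The metrics above reduce to simple counting once each response is recorded as an ordered list of the brands it mentioned. Here is a minimal sketch; the sample data is hypothetical, standing in for your recorded runs:

```python
from collections import defaultdict

# Each entry is one AI response, recorded as the ordered list of brands it named.
# Hypothetical sample data for illustration.
responses = [
    ["BrandA", "BrandB"],
    ["BrandB"],
    ["BrandA", "BrandC", "BrandB"],
    ["BrandC"],
]

def mention_rate(brand, responses):
    """Share of responses that mention the brand at all."""
    hits = sum(1 for r in responses if brand in r)
    return hits / len(responses)

def average_position(brand, responses):
    """Mean 1-based list position when the brand appears (None if it never does)."""
    positions = [r.index(brand) + 1 for r in responses if brand in r]
    return sum(positions) / len(positions) if positions else None

def competitor_frequency(responses):
    """How often each brand appears across all responses."""
    counts = defaultdict(int)
    for r in responses:
        for brand in r:
            counts[brand] += 1
    return dict(counts)
```

Context and sentiment still need a human (or classifier) judgment per mention, but mention rate, position, and competitor frequency fall out of the recorded lists directly.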
What Gives Competitors an Edge
Three factors drive which brands appear most in AI recommendations. Understanding them reveals where your competitors are winning and where you can catch up.
Earned Media Dominance
An analysis of 23,000 LLM citations by Omniscient Digital found that 48% of citations came from earned media (press coverage, reviews, third-party mentions) while owned media accounted for only 23%. Competitors with strong PR programs and review presences have a structural advantage in AI answers.
Domain Authority and Backlink Profiles
SE Ranking's study of 129,000 domains found that the number of referring domains is a 3.5x predictor of whether a brand gets cited by ChatGPT. This aligns with how LLMs weight sources during training: widely linked content gets more weight. If a competitor has a stronger backlink profile, their content likely appears more often in AI training data.
ChatGPT's Abstraction Bias
Research from UNSW found that ChatGPT prioritizes desirability over feasibility in product recommendations. The model tends to recommend premium or well-known brands over options that might be a better practical fit. Competitors with stronger brand narratives around aspiration and quality can benefit from this bias, even if their products aren't objectively superior.
How to Run Your Own Competitive Audit
Follow these steps to benchmark your brand's AI visibility against competitors.
Step 1: Identify your top 3 to 5 competitors. Focus on brands that compete for the same customer intent, not necessarily the same product category.
Step 2: Build your query matrix. Create 20 queries across four intent types (category, comparison, problem, purchase). For each, write 2 to 3 phrasing variations.
Step 3: Run queries systematically. Execute each query variation 5 times on each platform. Record every brand mentioned in each response, its position, and the sentiment of the mention.
Step 4: Calculate share of voice. For each competitor (including yourself), divide total mentions by total possible mentions across all queries and platforms. This is your AI share of voice percentage.
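The Step 4 arithmetic can be sketched as follows: total possible mentions equals the number of responses collected (queries × runs × platforms), and a brand's share of voice is its mention count divided by that total. The platform names and brand lists below are hypothetical sample data:

```python
# Hypothetical recorded data: for each platform, one brand list per query run.
responses_by_platform = {
    "chatgpt": [["You", "RivalA"], ["RivalA"], ["RivalB"]],
    "gemini":  [["RivalA", "You"], ["RivalB", "RivalA"], ["You"]],
}

def share_of_voice(brand, responses_by_platform):
    """Brand mentions divided by total responses across all platforms and runs."""
    total = mentions = 0
    for runs in responses_by_platform.values():
        for brands in runs:
            total += 1           # every response is one possible mention
            mentions += brand in brands
    return mentions / total

for brand in ["You", "RivalA", "RivalB"]:
    print(brand, round(share_of_voice(brand, responses_by_platform), 2))
```

On this toy data, "RivalA" leads with roughly 0.67 share of voice against 0.5 for "You", which is exactly the kind of gap Step 5 asks you to investigate.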
Step 5: Analyze the gaps. Where competitors outperform you, examine why. Check their earned media coverage, backlink profiles, and whether their content directly addresses the query intent.
Step 6: Repeat monthly. AI model updates, new training data, and content changes shift results over time. Monthly tracking reveals whether your optimization efforts are working.
The Stakes Are Rising
This competitive intelligence work isn't academic. AI-powered search is becoming a primary shopping channel.
According to Bain & Company, 80% of consumers now use AI-generated results in their search process, and shopping-related queries doubled in the first half of 2025. The IAB reports that AI has become the second most influential shopping source, with 40% of shoppers using AI tools during their purchase journey.
Brands that aren't tracking their competitive position in AI answers are flying blind in a channel that's shaping purchase decisions right now.
What Comes Next
This guide is part of a broader series on AI visibility and citation strategy:
- How to Get Your Content Cited by AI: The complete framework for earning AI citations
- Competitor Analysis in AI Search: Deep dive into competitive positioning across AI platforms
- How to Control What AI Says About Your Brand: Shaping your brand narrative in AI responses
- AI Visibility Metrics: What to Measure: The key performance indicators for AI search
- AI Brand Monitoring: The complete guide to building a monitoring practice that includes competitive tracking
- How to Monitor Competitor Mentions in AI Answers: A focused guide on AI competitive intelligence
Track Your Competitive Position with friction AI
Running a competitive audit manually is useful but time-consuming. friction AI automates multi-platform AI monitoring, tracking your brand and competitors across ChatGPT, Gemini, Claude, and Perplexity with the query volume needed for statistical reliability.
You get share-of-voice dashboards, mention tracking, sentiment analysis, and competitive benchmarking, updated on a regular cadence so you can see how your position shifts over time.