First, a quick reality check
Before assuming you're invisible, run the test properly. The fact that your one query didn't surface your brand isn't proof of anything — AI engines have meaningful response variance, and a single query is a sample size of one. The threshold for "I have a real problem" is closer to:
- You ran 5–10 commercial queries spanning your category's awareness, consideration, and decision stages
- You ran each query 2–3 times (different sessions, ideally different IPs)
- You ran across at least 3 engines (ChatGPT, Claude, Perplexity at minimum — Gemini and AI Overviews if you can)
- Your brand was named in less than 30% of responses where it reasonably should have been
If you've done that, and the gap is real, one or more of the 12 reasons below is at play. They're roughly ordered by how often we see each one as the primary cause in audits.
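That 30% threshold is simple to compute once responses are logged. A minimal sketch in Python, with placeholder queries and response text standing in for your own logged data:

```python
# Compute brand mention rate across logged AI responses.
# `responses` maps (engine, query, run) -> response text; every value
# below is a placeholder standing in for your own logged data.
responses = {
    ("chatgpt", "best crm for startups", 1): "Top picks: HubSpot, Pipedrive...",
    ("chatgpt", "best crm for startups", 2): "Consider Acme CRM or HubSpot...",
    ("claude", "crm alternatives to salesforce", 1): "Acme CRM, Zoho...",
}

def mention_rate(responses: dict, brand: str) -> float:
    """Fraction of responses that name the brand (case-insensitive)."""
    hits = sum(brand.lower() in text.lower() for text in responses.values())
    return hits / len(responses)

rate = mention_rate(responses, "Acme CRM")
print(f"Mentioned in {rate:.0%} of responses")  # below 30% suggests a real gap
```

Naive substring matching is enough for a first pass; a real audit needs alias handling ("Acme", "Acme CRM", "AcmeCRM") before the percentage means much.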
The 12 reasons
Insufficient editorial presence in the sources LLMs train on
This is the most common cause by a significant margin. AI engines learn what brands matter in a category from the editorial corpus they train on — Wikipedia, mainstream press, trade publications, Reddit, podcast transcripts, YouTube transcripts. If you don't appear in those sources at meaningful volume, the model's representation of your category doesn't include you.
This isn't about whether any articles mention you. It's about the volume and quality. A category leader will have hundreds or thousands of mentions across high-authority sources. A challenger brand might have a dozen. The gap shows up in AI answers the same way it shows up in any consensus view: the brands with overwhelming source presence dominate.
Fix horizon: 6–18 months
Your brand narrative is inconsistent across sources
If five articles describe your brand five different ways — a tech company, a media company, a marketing platform, an enterprise tool, a startup — the model can't form a coherent representation of you. When asked "best X for Y," it can't confidently place you in a category because it doesn't know which category you're in.
Run this test: have a non-marketer Google your company name and read the first 10 results. Does each one describe you the same way? If not, your positioning is muddy at the source level, and AI engines reflect that muddiness.
Fix horizon: 3–6 months
You're blocking AI crawlers
Look at your robots.txt. Look hard. A surprising number of brands — usually because of a paranoid response to "AI is stealing our content" headlines — have explicitly blocked the AI crawlers that would otherwise index their content for live retrieval.
The crawlers to check for: GPTBot, ClaudeBot, PerplexityBot, Google-Extended, CCBot. If any of these are blocked, you've made yourself invisible to the engines that respect those directives. Whether to allow them is a real strategic question (training data vs. retrieval, IP concerns), but if you've blocked them by accident or through inherited config, it's the easiest fix on this list.
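If you'd rather script this check than eyeball the file, here's a small sketch using only the Python standard library; the sample robots.txt is hypothetical:

```python
# Check which common AI crawlers a robots.txt blocks from the homepage.
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended", "CCBot"]

def blocked_ai_crawlers(robots_txt: str) -> list[str]:
    """Return the AI crawlers that may not fetch '/'."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return [bot for bot in AI_CRAWLERS if not rp.can_fetch(bot, "/")]

# Hypothetical inherited config that blocks two AI crawlers:
sample = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""
print(blocked_ai_crawlers(sample))  # ['GPTBot', 'CCBot']
```

To run it against a live site, fetch `https://yourdomain.com/robots.txt` and pass the body to the same function.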
Your site is too JavaScript-heavy for AI crawlers
AI crawlers, like older search crawlers, often don't fully render JavaScript. If your most important content — pricing, product descriptions, comparisons, customer logos — only appears after a client-side render, AI crawlers may see an empty shell.
Test this: open your homepage in Chrome, disable JavaScript from the DevTools command menu (Cmd+Shift+P → "Disable JavaScript"), and reload. What you see is roughly what a JS-blind crawler sees. If your value proposition isn't there, neither is your brand for AI engines.
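The same test can be approximated in code: take the raw HTML a JS-blind crawler receives and grep it for key content. A toy sketch, with placeholder HTML and phrases:

```python
# Rough stand-in for the manual "disable JavaScript" test: inspect the
# raw server HTML (what a JS-blind crawler sees) for key content.
def missing_phrases(html: str, phrases: list[str]) -> list[str]:
    """Key phrases that do not appear in the raw HTML (case-insensitive)."""
    lower = html.lower()
    return [p for p in phrases if p.lower() not in lower]

# Empty-shell HTML typical of a fully client-side rendered app:
shell = "<html><body><div id='root'></div></body></html>"
print(missing_phrases(shell, ["pricing", "customer logos"]))
# -> ['pricing', 'customer logos']: the crawler sees none of it
```

In practice you'd fetch the page with a plain HTTP client (no headless browser) so that nothing gets rendered before the check runs.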
Fix horizon: 1–4 weeks
Your competitors are doing better Digital PR than you are
Most marketers under-rate Digital PR's effect on AI visibility. Coverage in editorial publications doesn't just drive direct referral traffic — it creates the source corpus that AI engines synthesise from. A competitor that gets 30 trade-press mentions per quarter while you get 3 will appear in AI answers at roughly 10× your rate, regardless of who has the better product.
This compounds. PR budget that felt like a vanity metric in 2022 is now the most direct lever on AI visibility. Brands without an active PR motion are losing AI search every day.
Fix horizon: 3–6 months
You're missing from the structured data sources LLMs trust
Some sources are weighted disproportionately heavily in AI training and retrieval. If you're absent from these, you're absent from the AI's confident knowledge of your category:
- Wikipedia — the single biggest lever. If you should have an article and don't, that's the highest-priority fix on this list.
- Wikidata — the structured data layer behind Wikipedia. Fast to fix once Wikipedia exists.
- Crunchbase (and equivalents) — for businesses
- G2 / Capterra / Trustpilot — for SaaS
- Industry-specific authoritative directories (varies by category)
The Wikipedia question is loaded. You can't just create your own article — Wikipedia's notability standards mean someone independent has to do it, and self-promotional articles get deleted aggressively. The way to reverse-engineer it: do enough Digital PR that Wikipedia editors notice you exist and create the article themselves.
Fix horizon: 3–12 months
The AI's training data predates your brand
If your brand launched in the last 12 months and the engine you're testing has a training cutoff older than that, you simply aren't in the baseline knowledge. GPT-4o has a cutoff around October 2023; Claude Opus 4 around early 2025; Gemini 2.5 around mid-2024. (Cutoffs change as models update — check the current ones.)
The temporary fix: optimise hard for live retrieval, since that's the only layer you'll appear in until the next model snapshot. The permanent fix: keep doing the work consistently so you're well-represented when the next snapshot is taken.
Counter-test: brands older than three years that aren't appearing usually aren't a training cutoff problem. They're a Reason 1 or 2 problem.
Fix horizon: Until next model update
Your brand name is generic or ambiguous
If your brand name is a common word or matches a more famous entity, AI engines disambiguate to the better-known reference. "Polaris" is a missile, a snowmobile brand, and a navigation star, before it's your B2B SaaS startup. "Atlas" is a mountain range, a Marvel character, a moon, and several large companies, before it's your product.
This isn't fatal, but it raises the bar significantly. Brands with ambiguous names need 2–3× the editorial presence of brands with distinctive names to overcome the disambiguation gap. The fix is either rename (drastic) or aggressively over-invest in entity-clarity work — Wikipedia disambiguation, structured data, consistent contextualisation in every editorial mention.
Fix horizon: 6–18 months
You don't have the comparison and alternative content AI engines need
A surprising amount of AI search behaviour is comparative: "X vs Y," "alternatives to Z," "best X for use case Y." AI engines have a strong preference for retrieving content that's explicitly structured to answer these questions — comparison pages, alternatives pages, "X vs Y" pages, listicles.
Most B2B brands have none of these. They have product pages, feature pages, customer stories. They don't have /vs/competitor URLs, they don't have alternatives content. They expect AI engines to construct comparisons from scratch, which the engines do — using competitors' comparison pages as the source. Result: the AI's comparison of you and your competitor reads like the version your competitor wrote.
Your Reddit and community presence is weak
LLMs train heavily on Reddit. Heavily. Reddit content is over-represented in training data by enormous margins, because it's high-volume, conversational, and uses the natural language patterns models benefit from. Brands that get organically recommended in subreddits about their category get baked into the AI's category answers in ways a polished marketing presence never matches.
You can't fake this. Astroturfing Reddit is detectable, against site rules, and usually backfires. The legitimate moves: have your team (real people, not pretending to be users) participate in subreddits, answer questions, build genuine reputation. Encourage genuine recommendations from real customers. Make your product easier to recommend by being unambiguously good at one thing.
Fix horizon: 6–12 months
You're being mentioned but framed negatively
Worse than not appearing: appearing with negative framing. "X is known for poor customer service" or "X used to be the leader but has lost ground." If the AI's representation of you skews negative, being named is a net loss.
Audit this carefully. The AI is just synthesising — the negative framing exists in the source corpus somewhere. Find it, address it. Sometimes that means responding directly (replying to negative reviews, fixing the underlying product issue). Sometimes it means seeding new positive framing (a flagship case study, an editorial piece reframing your category position) faster than the negative content is being generated.
Fix horizon: 3–9 months
You're winning queries you don't realise you're winning
This sounds counter-intuitive in a list of reasons your brand is missing, but it's a real audit finding. Brands check the obvious queries — their core product category — and don't see themselves. They conclude they're invisible. But when you run a structured 50-query audit across the funnel, the picture is often messier: strong on awareness queries, weak on consideration, surprisingly strong on decision-stage queries for a niche use case.
The gap isn't always where you think. The fix isn't always in the obvious direction. This is why measurement-led GEO outperforms intuition-led GEO so consistently.
Fix horizon: Depends what you find
How to figure out which one is yours
Knowing the 12 possible causes is half the work. Knowing which two or three apply to you is the other half. The diagnostic process:
Run these checks in order
Start with the cheapest, fastest checks. Each one rules out (or in) some of the 12 reasons:
- Check 1 — robots.txt (5 min). Visit yourdomain.com/robots.txt. Search for "GPTBot", "ClaudeBot", "PerplexityBot". If any are disallowed, that's likely Reason 03. Easy fix.
- Check 2 — JavaScript test (5 min). Disable JS in Chrome, reload your homepage. If your key content is missing, that's Reason 04.
- Check 3 — Wikipedia (2 min). Search "[your brand]" on Wikipedia. No article, or a poor one? That's Reason 06.
- Check 4 — Brand age vs cutoff (1 min). Founded after early 2025? Reason 07 is in play. Older? Probably not the cause.
- Check 5 — Brand name distinctiveness (3 min). Search your brand name on Google. Are the first 10 results yours, or are they other entities? If yours, your name is distinctive. If not, Reason 08 is contributing.
- Check 6 — Comparison content audit (10 min). Do you have /vs/, /alternatives/, or "X vs Y" content? If not, Reason 09.
- Check 7 — PR cadence (15 min). How many editorial mentions did you get in the last 6 months? Compare to your top 3 competitors via Google search. Significantly fewer? Reason 05.
- Check 8 — Reddit search (10 min). Search "[your category] reddit" and look for recommendation threads. How often are you mentioned vs competitors? Reason 10.
- Check 9 — Sentiment audit (20 min). Across 20 queries where you ARE mentioned, what fraction are neutral or negative? Reason 11.
- Check 10 — Funnel-stage breakdown (60+ min). Run 30 queries split awareness/consideration/decision. Where's the gap actually concentrated? Reasons 02 and 12.
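Check 10 reduces to a group-by once the query results are logged. A toy sketch, with placeholder stages, queries, and outcomes:

```python
# Sketch of the funnel-stage breakdown: group logged query results by
# stage and see where the visibility gap concentrates. All data below
# is a placeholder for your own audit log.
from collections import defaultdict

# (stage, query, brand_mentioned) tuples from a logged audit run:
results = [
    ("awareness", "what is a crm", True),
    ("awareness", "how do sales teams track deals", True),
    ("consideration", "best crm for startups", False),
    ("consideration", "crm alternatives to salesforce", False),
    ("decision", "acme crm vs hubspot pricing", True),
    ("decision", "is acme crm worth it", False),
]

def rate_by_stage(results):
    """Mention rate per funnel stage."""
    hits, totals = defaultdict(int), defaultdict(int)
    for stage, _query, mentioned in results:
        totals[stage] += 1
        hits[stage] += mentioned
    return {stage: hits[stage] / totals[stage] for stage in totals}

for stage, rate in rate_by_stage(results).items():
    print(f"{stage}: mentioned in {rate:.0%} of queries")
```

With data shaped like this, the gap above is clearly a consideration-stage problem, not blanket invisibility.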
Most brands that do this systematically discover three things at once: the obvious reason they expected to find, a less obvious reason they hadn't considered, and a strength they didn't know they had. The audit pattern is roughly:
- 30% of brands: primary cause is editorial presence (Reason 01) — the slow, structural fix
- 25%: brand narrative inconsistency (Reason 02) — fixable with focused PR work
- 15%: missing comparison content (Reason 09) — fixable in weeks
- 10%: technical (Reasons 03 + 04) — fixable in days
- 10%: Reddit/community gap (Reason 10) — slow to build
- 10%: other (Reasons 06, 07, 08, 11, 12) or combinations
The structural lesson
The brands that get this right don't run one ad-hoc audit. They make Share of AI Voice a tracked metric, monitored quarterly, with the same discipline they apply to organic traffic or paid CAC. The brands that treat it as a one-off curiosity are the ones that'll be invisible to AI search by 2027.
What you can't do yourself
Most of the diagnostic checks above you can run yourself in under three hours. The hard parts are:
- Running a representative prompt set at scale. 50–150 commercial queries × 3–5 engines × 3 runs each = 450–2,250 logged responses. Hand-running this is grim. You need automation.
- Computing share-of-voice properly. Brand-name extraction, sentiment classification, and citation parsing across thousands of responses isn't something you'll do well in a spreadsheet.
- Cross-referencing AI visibility against SEO authority. The interesting insights come from understanding which parts of your AI gap are explained by traditional SEO weakness and which parts are genuinely AI-specific. That requires correlating two datasets you may not have.
- Translating findings into prioritised actions. Knowing the cause is one thing. Knowing which fix to do first, given your team and budget, is another.
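The core share-of-voice arithmetic itself is simple; the hard parts named above (alias resolution, sentiment classification, citation parsing) sit on top of it. A toy sketch with hypothetical brand names:

```python
# Toy share-of-AI-voice computation: of all brand mentions across logged
# responses, what fraction belongs to each brand? Brand names and
# responses are hypothetical; real pipelines add alias resolution,
# sentiment classification, and citation parsing on top of this.
import re
from collections import Counter

BRANDS = ["Acme CRM", "HubSpot", "Pipedrive"]

def share_of_voice(responses: list[str]) -> dict[str, float]:
    counts = Counter()
    for text in responses:
        for brand in BRANDS:
            if re.search(re.escape(brand), text, re.IGNORECASE):
                counts[brand] += 1
    total = sum(counts.values()) or 1  # avoid division by zero
    return {brand: counts[brand] / total for brand in BRANDS}

responses = [
    "For startups, HubSpot and Pipedrive are the usual picks.",
    "HubSpot leads the category; Acme CRM is a newer option.",
    "Pipedrive and HubSpot both handle this well.",
]
print(share_of_voice(responses))
```

At 450–2,250 responses per audit, this is the part automation handles easily; classifying the sentiment around each mention is where a spreadsheet stops being viable.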
This is what audits are for.