Foundations

How long does GEO actually take to work?

Live-retrieval improvements show up in 1–4 weeks. Training-data improvements take 6–12 months. Both are real timelines, both matter, and confusing the two is the fastest way to lose patience with GEO before it pays off. This is the honest answer to the question every CMO asks before signing off on GEO budget.

By Gareth Hoyle · Published 25 April 2026 · Read time: 10 min
TL;DR

GEO works on two timelines simultaneously. Live retrieval responds in days to weeks — your AI-citable content can be cited by Perplexity, ChatGPT browse mode, and Claude's web tool quickly after publication. Training-data shifts take 6–12 months — being baked into ChatGPT or Claude's baseline knowledge requires being well-represented in the corpus when their next snapshot is taken. Treat the two layers as separate workstreams with separate timelines, expect the former to provide early signal, expect the latter to compound across years. Don't measure them with the same yardstick.

The question behind the question

"How long does GEO take to work?" is rarely asked innocently. It's usually asked by an agency that wants the answer to be fast, a marketing director who needs it honest enough to justify spend, or a CMO who needs it detailed enough to make a real decision — and each asker wants a different version of the answer.

This piece is the version that's honest enough to make a real decision against. It will disappoint the agency.

The two timelines

GEO operates on two distinct mechanisms simultaneously, and they have radically different timelines. Treating them as one is the source of most disappointment.

Live retrieval (fast)

When an AI engine answers a query that requires current information, it doesn't only use its training data — it retrieves live web content, reads it, and includes it in the answer. This is technically called retrieval-augmented generation, or RAG. Perplexity is the most heavily retrieval-driven engine; ChatGPT and Claude have web tools that do similar work.

Live retrieval is the fast layer. If you publish strong AI-citable content today, you can see it cited within days — sometimes hours, on Perplexity. The mechanism:

  1. You publish a page
  2. AI crawlers find it (this can take hours to days)
  3. The page enters the index that the engine retrieves from
  4. Next time someone asks a relevant query, your content is one of the candidates the engine reads
  5. If your content is well-structured for AI extraction, it gets cited or quoted in the answer

The whole loop closes in 1–4 weeks for new content on a domain with reasonable authority. Brand-new domains take longer because crawlers visit them less frequently.
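A quick way to verify step 2 of this loop is to check your server logs for AI crawler user agents. A minimal sketch, assuming a standard access-log format; the crawler tokens listed are the documented ones at the time of writing, so check each vendor's documentation for the latest:

```python
# Verify step 2 of the loop: are AI crawlers actually fetching your pages?
# Scans raw access-log lines for known AI crawler user-agent tokens.
from collections import Counter

AI_CRAWLERS = ["GPTBot", "OAI-SearchBot", "PerplexityBot", "ClaudeBot", "Google-Extended"]

def count_ai_crawler_hits(log_lines):
    """Count hits per AI crawler across raw access-log lines."""
    hits = Counter()
    for line in log_lines:
        for bot in AI_CRAWLERS:
            if bot in line:
                hits[bot] += 1
    return hits

# Example with hypothetical log lines:
sample = [
    '1.2.3.4 - - [01/May/2026] "GET /guide HTTP/1.1" 200 "-" "Mozilla/5.0 (compatible; GPTBot/1.0)"',
    '5.6.7.8 - - [01/May/2026] "GET /guide HTTP/1.1" 200 "-" "Mozilla/5.0 (compatible; PerplexityBot/1.0)"',
]
print(count_ai_crawler_hits(sample))
```

If these bots never appear in your logs, the loop is stalling at step 2 — check robots.txt and any CDN bot-blocking rules before blaming the content.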

Training data (slow)

Each major AI model has a frozen training corpus. ChatGPT's underlying GPT-4o was trained on a snapshot up to October 2023. Claude's current model has its own cutoff. When users ask the engine general questions, the model draws on this training data for baseline knowledge of brands, categories, and concepts.

Influencing training data is slow. The mechanism:

  1. You do work that creates editorial mentions, Reddit presence, Wikipedia coverage, etc.
  2. That content becomes part of the public web
  3. The next time the engine company runs a major training data snapshot, your work is in it
  4. The engine company trains the next model on the new snapshot (this takes months of computation)
  5. The new model is released to users
  6. Your brand now appears in the model's baseline knowledge

Each engine company ships major model updates on its own cadence. The implication: editorial work you do today shows up in major engine baseline knowledge somewhere between 4 and 12 months from now, depending on the engine and the timing of its next snapshot.

What you can expect at each milestone

Weeks 1–2

Technical fixes show up. If you've corrected your robots.txt to allow AI crawlers, added schema markup, fixed JavaScript rendering issues, those changes are visible to crawlers within days. The downstream effect on AI engine answers is detectable within 1–2 weeks for engines that retrieve aggressively (Perplexity especially).
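As an illustration of the robots.txt fix, here's a minimal configuration that explicitly allows the main AI crawlers. The user-agent tokens below are the documented ones at the time of writing; verify them against each vendor's crawler documentation before deploying:

```txt
# Allow the main AI crawlers
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /
```

An `Allow: /` for a listed user agent only matters if a broader rule elsewhere in the file would otherwise block it; the point is to make sure nothing is disallowing these bots unintentionally.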

This is also when your first published AI-citable content can begin to be cited. If you've written a strong comparison page or definitional piece on a topic AI engines retrieve for, expect citations within 7–14 days.

Weeks 3–8

Live-retrieval performance starts to compound. Pages you've published earlier in the cycle have had time to be crawled multiple times, link to one another, and accumulate authority signals. The number of queries on which your content appears as a citation grows.

Earned editorial coverage starts driving live-retrieval citations too. A trade publication article mentioning your brand, published in week 2, gets indexed by week 3, starts appearing as a source in AI answers by week 4.

Months 3–6

Sentiment frame and brand association start shifting in retrieval-driven contexts. The narrative AI engines apply when they mention you starts to reflect the consistent positioning you've been seeding into the editorial landscape.

Wikipedia-related work — if you've earned enough notability for an article to be created — may produce a published Wikipedia article in this window. That's a step change rather than a gradual one: the presence appears, gets indexed, and starts shifting the AI's representation noticeably.

Months 6–12

The first major AI model snapshot incorporating your work happens. ChatGPT, Claude, or Gemini ships an update whose training corpus included your editorial coverage, your Wikipedia presence, your Reddit mentions. Your brand's baseline knowledge in those engines steps up.

This is when teams who started GEO work in late 2025 see their first big SoAIV gains in early 2026 — not because of anything new they did this quarter, but because the underlying model update finally reflects work from quarters past.

Months 12–24

Compounding. Editorial relationships established in the first year produce more coverage in the second year (you become a known source). Wikipedia article quality improves as more editors contribute. Reddit recommendations accumulate. The brand becomes "what AI engines confidently know about your category" rather than "a brand the AI sometimes mentions."

This is also when you can credibly claim category leadership in AI search, if the work has been consistent. The threshold for "we are the AI's default answer" usually requires 18–24 months of sustained work for most brands in most categories.

The pattern of disappointment

Most teams who give up on GEO do so somewhere between months 2 and 4. The pattern is predictable: early live-retrieval wins arrive in the first weeks, the curve then appears to flatten while the slow training-data layer is still months from paying off, and the plateau gets read as failure.

The fix isn't doing more in months 1–4. It's accurate expectation-setting before month 1, so months 2–4 don't trigger panic.

How to know if it's working before month 7

You don't have to wait for a model update to know whether the work is going right. Earlier signals exist if you measure them:

Leading indicators to track in the first 90 days

  • Citation rate (your URLs cited as sources) — should grow within weeks if your content is genuinely AI-citable. If this isn't moving, your content isn't doing the job.
  • Editorial mentions per quarter — direct measure of Digital PR pipeline output. Should track upward each quarter once the motion is established.
  • Perplexity SoAIV specifically — Perplexity weights live retrieval heavily, so changes show fastest there. If your Perplexity SoAIV is moving but ChatGPT isn't, the system is working — ChatGPT's number will follow at the next training update.
  • Sentiment frame in retrieved content — when AI engines retrieve and quote your content, what frame do they apply? Watch this even before volume shifts.
  • Direct referral traffic from AI engines — Perplexity citations drive direct clicks. ChatGPT browse and Claude web tool also pass referral traffic. Trending up means you're appearing in answers, even if absolute volume is small.

If these leading indicators are all moving in the right direction, the lagging indicator (overall SoAIV) will follow. If they're flat, you have a real problem and need to diagnose it — not wait for the next model update to maybe rescue the numbers.
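The first and third indicators above can be computed from a simple answer-sampling dataset. A sketch under assumed data shapes — the `SampledAnswer` record, the engine names, and the `example.com` domain are hypothetical illustrations, not a prescribed schema:

```python
# Sketch: compute citation rate overall and per engine from sampled AI answers.
from dataclasses import dataclass, field

@dataclass
class SampledAnswer:
    engine: str                 # e.g. "perplexity", "chatgpt" (assumed labels)
    query: str                  # the prompt you sampled
    cited_domains: list = field(default_factory=list)  # domains cited as sources

OUR_DOMAIN = "example.com"      # hypothetical brand domain

def citation_rate(answers):
    """Share of sampled answers that cite our domain at least once."""
    if not answers:
        return 0.0
    cited = sum(1 for a in answers if OUR_DOMAIN in a.cited_domains)
    return cited / len(answers)

def per_engine_rates(answers):
    """Citation rate split by engine — Perplexity is expected to move first."""
    by_engine = {}
    for a in answers:
        by_engine.setdefault(a.engine, []).append(a)
    return {engine: citation_rate(group) for engine, group in by_engine.items()}

answers = [
    SampledAnswer("perplexity", "best crm for smb", ["example.com", "g2.com"]),
    SampledAnswer("perplexity", "crm comparison", ["capterra.com"]),
    SampledAnswer("chatgpt", "best crm for smb", []),
]
print(citation_rate(answers))
print(per_engine_rates(answers))
```

The per-engine split operationalises the Perplexity bullet: a Perplexity rate climbing while ChatGPT stays flat is the expected healthy pattern, not a failure.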

How to budget realistically

The realistic budget shape across the first 24 months of a serious GEO programme:

Months 1–6

Heaviest investment relative to results. You're building the engine — establishing PR relationships, fixing technical foundations, producing the initial content corpus, doing the Wikipedia work. Visible results lag the spend.

Expect to spend more in this phase than feels comfortable for the visible output. The cost is front-loaded; the returns are back-loaded.

Months 7–12

Investment continues but compounds against earlier work. Editorial relationships earn more coverage per pitch. Wikipedia presence stabilises. Live-retrieval performance shows the work is paying off.

This is also when the first major model update reflects your earlier work, and overall SoAIV steps up.

Months 13–24

Investment continues but the productivity per pound spent improves substantially. Compounding kicks in. The brand starts to be recognised by editors and AI engines as a category source rather than a category newcomer.

The conversation with your CFO shifts from "this is unproven" to "obviously we keep investing."

Beyond month 24

Maintenance and defence. Sustaining the visibility you've built requires ongoing PR, content, and entity work — but at a maintenance level rather than building level. New competitors will challenge; you have to keep moving to stay ahead.

What can be faster

A few specific cases where parts of GEO produce faster results than the general timeline suggests:

Brand-new domains in established categories

If you launch a new product in a category AI engines already understand well, your content can start appearing in AI answers within weeks. The category's discourse already exists; the AI just needs to encounter you in it.

Brands with existing strong PR

Brands who've been doing Digital PR for years before adding "GEO" to the strategy already have the editorial corpus working in their favour. The "first model update boost" effectively already happened. Adding deliberate AI-citable content can produce visible results in 2–3 months rather than 6–12.

Brands fixing a specific gap

If your audit reveals a single specific gap — say, you're invisible in decision-stage queries because you have no comparison content — fixing that one gap can produce measurable SoAIV improvement on the affected query subset within 4–8 weeks. The whole-brand timeline is slower; the specific-gap timeline can be quicker.

What can be slower

Equally honestly, a few cases where the timeline stretches:

Highly competitive categories

If you're in a category dominated by 3–5 well-established brands with strong AI presence, displacing them is a multi-year effort. Your work compounds, but so does theirs. The relative gap closes slowly.

Brands with brand-name ambiguity

If your brand name is a common word or matches a more famous entity, the AI's disambiguation problem makes everything slower. Each additional editorial mention does double duty — it both adds your presence and clarifies which "Atlas" or "Polaris" you are.

Brands recovering from negative AI representation

If the AI currently mentions you negatively, fixing the framing requires both adding new positive content and waiting for the existing negative content to age out of the AI's recency-weighted retrieval. Faster than starting from invisible, but emotionally harder because the headline number can stay flat while the underlying narrative shifts.

The honest budget conversation

If you're presenting GEO investment to a CFO, the version that holds up under scrutiny:

"We expect 12 months until material SoAIV improvement. We expect 18–24 months until competitive category position. The leading indicators we'll track quarterly tell us whether we're on track to those outcomes — without waiting the full timeline. We're committing to those leading indicators improving every quarter; if they don't, we'll diagnose and adjust before the headline number is supposed to move."

That's a budget conversation a CFO can actually evaluate. "GEO works" is not a budget conversation. "Our SoAIV will be up in 12 months" is more honest. "Here's what we're committing to seeing each quarter" is the version that survives.

What's actually rare in the market

Brands who'll commit to a 12+ month GEO programme are uncommon. Most either don't try, or try for 3 months and pull back. The brands who commit and stay committed are the ones that, in 12–24 months, become the "default answer" in their category.

This is a rare pattern. It's also the most predictable competitive advantage in marketing right now — patience compounded with measurement is the unfair advantage in a discipline most teams give up on too early.

Set realistic expectations

Get a Search Visibility Audit.

Find out where you actually stand today, what gap to fix first, and what realistic 12-month progress looks like for your specific category. From $997, in 48 hours.