
TL;DR
- Ahrefs analysed 17 million citations and found AI assistants cite content that is 25.7 percent fresher than typical Google results (Ryan Law and Xibeijia Guan, July 2025).
- ChatGPT is the freshness extreme. The average age of a page ChatGPT cites is 958 days. Google AI Overviews sits at 1,432 days, identical to standard Google organic.
- That 33 percent gap between ChatGPT and Google means a single editorial cadence cannot serve both surfaces. The post you wrote for AI Overview eligibility is too old for ChatGPT by month 18.
- The three-tier refresh model below covers the gap: top 20 percent of commercial pages on a monthly micro-refresh, next 30 percent quarterly, the rest on an annual review.
- For a 50-page UK service site, the calculator outputs 205 refresh actions a year, or roughly four a week. That is the operational baseline. Below that you are leaving citation share on the table.
Key facts
- 17 million cited URLs analysed across ChatGPT, Perplexity, Gemini, Copilot, Google AI Overviews, and traditional Google SERPs (Ahrefs, July 2025).
- Average age of a cited URL: 1,064 days across AI surfaces versus 1,432 days on Google SERPs, a 25.7 percent gap. Every engine skews fresher than Google organic except Google AI Overviews, which matches it exactly.
- ChatGPT is 33 percent fresher than Google SERP (958 days vs 1,432 days). Perplexity sits in between (1,166 days), as do Gemini (1,118 days) and Copilot (1,056 days).
- Google AI Overviews mirror Google organic. AIO eligibility is a different optimisation problem from ChatGPT or Perplexity citation, despite sharing a query surface.
- The 13-week citation half-life findings published earlier are consistent with this freshness preference. The two findings together imply a refresh cadence, not a refresh event.
What the Ahrefs 17-million-citation study actually found
In July 2025, Ryan Law and Xibeijia Guan at Ahrefs published an analysis of 16.975 million cited URLs across the major AI surfaces and traditional Google SERPs. The headline finding: AI assistants cite URLs that are, on average, 25.7 percent fresher than the URLs Google ranks in organic search. The average age of a cited URL is 1,064 days on AI surfaces compared to 1,432 days on Google SERPs.
The per-engine breakdown is more useful than the headline figure because it tells you which surface to optimise for and at what cadence. ChatGPT is at one end of the spectrum (958-day average citation age), Google AI Overviews is at the other (1,432 days, identical to organic SERP). Perplexity, Gemini, and Copilot sit between them.

Why ChatGPT prefers fresh content and Google AI Overviews does not
ChatGPT and Google AI Overviews have different jobs. ChatGPT is positioned as a research and reasoning surface, where users ask broad questions and expect synthesis. Synthesis benefits from current sources, so the retrieval pipeline weights recency. Google AI Overviews sits on top of the existing Google ranking system. When AIO selects sources to cite, it draws from organic top-10 results, which themselves carry the freshness profile of Google’s main index. That index is older on average because Google’s ranking still rewards backlink accumulation, which takes years to build.
The practical implication is that a page optimised for AIO eligibility (long-form, deep backlink profile, traditional SEO maturity) will be too old for ChatGPT citation by month 18. Two surfaces, two cadences.
Translating freshness preference into a refresh cadence
The Ahrefs finding gives you the population average of cited content age. To set your own refresh cadence, the question is not “how old is the average AI-cited page” but “how do I keep my own commercial pages inside that distribution”.
The principle is straightforward. If the target engine cites content with average age N days, your commercial-page age distribution should sit below N. The simplest way to achieve that is the half-N rule: refresh frequently enough that your most-cited pages have a median age of N divided by 2. For ChatGPT that means tier-1 commercial pages with a median age below 480 days. For Perplexity around 580 days. For AIO and Google organic, around 720 days.
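As a sanity check, here is the half-N rule applied to the per-engine averages from the Ahrefs data; a minimal sketch, with the day counts taken from the study and the function name our own.

```python
# Half-N rule: keep tier-1 median page age below half the target
# engine's average cited-page age. Day counts from Ahrefs (July 2025).
ENGINE_AVG_CITED_AGE_DAYS = {
    "chatgpt": 958,
    "copilot": 1056,
    "gemini": 1118,
    "perplexity": 1166,
    "google_aio": 1432,  # identical to Google organic
}

def freshness_floor_days(engine: str) -> int:
    """Maximum median age (days) for tier-1 pages targeting this engine."""
    return ENGINE_AVG_CITED_AGE_DAYS[engine] // 2

for engine in ENGINE_AVG_CITED_AGE_DAYS:
    print(f"{engine}: keep tier-1 median age under {freshness_floor_days(engine)} days")
# chatgpt: 479, copilot: 528, gemini: 559, perplexity: 583, google_aio: 716
```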
That is the floor, not the target. In practice top commercial pages should refresh more aggressively than the floor because half-life data suggests citation share decays meaningfully after 13 weeks even on pages that are still under the average-age threshold. Combining the freshness floor with the 13-week half-life produces a three-tier model.
The three-tier refresh queue
Sort all commercial pages by current citation share multiplied by commercial value (a 1 to 5 scale based on what the page is worth in revenue terms). Group them into three tiers; a scoring sketch follows the list.
- Tier 1: top 20 percent. Monthly micro-refresh (every 4 weeks). Update one or two facts, the date-modified, and the most-prominent statistics. No structural changes.
- Tier 2: next 30 percent. Quarterly substantive refresh (every 13 weeks). Add a new section, refresh statistics across the post, update at least three sources. Re-validate schema.
- Tier 3: bottom 50 percent. Annual review (every 52 weeks). Decide retain, rewrite, or retire based on citation rate over the previous 12 months.
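A minimal sketch of that tier assignment, assuming you already have a citation share and a 1-to-5 commercial value per page; the field and function names are illustrative, not from any particular tracker.

```python
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    citation_share: float  # 0.0 to 1.0, from your citation tracker
    commercial_value: int  # 1 to 5, assigned by the team

def assign_tiers(pages: list[Page]) -> dict[str, int]:
    """Rank by citation share x commercial value, then split 20/30/50."""
    ranked = sorted(pages, key=lambda p: p.citation_share * p.commercial_value, reverse=True)
    n = len(ranked)
    tiers = {}
    for i, page in enumerate(ranked):
        if i < n * 0.20:
            tiers[page.url] = 1  # monthly micro-refresh
        elif i < n * 0.50:  # the next 30 percent
            tiers[page.url] = 2  # quarterly substantive refresh
        else:
            tiers[page.url] = 3  # annual review
    return tiers
```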

Calculating refresh frequency for a 50-page UK service site
Take a UK accountancy firm with 50 commercial pages. The tier split is 10 pages in tier 1, 15 in tier 2, and 25 in tier 3. The yearly action count is the sum of pages multiplied by refreshes per year per tier; a small calculator follows the arithmetic.
- Tier 1: 10 pages × 12 refreshes = 120 actions a year
- Tier 2: 15 pages × 4 refreshes = 60 actions a year
- Tier 3: 25 pages × 1 refresh = 25 actions a year
- Total: 205 actions a year, or roughly 4 a week.
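The same arithmetic as a reusable calculator; a sketch that assumes the 12/4/1 refresh frequencies above and takes the tier counts as input.

```python
REFRESHES_PER_YEAR = {1: 12, 2: 4, 3: 1}  # tier -> refreshes a year

def yearly_refresh_actions(pages_per_tier: dict[int, int]) -> int:
    """Total refresh actions a year for a given tier split."""
    return sum(count * REFRESHES_PER_YEAR[tier] for tier, count in pages_per_tier.items())

total = yearly_refresh_actions({1: 10, 2: 15, 3: 25})
print(total, round(total / 52, 1))  # 205 actions a year, roughly 3.9 a week
```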
Four refreshes a week is the operational baseline. Below that, the page-age distribution drifts upward and citation share leaks. Above that, the team can absorb adjacent work like schema validation and source reverification, which keeps the baseline robust over time.

What freshness does NOT mean
Updating only the date-modified field does not constitute a refresh. AI engines do read structured dates, but they cross-reference with content fingerprints. Two consecutive crawls of the same body text will not register as fresh content even if the dateModified field changes between them. The freshness signal that engines weight is body content evolution, not metadata change.
A meaningful micro-refresh changes at least three things: a quoted statistic, a sentence in the introduction, and the most recent dated reference. The date-modified field then updates honestly to reflect the change. A meaningful substantive refresh changes a section.
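To verify that a refresh changed the body and not just the metadata, a hash comparison of the normalised body text is enough. This illustrates the fingerprinting idea only; it is not a claim about how any engine implements it.

```python
import hashlib

def body_fingerprint(body_text: str) -> str:
    """Hash of the body text, ignoring whitespace-only differences."""
    normalised = " ".join(body_text.split())
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

def is_real_refresh(old_body: str, new_body: str) -> bool:
    """True only if the body content actually changed, not just dateModified."""
    return body_fingerprint(old_body) != body_fingerprint(new_body)
```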
How to measure whether your refresh worked
The operational metric is citation share over the next four weeks for the queries the page targets, compared against the four weeks before the refresh. A successful refresh moves citation share by 5 percent or more. If the change is below 2 percent, the refresh was insufficient. Above 10 percent and the page was probably too stale before the refresh, which means it should have been in tier 1 or tier 2 rather than wherever it actually was. Use that signal to retier.
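A sketch of that classification, assuming citation share is a 0-to-1 fraction measured over matched four-week windows and the thresholds are read as percentage-point moves (the wording above could also be read as relative change).

```python
def evaluate_refresh(share_before: float, share_after: float) -> str:
    """Classify a refresh by the citation-share move, in percentage points."""
    delta = (share_after - share_before) * 100
    if delta >= 10:
        return "worked, but the page was too stale: move it up a tier"
    if delta >= 5:
        return "successful refresh"
    if delta < 2:
        return "insufficient: the refresh did not change enough"
    return "marginal: watch the next cycle"
```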
The monitoring layer is a simple weekly query panel: 30 to 50 commercial queries, run weekly against ChatGPT, Perplexity, and an AI Overview check. Track which of your domain pages get cited, which lose citation, and which engine drives the change. A spreadsheet of weekly panel data will tell you within a quarter whether the cadence is holding.
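A minimal shape for that panel log, assuming one CSV row per query per engine per week; the column names, file name, and domain are placeholders.

```python
import csv
from collections import defaultdict

def citation_share(rows: list[dict], domain: str, engine: str) -> float:
    """Fraction of panel queries on `engine` where `domain` was cited."""
    relevant = [r for r in rows if r["engine"] == engine]
    if not relevant:
        return 0.0
    cited = sum(1 for r in relevant if domain in r["cited_domains"].split(";"))
    return cited / len(relevant)

# Expected columns: week, engine, query, cited_domains (semicolon-separated)
with open("panel.csv", newline="") as f:
    by_week = defaultdict(list)
    for row in csv.DictReader(f):
        by_week[row["week"]].append(row)

for week, rows in sorted(by_week.items()):
    print(week, round(citation_share(rows, "example.co.uk", "chatgpt"), 2))
```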
Common mistakes
- Treating all surfaces as one. ChatGPT and AIO have a 33 percent age gap. Optimising for AIO and assuming ChatGPT will follow is wrong on this data.
- Refreshing the most-visited pages instead of the most-citable pages. Visit count and citation share correlate weakly. Use citation share multiplied by commercial value as the tier signal, not pageviews.
- Date-stamping without content change. Engines cross-reference. Trying to game the dateModified field without changing the body slowly trains the engine to ignore that field on your domain.
- Ignoring the 13-week half-life. Annual refresh on commercial pages is a tier-3 frequency dressed up as tier-1 work. Citation share decays before the annual cycle completes.
- No weekly measurement. Without the citation-share-by-query weekly panel, the team cannot tell which refreshes worked. The cadence becomes faith-based.
How to set up the cadence in week one
- Score every commercial page. Pull citation rate for each page from a citation tracker (Profound, Authoritas, or a manual weekly query panel). Multiply by a 1 to 5 commercial value score.
- Apply the tier split. Top 20 percent of the score distribution to tier 1, next 30 percent to tier 2, rest to tier 3.
- Build the calendar. Tier 1 gets a monthly slot per page in a content calendar. Tier 2 gets a quarterly slot. Tier 3 gets a yearly slot. Distribute the work across weekdays so the load is even (see the scheduling sketch after this list).
- Define the refresh template. What changes on a tier 1 micro-refresh, what changes on a tier 2 substantive refresh. Document the template so the team can execute without re-deciding each time.
- Start the weekly panel. 30 to 50 queries, run weekly, log to a spreadsheet. Include three engines minimum. Track citation share by query and by domain.
- Review monthly. Retier pages whose citation share has moved by more than 20 percent up or down.
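A sketch of the calendar from step three, staggering start weeks so the load spreads evenly; the scheduling scheme is one reasonable choice, not prescribed by the model.

```python
from itertools import cycle

WEEKDAYS = ["Mon", "Tue", "Wed", "Thu", "Fri"]
INTERVAL_WEEKS = {1: 4, 2: 13, 3: 52}  # tier -> weeks between refreshes

def build_calendar(tiers: dict[str, int], weeks: int = 52) -> dict[int, list[tuple[str, str]]]:
    """Map week number -> list of (page, weekday) refresh slots."""
    calendar = {w: [] for w in range(1, weeks + 1)}
    weekday = cycle(WEEKDAYS)
    for offset, (page, tier) in enumerate(tiers.items()):
        interval = INTERVAL_WEEKS[tier]
        start = (offset % interval) + 1  # stagger start weeks to even the load
        for week in range(start, weeks + 1, interval):
            calendar[week].append((page, next(weekday)))
    return calendar
```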
Frequently asked questions
Does this cadence apply to a 5-page brochure site?
The tier structure breaks down below about 15 commercial pages. For a small site, treat every commercial page as tier 1 and run the monthly cadence on all of them. For five pages that is 60 actions a year, roughly one a week, a far lighter load than the 50-page worked example above.
Should the cadence change for AI start-ups vs traditional SaaS?
The cadence does not change. The pages that go in tier 1 do. AI start-ups commonly have founder-led posts and category-defining glossary pages as their tier 1 because those posts get cited most often. Traditional SaaS often has solution-page tier 1 because that is where commercial intent converges.
What if my pages are already much fresher than the per-engine averages?
The averages are population statistics. The optimisation goal is your own commercial page distribution sitting below the average for the target engine, not equal to it. If your tier 1 pages have a median age of 200 days and ChatGPT cites at 958 days average, you are well-positioned and the cadence is producing the intended effect. Do not slow down. The competition is improving its cadence too.
Is monthly refresh on tier 1 too aggressive for a small team?
A tier 1 micro-refresh takes 20 to 40 minutes per page once the team has the template down. For 10 pages that is roughly 5 hours a month. Most small teams assume the load is heavier than that until they have run the cadence for a quarter. Build the template before scaling.
Does AI Overview optimisation conflict with ChatGPT optimisation?
Not on freshness. AIO does not penalise fresher content; it just does not reward it the way ChatGPT does. So a tier 1 page kept under 480-day median age performs well on both surfaces. The conflict is elsewhere (schema priority, anchor structure), not in freshness.
How long until the cadence shows results?
Expect the first visible citation-share change within 6 to 8 weeks of starting the tier 1 cycle. Stable lift takes 12 to 16 weeks because the engines re-crawl on their own schedule, which is typically every 4 to 8 weeks for cited domains. By month 4 the team has enough data to retier confidently.
Sources and references
- New Study: AI Assistants Prefer to Cite Fresher Content (17 Million Citations Analyzed). Ahrefs (Ryan Law and Xibeijia Guan), 2025
- AI Search and Content Freshness: Why Updates Improve Visibility. Quattr, 2025
- How content freshness drives AI search visibility (the conceptual companion to this operational post). AiBoost, 2026
Want to know which of your pages are tier 1 vs tier 3, and what their median age is right now? Our free GEO audit returns those tiers in 60 seconds with no signup required.
Change log
- 2026-05-08: Initial publication.
