The starting point
A Series B analytics SaaS (ARR ~$18M, ~40 employees) came to us in December 2025 after noticing their competitors appearing in ChatGPT product recommendation answers — while they did not. Their traffic from AI sources was negligible: fewer than 200 visits per month across ChatGPT, Perplexity, and Gemini combined.
An initial AEO scan found three critical gaps:
- Robots.txt blocked GPTBot and ClaudeBot — a legacy of a 2023 policy decision nobody had revisited.
- No FAQ schema anywhere on the site — including on product and comparison pages that were natural citation candidates.
- No visible "last updated" timestamps on any blog post. Content was fresh but looked stale to crawlers.
What we changed
The 90-day plan had three phases:
Phase 1 — Technical (Week 1)
- Removed blanket Disallow rules for GPTBot, OAI-SearchBot, ClaudeBot, and PerplexityBot
- Published an llms.txt file at the site root listing the 30 most important pages
- Added Organization and WebSite JSON-LD schema to the site template
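The unblocking step amounts to removing the blanket Disallow rules and making access explicit for each AI crawler. A minimal sketch of what the resulting robots.txt might look like (the sitemap URL and any remaining site-specific rules are assumptions, not the client's actual file):

```text
# Allow AI crawlers that were previously blocked
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

Sitemap: https://example.com/sitemap.xml
```

The llms.txt companion file is plain markdown served at the site root (e.g. /llms.txt): a short site summary followed by a linked list of key pages, which is where the 30-page list from this phase would live.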
Phase 2 — Content structure (Weeks 2-6)
- Rewrote 20 top-of-funnel pages to open each H2 section with a 2-sentence answer capsule
- Rephrased section headings as questions ("How does X work?" instead of "Product overview")
- Added FAQPage schema with 5-8 Q&A pairs on each rewritten page
- Added visible "Last updated" timestamps beside every title
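The FAQPage markup from this phase is a JSON-LD block embedded in each rewritten page. A minimal sketch with one hypothetical Q&A pair (the question and answer text here are placeholders, not the client's actual content):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How does the platform handle data retention?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Event data is retained for 13 months by default; longer windows are available on enterprise plans."
      }
    }
  ]
}
```

Each rewritten page carried 5-8 such Question entries, mirroring its question-style H2 headings.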
Phase 3 — Authority (Weeks 7-12)
- Published a 4,000-word proprietary data report from the company's own usage metrics
- Pitched the report to 12 industry publications — got 4 placements with backlinks
- Added a customer case study library with real metrics and named enterprises
Results by month
| Metric | Baseline | Month 1 | Month 2 | Month 3 |
|---|---|---|---|---|
| AI referral visits | 185 | 412 | 748 | 1,106 |
| ChatGPT citations (manual audit) | 0 | 3 | 11 | 19 |
| Perplexity citations | 2 | 8 | 22 | 41 |
| Google AI Overview inclusions | 1 | 4 | 9 | 18 |
| AEO scan score | 42 | 61 | 75 | 87 |
Total AI referral traffic went from 185 visits in November to 1,106 visits in February — a 5.98x increase. Conversions from AI traffic outperformed paid search channels, with a trial signup rate of 6.1% compared to 1.4% for paid.
What worked best
Looking at the weekly data, three changes produced disproportionate lift:
- Unblocking crawlers — Week 1 alone saw a 12x increase in AI crawler visits in server logs. This was a change of just a few lines in robots.txt.
- Answer capsules on the comparison pages — the "X vs Y" pages were the first to earn Perplexity citations, typically within 2-3 weeks of the rewrite.
- The proprietary data report — when the report was picked up by a major industry publication in Week 10, ChatGPT citations for related queries jumped from 11 to 19 within 4 weeks.
Lessons learned
- The easy wins are really easy. Three robots.txt lines and an llms.txt file accounted for the first 30% of the growth.
- Answer capsules move the needle fastest. New content took 4+ weeks to earn citations; retrofitted capsules on existing content earned citations in 2-3 weeks.
- Third-party mentions compound. The four industry-publication backlinks from the data report continued driving AI citations months after publication.
- Measurement is manual. There's no single dashboard for AI citations yet. Expect to set aside 30-60 minutes per week for manual audits across ChatGPT, Perplexity, and Gemini.
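The referral-traffic half of that weekly audit can be partly automated from server access logs, since AI-sourced visits carry the tool's domain in the HTTP referrer. A minimal sketch, assuming combined log format and a hand-maintained referrer list (both are assumptions; adapt to your log format and the AI tools you track):

```python
import re
from collections import Counter

# Referrer domains treated as AI sources -- an assumed starter list, extend as needed.
AI_REFERRERS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "gemini.google.com": "Gemini",
}

# Combined log format ends with: "referrer" "user-agent"
LOG_RE = re.compile(r'"(?P<referrer>[^"]*)"\s+"(?P<agent>[^"]*)"\s*$')

def ai_source(log_line: str):
    """Return the AI source name if the line's referrer matches, else None."""
    m = LOG_RE.search(log_line)
    if not m:
        return None
    referrer = m.group("referrer")
    for domain, name in AI_REFERRERS.items():
        if domain in referrer:
            return name
    return None

def count_ai_referrals(lines):
    """Tally AI-sourced visits across an iterable of access-log lines."""
    counts = Counter()
    for line in lines:
        src = ai_source(line)
        if src:
            counts[src] += 1
    return counts
```

Run weekly against the access log and the per-source counts line up directly with the "AI referral visits" row in the table above; the citation counts still require checking answers by hand.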
