TL;DR. A Citation Gap exists when competitors are cited on the directories, review sites, and communities AI engines use, and you are not. Diagnose by filtering the Rankscale Citations tab to your non-branded prompts, then map the top sources to your presence on each. Output: a tiered target list of 3–6 third-party domains to earn presence on this quarter.
What the Citation Gap is
AI engines decide who to cite based on who the rest of the internet trusts. The sources that show up in their answers are concrete and countable. Your job is to find which ones you are missing from, then earn presence on the top 3–6. The more consistently you are mentioned on those pages, the more likely the engines are to cite you.
The 4-step diagnostic
Step 1: Pick your diagnostic prompt. Use the one you set in Module 3.5: the prompt with your lowest visibility and highest intent, which is where the citation deltas are biggest.
Step 2: Open the Citations tab in Rankscale. Filter by: target AI engine, your non-branded prompt, and your niche. You see the exact URLs being cited across 7+ runs. Screenshot this list; it is your target universe.
Step 3: Categorize each cited source into a tier
| Tier | Source type | Examples | Effort to earn (lesson) |
|---|---|---|---|
| Tier 1 | Review sites + software directories | G2, Capterra, Gartner Peer Insights, TrustRadius | Medium (6.4) |
| Tier 2 | Editorial + industry press | Forbes, TechCrunch, The Verge, Harvard Business Review | High (6.5) |
| Tier 3 | Communities | Reddit (subreddit), YouTube (named creator), Quora | High, slow (6.6) |
| Tier 4 | Comparison portals + niche editorial | G2-equivalents in your geo, trade press | Low–Medium |
| Tier 5 | Personal blogs, Substack, Medium | Named author with domain authority | Low |
Step 4: Find 3–6 sources where a competitor is cited and you are not. From the filtered list, pick 3–6 across tiers. For each, tag: Category (tier 1–5), Citation share (how often it is cited across your prompt group, as a %), Priority (1 = highest), and Action (listing, pitch, or community engagement).
Example target table
| # | Source | Tier | Citation share | Priority | Action |
|---|---|---|---|---|---|
| 1 | g2.com | 1 | 18% | 1 | Claim listing, run review campaign (6.4) |
| 2 | reddit.com/r/[niche] | 3 | 12% | 2 | Engage monthly, build moderator relationship (6.6) |
| 3 | capterra.com | 1 | 8% | 4 | Claim listing (6.4) |
| 4 | forbes.com | 2 | 6% | 5 | HARO + expert contributor angle (6.5) |
| 5 | [niche trade publication] | 4 | 5% | 6 | Pitch editorial feature |
| 6 | [niche blog] | 5 | 10% | 3 | Reach out via contact form to request a mention in their listicle |
The citation-share principle
Your priority order is driven by % citation share, not absolute volume. A source cited in 18% of runs is worth 3x one cited in 6%, even if 6% sounds respectable on its own. Always rank by share, then work down the list in share order, adjusted for what is feasible within your budget. Most of these sources have a contact form in their footer, which is usually the fastest outreach path.
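The share-based prioritization above can be sketched in a few lines. This is a minimal illustration, not a Rankscale API: the `pick_targets` helper and the source/share records are hypothetical, with figures borrowed from the example table.

```python
def pick_targets(sources, n=3):
    """Return the top-n sources ranked by citation share, descending.

    Hypothetical helper for illustration; 'share' is the fraction of
    runs in which the source was cited for your prompt group.
    """
    return sorted(sources, key=lambda s: s["share"], reverse=True)[:n]

# Example data mirroring the target table above.
cited = [
    {"source": "g2.com", "share": 0.18},
    {"source": "reddit.com/r/[niche]", "share": 0.12},
    {"source": "capterra.com", "share": 0.08},
    {"source": "forbes.com", "share": 0.06},
]

targets = pick_targets(cited)
# The highest-share source (g2.com at 18%) comes first.
```

Ranking strictly by share keeps the outreach queue honest: budget constraints can reorder execution, but the priority numbers themselves should follow the data.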
The Mention-to-Citation ratio red flag
If your Mention Count is high on a prompt but your Citation Count is low (Module 3 decision tree, rule 5), AI engines know the brand exists but have no source to cite. That is the exact signature of a Citation Gap. The fix is two-sided: create content that answers these prompts (Module 5), and earn mentions on the sources engines already trust.
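A simple way to operationalize this red flag is a mention-to-citation ratio check. The function and the 3:1 threshold below are illustrative assumptions, not a Rankscale rule:

```python
def citation_gap_flag(mentions, citations, ratio_threshold=3.0):
    """Flag a likely Citation Gap: many mentions, few citations.

    Hypothetical check; the threshold is an assumption you should
    tune to your own prompt data, not a documented Rankscale metric.
    """
    if citations == 0:
        # Any mentions with zero citations is the clearest gap signal.
        return mentions > 0
    return mentions / citations >= ratio_threshold

# e.g. 12 mentions but only 2 citations on a prompt flags a gap,
# while 5 mentions against 4 citations does not.
```

Run a check like this per prompt group, then feed the flagged prompts into the four-step diagnostic above.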
What "earning a citation" means per tier
- Tier 1 (review sites): claim, fill out fully, ethically request reviews (lesson 6.4)
- Tier 2 (press): pitch, contribute expert quotes via HARO (lesson 6.5)
- Tier 3 (communities): engage authentically, earn community-native mentions (lesson 6.6)
- Tier 4–5: outreach to editors, guest posts, niche relationships
Lessons 6.4, 6.5, 6.6 cover each playbook.
Do this now:
Open the Rankscale Citations tab. Filter by your diagnostic prompt. Build the target table above. Pick the top 3 by citation share. Those are your outreach targets for the next 4 weeks.
Start improving your AI visibility today with Rankscale.