Is Your Business Invisible to ChatGPT? A 2026 UK Guide to AI Citation Audits
Ampliflow
Advanced AI frontier lab and business growth agency. Helping UK businesses deploy agentic AI systems.
Most UK businesses still measure search visibility with one question: “Do we rank on Google?”
That question is no longer enough.
In 2026, prospects ask ChatGPT, Claude, Perplexity, and Google AI Overviews for recommendations before they click anything. If your business is not being cited in those responses, you are invisible in a growing part of discovery.
This guide shows you how to run a practical AI citation audit yourself. No gatekeeping. No vague “AI strategy” talk. Just a usable method you can run in a morning and repeat monthly.
If you do not have time to run it, there is a done-for-you option at the end. But the process here is fully actionable on your own.
What “invisible” looks like in AI search
You can be invisible in AI search even when your SEO looks healthy.
Typical pattern:
- You still rank for some commercial terms on Google
- Branded search volume looks stable
- Website traffic has not collapsed
- Yet, AI engines rarely mention your brand in recommendation-style answers
Why this happens:
- AI systems synthesise answers from multiple sources, not just one ranking list.
- They favour clear, structured, citable content over generic service pages.
- They care about entity clarity. If your brand is inconsistently represented, confidence drops.
- They often prioritise sources that include verifiable facts and strong topical coverage.
In plain terms: ranking helps, but it does not guarantee citation.
The four engines that matter for a UK citation audit
For most UK service, SaaS, and e-commerce businesses, your core coverage should be:
- ChatGPT
- Claude
- Perplexity
- Google AI Overviews
Do not overcomplicate this with ten tools on day one. Start with these four. They already represent the majority of AI-assisted discovery behaviour your buyers are likely using.
What to measure (before you collect anything)
A useful citation audit is not “Did we appear once?” It is a repeatable scorecard.
Track these five metrics for each engine:
- Presence rate: In how many target queries is your brand mentioned at all?
- Citation rate: In how many queries is your site used as a cited source?
- Position quality: Are you a primary recommendation or a footnote mention?
- Accuracy quality: Are statements about your business correct?
- Competitor share: Which competitors are cited more often than you?
If you only track one metric, track presence rate. It gives you a fast baseline and makes progress visible.
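As a sketch, presence rate is just mentions divided by queries run for that engine. Assuming you record one Y/N result per query (the sample records below are illustrative, not real outputs):

```python
# One entry per query result for a single engine; sample data is illustrative.
records = [
    {"query": "Best AI SEO agency for UK SMEs", "brand_mentioned": True},
    {"query": "Top agencies for GEO and AEO in the UK", "brand_mentioned": False},
    {"query": "Best agency for answer engine optimisation UK", "brand_mentioned": True},
]

def presence_rate(records):
    """Fraction of target queries in which the brand was mentioned at all."""
    if not records:
        return 0.0
    return sum(r["brand_mentioned"] for r in records) / len(records)

print(f"Presence rate: {presence_rate(records):.0%}")  # 2 of 3 queries → 67%
```

Run the same calculation per engine each month and the trend line writes itself.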
Build your query set properly (this is where most audits fail)
Most teams run five random prompts and call it “research.” That produces noise.
Use a structured query set of 24 prompts:
- 8 commercial intent prompts
- 8 comparison/prominence prompts
- 8 trust and proof prompts
Commercial intent prompts (examples)
- “Best AI SEO agency for UK SMEs”
- “Who can improve AI search visibility for law firms in the UK?”
- “Best agency for answer engine optimisation UK”
- “How to improve visibility in ChatGPT results for a UK business”
Comparison prompts (examples)
- “Ampliflow vs other UK AI SEO agencies”
- “Best alternatives to traditional SEO agencies for AI search”
- “Top agencies for GEO and AEO in the UK”
- “Who is better for AI citation growth in the UK: in-house or agency?”
Trust and proof prompts (examples)
- “Which UK agencies publish real case studies for AI visibility work?”
- “How can I verify if an agency can improve AI citations?”
- “What should an AI visibility report include?”
- “How do I audit whether my brand is visible on ChatGPT?”
Tune wording to your niche, but keep intent categories intact.
A practical worksheet format
Create a sheet with these columns:
- Query
- Intent type
- Engine
- Date/time
- Your brand mentioned (Y/N)
- Your domain cited (Y/N)
- Competitors mentioned
- Competitor domains cited
- Accuracy notes
- Useful quote or excerpt
- Action owner
- Priority
Running this manually once gives you more clarity than weeks of generic reporting.
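If you prefer to start the worksheet programmatically rather than by hand, a minimal sketch writes the column headers above to a CSV you can fill in during each run (the filename is illustrative):

```python
import csv

# Column headers matching the worksheet format above.
COLUMNS = [
    "Query", "Intent type", "Engine", "Date/time",
    "Brand mentioned (Y/N)", "Domain cited (Y/N)",
    "Competitors mentioned", "Competitor domains cited",
    "Accuracy notes", "Useful quote or excerpt",
    "Action owner", "Priority",
]

# Creates an empty scorecard; append one row per query per engine as you audit.
with open("citation_audit.csv", "w", newline="") as f:
    csv.writer(f).writerow(COLUMNS)
```

A shared spreadsheet works just as well; the point is that every run lands in the same columns.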
How to run the audit step by step
Step 1: Freeze your prompt list and conditions
Run the same prompts on the same day, with the same phrasing. If you keep changing prompts, your trend line becomes meaningless.
Set these rules:
- Use UK spelling and UK context in prompts
- Run all engines within a short time window
- Record outputs immediately
- Do not “improve” prompts mid-run
Consistency is more valuable than clever prompting.
Step 2: Run ChatGPT checks
Use your 24 prompts. For each result, record:
- Is your brand named?
- Is your website cited?
- Is the description accurate?
- Which competitors are positioned above you?
Example query and response method
Query example:
“Best agencies in the UK for improving visibility in ChatGPT and Perplexity.”
What to record:
- Mention list order
- Whether your brand appears
- Whether a direct source is cited
- Any incorrect claims
If results vary across re-runs, note volatility rather than picking your preferred output.
Step 3: Run Claude checks
Claude often presents balanced, explanatory responses. That makes it useful for spotting whether your brand is seen as a credible option or omitted entirely.
Look for:
- Recommendation presence
- Category placement (are you in the right service category?)
- Confidence language (“specialises in”, “appears to”, “likely”)
If Claude repeatedly describes your category incorrectly, your market positioning signals are weak.
Step 4: Run Perplexity checks
Perplexity is strong for source-linked answers. It is often the clearest place to inspect citation behaviour.
Record:
- Whether your domain is in cited links
- How frequently competitor domains recur
- Which page types are cited (homepage, service page, blog, directory)
If directories and third-party listicles dominate while your own pages do not appear, that is a content architecture problem.
Step 5: Run Google AI Overviews checks
For each query, note:
- Whether AI Overview appears
- If your brand is named in the summary
- If your domain appears in cited links
- Which domains dominate citations
Google AI Overviews can differ by query phrasing and local context, so use your frozen prompt list and capture evidence carefully.
Step 6: Score your baseline
Give each engine a score out of 100 using this simple weighting:
- Presence rate: 35 points
- Citation rate: 35 points
- Accuracy quality: 15 points
- Position quality: 15 points
Then calculate an overall weighted average.
Example interpretation:
- 0-30: critical invisibility
- 31-50: weak visibility
- 51-70: partial visibility with major gaps
- 71-85: strong visibility with optimisation opportunity
- 86-100: defensible visibility moat
This scoring model is simple enough to repeat monthly.
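The weighting above reduces to one line of arithmetic. A sketch, with illustrative rates expressed as fractions between 0.0 and 1.0:

```python
# Weights from the scoring model above (sum to 100).
WEIGHTS = {"presence": 35, "citation": 35, "accuracy": 15, "position": 15}

def engine_score(rates):
    """Combine the four audit rates into a score out of 100."""
    return sum(WEIGHTS[k] * rates[k] for k in WEIGHTS)

# Hypothetical first-audit figures for one engine.
chatgpt = {"presence": 0.40, "citation": 0.20, "accuracy": 0.80, "position": 0.30}

print(round(engine_score(chatgpt), 1))  # 37.5 → "weak visibility" band
```

Average the four engine scores (weighted by how much your buyers use each engine, if you know) to get the overall figure.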
What a realistic first audit usually finds
Across UK SME audits, the same patterns recur:
- Brand appears in branded prompts, disappears in category prompts.
- Blog posts get cited while service pages do not.
- Competitor comparison queries are dominated by directories.
- AI engines describe the business too broadly (“marketing agency”) rather than specifically (“AI visibility and citation optimisation”).
- Claims in AI outputs are partly outdated because site content is stale.
None of this requires panic. It requires focused cleanup.
Why your business is being skipped
When a brand is missing from AI recommendations, one or more of these issues is usually present.
1) Entity ambiguity
Your brand, services, and category descriptors are inconsistent.
Symptoms:
- Different service names across pages
- Inconsistent descriptions between site and social profiles
- Mixed regional signals (UK audience, US spelling, inconsistent location references)
Fix:
- Standardise naming across key pages
- Use one clear primary category descriptor
- Keep core entity facts consistent site-wide
2) Weak service page specificity
Many service pages read like broad brochure copy.
Symptoms:
- No concrete outcomes
- No measurable ranges
- No obvious use cases
- No structured question-answer content
Fix:
- Add concrete deliverables
- Add realistic timeframes
- Add industry-specific use cases
- Add FAQ blocks with plain-language answers
Your AmpliSearch service page should explicitly state what it does, for whom, and with what measurable outputs.
3) Thin evidence layer
AI engines are more likely to cite content that includes verifiable evidence.
Symptoms:
- Few specific figures
- No case detail
- No linked source context
Fix:
- Add evidence-backed claims
- Publish stronger case-study pages
- Use clear before/after frameworks
If you run a technical knowledge product, pages like Company Cortex should include concrete implementation patterns, not just feature language.
4) Poor internal topic structure
If your content does not create clear topical authority, citation consistency suffers.
Symptoms:
- Isolated blog posts with weak internal linking
- No clear cluster hierarchy
- Important pages are hard to discover
Fix:
- Build tighter internal links between pillar and cluster content
- Link service pages from relevant educational content
- Keep anchor language specific and intentional
5) Schema and machine-readability gaps
Without strong structured data, AI systems rely more on inference.
Symptoms:
- Missing FAQ, service, or organisation schema
- Inconsistent structured fields
- Key pages not marked clearly
Fix:
- Audit and repair schema coverage on priority pages
- Ensure structured data aligns with visible page copy
- Keep business identity fields consistent
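As a minimal sketch of the organisation schema fix, the JSON-LD below (generated here with Python for illustration; all values are placeholders, not real business data) would be embedded in a `<script type="application/ld+json">` tag on the homepage:

```python
import json

# Placeholder Organization entity; swap in your real, site-consistent details.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Agency Ltd",
    "url": "https://www.example.co.uk",
    "description": "AI visibility and citation optimisation for UK SMEs",
    "areaServed": "GB",
}

snippet = json.dumps(org, indent=2)
print(snippet)
```

Whatever fields you include, the values must match the visible copy on the page; mismatches between schema and on-page text are exactly the inconsistency that weakens entity confidence.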
DIY citation check process you can run every month
This is the no-fluff monthly workflow.
Monthly cycle (90 minutes)
- Run your 24 frozen prompts across 4 engines.
- Update your scorecard.
- Highlight top 5 missed commercial queries.
- For each missed query, identify the top cited competitor source.
- Improve one relevant page or section per query.
- Re-run those 5 queries after content update.
Do this every month for 90 days and your visibility profile will shift measurably.
How to write pages AI engines are more likely to cite
Use this practical structure on key commercial pages:
- Clear claim: What exactly do you do?
- Who it is for: Industry and business type specificity.
- Evidence block: Concrete outcomes, ranges, and proof points.
- Method block: A concise “how it works.”
- FAQ block: Direct answers to decision-stage questions.
This structure helps both humans and retrieval systems.
Example: turning a vague section into a citable section
Weak:
“We help businesses grow with AI and automation.”
Stronger:
“We run AI visibility audits across ChatGPT, Claude, Perplexity, and Google AI Overviews, then deliver a prioritised action plan focused on citation gains over 30 days.”
Why stronger works:
- Specific engines named
- Clear deliverable named
- Time horizon stated
- Outcome focus is explicit
Handling engine variance without overreacting
AI outputs can vary by day and phrasing. That does not make auditing pointless.
Treat variance as a feature of the system, then design around it:
- Use fixed prompts
- Track trends, not one-off wins
- Record multiple runs for high-stakes prompts
- Focus on repeated gaps
If your brand is consistently absent across repeated runs, that is a real signal.
Real example queries and how to interpret them
Below are practical checks you can use immediately. These are methodological examples. Run them yourself in your own environment to capture current outputs and timestamps.
Query set A: category discovery
Prompt:
“Best UK agencies for AI citation visibility.”
Interpretation framework:
- If competitors are cited and you are absent, inspect their cited page types.
- If listicles dominate, your own first-party pages may need stronger structure.
- If your brand appears but is generic, fix category clarity.
Query set B: buyer intent
Prompt:
“How do I know if my business is visible on ChatGPT?”
Interpretation framework:
- Are you cited as a source for the method?
- Does the response point to practical frameworks you can own?
- If not, publish stronger how-to content.
Query set C: comparison intent
Prompt:
“Is a free growth audit the same as an AI visibility report?”
Interpretation framework:
- Are your product distinctions understood?
- If AI conflates offers, improve disambiguation blocks and offer pages.
- Ensure pricing and scope differences are explicit.
A dedicated offer page like AI Visibility Report helps both users and AI systems draw clearer distinctions between offers.
Common mistakes that waste months
- Publishing more content without improving core page quality.
- Chasing prompt hacks instead of entity clarity and evidence.
- Measuring only mentions, not citation quality.
- Ignoring competitor citation patterns.
- Treating one positive output as strategic success.
AI visibility is not won by volume alone. It is won by clarity, structure, and consistency.
A 30-day sprint plan
If you want momentum quickly, use this:
Week 1
- Run full baseline audit
- Score each engine
- Pick top 5 missed commercial queries
Week 2
- Improve 2 core service pages
- Add clear FAQ sections
- Tighten entity and offer descriptions
Week 3
- Publish one high-utility comparison page
- Publish one method-led educational piece
- Strengthen internal links across cluster pages
Week 4
- Re-run top 10 queries
- Compare score changes
- Prioritise next month’s top 5 opportunities
This is enough to move from “invisible” to “present” in many categories.
How to report outcomes to leadership
Keep reporting simple and commercial:
- Baseline score vs current score
- Presence rate change by engine
- Citation wins on high-intent queries
- Top competitor movement
- Next 30-day plan
Leadership does not need AI theory. They need the visibility trend, the opportunities, and the next actions.
Should you do this in-house or buy a report?
Do it in-house if:
- You have a content owner and a technical owner
- You can run monthly cycles consistently
- You can implement page changes quickly
Use a paid audit if:
- You need a fast baseline with external benchmarking
- Internal bandwidth is low
- You want a focused action plan with implementation priority
Both routes are valid. What fails is doing neither.
A final checklist you can copy
Before you finish your first audit, confirm you have:
- 24 frozen prompts across three intent types
- Coverage across four engines
- A baseline score per engine
- Competitor citation map
- Top 5 missed commercial queries
- A 30-day implementation plan
If you have these six items, you are ahead of most businesses already.
Bottom line
In 2026, visibility is no longer just ranking position. It is citation presence inside AI answers where buyers make early decisions.
If your brand is absent there, growth friction increases quietly: fewer warm leads, weaker trust transfer, and more competitor exposure before prospects ever reach your site.
The fix is practical:
- Audit citations consistently
- Tighten entity clarity
- Improve evidence and structure on core pages
- Re-run monthly and iterate
This is exactly where disciplined teams create advantage while competitors are still guessing.
If you’d rather we run the full audit for you, the AI Visibility Report is £79: /ai-visibility-report.