Understand your AI Visibility Dashboard
Learn to read and interpret every metric, compare AI models, track your evolution over time, and turn your audit results into concrete actions.
Dashboard overview
The dashboard is your control center for AI visibility. After each audit, it automatically displays all the key metrics about how AI models perceive your brand.
It is divided into several sections:
- 6 key metrics — your visibility health indicators at a glance
- Native vs Web comparison — how you perform on desktop AI vs web-augmented AI
- Model breakdown — detailed performance for each AI model tested
- Evolution chart — track your progress across multiple audits
- GEO Checklist — website optimization score for AI visibility
- Action plans — concrete steps to improve your scores
The 6 key metrics
Your dashboard displays 6 main metrics, each visualized as a donut gauge with a percentage or score. Together, they paint a complete picture of your AI visibility.
Visibility Score
Your overall presence in AI responses. Measures how often AI models mention your brand across all prompts. 100% = mentioned in every response.
Mention Rate
Frequency of mentions across all AI models tested. A high rate means multiple AI models consistently recommend you.
AI Share of Voice
Your market share in AI-generated responses compared to competitors. Shows how dominant your brand is in AI conversations about your industry.
Average Position
Your average ranking when AI lists recommendations. Position #1 = you are the top recommendation. Lower is better.
Sentiment Index
Measures the tone of AI mentions (0-10 scale). 10 = extremely positive. Captures whether AI models speak of your brand favorably or critically.
Topic Coverage
Breadth of topics where your brand appears. 100% = you are mentioned across all audit themes (products, services, comparisons, etc.).
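As an illustration of how the first two gauges could be derived, here is a minimal sketch assuming the Visibility Score is simply the share of AI responses that mention the brand (the function name and input shape are assumptions, not the product's actual pipeline):

```python
def visibility_score(mentioned: list[bool]) -> float:
    """Percent of AI responses that mention the brand.

    `mentioned` holds one boolean per (prompt, model) response.
    """
    if not mentioned:
        return 0.0
    return 100 * sum(mentioned) / len(mentioned)

# 4 of 5 responses mention the brand -> 80% visibility
print(visibility_score([True, True, False, True, True]))  # 80.0
```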
How scoring works
Each AI model's response is analyzed and scored on a 0-10 scale. The score depends on whether and how your brand is mentioned in the response.
Score scale
Score adjustments
The base score can be adjusted with bonuses and penalties:
- +1 point — Detailed description of your brand (features, benefits)
- +1 point — Recent or up-to-date information mentioned
- +1 point — Positive tone or recommendation
- -2 points — Negative sentiment or critical mention
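The adjustments above can be sketched as a simple function. This is a simplified model for illustration: the real scoring pipeline is not public, and clamping the result to the 0-10 scale is an assumption.

```python
def adjusted_score(base: float, detailed: bool, recent: bool,
                   positive: bool, negative: bool) -> float:
    """Apply the bonus/penalty adjustments to a base 0-10 score."""
    score = base
    if detailed:
        score += 1   # +1: detailed description (features, benefits)
    if recent:
        score += 1   # +1: recent or up-to-date information
    if positive:
        score += 1   # +1: positive tone or recommendation
    if negative:
        score -= 2   # -2: negative sentiment or critical mention
    return max(0.0, min(10.0, score))  # clamp to the 0-10 scale (assumption)

# Base 7.0, detailed and positive mention -> 9.0
print(adjusted_score(7.0, detailed=True, recent=False,
                     positive=True, negative=False))  # 9.0
```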
Native vs Web scores
Your dashboard compares two types of AI responses side by side:
| | Native AI | Web AI |
|---|---|---|
| What it is | AI responds from its training data only (no internet search) | AI searches the web before answering (real-time data) |
| Examples | ChatGPT (desktop), Claude (app), Gemini (app) | Perplexity, ChatGPT with browsing, SearchGPT |
| Depends on | Brand reputation in training data | GEO, online presence, recent content |
| Updates | Only when the model is retrained | Real-time with web content |
Understanding the gap
The dashboard shows both scores with a VS indicator between them. Smart alerts help you interpret the gap:
- Big gap (>20%) — Your brand relies too heavily on web results. Work on building your reputation in AI training data.
- Medium gap (10-20%) — Web gives a noticeable boost. Keep investing in online presence.
- Small gap, both high (>70%) — Excellent! Your brand is strong in both contexts.
- Small gap, both low (<40%) — Priority: improve your overall visibility on all fronts.
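The alert logic above can be sketched in a few lines (the alert strings are paraphrased; thresholds come straight from the list above):

```python
def interpret_gap(native: float, web: float) -> str:
    """Map the Native-vs-Web score gap (in percentage points) to an alert."""
    gap = abs(native - web)
    if gap > 20:
        return "big gap: reduce reliance on web results"
    if gap >= 10:
        return "medium gap: web gives a noticeable boost"
    if native > 70 and web > 70:
        return "small gap, both high: strong in both contexts"
    if native < 40 and web < 40:
        return "small gap, both low: improve overall visibility"
    return "small gap: balanced performance"
```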
Model comparison
The Model Breakdown section shows a detailed table where each AI model tested in your audit is listed with its individual scores:
- Score (/10) — Overall performance score for that model
- Sentiment — How positively or negatively the model speaks about your brand
- Position — Average ranking in that model's recommendations
- Mention Rate — How often that model mentions your brand
Scores are color-coded for quick reading:
- Green (7+) — High performance, the model knows and recommends you
- Yellow (4-7) — Medium performance, room for improvement
- Red (<4) — Low performance, the model barely mentions you
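The color mapping is straightforward to express in code (a sketch of the thresholds listed above, resolving the shared boundary so that exactly 7 counts as green and exactly 4 as yellow):

```python
def score_color(score: float) -> str:
    """Color-code a 0-10 model score as in the Model Breakdown table."""
    if score >= 7:
        return "green"   # high performance: known and recommended
    if score >= 4:
        return "yellow"  # medium performance: room for improvement
    return "red"         # low performance: barely mentioned
```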
Radar chart
Next to the table, a radar chart provides a visual comparison of all models across multiple dimensions. This makes it easy to spot which models are strong and which need attention.
Evolution and history
When you have two or more audits for a client, the dashboard unlocks additional features:
Progress banner
At the top of the dashboard, a banner summarizes your progress since the last audit:
- Improved — Number of metrics that went up
- Declined — Number of metrics that went down
- Stable — Number of metrics that stayed the same
It also highlights your best gain and worst loss to quickly identify what changed the most.
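A comparison between two audits along these lines could look like this (the metric dictionaries and key names are hypothetical; "worst loss" is taken as the smallest delta, which matches the banner only when at least one metric declined):

```python
def progress_summary(previous: dict, current: dict) -> dict:
    """Compare two audits metric by metric, as the progress banner does."""
    deltas = {m: current[m] - previous[m] for m in previous}
    return {
        "improved": sum(d > 0 for d in deltas.values()),
        "declined": sum(d < 0 for d in deltas.values()),
        "stable": sum(d == 0 for d in deltas.values()),
        "best_gain": max(deltas, key=deltas.get),   # metric that rose the most
        "worst_loss": min(deltas, key=deltas.get),  # metric with the smallest delta
    }

summary = progress_summary(
    {"visibility": 50, "mention_rate": 40, "share_of_voice": 30},
    {"visibility": 60, "mention_rate": 40, "share_of_voice": 25},
)
print(summary["improved"], summary["best_gain"])  # 1 visibility
```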
Evolution chart
A line chart tracks all 6 metrics over time, with each audit date as a data point. This lets you visualize long-term trends and see the impact of your optimization actions.
Reading audit results
The Results page (accessible from the audit detail) shows every prompt that was sent to each AI model, along with the full responses and individual scores.
Structure of the results
Results are organized prompt by prompt. For each prompt, you see:
- The prompt text — the question that was sent to the AI models
- A grid of response cards — one card per AI model
What each response card shows
- Model name with a colored icon
- Score badge — color-coded (green/yellow/red) with the score out of 10
- Response preview — truncated text of the AI's response
- Sentiment and position — mini indicators for each
- Full response — click to expand and read the complete response in a modal
GEO Checklist (Website analysis)
The GEO Checklist analyzes your website to evaluate how well it is optimized for Generative Engine Optimization — making your site easy for AI models to crawl, understand and cite.
Overall GEO Score
Your site receives an overall score (0-100%) with a letter grade:
- A (70%+) — Well optimized for AI crawling
- B-C (40-70%) — Some improvements needed
- D-F (<40%) — Significant work required
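As code, the grading might look like the sketch below. Only the 70 and 40 boundaries come from the dashboard; the splits inside the B-C and D-F bands are illustrative assumptions.

```python
def geo_grade(score: float) -> str:
    """Letter grade for a 0-100 GEO score.

    Boundaries at 70 and 40 match the dashboard; the B/C and D/F
    cutoffs inside those bands are assumed for illustration.
    """
    if score >= 70:
        return "A"
    if score >= 55:
        return "B"  # assumed split within the 40-70 band
    if score >= 40:
        return "C"
    if score >= 25:
        return "D"  # assumed split within the <40 band
    return "F"
```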
The 4 categories
The score is broken down into 4 categories, each with its own progress bar:
- AI Crawlability — Can AI bots access and read your site?
- Structured Data — Does your site use schema.org markup?
- Content Quality — Is your content clear, comprehensive and authoritative?
- Authority & Trust — Does your site show expertise and trustworthiness?
Simple vs Detailed view
You can toggle between two viewing modes:
- Simple view — High-level overview with expandable items. Click any checklist item to see details and recommended actions.
- Detailed view — Full checklist with all items visible, ideal for a thorough review or PDF export.
Advanced GEO Modules
In addition to the GEO Checklist, the dashboard displays 5 advanced analysis modules that run automatically during each audit. These modules provide deeper technical insights about your website's AI readiness and competitive positioning.
Module 1 — SSR & AI Crawlability
Checks whether your website is technically accessible to AI crawlers. The widget displays a global score and 4 sub-scores:
- SSR — Is your site server-side rendered or does it rely on client-side JavaScript?
- Robots.txt — Does your robots.txt allow AI bots (GPTBot, ClaudeBot, etc.)?
- WAF — Does your firewall block AI crawlers?
- llms.txt — Does your site include an llms.txt file for AI instructions?
If specific AI crawlers are blocked, they are listed in red below the widget.
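The robots.txt sub-check can be approximated with Python's standard library. The bot list below is an assumption (the product may test a different set of user agents), and the real module reads the live file rather than a string:

```python
from urllib.robotparser import RobotFileParser

# Common AI crawler user agents (assumed list, not the product's exact set)
AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def blocked_ai_bots(robots_txt: str) -> list[str]:
    """Return AI crawlers disallowed from the site root by a robots.txt body."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [bot for bot in AI_BOTS if not parser.can_fetch(bot, "/")]

sample = "User-agent: GPTBot\nDisallow: /\n\nUser-agent: *\nAllow: /"
print(blocked_ai_bots(sample))  # ['GPTBot']
```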
Module 2 — Entity Health
Evaluates how well your brand is represented as a knowledge graph entity. Sub-scores include:
- Wikidata — Does your entity exist on Wikidata?
- Properties — How complete is your Wikidata entry (description, logo, URLs, etc.)?
- Schema.org — Does your website include structured Organization/LocalBusiness markup?
- sameAs — Are your social profiles and external identifiers linked via sameAs properties?
Module 3 — Mention vs Citation
Analyzes the AI responses from your audit and uses an AI classifier to categorize each brand reference as one of:
- Mentions — Your brand is named in passing or listed among others
- Citations — Your brand is cited as a source or reference
- Recommendations — Your brand is explicitly recommended
The widget also shows a mention-to-citation ratio, which indicates how often simple mentions become actual citations or recommendations.
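Under the interpretation above, the ratio is the share of references that go beyond a simple mention (a sketch; the exact formula used by the widget is not documented):

```python
def mention_to_citation_ratio(mentions: int, citations: int,
                              recommendations: int) -> float:
    """Share of brand references that are citations or recommendations."""
    total = mentions + citations + recommendations
    if total == 0:
        return 0.0
    return (citations + recommendations) / total

# 6 plain mentions, 3 citations, 1 recommendation -> 0.4
print(mention_to_citation_ratio(6, 3, 1))  # 0.4
```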
Module 4 — Citation Readiness
Evaluates whether your website content is structured in a way that makes it easy for AI models to cite. Displays 7 sub-scores:
- Sources — External references and links cited on your pages
- Statistics — Presence of numbers, data, and measurable claims
- Experts — Author attribution and expert quotes
- Factual density — Ratio of objective, fact-based sentences vs. first-person/promotional content
- Paragraphs — Well-structured content with clear paragraphs
- Questions (H2) — Headers phrased as questions (matching how users query AI)
- Direct answers — Content that directly answers common questions
Module 5 — STS Detection
An experimental module that analyzes your competitors for signs of Search Trust Signals manipulation — techniques used to artificially boost AI visibility. It checks for hidden text, authority claims, and suspicious patterns.
The widget shows how many competitors were analyzed and flags any suspects. If no anomalies are detected, a green checkmark is displayed.
Data quality alerts
When issues are detected with competitor data during the audit, the dashboard displays data quality alerts. These alerts help you understand potential problems that may affect the accuracy of competitive analysis.
Types of alerts
- Unreachable URL — A competitor's website could not be reached during the audit (HTTP error, timeout, etc.)
- Missing URL — A competitor was configured without a website URL, limiting GEO analysis
- Suspicious competitor — A competitor shows signs of potential STS manipulation
Each alert can be dismissed individually by clicking the X button. Alerts are color-coded by severity: red for errors, orange for warnings.
Action plans
Based on your audit results, the dashboard displays action plans — concrete steps you can take to improve your AI visibility scores.
How actions are organized
Each action item includes:
- Priority level — High, Medium, or Low (color-coded dot)
- Title and description — What needs to be done and why
- Status — Pending, In Progress, or Completed
- Responsible party — Who should handle this action
- Time estimate — Approximate effort required
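If you track these items in your own tooling, the fields map naturally onto a small record type (field names and the example values are illustrative, not an exported schema):

```python
from dataclasses import dataclass

@dataclass
class ActionItem:
    priority: str        # "High" | "Medium" | "Low"
    title: str           # what needs to be done
    description: str     # why it matters
    status: str          # "Pending" | "In Progress" | "Completed"
    owner: str           # responsible party
    estimate_hours: float  # approximate effort

item = ActionItem(
    priority="High",
    title="Allow AI crawlers in robots.txt",
    description="GPTBot is currently disallowed, blocking AI indexing.",
    status="Pending",
    owner="Dev team",
    estimate_hours=2.0,
)
print(item.priority, item.status)  # High Pending
```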
GEO-generated actions
Some actions are automatically generated from the GEO module analysis. For example, if Module 1 detects that your robots.txt blocks AI crawlers, an action will be created to fix it. If Module 2 finds that your Wikidata entity is missing, an action to create it will be added. These GEO-based actions complement the AI-generated recommendations.
Tips for improvement
Here are proven strategies to boost your AI visibility scores:
Strengthen your online presence
AI models learn from web content. The more authoritative content exists about your brand online (blog articles, press mentions, directory listings, reviews), the more likely AI models are to mention you.
Optimize for structured data
Add schema.org markup to your website (Organization, Product, FAQ, HowTo). Structured data helps AI models understand your brand, products and services in a machine-readable way.
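A minimal Organization block can be generated as JSON-LD like this (all names and URLs below are placeholders to replace with your own; the `sameAs` links are what Module 2 checks for):

```python
import json

# Illustrative schema.org Organization markup; every value is a placeholder.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://example.com",
    "logo": "https://example.com/logo.png",
    "sameAs": [
        "https://www.linkedin.com/company/example-brand",
        "https://x.com/examplebrand",
    ],
}

# Embed the output in a <script type="application/ld+json"> tag on your pages.
print(json.dumps(organization, indent=2))
```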
Create comprehensive content
Write detailed, factual content about your expertise. AI models favor sources that provide thorough, well-structured information over superficial content.
Build authority signals
Get mentioned in industry publications, partner websites and authoritative directories. The more trusted sources reference your brand, the higher your AI visibility.
Monitor and iterate
Run audits regularly and track your evolution chart. Identify which actions had the most impact and double down on what works.
Ready to analyze your visibility?
Run your first audit and discover how AI models perceive your brand.
Create a new audit

Ready to audit your AI visibility?
Create your free account and receive 500 bonus credits.
Create free account