Frequently asked questions
Everything you need to know about AI Labs Audit, AI visibility audits, credits, API, tracking and integrations.
Platform & audits
What is AI Labs Audit?
AI Labs Audit is a European platform that measures brand visibility inside generative AI (GEO and AEO). It queries more than 300 AI models simultaneously (ChatGPT, Claude, Gemini, Perplexity, Copilot, Grok, DeepSeek) to measure how your brand is cited, positioned and perceived in their answers.
The platform delivers defensible scores across 6 dimensions, Hero Grades from A+ to F, prioritized action plans, hallucination detection and competitive analysis (sector Share of Voice). It also provides white-label PDF reports, a client portal, GEO showcases, an MCP server (220 tools) and a public REST API.
Hosted in Europe, GDPR-compliant and with no lock-in, it is designed for digital agencies, GEO consultants, e-commerce brands, SMBs and growth teams.
How does an AI visibility audit work?
An AI Labs Audit runs through several automated steps:
- Setup: you create a client (company, URL, sector) and the platform generates prompts that are representative of your activity.
- Estimation: a credit cost estimate is displayed before launch (typically 24 to 60 credits depending on the number of prompts and selected models).
- Execution: prompts are sent in parallel to more than 300 models via OpenRouter, combining native mode (model knowledge) and web search mode.
- Analysis: responses are scored across 6 dimensions, benchmarked against automatically detected competitors and summarized into Hero Grades.
- Deliverables: narrative dashboard, prioritized action plan, premium PDF (15-25 pages) and public GEO showcase.
Which AI models are queried during an audit?
AI Labs Audit queries more than 300 generative AI models simultaneously, covering the main global providers:
- OpenAI — ChatGPT (GPT-5, GPT-5-mini by default, GPT-4 families)
- Anthropic — Claude (Sonnet, Opus, Haiku)
- Google — Gemini and AI Overviews
- Perplexity — Sonar (selected by default, always in web mode)
- Microsoft — Copilot
- xAI — Grok
- DeepSeek and specialized open-source models
Each model is tested in native mode (intrinsic knowledge) and in web mode (retrieval-augmented) when relevant, which allows you to compare scores and identify where your optimizations have the most impact.
What is the Hero Grade (A+ to F)?
The Hero Grade is the composite rating of your AI visibility, expressed on a letter scale from A+ to F. It summarizes your overall 0-100 score at a glance, computed with the following weighting: citations 40%, position 30%, sentiment 20% and coverage 10%.
The mapping is as follows:
- A+: 90 and above — outstanding visibility
- A: 80-89 — very strong visibility
- B: 70-79 — good visibility
- C: 60-69 — average visibility
- D: 50-59 — low visibility
- E: 40-49 — very low visibility
- F: under 40 — near-total absence
Three Hero Grades are computed and stored in the database: AI Visibility, GEO Tech and AEO, to track their evolution over time.
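The weighting and letter mapping above can be sketched as follows. This is an illustrative reimplementation, not the platform's code; it only assumes the documented weights (40/30/20/10) and grade bands, with each dimension already scored 0-100.

```python
def composite_score(citations: float, position: float,
                    sentiment: float, coverage: float) -> float:
    """Weighted 0-100 score: citations 40%, position 30%,
    sentiment 20%, coverage 10% (weights from the FAQ above)."""
    return 0.40 * citations + 0.30 * position + 0.20 * sentiment + 0.10 * coverage

def hero_grade(score: float) -> str:
    """Map a 0-100 score to the documented A+..F letter scale."""
    bands = [(90, "A+"), (80, "A"), (70, "B"), (60, "C"), (50, "D"), (40, "E")]
    for threshold, grade in bands:
        if score >= threshold:
            return grade
    return "F"  # under 40: near-total absence
```

For example, a brand scoring 95 on citations, 90 on position, 92 on sentiment and 88 on coverage gets a composite of 92.2, hence an A+.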
Which dimensions are analyzed (native, web, sentiment, share of voice, brand safety)?
Each audit analyzes your brand across several complementary dimensions:
- Citations (40%) — frequency of brand mentions in AI responses, in both native and web modes.
- Position (30%) — rank of the brand's appearance within the response.
- Sentiment (20%) — tone and context (positive, neutral, negative).
- Coverage (10%) — breadth of topics where you are cited.
- Sector Share of Voice — share of voice vs. competitors automatically detected by NAF sector / area.
- Brand Safety & Semantic Perception — detection of sensitive or reputationally risky contexts.
Advanced GEO modules round out the analysis: SSR & Crawlability, Wikidata Entity Health, Citation Readiness (Princeton GEO KDD 2024 methodology), STS Detection, Mention/Citation ratio, sector median and P75.
How long does it take to get audit results?
A complete audit is typically available in 3 to 10 minutes. Processing is asynchronous: the platform queries 300+ models in parallel, aggregates the responses, computes scores across the 6 dimensions and generates the narrative dashboard automatically.
The typical flow is:
- 2 to 10 min — execution of prompts on the selected AI models (GPT-5-mini and Perplexity Sonar by default, extendable to the full catalog).
- A few seconds — computation of Hero Grades, Share of Voice and prioritized recommendations.
- On demand — generation of the premium PDF (15-25 pages) and publication of the public GEO showcase.
You can track progress in real time from the client dashboard, then compare each new audit to history to measure your progress.
What is the difference between GEO, AEO and LLMO?
These three acronyms refer to related but distinct disciplines of AI optimization:
- GEO — Generative Engine Optimization: optimization for generative search engines that combine user content and AI content (Google AI Overviews, Bing Copilot, Perplexity). Covers SSR crawlability, Schema.org, llms.txt and Citation Readiness.
- AEO — Answer Engine Optimization: optimization for appearing in the answers of AI answer engines (ChatGPT, Claude, Gemini). The goal is no longer the click but the direct mention or citation inside the response.
- LLMO — Large Language Model Optimization: generic term covering optimization for large language models, regardless of channel (chat, search, assistant, agent).
AI Labs Audit covers all three by simultaneously measuring native visibility (LLMO), performance in AI answers (AEO) and technical crawlability signals (GEO), with a dedicated Hero Grade for each.
Pricing, credits & agencies
What subscription plans are available on AI Labs Audit?
AI Labs Audit offers five monthly plans, with no lock-in, in euros excluding VAT:
- Discovery: 0 €/month — 100 credits/month + 500 credits on signup, ideal for testing the platform.
- Consultant: 79 €/month — 1,500 credits/month, perfect for an independent GEO/AEO consultant.
- Consultant+: 179 €/month — 5,000 credits/month, to manage several active clients.
- Agent: 349 €/month — 10,000 credits/month, unlocks the MCP server (215 tools) and the full REST API.
- Agency+: 599 €/month — 20,000 credits/month, shared credit pool, multi-agent, full white-label and multi-client management.
All plans include access to the 300+ AI models, the 6-dimension scoring and premium PDF reports. Plan changes are immediate via Stripe.
How do credits work on AI Labs Audit?
Credits are the platform's universal consumption unit. They are debited for every billable action:
- Launching an audit: consumption computed from number of prompts × number of AI models queried, with a cost per model (historical medians updated continuously).
- White-label premium PDF report: 60 credits per generation (15 to 25 pages, multi-AI scorecard, Source Authority, Brand Safety).
- GEO showcase regeneration: around 10 credits.
Monthly credits are automatically renewed on the plan's anniversary date and do not roll over from one month to the next. However, credits bought as packs or earned through referrals never expire. You can track your balance in real time from /my-credits.
How much does an audit cost in credits?
The cost of an audit depends directly on the number of prompts tested and the number of AI models queried. In practice, a standard audit consumes between 24 and 60 credits.
- Quick audit (default configuration: gpt-5-mini + perplexity-sonar, ~12 prompts): around 24 credits.
- Full multi-model audit (ChatGPT, Claude, Gemini, Perplexity, Grok, native + web mode): up to 60 credits.
Before every launch, the platform displays a precise estimate (endpoint POST /audits/estimate) based on historical medians, with an average 42% reduction thanks to Phase A optimizations (caching, max_results). You validate the cost before any consumption — no surprises.
Can I try AI Labs Audit for free?
Yes. The Discovery plan at 0 €/month is entirely free, no credit card required.
- 500 credits offered on signup (welcome bonus).
- 100 credits/month renewed automatically.
- Access to the 300+ AI models, 6-dimension scoring and the full dashboard.
- Generate real audits on your own clients (a quick audit consumes ~24 credits).
With the initial 600 credits you can launch several full audits and generate at least one premium PDF report (60 credits) before deciding whether a paid plan makes sense. Signup takes less than a minute on ailabsaudit.com/register (email + password or Google OAuth). No automatic sales follow-up, no commitment.
How do I buy additional credits?
If your monthly credits aren't enough, you can buy on-demand credit packs from the /my-credits page. Four packs are available, payable by card via Stripe:
| Pack | Credits | Price ex. VAT | Savings |
|---|---|---|---|
| Starter | 500 | 39 € | — |
| Standard | 2,000 | 129 € | ~17% |
| Pro | 5,000 | 279 € | 20% |
| Enterprise | 15,000 | 749 € | 30% |
Important technical note: credits are added to your account only after confirmation of the Stripe webhook payment_intent.succeeded, not through the success redirect. Allow a few seconds between payment and the effective credit. Purchased credits never expire and stack on top of monthly credits.
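The "webhook, not redirect" rule above can be sketched as a server-side handler. This is a minimal illustration, not the platform's code; the metadata field names (user_id, credits) are assumptions about how the PaymentIntent might be annotated.

```python
def handle_stripe_event(event: dict, balances: dict) -> bool:
    """Credit the account only on the payment_intent.succeeded
    webhook event, never on the browser's success redirect."""
    if event.get("type") != "payment_intent.succeeded":
        return False  # ignore all other event types
    meta = event["data"]["object"]["metadata"]  # hypothetical fields
    user = meta["user_id"]
    balances[user] = balances.get(user, 0) + int(meta["credits"])
    return True
```

This is also why the few seconds of delay mentioned above exist: the browser redirect returns before Stripe delivers the webhook.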
How does the Agency mode (Agency+) work?
The Agency+ plan (599 €/month ex. VAT, 20,000 credits) is designed for digital agencies managing several GEO/AEO clients in parallel:
- Shared credit pool: all agents in the agency draw from the same balance, with consumption tracking per agent and per client.
- Centralized multi-client management: a single dashboard lists every client, their scores (AI Visibility, GEO Tech, AEO), their alerts and audit history.
- Read-only access for consultants: each agent can be assigned a role (admin, consultant, read-only) with granular permissions (RBAC), avoiding unwanted changes.
- Full white-label: PDFs, client portal, GEO showcases without any AI Labs branding.
- Stripe seat-based management: add/remove agents mid-month, 10,000 additional credits per extra agent.
Client transfers between agents and dedicated API keys per client are also supported.
What is the AI Labs Audit referral program?
The referral program rewards partners who recommend the platform to other agencies or GEO consultants:
- 500 credits every time a referred agency subscribes to a paid plan (Consultant, Consultant+, Agent or Agency+).
- Credits automatically added to the referrer's account as soon as the first Stripe payment is confirmed.
- No cap on the number of referrals: the more you recommend, the more credits you accumulate to use on your own audits.
- Potential recurring revenue: a deeper partnership (revenue share) can be discussed with the team for regular business introducers.
Your unique referral link is available in your partner area. No manual action is required: attribution tracking is automatic. This mechanism is designed to be transparent and stackable with monthly credits and purchased packs.
AEO, GEO, AQA & tracking
What is the AQA (AI Question Answer) standard?
The AQA (AI Question Answer) standard is an open format (MIT license) that enriches your FAQs to make them directly readable by generative AI (ChatGPT, Claude, Gemini, Perplexity). Concretely, it adds normalized markup (questions, answers, language, freshness, author) to your pages, which LLM crawlers can extract unambiguously — increasing your chances of being cited.
AI Labs Audit offers 4 compliance levels verifiable via the public validator /aqa-validator:
- basic: a few minimal Q&A properly marked up;
- standard: complete FAQ (10+ Q&A, metadata, language);
- full: exhaustive documentation, changelog, versioning;
- shield: premium level with compliance guarantee.
The validator automatically detects the number of questions, language (fr/en/es/de) and FAQ URLs. Implementation in under 5 minutes for most sites.
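As a loose illustration of the kind of metadata involved (questions, answers, language, freshness, author), an AQA-enriched Q&A entry might look like the sketch below. The property names here are hypothetical: the authoritative field list comes from the AQA specification, and /aqa-validator is the reference for what actually passes each compliance level.

```
{
  "question": "What is AI Labs Audit?",
  "answer": "A European platform that measures brand visibility inside generative AI.",
  "language": "en",
  "dateModified": "2025-01-15",
  "author": "AI Labs Audit",
  "version": "1.2"
}
```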
How does AI Labs Audit detect hallucinated URLs?
A hallucinated URL is a page that an AI cites or recommends, but that does not exist on your site (or used to exist and has since been removed). For example, ChatGPT sends a user to yoursite.com/pro-guide-2024 which returns a 404. The result: frustration, lost traffic, reputation damage.
AI Labs Audit detects these URLs in two steps:
- Extraction: each audit collects every URL cited by the models in their responses (via the /hallucinated-urls endpoints and AI tracking).
- HTTP verification: each URL is tested (200/301/404/410 status) and correlated with real bot logs. A URL never crawled but cited is flagged as a "probable hallucination".
You then receive an action plan: create the missing page, set up a 301 redirect, or publish authoritative content to correct the reference inside the AIs. Module included in every paid plan.
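The two-step verification above reduces to a simple classification rule. This is an illustrative sketch of the logic described, not the platform's implementation:

```python
def classify_cited_url(status: int, seen_in_bot_logs: bool) -> str:
    """Classify a URL cited by an AI model, combining its HTTP
    status with whether real bot logs ever recorded a crawl of it."""
    if status == 200:
        return "ok"
    if status in (301, 302):
        return "redirect"  # follow up on the redirect target separately
    if status in (404, 410) and not seen_in_bot_logs:
        # cited by an AI but never crawled and now dead:
        return "probable hallucination"
    return "broken"  # was crawled at some point but is dead today
```

A "probable hallucination" maps to the remediation options above: create the page, 301-redirect it, or publish authoritative content to correct the reference.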
What is AI Brand Safety?
AI Brand Safety covers all the practices that protect a brand against hallucinations and negative associations produced by generative AI. Unlike traditional advertising brand safety (preventing an ad from appearing next to sensitive content), it focuses on what AIs say about you when a user asks them.
AI Labs Audit measures four concrete risks:
- Factual hallucinations: incorrect prices, features, executives or dates;
- Negative associations: accidental pairing with a controversial competitor, a scandal, or a questionable practice;
- Degraded sentiment: recurring negative tone (score < 60%);
- Hallucinated URLs that lead to 404s.
The dashboard triggers real-time alerts whenever a drift is detected on any of the 300+ models, along with a remediation plan: authoritative content to publish, third-party sources to consolidate, Organization schema to enrich.
How does AI tracking work (bots + referrals)?
AI Labs Audit's AI tracking continuously measures two types of signals on your site:
- Bot visits: user-agent signature detection for 100+ AI crawlers (GPTBot, ClaudeBot, PerplexityBot, Google-Extended, Bingbot, CCBot, etc.), with timestamp, crawled URL and IP;
- Referral traffic: users arriving from chat.openai.com, claude.ai, gemini.google.com, perplexity.ai, copilot.microsoft.com…
Installation via an open-source Go agent (or WordPress/Shopify plugin) that pushes events to the API in batches (max 500 events/POST, HMAC-SHA256, rate limit 60 req/min). A single curl command activates tracking per domain.
The dashboard then shows: most active bots, most crawled URLs, crawl ↔ citation correlation within audits, and alerts on missing bots or suspicious spikes. It's the only way to prove the ROI of an AEO/GEO strategy with first-party data.
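The batching and signing constraints above (max 500 events per POST, HMAC-SHA256) can be sketched as follows. Note the assumption: the exact bytes that get signed and the header carrying the signature are defined by the open-source agent, so this only illustrates the general HMAC-SHA256 pattern over a canonical JSON body.

```python
import hashlib
import hmac
import json

def sign_batch(events: list[dict], secret: str) -> str:
    """Return an HMAC-SHA256 hex signature over a canonical JSON
    serialization of the event batch (illustrative canonicalization)."""
    if len(events) > 500:
        raise ValueError("the API accepts at most 500 events per POST")
    body = json.dumps(events, separators=(",", ":"), sort_keys=True)
    return hmac.new(secret.encode(), body.encode(), hashlib.sha256).hexdigest()
```

The server recomputes the same HMAC with the shared secret and rejects the batch on any mismatch, which is what makes the first-party tracking data tamper-evident.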
What's the difference between native score and web score?
AI Labs Audit distinguishes two complementary measurement modes:
- Native score: what the AI knows on its own, from its training memory, without web search enabled. It reflects your footprint in the data used to train the model (Common Crawl, Wikipedia, Reddit, press, etc.).
- Web score: what the AI answers with web search enabled (ChatGPT Search, Perplexity, Gemini with Grounding, Claude with tools). It reflects your real-time visibility via indexed sources.
The two scores tell different stories. A low native score + high web score indicates a young brand that is well referenced. Conversely, a high native score + low web score reveals a legacy brand whose current site is poorly crawlable (SSR, robots.txt, llms.txt to fix).
Measuring both is essential to prioritize actions: working on long-term authority (native) or technical indexing (web).
Can I audit visibility on ChatGPT, Claude, Gemini and Perplexity separately?
Yes. AI Labs Audit queries 300+ models in parallel and then offers a per-model dashboard that isolates your performance on each major conversational AI: ChatGPT (OpenAI), Claude (Anthropic), Gemini (Google), Perplexity, Copilot (Microsoft), Grok (xAI), DeepSeek and specialized models.
For each model you get:
- The mention rate (% of prompts where your brand is cited);
- The average position in the response;
- The sentiment (positive/neutral/negative);
- The topical coverage;
- The model-specific Hero Grade (A+ to F).
This granular view is essential because metrics vary significantly from one AI to another: a brand can be Grade A on Perplexity (strong web anchoring) and Grade C on native ChatGPT (little presence in the training data). You can therefore prioritize corrective actions per platform and monitor evolution over time with scheduled audits.
Integrations, API & support
How do I get started as a partner or agency on AI Labs Audit?
Getting started takes less than 10 minutes and requires no credit card. Go to ailabsaudit.com/register and create your account with a professional email (or via Google OAuth). Your password must be at least 8 characters and include an uppercase letter, a lowercase letter, a digit and a special character. After email validation, you automatically receive 500 welcome credits as well as 100 monthly credits via the Discovery plan (free).
To launch your first audit:
- Create a client profile via /client/new (company, URL, sector).
- Select the AI models to query among 300+ (ChatGPT, Claude, Gemini, Perplexity, etc.).
- Estimate the cost, then click "Launch audit" — expect 24 to 60 credits depending on the number of prompts and models.
The audit runs asynchronously in 3 to 10 minutes. As soon as it's complete, you can generate a white-label PDF report and share the results with your client.
How does the AI Labs Audit MCP server work and which tools does it expose?
The AI Labs Audit MCP (Model Context Protocol) server exposes more than 215 tools that your AIs (Claude Desktop, Cursor, N8N, etc.) can call directly to query your account, launch audits, retrieve reports or drive tracking. The number of tools depends on your role:
- Client: 45 tools (audits, reports, tracking, questionnaires).
- Partner / Agency: 100+ tools (portfolio management, credits, multi-client reports).
- Admin: 220 tools (full management).
To integrate it:
- Generate an API key from /admin/api-keys (format aila_*).
- In Claude Desktop, Cursor or N8N, configure the HTTP Streamable transport to https://ailabsaudit.com/mcp.
- Authenticate with the header X-Api-Key: aila_... or an OAuth 2.1 Bearer token.
More details in our dedicated article on the MCP protocol.
Does AI Labs Audit offer a public REST API?
Yes. The public REST API v1 is available at the root https://ailabsaudit.com/api/v1/ and covers 16 endpoint groups: /clients, /audits/{id}, /reports/{id}, /action-plans/{id}, /analytics/dashboard, /tracking/stats/{client_id}, /hallucinated-urls, /agency, /blog, /glossary, /questionnaires/compare, /geo-checklist, /scheduled-audits, /showcase, etc.
Authentication uses the Authorization: Bearer <key> header or X-Api-Key: aila_.... The default quota is 60 requests per minute per key (unlimited for admins, 10 req/min for bot signatures). Every response returns the X-RateLimit-Remaining and X-RateLimit-Reset headers to pace your calls.
Minimal example:
curl -H "Authorization: Bearer aila_xxx" \
  https://ailabsaudit.com/api/v1/audits/AUD123

Find a step-by-step guide in our "AI visibility audit" tutorial.
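A simple way to stay under the 60 req/min quota is to derive a delay from the two rate-limit headers. The sketch below assumes X-RateLimit-Reset is a Unix timestamp (an assumption worth verifying against the API; some APIs return seconds-until-reset instead):

```python
def pacing_delay(remaining: int, reset_epoch: int, now_epoch: int) -> float:
    """Seconds to wait before the next call, spreading the remaining
    quota (X-RateLimit-Remaining) evenly until the window resets
    (X-RateLimit-Reset, assumed to be a Unix timestamp)."""
    window = max(reset_epoch - now_epoch, 0)
    if remaining <= 0:
        return float(window)  # quota exhausted: wait for the reset
    return window / remaining
```

With 30 requests left and 60 seconds until reset, this yields one call every 2 seconds instead of bursting into a 429.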
How do I install the AI tracker on my site to detect GPTBot, ClaudeBot and PerplexityBot?
The AI Labs Audit tracker is an open-source Go log agent that reads your Nginx logs and detects in real time the visits of AI bots (GPTBot, ClaudeBot, PerplexityBot, Google-Extended, etc.) as well as referrals from ChatGPT, Claude, Perplexity or Gemini.
Install in a single command, from your web server:
curl -sL https://raw.githubusercontent.com/sarsator/plugin_ailabsaudit_tracke/main/collectors/log-agent/install.sh \
  | sudo bash -s -- \
  --api-key <TRK> \
  --secret <HMAC> \
  --client-id <CID> \
  --api-url https://ailabsaudit.com/api/v1

Retrieve your three identifiers from the client dashboard, "Tracking" tab. The agent signs every event with HMAC-SHA256, honors a 60 req/minute rate limit and sends up to 500 events per batch. You then review your statistics via GET /tracking/stats or in the interface.
See our complete AI tracking guide for more examples.
Are the PDF reports fully white-label customizable?
Yes, every premium PDF report is fully white-label. Each document is 15 to 25 pages long and includes scores per model, Hero Grades, competitive analysis, prioritized action plan and GEO checklist (SSR, crawlability, llms.txt).
To activate your brand:
- Go to Settings > Brand in your partner or agency account.
- Upload your logo (PNG or JPG, 2 MB max).
- Set your primary and secondary colors in hexadecimal format.
- Fill in your header, footer and contact email (custom provider).
During generation ("Generate PDF Report" button on the dashboard, 60 credits), the report is produced without any AI Labs Audit mention and can be downloaded via the API: GET /api/v1/reports/<report_id>/download.
Examples and detailed pricing in our article on premium reports.
How do I share an audit with a client without giving them admin access?
AI Labs Audit provides a read-only client portal that lets your client view their audits, reports and action plans without being able to modify your account or see the other clients in your portfolio.
Procedure:
- From the dashboard of the relevant client, click "Client portal".
- Generate a unique access_token and, for extra security, set an optional password (hashed server-side).
- Share the generated URL — in the form https://ailabsaudit.com/portal/<access_token> — and the password through a separate channel.
The portal is automatically customized with your agency's (or your provider's) colors and logo. It exposes audit summaries, scores, trends, sector recommendations and the GEO checklist. A rate limit of 5 attempts per 15 minutes protects against abusive access, and you can revoke the token at any time.
Details and best practices: Read-only client portal — share your GEO results.
Where can I get help if I'm stuck on AI Labs Audit?
Several channels are available, depending on the type of question:
- Direct technical support: write to contact@ailabsaudit.com. Our Europe-based team replies within 24 to 48 business hours.
- Step-by-step tutorials: see ailabsaudit.com/tutoriel/fr for detailed guides on audits, tracking, reports and MCP configuration.
- Videos: demos and annotated screenshots on ailabsaudit.com/videos.
- Blog: case studies, product news and GEO/AEO best practices on ailabsaudit.com/blog/fr.
- Glossary: definitions of AI/GEO/AEO terms on ailabsaudit.com/glossary/fr.
- Contact page: form and information on ailabsaudit.com/contact.
For MCP or API integrations, include your role (client, partner, agency) and your aila_* key (without the secret) to speed up the diagnosis.
AQA-standard FAQ
Every answer is dated, sourced and versioned so that ChatGPT, Claude, Gemini and Perplexity can cite it reliably.
Check AQA conformance