AI Labs Audit

Document generated on

AIs see your brand well, but your site's technical setup is holding back your progress.

+19 pts vs previous audit

Priority action: +1 pt in 30 min — Block training bots in robots.txt

AI Visibility
B+
84/100 +27
Technical GEO
C+
59/100
AEO Readiness
A
87/100
Key Metrics
6 improved 0 declined 0 stable
Visibility Score
+27.6
84%
84.4%
Mention Rate
+65.0
96%
96.7%
AI Share of Voice
+2.7
77%
77.7%
Average Position
+0.1
66%
2.0
Sentiment
+4.6
69%
6.9/10
Topic Coverage
+3.3
96%
96.7%
Sector Radar
Your AI footprint compared to your sector
AI Labs Audit
Previous audit
Evolution Over Time
Technical Metrics
SSR / AI Crawlability
87
SSR: 90%
Robots.txt: 60%
AI access: 100%
llms.txt: 100%
Framework: Next.js
Entity Health Check
33
Wikidata: 0
Properties: 0
Schema.org: 100
sameAs: 33
Mention quality

No data — run a new audit

Citation Readiness
43
Sources: 50
Stats: 0
Experts: 0
Factual dens.: 48
Paragraphs: 29
H2 Questions: 25
Direct answers: 80
Source Authority
49%
Owned: 232 Third-party: 237
ailabsaudit.fr (21) averi.org (10) arxiv.org (10)
Brand Safety
77%
45 factual error(s)
[claude-haiku-4.5] brand not recognized
[gemini-2.0-flash-001] brand not recognized
Native vs Web Score
80.8% Score
Score without Internet
What AIs know about you by default
VS
Web +0.4
81.2% Score
Score with Internet
What AIs find when searching
Excellent! AIs know you very well, with or without web search. Your brand has a strong footprint in AI training data.
AI Model Breakdown
AI Model Score Sentiment Position Mention Rate
Claude Haiku 4.5 NATIVE
6.9/10
7.4
#1.9
100.0%
GPT-5:Online WEB
6.5/10
6.8
#2.2
96.7%
Gemini 2.0 Flash 001:Online WEB
6.8/10
6.8
#1.8
96.7%
Gemini 2.0 Flash NATIVE
6.8/10
6.7
#2.0
96.7%
Claude Haiku 4.5:Online WEB
6.9/10
6.8
#1.8
96.7%
GPT-5 NATIVE
6.3/10
6.8
#2.5
93.3%
GEO Section Analysis — Detailed scores by prompt category
S1 — Visibility & Awareness
4/4 categories tested
7.2/10
Brand knowledge
7.8/10
8.1
100%
+5.2
Topic coverage
6.3/10
5.9
83%
+3.8
Digital footprint
6.9/10
7.2
100%
+4.2
Multilingual visibility
7.9/10
8.2
100%
+5.9
S3 — Sentiment & Perception
3/3 categories tested
5.8/10
Overall sentiment
5.4/10
5.8
100%
+3.3
Reputation & Image
5.6/10
5.6
100%
+3.8
Crisis management
6.5/10
6.2
100%
+4.1
S4 — Competitive Positioning
5/5 categories tested
7.4/10
Direct comparison
6.9/10
8.1
100%
+4.2
Competitive head-to-head
7.6/10
7.8
100%
+4.7
Ranking & Top N
7.6/10
7.9
100%
+4.9
Alternatives & Substitutes
6.7/10
6.3
100%
+4.5
Differentiation
8.1/10
8.3
100%
+5.6
S5 — Search Intents
8/8 categories tested
7.0/10
Direct / Navigational
7.5/10
8.1
100%
+5.1
Informational
7.6/10
6.7
100%
+6.1
Comparative
6.6/10
7.7
100%
+4.1
Transactional
7.4/10
7.6
100%
+5.0
Targeted intent
7.4/10
8.0
100%
+4.8
Local & Geographic
6.5/10
6.7
100%
+4.1
Voice & Conversational
7.5/10
8.2
100%
+5.2
Problem solving
5.8/10
5.9
100%
+3.5
S6 — Customer Journey & Recommendation
5/5 categories tested
6.4/10
AI recommendation
7.5/10
7.6
100%
+5.3
Persona targeting
7.0/10
7.3
100%
+4.7
Customer micro-moments
3.0/10
8.5
17%
+2.3
Business matching
6.8/10
7.2
100%
+4.7
Advanced purchase intents
7.7/10
8.0
100%
+5.8
AI Visibility Files (AEO)
robots.txt
Present
26.4 KB
AI Bots:
  • GPTBot
  • OAI-SearchBot
  • ChatGPT-User
  • ClaudeBot
  • Claude-SearchBot
  • PerplexityBot
  • Google-Extended
  • Applebot-Extended
  • meta-externalagent
sitemap.xml
Present
857 URLs
Sitemap index (5 child sitemaps)
llms.txt
Present
7.9 KB
llms-full.txt
Present
71.9 KB
AI bot view scan — 10 pages analyzed with User-Agent GPTBot (home + 3 FAQ, 2 blog, 2 guide, 2 random)
AQA
Standard
3 Q&A enriched — found on 4/10 pages
3 dated 3 sourced 3 changelog
To reach FULL:
- author credentials (jobTitle, affiliation) on each question
Schema FAQ
Present
found on 10/10 pages
FAQPage: 28 questions
HowTo: 4 steps
Article BlogPosting BreadcrumbList FAQPage HowTo ItemList
OpenGraph
5/5
found on 10/10 pages — 100% full coverage (>=4/5 tags)
+5 bonus (dates, author)
RSS Feed
Absent
No RSS or Atom feed detected.
No RSS/Atom <link rel="alternate"> found on any of the 10 pages.
Complete GEO Checklist — Optimization for generative AI engines
Content retrieved via: Chrome view (user) | Reliability 100%
92% GEO
AI Accessibility
100%
Entity & Schema
100%
Content Citability
100%
Authority & Trust
67%
Freshness & Signals
67%
A
Grade
AI Crawlability
Robots.txt allows AI crawlers — all AI crawlers allowed
The robots.txt file controls which bots can crawl your site. If it blocks AI crawlers (GPTBot, ClaudeBot...), your content will never be indexed by generative AI engines.
AI Bots : GPTBot OAI-SearchBot ChatGPT-User ClaudeBot Claude-SearchBot PerplexityBot Google-Extended Applebot-Extended meta-externalagent
Actions to complete
  • Verify that robots.txt is accessible at /robots.txt (HTTP 200)
  • Verify no global block User-agent: * / Disallow: /
  • Verify GPTBot is NOT in a Disallow directive
  • Verify ClaudeBot is NOT in a Disallow directive
  • Verify Claude-SearchBot is allowed
  • Verify OAI-SearchBot is allowed
  • Verify PerplexityBot is allowed
  • Verify Google-Extended is allowed
  • Verify Applebot-Extended is not blocked
  • Ensure public content directories (/blog/, /services/, /about/) are accessible
How to verify
Run an AI Labs Audit before/after changes. Check server logs for AI bot requests within 2-4 weeks. Test manually with curl -A "GPTBot" https://yoursite.com/.
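As an illustration of the checks above, a robots.txt that explicitly allows the AI search and retrieval crawlers this audit looks for might read as follows (the domain and the /admin/ path are placeholders — adapt to your site):

```
# Explicitly allow AI search/retrieval crawlers
User-agent: OAI-SearchBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Default rule: open to everything except private areas
User-agent: *
Allow: /
Disallow: /admin/

Sitemap: https://yoursite.com/sitemap.xml
```

Per the Robots Exclusion Protocol, a bot uses the most specific User-agent group that matches it, so the named groups above override the `*` group for those crawlers.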
Sitemap.xml accessible — 857 URLs
The sitemap helps AI crawlers discover all your important pages. Without a valid sitemap, some pages may never be indexed by AI engines.
857 URLs — Index (5 sitemaps)
Actions to complete
  • Verify /sitemap.xml returns HTTP 200
  • Verify sitemap is referenced in robots.txt via Sitemap: https://yoursite.com/sitemap.xml
  • Verify the XML is valid (no parsing errors)
  • Verify <lastmod> tags are present on each URL
  • Verify <lastmod> dates match actual update dates
  • Verify there are no error URLs (404, 500) in the sitemap
How to verify
Check Google Search Console for sitemap indexation status. Monitor server logs for AI bot sitemap access.
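A minimal sketch of a sitemap entry with the `<lastmod>` tag the checklist asks for (URL and date are illustrative placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://yoursite.com/blog/geo-guide</loc>
    <!-- lastmod must reflect the real content update date, in W3C date format -->
    <lastmod>2025-01-15</lastmod>
  </url>
</urlset>
```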
llms.txt file — 7.9 KB
The llms.txt file is an emerging standard that helps AI quickly understand your site. It provides a structured summary of your business and key pages.
7.9 KB
Actions to complete
  • Create a /llms.txt file at the site root
  • Add an H1 title with the company name
  • Add a summary blockquote (2-3 sentences describing the business)
  • List key pages with URL and short description (home, services, about, pricing, FAQ, blog)
  • Use valid Markdown format
  • Test accessibility with curl https://yoursite.com/llms.txt
  • Optional: also create a /llms-full.txt file with full content
How to verify
Monitor server logs for llms.txt file access. Note: direct impact is not yet proven, it's an AI readiness signal.
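A sketch of the llms.txt structure described above — H1 with the company name, a summary blockquote, then key pages (all names and URLs below are placeholders):

```markdown
# Example Company

> Two to three sentences describing what the business does,
> who it serves, and what makes it distinctive.

## Key pages

- [Services](https://yoursite.com/services): What we offer
- [Pricing](https://yoursite.com/pricing): Plans and rates
- [FAQ](https://yoursite.com/faq): Answers to common questions
```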
HTTPS enabled
HTTPS is a trust prerequisite for all AI engines. A site without HTTPS is considered unreliable and will rarely be cited as a source.
Actions to complete
  • Verify the site is accessible via HTTPS
  • Verify the SSL certificate is valid and not expired
  • Verify HTTP → HTTPS redirect is in place (301 permanent, NOT 302)
  • Verify no mixed content (HTTP resources on HTTPS pages)
  • Verify all internal URLs use HTTPS
How to verify
Immediate test via browser or curl -I https://yoursite.com. Certificate must be valid.
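Since the report detects `Server: nginx`, the permanent 301 redirect mentioned above might be sketched like this in nginx configuration (server names are placeholders; your actual config layout may differ):

```nginx
# Permanent HTTP -> HTTPS redirect (301, not 302, to consolidate signals)
server {
    listen 80;
    server_name yoursite.com www.yoursite.com;
    return 301 https://$host$request_uri;
}
```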
No blocking WAF
An overly restrictive WAF (Web Application Firewall) can block AI crawlers like GPTBot or ClaudeBot, preventing AI engines from indexing your content.
AI bot accepted — HTTP 200
No known WAF/CDN detected
Server: nginx
Actions to complete
  • Test site access with GPTBot user-agent: curl -A "GPTBot/1.0" → should return 200
  • Test with ClaudeBot: curl -A "ClaudeBot/1.0" → should return 200
  • Test with PerplexityBot: curl -A "PerplexityBot" → should return 200
  • If Cloudflare: verify "Bot Fight Mode" is not in aggressive mode
  • If Cloudflare: verify AI Crawl Control has desired AI crawlers set to "Allow"
  • Check custom WAF rules: no rule should generically block user-agents containing "bot"
  • Verify no CAPTCHA challenge is returned to AI bots
  • Verify WAF rules and AI Crawl Control settings do not contradict each other
How to verify
Re-run curl tests after changes. Monitor Cloudflare analytics for AI bot requests: they should show as "Allowed".
Structured Data
Schema.org (JSON-LD) — Article, BlogPosting, BreadcrumbList, FAQPage, HowTo, ItemList, Organization, VideoObject, WebSite
Schema.org structured data helps AI understand your page content and context. It's a strong signal to be cited as a reliable source.
Types: Article BlogPosting BreadcrumbList FAQPage HowTo
Actions to complete
  • Add Organization JSON-LD script on homepage (name, url, logo, description, contactPoint, sameAs)
  • Add WebSite JSON-LD script on homepage
  • Add Article or BlogPosting JSON-LD script on each blog article
  • Add FAQPage JSON-LD script on pages with FAQs
  • Add BreadcrumbList JSON-LD script for breadcrumb navigation
  • Add Person JSON-LD script on author/team pages
  • For service/product pages: add Product or Service with Offer
  • Validate each schema with Google Rich Results Test
  • Verify JSON-LD is server-side rendered (SSR), not injected via client JavaScript
How to verify
Validate via Google tools. Run an AI Labs Audit and compare the Structured Data score.
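A minimal sketch of the Organization JSON-LD the first action describes, to be server-side rendered in the homepage `<head>` (all values are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Company",
  "url": "https://yoursite.com",
  "logo": "https://yoursite.com/logo.png",
  "description": "One-sentence factual description of the business.",
  "sameAs": [
    "https://www.linkedin.com/company/example",
    "https://twitter.com/example"
  ]
}
</script>
```

The `sameAs` array is what the Entity Health Check above scores, so each URL should point to a live, brand-owned profile.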
Open Graph tags — 5/5
Open Graph tags improve how AI and social networks understand your pages. They provide structured title, description, and image.
5/5 required tags
Actions to complete
  • Verify og:title is present on all pages (unique per page)
  • Verify og:description is present (complementary to title)
  • Verify og:image with a quality image (1200x630px recommended)
  • Verify og:image:alt with descriptive text
  • Verify og:url points to the canonical URL
  • Verify og:type is present (website for home, article for articles)
  • For articles: add article:published_time and article:modified_time
  • Also add Twitter Card tags (twitter:card, twitter:title, twitter:description, twitter:image)
  • Test rendering with Facebook Sharing Debugger and LinkedIn Post Inspector
How to verify
Test sharing a URL on LinkedIn/Facebook/Twitter and verify the preview is correct and complete.
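A sketch of a complete Open Graph tag set for an article page, matching the checklist above (URLs, dates, and image path are placeholders):

```html
<meta property="og:title" content="Unique page title">
<meta property="og:description" content="Summary that complements the title">
<meta property="og:image" content="https://yoursite.com/og/cover-1200x630.png">
<meta property="og:image:alt" content="Description of the cover image">
<meta property="og:url" content="https://yoursite.com/blog/geo-guide">
<meta property="og:type" content="article">
<meta property="article:published_time" content="2025-01-10T09:00:00Z">
<meta property="article:modified_time" content="2025-01-15T14:30:00Z">
<!-- Twitter Card tags complement Open Graph -->
<meta name="twitter:card" content="summary_large_image">
```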
Canonical tag
The canonical tag prevents duplicate content and helps AI crawlers index the correct version of each page.
Actions to complete
  • Verify each page has a <link rel="canonical" href="..."> tag
  • Verify canonical is self-referential (points to the same page)
  • Verify consistency between canonical, og:url and actual URL
  • Verify canonicals don't point to 404s or redirects
  • Verify pages with UTM parameters have a canonical to the clean version
  • Verify protocol consistency (all HTTPS)
How to verify
Crawl the site with Screaming Frog or similar tool and check the "Canonical" column for inconsistencies.
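The self-referential canonical and its consistency with og:url can be sketched as follows (URL is a placeholder):

```html
<!-- Canonical points to the clean HTTPS URL of this very page -->
<link rel="canonical" href="https://yoursite.com/blog/geo-guide">
<!-- og:url should match the canonical exactly -->
<meta property="og:url" content="https://yoursite.com/blog/geo-guide">
```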
dateModified in schema
AI engines prioritize recent content. The dateModified property in JSON-LD schemas signals that your content is up to date.
Actions to complete
  • Add "dateModified" to all Article/BlogPosting JSON-LD scripts
  • Add "datePublished" if missing
  • Verify dateModified is more recent than datePublished
  • Display the update date visibly on the page ("Updated on...")
  • Update the corresponding <lastmod> in sitemap.xml
  • Set up a regular update process (at least quarterly)
How to verify
Validate JSON-LD with Rich Results Test. Run AI Labs Audit and verify the check turns green.
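A sketch of the date properties in a BlogPosting schema, with dateModified later than datePublished as the checklist requires (dates are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "Example article",
  "datePublished": "2024-11-02",
  "dateModified": "2025-01-15"
}
</script>
```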
Content Quality
Optimized title tag — AI Labs Audit — Plateforme GEO...
The title tag is the first signal AI reads to understand a page's topic. A unique, descriptive title of proper length maximizes your chances of being cited.
Actions to complete
  • Each page has a unique <title>
  • Length between 30 and 60 characters
  • Contains the main keyword or page topic
  • No generic title ("Home", "Page", domain name alone)
  • Consistent with the page H1
  • No duplication with other site pages
  • Includes brand name (at end of title, after separator)
How to verify
Crawl the site and extract all titles. Verify uniqueness and length. Track CTR evolution in Google Search Console.
Meta description present
The meta description provides a structured summary to AI. An informative, unique description per page increases your chances of being selected as a source.
Actions to complete
  • Each page has a meta description
  • Length between 120 and 160 characters
  • Informative and descriptive content (not hollow marketing)
  • Unique per page
  • Contains a direct answer or actionable summary
  • Includes an implicit call to action when relevant
How to verify
Crawl the site to check coverage. Track CTR in Google Search Console.
FAQ section / FAQPage
Structured Q&A is the preferred format for AI to extract information. A well-structured FAQ with FAQPage schema maximizes your chances of being cited.
Actions to complete
  • Identify the 5-10 most frequent customer/prospect questions
  • Write direct, factual answers in 2-4 sentences each
  • Integrate the FAQ section on key pages (services, pricing, home)
  • Use <h2> or <h3> tags for each question
  • Implement FAQPage schema in JSON-LD with question/answer pairs
  • Validate schema via Rich Results Test
  • Write questions in natural conversational language
How to verify
Validate FAQPage schema. Test questions in ChatGPT, Claude and Perplexity to see if the site appears in responses.
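A minimal FAQPage JSON-LD sketch with one question/answer pair, as described above (question and answer text are placeholders; add one Question object per FAQ entry):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is GEO?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "A direct, factual answer in 2-4 sentences, phrased so an LLM can quote it standalone."
      }
    }
  ]
}
</script>
```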
H1→H2→H3 heading hierarchy
A logical heading hierarchy helps AI understand your content structure. Each H2 section should be self-contained and extractable by an LLM.
H1:1 H2:3 H3:10 H4:4
Actions to complete
  • Each page has a single H1 describing the main topic
  • H2s structure the main sections
  • H3s detail the subsections
  • No level skipping (no H1 → H4 directly)
  • Headings are descriptive and self-explanatory
  • No headings used for styling purposes
  • Each H2 section is relatively self-contained (extractable alone by an LLM)
How to verify
Use an SEO audit tool (Screaming Frog, Semrush) to validate heading structure.
Sufficient content (2000+ words) — 4240 words
Pages with fewer than 2000 words are rarely cited by AI. Substantial content with data and concrete examples increases your credibility.
4240 words
Actions to complete
  • Check word count of main content on each page (excluding nav, footer)
  • Pages under 2000 words should be enriched or consolidated
  • No "under construction" or placeholder pages
  • Service pages should have at least 500 words of substantial content
  • Include data, concrete examples, and use cases
  • Avoid generic or duplicated content from other sources
How to verify
Count words on key pages. Run audit again to verify the check turns green.
Blog / News section
A regular blog with expert content increases your authority in the eyes of AI. Long, specialized articles are most likely to be cited.
Actions to complete
  • Create a Blog or News section on the site
  • Publish a first in-depth article on your domain of expertise
  • Each article must have: named author, visible publication date, Article/BlogPosting schema, 800+ words, images with alt text
  • Set a publication rhythm (minimum 1 article/month)
  • Cover customer questions (guide, comparison, how-to formats)
  • Set up internal links between articles and service pages
  • Ensure articles are indexed in the sitemap
How to verify
Track organic traffic to the blog section. Regularly test covered topics in ChatGPT/Claude/Perplexity to see if the site is cited.
Authority & Trust
About page
A detailed About page strengthens your credibility (E-E-A-T). AI uses it to understand who you are and decide whether to cite you as a reliable source.
Actions to complete
  • Page is accessible from main navigation or footer
  • Contains company history (founding date, founders)
  • Contains verifiable legal information (registration number, address, legal form)
  • Describes mission, values, positioning
  • Mentions certifications, labels, partnerships
  • Contains at least 400 words of substantial content
  • Organization schema implemented with the same information
  • Team or office photos (proof of real existence)
How to verify
Ask ChatGPT, Claude and Perplexity "What is [company name]?" and verify if returned information matches the About page.
Author bios / Expertise
AI values content written by identifiable experts with qualifications. Detailed bios strengthen E-E-A-T signals.
Actions to complete
  • Each content author has a biography on the site
  • Bio contains: full name, position, expertise area, career summary
  • Mentions certifications, degrees, or years of experience
  • Professional photo
  • Link to LinkedIn profile
  • Person schema implemented (name, jobTitle, description, sameAs, worksFor)
  • Blog articles display author name and bio
  • No content published under "Admin" or "The team" without attribution
How to verify
Verify Person schema is valid. Test "[expert name] + [field]" in AI to see if expertise is recognized.
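A sketch of the Person schema with the properties listed above (name, jobTitle, description, sameAs, worksFor — all values are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Jane Doe",
  "jobTitle": "Head of SEO",
  "description": "10 years of experience in technical SEO and GEO.",
  "worksFor": { "@type": "Organization", "name": "Example Company" },
  "sameAs": ["https://www.linkedin.com/in/janedoe"]
}
</script>
```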
Linked social media — twitter, linkedin
Company registration number visible
Contact email and phone number visible
Director or responsible person identified
Freshness & Signals
No noai meta tag
The <meta name="robots" content="noai"> tag blocks AI from using your content. If present, your site is invisible to generative AI engines.
Actions to complete
  • Check for absence of <meta name="robots" content="noai"> in <head>
  • Check for absence of X-Robots-Tag: noai HTTP header
  • If present, remove it to allow AI indexation
How to verify
The automated audit detects the noai meta tag and X-Robots-Tag header.
RSS or Atom feed
An RSS or Atom feed signals to AI crawlers that your site publishes regularly. AI engines use feeds to discover fresh content. Content less than 30 days old receives 3.2x more citations.
Actions to complete
  • Add a <link rel="alternate" type="application/rss+xml"> in <head>
  • Ensure the feed is accessible (HTTP 200, valid XML)
  • Include at least the 10 most recent modified articles/pages
  • Verify each item has a <pubDate> or <updated> date
How to verify
The AEO section of the dashboard automatically detects RSS or Atom feeds.
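A sketch of a minimal RSS 2.0 feed with the `<pubDate>` per item that the actions above require (titles, URLs, and dates are placeholders); remember to also advertise it via `<link rel="alternate" type="application/rss+xml">` in the page `<head>`:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Company blog</title>
    <link>https://yoursite.com/blog</link>
    <description>Latest articles and updates</description>
    <item>
      <title>Example article</title>
      <link>https://yoursite.com/blog/geo-guide</link>
      <!-- pubDate uses RFC 822 date format -->
      <pubDate>Wed, 15 Jan 2025 14:30:00 GMT</pubDate>
    </item>
  </channel>
</rss>
```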
AQA markup (AI Question Answer)
AQA is an open standard that enriches your FAQs with AI-ready metadata: per-question dates, per-answer sources, revision history. AI systems prioritize content with verifiable provenance.
Actions to complete
  • Implement AQA Basic markup on your FAQ pages (dates + sources)
  • Add dateCreated and dateModified on each question
  • Add a citation (source URL) on each answer
  • Declare conformanceLevel in the Article block
  • Validate with the AQA validator: github.com/sarsator/aqa-specification
How to verify
The AEO section of the dashboard automatically detects AQA presence and level (Basic/Standard/Full).
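The AQA specification itself is not reproduced in this report, so the fragment below is only a sketch assembled from the fields named above (per-question dateCreated/dateModified, a citation per answer, conformanceLevel); verify exact property names and placement against the AQA validator before using it:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "conformanceLevel": "Basic",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is GEO?",
    "dateCreated": "2024-11-02",
    "dateModified": "2025-01-15",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "A direct, sourced answer.",
      "citation": "https://example.org/source"
    }
  }]
}
</script>
```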
Recently updated content
Content updated within the last 90 days is prioritized by AI. Content less than 30 days old receives 3.2x more citations. After 13 weeks without updates, content loses citation eligibility.
Actions to complete
  • Verify dateModified is present in Schema.org JSON-LD
  • Update key pages at least every 90 days
  • Add current statistics (current year) in content
  • Maintain a blog with regular publications
How to verify
The Citation Readiness v2 module automatically evaluates content freshness.
Sitemap & llms.txt Diagnostic
Sitemap.xml Index (4 sitemaps) via robots.txt
URLs
634
With lastmod
634
Avg. freshness
68 days
Sub-sitemaps:
sitemap-pages.xml 105 URLs
sitemap-blog.xml 232 URLs
sitemap-glossaire.xml 510 URLs
sitemap-vitrine.xml 2 URLs
sitemap-videos.xml 8 URLs
4 duplicate URLs across sub-sitemaps
Missing strategic pages: AI Labs Audit, Auditez la visibilité de vos clients sur ChatGPT, Gemini & Perplexity et encore plus
634 URLs indexed
3 strategic pages missing
Average age: 67.5 days
llms.txt + llms-full.txt
Referenced pages
41
Tokens
960
Title
AI Labs Audit
Extra pages (full)
+691 pages
Sitemap not in llms
603
41 pages referenced
960 tokens
llms-full.txt: 691 additional pages
603 sitemap pages missing from llms.txt
Recommended Action Plan — 6 actions
Action plan progress
0 done — 0/6 (0%)
1 high priority 2 medium priority
Block training bots in robots.txt

You currently allow training bots (GPTBot training, CCBot, etc.) to use your content for free. Block them while keeping search and retrieval crawlers allowed.

Medium Priority GEO Optimization ⚡ Quick fix High Impact
+1.0 pts · 30min
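A hedged sketch of the robots.txt directives this action implies — disallowing training-oriented crawlers while keeping search and retrieval bots open. Verify each bot's role against its vendor documentation before deploying, since blocking the wrong user-agent also removes you from AI search results:

```
# Block crawlers used for model training
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Keep search and retrieval crawlers allowed
User-agent: OAI-SearchBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: PerplexityBot
Allow: /
```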
Mention the publication director

The publication director is not mentioned. This is a legal obligation (under French law) and an E-E-A-T signal.

Low Priority Presence ⚡ Quick fix Low Impact
+0.4 pts · 15min
Verify and fix sameAs links

Verify all sameAs links in Schema.org point to valid and active pages.

Medium Priority GEO Optimization ⚡ Quick fix Medium Impact
+0.6 pts · 30min
Create an RSS feed

AIs such as Perplexity use RSS feeds to detect fresh content quickly.

Low Priority GEO Optimization ⚡ Quick fix Medium Impact
+0.6 pts · 30min
Display the company registration number

The SIREN/SIRET number is not visible. AIs associate legal transparency with reliability.

Low Priority Presence ⚡ Quick fix Low Impact
+0.4 pts · 30min
Create or enrich the Wikipedia/Wikidata page

If the brand is notable enough, create a Wikipedia page with verifiable sources. Otherwise, enrich the Wikidata entry.

High Priority Native Knowledge 🏗️ Project High Impact
+1.4 pts · 5.0h