Before we run the audit, we need to make sure we're asking the right questions about the right competitors to the right buyers. This document presents what we've learned about Resonate Labs' market — your job is to tell us what we got right, what we got wrong, and what we missed.
Before we measure citation visibility in the GEO agency space, these three signals tell us whether AI crawlers can access and trust Resonate Labs' content. Two of three are flagged.
AI search is reshaping how B2B marketing leaders discover and shortlist Generative Engine Optimization agencies. Buyers are forming vendor shortlists inside ChatGPT, Perplexity, Gemini, and Claude before engaging with sales — and for a company whose entire value proposition is AI visibility, your own discoverability across these platforms is both a business imperative and a proof point. Companies that establish citation visibility now build a compounding advantage as AI platforms learn to trust cited domains.
This document presents two bodies of work for your validation, plus one set of findings that needs action rather than review. The competitive landscape identifies which vendors your buyers compare against in AI platform queries — tier assignments determine which head-to-head matchups the audit tests. The buyer personas map the roles who evaluate and sign GEO agency contracts, each generating distinct search intent across the buying journey. The Layer 1 technical analysis reveals whether AI platforms can actually access your content — and it surfaced a critical blocker: the Cloudflare managed robots.txt blocks GPTBot and ClaudeBot from indexing any page on the site.
Two types of decisions need to happen before the audit runs. First, at the validation call: confirm or correct the knowledge graph inputs — particularly the inferred VP of Demand Generation persona and the two medium-confidence primary competitor tier assignments, since these directly shape which queries test head-to-head differentiation versus category awareness. Second, for engineering immediately: unblock GPTBot and ClaudeBot in the Cloudflare dashboard and verify schema markup status on all three pages. These technical fixes don't require the call and will improve the baseline before we measure it.
Three things to know before you read further.
What this is This document presents the research foundation for Resonate Labs' GEO visibility audit. Every section feeds a downstream step: personas drive the buyer query set, competitors determine head-to-head matchups, features shape capability queries, and pain points provide the buyer language queries will be phrased in. The technical findings tell us whether AI platforms can reach your content at all.
What we need from you Look for the purple boxes throughout this document. Each one asks a specific question about something we need you to confirm or correct. Your answers directly change what the audit measures. A wrong persona means wasted queries. A wrong competitor tier means testing matchups that don't exist. Come to the validation call with answers to the purple questions.
Confidence badges Every data point includes a confidence badge: High means sourced from public data or direct evidence. Medium means inferred from category patterns or partial evidence — these are the ones most likely to need correction. Low means a best guess based on limited signal.
The baseline identity that anchors every query in the audit.
Validate The name variant "Resonate" may cause entity collisions with Resonate (the consumer data platform) in AI platform responses — have you seen this confusion with prospects or in AI answers? Separately, does the Cited book generate its own buyer search intent (e.g., "GEO optimization book," "AI visibility guide"), or is it purely a credibility asset for the audit service? If it drives its own discovery queries, we add a dedicated query cluster for the book.
5 personas: 2 decision-makers, 2 evaluators, 1 influencer. Each generates distinct search queries across the GEO agency buying journey.
Critical review area Personas drive the buyer query set — every role maps to a cluster of queries reflecting how that person searches during evaluation. A missing persona means an entire search intent pattern goes untested. A wrong persona means wasted queries that don't match real buying behavior.
Data sourcing Role, department, seniority, influence level, and veto power are sourced from the knowledge graph (provenance noted per card). Buying jobs and query focus areas are synthesized from the persona's role context and the GEO agency buying cycle — these are our best inference of how each role searches, not direct observations.
→ Does the VP of Marketing hold final signature authority on GEO agency contracts at your typical buyer's company, or does approval route through a CMO or CEO? If a C-suite executive signs off, we add executive-level validation queries targeting their approval criteria.
→ Does the Director of Content Strategy evaluate GEO agencies independently, or execute on a decision already made by the VP Marketing? If Marcus is executing rather than evaluating, we shift his query cluster from evaluation-stage to implementation-stage queries.
→ Is demand gen a separate buying center from content marketing for GEO services, or does Aisha's role overlap with Sarah Chen's evaluation? If demand gen isn't a distinct evaluator in actual deals, we merge these query clusters and remove ~20% of demand-stage queries.
→ Does the Head of Digital Marketing typically discover and champion GEO agencies bottom-up, or is this initiative top-down from the VP Marketing? If Jordan isn't the internal champion, we deprioritize awareness-stage technical queries and weight evaluation-stage queries instead.
→ Does Finance exercise veto power specifically on GEO agency contracts, or only above a budget threshold? If Finance isn't involved at typical GEO engagement sizes, we remove ROI-justification queries from the buyer set and reallocate to evaluation-stage queries.
Missing personas? Three roles that may be relevant for GEO agency buying committees: (1) CEO / Founder — at startup-stage companies, the CEO often drives the GEO initiative personally after experiencing invisible pipeline loss. (2) CMO — at larger companies, may be the actual decision-maker above VP Marketing. (3) Head of SEO / Growth — the technical champion who evaluates methodology depth and has the most opinions about whether "GEO" is distinct from SEO. Who else shows up in your deals?
5 primary + 4 secondary competitors identified. Tier assignments determine which head-to-head matchups the audit tests across AI platforms.
Why tiers matter Primary competitors generate head-to-head comparison queries — "Resonate Labs vs Omniscient Digital," "best GEO agency for B2B SaaS" — while secondary competitors appear in broader category queries. Getting these tiers right determines how the roughly 30–40 competitive queries split between direct head-to-head differentiation and category awareness. Two primary competitors — Genevate and GenOptima — are rated medium confidence based on category listings rather than deal data. If they rarely appear in actual evaluations, moving them to secondary would shift approximately 12–16 queries out of the head-to-head set.
Validate Genevate and GenOptima are both rated medium confidence as primary competitors — do either appear in actual competitive evaluations, or should they move to secondary? Are there vendors we're missing entirely — particularly in-house teams, freelance GEO consultants, or traditional SEO agencies your prospects evaluate against specialized GEO agencies? Is any listed competitor irrelevant to your actual deal landscape?
10 buyer-level capabilities mapped. Strength ratings determine which capability queries test competitive advantage vs. defense.
Run a comprehensive audit showing exactly where our brand appears — and doesn't appear — across ChatGPT, Perplexity, Gemini, and Claude for the queries our buyers actually ask
Show me which competitors are getting recommended by AI instead of us, how often they win, and what content is earning them those citations
Don't just hand me a report — build and deploy the content that will actually get us cited by AI platforms
We need visibility across all the AI platforms buyers use — ChatGPT, Perplexity, Gemini, Claude — not just one
Map the actual questions our different buyer personas are asking AI platforms across the entire buying journey — from problem identification to vendor evaluation
Audit whether our website is technically set up for AI crawlers to read, index, and cite our content correctly
We need a partner who actually understands GEO deeply, has published research, and can educate our team — not just an agency that added "AI" to their SEO pitch deck
Give me a dashboard where I can log in anytime and see our AI visibility metrics, track progress, and share results with my leadership team
Prove that the AI visibility work is actually driving pipeline and revenue — tie citations back to traffic, leads, and deals
Track how our AI visibility changes over time so we can see if the content we're deploying is actually moving the needle
Validate Self-Service Reporting is rated "absent" and ROI Attribution "weak" — are these conscious positioning choices (consultancy model, not a platform), or gaps you plan to address? If gaps, competitors with dashboards dominate those query categories by default. Are any "strong" ratings overstated — particularly Content Strategy & Managed Execution relative to agencies with deeper execution track records and larger teams? Are there buyer capabilities we're missing entirely?
9 pain points: 6 high, 3 medium severity. Pain point buyer language is how queries will be phrased in the audit.
Validate Budget Justification (high severity) and Agency Credibility Gap (medium severity) are both sourced from inference rather than direct buyer evidence — do these resonate with actual prospect conversations? Is "agency credibility gap" a pain point buyers articulate explicitly, or more of a market-level observation? Missing pain point candidates: (1) internal bandwidth to implement GEO recommendations — if buyers lack content teams to execute, it changes which service tier they evaluate; (2) executive skepticism that AI search will replace traditional channels — adds a "convince my boss" query cluster; (3) fear of AI platform algorithm changes invalidating the investment — risk-oriented buyers search differently. What buyer frustrations are we missing?
4 findings from the technical analysis of resonatelabs.co. 1 critical, 1 high, 1 medium, 1 low severity.
Engineering: start immediately The robots.txt is blocking GPTBot and ClaudeBot site-wide via Cloudflare's managed bot settings. This is a critical blocker — OpenAI and Anthropic cannot index any content on resonatelabs.co. Engineering should review Cloudflare's Security → Bots → Managed robots.txt configuration and allow GPTBot and ClaudeBot immediately. Schema markup status is also unknown and should be verified in parallel.
What we found: The robots.txt file (managed by Cloudflare) blocks four key AI crawlers: GPTBot (OpenAI/ChatGPT training), ClaudeBot (Anthropic/Claude), Google-Extended (Google AI training), and Bytespider (ByteDance AI). The disallow directives prevent all four from indexing any page on resonatelabs.co. ChatGPT-User (ChatGPT browse mode), PerplexityBot, and Googlebot are not mentioned and so default to allowed. The robots.txt also includes a Content-Signal directive setting ai-train=no for all user agents.
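For reference, a minimal reconstruction of the blocking pattern described above — directive grouping and exact Content-Signal placement are illustrative of Cloudflare's managed format, not a verbatim copy of the live file:

```
# Illustrative reconstruction of the current state -- not the live robots.txt
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: Bytespider
Disallow: /

# Training opt-out signal applied to all remaining user agents
User-agent: *
Content-Signal: ai-train=no
Allow: /
```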
Why it matters: GPTBot and ClaudeBot crawling is a prerequisite for content to enter the training data and retrieval pipelines of ChatGPT and Claude respectively. For a company whose entire value proposition is AI visibility, blocking the crawlers of the platforms you help clients get cited on creates a credibility gap and limits the company's own discoverability.
Recommended fix: Review the Cloudflare Managed robots.txt settings and allow GPTBot and ClaudeBot to crawl the site. If training-data usage is the concern, Google-Extended and Bytespider can stay blocked — the Content-Signal ai-train=no directive already signals a training opt-out without blocking crawling outright, so the GPTBot and ClaudeBot blocks cost visibility without adding protection. Path: Cloudflare dashboard → Security → Bots → Configure Managed robots.txt.
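A sketch of the target state, assuming the recommendation above is adopted as written — GPTBot and ClaudeBot drop out of the disallow groups, the training-focused crawlers stay blocked, and the Content-Signal opt-out is retained:

```
# Target state sketch -- adjust to the actual policy decision
User-agent: Google-Extended
Disallow: /

User-agent: Bytespider
Disallow: /

# GPTBot and ClaudeBot no longer disallowed; they inherit the default group
User-agent: *
Content-Signal: ai-train=no
Allow: /
```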
What we found: The entire site consists of 3 pages: the homepage (resonatelabs.co), a brief request page (/brief/), and a visibility review page (/visibility/). The sitemap.xml contains only these 3 URLs. No blog, case studies, feature pages, comparison pages, documentation, resource pages, or about page were found. The homepage navigation uses only anchor links to sections within the same page.
Why it matters: AI platforms construct responses by synthesizing content from multiple authoritative pages. A 3-page site provides minimal surface area for AI crawlers to index and cite. Each missing page type represents a class of buyer queries where Resonate Labs cannot be cited. AI platforms prefer discrete, focused pages over one comprehensive page — they extract passages from focused pages more reliably.
Recommended fix: This is a structural observation. The content gap analysis in the full audit will identify specific page types and topics that would expand the site's citable surface area, prioritized by which gaps actually cost citations in query response data.
What we found: Our analysis method (rendered markdown) cannot detect JSON-LD schema markup, meta descriptions, or OG tags. These signals are embedded in HTML head sections and are not visible in rendered output. We were unable to determine whether the site implements Organization, WebPage, Product, or other schema types.
Why it matters: Schema markup helps AI platforms understand entity relationships, product offerings, and content structure. Organization schema reinforces brand identity. FAQ schema and HowTo schema make content more extractable for AI responses. Without verification, we cannot confirm whether this signal is present or absent.
Recommended fix: Verify schema markup using Google's Rich Results Test or Schema.org Validator. At minimum, implement: (1) Organization schema on the homepage, (2) WebPage schema on all pages, (3) FAQ schema on any future FAQ content. Also verify meta descriptions and OG tags are present on all 3 pages.
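A minimal sketch of item (1), assuming standard JSON-LD in the page head — the description and sameAs values are placeholders to confirm, not pulled from the live site:

```html
<!-- Minimal Organization schema sketch; field values are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Resonate Labs",
  "url": "https://resonatelabs.co",
  "description": "GEO agency helping B2B brands get cited by AI platforms.",
  "sameAs": [
    "https://www.linkedin.com/company/placeholder-to-confirm"
  ]
}
</script>
```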
What we found: All 3 pages returned substantive text content via web_fetch, suggesting server-side rendering is functional. However, we cannot definitively confirm whether any page content relies on client-side JavaScript rendering that might be invisible to AI crawlers with limited JavaScript execution. The site appears to be a static site, but this should be confirmed.
Why it matters: AI crawlers vary in JavaScript execution capability. GPTBot has limited JS support, while Googlebot renders fully. If critical content is loaded via JavaScript, it may be invisible to some AI platforms even after unblocking their crawlers.
Recommended fix: Verify by loading the site with JavaScript disabled in browser DevTools (Settings → Debugger → Disable JavaScript). If all content renders without JS, no action needed. If content disappears, implement server-side rendering for the affected sections.
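The same check can be scripted — fetch the raw HTML with no JavaScript execution and look for copy that should appear on the page. The MARKER phrase below is a placeholder; substitute a real homepage headline:

```python
# Fetch raw HTML (no JS execution) and check for expected page copy.
# MARKER is a placeholder phrase -- replace with a real homepage headline.
import urllib.request

MARKER = "AI visibility"  # placeholder

req = urllib.request.Request(
    "https://resonatelabs.co",
    headers={"User-Agent": "Mozilla/5.0 (render-check)"},
)
html = urllib.request.urlopen(req).read().decode("utf-8", errors="replace")

if MARKER in html:
    print("Copy present in raw HTML -- content is server-rendered.")
else:
    print("Copy missing from raw HTML -- likely injected client-side by JS.")
```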
Partial sample This analysis covers only 3 pages — the entire discoverable site. Content depth (0.40) and passage extractability (0.52) scores reflect a structure that packs substantial content into a single homepage rather than distributing it across dedicated pages. Schema coverage could not be assessed from rendered output and requires manual verification.
Why now
• AI search adoption is accelerating — buyer discovery patterns are shifting quarter over quarter
• Early citations compound: domains that AI platforms learn to trust now get cited more frequently as training data accumulates
• Competitors who establish GEO visibility first create a structural disadvantage for late movers
• Generative Engine Optimization is still early-innings — acting now means competing against inaction, not against entrenched strategies
The full audit will measure Resonate Labs' citation visibility across buyer queries spanning the entire GEO agency evaluation journey — from problem-identification queries like "how to get my brand cited by ChatGPT" to vendor-comparison queries like "best GEO agency for B2B SaaS." You'll see exactly which queries return results that include your competitors but not Resonate Labs — and what it would take to appear in them. Fixing the robots.txt blocking now ensures the audit measures your true visibility potential, not an artificially suppressed baseline.
45–60 minutes walking through this document. Confirm personas, competitor tiers, feature strength ratings, and pain point accuracy. Your corrections directly shape the buyer query set.
Buyer queries generated from validated personas and pain points, executed across ChatGPT, Perplexity, Gemini, and Claude. Each query tests a specific buyer intent at a specific buying stage.
Visibility analysis, competitive positioning, content gap prioritization, and a three-layer action plan: technical fixes, content strategy, and authority building — prioritized by which gaps actually cost citations.
Start now — don't wait for the call These don't depend on the rest of the audit and will improve your baseline visibility before we even measure it:
1. Unblock GPTBot and ClaudeBot in Cloudflare — Cloudflare dashboard → Security → Bots → Managed robots.txt. Allow both crawlers, then confirm with the verification sketch after this list. Under 1 day. (Critical)
2. Verify schema markup on all 3 pages — Use Google's Rich Results Test to confirm whether Organization, WebPage, and other schema types are implemented. 1–3 days. (Medium)
3. Verify client-side rendering — Load the site with JavaScript disabled in browser DevTools. If all content renders, no action needed. Under 1 day. (Low)
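Once the Cloudflare change ships, a quick standard-library sketch to confirm the live robots.txt permits the right crawlers:

```python
# Verify the live robots.txt after the Cloudflare fix (stdlib only).
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://resonatelabs.co/robots.txt")
rp.read()

for bot in ("GPTBot", "ClaudeBot", "Google-Extended", "Bytespider"):
    verdict = "allowed" if rp.can_fetch(bot, "https://resonatelabs.co/") else "blocked"
    print(f"{bot}: {verdict}")
```

Expected after the fix: GPTBot and ClaudeBot allowed; Google-Extended and Bytespider still blocked if the training opt-out stance is kept.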
Two jobs before we meet. The questions on the left require your judgment — no one knows your business better than you. The engineering tasks on the right don't require the call at all.