Engagement Foundation Review

Resonate Labs
Audit Foundation

Before we run the audit, we need to make sure we're asking the right questions about the right competitors to the right buyers. This document presents what we've learned about Resonate Labs' market — your job is to tell us what we got right, what we got wrong, and what we missed.

Prepared April 2026
resonatelabs.co
GEO Agency — AI Visibility Auditing
GEO Readiness

Where You Stand Today

Before we measure citation visibility in the GEO agency space, these three signals tell us whether AI crawlers can access and trust Resonate Labs' content. Two of three are flagged.

Technical Readiness
At Risk
1 critical finding: robots.txt blocks GPTBot and ClaudeBot site-wide. OpenAI and Anthropic crawlers cannot index any page on resonatelabs.co. This must be resolved before the audit can measure true visibility.
Content Freshness
Good
Weighted freshness: 1.00. All 3 product/commercial pages updated within 90 days. 0 pages older than 6 months. No content marketing pages exist to score — the site has no blog or resource content, which is a structural limitation covered in findings.
Crawl Coverage
At Risk
Cloudflare managed robots.txt confirms blocking of GPTBot, ClaudeBot, Google-Extended, and Bytespider. ChatGPT-User and PerplexityBot are allowed. Sitemap contains 3 URLs, clean — no utility or test pages. Crawler blocking is the only obstacle to coverage.
Executive Summary

What You Need to Know

AI search is reshaping how B2B marketing leaders discover and shortlist Generative Engine Optimization agencies. Buyers are forming vendor shortlists inside ChatGPT, Perplexity, Gemini, and Claude before engaging with sales — and for a company whose entire value proposition is AI visibility, your own discoverability across these platforms is both a business imperative and a proof point. Companies that establish citation visibility now build a compounding advantage as AI platforms learn to trust cited domains.

This document presents three bodies of work for your validation. The competitive landscape identifies which vendors your buyers compare against in AI platform queries — tier assignments determine which head-to-head matchups the audit tests. The buyer personas map the roles who evaluate and sign GEO agency contracts, each generating distinct search intent across the buying journey. And the Layer 1 technical analysis tells us whether AI platforms can actually access your content; it surfaced a critical blocker: the Cloudflare managed robots.txt blocks GPTBot and ClaudeBot from indexing any page on the site.

Two types of decisions need to happen before the audit runs. First, at the validation call: confirm or correct the knowledge graph inputs — particularly the inferred VP of Demand Generation persona and the two medium-confidence primary competitor tier assignments, since these directly shape which queries test head-to-head differentiation versus category awareness. Second, for engineering immediately: unblock GPTBot and ClaudeBot in the Cloudflare dashboard and verify schema markup status on all three pages. These technical fixes don't require the call and will improve the baseline before we measure it.

TL;DR — Action Items
  • 🔴 Critical: Major AI Crawlers Blocked via Cloudflare Managed robots.txt — Engineering should allow GPTBot and ClaudeBot in Cloudflare's bot management settings immediately; these are the retrieval crawlers for two of the four audit target platforms.
  • 🟣 Validate at the Call: VP of Demand Generation (Aisha Thompson) — This persona was inferred from category patterns, not sourced from deal data. If demand gen isn't a distinct GEO buying center, we remove ~20% of demand-stage queries from the buyer set.
  • 🟣 Validate at the Call: Genevate and GenOptima primary tier — Both are medium-confidence newer entrants. If they rarely appear in actual competitive evaluations, moving them to secondary shifts approximately 12–16 head-to-head comparison queries to category-level.
  • ✅ Start Now: Unblock AI crawlers in Cloudflare — This is a prerequisite for the audit platforms to index the site and doesn't depend on the validation call. Under 1 day of engineering effort.
  • 📋 Validation Call: Feature strength calibration — Self-Service Reporting is rated "absent" and ROI Attribution "weak." Confirming whether these are conscious positioning choices or gaps determines which query categories play offense vs. defense across all platform tests.
How This Works

Reading This Document

Three things to know before you read further.

What this is This document presents the research foundation for Resonate Labs' GEO visibility audit. Every section feeds a downstream step: personas drive the buyer query set, competitors determine head-to-head matchups, features shape capability queries, and pain points provide the buyer language queries will be phrased in. The technical findings tell us whether AI platforms can reach your content at all.

What we need from you Look for the purple boxes throughout this document. Each one asks a specific question about something we need you to confirm or correct. Your answers directly change what the audit measures. A wrong persona means wasted queries. A wrong competitor tier means testing matchups that don't exist. Come to the validation call with answers to the purple questions.

Confidence badges Every data point includes a confidence badge: High means sourced from public data or direct evidence. Medium means inferred from category patterns or partial evidence — these are the ones most likely to need correction. Low means best-guess based on limited signal.

Company Profile

Resonate Labs

The baseline identity that anchors every query in the audit.

Company Identity

Company Name Resonate Labs High
Domain resonatelabs.co
Name Variants Resonate, ResonateLabs, Resonate Labs Co, resonatelabs
Category GEO Agency — AI Visibility Auditing & Managed Content Execution for B2B
Segment Startup
Key Products GEO Visibility Audit, Cited (book)
Positioning AI visibility auditing, competitive intelligence, and managed content execution for B2B brands seeking citations in AI platforms

Validate The name variant "Resonate" may cause entity collisions with Resonate (the consumer data platform) in AI platform responses. Does Cited generate its own buyer search intent (e.g., "GEO optimization book," "AI visibility guide"), or is it purely a credibility asset for the audit service? If it drives its own discovery queries, we add a dedicated query cluster for the book.

Buyer Personas

Who Buys GEO Services

5 personas: 2 decision-makers, 2 evaluators, 1 influencer. Each generates distinct search queries across the GEO agency buying journey.

Critical review area Personas drive the buyer query set — every role maps to a cluster of queries reflecting how that person searches during evaluation. A missing persona means an entire search intent pattern goes untested. A wrong persona means wasted queries that don't match real buying behavior.

Data sourcing Role, department, seniority, influence level, and veto power are sourced from the knowledge graph (provenance noted per card). Buying jobs and query focus areas are synthesized from the persona's role context and the GEO agency buying cycle — these are our best inference of how each role searches, not direct observations.

Sarah Chen
VP of Marketing
Decision-maker High
Senior marketing leader responsible for brand positioning and channel strategy. Owns the marketing budget line that GEO services fall under. Evaluates agency partners based on strategic fit, methodology credibility, and measurable outcomes.
Veto power: Yes — controls the budget and signs agency contracts
Technical level: Low
Primary buying jobs: Problem identification ("Is AI search affecting our pipeline?"), vendor shortlisting ("Who are the top GEO agencies?"), ROI justification ("Can I prove this investment to the board?")
Query focus areas: Category-level discovery queries, agency comparison queries, ROI and measurement queries
Source: Review mining — B2B agency buyer title analysis

Does the VP of Marketing hold final signature authority on GEO agency contracts at your typical buyer's company, or does approval route through a CMO or CEO? If a C-suite executive signs off, we add executive-level validation queries targeting their approval criteria.

Marcus Rivera
Director of Content Strategy
Evaluator High
Leads content operations and owns the content calendar. Evaluates GEO agencies on methodology depth, content quality standards, and integration with existing content workflows. The person who will work most closely with the agency day-to-day.
Veto power: No — recommends but does not control budget
Technical level: Medium
Primary buying jobs: Methodology evaluation ("How does this actually work?"), capability vetting ("Can they integrate with our content process?"), vendor comparison ("How does this agency compare to others?")
Query focus areas: GEO methodology deep-dive queries, agency capability comparison queries, content strategy for AI visibility queries
Source: Review mining — B2B content team title analysis

Does the Director of Content Strategy evaluate GEO agencies independently, or execute on a decision already made by the VP Marketing? If Marcus is executing rather than evaluating, we shift his query cluster from evaluation-stage to implementation-stage queries.

Aisha Thompson
VP of Demand Generation
Evaluator Med
Owns pipeline targets and channel mix decisions. Evaluates GEO services through a demand generation lens — concerned with whether AI visibility converts to measurable pipeline, not just brand presence. Compares GEO investment against paid media and traditional SEO ROI.
Veto power: No — influences budget allocation across channels
Technical level: Medium
Primary buying jobs: Channel ROI comparison ("Is GEO better than paid search?"), pipeline attribution ("Can we measure pipeline from AI citations?"), budget allocation ("Where does GEO fit in the channel mix?")
Query focus areas: GEO vs. SEO ROI queries, AI visibility pipeline impact queries, demand gen channel comparison queries
Source: LLM inference — inferred from B2B buying committee patterns

Is demand gen a separate buying center from content marketing for GEO services, or does Aisha's role overlap with Sarah Chen's evaluation? If demand gen isn't a distinct evaluator in actual deals, we merge these query clusters and remove ~20% of demand-stage queries.

Jordan Park
Head of Digital Marketing
Influencer Med
Manages digital channels including SEO, paid, and emerging platforms. The most technically fluent marketing team member — evaluates GEO agency methodology at a technical level. Often the internal champion who discovers GEO as a category and brings it to leadership.
Veto power: No — influences through technical evaluation
Technical level: High
Primary buying jobs: Technical discovery ("How does GEO actually work?"), vendor technical vetting ("Is their methodology real or repackaged SEO?"), internal championing ("How do I pitch this to my VP?")
Query focus areas: Technical GEO methodology queries, "how to" educational queries, agency evaluation framework queries
Source: Review mining — digital marketing team title analysis

Does the Head of Digital Marketing typically discover and champion GEO agencies bottom-up, or is this initiative top-down from the VP Marketing? If Jordan isn't the internal champion, we deprioritize awareness-stage technical queries and weight evaluation-stage queries instead.

David Okonkwo
VP of Finance
Decision-maker High
Controls budget approval for marketing agency contracts. Evaluates GEO investment through a financial lens — ROI projections, attribution methodology, and comparison against known-quantity channels. The gatekeeper who must see numbers before approving a new budget line.
Veto power: Yes — can block agency contracts that lack financial justification
Technical level: Low
Primary buying jobs: Investment validation ("What's the expected ROI?"), risk assessment ("Is this proven or experimental?"), budget comparison ("Is this cheaper/better than alternatives?")
Query focus areas: GEO ROI and cost queries, marketing agency pricing comparison queries, AI marketing investment validation queries
Source: LLM inference — B2B buying committee financial stakeholder pattern

Does Finance exercise veto power specifically on GEO agency contracts, or only above a budget threshold? If Finance isn't involved at typical GEO engagement sizes, we remove ROI-justification queries from the buyer set and reallocate to evaluation-stage queries.

Missing personas? Three roles that may be relevant for GEO agency buying committees: (1) CEO / Founder — at startup-stage companies, the CEO often drives the GEO initiative personally after experiencing invisible pipeline loss. (2) CMO — at larger companies, may be the actual decision-maker above VP Marketing. (3) Head of SEO / Growth — the technical champion who evaluates methodology depth and has the most opinions about whether "GEO" is distinct from SEO. Who else shows up in your deals?

Competitive Landscape

Who You're Compared Against

5 primary + 4 secondary competitors identified. Tier assignments determine which head-to-head matchups the audit tests across AI platforms.

Why tiers matter Primary competitors generate head-to-head comparison queries — "Resonate Labs vs Omniscient Digital," "best GEO agency for B2B SaaS" — while secondary competitors appear in broader category queries. Getting these tiers right determines which of approximately 30–40 queries test direct competitive differentiation vs. category awareness. Two primary competitors — Genevate and GenOptima — are rated medium confidence based on category listings rather than deal data. If they rarely appear in actual evaluations, moving them to secondary would shift approximately 12–16 queries out of the head-to-head set.

Primary Competitors

Omniscient Digital

Primary High
beomniscient.com
Established B2B organic growth agency (founded 2019) that pioneered "Surround Sound SEO" and now offers full GEO services. Stronger brand recognition and enterprise client roster (SAP, Asana, Loom) but GEO is one pillar alongside traditional SEO rather than the sole focus.
Source: Category listing

Graphite

Primary High
graphite.io
AI-powered growth agency for SaaS with a proprietary AEO/SEO platform tracking visibility across ChatGPT, Perplexity, Gemini, and Claude. Blends agency services with platform capabilities and was founded by a Reforge instructor — stronger tech tooling but less focused on structured buyer-intelligence methodology.
Source: Category listing

First Page Sage

Primary High
firstpagesage.com
Claims to have pioneered GEO in 2023 with extensive thought leadership content. Serves enterprise clients including Salesforce and Logitech — larger agency with broader SEO heritage but GEO is a pivot from their traditional SEO practice rather than a ground-up methodology.
Source: Category listing

Genevate

Primary Med
genevate.co
Pure-play GEO and PR agency founded 2025 by a NYC PR veteran. Combines generative engine optimization with strategic digital PR, clients include ZipRecruiter and CBRE — PR-heavy approach leveraging earned media for AI citations rather than a content-first research methodology.
Source: Category listing

GenOptima

Primary Med
gen-optima.com
Pure-play GEO agency with a "Result-as-a-Service" model where pricing is tied to measurable AI citation outcomes. Developed GEO-IQ assessment framework — performance-based pricing is a strong differentiator but less emphasis on the structured knowledge graph and buyer-journey query methodology.
Source: Category listing

Secondary Competitors

Evertune

Secondary Med
evertune.ai
Enterprise GEO SaaS platform with direct API access to foundation LLMs for visibility tracking at scale (1M+ prompts per brand monthly). A brand might buy Evertune to DIY rather than hire an agency — the platform vs. service model creates a fundamentally different buying decision.
Source: Category listing

Peec AI

Secondary High
peec.ai
Best-funded pure-play AI search analytics platform ($29M raised, 1,300+ brands onboarded) using UI scraping for authentic data. Strong measurement capability but analytics-only without strategic consulting or content execution.
Source: Category listing

Scrunch AI

Secondary Med
scrunch.com
Enterprise AI visibility platform ($19M VC funding, 500+ brand clients including Lenovo and Crunchbase) with a unique "Agent Experience Platform" for serving bot-friendly content. Infrastructure-focused approach to AI visibility rather than content strategy and execution.
Source: Category listing

HubSpot

Secondary High
hubspot.com
Massive marketing platform incumbent that launched AEO tools including Share of Voice tracking and a free AEO Grader. Brands already paying for HubSpot may add AI visibility features rather than hire a specialized agency — the "good enough" threat from an existing vendor relationship.
Source: Category listing

Validate Genevate and GenOptima are both rated medium confidence as primary competitors — do either appear in actual competitive evaluations, or should they move to secondary? Are there vendors we're missing entirely — particularly in-house teams, freelance GEO consultants, or traditional SEO agencies your prospects evaluate against specialized GEO agencies? Is any listed competitor irrelevant to your actual deal landscape?

Feature Taxonomy

What Buyers Evaluate

10 buyer-level capabilities mapped. Strength ratings determine which capability queries test competitive advantage vs. defense.

AI Visibility Auditing & Measurement Strong High

Run a comprehensive audit showing exactly where our brand appears — and doesn't appear — across ChatGPT, Perplexity, Gemini, and Claude for the queries our buyers actually ask

Competitive Intelligence in AI Platforms Strong High

Show me which competitors are getting recommended by AI instead of us, how often they win, and what content is earning them those citations

Content Strategy & Managed Execution Strong High

Don't just hand me a report — build and deploy the content that will actually get us cited by AI platforms

Multi-Platform AI Coverage Strong High

We need visibility across all the AI platforms buyers use — ChatGPT, Perplexity, Gemini, Claude — not just one

Buyer Persona & Query Intelligence Strong High

Map the actual questions our different buyer personas are asking AI platforms across the entire buying journey — from problem identification to vendor evaluation

Technical LLM Accessibility Optimization Moderate Med

Audit whether our website is technically set up for AI crawlers to read, index, and cite our content correctly

Thought Leadership & Category Authority Strong High

We need a partner who actually understands GEO deeply, has published research, and can educate our team — not just an agency that added "AI" to their SEO pitch deck

Self-Service Reporting & Dashboards Absent Med

Give me a dashboard where I can log in anytime and see our AI visibility metrics, track progress, and share results with my leadership team

ROI Attribution & Pipeline Impact Weak Med

Prove that the AI visibility work is actually driving pipeline and revenue — tie citations back to traffic, leads, and deals

Temporal Visibility Tracking & Trend Analysis Moderate Med

Track how our AI visibility changes over time so we can see if the content we're deploying is actually moving the needle

Validate Self-Service Reporting is rated "absent" and ROI Attribution "weak" — are these conscious positioning choices (consultancy model, not a platform), or gaps you plan to address? If gaps, competitors with dashboards dominate those query categories by default. Are any "strong" ratings overstated — particularly Content Strategy & Managed Execution relative to agencies with deeper execution track records and larger teams? Are there buyer capabilities we're missing entirely?

Pain Point Taxonomy

What Drives the Purchase

9 pain points: 6 high, 3 medium severity. Pain point buyer language is how queries will be phrased in the audit.

Invisible Pipeline Loss High High

"Our buyers are asking ChatGPT and Perplexity who to evaluate, and we're not even in the conversation — we're losing deals we never knew existed"
Personas: VP of Marketing, VP of Demand Generation

Competitor AI Dominance High High

"I asked ChatGPT to recommend tools in our category and our top competitor showed up first — we weren't mentioned at all"
Personas: VP of Marketing, VP of Demand Generation, Director of Content Strategy

SEO Declining Returns High High

"Our organic traffic has been dropping for 18 months and we're spending the same on content — the ROI on traditional SEO is collapsing and we don't have a plan B"
Personas: Director of Content Strategy, Head of Digital Marketing, VP of Marketing

No AI Measurement Capability High High

"My CEO asked me what our AI visibility looks like and I had no answer — we have dashboards for everything except the channel that's actually changing buyer behavior"
Personas: VP of Marketing, Head of Digital Marketing, VP of Demand Generation

Content Not Cited by AI Medium High

"We have hundreds of blog posts that rank on Google but none of them show up when buyers ask AI the same questions — our content strategy is built for the wrong channel"
Personas: Director of Content Strategy, Head of Digital Marketing

Agency Credibility Gap Medium Med

"Every SEO agency suddenly claims they do GEO but when I ask how it works, it sounds exactly like what they were already selling us — I can't tell who actually knows what they're doing"
Personas: Head of Digital Marketing, Director of Content Strategy, VP of Marketing

Budget Justification Difficulty High Med

"I know AI visibility matters but I can't walk into a budget meeting and prove it the way I can with paid search — finance wants numbers and I don't have them"
Personas: VP of Marketing, VP of Finance, VP of Demand Generation

Platform Fragmentation Medium High

"We show up in Perplexity but not ChatGPT, and our competitor is the opposite — how do we optimize for five different AI platforms that all work differently?"
Personas: Head of Digital Marketing, Director of Content Strategy

Early Funnel Blindspot High High

"By the time a buyer fills out our demo form, they've already built a shortlist using AI — if we weren't in that conversation at the start, we're playing catch-up from the first call"
Personas: VP of Demand Generation, VP of Marketing

Validate Budget Justification (high severity) and Agency Credibility Gap (medium severity) are both sourced from inference rather than direct buyer evidence — do these resonate with actual prospect conversations? Is "agency credibility gap" a pain point buyers articulate explicitly, or more of a market-level observation? Missing pain point candidates: (1) internal bandwidth to implement GEO recommendations — if buyers lack content teams to execute, it changes which service tier they evaluate; (2) executive skepticism that AI search will replace traditional channels — adds a "convince my boss" query cluster; (3) fear of AI platform algorithm changes invalidating the investment — risk-oriented buyers search differently. What buyer frustrations are we missing?

Site Analysis

Layer 1 Technical Findings

4 findings from the technical analysis of resonatelabs.co. 1 critical, 1 high, 1 medium, 1 low severity.

Engineering: start immediately The robots.txt is blocking GPTBot and ClaudeBot site-wide via Cloudflare's managed bot settings. This is a critical blocker — OpenAI and Anthropic cannot index any content on resonatelabs.co. Engineering should review Cloudflare's Security → Bots → Managed robots.txt configuration and allow GPTBot and ClaudeBot immediately. Schema markup status is also unknown and should be verified in parallel.

🔴 Major AI Crawlers Blocked via Cloudflare Managed robots.txt

What we found: The robots.txt file (managed by Cloudflare) blocks four key AI crawlers: GPTBot (OpenAI/ChatGPT training), ClaudeBot (Anthropic/Claude), Google-Extended (Google AI training), and Bytespider (ByteDance AI). These directives prevent all four crawlers from indexing any page on resonatelabs.co. ChatGPT-User (ChatGPT browse mode), PerplexityBot, and Googlebot are not mentioned and default to allowed. The robots.txt also includes a Content-Signal directive setting ai-train=no for all user agents.

Why it matters: GPTBot and ClaudeBot crawling is a prerequisite for content to enter the training data and retrieval pipelines of ChatGPT and Claude respectively. For a company whose entire value proposition is AI visibility, blocking the crawlers of the platforms you help clients get cited on creates a credibility gap and limits the company's own discoverability.

Business consequence: Queries like "best GEO agency for B2B" or "AI visibility audit providers" cannot return Resonate Labs as a cited source when OpenAI and Anthropic crawlers are blocked from indexing the site entirely — competitors with accessible sites capture these citations by default.

Recommended fix: Review the Cloudflare Managed robots.txt settings. Allow GPTBot and ClaudeBot to crawl the site (these are the content retrieval crawlers, not just training crawlers). Consider keeping Google-Extended and Bytespider blocked if the concern is training data usage. The Content-Signal ai-train=no directive already signals training opt-out without blocking retrieval crawling. Cloudflare dashboard → Security → Bots → Configure Managed robots.txt.

Impact: Critical Effort: < 1 day Owner: Engineering Affected: All pages site-wide (3 pages)
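
Once engineering flips the Cloudflare setting, the change can be confirmed against the live file rather than the dashboard. Below is a minimal Python sketch using the standard-library robots.txt parser; the crawler names and URLs come from this finding, and the output simply reflects whatever directives Cloudflare is serving at the time it runs.

```python
from urllib.robotparser import RobotFileParser

ROBOTS_URL = "https://resonatelabs.co/robots.txt"
HOMEPAGE = "https://resonatelabs.co/"

# Crawlers named in this finding. After the fix, the first two should be allowed;
# the last two may stay blocked if the training opt-out is kept.
CRAWLERS = ["GPTBot", "ClaudeBot", "Google-Extended", "Bytespider"]

def check_crawler_access() -> None:
    """Fetch the live robots.txt and report whether each crawler may fetch the homepage."""
    parser = RobotFileParser()
    parser.set_url(ROBOTS_URL)
    parser.read()  # downloads and parses the file served by Cloudflare
    for agent in CRAWLERS:
        allowed = parser.can_fetch(agent, HOMEPAGE)
        print(f"{agent:16} {'allowed' if allowed else 'BLOCKED'}")

if __name__ == "__main__":
    check_crawler_access()
```

Expected result after the fix: GPTBot and ClaudeBot report allowed, while Google-Extended and Bytespider may intentionally remain blocked if the training opt-out stays in place.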

🟡 Extremely Small Indexed Footprint — Only 3 Pages Discoverable

What we found: The entire site consists of 3 pages: the homepage (resonatelabs.co), a brief request page (/brief/), and a visibility review page (/visibility/). The sitemap.xml contains only these 3 URLs. No blog, case studies, feature pages, comparison pages, documentation, resource pages, or about page were found. The homepage navigation uses only anchor links to sections within the same page.

Why it matters: AI platforms construct responses by synthesizing content from multiple authoritative pages. A 3-page site provides minimal surface area for AI crawlers to index and cite. Each missing page type represents a class of buyer queries where Resonate Labs cannot be cited. AI platforms prefer discrete, focused pages over one comprehensive page — they extract passages from focused pages more reliably.

Business consequence: Buyer queries spanning evaluation stages — "GEO agency case studies," "AI visibility methodology explained," "Resonate Labs vs Omniscient Digital" — have no dedicated content to cite, giving competitors with deeper content libraries a structural advantage across every query category in the GEO agency space.

Recommended fix: This is a structural observation. The content gap analysis in the full audit will identify specific page types and topics that would expand the site's citable surface area, prioritized by which gaps actually cost citations in query response data.

Impact: High Effort: 2-4 weeks Owner: Content Affected: Entire site — all buyer query categories

🔵 Schema Markup Status Unknown — Manual Verification Required

What we found: Our analysis method (rendered markdown) cannot detect JSON-LD schema markup, meta descriptions, or OG tags. These signals are embedded in HTML head sections and are not visible in rendered output. We were unable to determine whether the site implements Organization, WebPage, Product, or other schema types.

Why it matters: Schema markup helps AI platforms understand entity relationships, product offerings, and content structure. Organization schema reinforces brand identity. FAQ schema and HowTo schema make content more extractable for AI responses. Without verification, we cannot confirm whether this signal is present or absent.

Business consequence: Without verified schema markup, AI platforms may misclassify Resonate Labs' content when constructing responses to GEO agency evaluation queries, slightly reducing citation priority compared to competitors with structured data signals in place.

Recommended fix: Verify schema markup using Google's Rich Results Test or Schema.org Validator. At minimum, implement: (1) Organization schema on the homepage, (2) WebPage schema on all pages, (3) FAQ schema on any future FAQ content. Also verify meta descriptions and OG tags are present on all 3 pages.

Impact: Medium Effort: 1-3 days Owner: Engineering Affected: All 3 pages
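
As a quick first pass before running Google's Rich Results Test, a script like the sketch below can report whether JSON-LD, a meta description, and Open Graph tags appear at all in the HTML each page serves. The three URLs are the ones listed in the sitemap; the substring checks are a rough presence heuristic, not a validator, and quoting styles other than the ones shown would need to be added.

```python
import urllib.request

# The three indexed pages listed in the sitemap.
PAGES = [
    "https://resonatelabs.co/",
    "https://resonatelabs.co/brief/",
    "https://resonatelabs.co/visibility/",
]

# Head-section signals this finding could not assess from rendered markdown.
SIGNALS = {
    "JSON-LD schema":   '<script type="application/ld+json"',
    "meta description": 'name="description"',
    "Open Graph tags":  'property="og:',
}

def head_signals(url: str) -> dict[str, bool]:
    """Download the raw HTML (no JavaScript execution) and test for each signal."""
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0 (schema check)"})
    with urllib.request.urlopen(req, timeout=15) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return {name: marker in html for name, marker in SIGNALS.items()}

for page in PAGES:
    found = head_signals(page)
    print(page)
    for name, present in found.items():
        print(f"  {name}: {'present' if present else 'missing'}")
```

Any page reporting JSON-LD as missing is a candidate for the Organization and WebPage schema recommended above.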

🔵 Client-Side Rendering Status Unknown — Manual Verification Recommended

What we found: All 3 pages returned substantive text content via web_fetch, suggesting server-side rendering is functional. However, we cannot definitively confirm whether any page content relies on client-side JavaScript rendering that might be invisible to AI crawlers with limited JavaScript execution. The site appears to be a static site, but this should be confirmed.

Why it matters: AI crawlers vary in JavaScript execution capability. GPTBot has limited JS support, while Googlebot renders fully. If critical content is loaded via JavaScript, it may be invisible to some AI platforms even after unblocking their crawlers.

Business consequence: If key content sections on Resonate Labs' homepage are JS-rendered, queries like "what does Resonate Labs' GEO methodology include" may return incomplete or missing information even after crawlers are unblocked, reducing citation quality for methodology evaluation queries.

Recommended fix: Verify by loading the site with JavaScript disabled in browser DevTools (Settings → Debugger → Disable JavaScript). If all content renders without JS, no action needed. If content disappears, implement server-side rendering for the affected sections.

Impact: Low Effort: < 1 day Owner: Engineering Affected: All 3 pages — verification only
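
The DevTools check above is the quickest manual test. For a repeatable version, the sketch below fetches the homepage without executing any JavaScript, the way a limited crawler would, and checks that known homepage copy is present in the raw HTML. The phrase used here is a hypothetical placeholder drawn from the product name; swap in actual sentences from each key section before relying on the result.

```python
import urllib.request

HOMEPAGE = "https://resonatelabs.co/"

# Placeholder phrases; replace with real copy from the homepage sections
# that matter most for citation (methodology, offer, positioning).
EXPECTED_PHRASES = [
    "GEO Visibility Audit",  # hypothetical example drawn from the product name
]

# Plain HTTP fetch: no JavaScript runs, which is exactly the condition being tested.
req = urllib.request.Request(HOMEPAGE, headers={"User-Agent": "Mozilla/5.0 (render check)"})
with urllib.request.urlopen(req, timeout=15) as resp:
    raw_html = resp.read().decode("utf-8", errors="replace")

for phrase in EXPECTED_PHRASES:
    if phrase in raw_html:
        print(f"{phrase!r}: present in raw HTML")
    else:
        print(f"{phrase!r}: MISSING, likely rendered client-side")
```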

Site Analysis Summary

Total pages analyzed 3
Commercially relevant pages 3
Heading hierarchy 0.77
Content depth 0.40
Freshness 1.00 weighted (product: 1.00 • blog: n/a • structural: n/a)
Schema coverage Unable to assess (3 pages unscored)
Passage extractability 0.52

Partial sample This analysis covers only 3 pages — the entire discoverable site. Content depth (0.40) and passage extractability (0.52) scores reflect the site's structure of packing substantial content into a single homepage rather than distributing across dedicated pages. Schema coverage could not be assessed from rendered output and requires manual verification.

Next Steps

What Happens Next

Why now

• AI search adoption is accelerating — buyer discovery patterns are shifting quarter over quarter

• Early citations compound: domains that AI platforms learn to trust now get cited more frequently as training data accumulates

• Competitors who establish GEO visibility first leave late movers at a structural disadvantage

• Generative Engine Optimization is still early-innings — acting now means competing against inaction, not against entrenched strategies

The full audit will measure Resonate Labs' citation visibility across buyer queries spanning the entire GEO agency evaluation journey — from problem-identification queries like "how to get my brand cited by ChatGPT" to vendor-comparison queries like "best GEO agency for B2B SaaS." You'll see exactly which queries return results that include your competitors but not Resonate Labs — and what it would take to appear in them. Fixing the robots.txt blocking now ensures the audit measures your true visibility potential, not an artificially suppressed baseline.

01

Validation Call

45–60 minutes walking through this document. Confirm personas, competitor tiers, feature strength ratings, and pain point accuracy. Your corrections directly shape the buyer query set.

02

Query Generation & Execution

Buyer queries generated from validated personas and pain points, executed across ChatGPT, Perplexity, Gemini, and Claude. Each query tests a specific buyer intent at a specific buying stage.

03

Full Audit Delivery

Visibility analysis, competitive positioning, content gap prioritization, and a three-layer action plan: technical fixes, content strategy, and authority building — prioritized by which gaps actually cost citations.

Start now — don't wait for the call These don't depend on the rest of the audit and will improve your baseline visibility before we even measure it:

1. Unblock GPTBot and ClaudeBot in Cloudflare — Cloudflare dashboard → Security → Bots → Managed robots.txt. Allow these retrieval crawlers. Under 1 day. (Critical)

2. Verify schema markup on all 3 pages — Use Google's Rich Results Test to confirm whether Organization, WebPage, and other schema types are implemented. 1–3 days. (Medium)

3. Verify client-side rendering — Load the site with JavaScript disabled in browser DevTools. If all content renders, no action needed. Under 1 day. (Low)

Before the Call

Your Pre-Call Checklist

Two jobs before we meet. The questions below require your judgment — no one knows your business better than you. The engineering tasks that follow them don't require the call at all.

Questions for You
Is VP of Demand Generation (Aisha Thompson) a distinct buying role for GEO services, or does demand gen overlap with VP Marketing?
If wrong: we remove ~20% of demand-stage queries from the buyer set
Do Genevate and GenOptima appear in actual competitive evaluations, or should they move from primary to secondary tier?
If wrong: ~12-16 head-to-head comparison queries shift to category-level
Are Self-Service Reporting ("absent") and ROI Attribution ("weak") conscious positioning choices, or gaps to address?
If gaps: they become defensive query categories where platform competitors dominate
Does "Cited" (the book) generate its own buyer search intent, or is it purely a credibility asset?
If own intent: we add a dedicated query cluster for the book
Does VP Marketing hold final signature authority, or does a CMO/CEO sign GEO agency contracts?
If wrong: we add executive-level validation queries targeting C-suite approval criteria
Does Director of Content Strategy evaluate GEO agencies independently, or execute on decisions already made?
If wrong: query cluster shifts from evaluation-stage to implementation-stage
Does Head of Digital Marketing champion GEO bottom-up, or is the initiative top-down?
If wrong: we deprioritize awareness-stage technical queries
Does VP Finance exercise veto power on GEO agency contracts specifically, or only above a budget threshold?
If wrong: we remove ROI-justification queries from the buyer set
Are CEO/Founder, CMO, or Head of SEO/Growth missing from the persona set?
If missing: entire search intent patterns go untested in the audit
Are there missing competitors (freelancers, in-house teams) or irrelevant ones to remove?
If wrong: head-to-head queries test matchups that don't exist in real deals
Are any "strong" feature ratings overstated, or are pain point severities inaccurate?
If wrong: query weighting misallocates between offense and defense categories
Are internal bandwidth constraints, executive AI skepticism, or algorithm change fear missing pain points?
If missing: buyer frustration queries that drive discovery go untested
For Engineering — Start Now
Unblock GPTBot and ClaudeBot in Cloudflare managed robots.txt
Critical: prerequisite for OpenAI and Anthropic to index any content. Under 1 day.
Verify schema markup on all 3 pages using Google's Rich Results Test
Medium: confirms whether Organization, WebPage, and other schema types are implemented. 1-3 days.
Verify client-side rendering by loading the site with JavaScript disabled
Low: likely no issue, but confirms all content is accessible to AI crawlers. Under 1 day.
Alignment

We're Aligned On

This isn't a contract — it's a shared understanding. The audit runs against what's below. If something changes between now and the call, we adjust. The goal is to make sure we're asking the right questions for the right buyers against the right competitors.
Already Confirmed
Competitive set — 5 primary + 4 secondary competitors identified and positioned
Persona set — 5 personas: 2 decision-makers, 2 evaluators, 1 influencer
Feature taxonomy — 10 buyer-level capabilities with outside-in strength ratings (6 strong, 2 moderate, 1 weak, 1 absent)
Pain point set — 9 buyer frustrations: 6 high severity, 3 medium severity
Layer 1 technical audit — 4 findings logged (1 critical, 1 high, 1 medium, 1 low), engineering notified
Decided at the Call
Feature strength calibration — Self-Service Reporting ("absent") and ROI Attribution ("weak") need confirmation as positioning choices vs. gaps; determines offense vs. defense query allocation
Primary competitor tier validation — Genevate and GenOptima (medium confidence) need deal-frequency confirmation before head-to-head queries are generated
VP of Demand Generation persona validation — inferred from category patterns, needs confirmation from actual deal data
Feature overweighting — top 3 capabilities to emphasize in capability queries
Pain point prioritization — top 3 buyer problems to weight first in query generation
Any persona, competitor, or feature corrections from validation discussion
Client
Date