Engagement Foundation Review

Schematic Audit Foundation

Before we run the audit, we need to make sure we're asking the right questions about the right competitors to the right buyers. This document presents what we've learned about Schematic's market — your job is to tell us what we got right, what we got wrong, and what we missed.

Prepared April 23, 2026
schematichq.com
Runtime Monetization & Entitlements
GEO Readiness

Where You Stand Today

Before we measure citation visibility in the runtime monetization and entitlements space, these three signals tell us whether AI crawlers can access Schematic's site, whether the content they see is fresh enough to be trusted, and whether there are any technical blockers to cite-worthy extraction.

Technical Readiness
Needs Attention
Two high-severity findings with no critical blockers. The homepage "Docs" nav link returns a 404 on docs.schematichq.com/introduction (live path is /overview), and the Automox case study — one of only five customer proof points — is 608 days old.
Content Freshness
At Risk
Weighted freshness of 0.55 hides a category split: blog/content marketing averages 0.71 (21 of 28 posts updated within 90 days), but commercial product pages average 0.23 — 13 of 15 scored product pages are over 180 days old, outside the 2–3 month window where AI platforms concentrate 76.4% of citations (Ahrefs, analysis of the top 1,000 cited pages, 2024). Two product pages have no detectable date — verify manually.
Crawl Coverage
Good
robots.txt at schematichq.com/robots.txt explicitly allows GPTBot, ChatGPT-User, ClaudeBot, PerplexityBot, Google-Extended, Googlebot, and Bytespider. Sitemap is accessible with 46 pages indexed and no junk paths detected.
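As a sanity check, the allow/deny behavior described above can be reproduced with Python's standard-library robots.txt parser. The ROBOTS_TXT body below is an illustrative sketch of the allow-all pattern, not a copy of Schematic's live file:

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt granting the AI crawlers named above.
# This is a sketch of the pattern, not Schematic's actual file.
ROBOTS_TXT = """\
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: *
Allow: /

Sitemap: https://schematichq.com/sitemap.xml
"""

def crawler_allowed(robots_txt: str, agent: str, url: str) -> bool:
    """Return True if `agent` may fetch `url` under this robots.txt."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, url)

for bot in ("GPTBot", "ClaudeBot", "PerplexityBot"):
    print(bot, crawler_allowed(ROBOTS_TXT, bot, "https://schematichq.com/pricing"))
```

The same check run against the live file (fetched from schematichq.com/robots.txt) is how the audit verifies crawl coverage before measuring citations.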
Executive Summary

What You Need to Know

Runtime monetization and entitlements is still a category where AI search behavior is being written in real time. Founders and engineering leaders who would once have Googled "Stripe alternatives for usage-based pricing" are increasingly asking ChatGPT and Perplexity the same question in natural language — and the answers those tools give now become the default shortlist. Schematic's category is in its early innings of GEO optimization, so establishing citation visibility now means competing against inaction, not against entrenched rivals.

This Foundation Review covers three things we need your validation on before the audit runs: the competitive landscape that shapes how we construct comparison queries, the buyer personas whose search intent determines how those queries get phrased, and the technical baseline that determines whether AI crawlers can see Schematic's answers at all. Each section is a set of inputs — not conclusions. We're asking you to confirm we're aiming at the right targets before we start measuring.

The validation call is a working session with real stakes. Two categories of decisions come out of it: (1) input validation — are the right entities in the right tiers, and do the medium-confidence items hold up against how your deals actually close? — and (2) engineering triage — which Layer 1 findings can start now, before results come back, so baseline signals improve while the audit measures citation visibility. The action items below aggregate everything you need to prepare.

TL;DR — Action Items
  • 🟡 High: Homepage docs nav returns 404 — point the header "Docs" link from docs.schematichq.com/introduction to /overview (the live entry path). Less than a day for engineering; no client input required.
  • 🟡 High: Automox case study is 608 days old — refresh with a 2026 update interview and re-date, or retire and let JourneyTMS/Plotly carry the "Schematic customers" citation weight.
  • 🟣 Validate at the Call: Priya Subramanian (Staff Full-Stack Engineer) — this persona is llm-inferred, not observed in case studies. If the deep-in-the-code reviewer doesn't show up in real deals, we drop 15–20 implementation-oriented queries and consolidate technical evaluation under the VP Engineering persona.
  • 🟣 Validate at the Call: LaunchDarkly as a competitor — its tier is llm-inferred, not observed in head-to-head pages. If buyers never evaluate Schematic vs. LaunchDarkly directly, we reassign its query budget to a more consequential primary like Stigg or Orb.
  • ✅ Start Now: Fix the docs nav 404 and republish /pricing + /developers to refresh sitemap lastmod — both are engineering/marketing tasks that don't wait on any validation answer, and both raise the freshness and crawl-signal baseline before the audit measures it.
  • 📋 Validation Call: Which 3 of the 6 strong-rated features best represent where Schematic wins deals — the audit tests all 12 capabilities, but differentiation queries emphasize 3. The answer determines roughly 30–40% of the capability query set's orientation.
Orientation

How to Read This Document

Purpose: This Foundation Review captures what an outside-in analyst can learn about Schematic's position in the runtime monetization and entitlements space before running buyer queries against ChatGPT, Claude, Perplexity, and Google's AI Overviews. It's the input contract for the audit — if the inputs are wrong, the output is wrong.

Your Job: Read each section and tell us what's right, what's wrong, and what's missing. We're especially interested in medium-confidence items — those are judgment calls that someone inside Schematic can confirm in seconds and that an outside analyst can only guess at. The purple question boxes throughout the document flag the specific items we want your answer on.

Confidence Badges: Every major entity carries a confidence tag — High (directly observed from Schematic's site, case studies, or competitor-authored comparison pages), Medium (inferred from category context or review-mined patterns), or Low (LLM-synthesized from category norms). Low- and medium-confidence items are where the validation call earns its keep.

Company Profile

How We've Framed Schematic

The category framing below drives which query clusters the audit assembles. If the category description doesn't match how your buyers describe what they're looking for, we're testing the wrong questions.

Company name Schematic High
Domain schematichq.com
Name variants SchematicHQ, Schematic HQ, Schematic Inc., Schematic.com
Category Runtime monetization & entitlements platform for SaaS and AI companies
Company segment Startup (seed — $6.5M, April 2026) High
Key products Plans & Entitlements • Metering & Pricing • Smart Flags • Billing Components • Revenue Insights
Positioning "Decouples pricing, packaging, and in-product feature access from application code, on top of Stripe Billing."

→ Validate: Schematic's category spans two distinct buying conversations: a Series A SaaS founder graduating past Stripe Billing for entitlement enforcement, and an AI product lead spinning up a credit-based billing stack for an agent product (per the dedicated /ai landing page). Are those one query cluster or two? If unified, the category query set stays tighter; if separate, we add a dedicated AI-monetization cluster that tests "credit billing for LLM apps" alongside the core "runtime entitlements for SaaS" set.

Buyer Personas

Who Actually Buys Schematic

6 personas: 3 decision-makers (VP Engineering, CEO, Director FP&A), 1 evaluator (Head of Product), 2 influencers (Growth Lead, Staff Engineer). Personas drive the buyer query set — how each role searches determines the phrasing the audit tests against AI platforms.

Critical Review Area: Personas are the highest-leverage input to validate. Query intent, phrasing, and competitive framing all shift depending on who's doing the searching. A CEO asking "what's the fastest way to ship usage-based pricing" produces a different query set than a VP Engineering asking "alternatives to building billing in-house" — and both produce a different set than a Director of FP&A asking "subscription billing with ASC 606 support." If the personas below don't map to the actual humans in your deals, we're testing the wrong search behavior.

Data Sourcing: Names, role titles, and departments are placeholders for archetypes — the audit tests the role, not the person. Provenance is review_mining for personas synthesized from G2, case studies, and Schematic's own customer stories (Plotly, Automox, JourneyTMS); automated_scrape for roles observed directly on the site; and llm_inference for archetypes inferred from category norms — the last of these is flagged explicitly wherever it appears.

Head of Product / Lead Product Director
Product • Director seniority
Evaluator High
Owns monetization strategy and packaging — the persona Schematic's "Clean Pricing" and "Experiment Safely" pages are written for. Frustrated that shipping a new plan or pricing experiment takes an engineering sprint instead of a product decision.
Veto power: No — influential but paired with VP Engineering or CEO for the signoff.
Technical level: Medium — reads SDK docs critically but doesn't implement.
Primary buying jobs: Validating that Schematic can model current plans without shipping code, confirming pricing experiments are safe to run on live customers, and evaluating revenue analytics as an expansion-signal surface.
Query focus areas: "no-code pricing experiments," "SaaS packaging tools," "run a pricing A/B test without engineering," "Schematic vs. Stigg for packaging."
Source: review_mining — inferred from G2 and Plotly/JourneyTMS case studies where product leads are quoted as initiators.

At seed-stage Schematic deals, does Head of Product actually drive the purchase, or does the CEO/founder sign? If the CEO signs, we downweight PM-first discovery queries and lean into founder-aligned strategic queries like "should we build or buy billing."

VP of Engineering
Engineering • VP seniority
Decision-maker High
The persona quoted directly in the Automox and Plotly case studies — owns the "build vs. buy" call and the engineering roadmap that Schematic is trying to free up. The engineering-sprint-drain pain point (20–30% of sprint capacity spent on billing code) lives with this role.
Veto power: Yes — "if engineering says no, it doesn't happen."
Technical level: High — evaluates SDK quality, integration surface, and entitlement check latency personally.
Primary buying jobs: Validating SDK quality and integration footprint, confirming Schematic won't become the next homegrown stack to babysit, de-risking Stripe-adjacent architecture decisions.
Query focus areas: "build vs. buy billing," "entitlement SDK latency," "Schematic vs. Stigg vs. Lago for engineering teams," "best way to extend Stripe with entitlements."
Source: review_mining — directly quoted in Automox and Plotly case studies.

At what engineering org size does the VP still personally evaluate Schematic vs. delegating to a staff engineer? If delegation kicks in past ~20 engineers, the query phrasing should shift technical depth downward on VP-targeted queries and upward on IC-targeted ones.

Co-Founder / CEO
Executive • C-Suite
Decision-maker Medium
At Schematic's target customer size (seed → Series A SaaS), the founder is often still the signer on any tool that touches revenue infrastructure. Feels the "we lost a six-figure deal because we couldn't spin up a custom plan" pain acutely; wants to ship AI features on credits this quarter.
Veto power: Yes — at seed/Series A, this is the signature on the contract.
Technical level: Medium — often a former engineer; reads architecture docs but trusts the VP Eng's read.
Primary buying jobs: Confirming Schematic will let the company ship AI/credit pricing without a billing rewrite, de-risking the Stripe-plus-Schematic stack vs. Stripe-only, protecting against enterprise-deal leakage from spreadsheet overrides.
Query focus areas: "fastest way to ship usage-based pricing," "Stripe vs. Schematic for startups," "how do AI companies handle credit billing."
Source: automated_scrape — inferred from Schematic's positioning content and target-customer stage.

At Schematic's current ACV band (inferred ~$6K–$50K/yr from the published Starter/Scale tiers), does the CEO personally sign off at all contract sizes, or only on enterprise? If CEO signoff is enterprise-only, the CEO-targeted queries should anchor to six-figure-deal framing rather than PLG discovery framing.

Director of FP&A / Finance
Finance • Director seniority
Decision-maker Medium
Feels the "enterprise overrides in spreadsheets" and "revenue leakage from manual proration" pains most directly — the one who has to reconcile what the product says a customer has access to vs. what Stripe actually billed them for. Technical level is low but the veto is real: ASC 606 requirements and audit risk block deals.
Veto power: Yes — on finance/audit grounds, especially for enterprise deals.
Technical level: Low — reads docs for revenue recognition features; doesn't evaluate SDKs.
Primary buying jobs: Confirming Schematic closes revenue-leakage gaps, validating ASC 606 / deferred revenue support, checking that enterprise overrides leave an audit trail.
Query focus areas: "subscription billing with ASC 606," "SaaS revenue recognition tools," "Chargebee vs. Schematic for finance teams," "audit-ready subscription billing."
Source: review_mining — Chargebee/Zuora migration stories and G2 finance-reviewer quotes.

Does FP&A actually hold veto at a seed-stage SaaS prospect, or does finance veto only kick in post-Series A? If premature at Schematic's target segment, we demote this to "influencer" and drop revenue-recognition queries from the priority query set — which also affects how we rate the weak-rated Revenue Recognition & Finance Workflows feature.

Senior Manager, Growth & Digital
Marketing • Manager seniority
Influencer Medium
The "expansion signals hidden" and "self-service portal gap" pains land here — the growth lead who wants to see which accounts are approaching limits and wants customers upgrading themselves instead of filling the support queue.
Veto power: No — advocates for tooling that drives expansion revenue but doesn't sign.
Technical level: Medium — wires up pricing tables and checkout flows directly; reads the React SDK docs.
Primary buying jobs: Validating that Billing Components ship fast, confirming Revenue Insights surfaces expansion signals, checking self-serve upgrade flows reduce support ticket volume.
Query focus areas: "self-serve upgrade flow SaaS," "pricing page components React," "expansion revenue signals," "customer portal for SaaS billing."
Source: review_mining — growth/marketing reviewer titles on G2 and buyer quotes about expansion leakage.

At seed/Series A SaaS companies without a dedicated growth lead, does this role fold into PM or CEO? If so, we remove growth-focused queries as a standalone cluster and merge their scope into the Head of Product and CEO sets rather than testing growth-specific search behavior that doesn't exist in the deal.

Staff Full-Stack Engineer
Engineering • Senior IC
Influencer Medium
The deep-in-the-code reviewer — the engineer the VP hands a tool to and says "kick the tires, tell me what breaks." Feels the "billing code scattered" pain viscerally; their name is on the paywall logic duct-taped across four services.
Veto power: No — can sink a deal with a negative evaluation, but doesn't sign.
Technical level: High — runs the SDK through its paces, reads every doc page.
Primary buying jobs: Validating SDK quality and documentation completeness, testing entitlement check latency at realistic traffic, confirming migration effort from homegrown gating code is bounded.
Query focus areas: "feature flag vs. entitlement SDK," "entitlement check performance," "how to migrate off homegrown billing," "best entitlement library for Node/React."
Source: llm_inference — this archetype is inferred from category norms, not directly observed in Schematic's case studies.

This persona is the KG's weakest — case studies quote the VP Engineering but the Staff Engineer is synthesized. Does this role show up in actual Schematic deals? If it does, it shapes how we phrase SDK-depth and integration-footprint queries. If it doesn't, we drop the 15–20 implementation-oriented queries and consolidate technical evaluation under the VP Engineering persona.

→ Missing Personas? Three roles sometimes appear in runtime monetization deals — do they show up in yours? CRO / Head of Revenue (if sales-led enterprise contracts are a distinct buying motion from the PLG/self-serve motion at your current stage). Head of Platform / Infrastructure (if billing/entitlement decisions get routed through a central platform team rather than handled by product-aligned engineering). Developer Advocate (if DevRel voices materially shape technical buyer sentiment in the monetization SDK category). Who else shows up in your deals that we haven't mapped?

Competitive Landscape

Who We'll Test You Against

6 primary + 5 secondary competitors identified. Tier assignments determine which competitors appear in head-to-head differentiation queries vs. category-awareness queries.

Why Tiers Matter: Primary competitors get dedicated head-to-head query slots — roughly 6–8 queries each testing phrases like "Schematic vs. Stigg for runtime entitlements" or "Orb alternatives for usage-based pricing" — approximately 36–48 queries total across the primary set. Secondary competitors surface only in category-awareness queries. LaunchDarkly sits in secondary on llm_inference provenance — if buyers never evaluate Schematic vs. LaunchDarkly head-to-head in practice, we'd move those query slots to a more relevant vendor. Maxio, Paddle, Recurly, and Zuora are also medium confidence on tier — worth scrutinizing whether each belongs in the set at all.

Primary Competitors

Stigg

Primary High
stigg.io
The closest head-to-head competitor — a MonetizationOS that unifies pricing, packaging, entitlements and usage with runtime enforcement; overlaps almost entirely on Schematic's core wedge but leans more feature-flag-native and less tied to the Stripe ecosystem.
Source: competitor_site

Orb

Primary High
withorb.com
Usage-based billing infrastructure for high-scale AI and infra companies; stronger on metering accuracy, revenue simulation, and invoice reconciliation, but engineering owns more of the stack and in-product entitlement enforcement is thinner than Schematic's.
Source: competitor_site

Metronome

Primary High
metronome.com
High-throughput usage metering and billing platform recently acquired by Stripe; excellent for companies with precise event-rating needs at scale, but narrower than Schematic on packaging, entitlements, and customer-facing portal components.
Source: category_listing

Stripe Billing

Primary High
stripe.com/billing
The default billing and payments layer under Schematic — simultaneously a platform partner and the DIY baseline most prospects evaluate against; ubiquitous and cheap, but leaves entitlements, packaging, and runtime access control to engineering.
Source: automated_scrape

Chargebee

Primary High
chargebee.com
Mid-market subscription management and revenue growth platform; strong on recurring invoicing, dunning, and finance workflows, but buyers increasingly hit its ceiling on usage-based and hybrid pricing and look elsewhere — including Schematic — for entitlement enforcement.
Source: competitor_site

Lago

Primary High
getlago.com
Open-source, self-hostable metered billing that appeals to engineering teams who want to own the stack; strong developer control and cost transparency, but leaves packaging, plan management, and customer-facing UI to the buyer to build.
Source: competitor_site

Secondary Competitors

Zuora

Secondary Medium
zuora.com
Enterprise subscription and revenue platform; the incumbent mid-market startups explicitly want to avoid — heavy implementation, expensive, and slow to change pricing, which is the exact problem Schematic sells against.
Source: category_listing

Recurly

Secondary Medium
recurly.com
Subscription billing focused on dunning and revenue recovery; popular with consumer and media SaaS but weaker on B2B usage-based, hybrid pricing, and in-product entitlements — rarely the direct winner against Schematic.
Source: category_listing

Maxio

Secondary Medium
maxio.com
Billing + revenue ops platform (Chargify + SaaSOptics merger) aimed at finance-led B2B SaaS; strong on revenue recognition and metrics, but not built for runtime feature gating or modern AI pricing that Schematic targets.
Source: category_listing

Paddle

Secondary Medium
paddle.com
Merchant-of-record billing and tax solution — owns payments, compliance and global sales tax, but operates at the invoicing layer and does not solve the in-product entitlement and packaging problem Schematic is built for.
Source: category_listing

LaunchDarkly

Secondary Medium
launchdarkly.com
Market-leading feature flag and experimentation platform; adjacent to Schematic's Smart Flags and frequently appears in buyer stacks, but is built for release management and experimentation rather than plan-based monetization and entitlement enforcement.
Source: llm_inference

→ Validate Competitors: Three questions in one: (1) Missing vendors — who shows up in Schematic's deals that's not in this list? We'd expect possibilities like Flexprice, Togai, Kovr, Superwall, or RevenueCat depending on vertical. (2) Tier accuracy — LaunchDarkly is llm-inferred; do buyers actually evaluate Schematic's Smart Flags against LaunchDarkly head-to-head, or is the overlap only conceptual? And should any of Maxio, Paddle, Recurly, or Zuora move out of the set entirely? (3) Stripe's dual role — Stripe Billing is classified as a primary competitor because prospects evaluate "just use Stripe" as the alternative, but it's also the payments layer under Schematic. Does the audit test "Schematic vs. Stripe Billing" head-to-head, or only "Schematic on top of Stripe vs. Stripe alone"? The framing changes roughly 6–8 queries.

Feature Taxonomy

How We'll Test Your Capabilities

12 buyer-level capabilities mapped from product pages, category grids, and competitor feature matrices. Features determine which capability queries the audit tests against each AI platform.

Runtime Entitlement Enforcement: Strong (High confidence)

Gate features, enforce usage limits, and check plan access in-product with a single SDK call instead of hand-rolled gating code

No-Code Pricing & Packaging Configuration: Strong (High confidence)

Let product and GTM teams change plans, tiers, add-ons, trials and overrides without shipping code

Stripe-Native Monetization Layer: Strong (High confidence)

Extend Stripe Billing with entitlements and packaging without migrating payments or rebuilding invoicing

Usage Metering, Credits & Hybrid Pricing: Moderate (Medium confidence)

Support pay-as-you-go, credit burndown, tiered overages and hybrid seat+usage models without rewriting billing

Drop-In Billing Components & Customer Portal: Strong (High confidence)

Ship a pricing table, checkout flow, and self-serve customer portal in days using prebuilt React components

Monetization-Aware Feature Flags: Strong (High confidence)

Use feature flags that understand plan, limit, and entitlement state so releases and paywalls stay in sync

Plan Versioning & Pricing Experimentation: Moderate (Medium confidence)

Roll out, A/B test and safely migrate customers between plan versions without risking revenue regressions

Sales-Led Overrides & Enterprise Contracts: Moderate (Medium confidence)

Apply per-customer pricing, commitments, and contract overrides without spreadsheets or one-off Stripe metadata

Usage-to-Revenue Analytics: Moderate (Medium confidence)

See which customers are approaching limits, likely to upgrade, or at risk of churn based on real usage patterns

Developer Experience & SDK Quality: Strong (High confidence)

Fast, well-documented SDKs and APIs with sub-50ms entitlement checks and minimal integration footprint

Revenue Recognition & Finance Workflows: Weak (Low confidence)

ASC 606 revenue recognition, deferred revenue schedules, and audit-ready reports my finance team can rely on

Open-Source / Self-Hosted Deployment: Absent (Medium confidence)

Run the monetization layer on our own infrastructure with full source access and data sovereignty

Prioritization Question: The audit tests all 12 capabilities, but the head-to-head differentiation queries emphasize 3. Which 3 of the 6 strong-rated features best represent where Schematic wins deals?

Runtime Entitlement Enforcement — the category-defining wedge
No-Code Pricing & Packaging Configuration — PM/CEO-facing velocity story
Stripe-Native Monetization Layer — the "don't rip out Stripe" value prop
Drop-In Billing Components & Customer Portal — engineering-velocity surface
Monetization-Aware Feature Flags — the LaunchDarkly-adjacent wedge
Developer Experience & SDK Quality — latency and DX angle

Picking the top 3 concentrates roughly 30–40% of the capability query set on your strongest ground. Defaults without your input would be Runtime Entitlement Enforcement + No-Code Pricing Config + Stripe-Native — but if the true wedge in closed deals is Billing Components or Smart Flags, the weighting is wrong.

→ Validate Features: Three checks: (1) Are the strength ratings accurate against specific competitors — e.g., is Runtime Entitlement Enforcement genuinely stronger than Stigg's equivalent, or is it parity? (2) Revenue Recognition & Finance Workflows is rated "weak" on llm_inference / low confidence — is that an accurate absence, or is there roadmap that should bump it to "moderate"? Same question for Open-Source / Self-Hosted Deployment marked "absent." (3) Usage Metering, Credits & Hybrid Pricing is rated "moderate" — given the AI-credits pain point is high-severity across three personas, should this be "strong"? A strength upgrade here shifts AI-monetization queries into the differentiation set.

Pain Points

What Your Buyers Are Frustrated About

11 pain points: 7 high severity, 4 medium severity. Buyer language is how queries get phrased — the audit tests these frustrations verbatim against AI platforms to see whether Schematic gets cited as a solution.

Launching a new plan or pricing tier takes a full quarter: High severity (High confidence)

"It takes us a full quarter to launch a new pricing tier — by the time it ships the market has moved"
Personas: Head of Product, CEO, Growth Lead

Entitlement and gating logic is scattered across services: High severity (High confidence)

"Our paywall logic is duct-taped across four services and everyone is scared to touch it"
Personas: VP Engineering, Staff Full-Stack Engineer

Engineering sprint capacity drained by billing code: High severity (High confidence)

"I have two full-time engineers babysitting our homegrown billing stack instead of building product"
Personas: VP Engineering, CEO, Head of Product

Enterprise overrides tracked in spreadsheets: High severity (High confidence)

"Every enterprise deal turns into a custom spreadsheet of overrides that nobody trusts"
Personas: Director FP&A, Growth Lead, CEO

AI products need credit/metered pricing existing stacks can't handle: High severity (High confidence)

"We want to ship an AI product on credits this quarter and there's no way our billing stack can handle it"
Personas: Head of Product, VP Engineering, CEO

Revenue leakage from manual proration and mis-synced entitlements: High severity (Medium confidence)

"We keep discovering customers who are using features they shouldn't be — that's real revenue walking out the door"
Personas: Director FP&A, Growth Lead

Stripe alone doesn't gate features or model complex plans: High severity (High confidence)

"Stripe is great at charging cards but it has no idea what a 'plan' means inside our product"
Personas: VP Engineering, Head of Product, Staff Full-Stack Engineer

Customers can't self-serve — support drowns in billing tickets: Medium severity (High confidence)

"Support is drowning in 'why did I get charged' tickets because customers have no visibility into their plan"
Personas: Growth Lead, Head of Product

Teams avoid changing pricing because grandfathering is a nightmare: Medium severity (High confidence)

"We know our pricing is wrong but we're too scared to change it — grandfathering is a nightmare"
Personas: Head of Product, CEO, Director FP&A

Enterprise deals lost to slower-than-sales-cycle custom pricing: Medium severity (Medium confidence)

"We lost a six-figure deal last quarter because we couldn't spin up a custom plan in time"
Personas: Growth Lead, Director FP&A, CEO

Expansion signals are invisible until customers churn: Medium severity (Medium confidence)

"By the time we notice a customer is slamming into their limit, they've already churned or opened a ticket"
Personas: Growth Lead, Head of Product, Director FP&A

→ Validate Pain Points: Three angles: (1) Severity accuracy — "revenue leakage from manual proration" is rated high but on medium-confidence review-mining; at your current customer scale, is leakage viscerally felt, or aspirational? (2) Buyer language authenticity — the "duct-taped across four services" phrasing comes from review mining; does it match what prospects actually say in discovery calls, or is the phrasing too folksy? (3) Missing pains — three pains often appear in this category but aren't in this set: billing data residency / regional compliance (EU customers wanting data in-region), webhook reliability for downstream revenue systems (NetSuite, Salesforce CPQ sync), and developer velocity tax from waiting on finance to approve pricing changes. Do any of these show up in your discovery calls?

Site Findings

Technical Baseline from Layer 1

46 pages analyzed across schematichq.com. All major AI crawlers are allowed in robots.txt, but two high-severity items on the homepage path and case study surface warrant engineering attention before the audit measures citation visibility.

Engineering Start Now: No critical blockers — robots.txt explicitly allows GPTBot, ClaudeBot, PerplexityBot, Google-Extended, Googlebot, and Bytespider, so crawlers can reach the site. But the homepage "Docs" nav link returns a 404 on docs.schematichq.com/introduction (the live path is /overview), and the Automox case study is 608 days old with a freshness score of 0.0. Fixing these is a <1-day engineering task (the 404) plus a 1–3 day content refresh (the case study) — neither requires client input. Separately, 13 of 15 scored product pages are >180 days old per sitemap lastmod; republishing /pricing and /developers to refresh lastmod is another same-day task that doesn't wait on the validation call.
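The 180-day staleness screen applied to the product pages can be reproduced directly from sitemap lastmod values. The paths and dates below are hypothetical placeholders, not the audit's actual data:

```python
from datetime import date

# Hypothetical sitemap <lastmod> dates — illustrative placeholders,
# not the contents of schematichq.com's real sitemap.
AUDIT_DATE = date(2026, 4, 23)
LASTMOD = {
    "/pricing": date(2025, 7, 1),
    "/developers": date(2025, 5, 14),
    "/blog/usage-based-pricing": date(2026, 3, 30),
}

def stale_pages(lastmod: dict, today: date, max_age_days: int = 180):
    """Return paths whose sitemap <lastmod> is older than max_age_days."""
    return [p for p, d in lastmod.items() if (today - d).days > max_age_days]

print(stale_pages(LASTMOD, AUDIT_DATE))  # → ['/pricing', '/developers']
```

Republishing a page updates its lastmod, which is exactly the signal this screen (and freshness-weighted citation algorithms) reads.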

🟡 Homepage nav link to docs returns 404

What we found: The header navigation on https://schematichq.com/ links to https://docs.schematichq.com/introduction, which returns a "Page Not Found" response. The live docs entry point is actually https://docs.schematichq.com/overview. Every visitor — human or AI crawler — who follows the "Docs" link from the primary nav lands on an empty 404.

Why it matters: Documentation is the single highest-leverage surface for AI crawlers evaluating developer-oriented B2B products; AI Overviews and ChatGPT routinely cite documentation pages for "how does X work" and "does X support Y" queries. A 404 on the canonical nav path creates a negative crawl signal (dead link on the homepage), blocks the shortest path from landing-page traffic to citable docs content, and likely causes a measurable share of first-time developer visitors to bounce. For the VP of Engineering and Staff Full-Stack Engineer personas in the knowledge graph, the docs subdomain is a primary evaluation surface; a broken entry point reduces trust on first touch.

Business consequence: Queries like "Schematic docs," "does Schematic support usage-based pricing," or "how do Schematic entitlements work" send AI crawlers to a dead endpoint, so the docs content that should be cited isn't indexed through the canonical path — handing the citation opportunity to competitors whose docs are one hop away.

Recommended fix: Update the "Docs" link in the header navigation on the homepage (and any other marketing pages that reference /introduction) to point to https://docs.schematichq.com/overview. If /introduction was intentional, add a 301 redirect from /introduction to /overview on the docs subdomain. Either way, make sure the canonical entry page (/overview) is listed in the docs sitemap so AI crawlers discover it through the canonical path.

Impact: high Effort: < 1 day Owner: Engineering Affected: Homepage header nav + any marketing pages linking to docs.schematichq.com/introduction

🟡 High-value Automox case study is 20+ months old

What we found: The Automox case study (/case-studies/eliminating-commercial-debt-automox-schematic) carries a visible publication date of 08/23/2024, making it 608 days old as of the analysis date. The case study is otherwise strong — it cites concrete metrics (75% reduction in time-to-market, two engineers freed) and is one of only five customer proof points on the site. It is also the only case study in the inventory scoring 0.0 on freshness.

Why it matters: AI platforms concentrate 76.4% of citations on pages updated within the last 2–3 months (Ahrefs, analysis of the top 1,000 cited pages, 2024). Case studies compete for citations on "who uses Schematic" and "Schematic customers" queries; at 20+ months old, the Automox page is functionally invisible to freshness-weighted citation algorithms even though its content is substantive. It also signals poor content hygiene to the audit reader — a case study that old suggests the customer relationship isn't being tended.

Business consequence: Queries like "Schematic customer stories," "who uses Schematic," or "examples of runtime entitlement platforms in production" will tend to surface JourneyTMS or Plotly (the fresher case studies) and skip Automox entirely, leaving one of the strongest concrete-outcome proof points on the bench while competitors' freshly dated case studies get cited.

Recommended fix: Either (a) refresh the Automox case study with a 2026 update interview confirming the customer relationship and outcomes are still active, and re-date the page; or (b) retire the page and let the more recent JourneyTMS and Plotly case studies carry the weight. Option (a) is preferred if Automox is still an active customer — refreshed case studies consistently outperform new ones for branded "Schematic customers" queries.

Impact: high Effort: 1-3 days Owner: Content Affected: /case-studies/eliminating-commercial-debt-automox-schematic

🔵 Two flagship product pages are too thin to carry their capability narrative

What we found: Two of the five /products/* pages have visibly thin body content: /products/plans-entitlements (~300 body words, content_depth 0.45) and /products/revenue-insights (~280 body words, content_depth 0.40). Both are flagship capabilities per Schematic's own product taxonomy (Plans & Entitlements is the runtime enforcement wedge; Revenue Insights is the analytics layer). By contrast, competitor product pages (Stigg, Orb, Metronome) on equivalent capabilities routinely run 800-1,500 words with example code, architecture diagrams, and quantified performance claims.

Why it matters: Product pages are the canonical citation source for "how does Schematic do X" queries. When a product page is this thin, LLMs evaluating buyer queries fall back to mentions in blog posts or competitor comparison pages — which means Schematic's own narrative doesn't control the answer. Against Stigg specifically (the KG's closest competitor), a thin /products/plans-entitlements page concedes citation ground on the runtime-enforcement category where Schematic should be winning by default.

Business consequence: Runtime monetization queries like "how does Schematic do runtime entitlement enforcement" or "Schematic revenue analytics capabilities" may return summaries pulled from Stigg's or Orb's longer-form product pages rather than Schematic's own, meaning Schematic loses narrative control on queries about its own products.

Recommended fix: Expand /products/plans-entitlements and /products/revenue-insights to 800-1,200 words each. Structure: (1) capability definition, (2) 2-3 specific scenarios with code or config snippets, (3) one customer quote with quantified outcome, (4) comparison callout vs. the DIY alternative, (5) links to deeper glossary pages in /pricing-resources/. Target content_depth ≥ 0.7. Add H2s that match high-intent buyer queries.

Impact: medium Effort: 1-2 weeks Owner: Marketing Affected: /products/plans-entitlements and /products/revenue-insights

🔵 Homepage and /ai landing render multiple H1 elements

What we found: The rendered markdown of the homepage (https://schematichq.com/) exposes 10 distinct H1-level headings (including "Ship any pricing model.", "The feature flag, reinvented", "Schematic makes Stripe 10x better", and seven others). The /ai landing page exposes 5 H1s in the rendered output. In both cases, subsequent sections use H2 as the next heading level, but the multiple top-level headings break the single-root hierarchy that both traditional SEO and passage-extraction algorithms expect.

Why it matters: LLMs use heading structure to segment pages into citable passages. When every section carries an H1, the page has no authoritative title-level anchor — every section competes for top weight and none wins. This typically manifests as shallower citation (AI platforms excerpt a random block rather than the key positioning statement). Webflow and similar block-based builders commonly produce this pattern; it is usually fixable without a visual redesign.

Business consequence: Branded queries like "what is Schematic" or "how does Schematic work" may surface a random section heading ("Schematic makes Stripe 10x better") rather than the intended positioning statement ("Ship any pricing model."), so the phrasing that gets cited isn't the one Schematic chose as the canonical value prop.

Recommended fix: On the homepage and /ai, demote all section-level headings from H1 to H2, reserving a single H1 for the page's top-level positioning statement (e.g., "Ship any pricing model." on the homepage, "Monetize AI with credits" on /ai). In Webflow (or whichever block-based builder is in use), change the heading element type per section without altering visual styling — CSS can continue to render section titles at H1 size. Verify with view-source or Screaming Frog after the change.
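A quick way to verify the fix across all swept pages is to count H1 elements in each page's raw HTML. The sketch below uses only the Python standard library; the sample markup is illustrative, not pulled from the live site.

```python
# Sketch: count <h1> elements in raw HTML and flag pages that break the
# single-H1 convention. In practice, fetch each URL's raw HTML first.
from html.parser import HTMLParser

class H1Counter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.h1_count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.h1_count += 1

def count_h1s(html: str) -> int:
    parser = H1Counter()
    parser.feed(html)
    return parser.h1_count

sample = """
<main>
  <h1>Ship any pricing model.</h1>
  <section><h1>The feature flag, reinvented</h1></section>
  <section><h2>How it works</h2></section>
</main>
"""
print(count_h1s(sample))  # 2 — anything above 1 needs demotion to H2
```

Run against the rendered post-fix pages, every commercial URL should report exactly 1.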

Impact: medium Effort: 1-3 days Owner: Engineering Affected: Homepage + /ai, recommend sweeping /products/* and /clean-pricing, /experiment-safely

🔵 Pricing and developers pages have not been refreshed in 8+ months

What we found: The sitemap lastmod for /pricing, /developers, and /aup is 2025-08-12 — 254 days old as of analysis. For /pricing specifically, the page lists concrete dollar amounts ($0, $200/mo, $500/mo) and event caps (500K, 10M events/mo) that should reflect current commercial reality. /developers references SDK counts and performance claims that a developer buyer will sanity-check against the docs. Neither page has been touched by either content or engineering in over eight months per the sitemap.

Why it matters: Sitemap lastmod is one of the strongest signals AI crawlers use to prioritize which pages to re-fetch. A stale lastmod on a commercial page as consequential as /pricing suggests either that the pricing hasn't been reviewed (a real risk for a seed-stage product still dialing in unit economics) or that the site's content ops process doesn't refresh lastmod when content is reviewed. Either way, the signal to crawlers is "this page is static," which reduces re-crawl frequency and delays propagation when pricing actually changes.

Business consequence: Pricing queries like "how much does Schematic cost" or "Schematic pricing tiers" may return AI-generated summaries based on old indexed data even after Schematic updates its tiers, so prospects see stale price points in AI answers while competitors' fresher pricing pages get cited as the up-to-date reference.

Recommended fix: Institute a lightweight quarterly review of /pricing and /developers: confirm numbers are current, touch the page (even a trivial edit), and let the CMS bump lastmod. If the pages are genuinely current as of today, republish now to refresh lastmod to 2026-04 and restart the freshness clock. Longer-term, tie pricing page lastmod to any plan-catalog change deployed to app.schematichq.com.
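The quarterly review can be backed by a small script that parses the sitemap and flags anything past the freshness threshold. This is a sketch against an inline sample — the 180-day cutoff mirrors the audit's own threshold, and the real input would be fetched from schematichq.com/sitemap.xml.

```python
# Sketch: parse a sitemap and flag pages whose <lastmod> exceeds a
# freshness threshold. Pages without a lastmod are skipped here and
# should be flagged for manual review instead.
import xml.etree.ElementTree as ET
from datetime import date

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def stale_pages(sitemap_xml: str, today: date, max_age_days: int = 180):
    root = ET.fromstring(sitemap_xml)
    stale = []
    for url in root.findall("sm:url", NS):
        loc = url.findtext("sm:loc", namespaces=NS)
        lastmod = url.findtext("sm:lastmod", namespaces=NS)
        if lastmod is None:
            continue  # no detectable date — handle separately
        age = (today - date.fromisoformat(lastmod[:10])).days
        if age > max_age_days:
            stale.append((loc, age))
    return stale

sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://schematichq.com/pricing</loc><lastmod>2025-08-12</lastmod></url>
  <url><loc>https://schematichq.com/blog/fresh</loc><lastmod>2026-04-01</lastmod></url>
</urlset>"""
print(stale_pages(sample, today=date(2026, 4, 23)))
# [('https://schematichq.com/pricing', 254)]
```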

Impact: medium Effort: < 1 day Owner: Marketing Affected: /pricing, /developers, /aup

Manual Verification Checklist

The following items could not be assessed through our analysis method (rendered markdown). We recommend your engineering team verify these manually before the validation call.

Structured data (JSON-LD) markup not assessable from rendered markdown

What to check: The analysis method used web_fetch, which returns rendered markdown and does not expose JSON-LD schema blocks. As a result, we cannot confirm whether Schematic's pages carry Product schema (on /products/*), Article schema (on /blog/*), FAQPage schema (on /pricing and any FAQ-bearing page), Organization schema (site-wide), or HowTo schema (on how-to oriented blog posts). This is a tooling limitation, not a confirmed absence.

Recommended action: Run your engineering team through a one-pass audit using Google's Rich Results Test (https://search.google.com/test/rich-results) on: homepage, /pricing, /products/plans-entitlements, one blog post, one case study, and one /pricing-resources/* page. Flag any "no valid items detected" results. For pages without schema, add the minimum applicable schema type. For blog posts, ensure Article schema includes datePublished, dateModified, and author.
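For blog posts, the minimum Article schema looks roughly like the block below — the field values are illustrative, not taken from Schematic's pages — paired with a trivial check for the required date and author fields.

```python
# Sketch: a minimal Article JSON-LD payload (illustrative values) and a
# check that a schema block carries the fields the audit calls required.
import json

REQUIRED_ARTICLE_FIELDS = {"@type", "headline", "datePublished", "dateModified", "author"}

article_schema = json.dumps({
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example blog post title",
    "datePublished": "2026-04-23",
    "dateModified": "2026-04-23",
    "author": {"@type": "Organization", "name": "Schematic"},
})

def missing_fields(jsonld: str) -> set:
    """Return required Article fields absent from a JSON-LD string."""
    return REQUIRED_ARTICLE_FIELDS - set(json.loads(jsonld))

print(missing_fields(article_schema))  # set()
```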

Effort: 1-3 days Owner: Engineering

Meta descriptions and Open Graph tags not assessable from rendered markdown

What to check: web_fetch's rendered output does not expose <meta> tags, so meta description presence/quality, Open Graph (og:title, og:description, og:image) tags, and Twitter Card tags cannot be confirmed from this analysis. Page titles in the inventory were read from rendered content, not from <title> elements directly.

Recommended action: Use a tool like Screaming Frog (free under 500 URLs) or Ahrefs Site Audit to pull a site-wide report of meta descriptions and OG tags. Confirm every commercial page (homepage, /pricing, /products/*, /ai, /stripe, top case studies, top comparison blogs) has: unique meta description ≤ 155 chars, og:title, og:description, og:image (at correct aspect ratio), og:url, og:type. For blog posts, ensure Article-specific OG tags are populated.
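A stdlib-only spot check can stand in for Screaming Frog on a handful of pages: parse each page's raw HTML and diff the meta/OG tags found against the required set. The HTML and values below are placeholders.

```python
# Sketch: extract meta description and Open Graph tags from raw HTML and
# report which required tags are missing. Sample values are illustrative.
from html.parser import HTMLParser

class MetaTagParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        key = a.get("name") or a.get("property")  # name= for description, property= for OG
        if key:
            self.meta[key] = a.get("content", "")

def extract_meta(html: str) -> dict:
    p = MetaTagParser()
    p.feed(html)
    return p.meta

sample = (
    '<head>'
    '<meta name="description" content="Runtime entitlements for SaaS.">'
    '<meta property="og:title" content="Schematic">'
    '</head>'
)
tags = extract_meta(sample)
missing = {"description", "og:title", "og:description", "og:image"} - set(tags)
print(sorted(missing))  # ['og:description', 'og:image']
```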

Effort: 1-3 days Owner: Engineering

Client-side rendering status not directly assessable — partial verification suggested

What to check: web_fetch returned substantial body content for every page in the inventory (word counts ranging from ~250 to ~3,200), which is a positive signal that the pages are server-side rendered or pre-rendered and not dependent on JavaScript execution for content display. However, the analysis method cannot directly inspect the HTML source to confirm whether content is server-rendered or hydrated client-side, and cannot detect CSR-specific signals like framework divs, noscript tags, or content-to-markup ratio.

Recommended action: For high-value pages only (homepage, /pricing, /products/*, top comparison blogs), verify in a browser with JavaScript disabled. Any content that disappears is effectively invisible to AI crawlers and should be converted to SSR or pre-rendered. Alternatively, use the URL Inspection tool's live test in Search Console or the view-source technique (Ctrl-U, then search for specific body text) to confirm the content is present in the raw HTML.

Effort: < 1 day Owner: Engineering

Site Analysis Summary

Total pages analyzed 46
Commercially relevant pages 45
Avg heading hierarchy 0.87
Avg content depth 0.68
Freshness (weighted) 0.55 — content_marketing: 0.71, product_commercial: 0.23 (3 pages unscored)
Avg passage extractability 0.76
Schema coverage Unable to assess (46 pages unscored — verify manually)
Critical / High findings 0 critical, 2 high
Next Steps

What Happens Now

Why Now

GEO is a timing opportunity, and the window is narrower than it looks.

• AI search adoption is accelerating quarter over quarter — buyer discovery patterns are shifting faster than most GTM teams have noticed.
• Early citations compound: domains that AI platforms learn to trust now get cited more frequently as training data accumulates, which lifts subsequent citation rates.
• Competitors who establish GEO visibility first create a structural disadvantage for late movers — once a platform defaults to citing Stigg or Orb for runtime entitlement queries, displacing that citation is materially harder than owning it from the start.
• Runtime monetization and entitlements is still early-innings in GEO optimization — acting now means competing against inaction, not against entrenched strategies.

Once the KG inputs above are validated, the full audit measures citation visibility across buyer queries in the runtime monetization and entitlements space — including phrasings like "fastest way to ship usage-based pricing," "Schematic vs. Stigg for runtime entitlements," "how do AI companies handle credit billing," and "build vs. buy billing." You'll see exactly which queries return results that include competitors but not Schematic — and what specifically (product page depth, case study freshness, docs structure, comparison coverage) would need to change to appear in them. Fixing the Layer 1 findings now means the audit measures Schematic's citation baseline against a cleaner technical surface, not against a 404 or stale sitemap timestamps.

01

Validation Call

45–60 minutes walking through this document. Confirm or correct the KG inputs — personas, competitor tiers, feature strength ratings, pain point severity. Use the Pre-Call Checklist below to prepare.

02

Query Generation & Execution

We generate the buyer query set from validated KG inputs and run it across selected AI platforms — ChatGPT, Claude, Perplexity, and Google AI Overviews — capturing every response for analysis.

03

Full Audit Delivery

Visibility analysis across every query, competitive positioning breakdown, and a three-layer action plan — technical, content, and positioning — prioritized by which gaps actually cost Schematic citations.

Start Now — Engineering

Three Layer 1 fixes your team can ship before the validation call; none require KG validation answers:

Fix the homepage "Docs" nav 404 — point docs.schematichq.com/introduction to /overview (or add a redirect on the docs subdomain). <1 day.
Republish /pricing and /developers to refresh sitemap lastmod — restarts the freshness clock on two commercial-intent pages currently flagged as 254 days stale. <1 day.
Refresh or retire the Automox case study — at 608 days old with a 0.0 freshness score, it's functionally invisible to citation algorithms. 1–3 days if a refresh interview is possible; otherwise retire and let JourneyTMS/Plotly carry the weight.

These don't depend on the rest of the audit and will improve Schematic's baseline visibility before we even measure it. Robots.txt is confirmed open to all major AI crawlers, so no verification needed there.

Before the Call

Your Pre-Call Checklist

Two jobs before we meet. The questions on the left require your judgment — no one knows your business better than you. The engineering tasks on the right don't require the call at all.

Questions for You
Does the Staff Full-Stack Engineer persona actually show up in Schematic's deals?
If wrong: we drop 15–20 implementation-oriented queries and consolidate technical evaluation under the VP Engineering persona.
Which 3 of the 6 strong-rated features best represent where Schematic wins deals?
If wrong: ~30–40% of the capability query set is weighted against the wrong differentiators.
Do buyers actually evaluate Schematic vs. LaunchDarkly head-to-head, or is the overlap only conceptual?
If wrong: LaunchDarkly's query budget moves to a more consequential primary (likely Stigg or Orb expansion).
Does the audit run "Schematic vs. Stripe Billing" as head-to-head, or only "Schematic-on-Stripe vs. Stripe alone"?
If wrong: ~6–8 queries frame the competitive relationship incorrectly relative to how prospects evaluate it.
Is Schematic's category one buying conversation (SaaS entitlements) or two (SaaS + AI credit billing)?
If two: we add a dedicated AI-monetization query cluster; if one: category queries stay tighter.
Does the Director of FP&A hold real veto at seed-stage prospects, or does finance veto only kick in post-Series A?
If premature: demote to influencer and drop revenue-recognition queries from priority set.
At Schematic's target ACV, does the CEO personally sign off at all contract sizes or only enterprise?
If enterprise-only: CEO-targeted queries anchor to six-figure-deal framing rather than PLG discovery.
At what engineering org size does the VP stop personally evaluating Schematic and delegate to a staff engineer?
Changes the technical-depth balance between VP-targeted and IC-targeted queries.
Does Head of Product own the purchase decision, or does the CEO sign in practice?
If CEO signs: downweight PM-first discovery queries, lean into founder-aligned strategic queries.
At seed/Series A companies without a dedicated growth lead, does this role fold into PM or CEO?
If yes: remove growth as a standalone cluster and merge scope into PM and CEO sets.
Are Revenue Recognition (weak) and Open-Source/Self-Hosted (absent) actually absent, or is there roadmap?
Roadmap moves these to moderate and shifts competitive framing against Chargebee/Lago.
Should Usage Metering/Credits be upgraded from moderate to strong given the AI-credits pain is high-severity across 3 personas?
Strength upgrade pulls AI-monetization queries into the differentiation set.
Do Maxio, Paddle, Recurly, or Zuora actually appear in deals, or should they drop from the competitor set?
Dropping reduces secondary-tier noise and tightens category-awareness queries.
Are there competitors missing from this list (e.g., Flexprice, Togai, Kovr, RevenueCat) that show up in your deals?
Missing primary = missing 6–8 head-to-head query slots for a real competitor.
Is "revenue leakage from manual proration" a visceral pain at current customer scale, or aspirational?
Changes severity rating and whether leakage queries land in the priority set.
Does the reviewer-mined buyer language ("duct-taped across four services") match what prospects actually say in discovery?
Mismatch means the audit tests phrasing that real buyers don't use.
Are billing data residency, webhook reliability, or finance-approval delays pains your prospects raise that we've missed?
Missing pains = missing query phrasings buyers actually type.
Do CRO/Head of Revenue, Head of Platform, or DevRel personas belong in the set?
Missing persona = missing a dedicated query cluster for a real deal voice.
For Engineering — Start Now
Fix the homepage "Docs" nav 404 (docs.schematichq.com/introduction → /overview)
Highest-leverage single fix. <1 day. Eliminates a dead crawl path from the primary nav.
Republish /pricing and /developers to refresh sitemap lastmod
Both are 254 days stale. Restarts the freshness clock and signals to crawlers these pages are current.
Refresh or retire the Automox case study (608 days old, freshness 0.0)
1–3 days for a refresh interview; otherwise retire so the fresher JourneyTMS/Plotly studies carry the citation weight.
Demote multiple H1s on homepage and /ai to a single H1 + H2s
10 H1s on the homepage and 5 on /ai break passage-extraction hierarchy. 1–3 days; no visual redesign required.
Run Rich Results Test on homepage, /pricing, /products/plans-entitlements, a blog post, and a case study
Schema coverage is currently unverified across all 46 pages. 1–3 days to confirm presence and add missing types.
Screaming Frog pass for meta description + OG tag coverage on commercial pages
Meta/OG presence couldn't be assessed from rendered markdown. 1–3 days to catalog and fix gaps site-wide.
JS-disabled browser check on homepage, /pricing, /products/*, /ai, /stripe
Confirm no high-value content is client-side rendered. <1 day. Any content that disappears is invisible to GPTBot/ClaudeBot.
Alignment

We're Aligned On

This isn't a contract — it's a shared understanding. The audit runs against what's below. If something changes between now and the call, we adjust. The goal is to make sure we're asking the right questions for the right buyers against the right competitors.
Already Confirmed
Competitive set — 6 primary competitors (Stigg, Orb, Metronome, Stripe Billing, Chargebee, Lago) + 5 secondary (Zuora, Recurly, Maxio, Paddle, LaunchDarkly)
Persona set — 6 personas: 3 decision-makers (VP Engineering, CEO, Director FP&A), 1 evaluator (Head of Product), 2 influencers (Growth Lead, Staff Engineer)
Feature taxonomy — 12 capabilities with outside-in strength ratings (6 strong, 4 moderate, 1 weak, 1 absent)
Pain point set — 11 buyer frustrations with severity ratings (7 high, 4 medium)
Layer 1 technical audit — 8 findings logged (2 high, 3 medium, 3 verification required); engineering notified
Decided at the Call
Feature overweighting — pick the 3 of 6 strong-rated features that best represent where Schematic wins deals (determines ~30–40% of capability query weighting)
Staff Full-Stack Engineer persona — validate whether the LLM-inferred "deep-in-the-code reviewer" shows up in actual deals
SaaS-vs-AI category split — one query cluster or two (adds a dedicated AI-monetization cluster if split)
LaunchDarkly competitor tier — validate head-to-head evaluation or reassign query budget to a more consequential primary
Pain point prioritization — top 3 buyer problems to emphasize (severity × persona breadth defaults to engineering-sprint-drain, AI-credit-pricing gap, Stripe-alone-isn't-enough)
Persona corrections — Director of FP&A veto status, CEO signoff scope at target ACV, Growth Lead role existence at seed stage
Competitor tier adjustments — Maxio, Paddle, Recurly, Zuora confirm-or-drop; any missing primaries from the real deal pattern
Client
Date