Competitive intelligence for AI-mediated buying decisions. Where Vitally wins, where it loses, and a prioritized three-layer execution plan — built from 150 buyer queries across ChatGPT + Perplexity.
Vitally's visibility numbers look healthy on the surface — 50% overall visibility and a #3 SOV rank — but three compounding structural gaps explain why presence does not translate into recommendations, and why Vitally never wins a CRO query.
[Mechanism] The broken /customer-success-platforms hub page (returning 404) blocks hierarchical crawler discovery of all 5 Comparison pages — the site's highest-intent content — before any content quality issue even comes into play. Existing content is built for practitioner personas: feature descriptions, automation how-tos, and health score methodology pages serve CS team leads but do not answer the ROI, compliance, and cost-of-ownership questions that CROs ask when approving budgets. Three complete content categories are missing — expansion revenue intelligence, enterprise scalability and compliance positioning, and digital-touch customer segmentation and lifecycle management — and Gainsight wins those queries by default because Vitally has nothing to cite.
The 61.4% early-funnel invisibility rate across problem identification, solution exploration, and requirements building compounds the decision-maker gap: buyers who never encounter Vitally at the discovery stage never add it to their shortlist, regardless of Shortlisting-stage visibility strength. The 5pp platform delta (ChatGPT below Perplexity) reflects ChatGPT's higher bar for third-party authority, which penalizes the absence of external citations for ROI and compliance claims.
[Synthesis] L1 fixes must execute before L2 or L3 content goes live because two findings specifically gate downstream value: restoring the /customer-success-platforms hub makes the 5 Comparison pages discoverable through hierarchical crawling (without this fix, new Comparison content added in L3 NIO 006 may not be discovered), and adding sitemap lastmod timestamps signals content freshness to AI crawlers for all 68 new L3 pieces as they publish — without lastmod, new content competes with Vitally's existing stale pages on equal footing.
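The sitemap lastmod fix can be verified mechanically once it ships. The sketch below flags `<url>` entries missing a `<lastmod>` child; the sample sitemap fragment and URLs are invented for illustration, not Vitally's actual sitemap.

```python
# Sketch: flag sitemap <url> entries missing <lastmod>, so the freshness
# signal can be audited after the fix ships. SAMPLE is a made-up fragment.
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def urls_missing_lastmod(sitemap_xml: str) -> list:
    """Return <loc> values for <url> entries that lack a <lastmod> child."""
    root = ET.fromstring(sitemap_xml)
    missing = []
    for url in root.findall("sm:url", NS):
        loc = url.findtext("sm:loc", default="", namespaces=NS)
        if url.find("sm:lastmod", NS) is None:
            missing.append(loc)
    return missing

SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/a</loc><lastmod>2025-01-15</lastmod></url>
  <url><loc>https://example.com/b</loc></url>
</urlset>"""

print(urls_missing_lastmod(SAMPLE))
```

Run against the live sitemap, any URL this returns is still competing without a freshness signal.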
Where Vitally appears and where it doesn't — across personas, buying jobs, and platforms.
[TL;DR] Vitally is visible in 50% of buyer queries but wins only 10% — converting visibility into wins is the primary challenge (a 40pp gap). High-intent queries run higher, at 65.4% visibility.
Vitally is visible enough to be shortlisted — the problem is that 61.4% early-funnel invisibility (27/44 queries) starves the top of the funnel, and a 51pp vis-to-win gap means shortlist presence does not convert to recommendation, particularly for the CRO (0% win rate, 0/27 queries).
| Dimension | Combined | Platform Delta |
|---|---|---|
| All Queries | 50% | Perplexity +5 percentage points |
| By Persona | | |
| Chief Revenue Officer | 48.1% | ChatGPT +4 percentage points |
| Director of CS Operations | 36.7% | Perplexity +7 percentage points |
| Customer Success Team Lead | 64.5% | Perplexity +10 percentage points |
| Head of Customer Success | 46.7% | Perplexity +10 percentage points |
| VP of Customer Success | 53.1% | Perplexity +3 percentage points |
| By Buying Job | | |
| Artifact Creation | 16.7% | ChatGPT +8 percentage points |
| Comparison | 65.6% | Perplexity +6 percentage points |
| Consensus Creation | 23.1% | Perplexity +15 percentage points |
| Problem Identification | 15.4% | Perplexity +8 percentage points |
| Requirements Building | 60% | Perplexity +13 percentage points |
| Shortlisting | 88% | Perplexity +4 percentage points |
| Solution Exploration | 37.5% | Perplexity +6 percentage points |
| Validation | 41.7% | Even |
| Dimension | ChatGPT | Perplexity |
|---|---|---|
| All Queries | 37.3% | 42.7% |
| By Persona | | |
| Chief Revenue Officer | 40.7% | 37% |
| Director of CS Operations | 23.3% | 30% |
| Customer Success Team Lead | 48.4% | 58.1% |
| Head of Customer Success | 33.3% | 43.3% |
| VP of Customer Success | 40.6% | 43.8% |
| By Buying Job | | |
| Artifact Creation | 16.7% | 8.3% |
| Comparison | 56.2% | 62.5% |
| Consensus Creation | 7.7% | 23.1% |
| Problem Identification | 7.7% | 15.4% |
| Requirements Building | 40% | 53.3% |
| Shortlisting | 72% | 76% |
| Solution Exploration | 18.8% | 25% |
| Validation | 29.2% | 29.2% |
[Data] Overall visibility: 50% (75/150 queries). High-intent visibility: 65.4% (53/81). Shortlisting: 88% (22/25).
Early-funnel visibility: 38.6% (17/44 queries visible across problem identification, solution exploration, requirements building). Decision-maker win rate: 13.3% (4/30 visible queries, 59 total). Evaluator win rate: 24.4% (11/45 visible queries, 91 total).
Role gap: -11pp (decision-makers win 11pp less than evaluators). Vis-to-win gap (high-intent): 51pp.
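The gap figures above are simple percentage-point deltas between visibility and win rate. A minimal sketch of the arithmetic, using the report's own numbers:

```python
# Sketch of the percentage-point (pp) gap arithmetic used throughout
# this report. Inputs are the report's published counts.
def pp_gap(visible, total_vis, wins, total_win):
    """Visibility % minus win %, rounded to whole percentage points."""
    vis = 100 * visible / total_vis
    win = 100 * wins / total_win
    return round(vis - win)

# High-intent: 65.4% visible (53/81) vs 14.8% wins (12/81) -> 51pp
print(pp_gap(53, 81, 12, 81))
# Overall: 50% visible (75/150) vs 10% wins (15/150) -> 40pp
print(pp_gap(75, 150, 15, 150))
```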
[Synthesis] The visibility pattern reveals a buyer journey that starts weak and ends strong but never converts. Vitally is nearly absent at early-funnel discovery (38.6% visibility across 17/44 early-funnel queries), gains strength at Shortlisting (88%, 22/25), then fails to close at Validation and consensus stages. The 11pp decision-maker deficit is the mechanism: content built for practitioners (feature descriptions, automation how-tos) does not answer the questions CROs and VPs ask when writing purchase justifications.
Fixing early-funnel invisibility requires new content in the three problem-identification and solution-exploration categories where Vitally is structurally absent; fixing the decision-maker gap requires reframing existing pages from feature descriptions to business-case evidence.
34 queries won by named competitors · 8 no clear winner · 33 no vendor mentioned
Sorted by competitive damage — competitor-winning queries first.
| ID | Query | Persona | Stage | Winner |
|---|---|---|---|---|
| ⚑ Competitor Wins — 34 queries where a named competitor captures the buyer | ||||
| vit_036 | "What AI capabilities actually matter when evaluating CS platforms — account summaries, meeting prep, or risk alerts?" | VP of Customer Success | Requirements Building | Planhat |
| vit_038 | "Evaluation criteria for CS platforms focused on expansion revenue — what features help identify upsell-ready accounts?" | Head of Customer Success | Requirements Building | Gainsight |
| vit_066 | "customer success tools that integrate with Amplitude or Mixpanel for real-time feature adoption tracking" | Director of CS Operations | Shortlisting | Custify |
| vit_071 | "How does Gainsight compare to Totango for CS workflow automation at a 300-person company?" | VP of Customer Success | Comparison | Gainsight |
| vit_072 | "ChurnZero vs Planhat — which integrates better with Salesforce and product analytics tools?" | Director of CS Operations | Comparison | Planhat |
| vit_073 | "Gainsight vs Planhat for customer success — pros and cons for a mid-market SaaS company" | Head of Customer Success | Comparison | Planhat |
| vit_075 | "ChurnZero vs Totango for tracking customer product usage and predicting churn risk" | Customer Success Team Lead | Comparison | ChurnZero |
| vit_087 | "Planhat vs Catalyst for data integration — which handles complex multi-tool tech stacks better?" | Director of CS Operations | Comparison | Planhat |
| vit_088 | "Gainsight vs ChurnZero playbook automation — which is easier for a small CS team to manage without a dedicated admin?" | Head of Customer Success | Comparison | ChurnZero |
| vit_089 | "Compare Gainsight and Totango reporting dashboards — which is better for C-suite visibility into NRR and churn?" | Chief Revenue Officer | Comparison | Gainsight |
Remaining competitor wins: Gainsight ×8, ChurnZero ×7, Totango ×3, Planhat ×3, Catalyst ×2, Custify ×1. 8 queries with no clear winner. 33 queries with no vendor mentioned. Full query-level data available in the analysis export.
Queries where Vitally appears in the response but is not the primary recommendation.
| ID | Query | Persona | Buying Job | Winner | Vitally Position |
|---|---|---|---|---|---|
| vit_001 | "How do you know which customers are about to churn before it's too late?" | VP of Customer Success | Problem Identification | No Vendor Mentioned | Mentioned In List |
| vit_014 | "Build vs. buy for customer success — when should a SaaS company invest in a dedicated CS platform instead of building on top of Salesforce?" | Director of CS Operations | Solution Exploration | No Clear Winner | Mentioned In List |
| vit_015 | "What's the difference between a dedicated customer success platform and just using Salesforce with add-ons for CS workflows?" | VP of Customer Success | Solution Exploration | No Clear Winner | Mentioned In List |
| vit_016 | "How do customer health scoring models work in practice — what data do they need to be accurate?" | Customer Success Team Lead | Solution Exploration | No Vendor Mentioned | Mentioned In List |
| vit_020 | "How is AI being used in customer success platforms today — account summaries, next-best-action, or something else?" | Customer Success Team Lead | Solution Exploration | No Vendor Mentioned | Mentioned In List |
| vit_023 | "We've been running CS out of spreadsheets and Salesforce — at what point does a dedicated CS platform make sense?" | Head of Customer Success | Solution Exploration | No Vendor Mentioned | Mentioned In List |
| vit_028 | "What integrations matter most for a CS platform at a product-led SaaS company using Segment and Intercom?" | Customer Success Team Lead | Solution Exploration | No Clear Winner | Mentioned In List |
| vit_030 | "Key requirements for evaluating customer success platforms at a 200-person B2B SaaS company using Salesforce" | Director of CS Operations | Requirements Building | No Clear Winner | Mentioned In List |
| vit_031 | "What questions should I ask CS platform vendors about their health scoring methodology and data requirements?" | VP of Customer Success | Requirements Building | No Vendor Mentioned | Mentioned In List |
| vit_032 | "Must-have vs. nice-to-have features for a CS platform focused on automating onboarding, renewals, and risk escalations" | Customer Success Team Lead | Requirements Building | No Clear Winner | Mentioned In List |
| vit_033 | "What reporting capabilities should a customer success platform have for executive visibility into NRR and churn trends?" | Chief Revenue Officer | Requirements Building | No Clear Winner | Brief Mention |
| vit_035 | "Requirements checklist for customer success platforms that integrate with Segment, Mixpanel, and Salesforce" | Director of CS Operations | Requirements Building | No Vendor Mentioned | Mentioned In List |
| vit_037 | "We need a CS platform that collects NPS and triggers follow-ups automatically — what should be on our requirements list?" | Customer Success Team Lead | Requirements Building | No Clear Winner | Mentioned In List |
| vit_040 | "What customer segmentation, lifecycle management, and automation features should a CS platform have to manage 1,000+ accounts with a team of 8 CSMs?" | Director of CS Operations | Requirements Building | Custify | Mentioned In List |
| vit_042 | "We're replacing our homegrown health scoring with a CS platform — what capabilities should we prioritize?" | Head of Customer Success | Requirements Building | No Clear Winner | Mentioned In List |
| vit_045 | "Best customer success platforms for mid-market SaaS companies that need real-time churn prediction" | VP of Customer Success | Shortlisting | Gainsight | Mentioned In List |
| vit_047 | "We don't have clean NRR reporting today — which CS platforms have the best executive dashboards for retention metrics?" | Chief Revenue Officer | Shortlisting | Gainsight | Mentioned In List |
| vit_049 | "Customer success platforms with the best Salesforce and HubSpot integrations for mid-market companies" | Head of Customer Success | Shortlisting | Gainsight | Mentioned In List |
| vit_050 | "Which CS platforms have AI features that actually help CSMs manage larger books of business?" | VP of Customer Success | Shortlisting | Gainsight | Mentioned In List |
| vit_052 | "Top customer success platforms for B2B SaaS focused on driving net revenue retention and expansion pipeline" | Chief Revenue Officer | Shortlisting | Gainsight | Mentioned In List |
| vit_053 | "CS platforms that let you run different automated playbooks for different customer segments and tiers" | Customer Success Team Lead | Shortlisting | Gainsight | Strong 2nd |
| vit_054 | "looking for CS platform alternatives that are faster to set up than our current tool — mid-market SaaS, 300 employees" | Head of Customer Success | Shortlisting | No Clear Winner | Mentioned In List |
| vit_055 | "Customer success tools with built-in team collaboration and shared account notes instead of separate Slack channels" | VP of Customer Success | Shortlisting | No Clear Winner | Mentioned In List |
| vit_056 | "customer success platforms that handle complex account hierarchies and custom objects well for larger companies" | Director of CS Operations | Shortlisting | Gainsight | Mentioned In List |
| vit_057 | "Best CS platforms for giving leadership real-time visibility into customer health across the entire book of business" | Chief Revenue Officer | Shortlisting | No Clear Winner | Mentioned In List |
| vit_058 | "Which customer success platforms have built-in NPS surveys with automated follow-up actions based on scores?" | Customer Success Team Lead | Shortlisting | Totango | Mentioned In List |
| vit_059 | "switching from spreadsheet-based renewal tracking — best CS platforms for managing renewals and spotting upsell signals" | Head of Customer Success | Shortlisting | Velaris | Mentioned In List |
| vit_060 | "Top CS platforms besides Gainsight for mid-market teams that need strong workflow automation without the enterprise complexity" | VP of Customer Success | Shortlisting | No Clear Winner | Brief Mention |
| vit_062 | "Which CS platforms work best for scaling customer success at a fast-growing SaaS without doubling the team?" | Chief Revenue Officer | Shortlisting | No Clear Winner | Strong 2nd |
| vit_063 | "we need a better customer health scoring system than what we built internally — which CS platforms are best for predicting churn?" | Customer Success Team Lead | Shortlisting | Gainsight | Brief Mention |
| vit_064 | "Best AI-powered customer success platforms for mid-market B2B SaaS companies" | Head of Customer Success | Shortlisting | No Clear Winner | Mentioned In List |
| vit_067 | "Which customer success platforms have the strongest onboarding project management for fast time-to-value?" | Chief Revenue Officer | Shortlisting | No Clear Winner | Mentioned In List |
| vit_068 | "looking for a CS platform where all customer notes and context live in one place instead of scattered across Slack and Google Docs" | Customer Success Team Lead | Shortlisting | Gainsight | Mentioned In List |
| vit_069 | "customer success platforms for companies with 2,000+ accounts that need custom fields, advanced permissions, and API access" | Head of Customer Success | Shortlisting | Gainsight | Mentioned In List |
| vit_070 | "Gainsight vs ChurnZero — which has better customer health scoring for mid-market SaaS?" | Customer Success Team Lead | Comparison | Gainsight | Mentioned In List |
| vit_074 | "Totango vs Catalyst — which gives better executive visibility into retention and expansion metrics?" | Chief Revenue Officer | Comparison | Totango | Mentioned In List |
| vit_079 | "How do smaller CS platforms compare to Gainsight for managing renewals and identifying expansion opportunities?" | Chief Revenue Officer | Comparison | No Clear Winner | Mentioned In List |
| vit_080 | "Which customer success platforms have AI features that actually compete with Gainsight's AI capabilities?" | Customer Success Team Lead | Comparison | Gainsight | Mentioned In List |
| vit_081 | "Which CS platforms have better customer onboarding management than ChurnZero for mid-market SaaS?" | VP of Customer Success | Comparison | No Clear Winner | Mentioned In List |
| vit_083 | "Vitally vs Catalyst — which is better for a growing mid-market CS team focused on retention?" | Head of Customer Success | Comparison | No Clear Winner | Mentioned In List |
| vit_084 | "Is Gainsight worth the $50K+ price tag for a mid-market company or are there better options at a lower price point?" | Chief Revenue Officer | Comparison | Custify | Mentioned In List |
| vit_090 | "We're on Catalyst and not getting the product usage insights we need — how do other CS platforms handle this better?" | Customer Success Team Lead | Comparison | No Clear Winner | Mentioned In List |
| vit_091 | "Best alternatives to Planhat for renewal management and identifying expansion opportunities at a mid-market SaaS" | VP of Customer Success | Comparison | ChurnZero | Mentioned In List |
| vit_097 | "How do AI features compare across Gainsight, ChurnZero, and other top CS platforms in 2026?" | Director of CS Operations | Comparison | ChurnZero | Mentioned In List |
| vit_098 | "Planhat vs ChurnZero health scoring — which does a better job predicting churn for B2B SaaS customers?" | Head of Customer Success | Comparison | No Clear Winner | Brief Mention |
| vit_099 | "We're evaluating Catalyst alternatives after the Totango merger — what's better for expansion revenue tracking?" | Chief Revenue Officer | Comparison | No Clear Winner | Mentioned In List |
| vit_100 | "Switching from Totango — which CS platforms have better workflow automation for small teams?" | Customer Success Team Lead | Comparison | Totango | Strong 2nd |
| vit_107 | "Catalyst customer success platform reviews — what are the biggest complaints after the Totango merger?" | Customer Success Team Lead | Validation | No Clear Winner | Mentioned In List |
| vit_112 | "ChurnZero UI complaints — is the interface really as hard to navigate as the reviews say?" | Customer Success Team Lead | Validation | ChurnZero | Brief Mention |
| vit_113 | "Vitally reviews from customer success leaders — what are the honest pros and cons?" | VP of Customer Success | Validation | No Clear Winner | Strong 2nd |
| vit_115 | "Planhat renewal management weaknesses — what do customers actually complain about?" | Chief Revenue Officer | Validation | Planhat | Brief Mention |
| vit_117 | "Problems with Totango's customer segmentation and lifecycle management for mid-market SaaS — does it scale or get clunky?" | Customer Success Team Lead | Validation | Totango | Mentioned In List |
| vit_118 | "Can mid-market customer success platforms handle 3,000+ accounts without performance issues and slowdowns?" | VP of Customer Success | Validation | No Clear Winner | Mentioned In List |
| vit_120 | "Which CS platforms' built-in NPS surveys actually get good response rates from customers?" | Chief Revenue Officer | Validation | No Clear Winner | Mentioned In List |
| vit_123 | "What are the biggest implementation risks with Totango for a 200-person SaaS company with a small CS ops team?" | VP of Customer Success | Validation | No Clear Winner | Mentioned In List |
| vit_124 | "How good are CS platform analytics for board-level presentations — do any of them produce reports executives actually read?" | Head of Customer Success | Validation | No Clear Winner | Strong 2nd |
| vit_128 | "Case studies of mid-market SaaS companies that improved NRR after implementing a CS platform" | Head of Customer Success | Consensus Creation | No Clear Winner | Strong 2nd |
| vit_129 | "Typical payback period for customer success platforms like Gainsight, ChurnZero, or Vitally" | Chief Revenue Officer | Consensus Creation | No Clear Winner | Brief Mention |
| vit_144 | "Create a comparison matrix of Gainsight, ChurnZero, Totango, and Planhat for customer segmentation, lifecycle management, digital-touch automation, and scalability" | VP of Customer Success | Artifact Creation | Velaris | Mentioned In List |
| vit_150 | "Create a vendor evaluation scorecard for customer success platforms with criteria weighted for a mid-market company preparing to scale" | Head of Customer Success | Artifact Creation | Gainsight | Mentioned In List |
Who’s winning when Vitally isn’t — and who controls the narrative at each buying stage.
[TL;DR] Vitally wins 10% of queries (15/150), ranks #3 in SOV — H2H record: 20W–31L across 9 competitors.
Vitally holds its ground against smaller competitors (positive H2H vs Catalyst, Velaris, ClientSuccess, Custify) but loses consistently to Gainsight (2W-13L H2H) and ChurnZero (3W-8L H2H) on the queries that matter most — the Comparison and Shortlisting buying jobs where category leadership is decided.
| Company | Mentions | Share |
|---|---|---|
| Gainsight | 94 | 17.7% |
| ChurnZero | 87 | 16.4% |
| Vitally | 75 | 14.1% |
| Totango | 71 | 13.4% |
| Planhat | 65 | 12.2% |
| Custify | 41 | 7.7% |
| Velaris | 39 | 7.3% |
| Catalyst | 31 | 5.8% |
| ClientSuccess | 26 | 4.9% |
| SmartKarrot | 3 | 0.6% |
When Vitally and a competitor both appear in the same response, who gets the recommendation? One query with multiple competitors generates a matchup against each — so H2H totals will exceed the query count.
Win = Vitally was the primary recommendation (cross-platform majority). Loss = the competitor was the primary recommendation. Tie = neither won, or a third party did.
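The matchup expansion described above can be sketched in a few lines: each response where Vitally co-appears with N competitors yields N pairwise records, which is why H2H totals exceed the query count. The sample responses below are invented for illustration.

```python
# Sketch of the H2H tally: one response with multiple co-appearing
# competitors generates one matchup per competitor. Sample data is invented.
from collections import defaultdict

def h2h(responses):
    """responses: dicts with 'competitors' (co-appearing vendors) and
    'winner' (primary recommendation, or None for no clear winner)."""
    record = defaultdict(lambda: {"W": 0, "L": 0, "T": 0})
    for r in responses:
        for comp in r["competitors"]:
            if r["winner"] == "Vitally":
                record[comp]["W"] += 1
            elif r["winner"] == comp:
                record[comp]["L"] += 1
            else:
                record[comp]["T"] += 1  # neither, or a third party won
    return dict(record)

sample = [
    {"competitors": ["Gainsight", "ChurnZero"], "winner": "Gainsight"},
    {"competitors": ["Catalyst"], "winner": "Vitally"},
]
print(h2h(sample))
```

Note the first sample query produces two matchups (a loss to Gainsight and a tie against ChurnZero), so two queries yield three H2H records.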
Vitally is completely absent from the remaining 75 queries.
[Synthesis] SOV rank #3 and favorable H2H records against smaller competitors (Catalyst, Velaris, ClientSuccess, Custify) tell a different story from the win rate data — and the distinction matters. H2H records measure pairwise matchups when both vendors appear together; win rate measures query-level outcomes. Vitally wins co-appearing matchups against four competitors but loses most buyer queries (14.8% win rate, 12/81) because the queries that define category leaders — Comparison, Shortlisting, consensus creation — are dominated by Gainsight (2W-13L H2H) and ChurnZero (3W-8L).
Vitally's competitive strength is real in direct matchups but does not translate to category-level recommendation frequency. Closing the Gainsight gap requires the Comparison content architecture investment in NIO 006, not just product differentiation.
What AI reads and trusts in this category.
[TL;DR] Vitally has 50 unique pages cited across buyer queries, ranking #1 among all cited domains. 10 high-authority domains cite competitors but not Vitally.
Strong citation volume — #1 domain rank with 75 citation instances across 50 unique pages — masks the real gap: 10 third-party citation deficits in ROI benchmarks and compliance categories where ChatGPT reaches for Gainsight's analyst coverage instead of Vitally's owned content.
Note: Domain-level citation counts (above) tally instances per individual domain. Competitor-level counts (below) aggregate across all domains owned by a single vendor, which may include subdomains.
Non-competitor domains citing other vendors but not Vitally — off-domain authority opportunities.
These domains cited competitors but did not cite Vitally pages in the queries analyzed. This reflects citation patterns in AI responses, not overall platform presence.
[Synthesis] Vitally's #1 citation rank confirms strong on-domain content indexing — this is not a technical crawlability failure. The 10 third-party citation gaps identify where AI platforms, particularly ChatGPT, reach for external authority rather than vendor-owned pages: ROI benchmarks, compliance verification, analyst category rankings, and competitor review aggregations. These gaps map directly onto NIO 001 (expansion ROI content) and NIO 003 (enterprise compliance documentation), where third-party amplification — analyst reports, benchmark publications, Trust Center listings — is the specific investment needed to close the citation authority deficit alongside on-domain publishing.
Three layers of recommendations ranked by commercial impact and implementation speed.
[TL;DR] 80 priority recommendations (plus 12 near-rebuild optimizations) targeting 142 queries where Vitally is currently invisible. 5 L1 technical fixes + 2 verification checks, 67 content optimizations (L2), 6 new content initiatives (L3).
The 80 recommendations execute in a strict L1-first sequence: infrastructure fixes unblock Comparison page discovery and content freshness signaling before any new content goes live; then 67 L2 page remediations deepen existing pages; then the six L3 initiatives deliver 68 new content pieces that fill the structural gaps driving early-funnel invisibility and the CRO's zero win rate.
Reading the priority numbers: recommendations are ranked 1–80 across all three layers by commercial impact × implementation speed. Within each layer, items appear in priority order. Gaps in the sequence (e.g., L1 shows #1, then jumps to #49) mean the intervening priorities belong to a different layer.
Configuration and infrastructure changes. Owner: Engineering / DevOps. Timeline: Days to weeks.
| Priority | Finding | Impact | Timeline |
|---|---|---|---|
| #1 | Broken Pages Linked from Site Navigation | High | < 1 day |
| #49 | Competitor Comparison Pages Lack Visible Publication Dates | Medium | < 1 day |
| #50 | Multiple H1 Tags on Most Commercial Pages | Medium | 1-2 weeks |
| #51 | Schema Markup Cannot Be Verified — Manual Check Recommended | Medium | 1-3 days |
| #52 | Sitemap Lacks lastmod Timestamps on All 1,000+ URLs | Medium | 1-3 days |
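Finding #50 (multiple H1 tags) is straightforward to audit at scale with a parser pass. A minimal sketch, assuming raw page HTML is already fetched; the sample markup is invented:

```python
# Sketch: count <h1> elements per page to audit finding #50
# (multiple H1 tags on commercial pages). PAGE is illustrative markup.
from html.parser import HTMLParser

class H1Counter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.count += 1

def count_h1(html: str) -> int:
    parser = H1Counter()
    parser.feed(html)
    return parser.count

PAGE = "<html><body><h1>Hero</h1><section><h1>Features</h1></section></body></html>"
print(count_h1(PAGE))  # any page returning more than 1 needs remediation
```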
Items requiring manual review before determining if action is needed.
| Priority | Finding | Impact | Timeline |
|---|---|---|---|
| #75 | Client-Side Rendering Status Cannot Be Verified | Low | < 1 day |
| #76 | Meta Descriptions and OG Tags Cannot Be Verified — Manual Check Recommended | Low | < 1 day |
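The meta description / OG tag check (#76) can be scripted rather than done by hand. A sketch that scans a page's head for the standard `name="description"` and `property="og:*"` meta attributes; the sample head markup is invented:

```python
# Sketch: audit a page's <head> for meta description and Open Graph
# tags (finding #76). HEAD below is an invented sample, not a real page.
from html.parser import HTMLParser

class MetaAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.found = set()

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        if a.get("name") == "description":
            self.found.add("description")
        if a.get("property", "").startswith("og:"):
            self.found.add(a["property"])

def audit_meta(html: str) -> set:
    parser = MetaAudit()
    parser.feed(html)
    return parser.found

HEAD = ('<head><meta name="description" content="CS platform">'
        '<meta property="og:title" content="Vitally"></head>')
print(sorted(audit_meta(HEAD)))
```

Pages returning an empty set, or a set without "description", are the ones the manual check should prioritize.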
Existing pages that need restructuring or deepening. Owner: Content Team. Timeline: Weeks.
The /features/reporting page has zero content on ROI, payback periods, or retention impact benchmarks — a CRO building a business case for CS platform investment cannot extract any financial justification from it. Vitally wins 0% of CRO consensus-creation queries because no page provides the business-case evidence these queries require, and the reporting page is the highest-traffic candidate for that content.
Queries affected: vit_126
Vitally has no page anywhere on the site that provides TCO inputs for a CS platform evaluation — AI systems generating TCO models for this query type cannot reference Vitally because no cost framework content exists. The Artifact Creation buying job for the CRO persona is Vitally's lowest-win-rate combination, and the absence of TCO content is a direct contributor to the 0% CRO win rate.
Queries affected: vit_142
AI systems building a 'CS platform AI feature matrix' artifact cannot reference Vitally — the /product/ai page has no competitor data and no structured table format that AI can extract for a vendor evaluation matrix. This is the Artifact Creation buying job — the moment buyers build the comparison document they share with their team. Being excluded from this artifact means Vitally doesn't advance to the next stage of evaluation.
Queries affected: vit_149
Vitally has no onboarding playbook template on any page — AI systems generating onboarding playbook artifacts cannot cite Vitally as the source, because the template content does not exist. The Artifact Creation buying job for a VP-level persona is the highest-value citation opportunity for the /features/project-management page — being excluded means Vitally does not appear in the evaluation documents buyers share with their leadership.
Queries affected: vit_148
The /features/reporting page has no section addressing the top Gainsight reporting criticisms sourced from G2: rigid out-of-box dashboards with limited customization, data refresh lag making real-time decisions difficult, and custom report setup requiring admin support. Without a dated comparison section, AI platforms (especially ChatGPT) reach for Gainsight-published content when answering Validation queries about Gainsight's reporting limitations.
Queries affected: vit_110
The /features/project-management page has zero business outcome content — no retention benchmarks, no TTV improvement data, no named customer outcomes — so a CS Team Lead building an internal business case cannot extract any shareable evidence from it. Consensus-creation queries require data that leaders can share upward; without benchmark content, Vitally is structurally excluded from all consensus-creation citations for the onboarding feature area.
Queries affected: vit_132
The /features/health-scores page describes a product feature and contains no financial justification content — it cannot answer 'what is the business case for reducing churn through health scoring?' The /solution/cs-leader page covers CS leader positioning but also lacks CFO-level financial framing with cost-of-churn benchmarks, platform investment ranges, and payback period estimates
Queries affected: vit_127
The /features/health-scores page contains no quantified outcome data — no customer churn reduction percentages, no industry benchmarks, no before/after comparisons — making it impossible for AI platforms to cite Vitally when a buyer asks what churn reduction to expect. The page also has no segment-specific data (what churn reduction does a 200-person mid-market SaaS typically see?), which is the specific framing buyers use at the solution exploration stage.
Queries affected: vit_025
The /features/health-scores page contains no competitor capability data — buyers building evaluation matrices cannot find Gainsight, Planhat, or ChurnZero health scoring specifications to compare against Vitally. The page also lacks a downloadable or copy-pasteable comparison table format that serves Artifact Creation buyers.
Queries affected: vit_141
The /integrations page lists Vitally's integration capabilities but contains no competitor data — buyers building comparison scorecards for Gainsight, ChurnZero, and Totango cannot place Vitally in their scorecard from this page. No downloadable or web-accessible comparison scorecard template exists on Vitally's site for the integration, automation, and ease-of-use evaluation dimensions.
Queries affected: vit_140
The /features/automation page contains no NRR improvement metrics — buyers asking for proof that workflow automation improves retention cannot find quantified evidence. The case study section is disconnected from the feature pages: NRR outcome data buried in case study subpages is not extractable by AI platforms responding to automation-outcome queries.
Queries affected: vit_128
The /vitally-vs-Gainsight page does not address the 'small CS team without a dedicated admin' use case — the specific buyer context in this query. The page also lacks a 'ChurnZero vs Gainsight vs Vitally for admin-light playbook automation' three-way comparison that would intercept buyers comparing the two competitors.
Queries affected: vit_088
The /vitally-vs-Gainsight page covers Vitally vs Gainsight but does not address the Gainsight vs Totango comparison — a buyer deciding between those two never encounters Vitally through this page. The page lacks a 'considering Totango as well?' section that intercepts buyers with both Gainsight and Totango on their shortlist and introduces Vitally as a third option.
Queries affected: vit_071
The /vitally-vs-Totango page does not include a head-to-head automation capability comparison — buyers switching from Totango cannot find specific evidence that Vitally's automation is superior for small teams. The page also lacks customer testimonials from teams that switched from Totango to Vitally specifically for automation reasons.
Queries affected: vit_100
The /features/automation page does not include a 'Gainsight alternative for workflow automation' positioning section — buyers who have pre-rejected Gainsight for cost or complexity reasons cannot find Vitally as the recommended next option. The page also lacks mid-market framing: automation capability descriptions do not specify that they are designed for 50-500 person CS teams without a dedicated admin.
Queries affected: vit_060
The /product/ai page is a feature marketing page, not an evaluation guide — it tells buyers what Vitally AI does but not what they should require from any CS platform AI or how to distinguish strong AI implementations from weak ones. The Requirements Building buying job specifically needs a checklist or tiered-criteria format that AI platforms can extract for RFP-style responses — the current page has no such structure
Queries affected: vit_036
The /product/ai page has no productivity benchmark content — 'how much time does AI actually save per CSM per week' cannot be answered from this page, making Vitally invisible for consensus-creation queries that require shareable evidence. Gainsight wins this query specifically because it publishes time-savings claims with named customer examples — Vitally's absence from this content category is a direct cause of consensus-stage losses
Queries affected: vit_133
The /features/health-scores page has no customer voice content — no testimonials, G2 excerpts, or case study references from CS leaders who use Vitally's health scoring. The page does not acknowledge any limitations or trade-offs in Vitally's health scoring approach, which reduces its credibility for buyers doing honest Validation research
Queries affected: vit_113
The /features/project-management page has no implementation timeline content — buyers researching ChurnZero's implementation burden cannot find Vitally as an alternative, because Vitally has not published its own implementation timeline on any page. Validation-stage competitor queries are interception opportunities: buyers who encounter friction in their ChurnZero research are receptive to discovering Vitally as a faster-to-implement alternative
Queries affected: vit_108
Catalyst's implementation experience is not addressed on any Vitally page — buyers researching Catalyst implementation smoothness for mid-market teams cannot find Vitally as a Comparison point. Mid-market-specific implementation framing is absent — 'no dedicated CS ops admin required' is Vitally's key mid-market implementation differentiator but is not stated anywhere
Queries affected: vit_119
The /integration/salesforce page does not reference ChurnZero's documented Salesforce sync issues — buyers researching integration reliability by competitor are not finding Vitally as the reliable alternative. The page lacks data sync reliability specifications — sync frequency, error handling, and what happens when Salesforce records conflict with Vitally data
Queries affected: vit_103
The /features/health-scores page contains no Comparison against Gainsight's health scoring — buyers specifically comparing mid-market churn prediction options cannot find Vitally's positioning from this page. The page does not specify whether health scoring is real-time or batch-processed — this is a direct Shortlisting criterion that buyers explicitly ask about
Queries affected: vit_045
The /integrations page does not include a 'Vitally vs Gainsight for Salesforce and HubSpot integration' Comparison — the specific competitive question buyers at Shortlisting stage are asking. The /integration/salesforce and /integration/hubspot pages describe what Vitally's integrations do but do not compare integration depth, sync frequency, or bidirectional capability against Gainsight
Queries affected: vit_049
The /features/nps page does not provide the requirements checklist format that buyers at the Requirements Building stage need — AI cannot extract structured NPS evaluation criteria from the current feature-description content. No differentiation between mandatory NPS requirements and advanced capabilities — buyers building a requirements document need this tiering to prioritize their vendor evaluation
Queries affected: vit_037
The /features/nps page has no retention-outcome content — 'how automated NPS follow-up workflows improve retention' cannot be answered from this page with Vitally as the recommended platform. No named customer example connecting NPS automation to churn reduction — the Consensus Creation buying job specifically requires shareable, attributable evidence, which feature descriptions cannot satisfy
Queries affected: vit_136
The /features/reporting page does not describe how Vitally calculates and displays NRR — the specific metric CROs need for board reporting. The page lacks a 'board reporting' use case section showing the executive dashboard views that a CRO would present to the board
Queries affected: vit_003
The /features/reporting page does not explain what a CS platform's NRR tracking provides that spreadsheets cannot — the specific decision buyers are evaluating at Solution Exploration stage. The page lacks a 'before and after' Comparison showing a team's NRR reporting workflow in spreadsheets vs in Vitally
Queries affected: vit_019
The /features/health-scores page does not reference Planhat or position Vitally's health scoring methodology against Planhat's approach — buyers evaluating both cannot use this page for Comparison. The page lacks G2 review evidence about health scoring accuracy — a key Validation-stage concern that Planhat's pages address more directly
Queries affected: vit_105
The /features/nps page opens with feature descriptions ('send NPS surveys from Vitally') rather than validating the buyer's frustration: 'NPS scores sit in a spreadsheet and no one acts on them'. No specific mechanism explaining why NPS data goes unused — the buyer at Problem Identification stage needs to understand the root cause (lack of automated workflow routing) before they can evaluate a solution
Queries affected: vit_012
The /features/automation page does not address the 'Salesforce vs dedicated CS platform' decision — the most common alternative buyers are evaluating, particularly for automation workflows. The page has no content on the specific automation gaps in Salesforce for CS workflows (no health scoring triggers, no CS-specific playbook templates, no customer lifecycle automation native to the CRM)
Queries affected: vit_015
The /features/reporting page describes reporting features in product language ('build dashboards', 'export data') without showing what the executive-facing output looks like — no named metrics, no export format descriptions, no executive use case. No case study reference connecting Vitally's reporting to a board presentation outcome — ChatGPT has no named customer example to cite for executive reporting queries
Queries affected: vit_124
The /features/reporting page has no competitive positioning against Gainsight for executive dashboard quality — buyers Shortlisting for 'best executive dashboards for retention metrics' cannot find Vitally's position relative to Gainsight. The page lacks a screenshot or description of Vitally's executive retention dashboard showing the specific metrics CROs care about
Queries affected: vit_047
The /features/reporting page is not structured as a requirements answer document — CROs cannot find which Vitally features address each of their executive reporting requirements. The page lacks a 'requirements for executive CS reporting' checklist with Vitally's coverage of each requirement clearly marked
Queries affected: vit_033
The /features/health-scores page does not show or describe the leadership portfolio view — how a CRO or VP CS sees all account health scores across the entire book of business in a single dashboard. The page has no 'escalation to leadership' workflow description — how at-risk accounts surface automatically to leadership without requiring manual reporting
Queries affected: vit_057
The /product/ai page does not target mid-market B2B SaaS as the audience — 'mid-market' is not named, and the page describes AI features that apply equally to any company size, making it non-competitive for segment-specific Shortlisting queries. No contrast with Gainsight's admin-heavy AI configuration — buyers evaluating 'best AI-powered CS platforms for mid-market' need a reason to prefer Vitally over Gainsight beyond feature parity
Queries affected: vit_064
The /features/project-management page does not provide the requirements checklist format that buyers at the Requirements Building stage need — AI platforms cannot extract structured evaluation criteria from the current feature-description content. Volume-specific guidance ('at 100+ new customers per quarter') is absent — buyers managing high-volume onboarding need criteria specific to their scale, not generic feature lists
Queries affected: vit_034
The /features/project-management page opens with feature descriptions ('manage tasks, milestones, and playbooks') rather than validating the buyer's problem: 'onboarding quality varies by CSM and I cannot scale without fixing this'. No data point on the cost of inconsistent onboarding — the buyer at Problem Identification stage needs a reason to believe CS platform project management solves this, not a feature list
Queries affected: vit_006
The /product/ai page opens with Vitally AI feature descriptions rather than validating the category-level problem: 'can AI help CS teams manage more accounts without burning out?' — a buyer at Problem Identification stage who hasn't concluded AI is the solution cannot extract a direct answer. No specific capacity ratio or benchmark on the page — AI platforms have no citable number to anchor the 'how many more accounts' answer
Queries affected: vit_010
The /features/health-scores page is structured as a product showcase, not as an answer to vendor evaluation questions — buyers building a requirements list cannot find Vitally's position on key questions (configurable weighting? ML vs rule-based? integration with Salesforce health fields?). The page lacks a FAQ or evaluation criteria section that would make it citable for requirements-building queries
Queries affected: vit_031
The /features/project-management page does not name the volume tier ('50+ new customers per quarter') that defines this buyer's Shortlisting context — AI cannot match Vitally to volume-specific Shortlisting queries without this explicit qualifier. No competitive positioning statement naming Vitally against alternatives for high-volume onboarding — buyers Shortlisting need a reason to include Vitally over Totango or Catalyst
Queries affected: vit_051
The /features/health-scores page opens with product feature framing ('configure your health score') rather than buyer pain framing — an AI responding to 'how do I know which customers are about to churn' cannot extract a direct answer from this page. The /features/health-scores page lacks any data on lead time: how many days before a churn event does Vitally's health scoring surface a risk signal? This is the specific evidence buyers at Problem Identification stage need to evaluate a solution
Queries affected: vit_001
The /product/ai page buries the capacity outcome — the primary claim ('Vitally AI lets CSMs manage significantly more accounts without sacrificing relationship quality') is not stated in the hero, above the fold, or in any H2 heading that AI platforms would extract for Shortlisting queries. No specific capacity ratio is stated on the page — AI Shortlisting responses require a quantifiable claim to justify including Vitally as 'one of the platforms that helps CSMs manage larger books'
Queries affected: vit_050
The /features/automation page is a product showcase, not an evaluation framework — it cannot serve a buyer who is writing evaluation criteria for a vendor selection process. No page on Vitally's site presents a buyer-facing evaluation criteria template for the automation, onboarding, and renewal management combination
Queries affected: vit_143
The /integrations page does not address the 'build on Salesforce vs buy a dedicated CS platform' decision — the most common architectural decision CS teams face before entering the CS platform evaluation process. The /integration/salesforce page positions Vitally as a Salesforce integration but does not explain what Vitally provides that Salesforce + add-ons cannot replicate
Queries affected: vit_014
The /features/health-scores page has no content addressing why purpose-built CS platform health scoring outperforms homegrown solutions — buyers who built internally need to understand the specific gaps they are filling. The page does not cite any G2 reviews or third-party social proof that validates Vitally's health scoring accuracy or predictive power
Queries affected: vit_063
The /features/health-scores page does not explain what data a health score needs to be predictive — buyers migrating from spreadsheets need a concrete list of inputs (product usage events, CRM data, support tickets, NPS scores) to evaluate whether Vitally can ingest their data. The /features/health-scores page has no section addressing why spreadsheet-based health scoring breaks at scale — this is the buyer's current state and the page should acknowledge and address it
Queries affected: vit_011
The /features/health-scores page does not explain Vitally's health score methodology — whether signals are weighted manually, automatically, or via ML — leaving buyers unable to evaluate prediction accuracy. The page has no section on how scoring thresholds (red/yellow/green) are calibrated to a specific customer base, which is the key practical question buyers ask at Solution Exploration stage
Queries affected: vit_016
The /features/health-scores page assumes the buyer has no current health scoring solution — it does not address the specific evaluation concerns of teams migrating from a homegrown system (data migration, scoring parity, transition period). The page provides no migration path or implementation timeline information, which is the primary concern of buyers who have already built internal scoring systems
Queries affected: vit_042
The /features/automation page does not address the 'is it time to move to a CS platform?' question — buyers at this stage need a concrete threshold (e.g., '50+ accounts per CSM,' '5+ person CS team') to self-qualify. The page lacks a 'signs you've outgrown your current setup' section that validates the buyer's current pain and frames the decision timing
Queries affected: vit_023
The /vitally-vs-ChurnZero page does not reference ChurnZero's widely documented G2 review complaints about UI complexity and navigation difficulty — missing the specific Validation concern buyers have. The page has no interface complexity Comparison showing Vitally's onboarding time vs ChurnZero's learning curve for CSMs
Queries affected: vit_112
The /integrations page does not explain how Vitally ingests and combines data from HubSpot (CRM), Segment (product analytics), and Zendesk (support) into a unified customer profile — the specific multi-tool architecture this buyer is evaluating. No dedicated /integration/zendesk or /integration/segment landing pages exist — buyers cannot verify these specific integrations the same way they can verify Salesforce or HubSpot
Queries affected: vit_043
The /integration/salesforce page does not explicitly describe bidirectional sync capability — the buyer specifically wants to know if data flows both ways between Vitally and Salesforce. No Planhat sync reliability Comparison exists on any Vitally page — buyers comparing Planhat and Vitally on Salesforce integration cannot find Vitally's position
Queries affected: vit_122
The /integration/salesforce page does not address Totango's documented Salesforce/HubSpot sync reliability problems — buyers researching Totango alternatives for integration reliability cannot find Vitally as the answer. The page lacks a 'switching from Totango for integration reliability' framing that would intercept buyers who have already identified Totango's sync issues
Queries affected: vit_109
The /vitally-vs-Totango page does not reference G2 review data on Totango's specific automation limitations — the page makes comparative claims without citing the third-party evidence buyers use at Validation stage. The page does not address the specific automation struggles teams experience post-implementation with Totango (typically: complex trigger configuration, limited native integrations, admin-heavy playbook management)
Queries affected: vit_104
The /vitally-vs-Totango page does not address Totango's documented implementation risks for companies without a dedicated CS ops team — the specific buyer context in this query. The page lacks a Vitally implementation process description that would give buyers confidence in a lower-risk alternative
Queries affected: vit_123
The /features/nps page does not address the 'built-in CS platform NPS vs standalone NPS tool' decision — a buyer comparing these approaches cannot use this page to understand where Vitally fits in the decision landscape. Named alternatives (Delighted, Typeform, Qualtrics) are not mentioned — ChatGPT requires named alternatives in Comparison content to treat the Comparison as authoritative
Queries affected: vit_024
The /features/automation page leads with product capability descriptions rather than the buyer pain — CSMs spending 60-70% of their time on admin tasks rather than customer interaction. The page does not specify which CRM update tasks Vitally automates (contact updates, activity logging, health score pushes, renewal date tracking) — the specific information a frustrated CSM needs to evaluate fit
Queries affected: vit_013
The /integrations page opens with a list of available integrations rather than framing the buyer pain — a CS ops manager who needs to consolidate five tools cannot quickly determine from this page whether Vitally solves their specific data fragmentation problem. The page does not explain what a unified customer data view looks like after integrations are connected — the outcome state buyers are trying to reach
Queries affected: vit_002
The /features/automation page does not include implementation timeline data — buyers looking for faster alternatives to Gainsight or their current tool need a specific time-to-value claim (e.g., 'first automated playbook running in 2 weeks'). The page does not contrast Vitally's implementation speed against Gainsight's (known for 3-6 month implementations) — the most relevant competitive differentiator for this query type
Queries affected: vit_054
The /features/automation page lists every automation capability without distinguishing which are foundational vs advanced — buyers building requirements lists cannot determine what to prioritize from this page. The page does not frame automation features by use case (onboarding automation vs renewal automation vs risk escalation automation) — Requirements Building buyers organize their evaluation by use case, not feature category
Queries affected: vit_032
The /integrations page presents integrations as an undifferentiated catalog — buyers at Requirements Building stage cannot determine which integrations are critical for a 200-person SaaS running Salesforce vs other profiles. The page lacks a 'recommended integration stack for [company profile]' section that helps buyers build their integration requirements checklist
Queries affected: vit_030
The /features/project-management page does not frame Vitally's approach within the broader 'milestone vs playbook' solution landscape — a buyer comparing approaches cannot use this page to understand where Vitally fits in the decision. Solution Exploration queries require a Comparison framework, not a single-vendor feature description — Vitally misses citation opportunities by not addressing the 'which approach' question
Queries affected: vit_022
The /product/ai page does not provide the category taxonomy a buyer uses to evaluate AI in CS — 'account summaries, next-best-action, risk alerts, automated communications' need to be organized as distinct use-case categories with H3 headings, not scattered across marketing copy. Without a taxonomy structure, AI platforms cannot extract a clean answer to 'how is AI being used in CS platforms today' — Vitally is not cited because the page doesn't mirror the solution-exploration framing
Queries affected: vit_020
The /features/nps page has no response rate data — 'which CS platforms' NPS surveys actually get good response rates' cannot be answered with Vitally as the recommended platform because no response rate evidence exists on the page. No description of the delivery mechanism (in-app vs email, trigger logic, survey frequency management) that explains how Vitally achieves its response rates — the mechanism is as important as the number for CRO Validation
Queries affected: vit_120
Without a 'Vitally AI vs Gainsight AI' section, AI platforms answering 'which platforms compete with Gainsight's AI' cannot include Vitally — there is no Vitally-sourced Comparison content to cite. The /product/ai page's only Gainsight reference is a link to the Comparison page — a buyer researching AI capabilities specifically needs the Comparison on the AI product page, not a redirect
Queries affected: vit_080
Vitally has no multi-vendor AI Comparison content — AI platforms generating a '2026 AI feature Comparison across CS platforms' cannot include Vitally because there is no Vitally-sourced data for the Comparison matrix. Without a dated section ('Updated Q1 2026'), the /product/ai page also loses to ChatGPT's recency bias even for queries where Vitally's features are competitive
Queries affected: vit_097
The /features/project-management page's value proposition is framed in product language, not the business outcome language (time-to-value, first-year retention impact) that CRO Shortlisting queries require. No specific TTV improvement metric exists on the page — CRO Shortlisting queries for 'fastest TTV' need a citable outcome claim, not feature descriptions
Queries affected: vit_067
Net new content addressing visibility and positioning gaps. Owner: Content Strategy. Timeline: Months.
Expansion revenue is the commercial language of the CRO and VP CS — the executives who hold veto power over platform selection. Vitally appears in 53.8% (7/13) of Renewal & Expansion Management queries but wins 0% of those appearances, while Gainsight, ChurnZero, and Velaris claim the recommendation. The structural absence is a content hub that frames Vitally around 'how do we grow NRR' rather than 'how do we manage CS.' Winning 5 of these 13 queries adds coverage across every buying stage from problem identification through artifact creation — the full funnel for a CRO-driven deal. Gainsight and ChurnZero win by publishing ROI calculators, payback benchmarks, and upsell signal frameworks; Vitally can publish original data from its customer base to claim this space without product changes.
ChatGPT (high): ChatGPT citations favor third-party authoritative sources for ROI and benchmark claims. An original benchmark report with verifiable methodology gives ChatGPT the citable source needed to include Vitally in Consensus Creation and Artifact Creation responses. Perplexity (high): Perplexity rewards freshness signals and self-contained structured passages. A dedicated expansion hub with lastmod timestamps (currently missing site-wide per sitemap_missing_lastmod finding) and scannable data tables would make Vitally's expansion content highly extractable.
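The lastmod mechanism referenced above is easy to verify programmatically. A minimal sketch in Python, using an inline sample sitemap (the URLs are illustrative, not Vitally's actual sitemap), that flags <url> entries missing the <lastmod> freshness signal AI crawlers read:

```python
# Sketch: flag sitemap <url> entries that lack a <lastmod> freshness signal.
# The sample sitemap and URLs below are illustrative placeholders.
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def urls_missing_lastmod(sitemap_xml: str) -> list[str]:
    root = ET.fromstring(sitemap_xml)
    missing = []
    for url in root.findall("sm:url", NS):
        loc = url.findtext("sm:loc", default="", namespaces=NS)
        if url.find("sm:lastmod", NS) is None:
            missing.append(loc)
    return missing

sample = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/resources/expansion-benchmarks</loc>
    <lastmod>2026-01-15</lastmod>
  </url>
  <url>
    <loc>https://example.com/features/health-scores</loc>
  </url>
</urlset>"""

print(urls_missing_lastmod(sample))
# flags the health-scores URL as the entry with no lastmod
```

Run against the real sitemap, a non-empty result is the "sitemap_missing_lastmod" condition; per the Sitemaps protocol, lastmod is an optional per-URL element, which is why it is so often omitted site-wide.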
Product-led SaaS companies evaluate CS platforms primarily on how well they ingest product usage data from Amplitude, Mixpanel, and Segment alongside CRM data. Vitally's /integrations page lists connections but provides no dedicated landing pages for these analytics tools — the exact content format AI platforms need to cite Vitally. Custify wins vit_066 ('customer success tools that integrate with Amplitude or Mixpanel for real-time feature adoption tracking') by default because it has named integration pages; Vitally cannot be cited when its integration depth is undetectable from page structure. This gap directly affects the Director of CS Operations — the technical evaluator who vetoes platforms that cannot ingest the existing data stack.
ChatGPT (high): ChatGPT needs named, structured pages it can cite. When a buyer asks which CS platforms integrate with Amplitude, ChatGPT searches for pages explicitly named /integration/amplitude — a generic integrations index page is too broad to cite confidently for a specific integration query. Perplexity (high): Perplexity rewards self-contained integration specification pages with structured data. A page stating specific data fields Vitally syncs from Amplitude in real time is more extractable than a generic integrations index page.
The highest-value commercial moment in the CS platform category is when a mid-market CRO asks 'Is Gainsight worth $50K or is there a better option at our scale?' — query vit_084, won by Custify while Vitally is absent. These 14 queries share a common buyer need: enterprise-grade capability evidence without enterprise complexity or pricing. Vitally lacks any content on handling 2,000+ account hierarchies, SOC 2 Type II compliance, custom objects, API access depth, or Gainsight cost-of-ownership comparisons — the exact vocabulary buyers use when they want Gainsight's capabilities without Gainsight's cost. Every one of these 14 queries currently routes buyers to Gainsight, Totango, or Custify rather than positioning Vitally as the rational mid-market alternative.
ChatGPT (medium): ChatGPT requires verified third-party signals for security and compliance claims. A Trust Center linked from the /security page plus G2 reviews mentioning SOC 2 compliance gives ChatGPT the corroborating sources needed to cite Vitally in compliance-related queries. Perplexity (high): Perplexity will extract structured compliance specification tables directly from on-domain pages. A /security page with a scannable compliance checklist (SOC 2: Yes, GDPR DPA: Yes, SSO: Yes) is highly receptive to Perplexity extraction and would directly win vit_039 and vit_145.
The scaling question — how to grow from 5 to 15 CSMs and manage 5x the accounts without proportional headcount growth — is among the most commercially urgent problems mid-market CS leaders face. Vitally's Customer Segmentation & Lifecycle Management and automation capabilities directly address this, but no content exists that frames this use case in the vocabulary buyers search for: 'digital-touch,' 'tiered playbooks,' '1,000+ account management.' Gainsight wins vit_053 and Custify wins vit_040 despite Vitally's competitive feature strength. The root cause is a missing content type — buyers need blog content, use case pages, and methodology guides for scale-without-headcount queries, but Vitally only has product feature pages for this topic.
ChatGPT (medium): ChatGPT answers digital-touch CS queries by drawing from CS practitioner content and vendor guides. A Vitally-authored digital-touch guide needs third-party citations or customer data points to rank over Gainsight's established content in this methodology space. Perplexity (high): Perplexity strongly favors freshly dated, structured methodology content. A /resources/digital-touch-cs page with clean heading hierarchy and a lastmod timestamp extracts cleanly for definitional and how-to queries in this cluster.
When a CS team lead or Director of CS Ops asks 'how do we prevent losing all customer context when a CSM leaves?' they are describing a pain with direct revenue consequences: account risk during transitions, slower ramp for replacement CSMs, and customer trust erosion. Vitally's notes and collaboration features directly address this, but existing /features/ pages describe what the features do rather than why a company without them bleeds revenue during CSM turnover. The 7 queries in this cluster span the full buying journey, suggesting buyers are actively searching for this solution — Vitally simply is not in the frame when they find it. ChurnZero wins vit_121 by framing collaboration features around the knowledge retention use case specifically.
ChatGPT (medium): ChatGPT answers knowledge management queries by citing vendor-neutral guides and then naming specific platform solutions. A Vitally-authored CSM knowledge continuity guide needs to rank for the generic question before ChatGPT will cite it in vendor-specific responses. Perplexity (high): Perplexity extracts self-contained answers well. A page with a clear H2 'How Vitally prevents customer knowledge loss when CSMs leave' followed by a structured workflow description extracts directly as an answer to the Problem Identification queries in this cluster.
Comparison is the highest-intent buying job — buyers who ask Comparison questions have a shortlist and are choosing. Vitally's 5 existing competitor Comparison pages cover Vitally-vs-[competitor] matchups but fail to intercept two adjacent query types: competitor-vs-competitor queries where buyers are choosing between Gainsight and Planhat (Vitally never enters the frame), and feature-level Comparison queries where Vitally's feature pages lack the Comparison-table structure AI platforms need in order to extract and cite Vitally as an answer. The routing engine's affinity override — 'Comparison buying_job requires Comparison page types, found feature/product' — is the technical signal for this structural deficit. Fixing the Comparison content architecture protects the highest-intent buying stage and is the highest-leverage single investment in this audit.
ChatGPT (high): ChatGPT cites named Comparison pages directly for vendor evaluation queries. The broken /customer-success-platforms hub (L1 finding: broken_navigation_links) prevents ChatGPT from discovering the Comparison section through hierarchical crawling — restoring the hub page is the prerequisite for Comparison content to be discoverable. Perplexity (high): Perplexity extracts Comparison tables directly from pages. Existing Comparison pages lack visible publication dates (L1 finding: comparison_pages_undated) and carry 8+ H1 tags (L1 finding: multiple_h1_tags) — both structural issues reduce Perplexity extractability. Fixing those two L1 issues would immediately improve Perplexity performance for this cluster without any new content creation.
All recommendations across all three layers, ranked by commercial impact × implementation speed.
Two URLs linked from the site footer navigation return 404 errors: the 'Why Vitally' competitive hub page at /customer-success-platforms and the 'vs. CRM' Comparison page at /customer-success-platforms/vitally-vs-crm. These are publicly indexed URLs that AI crawlers will encounter and fail to process.
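A routine link check would surface this class of failure before AI crawlers hit it. The sketch below injects the status lookup as a stand-in for a live HTTP HEAD request so the example stays offline; the hard-coded statuses mirror the audit finding, not a live crawl:

```python
# Sketch: flag footer navigation URLs that resolve to an error status.
# fetch_status is injected so the sketch stays offline; in practice it
# would issue an HTTP HEAD request per URL.
def broken_links(urls, fetch_status):
    return [u for u in urls if fetch_status(u) >= 400]

footer_urls = [
    "/customer-success-platforms",
    "/customer-success-platforms/vitally-vs-crm",
    "/integrations",
]

# Stand-in for live responses, mirroring the audit finding (two 404s).
statuses = {
    "/customer-success-platforms": 404,
    "/customer-success-platforms/vitally-vs-crm": 404,
    "/integrations": 200,
}

print(broken_links(footer_urls, statuses.get))
# lists the two /customer-success-platforms URLs as broken
```

Because the hub URL sits in site-wide footer navigation, every crawl of every page rediscovers the same dead end, which is why restoring it gates the Comparison cluster's discoverability.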
The /features/reporting page has zero content on ROI, payback periods, or retention impact benchmarks — a CRO building a business case for CS platform investment cannot extract any financial justification from this page
Vitally has no page anywhere on the site that provides TCO inputs for a CS platform evaluation — AI systems generating TCO models for this query type cannot reference Vitally because no cost framework content exists
15 high-intent Comparison and Shortlisting queries route to L3 because Vitally's pages use product/feature content types where the Comparison buying job requires Comparison-format content. 6 queries ask about competitor-vs-competitor matchups where Vitally has no page or presence at all. Comparison is the highest-visibility buying job at 65.6% (21/32 queries) — these 15 losses directly compound an already weak conversion rate at the most commercially critical stage.
AI systems building a 'CS platform AI feature matrix' artifact cannot reference Vitally — the /product/ai page has no competitor data and no structured table format that AI can extract for a vendor evaluation matrix
Vitally has no onboarding playbook template on any page — AI systems generating onboarding playbook artifacts cannot reference Vitally as the source, because the template content does not exist
Vitally's Enterprise Scalability & Advanced Customization feature shows 45.5% visibility (5/11 queries) and 0% wins (0/5 visible). Of 14 L3 queries, 8 carry 'missing' coverage status — Vitally has no content on complex account hierarchies, SOC 2/GDPR compliance, custom objects, or Gainsight pricing comparisons. When a mid-market CRO asks whether Gainsight's $50K+ price tag is justified, Vitally is absent from the response.
Vitally achieves 0% wins across all 7 visible Renewal & Expansion Management queries (0/7 visible queries won) despite 53.8% visibility (7/13 total queries). No dedicated content hub exists for expansion revenue identification, upsell signal methodology, or NRR benchmarking — the commercial vocabulary CROs use at budget decision time.
The /features/reporting page has no section addressing the top Gainsight reporting criticisms sourced from G2: rigid out-of-box dashboards with limited customization, data refresh lag making real-time decisions difficult, and custom report setup requiring admin support
The /features/project-management page has zero business outcome content — no retention benchmarks, no TTV improvement data, no named customer outcomes. A CS Team Lead building an internal business case cannot extract any shareable evidence from this page
Vitally's Customer Segmentation & Lifecycle Management feature shows 63.6% visibility (7/11 queries) but only 28.6% wins (2/7 visible queries). Of 9 L3 queries, 6 carry 'missing' coverage status — no content exists on tiered playbook management, digital-touch CS methodology, or managing 1,000+ accounts with a small team. Gainsight and Custify win these queries by publishing digital-touch CS frameworks that Vitally has not created.
The /features/health-scores page describes a product feature and contains no financial justification content — it cannot answer 'what is the business case for reducing churn through health scoring?'
The /features/health-scores page contains no quantified outcome data — no customer churn reduction percentages, no industry benchmarks, no before/after comparisons — making it impossible for AI platforms to cite Vitally when a buyer asks what churn reduction to expect
The /features/health-scores page contains no competitor capability data — buyers building evaluation matrices cannot find Gainsight, Planhat, or ChurnZero health scoring specifications to compare against Vitally
The /integrations page lists Vitally's integration capabilities but contains no competitor data — buyers building Comparison scorecards for Gainsight, ChurnZero, and Totango cannot find Vitally's position in their scorecard from this page
The /features/automation page contains no NRR improvement metrics — buyers asking for proof that workflow automation improves retention cannot find quantified evidence
Vitally's Product Usage & Adoption Tracking feature shows 27.3% visibility (3/11 queries) and 0% wins (0/3 visible). No dedicated integration landing pages exist for Amplitude, Mixpanel, or Segment — the analytics tools that product-led SaaS teams rely on — making Vitally uncitable when buyers search for CS platforms that connect to their analytics stack.
The /vitally-vs-Gainsight page does not address the 'small CS team without a dedicated admin' use case — the specific buyer context in this query
The /vitally-vs-Gainsight page covers Vitally vs Gainsight but does not address the Gainsight vs Totango Comparison question — a buyer deciding between these two never encounters Vitally through this page
The /vitally-vs-Totango page does not include a head-to-head automation capability Comparison — buyers switching from Totango cannot find specific evidence that Vitally's automation is superior for small teams
The /features/automation page does not include a 'Gainsight alternative for workflow automation' positioning section — buyers who have pre-rejected Gainsight for cost or complexity reasons cannot find Vitally as the recommended next option
The /product/ai page is a feature marketing page, not an evaluation guide — it tells buyers what Vitally AI does but not what they should require from any CS platform AI or how to distinguish strong AI implementations from weak ones
The /product/ai page has no productivity benchmark content — 'how much time does AI actually save per CSM per week' cannot be answered from this page, making Vitally invisible for consensus-creation queries that require shareable evidence
The /features/health-scores page has no customer voice content — no testimonials, G2 excerpts, or case study references from CS leaders who use Vitally's health scoring
The /features/project-management page has no implementation timeline content — buyers researching ChurnZero's implementation burden cannot find Vitally as an alternative, because Vitally has not published its own implementation timeline on any page
Catalyst's implementation experience is not addressed on any Vitally page — buyers researching Catalyst implementation smoothness for mid-market teams cannot find Vitally as a Comparison point
The /integration/salesforce page does not reference ChurnZero's documented Salesforce sync issues — buyers researching integration reliability by competitor are not finding Vitally as the reliable alternative
The /features/health-scores page contains no Comparison against Gainsight's health scoring — buyers specifically comparing mid-market churn prediction options cannot find Vitally's positioning from this page
The /integrations page does not include a 'Vitally vs Gainsight for Salesforce and HubSpot integration' Comparison — the specific competitive question buyers at Shortlisting stage are asking
The /features/nps page does not provide the requirements checklist format that buyers at the Requirements Building stage need — AI cannot extract structured NPS evaluation criteria from the current feature-description content
The /features/nps page has no retention-outcome content — 'how automated NPS follow-up workflows improve retention' cannot be answered from this page with Vitally as the recommended platform
The /features/reporting page does not describe how Vitally calculates and displays NRR — the specific metric CROs need for board reporting
The /features/reporting page does not explain what a CS platform's NRR tracking provides that spreadsheets cannot — the specific decision buyers are evaluating at Solution Exploration stage
The /features/health-scores page does not reference Planhat or position Vitally's health scoring methodology against Planhat's approach — buyers evaluating both cannot use this page for Comparison
The /features/nps page opens with feature descriptions ('send NPS surveys from Vitally') rather than validating the buyer's frustration: 'NPS scores sit in a spreadsheet and no one acts on them'
The /features/automation page does not address the 'Salesforce vs dedicated CS platform' decision — the most common alternative buyers are evaluating, particularly for automation workflows
The /features/reporting page describes reporting features in product language ('build dashboards', 'export data') without showing what the executive-facing output looks like — no named metrics, no export format descriptions, no executive use case
The /features/reporting page has no competitive positioning against Gainsight for executive dashboard quality — buyers Shortlisting for 'best executive dashboards for retention metrics' cannot find Vitally's position relative to Gainsight
The /features/reporting page is not structured as a requirements answer document — CROs cannot find which Vitally features address each of their executive reporting requirements
The /features/health-scores page does not show or describe the leadership portfolio view — how a CRO or VP CS sees all account health scores across the entire book of business in a single dashboard
The /product/ai page does not target mid-market B2B SaaS as the audience — 'mid-market' is not named, and the page describes AI features that apply equally to any company size, making it non-competitive for segment-specific Shortlisting queries
The /features/project-management page does not provide the requirements checklist format that buyers at the Requirements Building stage need — AI platforms cannot extract structured evaluation criteria from the current feature-description content
The /features/project-management page opens with feature descriptions ('manage tasks, milestones, and playbooks') rather than validating the buyer's problem: 'onboarding quality varies by CSM and I cannot scale without fixing this'
The /product/ai page opens with Vitally AI feature descriptions rather than validating the category-level problem: 'can AI help CS teams manage more accounts without burning out?' — a buyer at Problem Identification stage who hasn't concluded AI is the solution cannot extract a direct answer
The /features/health-scores page is structured as a product showcase, not as an answer to vendor evaluation questions — buyers building a requirements list cannot find Vitally's position on key questions (configurable weighting? ML vs rule-based? integration with Salesforce health fields?)
The /features/project-management page does not name the volume tier ('50+ new customers per quarter') that defines this buyer's Shortlisting context — AI cannot match Vitally to volume-specific Shortlisting queries without this explicit qualifier
The /features/health-scores page opens with product feature framing ('configure your health score') rather than buyer pain framing — an AI responding to 'how do I know which customers are about to churn' cannot extract a direct answer from this page
The /product/ai page buries the capacity outcome — the primary claim ('Vitally AI lets CSMs manage significantly more accounts without sacrificing relationship quality') is not stated in the hero, above the fold, or in any H2 heading that AI platforms would extract for Shortlisting queries
All 5 competitor Comparison pages (vs. Gainsight, ChurnZero, Totango, Planhat, Catalyst) display no visible publication or last-updated date. The only temporal references are G2 badge descriptions mentioning 'Summer 2025'. Combined with the sitemap's missing lastmod timestamps, AI crawlers have no signal for when this Comparison content was created or last verified.
The majority of product, feature, and Comparison pages use multiple H1 tags — ranging from 4 to 14 H1 elements per page. For example, the CSM solution page has 14 H1 tags, Comparison pages average 8 H1 tags, and product pillar pages (Productivity, AI, Visibility) each have 6-7 H1 tags. Only the individual feature pages (NPS, Surveys, Note-Taking, Automation) and case study pages maintain a single H1.
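The multi-H1 problem is cheap to quantify across the site with Python's stdlib HTML parser. A minimal sketch — the sample markup below is illustrative, not Vitally's actual page source:

```python
from html.parser import HTMLParser

class H1Counter(HTMLParser):
    """Counts <h1> elements in an HTML document."""
    def __init__(self):
        super().__init__()
        self.h1_count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.h1_count += 1

def count_h1(html: str) -> int:
    parser = H1Counter()
    parser.feed(html)
    return parser.h1_count

# Illustrative snippet of the observed anti-pattern: every section
# heading promoted to <h1> instead of <h2>/<h3>.
sample = """
<h1>Vitally vs Gainsight</h1>
<h1>Pricing</h1>
<h1>Implementation</h1>
"""
print(count_h1(sample))  # prints 3; a well-formed page should yield 1
```

Run over the full sitemap, any page where `count_h1` returns more than 1 goes on the remediation list; the fix is demoting section headings to `<h2>`/`<h3>` so extractors see one unambiguous page topic.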
Our analysis method returns rendered page content as markdown text, which does not include JSON-LD schema markup, meta descriptions, or Open Graph tags. We cannot determine whether appropriate schema types (Product, FAQPage, Article, Organization) are present on any page.
The sitemap.xml at https://www.vitally.io/sitemap.xml contains over 1,000 URLs, but none include lastmod, changefreq, or priority elements. Every entry contains only the <loc> element.
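The low-effort remediation is to emit `<lastmod>` alongside each `<loc>` when the sitemap is generated. A minimal stdlib sketch of the transformation, assuming per-URL modification dates are available from the CMS — the URL and date below are placeholders:

```python
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
ET.register_namespace("", NS)  # serialize with the default sitemap namespace

def add_lastmod(sitemap_xml: str, lastmod_by_url: dict) -> str:
    """Insert a <lastmod> element into each <url> entry with a known date."""
    root = ET.fromstring(sitemap_xml)
    for url in root.findall(f"{{{NS}}}url"):
        loc = url.find(f"{{{NS}}}loc").text
        if loc in lastmod_by_url:
            lastmod = ET.SubElement(url, f"{{{NS}}}lastmod")
            lastmod.text = lastmod_by_url[loc]  # W3C datetime, e.g. YYYY-MM-DD
    return ET.tostring(root, encoding="unicode")

# Placeholder entry mirroring the current loc-only sitemap format.
current = (
    f'<urlset xmlns="{NS}">'
    "<url><loc>https://www.vitally.io/features/health-scores</loc></url>"
    "</urlset>"
)
dates = {"https://www.vitally.io/features/health-scores": "2026-02-01"}
print(add_lastmod(current, dates))
```

In practice the dates should come from the CMS's real publish/update timestamps, not a batch-applied constant — AI crawlers treating a thousand identical lastmod values as a freshness signal is exactly the kind of pattern that gets discounted.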
The /features/automation page is a product showcase, not an evaluation framework — it cannot serve a buyer who is writing evaluation criteria for a vendor selection process
Vitally's Team Collaboration & Shared Docs feature shows 37.5% visibility (3/8 queries) and 33.3% wins (1/3 visible). All 7 L3 queries carry 'thin' coverage status — collaboration features exist but are described as product capabilities rather than as a solution to the specific buyer pain of CSM turnover creating irreversible customer knowledge loss.
The /integrations page does not address the 'build on Salesforce vs buy a dedicated CS platform' decision — the most common architectural decision CS teams face before entering the CS platform evaluation process
The /features/health-scores page has no content addressing why purpose-built CS platform health scoring outperforms homegrown solutions — buyers who built internally need to understand the specific gaps they are filling
The /features/health-scores page does not explain what data a health score needs to be predictive — buyers migrating from spreadsheets need a concrete list of inputs (product usage events, CRM data, support tickets, NPS scores) to evaluate whether Vitally can ingest their data
The /features/health-scores page does not explain Vitally's health score methodology — whether signals are weighted manually, automatically, or via ML — leaving buyers unable to evaluate prediction accuracy
The /features/health-scores page assumes the buyer has no current health scoring solution — it does not address the specific evaluation concerns of teams migrating from a homegrown system (data migration, scoring parity, transition period)
The /features/automation page does not address the 'is it time to move to a CS platform?' question — buyers at this stage need a concrete threshold (e.g., '50+ accounts per CSM,' '5+ person CS team') to self-qualify
The /vitally-vs-ChurnZero page does not reference ChurnZero's widely documented G2 review complaints about UI complexity and navigation difficulty — missing the specific Validation concern buyers have
The /integrations page does not explain how Vitally ingests and combines data from HubSpot (CRM), Segment (product analytics), and Zendesk (support) into a unified customer profile — the specific multi-tool architecture this buyer is evaluating
The /integration/salesforce page does not explicitly describe bidirectional sync capability — the buyer specifically wants to know if data flows both ways between Vitally and Salesforce
The /integration/salesforce page does not address Totango's documented Salesforce/HubSpot sync reliability problems — buyers researching Totango alternatives for integration reliability cannot find Vitally as the answer
The /vitally-vs-Totango page does not reference G2 review data on Totango's specific automation limitations — the page makes comparative claims without citing the third-party evidence buyers use at Validation stage
The /vitally-vs-Totango page does not address Totango's documented implementation risks for companies without a dedicated CS ops team — the specific buyer context in this query
The /features/nps page does not address the 'built-in CS platform NPS vs standalone NPS tool' decision — a buyer comparing these approaches cannot use this page to understand where Vitally fits in the decision landscape
The /features/automation page leads with product capability descriptions rather than the buyer pain — CSMs spending 60-70% of their time on admin tasks rather than customer interaction
The /integrations page opens with a list of available integrations rather than framing the buyer pain — a CS ops manager who needs to consolidate five tools cannot quickly determine from this page whether Vitally solves their specific data fragmentation problem
The /features/automation page does not include implementation timeline data — buyers looking for faster alternatives to Gainsight or their current tool need a specific time-to-value claim (e.g., 'first automated playbook running in 2 weeks')
The /features/automation page lists every automation capability without distinguishing which are foundational vs advanced — buyers building requirements lists cannot determine what to prioritize from this page
The /integrations page presents integrations as an undifferentiated catalog — buyers at Requirements Building stage cannot determine which integrations are critical for a 200-person SaaS running Salesforce vs other profiles
The /features/project-management page does not frame Vitally's approach within the broader 'milestone vs playbook' solution landscape — a buyer comparing approaches cannot use this page to understand where Vitally fits in the decision
The /product/ai page does not provide the category taxonomy a buyer uses to evaluate AI in CS — 'account summaries, next-best-action, risk alerts, automated communications' need to be organized as distinct use-case categories with H3 headings, not scattered across marketing copy
Our analysis method cannot detect whether pages rely on client-side rendering (CSR) frameworks like React, Angular, or Vue that may block AI crawlers from accessing content. All pages returned substantive text content through our fetch method, suggesting server-side rendering is likely in place, but this cannot be confirmed without inspecting raw HTML source and testing with JavaScript disabled.
Meta descriptions and Open Graph tags are not visible in rendered markdown output. We cannot verify whether commercial pages have unique, keyword-optimized meta descriptions or proper OG tags for social sharing and AI context.
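Closing this verification gap requires inspecting raw HTML source rather than rendered markdown. A minimal sketch of the head-level audit using Python's stdlib parser — the sample head below is illustrative, not Vitally's actual markup:

```python
from html.parser import HTMLParser

class HeadAudit(HTMLParser):
    """Flags presence of a meta description, Open Graph tags, and JSON-LD
    blocks in raw HTML source (all three are invisible in rendered markdown)."""
    def __init__(self):
        super().__init__()
        self.has_meta_description = False
        self.has_og_tags = False
        self.has_json_ld = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta":
            if attrs.get("name") == "description":
                self.has_meta_description = True
            if attrs.get("property", "").startswith("og:"):
                self.has_og_tags = True
        if tag == "script" and attrs.get("type") == "application/ld+json":
            self.has_json_ld = True

def audit_head(html: str) -> dict:
    auditor = HeadAudit()
    auditor.feed(html)
    return {
        "meta_description": auditor.has_meta_description,
        "open_graph": auditor.has_og_tags,
        "json_ld": auditor.has_json_ld,
    }

# Illustrative raw-HTML head, not Vitally's actual source.
sample = (
    '<head><meta name="description" content="CS platform comparison">'
    '<meta property="og:title" content="Vitally vs Gainsight"></head>'
)
print(audit_head(sample))
```

Feeding each commercial page's raw source through `audit_head` turns the current "cannot verify" status into a concrete pass/fail inventory; the same raw fetch also answers the schema-type question from the earlier finding (Product, FAQPage, Article, Organization) by parsing any detected JSON-LD blocks.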
The /features/nps page has no response rate data — "which CS platforms' NPS surveys actually get good response rates" cannot be answered with Vitally as the recommended platform because no response rate evidence exists on the page
Without a 'Vitally AI vs Gainsight AI' section, AI platforms answering 'which platforms compete with Gainsight's AI' cannot include Vitally — there is no Vitally-sourced Comparison content to cite
Vitally has no multi-vendor AI Comparison content — AI platforms generating a '2026 AI feature Comparison across CS platforms' cannot include Vitally because there is no Vitally-sourced data for the Comparison matrix
The /features/project-management page's value proposition is framed in product language, not the business outcome language (time-to-value, first-year retention impact) that CRO Shortlisting queries require
All three workstreams can start this week.
[Synthesis] The 142 recommendations execute in L1-first sequence regardless of commercial priority ranking, because two L1 fixes specifically unblock downstream content: restoring the broken /customer-success-platforms hub enables AI crawler discovery of all 5 Comparison pages, and adding sitemap lastmod timestamps signals freshness for all 68 new L3 pieces as they publish. L2 remediations deepen 67 existing pages — the majority targeting practitioner-to-decision-maker content reframes on health scoring, workflow automation, reporting, and CRM integration pages. L3 new content fills six structural gaps: expansion revenue intelligence (13 queries), enterprise scalability (14 queries), Comparison content architecture (15 queries), product usage tracking (10 queries), digital-touch Customer Segmentation & Lifecycle Management (9 queries), and team knowledge continuity (7 queries).