Competitive intelligence for AI-mediated buying decisions. Where 15Five wins, where it loses, and a prioritized three-layer execution plan — built from 150 buyer queries across ChatGPT + Perplexity.
15Five appears in 46% of AI buyer queries but wins only 12% — visible but rarely the choice

Across 150 buyer queries on ChatGPT and Perplexity, 15Five is visible in 46% of responses (69/150) but claims the primary recommendation in only 12% (18/150). That 34-point gap between being seen and being chosen is the defining metric of this audit: 15Five enters AI buyer conversations but cedes the recommendation to Lattice or Culture Amp in the majority of queries where it is even present.
[Mechanism] 15Five participated in a 150-query GEO visibility audit across ChatGPT and Perplexity, covering five buyer personas, eight buying jobs, and ten platform features in the mid-market performance management and employee engagement category. Where 15Five does not win, the recommendation goes chiefly to Lattice (35 wins) or Culture Amp (20 wins). Total client citations reached 102 across both platforms, ranking 3rd in domain citation volume behind Lattice (188) and Culture Amp (176). The funnel pattern is more alarming than the overall numbers suggest.
At Problem Identification — where buyers first encounter challenges with turnover, manager quality, and siloed HR data — 15Five is visible in only 4.2% of queries (1/24). By Shortlisting, visibility rises to 65.4% (34/52). This top-funnel invisibility means AI assistants educate buyers about their problems without 15Five in the room: competitors set the category mental model before 15Five enters the conversation. This is not primarily a content strategy failure — 15Five has existing pages matching the majority of buyer queries.
The barriers are structural. An XML sitemap that excludes all commercial pages (sitemap_missing_commercial_pages) reduces crawler priority for 15Five's best content. Three comparison URLs redirect to a generic brand page with no competitor specifics (comparison_urls_redirect_to_generic_page).
A key case study is gated and AI-inaccessible (case_study_gating_or_csr), blocking the Kreg Tool 20%+ turnover-reduction data from appearing in AI citations. Below these technical barriers sit content depth gaps: the people analytics (AMAYA, 15Five's AI Manager Assist), OKR tracking, and talent calibration pages are rated thin against the specific questions buyers ask — not because the features are absent, but because the documentation does not match buyer query intent. One query (15f_142: CFO TCO model) has no matching 15Five content at all — a complete gap at the deal-closing stage with the veto-holding CFO persona. The prescription is sequenced: L1 technical fixes first (days to weeks), then content depth in three feature areas, then one new TCO resource for the CFO.
[Synthesis] Execution is sequenced by reversibility and expected citation impact. L1 technical fixes execute first:

- Sitemap expansion (sitemap_missing_commercial_pages, 1-3 days) immediately raises AI crawler discovery priority for all 15+ commercial pages currently excluded.
- Comparison URL restoration (comparison_urls_redirect_to_generic_page, 1-2 weeks) addresses the highest-intent buying-stage content gap — these URLs receive competitor-specific search traffic and currently serve no competitive content.
- Case study ungating (case_study_gating_or_csr, 1-3 days) unlocks the Kreg Tool 20%+ turnover-reduction evidence for AI citation.

L2 improvements proceed in commercial-weight order. l2_001 (AMAYA people analytics) and l2_003 (talent calibration) address veto-holder-adjacent evaluator queries with the highest-severity thin coverage. l2_005 (competitor comparison pages) is the content complement to the comparison_urls_redirect L1 fix: the technical fix restores the URL; the content investment gives it competitive substance. l2_004 (CFO ROI content) and nio_001 (CFO TCO model, the sole L3 gap) address the same veto-holder persona in one content investment phase. l2_006 (engagement survey positioning), l2_007 (Kona differentiation), and l2_009 (customer evidence structuring) carry high commercial weight but lower urgency than the thin-coverage items. l2_002 (OKR depth) and l2_008 (HRIS integration documentation) are medium priority.
Where 15Five appears and where it doesn't — across personas, buying jobs, and platforms.
[TL;DR] 15Five is visible in 46% of buyer queries but wins only 12%. Converting visibility to wins is the primary challenge (a 34-percentage-point gap).
15Five's 65% Shortlisting visibility collapses to 4% at Problem Identification — AI assistants educate buyers about their performance management challenges without 15Five present, meaning competitors set the mental frame before 15Five enters the shortlist conversation.
| Dimension | Combined | Platform Delta |
|---|---|---|
| All Queries | 46% | Perplexity +12 percentage points |
| By Persona | | |
| Chief Financial Officer | 41.3% | Perplexity +13 percentage points |
| Chief People Officer | 32.4% | Perplexity +12 percentage points |
| Director of HR Technology & People Analytics | 34.4% | Perplexity +6 percentage points |
| VP of People Operations | 37.5% | Even |
| VP of Talent Management | 29.8% | Perplexity +33 percentage points |
| By Buying Job | | |
| Artifact Creation | 26.1% | Perplexity +20 percentage points |
| Comparison | 39.7% | Even |
| Consensus Creation | 11.5% | Perplexity +23 percentage points |
| Problem Identification | 4.2% | Perplexity +8 percentage points |
| Requirements Building | 23.3% | Perplexity +20 percentage points |
| Shortlisting | 65.4% | Perplexity +23 percentage points |
| Solution Exploration | 23.3% | Perplexity +7 percentage points |
| Validation | 41.3% | Perplexity +13 percentage points |
| Dimension | ChatGPT | Perplexity |
|---|---|---|
| All Queries | 28.7% | 40.9% |
| By Persona | | |
| Chief Financial Officer | 34.8% | 47.8% |
| Chief People Officer | 26.5% | 38.2% |
| Director of HR Technology & People Analytics | 31.2% | 37.5% |
| VP of People Operations | 37.5% | 37.5% |
| VP of Talent Management | 13.8% | 46.4% |
| By Buying Job | | |
| Artifact Creation | 16.7% | 36.4% |
| Comparison | 41.2% | 38.2% |
| Consensus Creation | 0% | 23.1% |
| Problem Identification | 0% | 8.3% |
| Requirements Building | 13.3% | 33.3% |
| Shortlisting | 53.8% | 76.9% |
| Solution Exploration | 20% | 26.7% |
| Validation | 34.8% | 47.8% |
[Data] Query-level visibility: 46% (69/150). Win rate: 12% (18/150). Win-when-visible rate: 26.1% (18/69).
ChatGPT visibility: 28.7% (43/150). Perplexity visibility: 40.9% (61/149). Platform gap: 12.2pp (ChatGPT lower).
By buying stage: Problem Identification 4.2% (1/24), Solution Exploration 23.3% (7/30), Requirements Building 23.3% (7/30), Consensus Creation 11.5% (3/26), Shortlisting 65.4% (34/52), Validation 41.3% (19/46), Comparison ~39.7% (27/68 responses).
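The headline rates follow directly from the raw counts above; a minimal sketch of the arithmetic, using the counts from the [Data] block:

```python
# Funnel arithmetic for the audit numbers quoted above.
# Counts come from the [Data] block; percentages round to one decimal.

def pct(numerator: int, denominator: int) -> float:
    """Percentage rounded to one decimal place."""
    return round(100 * numerator / denominator, 1)

total_queries = 150
visible = 69          # queries where 15Five appears at all
wins = 18             # queries where 15Five is the primary recommendation

visibility_rate = pct(visible, total_queries)        # 46.0
win_rate = pct(wins, total_queries)                  # 12.0
win_when_visible = pct(wins, visible)                # 26.1
visibility_to_win_gap = visibility_rate - win_rate   # 34.0 percentage points

print(visibility_rate, win_rate, win_when_visible, visibility_to_win_gap)
```

The win-when-visible rate (26.1%) is the number the rest of the report turns on: even when retrieval succeeds, the recommendation is lost roughly three times out of four.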
[Synthesis] 15Five's visibility is structurally bifurcated by buying stage. At Shortlisting (65.4%) and Validation (41.3%), 15Five reaches buyers competitively. At Problem Identification (4.2%) and Consensus Creation (11.5%), it is functionally absent — the stages where buyers form category preferences and build internal business cases.
This asymmetry means buyers encounter 15Five only after competitors have already framed the problem space; it enters Shortlisting as the third or fourth name rather than the category anchor. The 12.2-point ChatGPT visibility gap (28.7% vs. Perplexity's 40.9%) is the most urgent platform-specific finding. ChatGPT is the dominant first-touch AI tool for buyer research and draws on training data for educational queries — precisely the stages (Problem Identification, Solution Exploration) where 15Five's content depth is thinnest.
Perplexity's higher visibility confirms the domain is technically accessible; ChatGPT's lower share is a content authority deficit, not a crawlability issue. Requirements Building (23.3% visible, 0 wins) and Consensus Creation (11.5% visible, 0 wins on ChatGPT) confirm that 15Five loses the stages where buyers build evaluation frameworks and internal approval materials.
Queries where 15Five is not mentioned in any AI response.
| ID | Query | Persona | Stage | Winner |
|---|---|---|---|---|
| 15f_001 | "What are the warning signs that good employees are about to leave — and how do mid-market companies catch this early?" | Chief People Officer | Problem Identification | No Vendor Mentioned |
| 15f_002 | "How do you prove to a skeptical CFO that people programs actually reduce turnover and save money?" | Chief People Officer | Problem Identification | No Vendor Mentioned |
| 15f_003 | "What are mid-market companies doing about managers who were promoted for technical skills but can't actually manage people?" | Chief People Officer | Problem Identification | No Vendor Mentioned |
| 15f_004 | "Our annual reviews take two months of admin time and managers dread them — what are other companies doing instead?" | VP of People Operations | Problem Identification | No Clear Winner |
| 15f_005 | "Biggest challenges with first-time managers who were promoted for individual performance, not people skills" | VP of People Operations | Problem Identification | No Vendor Mentioned |
| 15f_006 | "We've been running engagement surveys for two years and employees keep asking what actually changed — how do you close the loop?" | VP of People Operations | Problem Identification | No Vendor Mentioned |
| 15f_007 | "What are the main approaches to unifying performance, engagement, and compensation data when they live in separate HR systems?" | Director of HR Technology & People Analytics | Problem Identification | No Vendor Mentioned |
| 15f_008 | "We have performance data in one system, engagement in another, and comp in spreadsheets — how do other mid-market companies fix this?" | Director of HR Technology & People Analytics | Problem Identification | No Vendor Mentioned |
| 15f_009 | "How much does a poor performance management process actually cost a mid-market company in turnover and lost productivity?" | Chief Financial Officer | Problem Identification | No Vendor Mentioned |
| 15f_010 | "We're linking compensation to performance using spreadsheets and it's becoming a liability — how do other companies handle this?" | Chief Financial Officer | Problem Identification | No Vendor Mentioned |
Remaining competitor wins: Lattice ×7, Culture Amp ×6, Leapsome ×5, Workleap ×2, PerformYard ×1. 6 queries with no clear winner. 44 queries with no vendor mentioned. Full query-level data available in the analysis export.
Queries where 15Five is mentioned but a competitor is positioned more favorably.
| ID | Query | Persona | Buying Job | Winner | 15Five Position |
|---|---|---|---|---|---|
| 15f_011 | "How do you identify which employees are high-potential and at risk of leaving before they hand in their notice?" | VP of Talent Management | Problem Identification | No Vendor Mentioned | Brief Mention |
| 15f_016 | "We're replacing our ad-hoc 1:1 process — what's the real difference between dedicated check-in platforms and just using meeting agenda templates?" | VP of People Operations | Solution Exploration | No Clear Winner | Mentioned In List |
| 15f_019 | "How do performance management platforms typically integrate with HRIS systems like Workday, BambooHR, and ADP?" | Director of HR Technology & People Analytics | Solution Exploration | No Clear Winner | Mentioned In List |
| 15f_021 | "Open source vs. commercial OKR tools — real tradeoffs for a company with 200-500 employees" | Director of HR Technology & People Analytics | Solution Exploration | No Clear Winner | Mentioned In List |
| 15f_022 | "We've outgrown SurveyMonkey for employee engagement — what does a modern performance management tech stack look like for 300+ employees?" | Director of HR Technology & People Analytics | Solution Exploration | Culture Amp | Mentioned In List |
| 15f_024 | "Our current review process doesn't connect to any business outcomes — how do companies move from annual reviews to something measurable?" | Chief Financial Officer | Solution Exploration | No Vendor Mentioned | Mentioned In List |
| 15f_026 | "What types of HR technology actually move the needle on reducing voluntary turnover at mid-market companies?" | VP of Talent Management | Solution Exploration | No Vendor Mentioned | Mentioned In List |
| 15f_031 | "Key requirements for evaluating performance review platforms for a 400-person company moving away from annual reviews" | VP of People Operations | Requirements Building | No Clear Winner | Brief Mention |
| 15f_033 | "We want continuous feedback between review cycles — what capabilities actually matter in a recognition and feedback tool?" | VP of People Operations | Requirements Building | No Clear Winner | Brief Mention |
| 15f_034 | "Integration requirements for evaluating performance management software — HRIS sync, SSO, SCIM provisioning, API access, webhook support" | Director of HR Technology & People Analytics | Requirements Building | No Vendor Mentioned | Mentioned In List |
| 15f_037 | "We've tried and failed with spreadsheet-based OKRs — what features in a dedicated OKR tool actually make goal cascading work?" | Director of HR Technology & People Analytics | Requirements Building | No Vendor Mentioned | Brief Mention |
| 15f_039 | "Evaluation criteria for performance management platforms from a finance perspective — ROI metrics, implementation costs, time to value" | Chief Financial Officer | Requirements Building | No Vendor Mentioned | Brief Mention |
| 15f_042 | "We need structured 1:1 tools that connect manager check-ins to company goals — what capabilities should we prioritize?" | VP of Talent Management | Requirements Building | No Vendor Mentioned | Mentioned In List |
| 15f_044 | "Which employee engagement platforms actually help you act on survey results, not just collect engagement scores?" | Chief People Officer | Shortlisting | Culture Amp | Mentioned In List |
| 15f_045 | "We've outgrown our current performance tool — best platforms for continuous check-ins and manager coaching at a 350-person company" | Chief People Officer | Shortlisting | Betterworks | Mentioned In List |
| 15f_048 | "Best compensation management tools for mid-market companies trying to connect pay to performance data" | Chief People Officer | Shortlisting | No Vendor Mentioned | Mentioned In List |
| 15f_049 | "Top continuous performance review platforms for replacing spreadsheet-based annual reviews at a 200-500 person company" | VP of People Operations | Shortlisting | Lattice | Strong 2nd |
| 15f_050 | "alternatives to our current performance management tool for a 350-person SaaS company focused on reducing regrettable turnover" | VP of People Operations | Shortlisting | Lattice | Mentioned In List |
| 15f_054 | "performance management platforms with reliable BambooHR and Workday integration — replacing a tool that doesn't sync properly" | VP of People Operations | Shortlisting | Lattice | Mentioned In List |
| 15f_055 | "Best performance management platforms with native HRIS integrations — Workday, ADP, BambooHR sync without custom middleware" | Director of HR Technology & People Analytics | Shortlisting | Lattice | Mentioned In List |
| 15f_057 | "looking to replace our current review tool with a continuous performance platform that supports 360-degree feedback and custom review cycles" | Director of HR Technology & People Analytics | Shortlisting | Lattice | Mentioned In List |
| 15f_058 | "replacing our standalone engagement survey tool — need a platform with real-time dashboards, API access, and data export for an analytics-driven HR team" | Director of HR Technology & People Analytics | Shortlisting | Culture Amp | Mentioned In List |
| 15f_062 | "OKR platforms affordable enough for mid-market but robust enough to actually make goals stick across departments" | Chief Financial Officer | Shortlisting | No Vendor Mentioned | Strong 2nd |
| 15f_065 | "Best OKR tools for companies where goal cascading has never worked — switching from spreadsheets to a dedicated platform" | VP of Talent Management | Shortlisting | No Clear Winner | Mentioned In List |
| 15f_066 | "Which engagement platforms are best at connecting survey data to retention outcomes for mid-market companies?" | VP of Talent Management | Shortlisting | Lattice | Mentioned In List |
| 15f_067 | "Top tools for developing managers who've never had formal leadership training — practical coaching, not just theory" | VP of Talent Management | Shortlisting | Culture Amp | Mentioned In List |
| 15f_070 | "We're moving from annual reviews — how does Lattice compare to other platforms for making that transition smooth?" | Chief People Officer | Comparison | Lattice | Strong 2nd |
| 15f_074 | "How does Culture Amp handle continuous check-ins and manager enablement compared to dedicated check-in platforms?" | VP of People Operations | Comparison | Culture Amp | Strong 2nd |
| 15f_075 | "Switching from our current review tool — how does Lattice compare for making performance reviews less painful?" | VP of People Operations | Comparison | Lattice | Strong 2nd |
| 15f_076 | "How does Workleap's recognition and feedback functionality compare to more comprehensive performance management platforms?" | VP of People Operations | Comparison | Workleap | Mentioned In List |
| 15f_077 | "We're considering switching our engagement tool — how does Culture Amp's benchmarking compare to other platforms' action-planning features?" | VP of People Operations | Comparison | Culture Amp | Mentioned In List |
| 15f_078 | "How does Lattice's integration architecture compare to other performance platforms for HRIS sync, APIs, and webhooks?" | Director of HR Technology & People Analytics | Comparison | Lattice | Mentioned In List |
| 15f_082 | "We're replacing spreadsheet-based comp decisions — how does Lattice's compensation module compare for linking pay to performance?" | Chief Financial Officer | Comparison | Lattice | Brief Mention |
| 15f_084 | "How does Betterworks' total cost compare to mid-market alternatives — implementation, training, and per-seat pricing?" | Chief Financial Officer | Comparison | Betterworks | Strong 2nd |
| 15f_085 | "How does Lattice's talent calibration and 9-box feature compare to other performance management platforms?" | VP of Talent Management | Comparison | Lattice | Strong 2nd |
| 15f_086 | "How does Leapsome's continuous feedback compare to other 1:1 tools — which one do managers actually adopt?" | VP of Talent Management | Comparison | Leapsome | Mentioned In List |
| 15f_087 | "How does Workleap's engagement surveys compare to more analytics-heavy platforms for a 200-person company?" | VP of Talent Management | Comparison | Workleap | Strong 2nd |
| 15f_103 | "Lattice implementation problems when migrating from another performance management tool at a mid-market company" | Chief People Officer | Validation | No Vendor Mentioned | Brief Mention |
| 15f_106 | "We're evaluating Culture Amp as a replacement — what are the biggest downsides of their performance review features?" | VP of People Operations | Validation | No Clear Winner | Brief Mention |
| 15f_109 | "Betterworks analytics and reporting limitations — what can't it do that other platforms handle?" | Director of HR Technology & People Analytics | Validation | No Clear Winner | Brief Mention |
| 15f_111 | "Betterworks reviews from mid-market companies — is it worth the enterprise-level pricing?" | Chief Financial Officer | Validation | No Clear Winner | Mentioned In List |
| 15f_113 | "Is Workleap too basic for a growing mid-market company — will we outgrow it in two years?" | Chief Financial Officer | Validation | No Clear Winner | Brief Mention |
| 15f_114 | "Workleap Officevibe limitations — what are the biggest feature gaps compared to more comprehensive platforms?" | VP of Talent Management | Validation | No Clear Winner | Mentioned In List |
| 15f_119 | "15Five talent management and performance calibration — how does it compare to dedicated talent review platforms?" | VP of Talent Management | Validation | No Clear Winner | Primary Recommendation |
| 15f_121 | "Biggest risks of switching to continuous performance management from annual reviews at a mid-market company" | Director of HR Technology & People Analytics | Validation | No Vendor Mentioned | Mentioned In List |
| 15f_127 | "Case studies of mid-market companies that improved manager effectiveness after switching to continuous performance management" | Chief People Officer | Consensus Creation | Lattice | Mentioned In List |
| 15f_137 | "Case studies of companies that reduced regrettable turnover after switching from annual reviews to continuous performance management" | VP of Talent Management | Consensus Creation | No Vendor Mentioned | Mentioned In List |
| 15f_140 | "Create a vendor comparison scorecard for 15Five, Lattice, Culture Amp, Betterworks, and Leapsome focused on integration capabilities and data architecture" | Director of HR Technology & People Analytics | Artifact Creation | Lattice | Strong 2nd |
| 15f_141 | "Build an evaluation template for comparing continuous performance management platforms — weighted scoring for reviews, check-ins, engagement, and analytics" | VP of People Operations | Artifact Creation | No Vendor Mentioned | Mentioned In List |
| 15f_147 | "Create a comparison matrix for OKR and goal tracking features across 15Five, Betterworks, Lattice, and Leapsome" | Chief People Officer | Artifact Creation | No Clear Winner | Mentioned In List |
| 15f_149 | "Draft an executive summary comparing recognition and continuous feedback platforms for a leadership team — focus on retention impact" | VP of Talent Management | Artifact Creation | No Vendor Mentioned | Mentioned In List |
Who’s winning when 15Five isn’t — and who controls the narrative at each buying stage.
[TL;DR] 15Five wins 12% of queries (18/150), ranks #3 in SOV — H2H record: 30W–28L across 9 competitors.
Lattice leads by 35 wins to 18 and 188 citations to 102 — the gap is documentation depth in Shortlisting and Comparison stages, not brand disadvantage; it closes with 60–90 days of focused content investment in the three features (analytics, calibration, OKR) where Lattice's documentation currently exceeds 15Five's.
| Company | Mentions | Share |
|---|---|---|
| Lattice | 90 | 21.3% |
| Culture Amp | 73 | 17.3% |
| 15Five | 69 | 16.4% |
| Leapsome | 50 | 11.8% |
| Betterworks | 41 | 9.7% |
| Quantum Workplace | 30 | 7.1% |
| PerformYard | 28 | 6.6% |
| Workleap | 24 | 5.7% |
| Engagedly | 15 | 3.5% |
| Reflektive | 2 | 0.5% |
When 15Five and a competitor both appear in the same response, who gets the recommendation? One query with multiple competitors generates a matchup against each — so H2H totals will exceed the query count.
Win = 15Five was the primary recommendation (cross-platform majority). Loss = a competitor was the primary recommendation. Tie = neither was, or a third party won.
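The matchup mechanics can be made concrete with a small tally sketch. The query records below are hypothetical illustrations of the data shape, not rows from the audit:

```python
from collections import defaultdict

# Each audited query records which vendors appear in the response and the
# primary recommendation (cross-platform majority). A query with several
# competitors present generates one matchup per competitor, which is why
# head-to-head totals exceed the query count.
queries = [  # hypothetical examples for illustration
    {"present": ["15Five", "Lattice", "Culture Amp"], "primary": "Lattice"},
    {"present": ["15Five", "Leapsome"], "primary": "15Five"},
    {"present": ["15Five", "Lattice"], "primary": None},  # no clear winner
]

record = defaultdict(lambda: {"W": 0, "L": 0, "T": 0})
for q in queries:
    if "15Five" not in q["present"]:
        continue  # no matchup unless both sides appear in the response
    for rival in q["present"]:
        if rival == "15Five":
            continue
        if q["primary"] == "15Five":
            record[rival]["W"] += 1   # win: 15Five was primary
        elif q["primary"] == rival:
            record[rival]["L"] += 1   # loss: that competitor was primary
        else:
            record[rival]["T"] += 1   # tie: neither, or a third party

print(dict(record))
```

Note how the first query produces both a loss to Lattice and a tie against Culture Amp: from Culture Amp's side of that matchup, the winner was a third party.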
For the 81 queries where 15Five is completely absent:
Vendors appearing in responses not in 15Five’s defined competitive set.
[Synthesis] Lattice leads the field with 35 query wins to 15Five's 18 and 188 combined domain citations to 15Five's 102 — a 1.84x citation gap that reflects documentation depth, not brand superiority. The performance gap is concentrated in the Comparison and Shortlisting buying jobs, where Lattice's feature documentation depth drives AI retrieval and Culture Amp's content volume dominates engagement analytics queries. Both competitive leads are content investment advantages, not structural brand disadvantages: Lattice wins through documentation authority (lattice.com plus help.lattice.com, cited 188 times combined), Culture Amp through content volume (176 combined citations).
Neither gap requires product changes to close — both respond directly to focused content investment over 60–90 days. BambooHR's 20 surprise appearances signal an expanding category risk: AI assistants are recommending a broad HCM platform for queries where performance-management-first tools should dominate. This category boundary expansion — HCM platforms competing for performance management mindshare — makes 15Five's establishment of clear mid-market performance management identity more urgent, not less.
What AI reads and trusts in this category.
[TL;DR] 15Five earned 102 citations across buyer queries. 10 high-authority domains cite competitors but not 15Five.
15Five's 102 client citations rank 3rd in the competitive field; Perplexity cites the domain more frequently than ChatGPT, confirming crawlability is not the issue — building third-party citation authority through analyst and HR media placements that name AMAYA (AI Manager Assist) and Kona would move both platforms simultaneously.
Note: Domain-level citation counts (above) tally instances per individual domain. Competitor-level counts (below) aggregate across all domains owned by a single vendor, which may include subdomains.
Non-competitor domains citing other vendors but not 15Five — off-domain authority opportunities.
These domains cited competitors but did not cite 15Five pages in the queries analyzed. This reflects citation patterns in AI responses, not overall platform presence.
[Synthesis] 15Five's 102 citations place it 3rd in overall domain citation volume — behind Lattice and Culture Amp but ahead of all other competitors. The gap to Lattice (86 additional citations) reflects documentation depth, and the gap to Culture Amp (74 additional citations) reflects content volume across educational and research queries. The platform citation split is the most actionable finding: success.15five.com (help documentation) contributes 36 of 102 citations, suggesting AI assistants actively retrieve 15Five's technical help content when they find it. The main 15five.com domain contributes 66 citations.
Together, this confirms that 15Five's content is technically accessible — robots.txt allows all major AI crawlers — and the citation gap is an authority and depth problem, not a technical access problem. The 343 third-party domains cited with no 15Five presence identify the structural off-domain gap. AI assistants for educational queries draw primarily from industry publications (aihr.com, gallup.com), media (peoplemanagingpeople.com), and professional networks (linkedin.com) — sources that have not yet established 15Five as a named reference for people analytics, OKR implementation, or talent calibration topics. An off-domain publication strategy would move both ChatGPT and Perplexity citations in parallel.
Three layers of recommendations ranked by commercial impact and implementation speed.
[TL;DR] 16 priority recommendations (plus 5 near-rebuild optimizations) targeting 138 queries where 15Five is currently invisible or not the recommendation: 4 L1 technical fixes plus 2 verification checks, 9 content optimizations (L2), and 1 new content initiative (L3).
Sequence: L1 technical fixes first (sitemap expansion and case study ungating in 1-3 days each; comparison URL restoration in 1-2 weeks), then L2 content depth in people analytics (AMAYA), talent calibration, and competitor comparison pages, then the CFO TCO resource (nio_001) — together addressing the 54% of queries where 15Five is currently invisible.
Reading the priority numbers: Recommendations are ranked 1–16 across all three layers by commercial impact × implementation speed. Within each layer, items appear in priority order. Gaps in the sequence (e.g., the L1 table begins at #9) mean the higher-priority items belong to a different layer.
Configuration and infrastructure changes. Owner: Engineering / DevOps. Timeline: Days to weeks.
| Priority | Finding | Impact | Timeline |
|---|---|---|---|
| #9 | Case Study Page Returns Minimal Body Content — Verify Gating or CSR | Medium | 1-3 days |
| #10 | Competitor Comparison URLs Redirect to Generic Brand Page With No Competitor Content | Medium | 1-2 weeks |
| #11 | No Date Signals on Any Product or Solution Page | Medium | 1-3 days |
| #12 | XML Sitemap Contains Only 19 Blog URLs — All Commercial Pages Absent | Medium | 1-3 days |
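The sitemap fix (#12) is mechanically simple to verify once shipped. A sketch of an automated check, assuming illustrative 15five.com paths rather than the actual sitemap contents:

```python
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

# Hypothetical sitemap fragment: blog URLs only, commercial pages absent —
# the failure mode described in sitemap_missing_commercial_pages.
SITEMAP_XML = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.15five.com/blog/some-post/</loc></url>
  <url><loc>https://www.15five.com/blog/another-post/</loc></url>
</urlset>"""

# Illustrative commercial URLs that should be listed after the fix.
EXPECTED_COMMERCIAL = [
    "https://www.15five.com/pricing/",
    "https://www.15five.com/products/performance/okrs-and-goals/",
]

def missing_commercial_urls(sitemap_xml: str, expected: list[str]) -> list[str]:
    """Return the expected commercial URLs absent from the sitemap."""
    root = ET.fromstring(sitemap_xml)
    listed = {loc.text.strip() for loc in root.findall(".//sm:loc", NS)}
    return [url for url in expected if url not in listed]

print(missing_commercial_urls(SITEMAP_XML, EXPECTED_COMMERCIAL))
```

Running a check like this in CI against the live sitemap would keep the fix from silently regressing when the sitemap is next regenerated.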
Items requiring manual review before determining if action is needed.
| Priority | Finding | Impact | Timeline |
|---|---|---|---|
| #15 | Meta Descriptions and OG Tags: Manual Verification Required | Low | 1-3 days |
| #16 | Schema Markup: Manual Verification Required | Low | 1-3 days |
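For the schema verification (#16), the markup AI crawlers extract is standard schema.org JSON-LD. A hypothetical sketch of what a SoftwareApplication block on a product page might look like — all field values here are illustrative placeholders, not 15Five's actual markup:

```json
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "15Five",
  "applicationCategory": "BusinessApplication",
  "operatingSystem": "Web",
  "description": "Performance management and employee engagement platform for mid-market companies.",
  "offers": {
    "@type": "Offer",
    "priceCurrency": "USD",
    "description": "Illustrative placeholder — actual tiers belong on /pricing."
  }
}
```

Adding `dateModified` to WebPage or Article types in the same pass would also address the missing date-signal finding (#11).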
Existing pages that need restructuring or deepening. Owner: Content Team. Timeline: Weeks.
- /pricing contains tier descriptions but no quantitative ROI framing — the CFO cannot estimate payback period, cost-per-employee TCO, or cost-of-inaction comparison from the current page
- Competitor pricing comparison queries (15f_094, 15f_111, 15f_113) represent shortlist-stage queries where 15Five is not mentioned — buyers comparing Lattice vs. Betterworks pricing never consider 15Five because no positioning content targets those comparison patterns
- The 'average cost of replacing an employee' business case framing (15f_134) is not connected to 15Five's customer outcomes in any existing page
Queries affected: 15f_009, 15f_039, 15f_094, 15f_111, 15f_113, 15f_134
- All three comparison URLs redirect to /why-15five, which contains zero competitor-specific content — buyers who arrive via 'vs Lattice' search intent immediately encounter a page that does not address their query, confirming what AI assistants already know: 15Five has no comparison content at these URLs
- Lattice wins 7 comparison/shortlisting queries (15f_049, 15f_057, 15f_070, 15f_075, 15f_102, 15f_127, 15f_135) because Lattice has substantive feature documentation while 15Five has only one blog post
- Culture Amp wins 5 engagement-adjacent comparison queries (15f_080, 15f_088, 15f_095, 15f_100) because no 15Five content directly positions against Culture Amp's feature set in a head-to-head format
Queries affected: 15f_049, 15f_057, 15f_070, 15f_075, 15f_080, 15f_088, 15f_095, 15f_100, 15f_102, 15f_103, 15f_127, 15f_135
- AMAYA (AI Manager Assist) is not named or described on either solution page, making it invisible to AI assistants indexing 15Five for 'AI people analytics' or 'natural language workforce queries'
- No page currently addresses the 'build vs. buy for people analytics' question — a requirements-stage query that routes HR technology directors to Tableau/Looker framing rather than 15Five
- Board-level reporting capabilities are absent from all existing pages; the 'What HR data should we report to the board?' query (15f_023) has no answerable 15Five content
Queries affected: 15f_002, 15f_008, 15f_020, 15f_023, 15f_029, 15f_035, 15f_047, 15f_056, 15f_079, 15f_091, 15f_101, 15f_122, 15f_130, 15f_145
- No existing page documents 9-box calibration workflow mechanics — how calibration sessions are structured, who participates, how ratings are normalized across managers — making AI assistants unable to compare 15Five's calibration against Lattice's
- Technical evaluation requirements (flexible rating scales, bias detection, manager override audit trails, integration with review workflows) requested in 15f_036 and 15f_148 are not answered by any existing 15Five page
- The business case for talent calibration at 300-person scale (15f_018: 'is it worth the administrative effort?') is not addressed with evidence from 15Five customers
Queries affected: 15f_011, 15f_018, 15f_036, 15f_085, 15f_109, 15f_119, 15f_131, 15f_148
- The Kreg Tool case study (20%+ turnover reduction) is inaccessible to AI crawlers due to gating or CSR rendering — the highest-specificity turnover ROI metric 15Five has is invisible to AI systems
- Lattice wins 15f_050 ('alternatives to our current performance management tool focused on reducing regrettable turnover') and 15f_127 ('case studies of companies that improved manager effectiveness') despite 15Five having directly relevant customer evidence — blog prose format is insufficient for AI citation in Shortlisting contexts
- Customer outcome statistics appear embedded in narrative paragraphs rather than in extractable callout formats — AI assistants cite discrete metric claims ('Company X reduced turnover by 21%'), not buried narrative prose
Queries affected: 15f_026, 15f_050, 15f_127, 15f_137, 15f_139, 15f_143
Culture Amp wins 5 queries (15f_022, 15f_044, 15f_058, 15f_077, 15f_099) by being positioned as the analytics-first engagement platform; 15Five's pages don't differentiate on actionability vs. measurement — Culture Amp measures more, 15Five acts faster, but this is not documented
Workleap wins 3 simplicity-focused queries (15f_087, 15f_092) — 15Five's pages don't address the simplicity-vs-depth positioning that frames 15Five as the right choice for mid-market companies that need depth without enterprise complexity
The core engagement pain point ('surveys generate data but HR teams can't act on it') is not directly answered by any existing page with a workflow walkthrough of how 15Five solves this specific problem
Queries affected: 15f_022, 15f_041, 15f_044, 15f_052, 15f_058, 15f_066, 15f_077, 15f_087, 15f_092, 15f_099, 15f_114, 15f_121
Leapsome wins 3 manager coaching Comparison queries (15f_072, 15f_089, 15f_097) by being positioned as the dedicated manager development platform; 15Five's Kona page does not differentiate against Leapsome's specific features at the head-to-head level
Culture Amp wins 15f_067 ('top tools for developing managers who've never had formal training') — the first-time manager development use case that Kona was built for is not documented with scenario-specific examples that AI can cite
The payback period question for manager coaching investment (15f_138: 'what's the typical payback period?') is not answered with 15Five Kona customer data anywhere on the site
Queries affected: 15f_025, 15f_046, 15f_067, 15f_072, 15f_089, 15f_097, 15f_107, 15f_110, 15f_138, 15f_144
No existing page addresses the OKR failure mode question (15f_120: 'Common failure modes when rolling out OKR software') — a Validation-stage query that competitors with explicit failure analysis content win by default
The '/products/performance/okrs-and-goals/' page lacks mid-market scalability framing (15f_062: 'OKR platforms affordable enough for mid-market but robust enough to actually make goals stick')
Goal cascading mechanics — how 15Five ensures VP-level OKRs reach individual contributors — are not documented in a way AI assistants can extract as capability claims
Queries affected: 15f_012, 15f_021, 15f_037, 15f_062, 15f_065, 15f_096, 15f_120, 15f_136, 15f_147
The /integrations page lists connectors without documenting technical depth — buyers evaluating SSO, SCIM, bi-directional sync, and webhook support (15f_034) cannot answer their evaluation checklist from the existing page
Lattice wins 4 integration queries (15f_054, 15f_055, 15f_078) by help documentation depth; help.lattice.com is cited 23+ times in integration queries while 15Five's success.15five.com integration articles appear in fewer Comparison citations
The 'Culture Amp integration issues with Workday' Validation query (15f_108) offers an insertion opportunity — buyers vetting Culture Amp's integration weaknesses should find 15Five positioned as the alternative with stronger Workday/ADP integration, but no content makes this case
Queries affected: 15f_007, 15f_019, 15f_034, 15f_054, 15f_055, 15f_078, 15f_093, 15f_108, 15f_132
Net new content addressing visibility and positioning gaps. Owner: Content Strategy. Timeline: Months.
When CFOs build purchase justifications for performance management software, they construct 3-year cost models covering licensing, implementation, training, and change management. 15Five has no AI-readable content at this level of financial specificity, making it invisible at the moment a veto-holding decision-maker formalizes budget commitment. The Artifact Creation buying job signals that financial sign-off is actively being prepared — the closest point in the buying cycle to a signed contract. Critically, the winner is 'No Vendor Mentioned' on both platforms: no competitor has claimed this content territory either. A purpose-built TCO resource would earn Perplexity citations in financial-modeling responses and enter ChatGPT-generated cost summaries precisely when procurement approval is imminent, with zero competition for the position.
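The cost-model structure a CFO would expect — 3-year licensing plus one-time and recurring costs, with a payback calculation — can be sketched in a few lines. All figures below are hypothetical placeholders; 15Five's actual per-seat pricing, implementation fees, and customer outcome data would replace them in a published TCO resource.

```python
# Illustrative 3-year TCO / payback sketch for performance management software.
# Every number here is a hypothetical placeholder, not 15Five pricing.

def three_year_tco(seats, price_per_seat_month, implementation, training, change_mgmt_annual):
    """Total cost of ownership over 36 months: licensing + one-time + recurring."""
    licensing = seats * price_per_seat_month * 36
    return licensing + implementation + training + change_mgmt_annual * 3

def payback_months(one_time_costs, monthly_cost, monthly_benefit):
    """Months until cumulative benefit covers one-time costs (None if it never does)."""
    if monthly_benefit <= monthly_cost:
        return None
    return one_time_costs / (monthly_benefit - monthly_cost)

tco = three_year_tco(seats=300, price_per_seat_month=10.0,
                     implementation=15_000, training=8_000, change_mgmt_annual=5_000)
print(f"3-year TCO: ${tco:,.0f}")
print(f"Payback: {payback_months(23_000, 4_250, 12_000):.1f} months")
```

A TCO page built around this structure — explicit line items, a stated payback formula, numeric headings — is the extractable format the audit found missing.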
ChatGPT (high): ChatGPT responded to 15f_142 with a structured educational TCO framework citing no vendors (winner: No Vendor Mentioned). ChatGPT generates only 25 client citations across 150 responses (2.5% share) — demonstrating it cites vendor content only when it is specifically authoritative for the query. A structured TCO guide with numeric headings and a downloadable template would be retrievable via ChatGPT's search plugin for cost-modeling queries.
Perplexity (high): Perplexity generates 2,343 total citations across 149 responses (an average of 15.7 per response, vs. ChatGPT's 6.6) — demonstrating citation-heavy behavior for artifact queries. 15Five already earns 115 Perplexity citations (4.9% share), confirming the domain is indexed and crawlable. The TCO gap is one of content specificity, not technical access.
All recommendations across all three layers, ranked by commercial impact × implementation speed.
15Five is entirely absent from the one Artifact Creation query (15f_142) in which the CFO persona — a veto-holding decision-maker — explicitly requests a 3-year total cost of ownership model for performance management software. The winner is 'No Vendor Mentioned' on both ChatGPT and Perplexity: no 15Five pricing data, implementation cost breakdowns, or ROI content was cited in any AI-generated response to this query.
/pricing contains tier descriptions but no quantitative ROI framing — the CFO cannot estimate payback period, cost-per-employee TCO, or a cost-of-inaction comparison from the current page
All three Comparison URLs redirect to /why-15five, which contains zero competitor-specific content — buyers who arrive via 'vs Lattice' search intent immediately encounter a page that does not address their query, confirming what AI assistants already know: 15Five has no Comparison content at these URLs
The Kreg Tool case study page at /resources/case-studies/how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20 returned almost exclusively navigation and footer markup with negligible body content — only the headline metric ('reduced turnover by over 20%') and a download button were accessible. Related customer stories presented as blog posts (Pendo, Auror) returned full body content normally. The case study format on this URL appears to use a gated download model (PDF behind a form), which renders the page's substantive content inaccessible to AI crawlers.
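This failure mode is easy to detect programmatically. The sketch below, assuming access to a page's raw HTML, counts words that sit outside navigation/footer/script chrome; a gated or CSR-rendered page like the one described above scores near zero. The sample HTML is invented for illustration.

```python
# Minimal crawl-accessibility check: how much body text does a page expose?
# Pages whose only extractable content is chrome (nav, footer) will score ~0,
# which is the gating / client-side-rendering symptom described in the audit.
from html.parser import HTMLParser

class BodyTextExtractor(HTMLParser):
    SKIP = {"nav", "header", "footer", "script", "style"}

    def __init__(self):
        super().__init__()
        self.depth = 0   # nesting depth inside skipped elements
        self.words = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0:
            self.words.extend(data.split())

def body_word_count(html: str) -> int:
    parser = BodyTextExtractor()
    parser.feed(html)
    return len(parser.words)

# A thin page returning mostly chrome plus one headline metric:
thin = ("<nav>Home Products Pricing</nav>"
        "<main><h1>Reduced turnover by over 20%</h1></main>"
        "<footer>15Five</footer>")
print(body_word_count(thin))
```

Running this against each case study URL would separate fully-crawlable customer stories (like the Pendo and Auror posts) from gated ones before AI visibility suffers.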
Three URLs that appear in search engine results as dedicated competitor Comparison pages — /15five-vs-Lattice, /15five-vs-cultureamp/, and /15five-vs-Leapsome/ — all redirect to the generic /why-15five page. The /why-15five page contains no competitor-specific content: it does not mention Lattice, Culture Amp, or Leapsome by name, and contains only generic brand messaging ('The new ERA OF HR'). Fetching each Comparison URL confirmed the canonical page is /why-15five and the full page content is identical across all three. Web search results still index these URLs with competitor-specific titles (e.g., '15Five vs Culture Amp | Comparing Employee Management...'), meaning buyers and AI crawlers who follow these URLs from search results land on a page that does not address the query that brought them there.
All product pages, solution pages, the why-15five page, and the pricing page have no visible last-updated dates and are absent from the sitemap — meaning no lastmod signal is available from any source. Freshness could not be determined for 17 of 30 pages analyzed. While blog posts in the sitemap carry lastmod timestamps (November-December 2025), these appear to be bulk-refreshed timestamps rather than per-post content modification dates: several blog posts show sitemap lastmod of 2025-11-25 or 2025-11-26 regardless of their original publication date (some were written in 2017-2019).
The sitemap at https://www.15five.com/sitemap.xml contains exactly 19 URLs, all of which are blog posts or resource thank-you pages with lastmod timestamps of November-December 2025. Zero product pages, zero solution pages, zero pricing pages, zero integration pages, zero Comparison-redirect pages, and zero feature subpages appear in the sitemap. No sitemap index file exists (sitemap_index.xml and hs-sitemap.xml both return 404). Core commercial pages such as /products, /products/perform, /products/engage, /products/kona, /products/perform/compensation, /pricing, /integrations, /solutions/reduce-regrettable-turnover, and /why-15five are all entirely absent from any known sitemap.
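Fixing this means publishing a sitemap that enumerates the commercial pages with real per-page modification dates. A minimal sketch, assuming the standard sitemaps.org protocol (the dates shown are placeholders, not actual modification times):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.15five.com/pricing/</loc>
    <lastmod>2025-12-01</lastmod><!-- placeholder: use the page's real modification date -->
  </url>
  <url>
    <loc>https://www.15five.com/products/perform/</loc>
    <lastmod>2025-12-01</lastmod>
  </url>
  <url>
    <loc>https://www.15five.com/integrations/</loc>
    <lastmod>2025-12-01</lastmod>
  </url>
  <!-- repeat for /products/engage, /products/kona, /why-15five,
       /solutions/reduce-regrettable-turnover, and remaining commercial pages -->
</urlset>
```

Per-URL lastmod values should reflect genuine content changes; the bulk-refreshed timestamps observed on the blog entries dilute the freshness signal rather than strengthening it.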
Meta descriptions and Open Graph tags (og:description, og:image, og:title) are not accessible via rendered markdown analysis. None of the 30 pages analyzed had visible meta description or OG tag content in the fetched output.
This analysis was conducted using rendered page content (web_fetch returns markdown, not raw HTML), so JSON-LD schema blocks, meta tags, and OG tags are not visible in any of the 30 pages analyzed. Whether product pages carry Product or SoftwareApplication schema, blog posts carry Article schema with datePublished/dateModified, pricing pages carry Offer schema, or FAQ sections carry FAQPage schema cannot be determined from this analysis method.
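The missing verification step is straightforward once raw HTML is available: fetch the page source directly (not a rendered-markdown proxy) and check the head for a meta description, Open Graph tags, and JSON-LD blocks. The sketch below uses standard tag names; the sample HTML is invented for illustration, and a production check would use a real HTML parser rather than regexes.

```python
# Sketch: audit a page's raw HTML for the head-level signals this analysis
# could not observe — meta description, og:* tags, and JSON-LD @type values.
import json
import re

def audit_head(html: str) -> dict:
    og_tags = re.findall(r'<meta[^>]+property=["\']og:(\w+)["\']', html)
    has_meta_desc = bool(re.search(r'<meta[^>]+name=["\']description["\']', html))
    jsonld_types = []
    for block in re.findall(r'<script[^>]+application/ld\+json[^>]*>(.*?)</script>',
                            html, re.S):
        try:
            jsonld_types.append(json.loads(block).get("@type"))
        except json.JSONDecodeError:
            jsonld_types.append("INVALID")   # malformed schema is itself a finding
    return {"meta_description": has_meta_desc,
            "og_tags": og_tags,
            "jsonld_types": jsonld_types}

sample = '''<head><meta name="description" content="Example">
<meta property="og:title" content="Example"/>
<script type="application/ld+json">{"@type": "SoftwareApplication"}</script></head>'''
print(audit_head(sample))
```

Run against the 30 audited URLs, this would settle whether product pages carry SoftwareApplication schema and whether blog posts expose datePublished/dateModified — the exact questions left open above.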
All three workstreams can start this week.
[Synthesis] Execution is sequenced by reversibility and expected citation impact.

L1 technical fixes execute first: the sitemap expansion (sitemap_missing_commercial_pages, 1-3 days) immediately increases AI crawler discovery priority for all 15+ commercial pages currently excluded; the Comparison URL restoration (comparison_urls_redirect_to_generic_page, 1-2 weeks) addresses the highest-intent buying-stage content gap — these URLs receive competitor-specific search traffic and serve no competitive content; and the case study ungating (case_study_gating_or_csr, 1-3 days) unlocks the Kreg Tool 20%+ turnover reduction evidence for AI citation.

L2 improvements proceed in commercial-weight order. l2_001 (AMAYA (AI Manager Assist) / people analytics) and l2_003 (talent calibration) address veto-holder-adjacent evaluator queries with the highest-severity thin coverage. l2_005 (competitor Comparison pages) is the content complement to the comparison_urls_redirect L1 fix — the technical redirect fix enables the URL; the content investment gives it competitive substance. l2_004 (CFO ROI content) and nio_001 (CFO TCO model, the sole L3 gap) address the same veto-holder persona in one content investment phase. l2_006 (engagement survey positioning), l2_007 (Kona differentiation), and l2_009 (customer evidence structuring) carry high commercial weight but lower urgency than the thin-coverage items. l2_002 (OKR depth) and l2_008 (HRIS integration documentation) are medium priority.