Competitive intelligence for AI-mediated buying decisions. Where 15Five wins, where it loses, and a prioritized three-layer execution plan — built from 150 buyer queries across ChatGPT + Perplexity.
[Narrative] 15Five enters this GEO audit with measurable competitive standing: a #3 share-of-voice rank with 16.35% of mentions (69 citations), behind Lattice at #1 (21.33%, 90 mentions) and Culture Amp at #2 (17.3%, 73 mentions). But the headline metrics mask a structural problem in where that visibility occurs and who it reaches. When executive buyers evaluate 15Five, the platform wins 43.9% of the time, genuinely competitive performance. When evaluators doing initial research assess options, the win rate collapses to 15.87%, a 28-percentage-point gap that means buyers are being shaped before 15Five enters the conversation.

The gap is not primarily a positioning problem; it is a content inventory problem at the research stage. 81 of 132 total gaps are invisibility gaps: queries where 15Five is entirely absent, not present but outranked. Invisibility is sharpest at problem identification (4.17% visibility) and consensus creation (11.54% visibility), the buying stages where buyers form requirements and build internal stakeholder alignment. By the time a buyer reaches shortlisting (65.38% visibility), requirements have already been shaped by Lattice and Culture Amp, who dominate early-funnel research.

Five content voids drive the structural gap. 15Five's AMAYA people analytics product is invisible across 15 buyer queries: buyers researching flight risk prediction, board reporting, and analytics platforms find no 15Five content, allowing Lattice and Culture Amp to win these queries by default. The absence of dedicated competitor comparison pages leaves 26 high-intent comparison and shortlisting queries uncontested, with Lattice winning 10 and Culture Amp winning 6. OKR goal tracking (9 queries), talent calibration (7 queries), and missing CFO financial modeling content compound the early-funnel invisibility.
The action plan addresses these gaps in three layers totaling 138 items: 6 technical fixes that unblock crawler access to commercial pages (the XML sitemap currently contains zero product or solution pages), 74 L2 content optimizations that improve pages found but not won, and 58 L3 new content items that fill the structural voids. Technical fixes execute first because they unblock the downstream improvements — new comparison pages and AMAYA content will not be discovered at full frequency until the sitemap issue is resolved.
[Core Insight] 15Five wins with buyers who already know it, and loses 96% of the early discovery queries that create those buyers. Decision-makers win at 43.9% when 15Five is visible; evaluators doing initial research win at 15.87%. That 28-percentage-point role gap (metrics.visibility.role_type_split.gap_pp=28) reveals the structural problem: buyers who encounter 15Five later in the journey evaluate it favorably, but the buyers building requirements, setting evaluation criteria, and forming shortlists at problem identification (4.17% visibility) and consensus creation (11.54% visibility) don't find 15Five at all.
Where 15Five appears and where it doesn't — across personas, buying jobs, and platforms.
[TL;DR] 15Five appears in 46% of buyer queries and wins 26.1% of those. Converting visibility to wins is the primary challenge (a 20pp gap).
15Five is visible but concentrated in the wrong buying stages: it appears in 65% of shortlisting queries but only 4% of problem-identification queries, meaning buyers form requirements and shortlists without 15Five and encounter it only after decisions have effectively been made.
| Dimension | Combined | ChatGPT | Perplexity | Delta |
|---|---|---|---|---|
| All Queries | 46% | 28.7% | 40.9% | −12pp Perplexity |
| By Persona | | | | |
| Chief Financial Officer | 41.3% | 34.8% | 47.8% | −13pp Perplexity |
| Chief People Officer | 32.4% | 26.5% | 38.2% | −12pp Perplexity |
| Director of HR Technology & People Analytics | 34.4% | 31.2% | 37.5% | −6pp Perplexity |
| VP of People Operations | 37.5% | 37.5% | 37.5% | 0pp |
| VP of Talent Management | 29.8% | 13.8% | 46.4% | −33pp Perplexity |
| By Buying Job | | | | |
| Artifact Creation | 26.1% | 16.7% | 36.4% | −20pp Perplexity |
| Comparison | 39.7% | 41.2% | 38.2% | +3pp ChatGPT |
| Consensus Creation | 11.5% | 0% | 23.1% | −23pp Perplexity |
| Problem Identification | 4.2% | 0% | 8.3% | −8pp Perplexity |
| Requirements Building | 23.3% | 13.3% | 33.3% | −20pp Perplexity |
| Shortlisting | 65.4% | 53.8% | 76.9% | −23pp Perplexity |
| Solution Exploration | 23.3% | 20% | 26.7% | −7pp Perplexity |
| Validation | 41.3% | 34.8% | 47.8% | −13pp Perplexity |
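The Delta column above follows a sign convention: a positive delta names ChatGPT as the stronger platform, a negative delta names Perplexity. A minimal sketch of that derivation, using values from the table (the function name is illustrative, not part of the audit tooling):

```python
def platform_delta(chatgpt_pct: float, perplexity_pct: float) -> str:
    """Format the Delta column: rounded point gap plus the stronger platform.

    Convention from the table: positive => ChatGPT leads, negative => Perplexity leads.
    """
    diff = round(chatgpt_pct - perplexity_pct)
    if diff == 0:
        return "0pp"
    platform = "ChatGPT" if diff > 0 else "Perplexity"
    return f"{diff:+d}pp {platform}"

# Values from the visibility table above
print(platform_delta(41.2, 38.2))   # Comparison row -> "+3pp ChatGPT"
print(platform_delta(34.8, 47.8))   # Validation row -> "-13pp Perplexity"
print(platform_delta(37.5, 37.5))   # VP of People Operations row -> "0pp"
```

Applying the same convention to the All Queries row (28.7% ChatGPT, 40.9% Perplexity) yields a 12pp Perplexity advantage, matching the platform delta cited in the data block.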
[Data] Overall visibility: 46% (69/150 queries). High-intent visibility: 48.19% (80/166). High-intent win rate: 33.75% (27 of the 80 visible queries), a 14pp visibility-to-win gap. By buying job: Shortlisting 65.38%, Validation 41.3%, Comparison 39.71%, but Problem Identification 4.17% and Consensus Creation 11.54%. Decision-maker win rate 43.9% vs. evaluator win rate 15.87% (28pp gap). Platform delta: Perplexity outperforms ChatGPT by 12pp.

[Synthesis] 15Five's 46% overall visibility is a mid-field position: not invisible, not dominant. The visibility distribution reveals the structural problem. 15Five appears frequently at shortlisting (65.38%) and validation (41.3%) but is nearly absent at problem identification (4.17%) and consensus creation (11.54%), the early-stage buying jobs where requirements crystallize and initial shortlists form. By the time a buyer reaches the shortlisting stage where 15Five does appear, Lattice and Culture Amp have already shaped the evaluation criteria through early-funnel dominance. The 14pp visibility-to-win gap (48.19% visible, 33.75% winning) shows 15Five also underperforms relative to its own presence: in 14 percentage points of the high-intent queries where it is visible, it appears but is not recommended. The Perplexity advantage (12pp higher than ChatGPT) suggests 15Five's existing content is structurally better suited to Perplexity's search-backed citation model than to ChatGPT's authority-weight model, pointing to off-domain authority development as the ChatGPT-specific intervention.
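The visibility-to-win gap above is plain arithmetic over query counts. A minimal sketch, assuming the win-rate denominator is the 80 visible high-intent queries (33.75% of 80 is 27 wins):

```python
def pct(numerator: int, denominator: int) -> float:
    """Percentage rounded to two decimals, matching the audit's reporting style."""
    return round(100 * numerator / denominator, 2)

# Counts from the data block above
overall_visibility = pct(69, 150)      # 46.0  -> "46% (69/150 queries)"
high_intent_visibility = pct(80, 166)  # 48.19 -> "48.19% (80/166)"
high_intent_win_rate = pct(27, 80)     # 33.75 -> wins 27 of its 80 visible queries
gap_pp = round(high_intent_visibility - high_intent_win_rate)  # 14pp visibility-to-win gap
```

The same helper reproduces the role split: pct(43.9 − 15.87) rounds to the 28pp decision-maker vs. evaluator gap cited throughout the report.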
Queries where 15Five is not mentioned in any AI response.
| ID | Query | Persona | Buying Job | Winner |
|---|---|---|---|---|
| 15f_001 | "What are the warning signs that good employees are about to leave — and how do mid-market companies catch this early?" | chro | Problem ID | No Vendor Mentioned |
| 15f_002 | "How do you prove to a skeptical CFO that people programs actually reduce turnover and save money?" | chro | Problem ID | No Vendor Mentioned |
| 15f_003 | "What are mid-market companies doing about managers who were promoted for technical skills but can't actually manage people?" | chro | Problem ID | No Vendor Mentioned |
| 15f_004 | "Our annual reviews take two months of admin time and managers dread them — what are other companies doing instead?" | vp_people_ops | Problem ID | No Clear Winner |
| 15f_005 | "Biggest challenges with first-time managers who were promoted for individual performance, not people skills" | vp_people_ops | Problem ID | No Vendor Mentioned |
| 15f_006 | "We've been running engagement surveys for two years and employees keep asking what actually changed — how do you close the loop?" | vp_people_ops | Problem ID | No Vendor Mentioned |
| 15f_007 | "What are the main approaches to unifying performance, engagement, and compensation data when they live in separate HR systems?" | hr_technology_director | Problem ID | No Vendor Mentioned |
| 15f_008 | "We have performance data in one system, engagement in another, and comp in spreadsheets — how do other mid-market companies fix this?" | hr_technology_director | Problem ID | No Vendor Mentioned |
| 15f_009 | "How much does a poor performance management process actually cost a mid-market company in turnover and lost productivity?" | cfo | Problem ID | No Vendor Mentioned |
| 15f_010 | "We're linking compensation to performance using spreadsheets and it's becoming a liability — how do other companies handle this?" | cfo | Problem ID | No Vendor Mentioned |
| 15f_012 | "Our company sets quarterly OKRs but nobody below the VP level can explain what their goals are — is there a better way to cascade them?" | vp_talent | Problem ID | No Vendor Mentioned |
| 15f_013 | "What's the difference between continuous performance management and traditional annual reviews — does continuous actually produce better outcomes?" | chro | Solution Exp. | No Vendor Mentioned |
| 15f_014 | "Does real-time employee recognition actually reduce turnover, or is it a feel-good feature that fades after a month?" | chro | Solution Exp. | No Vendor Mentioned |
| 15f_015 | "Main approaches to developing managers at scale — external coaching, training programs, or AI coaching tools?" | chro | Solution Exp. | No Vendor Mentioned |
| 15f_017 | "Annual engagement surveys vs. real-time pulse checks — which actually drives improvement at a mid-market company?" | vp_people_ops | Solution Exp. | No Vendor Mentioned |
| 15f_018 | "How does talent calibration work in practice — is it worth the administrative effort for a 300-person company?" | vp_people_ops | Solution Exp. | No Vendor Mentioned |
| 15f_020 | "Build vs. buy for people analytics — when should a mid-market company invest in a vendor vs. building dashboards in Tableau or Looker?" | hr_technology_director | Solution Exp. | No Vendor Mentioned |
| 15f_023 | "What workforce data should HR be reporting to the board, and what tools make that easier than building custom reports?" | cfo | Solution Exp. | No Vendor Mentioned |
| 15f_025 | "AI coaching tools for managers — how do they work and is there evidence they actually improve manager effectiveness?" | vp_talent | Solution Exp. | No Vendor Mentioned |
| 15f_027 | "How are mid-market companies connecting pay decisions to performance data without turning compensation into a political battle?" | vp_talent | Solution Exp. | No Vendor Mentioned |
| 15f_028 | "Must-have vs. nice-to-have features in an employee engagement platform for a mid-market company with 300+ employees" | chro | Req. Building | No Vendor Mentioned |
| 15f_029 | "What data should a people analytics tool connect to predict which employees are flight risks — performance scores, engagement trends, compensation?" | chro | Req. Building | No Vendor Mentioned |
| 15f_030 | "We're replacing our current check-in process — what features matter most in a continuous performance management tool for hybrid teams?" | chro | Req. Building | No Vendor Mentioned |
| 15f_032 | "What questions should I ask vendors about manager coaching and development features — what separates the good tools from the bad ones?" | vp_people_ops | Req. Building | No Vendor Mentioned |
| 15f_035 | "What analytics capabilities should I evaluate in a performance management platform — custom dashboards, data exports, natural language queries, or all three?" | hr_technology_director | Req. Building | No Vendor Mentioned |
| 15f_036 | "Technical requirements for a talent calibration tool — flexible rating scales, bias detection, manager override audit trails, integration with existing review workflows" | hr_technology_director | Req. Building | No Vendor Mentioned |
| 15f_038 | "What should I look for in compensation management software that ties pay decisions to performance data and supports pay equity compliance?" | cfo | Req. Building | No Vendor Mentioned |
| 15f_040 | "What makes a 360-degree feedback tool effective vs. just creating busywork — which features actually develop people?" | vp_talent | Req. Building | No Vendor Mentioned |
| 15f_041 | "What engagement survey features matter most for predicting turnover — benchmarking, trend analysis, or manager-level breakdowns?" | vp_talent | Req. Building | No Vendor Mentioned |
| 15f_046 | "Top AI coaching platforms for developing first-time managers at mid-market companies" | chro | Shortlisting | No Clear Winner |
| 15f_047 | "Which people analytics platforms let you ask plain-English questions about workforce data and get actionable answers?" | chro | Shortlisting | No Vendor Mentioned |
| 15f_052 | "switching from annual engagement surveys to a platform with real-time pulse and stronger benchmarking for predicting turnover" | vp_people_ops | Shortlisting | PerformYard |
| 15f_056 | "Top people analytics platforms with AI-powered flight risk detection for mid-market companies" | hr_technology_director | Shortlisting | Lattice |
| 15f_068 | "We need a recognition tool people will actually use — replacing a system nobody adopted at our 250-person company" | vp_talent | Shortlisting | No Vendor Mentioned |
| 15f_072 | "How does Leapsome's manager development compare to platforms with dedicated AI coaching features?" | chro | Comparison | Leapsome |
| 15f_079 | "How does Culture Amp's analytics compare to platforms with AI-powered people analytics for workforce insights?" | hr_technology_director | Comparison | Culture Amp |
| 15f_080 | "Lattice vs Culture Amp — which has more flexible performance review workflows for complex org structures?" | hr_technology_director | Comparison | Culture Amp |
| 15f_088 | "We're replacing our current engagement tool — Culture Amp vs Lattice, which is better for mid-market retention strategies?" | chro | Comparison | Culture Amp |
| 15f_089 | "Lattice vs Leapsome for manager coaching and development features at a mid-market company" | vp_people_ops | Comparison | Lattice |
| 15f_090 | "Culture Amp vs Leapsome for continuous check-ins and pulse surveys — which drives better manager habits?" | vp_people_ops | Comparison | Leapsome |
| 15f_091 | "Betterworks vs Lattice analytics — switching from a platform with limited reporting, which has stronger people insights?" | hr_technology_director | Comparison | Lattice |
| 15f_092 | "Culture Amp vs Workleap for engagement surveys — analytics depth vs. simplicity for smaller HR teams" | hr_technology_director | Comparison | Workleap |
| 15f_093 | "Leapsome vs Betterworks for HRIS integrations and data architecture — switching from a platform with poor API support" | hr_technology_director | Comparison | Leapsome |
| 15f_094 | "Lattice vs Betterworks pricing for a 300-person company — which is more cost-effective at mid-market scale?" | cfo | Comparison | Lattice |
| 15f_095 | "Culture Amp vs Betterworks for performance reviews — which do mid-market companies prefer after switching from spreadsheets?" | cfo | Comparison | Culture Amp |
| 15f_096 | "Betterworks vs Leapsome for OKR and goal tracking — pricing and capabilities for a 300-person company" | cfo | Comparison | Leapsome |
| 15f_097 | "Betterworks vs Leapsome for leadership development and manager training — which is better for mid-market?" | vp_talent | Comparison | Leapsome |
| 15f_098 | "Leapsome vs Workleap for peer recognition and real-time feedback — replacing our current tool, which do teams actually adopt?" | vp_talent | Comparison | Workleap |
| 15f_099 | "Lattice vs Workleap for engagement surveys — switching from a tool that measures engagement but doesn't help us improve it" | vp_people_ops | Comparison | Lattice |
| 15f_100 | "Culture Amp vs Betterworks for performance reviews — pros and cons for a mid-market company with 300+ employees" | vp_talent | Comparison | Culture Amp |
| 15f_101 | "We're thinking about switching — Betterworks vs Culture Amp, which has better people analytics and flight risk prediction?" | chro | Comparison | Culture Amp |
| 15f_102 | "Lattice vs Culture Amp for compensation management — switching from spreadsheets, which ties pay to performance data better?" | vp_people_ops | Comparison | Lattice |
| 15f_104 | "Culture Amp customer complaints — does it help teams improve engagement or just generate reports nobody acts on?" | chro | Validation | No Vendor Mentioned |
| 15f_105 | "Common complaints about Lattice from HR operations teams — what do they struggle with most?" | vp_people_ops | Validation | No Vendor Mentioned |
| 15f_107 | "Leapsome negative reviews — what do customers complain about most?" | vp_people_ops | Validation | No Clear Winner |
| 15f_108 | "Culture Amp integration issues — any known problems syncing with Workday or other enterprise HRIS platforms?" | hr_technology_director | Validation | No Clear Winner |
| 15f_110 | "Is Leapsome too new for a mid-market company making a multi-year platform investment — what are the risks?" | hr_technology_director | Validation | No Clear Winner |
| 15f_112 | "Hidden costs of Lattice that mid-market companies don't expect — implementation fees, module add-ons, pricing surprises" | cfo | Validation | No Clear Winner |
| 15f_120 | "Common failure modes when rolling out OKR software — what makes teams stop using it within six months?" | vp_people_ops | Validation | No Vendor Mentioned |
| 15f_122 | "How accurate are AI-powered flight risk predictions — do people analytics tools actually predict employee turnover?" | chro | Validation | No Vendor Mentioned |
| 15f_123 | "Do employee recognition tools actually sustain engagement improvements, or do people stop using them after a month?" | vp_talent | Validation | No Vendor Mentioned |
| 15f_124 | "What goes wrong when companies roll out structured 1:1 check-ins — manager resistance, low adoption, what to watch for?" | vp_talent | Validation | No Vendor Mentioned |
| 15f_125 | "Biggest risks of automating compensation decisions with software — what can go wrong with pay equity analysis?" | chro | Validation | No Vendor Mentioned |
| 15f_128 | "How to justify replacing annual reviews with continuous performance management to a board that doesn't see HR as strategic" | chro | Consensus | No Vendor Mentioned |
| 15f_129 | "Business case for connecting compensation management to performance reviews — impact on pay equity and retention" | chro | Consensus | No Vendor Mentioned |
| 15f_130 | "How do you build the business case for people analytics when the CEO doesn't trust HR data?" | vp_people_ops | Consensus | No Vendor Mentioned |
| 15f_131 | "Risk argument for investing in talent calibration — what happens when you lose top performers because you didn't identify them early enough?" | vp_people_ops | Consensus | No Vendor Mentioned |
| 15f_132 | "Total cost of HR data fragmentation — how much does manual reconciliation between disconnected HR systems cost a 300-person company?" | hr_technology_director | Consensus | No Vendor Mentioned |
| 15f_133 | "Impact of switching from annual reviews to continuous recognition and feedback — what do the studies and case studies show?" | hr_technology_director | Consensus | No Clear Winner |
| 15f_134 | "Average cost of replacing an employee vs. cost of retention tools — making the math work for HR tech budget" | cfo | Consensus | No Vendor Mentioned |
| 15f_135 | "Lattice vs Culture Amp — which platform has stronger ROI evidence for mid-market performance management?" | cfo | Consensus | Lattice |
| 15f_136 | "How to convince leadership that the current approach to goal setting isn't working and we need a dedicated OKR platform" | vp_talent | Consensus | No Vendor Mentioned |
| 15f_138 | "What's the typical payback period for investing in a manager coaching platform — how fast do companies see results?" | vp_talent | Consensus | No Vendor Mentioned |
| 15f_139 | "Draft an RFP for a continuous performance management and employee engagement platform for a 400-person mid-market company" | chro | Artifact | No Vendor Mentioned |
| 15f_142 | "Build a TCO model for implementing performance management software at a 300-person company over 3 years — licensing, implementation, training, and change management" | cfo | Artifact | No Vendor Mentioned |
| 15f_143 | "Draft evaluation criteria for employee engagement platforms that will be reviewed by an executive team — focus on actionability and ROI" | chro | Artifact | No Vendor Mentioned |
| 15f_144 | "Create a business case one-pager for investing in an AI-powered manager coaching platform at a mid-market company" | vp_talent | Artifact | No Vendor Mentioned |
| 15f_145 | "Write a security and compliance questionnaire for evaluating people analytics platforms — SOC 2, GDPR, data residency, and access controls" | hr_technology_director | Artifact | No Vendor Mentioned |
| 15f_146 | "Draft requirements document for a compensation management module that connects to our existing performance review process" | vp_people_ops | Artifact | No Vendor Mentioned |
| 15f_148 | "Build a weighted scoring rubric for evaluating talent calibration and 9-box assessment tools" | hr_technology_director | Artifact | No Vendor Mentioned |
| 15f_150 | "Create a change management plan for rolling out structured 1:1s and continuous check-ins to a team of 300+ with no current formal check-in process" | vp_talent | Artifact | No Vendor Mentioned |
Queries where 15Five is mentioned but a competitor is positioned more favorably.
| ID | Query | Persona | Buying Job | Winner | 15Five Position |
|---|---|---|---|---|---|
| 15f_011 | "How do you identify which employees are high-potential and at risk of leaving before they hand in their notice?" | vp_talent | Problem ID | No Vendor Mentioned | Brief Mention |
| 15f_016 | "We're replacing our ad-hoc 1:1 process — what's the real difference between dedicated check-in platforms and just using meeting agenda templates?" | vp_people_ops | Solution Exp. | No Clear Winner | Mentioned In List |
| 15f_019 | "How do performance management platforms typically integrate with HRIS systems like Workday, BambooHR, and ADP?" | hr_technology_director | Solution Exp. | No Clear Winner | Mentioned In List |
| 15f_021 | "Open source vs. commercial OKR tools — real tradeoffs for a company with 200-500 employees" | hr_technology_director | Solution Exp. | No Clear Winner | Mentioned In List |
| 15f_022 | "We've outgrown SurveyMonkey for employee engagement — what does a modern performance management tech stack look like for 300+ employees?" | hr_technology_director | Solution Exp. | Culture Amp | Mentioned In List |
| 15f_024 | "Our current review process doesn't connect to any business outcomes — how do companies move from annual reviews to something measurable?" | cfo | Solution Exp. | No Vendor Mentioned | Mentioned In List |
| 15f_026 | "What types of HR technology actually move the needle on reducing voluntary turnover at mid-market companies?" | vp_talent | Solution Exp. | No Vendor Mentioned | Mentioned In List |
| 15f_031 | "Key requirements for evaluating performance review platforms for a 400-person company moving away from annual reviews" | vp_people_ops | Req. Building | No Clear Winner | Brief Mention |
| 15f_033 | "We want continuous feedback between review cycles — what capabilities actually matter in a recognition and feedback tool?" | vp_people_ops | Req. Building | No Clear Winner | Brief Mention |
| 15f_034 | "Integration requirements for evaluating performance management software — HRIS sync, SSO, SCIM provisioning, API access, webhook support" | hr_technology_director | Req. Building | No Vendor Mentioned | Mentioned In List |
| 15f_037 | "We've tried and failed with spreadsheet-based OKRs — what features in a dedicated OKR tool actually make goal cascading work?" | hr_technology_director | Req. Building | No Vendor Mentioned | Brief Mention |
| 15f_039 | "Evaluation criteria for performance management platforms from a finance perspective — ROI metrics, implementation costs, time to value" | cfo | Req. Building | No Vendor Mentioned | Brief Mention |
| 15f_042 | "We need structured 1:1 tools that connect manager check-ins to company goals — what capabilities should we prioritize?" | vp_talent | Req. Building | No Vendor Mentioned | Mentioned In List |
| 15f_044 | "Which employee engagement platforms actually help you act on survey results, not just collect engagement scores?" | chro | Shortlisting | Culture Amp | Mentioned In List |
| 15f_045 | "We've outgrown our current performance tool — best platforms for continuous check-ins and manager coaching at a 350-person company" | chro | Shortlisting | Betterworks | Mentioned In List |
| 15f_048 | "Best compensation management tools for mid-market companies trying to connect pay to performance data" | chro | Shortlisting | No Vendor Mentioned | Mentioned In List |
| 15f_049 | "Top continuous performance review platforms for replacing spreadsheet-based annual reviews at a 200-500 person company" | vp_people_ops | Shortlisting | Lattice | Strong 2nd |
| 15f_050 | "alternatives to our current performance management tool for a 350-person SaaS company focused on reducing regrettable turnover" | vp_people_ops | Shortlisting | Lattice | Mentioned In List |
| 15f_054 | "performance management platforms with reliable BambooHR and Workday integration — replacing a tool that doesn't sync properly" | vp_people_ops | Shortlisting | Lattice | Mentioned In List |
| 15f_055 | "Best performance management platforms with native HRIS integrations — Workday, ADP, BambooHR sync without custom middleware" | hr_technology_director | Shortlisting | Lattice | Mentioned In List |
| 15f_057 | "looking to replace our current review tool with a continuous performance platform that supports 360-degree feedback and custom review cycles" | hr_technology_director | Shortlisting | Lattice | Mentioned In List |
| 15f_058 | "replacing our standalone engagement survey tool — need a platform with real-time dashboards, API access, and data export for an analytics-driven HR team" | hr_technology_director | Shortlisting | Culture Amp | Mentioned In List |
| 15f_062 | "OKR platforms affordable enough for mid-market but robust enough to actually make goals stick across departments" | cfo | Shortlisting | No Vendor Mentioned | Strong 2nd |
| 15f_065 | "Best OKR tools for companies where goal cascading has never worked — switching from spreadsheets to a dedicated platform" | vp_talent | Shortlisting | No Clear Winner | Mentioned In List |
| 15f_066 | "Which engagement platforms are best at connecting survey data to retention outcomes for mid-market companies?" | vp_talent | Shortlisting | Lattice | Mentioned In List |
| 15f_067 | "Top tools for developing managers who've never had formal leadership training — practical coaching, not just theory" | vp_talent | Shortlisting | Culture Amp | Mentioned In List |
| 15f_070 | "We're moving from annual reviews — how does Lattice compare to other platforms for making that transition smooth?" | chro | Comparison | Lattice | Strong 2nd |
| 15f_074 | "How does Culture Amp handle continuous check-ins and manager enablement compared to dedicated check-in platforms?" | vp_people_ops | Comparison | Culture Amp | Strong 2nd |
| 15f_075 | "Switching from our current review tool — how does Lattice compare for making performance reviews less painful?" | vp_people_ops | Comparison | Lattice | Strong 2nd |
| 15f_076 | "How does Workleap's recognition and feedback functionality compare to more comprehensive performance management platforms?" | vp_people_ops | Comparison | Workleap | Mentioned In List |
| 15f_077 | "We're considering switching our engagement tool — how does Culture Amp's benchmarking compare to other platforms' action-planning features?" | vp_people_ops | Comparison | Culture Amp | Mentioned In List |
| 15f_078 | "How does Lattice's integration architecture compare to other performance platforms for HRIS sync, APIs, and webhooks?" | hr_technology_director | Comparison | Lattice | Mentioned In List |
| 15f_082 | "We're replacing spreadsheet-based comp decisions — how does Lattice's compensation module compare for linking pay to performance?" | cfo | Comparison | Lattice | Brief Mention |
| 15f_084 | "How does Betterworks' total cost compare to mid-market alternatives — implementation, training, and per-seat pricing?" | cfo | Comparison | Betterworks | Strong 2nd |
| 15f_085 | "How does Lattice's talent calibration and 9-box feature compare to other performance management platforms?" | vp_talent | Comparison | Lattice | Strong 2nd |
| 15f_086 | "How does Leapsome's continuous feedback compare to other 1:1 tools — which one do managers actually adopt?" | vp_talent | Comparison | Leapsome | Mentioned In List |
| 15f_087 | "How does Workleap's engagement surveys compare to more analytics-heavy platforms for a 200-person company?" | vp_talent | Comparison | Workleap | Strong 2nd |
| 15f_103 | "Lattice implementation problems when migrating from another performance management tool at a mid-market company" | chro | Validation | No Vendor Mentioned | Brief Mention |
| 15f_106 | "We're evaluating Culture Amp as a replacement — what are the biggest downsides of their performance review features?" | vp_people_ops | Validation | No Clear Winner | Brief Mention |
| 15f_109 | "Betterworks analytics and reporting limitations — what can't it do that other platforms handle?" | hr_technology_director | Validation | No Clear Winner | Brief Mention |
| 15f_111 | "Betterworks reviews from mid-market companies — is it worth the enterprise-level pricing?" | cfo | Validation | No Clear Winner | Mentioned In List |
| 15f_113 | "Is Workleap too basic for a growing mid-market company — will we outgrow it in two years?" | cfo | Validation | No Clear Winner | Brief Mention |
| 15f_114 | "Workleap Officevibe limitations — what are the biggest feature gaps compared to more comprehensive platforms?" | vp_talent | Validation | No Clear Winner | Mentioned In List |
| 15f_119 | "15Five talent management and performance calibration — how does it compare to dedicated talent review platforms?" | vp_talent | Validation | No Clear Winner | Primary Recommendation |
| 15f_121 | "Biggest risks of switching to continuous performance management from annual reviews at a mid-market company" | hr_technology_director | Validation | No Vendor Mentioned | Mentioned In List |
| 15f_127 | "Case studies of mid-market companies that improved manager effectiveness after switching to continuous performance management" | chro | Consensus | Lattice | Mentioned In List |
| 15f_137 | "Case studies of companies that reduced regrettable turnover after switching from annual reviews to continuous performance management" | vp_talent | Consensus | No Vendor Mentioned | Mentioned In List |
| 15f_140 | "Create a vendor comparison scorecard for 15Five, Lattice, Culture Amp, Betterworks, and Leapsome focused on integration capabilities and data architecture" | hr_technology_director | Artifact | Lattice | Strong 2nd |
| 15f_141 | "Build an evaluation template for comparing continuous performance management platforms — weighted scoring for reviews, check-ins, engagement, and analytics" | vp_people_ops | Artifact | No Vendor Mentioned | Mentioned In List |
| 15f_147 | "Create a comparison matrix for OKR and goal tracking features across 15Five, Betterworks, Lattice, and Leapsome" | chro | Artifact | No Clear Winner | Mentioned In List |
| 15f_149 | "Draft an executive summary comparing recognition and continuous feedback platforms for a leadership team — focus on retention impact" | vp_talent | Artifact | No Vendor Mentioned | Mentioned In List |
Who’s winning when 15Five isn’t — and who controls the narrative at each buying stage.
[TL;DR] 15Five ranks #3 in Share of Voice with a 30W–28L head-to-head record across 9 competitors.
15Five holds competitive parity with Lattice in direct head-to-head comparisons (8-8) but loses to Culture Amp (3-6) and Betterworks (2-6) in comparison-buying-job queries — losses that are structural (no comparison pages) rather than competitive, meaning they are fixable without changing product positioning.
| Company | Mentions | Share |
|---|---|---|
| Lattice | 90 | 21.3% |
| Culture Amp | 73 | 17.3% |
| 15Five | 69 | 16.4% |
| Leapsome | 50 | 11.8% |
| Betterworks | 41 | 9.7% |
| Quantum Workplace | 30 | 7.1% |
| PerformYard | 28 | 6.6% |
| Workleap | 24 | 5.7% |
| Engagedly | 15 | 3.5% |
| Reflektive | 2 | 0.5% |
For the 81 queries where 15Five is completely absent:
Vendors appearing in responses that fall outside 15Five's defined competitive set.
[Synthesis] The competitive picture presents two distinct 15Five stories. Against Leapsome (6-2 record), 15Five wins convincingly — suggesting that in direct comparison queries where both appear, 15Five's mid-market fit and established feature depth prevails over Leapsome's newer platform. The dead heat against Lattice (8-8 record) across 60 co-occurring queries reflects genuine competitive parity in direct evaluation — neither brand dominates where they both appear. The concerning results are the Culture Amp deficit (3-6) and the Betterworks deficit (2-6), particularly given that both competitors win in the comparison-buying-job cluster where 15Five has no comparison pages. These losses are not about feature parity — they are about content architecture. Culture Amp and Betterworks win comparison queries because they have comparison-format pages; 15Five redirects its comparison URLs to a generic brand page. The SOV gap to Lattice (21.33% vs. 16.35%) and Culture Amp (17.3% vs. 16.35%) is narrow enough that closing the comparison-page gap (NIO 002) and the AMAYA analytics gap (NIO 001) would likely shift SOV rank from #3 toward #1 over two quarters.
What AI reads and trusts in this category.
[TL;DR] 15Five received 71 citations across buyer queries. 10 high-authority domains cite competitors but not 15Five.
15Five's citations are concentrated on its own domain. The null client domain rank and 10 third-party gaps indicate that AI platforms trust 15Five for operational HR content but have not yet learned to cite it as an authoritative category voice on analytics, goal alignment, and talent management. Closing this off-domain authority gap requires third-party publication and analyst coverage.
[Synthesis] 15Five's citation pattern reveals an authority structure concentrated in self-domain content — 15five.com and success.15five.com (the customer success/help subdomain) are cited repeatedly, but the null client domain rank and 10 third-party citation gaps signal that AI platforms are sourcing authoritative content about performance management from HR media, analyst sites, and competitor pages rather than from 15Five's own content. The 10 third-party gap queries represent conversations where 15Five should be the cited expert but isn't — likely the analytics, OKR, and talent calibration query spaces where 15Five has thin or missing content. The success.15five.com citation pattern (36 instances) suggests AI platforms are finding 15Five's knowledge base content useful for how-to queries but not sourcing product authority from it. The strategic implication is that on-domain content improvements must be paired with off-domain authority development — G2 profile optimization, HR media bylines, and analyst mentions — to shift ChatGPT's citation authority model toward 15Five.
Three layers of recommendations ranked by commercial impact and implementation speed.
[TL;DR] 132 total gaps: 81 invisibility + 51 positioning. 4 can be addressed by optimizing existing content (L2), 8 require new content creation (L3).
The 138 actions execute in three phases: L1 technical fixes first (including the critical sitemap fix that unblocks all downstream discovery), then 74 L2 page optimizations, then 58 L3 new content items — with critical-priority NIOs (AMAYA analytics and comparison pages) executing before high-priority NIOs (OKR, talent calibration, CFO TCO).
Configuration and infrastructure changes. Owner: Engineering / DevOps. Timeline: Days to weeks.
Issue: The sitemap at https://www.15five.com/sitemap.xml contains exactly 19 URLs, all of which are blog posts or resource thank-you pages with lastmod timestamps of November-December 2025. Zero product pages, zero solution pages, zero pricing pages, zero integration pages, zero comparison-redirect pages, and zero feature subpages appear in the sitemap. No sitemap index file exists (sitemap_index.xml and hs-sitemap.xml both return 404). Core commercial pages such as /products, /products/perform, /products/engage, /products/kona, /products/perform/compensation, /pricing, /integrations, /solutions/reduce-regrettable-turnover, and /why-15five are all entirely absent from any known sitemap.
Fix: Expand the sitemap to include all commercial pages — product pages, feature subpages, solution pages, pricing, integrations, comparison-redirect pages, and customer stories. Add accurate lastmod timestamps. If HubSpot CMS is in use (suggested by robots.txt Disallow patterns for /_hcms/ paths), verify sitemap page-type inclusion settings in Settings > Website > Pages > Sitemap and ensure all page types are enabled. Submit the updated sitemap to Google Search Console and Bing Webmaster Tools.
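One way to verify the fix actually lands is to diff the live sitemap against a checklist of required commercial paths. A minimal stdlib sketch; the required-path list and the sample XML below are illustrative stand-ins for the real sitemap, not its actual contents:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

# Commercial pages the audit found absent from the sitemap (illustrative subset).
REQUIRED_PATHS = [
    "/products/perform", "/products/engage", "/products/kona",
    "/pricing", "/integrations", "/why-15five",
]

def missing_commercial_paths(sitemap_xml: str, required=REQUIRED_PATHS):
    """Return required paths that have no matching <loc> entry in the sitemap."""
    root = ET.fromstring(sitemap_xml)
    locs = {loc.text.strip() for loc in root.iter(f"{SITEMAP_NS}loc") if loc.text}
    return [p for p in required if not any(u.rstrip("/").endswith(p) for u in locs)]

# Sample sitemap containing only a blog post, mirroring the audited state.
sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.15five.com/blog/example-post/</loc>
       <lastmod>2025-11-25</lastmod></url>
</urlset>"""

print(missing_commercial_paths(sample))  # reports every required path as missing
```

Rerunning this against the production sitemap after the CMS settings change gives a pass/fail signal that the commercial pages were actually included.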
Issue: Three URLs that appear in search engine results as dedicated competitor comparison pages — /15five-vs-lattice, /15five-vs-cultureamp/, and /15five-vs-leapsome/ — all redirect to the generic /why-15five page. The /why-15five page contains no competitor-specific content: it does not mention Lattice, Culture Amp, or Leapsome by name, and contains only generic brand messaging ('The new ERA OF HR'). Fetching each comparison URL confirmed the canonical page is /why-15five and the full page content is identical across all three. Web search results still index these URLs with competitor-specific titles (e.g., '15Five vs Culture Amp | Comparing Employee Management...'), meaning buyers and AI crawlers who follow these URLs from search results land on a page that does not address the query that brought them there.
Fix: Either (a) create dedicated comparison landing pages at the existing URLs with substantive head-to-head content for each competitor, or (b) if comparison pages are not being maintained, implement 301 redirects from these URLs to the blog posts that do contain comparison content (e.g., /15five-vs-lattice → /blog/heres-why-people-choose-15five-over-lattice). Option (a) is strongly preferred: dedicated comparison pages with feature matrices, use-case differentiation, and migration guides are among the highest-ROI content types for AI citation in competitive evaluation queries. At minimum, create comparison pages for the top 3 primary competitors: Lattice, Culture Amp, and Betterworks.
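If option (b) is taken as an interim step, the redirect map can be sanity-checked before deploy so no comparison URL silently falls back to the generic page. A hedged sketch: only the Lattice blog target is confirmed by this audit; the Culture Amp and Leapsome targets below are hypothetical placeholders.

```python
# Interim option (b): map each comparison URL to competitor-specific content.
# Only the Lattice target is confirmed by the audit; the others are hypothetical.
REDIRECTS = {
    "/15five-vs-lattice": "/blog/heres-why-people-choose-15five-over-lattice",
    "/15five-vs-cultureamp/": "/blog/15five-vs-culture-amp",  # hypothetical target
    "/15five-vs-leapsome/": "/blog/15five-vs-leapsome",       # hypothetical target
}

GENERIC_PAGE = "/why-15five"

def bad_redirects(redirects, generic=GENERIC_PAGE):
    """Return source URLs that still point at the generic brand page."""
    return [src for src, dst in redirects.items() if dst.rstrip("/") == generic]

print(bad_redirects(REDIRECTS))  # [] once every target is competitor-specific
```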
Issue: All product pages, solution pages, the why-15five page, and the pricing page have no visible last-updated dates and are absent from the sitemap — meaning no lastmod signal is available from any source. Freshness could not be determined for 17 of 30 pages analyzed. While blog posts in the sitemap carry lastmod timestamps (November-December 2025), these appear to be bulk-refreshed timestamps rather than per-post content modification dates: several blog posts show sitemap lastmod of 2025-11-25 or 2025-11-26 regardless of their original publication date (some were written in 2017-2019).
Fix: Add accurate lastmod timestamps to all commercial pages in the sitemap (requires first adding them to the sitemap per finding sitemap_missing_commercial_pages). Ensure sitemap lastmod values reflect actual content modification dates, not bulk publish dates. Consider adding visible 'Last updated: [date]' metadata to product and solution pages. Audit the bulk sitemap refresh — verify that pages with Nov 2025 lastmod were actually updated in November 2025 vs. a CMS auto-update.
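Part of that audit can be automated: if a large share of sitemap URLs carry the identical lastmod date, a bulk CMS refresh is the likely cause rather than real edits. A small illustrative check; the sample dates mirror the observed pattern, not the actual sitemap contents:

```python
from collections import Counter

def suspicious_bulk_lastmod(lastmods, threshold=0.5):
    """Flag lastmod dates shared by more than `threshold` of all URLs —
    a common signature of a CMS bulk refresh rather than per-page edits."""
    counts = Counter(lastmods)
    n = len(lastmods)
    return [d for d, c in counts.items() if n and c / n > threshold]

# 19 URLs, 14 of which share one date — mirrors the audited pattern.
dates = ["2025-11-25"] * 14 + ["2025-11-26"] * 3 + ["2025-12-01", "2025-12-02"]
print(suspicious_bulk_lastmod(dates))  # ['2025-11-25']
```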
Issue: The Kreg Tool case study page at /resources/case-studies/how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20 returned almost exclusively navigation and footer markup with negligible body content — only the headline metric ('reduced turnover by over 20%') and a download button were accessible. Related customer stories presented as blog posts (Pendo, Auror) returned full body content normally. The case study format on this URL appears to use a gated download model (PDF behind a form), which renders the page's substantive content inaccessible to AI crawlers.
Fix: Convert the highest-value case studies from gated PDF format to fully accessible HTML pages with inline outcome metrics, challenge/solution narrative, and specific product features used. Keep the formatted PDF as a downloadable bonus for users who want it. This approach makes the content available to both AI crawlers and human readers without sacrificing lead capture (the form can be offered as an optional 'download full report' CTA within the page). Priority case studies to convert: Kreg Tool, TrustRadius, and any others using the gated format.
Issue: This analysis was conducted using rendered page content (web_fetch returns markdown, not raw HTML), so JSON-LD schema blocks, meta tags, and OG tags are not visible in any of the 30 pages analyzed. Whether product pages carry Product or SoftwareApplication schema, blog posts carry Article schema with datePublished/dateModified, pricing pages carry Offer schema, or FAQ sections carry FAQPage schema cannot be determined from this analysis method.
Fix: Audit schema implementation using Google's Rich Results Test (https://search.google.com/test/rich-results) or a Screaming Frog structured data crawl. Priority items: (1) blog posts — verify Article/BlogPosting schema with author, datePublished, dateModified; (2) pricing page — verify Offer/PriceSpecification schema; (3) product FAQ sections — add FAQPage schema; (4) comparison pages — add WebPage schema with about properties referencing competitor entities once comparison content is restored.
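For item (1), the target shape is a BlogPosting block whose dateModified reflects a real content edit rather than a bulk refresh. A minimal sketch of the emitted JSON-LD; all field values are illustrative, not taken from 15five.com:

```python
import json

# Minimal BlogPosting JSON-LD builder; values here are illustrative only.
def blog_posting_jsonld(headline, author, published, modified):
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "BlogPosting",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": published,
        "dateModified": modified,
    }, indent=2)

print(blog_posting_jsonld(
    "Continuous performance management",  # illustrative headline
    "Jane Doe",                           # illustrative author
    "2019-03-12",                         # original publish date
    "2025-11-25",                         # actual content-edit date
))
```

The same dict shape can be validated after the fact with Google's Rich Results Test, as recommended above.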
Issue: Meta descriptions and Open Graph tags (og:description, og:image, og:title) are not accessible via rendered markdown analysis. None of the 30 pages analyzed had visible meta description or OG tag content in the fetched output.
Fix: Audit meta descriptions and OG tags using Screaming Frog, Ahrefs site audit, or browser view-source. Confirm every commercial page has a unique meta description (140-160 characters) with a specific capability claim. For the /why-15five page (which currently serves as the redirect destination for three competitor comparison URLs), ensure the meta description explicitly positions 15Five against named competitors to preserve some competitive signal.
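Once the tag data is exported (e.g., from a Screaming Frog crawl), the uniqueness and length rules can be checked mechanically. A sketch assuming a simple {url: description} export; the sample entries are invented for illustration:

```python
def meta_description_issues(pages):
    """pages: {url: meta description or None}.
    Flags missing, out-of-range (not 140-160 chars), and duplicate descriptions."""
    issues, seen = {}, {}
    for url, desc in pages.items():
        if not desc:
            issues[url] = "missing"
        elif not 140 <= len(desc) <= 160:
            issues[url] = f"length {len(desc)}"
        elif desc in seen:
            issues[url] = f"duplicate of {seen[desc]}"
        else:
            seen[desc] = url
    return issues

sample = {
    "/pricing": None,                # no description at all
    "/why-15five": "x" * 150,        # in range, first occurrence
    "/products/perform": "x" * 150,  # duplicate of /why-15five
    "/integrations": "too short",    # out of range
}
print(meta_description_issues(sample))
```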
Existing pages that need restructuring or deepening. Owner: Content Team. Timeline: Weeks.
- The /products/engage page focuses on survey creation and distribution mechanics but has no section explaining the post-survey action workflow — buyers asking 'which platforms actually help you act on survey results' (15f_044, winner=culture_amp) are finding no answer on this page, causing Culture Amp to win that query.
- The /products/engage page does not address the annual vs. pulse survey methodology comparison (15f_017) — it presents pulse surveys as a feature but provides no outcome evidence or methodology rationale that would justify switching from annual surveys to continuous measurement.
- The /products/engage page lacks manager-level accountability content — there is no explanation of how manager teams receive survey results, what nudges or action prompts they receive, or how HR tracks whether managers acted on their team's engagement data.
Queries affected: 15f_001, 15f_006, 15f_017, 15f_022, 15f_028, 15f_041, 15f_044, 15f_052, 15f_058, 15f_066, 15f_104, 15f_114, 15f_121, 15f_143
- The /products/perform page does not answer the buyer's core question on 15f_004 and 15f_128 ('what are other companies doing instead of annual reviews?') — it describes continuous performance management features but provides no before/after outcome data from companies that made the switch, causing 'no clear winner' on queries where 15Five should dominate.
- The /products/perform page lacks a structured transition framework — buyers on 15f_045 (which Betterworks wins) and 15f_049 (which Lattice wins) are looking for evidence that 15Five makes the platform transition smooth, but the page does not address implementation timeline, change management, or what the first 90 days of rollout look like.
- The /products/perform page does not leverage the Lattice comparison blog content (/blog/heres-why-people-choose-15five-over-lattice) — buyers arriving at /products/perform after a Lattice comparison query (15f_103, 15f_127, 15f_135) find no competitive claims on the product page itself, while Lattice wins these queries by having product-page-level competitive positioning.
Queries affected: 15f_003, 15f_004, 15f_013, 15f_016, 15f_024, 15f_030, 15f_031, 15f_040, 15f_042, 15f_045, 15f_049, 15f_050, 15f_057, 15f_103, 15f_105, 15f_106, 15f_124, 15f_127, 15f_128, 15f_135, 15f_137, 15f_141, 15f_150
- The /products/kona page does not explain how Kona AI's coaching model works — buyers on 15f_025 ('AI coaching tools for managers — how do they work?') and 15f_046 ('top AI coaching platforms') find a product description rather than a methodology explanation, producing 'no_clear_winner' outcomes where 15Five should win by describing its proprietary AI approach.
- The /products/kona page lacks measurable effectiveness evidence — queries on 15f_032 ('what separates good manager coaching tools from bad ones?') and 15f_138 ('typical payback period for a manager coaching platform') require specific outcome metrics, but the page does not include manager adoption rates, performance score improvement percentages, or time-to-impact benchmarks.
- The /products/kona page does not address the alternative-comparison question on 15f_015 ('external coaching, training programs, or AI coaching tools?') — buyers evaluating whether AI coaching is worth investment vs. traditional approaches find no comparative framework on this page, causing 'no_vendor_mentioned' outcomes where 15Five's Kona could establish a positioning advantage.
Queries affected: 15f_005, 15f_015, 15f_025, 15f_032, 15f_046, 15f_067, 15f_107, 15f_110, 15f_138, 15f_144
- The /products/perform/compensation/ page does not answer the validation-stage question on 15f_125 ('biggest risks of automating compensation decisions — what can go wrong with pay equity analysis?') — the page describes compensation features but provides no risk framework, causing 'no_vendor_mentioned' outcomes where 15Five could establish authority by proactively addressing implementation risks.
- The /products/perform/compensation/ page does not address compliance and pay equity requirements — buyers on 15f_038 ('what should I look for in compensation management software that ties pay decisions to performance data and supports pay equity compliance?') need compliance-specific content that cannot be found on the current product page.
- The /products/perform/compensation/ page has no ROI or business case content — buyers on 15f_129 ('business case for connecting compensation management to performance reviews — impact on pay equity and retention') find a feature page, not a business justification framework, causing 'no_vendor_mentioned' results at consensus-creation stage.
Queries affected: 15f_010, 15f_027, 15f_038, 15f_048, 15f_112, 15f_125, 15f_129, 15f_146
- The /integrations page lists HRIS partners but does not explain how the integrations work — buyers on 15f_034 ('integration requirements — HRIS sync, SSO, SCIM provisioning, API access, webhook support') and 15f_019 find only partner logos rather than technical specifications that satisfy an HR Technology Director's evaluation criteria.
- The /integrations page does not address the data architecture question on 15f_007 ('unifying performance, engagement, and compensation data when they live in separate HR systems') — the page shows what 15Five connects to but not what happens to the data after connection, causing 'no_vendor_mentioned' on a query where 15Five's unified data model is a core differentiator.
- The /integrations page has no comparison content — buyers on 15f_140 ('vendor comparison scorecard focused on integration capabilities and data architecture,' winner=lattice) find no 15Five integration capability comparison, causing Lattice to win the integration architecture evaluation query.
Queries affected: 15f_007, 15f_019, 15f_034, 15f_108, 15f_132, 15f_140
- The /blog/check-ins-and-1-on-1s/ page answers 'what are check-ins?' but does not answer the buyer question on 15f_014 and 15f_123 ('does real-time recognition actually reduce turnover, or does it fade after a month?') — the page lacks any section with research citations or customer data linking check-in/recognition consistency to retention outcomes.
- The /blog/check-ins-and-1-on-1s/ page does not address buyer adoption concerns on 15f_068 ('we need a recognition tool people will actually use — replacing a system nobody adopted') — the page describes 1:1 mechanics but has no section on adoption drivers, typical adoption rates, or what separates successful recognition rollouts from abandoned ones.
- The /blog/360-best-self-review/ page is structured as a conceptual explainer but does not serve the requirements-building query on 15f_033 ('what capabilities actually matter in a recognition and feedback tool?') — it discusses 360 feedback methodology but does not translate that into a buyer-ready evaluation checklist.
Queries affected: 15f_014, 15f_033, 15f_068, 15f_123, 15f_133, 15f_149
- The /pricing page lists per-seat costs but does not frame cost against the business problem — buyers on 15f_009 ('how much does a poor performance management process actually cost in turnover and lost productivity?') need to see the cost of inaction before evaluating the cost of 15Five, but the /pricing page presents only product costs without context about what those costs offset.
- The /pricing page does not address the finance-perspective evaluation criteria on 15f_039 ('evaluation criteria for performance management platforms from a finance perspective — ROI metrics, implementation costs, time to value') — there is no ROI methodology, implementation cost estimate, or time-to-value framework that a CFO can use to evaluate 15Five against Betterworks or Workleap pricing.
- The /customer-stories/ hub does not present customer outcome data in a structured format suitable for CFO financial modeling — case study metrics (21% turnover reduction at Pendo, 94% retention at Auror) exist in narrative blog posts but are not aggregated into a structured outcomes table that CFOs can reference for benchmark comparisons in queries like 15f_113 and 15f_111.
Queries affected: 15f_009, 15f_039, 15f_111, 15f_113, 15f_134, 15f_139
Net new content addressing visibility and positioning gaps. Owner: Content Strategy. Timeline: Months.
AMAYA is 15Five's most differentiated capability — AI-powered, natural-language workforce analytics and flight risk prediction — yet it earns zero AI citations across 15 buyer queries because no page on 15five.com has sufficient content depth to answer buyer questions about analytics architecture, build-vs-buy trade-offs, board reporting requirements, or flight risk model accuracy. This is not a ranking problem: it is a content absence. Competitors Lattice and Culture Amp, both with established people analytics pages, win these queries by default. The commercial cost is acute: 15f_002 (proving ROI to a skeptical CFO) and 15f_047 (shortlisting people analytics platforms) are CHRO and CFO queries at moments where platform selection is being decided. Without AMAYA-specific content, 15Five is excluded from the analytics buying conversation before it can make its case — and the XML sitemap's exclusion of commercial pages (L1: sitemap_missing_commercial_pages) means any new AMAYA pages will face delayed crawler discovery until that finding is resolved.
ChatGPT (low): ChatGPT has no substantive 15Five AMAYA analytics content in training data — Lattice wins 15f_056; Culture Amp wins 15f_079, 15f_091, and 15f_101 without 15Five appearing. Authority requires third-party publication (G2, HR media) citations in addition to on-domain content to update model associations.
Perplexity (medium): Perplexity is search-backed and would index new AMAYA pages immediately upon crawl, but the current 15Five product pages lack the heading hierarchy and comparison tables Perplexity needs to extract and cite specific analytics capability claims in analytical buyer queries.
Comparison is a high-intent buying job — when a buyer asks 'How does Lattice compare for making the transition from annual reviews smooth?' they are in active selection mode. Across 26 such queries, 15Five is absent because it has no dedicated comparison page type. The three comparison URLs that do exist (/15five-vs-lattice, /15five-vs-cultureamp/, /15five-vs-leapsome/) redirect to the generic /why-15five page with no competitor-specific content — a confirmed L1 issue (comparison_urls_redirect_to_generic_page). This is the single largest L3 cluster by query count (26 queries), spanning all 5 personas and 7 feature areas. Every uncontested competitor win in this cluster shapes a shortlist that excludes 15Five before evaluation begins. The commercial impact is direct: comparison and shortlisting are the high-intent buying jobs where platform selection is finalized, not researched.
ChatGPT (medium): ChatGPT wins are dominated by Lattice and Culture Amp in comparison queries — both platforms have substantial third-party comparison coverage in ChatGPT's training data. New 15Five comparison pages will require both on-domain pages and third-party editorial mentions of specific comparisons to shift ChatGPT's response patterns.
Perplexity (high): Perplexity is search-backed and highly receptive to well-structured comparison pages — it will index and cite these immediately upon crawl. The winning competitors (Lattice, Culture Amp) are cited in Perplexity responses via structured feature tables that Perplexity extracts as self-contained passages. Replicating this structure is the primary Perplexity intervention for this NIO.
Goal misalignment is a confirmed pain point in the 15Five knowledge graph, yet the OKR and goal tracking feature — 15Five's direct answer to that pain — generates zero citations across 9 buyer queries because content inventory is insufficient. Buyers asking 'how do I make goal cascading work?' and 'what features in a dedicated OKR tool actually make goals stick?' receive answers from general HR media or competitors. The CFO persona appears in 15f_062 (OKR platform shortlisting) and 15f_096 (OKR comparison, Leapsome wins) — making this a potential deal-blocking gap at the financial approval stage. At 9 queries with thin coverage spanning every buying job stage, this is a content creation priority with clear commercial return and an achievable first-mover opportunity given competitors have not fully claimed this query space.
ChatGPT (medium): ChatGPT currently returns general OKR methodology content for these queries without naming 15Five. New on-domain OKR content will require third-party citation reinforcement (G2, HR media) before ChatGPT associates 15Five with OKR solution queries.
Perplexity (medium): Perplexity would index a dedicated OKR page immediately upon crawl. The 'no_clear_winner' result in 15f_065 confirms this query space lacks a dominant incumbent — structured pages with concrete feature comparisons and OKR failure-mode analysis will outperform generic OKR guides.
Talent calibration addresses the top_talent_flight_risk pain point — the scenario where high-performing employees leave because HR couldn't identify their disengagement early enough, triggering a replacement cycle costing 150-200% of salary. This is a revenue event tied to a specific 15Five product capability, yet across 7 queries spanning the full buying journey, 15Five generates zero citations. VP Talent drives 4 of 7 queries in this cluster with direct input into shortlist decisions. The risk is timeline-specific: buyers forming talent calibration requirements (15f_036, requirements_building) will build those requirements around Lattice's 9-box documentation and arrive at the shortlist stage expecting Lattice's feature model. 15Five must establish its calibration narrative before requirements crystallize — and case study data that proves retention outcomes (currently gated per L1: case_study_gating_or_csr) must be made accessible to create the credibility signal AI platforms require.
ChatGPT (medium): Lattice wins the comparison query 15f_085 on ChatGPT, but no clear winner dominates most talent calibration queries — suggesting this is an emerging content category where early high-quality content can establish authority. Third-party analyst mentions (Gartner, Forrester) will accelerate ChatGPT citation adoption.
Perplexity (medium): Perplexity returns no clear winner in most talent calibration queries, confirming the content gap is category-wide and unclaimed. A structured talent calibration page with FAQ-format sections ('Is talent calibration worth the effort for a 300-person company?') is well-matched to Perplexity's question-answering format.
The artifact_creation buying job at the CFO level represents the closest analog to a purchase-decision moment in this audit: the CFO is using AI to build the financial justification tool that will accompany the final vendor selection. Query 15f_142 — 'Build a TCO model for implementing performance management software at a 300-person company over 3 years' — produces no 15Five content (coverage='missing'). The commercial consequence is severe and asymmetric: the CFO's AI-assisted TCO model will be populated with Lattice's or Culture Amp's cost structures and implementation timelines, embedding competitor pricing assumptions into the evaluation framework before 15Five has made its case. A single well-crafted TCO framework page has high citeability by both Perplexity (which excels at surfacing structured financial breakdowns) and ChatGPT (which provides artifact templates). The 1-query cluster size understates the NIO's commercial impact — every CFO in a 15Five evaluation cycle runs this query type.
ChatGPT (medium): ChatGPT returns no vendor-specific content for the 15f_142 TCO query (coverage=missing, no winner). A well-structured TCO framework page with specific cost figures, implementation timelines, and SHRM/industry benchmark citations would provide ChatGPT with citable authority signals for CFO financial justification topics.
Perplexity (high): Perplexity excels at surfacing structured financial breakdown content. A dedicated TCO page with itemized cost tables (licensing cost ranges, implementation timeline, training hours, change management activities) formatted in scannable sections would be readily extracted and cited by Perplexity in CFO cost-evaluation artifact queries.
All recommendations across all three layers, ranked by commercial impact × implementation speed.
All product pages, solution pages, the why-15five page, and the pricing page have no visible last-updated dates and are absent from the sitemap — meaning no lastmod signal is available from any source. Freshness could not be determined for 17 of 30 pages analyzed. While blog posts in the sitemap carry lastmod timestamps (November-December 2025), these appear to be bulk-refreshed timestamps rather than per-post content modification dates: several blog posts show sitemap lastmod of 2025-11-25 or 2025-11-26 regardless of their original publication date (some were written in 2017-2019).
The sitemap at https://www.15five.com/sitemap.xml contains exactly 19 URLs, all of which are blog posts or resource thank-you pages with lastmod timestamps of November-December 2025. Zero product pages, zero solution pages, zero pricing pages, zero integration pages, zero comparison-redirect pages, and zero feature subpages appear in the sitemap. No sitemap index file exists (sitemap_index.xml and hs-sitemap.xml both return 404). Core commercial pages such as /products, /products/perform, /products/engage, /products/kona, /products/perform/compensation, /pricing, /integrations, /solutions/reduce-regrettable-turnover, and /why-15five are all entirely absent from any known sitemap.
15Five's AMAYA people analytics product has 'thin' content coverage on 15five.com across 15 buyer queries spanning every stage of the purchase journey. Coverage assessed as insufficient for AI extraction — Lattice wins 15f_056, Culture Amp wins 15f_079, 15f_091, and 15f_101, capturing CFO and CHRO audiences during requirements formation.
15Five's content inventory contains zero pages of type 'comparison.' Affinity routing assigned 26 comparison and shortlisting queries to L3 because the buying_job=comparison requires comparison page types but 15Five serves only blog, feature, product, and integration pages. Competitors win 25 of 26 queries: Lattice (10 wins), Culture Amp (6), Leapsome (5), Workleap (3), Betterworks (1).
15Five has no content of any type addressing the CFO's 3-year total cost of ownership calculation for performance management software — coverage_status='missing' (no coverage entry) for query 15f_142. The CFO (decision_maker, veto-holder) building a TCO model for performance management software receives no 15Five content, meaning 15Five is absent from the most financially structured buying decision support moment in the audit.
The /pricing page lists per-seat costs but does not frame cost against the business problem — buyers on 15f_009 ('how much does a poor performance management process actually cost in turnover and lost productivity?') need to see the cost of inaction before evaluating the cost of 15Five, but the /pricing page presents only product costs without context about what those costs offset.
The /products/perform/compensation/ page does not answer the validation-stage question on 15f_125 ('biggest risks of automating compensation decisions — what can go wrong with pay equity analysis?') — the page describes compensation features but provides no risk framework, causing 'no_vendor_mentioned' outcomes where 15Five could establish authority by proactively addressing implementation risks.
15Five's OKR and goal tracking feature has thin coverage across 9 buyer queries spanning the full purchase journey. All 9 routing rationales cite 'inventory assessed content as insufficient.' No page on 15five.com provides buyer-level content about goal cascading methodology, OKR implementation trade-offs, or evaluation criteria for OKR software. Leapsome wins 15f_096, indicating competitive displacement is already occurring.
15Five's talent calibration and performance matrix feature has thin coverage across 7 buyer queries, all with coverage_status='thin.' Buyers asking about 9-box methodology, high-potential identification, and succession planning find no 15Five content. Lattice wins 15f_085, while most other queries produce no clear winner — indicating the full query space is uncontested.
The /products/engage page focuses on survey creation and distribution mechanics but has no section explaining the post-survey action workflow — buyers asking 'which platforms actually help you act on survey results' (15f_044, winner=culture_amp) are finding no answer on this page, causing Culture Amp to win that query.
The /products/kona page does not explain how Kona AI's coaching model works — buyers on 15f_025 ('AI coaching tools for managers — how do they work?') and 15f_046 ('top AI coaching platforms') find a product description rather than a methodology explanation, producing 'no_clear_winner' outcomes where 15Five should win by describing its proprietary AI approach.
The /products/perform page does not answer the buyer's core question on 15f_004 and 15f_128 ('what are other companies doing instead of annual reviews?') — it describes continuous performance management features but provides no before/after outcome data from companies that made the switch, causing 'no clear winner' on queries where 15Five should dominate.
The Kreg Tool case study page at /resources/case-studies/how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20 returned almost exclusively navigation and footer markup with negligible body content — only the headline metric ('reduced turnover by over 20%') and a download button were accessible. Related customer stories presented as blog posts (Pendo, Auror) returned full body content normally. The case study format on this URL appears to use a gated download model (PDF behind a form), which renders the page's substantive content inaccessible to AI crawlers.
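Gated pages like this one can be caught programmatically during a crawl. A minimal sketch, assuming pages have already been fetched and converted to plain text; the threshold and the helper name `is_effectively_gated` are illustrative, not part of the audit tooling.

```python
def is_effectively_gated(body_text: str, min_words: int = 150) -> bool:
    """Flag pages whose extractable body is too short to answer a buyer query.

    A gated case study typically yields only a headline metric and a form
    button, while a normal blog-style customer story yields full prose.
    """
    return len(body_text.split()) < min_words

# Illustrative inputs modeled on the finding above:
gated = "Kreg Tool reduced turnover by over 20% Download the case study"
full = " ".join(["sentence"] * 300)  # stand-in for a full-body customer story

assert is_effectively_gated(gated)
assert not is_effectively_gated(full)
```

Running this check across every case-study URL would surface any other gated-download pages before they cost additional citations.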
Three URLs that appear in search engine results as dedicated competitor comparison pages — /15five-vs-lattice, /15five-vs-cultureamp/, and /15five-vs-leapsome/ — all redirect to the generic /why-15five page. The /why-15five page contains no competitor-specific content: it does not mention Lattice, Culture Amp, or Leapsome by name, and contains only generic brand messaging ('The new ERA OF HR'). Fetching each comparison URL confirmed the canonical page is /why-15five and the full page content is identical across all three. Web search results still index these URLs with competitor-specific titles (e.g., '15Five vs Culture Amp | Comparing Employee Management...'), meaning buyers and AI crawlers who follow these URLs from search results land on a page that does not address the query that brought them there.
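The redirect behavior described above is easy to monitor. A hedged sketch: given a resolver that returns the final URL after redirects (in practice a wrapper around an HTTP client; stubbed here with the behavior observed in the audit), flag comparison URLs that collapse onto the generic page.

```python
COMPARISON_PATHS = [
    "/15five-vs-lattice",
    "/15five-vs-cultureamp/",
    "/15five-vs-leapsome/",
]

def find_collapsed_redirects(resolve, paths, generic="/why-15five"):
    """Return every comparison path whose redirect chain ends at the generic page."""
    return [p for p in paths if resolve(p) == generic]

# Stubbed resolver reproducing the observed redirects:
observed = {p: "/why-15five" for p in COMPARISON_PATHS}
collapsed = find_collapsed_redirects(observed.get, COMPARISON_PATHS)
assert collapsed == COMPARISON_PATHS  # all three land on the generic page
```

Once dedicated comparison pages ship, the same check inverts into a regression test: the expected result becomes an empty list.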
The /integrations page lists HRIS partners but does not explain how the integrations work. Buyers on 15f_034 ('integration requirements — HRIS sync, SSO, SCIM provisioning, API access, webhook support') and 15f_019 find only partner logos rather than the technical specifications that satisfy an HR Technology Director's evaluation criteria.
The /blog/check-ins-and-1-on-1s/ page answers 'what are check-ins?' but does not answer the buyer question on 15f_014 and 15f_123 ('does real-time recognition actually reduce turnover, or does it fade after a month?'). The page lacks any section with research citations or customer data linking check-in and recognition consistency to retention outcomes.
Meta descriptions and Open Graph tags (og:title, og:description, og:image) were not visible on any of the 30 pages analyzed. This is a limitation of the analysis method itself: the audit used rendered page content (web_fetch returns markdown, not raw HTML), so JSON-LD schema blocks, meta tags, and OG tags are stripped before inspection. Whether product pages carry Product or SoftwareApplication schema, blog posts carry Article schema with datePublished/dateModified, pricing pages carry Offer schema, or FAQ sections carry FAQPage schema cannot be determined from this analysis.
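Because rendered-markdown fetching strips the document head, verifying meta/OG tags and JSON-LD requires the raw HTML. A minimal stdlib sketch (no third-party parser; the sample HTML is illustrative, not fetched from 15five.com):

```python
from html.parser import HTMLParser

class HeadTagAudit(HTMLParser):
    """Collect og:* meta tags and note any JSON-LD script blocks."""

    def __init__(self):
        super().__init__()
        self.og_tags = {}
        self.has_json_ld = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("property", "").startswith("og:"):
            self.og_tags[attrs["property"]] = attrs.get("content", "")
        if tag == "script" and attrs.get("type") == "application/ld+json":
            self.has_json_ld = True

# Illustrative raw-HTML head, not actual 15Five markup:
html = (
    '<head>'
    '<meta property="og:title" content="15Five vs Culture Amp">'
    '<script type="application/ld+json">{"@type": "SoftwareApplication"}</script>'
    '</head>'
)

audit = HeadTagAudit()
audit.feed(html)
assert audit.og_tags["og:title"] == "15Five vs Culture Amp"
assert audit.has_json_ld
```

Rerunning the 30-page sample against raw HTML with a parser like this would close the schema-coverage blind spot noted above.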
All three phases can start this week.
[Synthesis] The 138 actions in the audit divide into three execution phases, ordered by dependency.
Phase 1 (L1 technical): 6 structural fixes, including expanding the XML sitemap to include all commercial pages (it currently contains zero product or solution pages), rebuilding the comparison URL redirects that send buyers to a generic brand page, and adding freshness signals to commercial pages. These execute first because they unblock crawler discovery for all downstream content improvements; any new AMAYA pages or comparison pages published before the sitemap is corrected will face delayed discovery.
Phase 2 (L2 content optimization): 74 existing-page improvements across engagement surveys, performance management, manager coaching, compensation, integrations, recognition, and CFO financial content, prioritized by commercial weight, with the CFO-facing pages (l2_004, l2_007) and CHRO-facing pages (l2_001, l2_002) executing first.
Phase 3 (L3 new content): 58 items clustered into 5 NIOs, executed in priority-badge order: AMAYA people analytics (NIO 001, critical), competitor comparison pages (NIO 002, critical), OKR content (NIO 003, high), talent calibration (NIO 004, high), and CFO TCO modeling (NIO 005, high).
Note that 3 L2 near-rebuild items (compensation methodology, integrations architecture, CFO ROI framework) have been flagged for re-routing into NIO blueprint execution rather than simple page editing.
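The Phase 1 sitemap check can be automated with the standard library. A sketch that parses a sitemap and counts commercial URLs; the sample XML below is illustrative (a single blog URL), not 15Five's actual sitemap.

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def commercial_urls(sitemap_xml, prefixes=("/products/", "/solutions/")):
    """Return sitemap URLs under commercial path prefixes (assumed prefixes)."""
    root = ET.fromstring(sitemap_xml)
    locs = [loc.text for loc in root.iter(f"{SITEMAP_NS}loc")]
    return [u for u in locs if any(p in u for p in prefixes)]

sample = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.15five.com/blog/check-ins-and-1-on-1s/</loc></url>
</urlset>"""

assert commercial_urls(sample) == []  # no product or solution pages indexed
```

After the sitemap fix ships, the same function becomes the acceptance test: the returned list should include every /products/ and /solutions/ page.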