Competitive intelligence for AI-mediated buying decisions. Where 15Five wins, where it loses, and a prioritized three-layer execution plan — built from 150 buyer queries across ChatGPT + Perplexity.
[Narrative] 15Five enters AI-mediated vendor evaluation with a split identity: strong brand recall at the shortlist stage but near-invisibility where enterprise buying conversations actually begin. Across 150 queries modeled on five buyer personas and eight buying jobs, 15Five achieved 46% overall visibility (metrics.hero.overall_visibility.rate = 0.46) and ranks #3 in share of voice at 16.35% (69 mentions, metrics.competitive.share_of_voice rank 3), behind Lattice (21.33%) and Culture Amp (17.3%). At the shortlisting stage — where buyers request tool recommendations — visibility climbs to 81% and the win rate reaches 38%, confirming the brand is competitive once it reaches a shortlist context.

The problem is upstream. Across the three early-funnel buying stages (problem identification, solution exploration, and requirements building), 15Five was invisible in 69% of queries: for every 10 buyers who describe their HR performance management problem to an AI assistant, nearly 7 receive a response that never mentions 15Five, creating a shortlist deficit before the brand even reaches the evaluation stage. The platform's 34.69% high-intent win rate (metrics.hero.high_intent_win_rate.rate = 0.3469) and 24-percentage-point visibility-to-win gap (metrics.hero.key_gap.vis_to_win_gap_pp = 24) signal that even when 15Five appears, it frequently plays catch-up to competitors who framed the category first. The root causes are structural: 15Five has no dedicated competitor comparison pages (all 24 comparison-stage queries carry affinity override flags); three underdocumented capabilities — including people analytics and talent calibration — surface 50%+ win rates when present but remain too thin to appear consistently; and six L1 technical issues persist, including a sitemap that omits all commercial pages.
The fix is a 138-action plan — 6 L1 technical fixes, 74 L2 content optimizations, 58 L3 new content items — that systematically closes both the early-funnel discovery gap and the comparison-stage content architecture gap. Repairing these gaps positions 15Five to earn top-of-funnel visibility before its competitors do, then convert that visibility at the comparison stage where it currently loses to better-documented competitors.
[Core Insight] 15Five wins the shortlist but loses the discovery conversation. AI systems surface 15Five in 81% of shortlisting queries (metrics.visibility.by_buying_job shortlisting.rate = 0.8077), but the platform is invisible in 69% of early-funnel buying jobs — problem identification, solution exploration, and requirements building — where buyers form the vendor universe they later shortlist from. Competitors who appear at discovery stage define the evaluation framework; 15Five arrives after the shortlist is already shaped.
Where 15Five appears and where it doesn't — across personas, buying jobs, and platforms.
[TL;DR] 15Five appears in 46% of buyer queries and wins 26.1% of them. Converting visibility into wins is the primary challenge (a 20pp gap). High-intent queries run higher, at 59.0% visibility.
15Five's brand equity is highest at the shortlist but non-existent at the start of the buyer journey — 81% shortlisting visibility against 69% early-funnel invisibility means buyers arrive at the shortlisting stage already predisposed toward competitors who framed the category first. The 28pp win-rate gap between decision-makers and evaluators reveals that the technical-buyer content gap is the specific lever for improving overall win rates.
| Dimension | Combined | ChatGPT | Perplexity | Delta |
|---|---|---|---|---|
| All Queries | 46% | 28.7% | 40.9% | Perplexity +12pp |
| Response count | 150 queries | 150 responses | 149 responses | |
| By Persona | | | | |
| Chief Financial Officer | 52.2% | 34.8% | 47.8% | Perplexity +13pp |
| Chief People Officer | 38.2% | 26.5% | 38.2% | Perplexity +12pp |
| Director of HR Technology & People Analytics | 46.9% | 31.2% | 37.5% | Perplexity +6pp |
| VP of People Operations | 46.9% | 37.5% | 37.5% | 0pp |
| VP of Talent Management | 48.3% | 13.8% | 46.4% | Perplexity +33pp |
| By Buying Job | | | | |
| Artifact Creation | 33.3% | 16.7% | 36.4% | Perplexity +20pp |
| Comparison | 47.1% | 41.2% | 38.2% | ChatGPT +3pp |
| Consensus Creation | 23.1% | 0% | 23.1% | Perplexity +23pp |
| Problem Identification | 8.3% | 0% | 8.3% | Perplexity +8pp |
| Requirements Building | 40% | 13.3% | 33.3% | Perplexity +20pp |
| Shortlisting | 80.8% | 53.8% | 76.9% | Perplexity +23pp |
| Solution Exploration | 40% | 20% | 26.7% | Perplexity +7pp |
| Validation | 52.2% | 34.8% | 47.8% | Perplexity +13pp |
[Data] Overall: 46% (69/150 queries, metrics.hero.overall_visibility). Shortlisting: 81% visibility / 38% win rate. Comparison: 47% / 31%. Validation: 52% / 33%. Problem identification: 8% visibility (1/12 queries). Solution exploration: 40% (6/15). Requirements building: 40% (6/15). Early-funnel invisibility: 69% across these three stages (metrics.funnel_metrics.early_funnel_invisibility_rate). Decision-maker win rate: 44%. Evaluator win rate: 15.9%. Role gap: 28pp (metrics.hero.key_gap.role_gap_pp). ChatGPT 12pp lower than Perplexity (metrics.visibility.platform_delta.value_pp).

[Synthesis] The 69% early-funnel invisibility rate is not a sampling artifact — it reflects a systematic content gap at the stages where buyers define what 'good' looks like before they name vendors. At problem identification specifically, 15Five appears in only 8% of queries (1/12): buyers describing their HR pain in their own words almost never receive 15Five as a contextual reference. The 28-point gap between the decision-maker win rate (44%) and the evaluator win rate (15.9%) reveals a secondary dimension: evaluators — the Director of HR Technology and the VP of People Operations, who conduct technical due diligence — produce the weakest win rates despite above-average visibility. These are the personas who ask the integration-architecture, analytics-depth, and talent-calibration questions that 15Five's current content cannot answer. The 12pp ChatGPT disadvantage vs. Perplexity suggests ChatGPT's more conservative citation patterns suppress 15Five in the educational query formats that dominate the early funnel.
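The headline figures above are simple ratios over the reported query counts. A minimal sketch recomputing them from the raw numbers (dictionary keys are illustrative labels, not the report's actual metrics schema):

```python
# Recompute headline funnel metrics from the raw counts in the [Data] block.
# Keys are illustrative labels, not the report's actual metrics schema.
counts = {
    "overall": (69, 150),               # visible queries / total queries
    "problem_identification": (1, 12),  # visible / total at this stage
    "solution_exploration": (6, 15),
    "requirements_building": (6, 15),
}

overall_visibility = counts["overall"][0] / counts["overall"][1]

early_stages = ("problem_identification", "solution_exploration",
                "requirements_building")
early_visible = sum(counts[s][0] for s in early_stages)   # 13 of 42
early_total = sum(counts[s][1] for s in early_stages)
early_funnel_invisibility = 1 - early_visible / early_total

# Role gap in percentage points: decision-maker vs. evaluator win rate.
role_gap_pp = 44.0 - 15.9

print(round(overall_visibility, 2),
      round(early_funnel_invisibility, 2),
      round(role_gap_pp, 1))
# → 0.46 0.69 28.1
```

The 28pp role gap quoted in the report is this 28.1 figure rounded to whole points.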
Queries where 15Five is not mentioned in any AI response.
| ID | Query | Persona | Buying Job | Winner |
|---|---|---|---|---|
| 15f_001 | "What are the warning signs that good employees are about to leave — and how do mid-market companies catch this early?" | chro | Problem ID | No Vendor Mentioned |
| 15f_002 | "How do you prove to a skeptical CFO that people programs actually reduce turnover and save money?" | chro | Problem ID | No Vendor Mentioned |
| 15f_003 | "What are mid-market companies doing about managers who were promoted for technical skills but can't actually manage people?" | chro | Problem ID | No Vendor Mentioned |
| 15f_004 | "Our annual reviews take two months of admin time and managers dread them — what are other companies doing instead?" | vp_people_ops | Problem ID | No Clear Winner |
| 15f_005 | "Biggest challenges with first-time managers who were promoted for individual performance, not people skills" | vp_people_ops | Problem ID | No Vendor Mentioned |
| 15f_006 | "We've been running engagement surveys for two years and employees keep asking what actually changed — how do you close the loop?" | vp_people_ops | Problem ID | No Vendor Mentioned |
| 15f_007 | "What are the main approaches to unifying performance, engagement, and compensation data when they live in separate HR systems?" | hr_technology_director | Problem ID | No Vendor Mentioned |
| 15f_008 | "We have performance data in one system, engagement in another, and comp in spreadsheets — how do other mid-market companies fix this?" | hr_technology_director | Problem ID | No Vendor Mentioned |
| 15f_009 | "How much does a poor performance management process actually cost a mid-market company in turnover and lost productivity?" | cfo | Problem ID | No Vendor Mentioned |
| 15f_010 | "We're linking compensation to performance using spreadsheets and it's becoming a liability — how do other companies handle this?" | cfo | Problem ID | No Vendor Mentioned |
| 15f_012 | "Our company sets quarterly OKRs but nobody below the VP level can explain what their goals are — is there a better way to cascade them?" | vp_talent | Problem ID | No Vendor Mentioned |
| 15f_013 | "What's the difference between continuous performance management and traditional annual reviews — does continuous actually produce better outcomes?" | chro | Solution Exp. | No Vendor Mentioned |
| 15f_014 | "Does real-time employee recognition actually reduce turnover, or is it a feel-good feature that fades after a month?" | chro | Solution Exp. | No Vendor Mentioned |
| 15f_015 | "Main approaches to developing managers at scale — external coaching, training programs, or AI coaching tools?" | chro | Solution Exp. | No Vendor Mentioned |
| 15f_017 | "Annual engagement surveys vs. real-time pulse checks — which actually drives improvement at a mid-market company?" | vp_people_ops | Solution Exp. | No Vendor Mentioned |
| 15f_018 | "How does talent calibration work in practice — is it worth the administrative effort for a 300-person company?" | vp_people_ops | Solution Exp. | No Vendor Mentioned |
| 15f_020 | "Build vs. buy for people analytics — when should a mid-market company invest in a vendor vs. building dashboards in Tableau or Looker?" | hr_technology_director | Solution Exp. | No Vendor Mentioned |
| 15f_023 | "What workforce data should HR be reporting to the board, and what tools make that easier than building custom reports?" | cfo | Solution Exp. | No Vendor Mentioned |
| 15f_025 | "AI coaching tools for managers — how do they work and is there evidence they actually improve manager effectiveness?" | vp_talent | Solution Exp. | No Vendor Mentioned |
| 15f_027 | "How are mid-market companies connecting pay decisions to performance data without turning compensation into a political battle?" | vp_talent | Solution Exp. | No Vendor Mentioned |
| 15f_028 | "Must-have vs. nice-to-have features in an employee engagement platform for a mid-market company with 300+ employees" | chro | Req. Building | No Vendor Mentioned |
| 15f_029 | "What data should a people analytics tool connect to predict which employees are flight risks — performance scores, engagement trends, compensation?" | chro | Req. Building | No Vendor Mentioned |
| 15f_030 | "We're replacing our current check-in process — what features matter most in a continuous performance management tool for hybrid teams?" | chro | Req. Building | No Vendor Mentioned |
| 15f_032 | "What questions should I ask vendors about manager coaching and development features — what separates the good tools from the bad ones?" | vp_people_ops | Req. Building | No Vendor Mentioned |
| 15f_035 | "What analytics capabilities should I evaluate in a performance management platform — custom dashboards, data exports, natural language queries, or all three?" | hr_technology_director | Req. Building | No Vendor Mentioned |
| 15f_036 | "Technical requirements for a talent calibration tool — flexible rating scales, bias detection, manager override audit trails, integration with existing review workflows" | hr_technology_director | Req. Building | No Vendor Mentioned |
| 15f_038 | "What should I look for in compensation management software that ties pay decisions to performance data and supports pay equity compliance?" | cfo | Req. Building | No Vendor Mentioned |
| 15f_040 | "What makes a 360-degree feedback tool effective vs. just creating busywork — which features actually develop people?" | vp_talent | Req. Building | No Vendor Mentioned |
| 15f_041 | "What engagement survey features matter most for predicting turnover — benchmarking, trend analysis, or manager-level breakdowns?" | vp_talent | Req. Building | No Vendor Mentioned |
| 15f_046 | "Top AI coaching platforms for developing first-time managers at mid-market companies" | chro | Shortlisting | No Clear Winner |
| 15f_047 | "Which people analytics platforms let you ask plain-English questions about workforce data and get actionable answers?" | chro | Shortlisting | No Vendor Mentioned |
| 15f_052 | "switching from annual engagement surveys to a platform with real-time pulse and stronger benchmarking for predicting turnover" | vp_people_ops | Shortlisting | PerformYard |
| 15f_056 | "Top people analytics platforms with AI-powered flight risk detection for mid-market companies" | hr_technology_director | Shortlisting | Lattice |
| 15f_068 | "We need a recognition tool people will actually use — replacing a system nobody adopted at our 250-person company" | vp_talent | Shortlisting | No Vendor Mentioned |
| 15f_072 | "How does Leapsome's manager development compare to platforms with dedicated AI coaching features?" | chro | Comparison | Leapsome |
| 15f_079 | "How does Culture Amp's analytics compare to platforms with AI-powered people analytics for workforce insights?" | hr_technology_director | Comparison | Culture Amp |
| 15f_080 | "Lattice vs Culture Amp — which has more flexible performance review workflows for complex org structures?" | hr_technology_director | Comparison | Culture Amp |
| 15f_088 | "We're replacing our current engagement tool — Culture Amp vs Lattice, which is better for mid-market retention strategies?" | chro | Comparison | Culture Amp |
| 15f_089 | "Lattice vs Leapsome for manager coaching and development features at a mid-market company" | vp_people_ops | Comparison | Lattice |
| 15f_090 | "Culture Amp vs Leapsome for continuous check-ins and pulse surveys — which drives better manager habits?" | vp_people_ops | Comparison | Leapsome |
| 15f_091 | "Betterworks vs Lattice analytics — switching from a platform with limited reporting, which has stronger people insights?" | hr_technology_director | Comparison | Lattice |
| 15f_092 | "Culture Amp vs Workleap for engagement surveys — analytics depth vs. simplicity for smaller HR teams" | hr_technology_director | Comparison | Workleap |
| 15f_093 | "Leapsome vs Betterworks for HRIS integrations and data architecture — switching from a platform with poor API support" | hr_technology_director | Comparison | Leapsome |
| 15f_094 | "Lattice vs Betterworks pricing for a 300-person company — which is more cost-effective at mid-market scale?" | cfo | Comparison | Lattice |
| 15f_095 | "Culture Amp vs Betterworks for performance reviews — which do mid-market companies prefer after switching from spreadsheets?" | cfo | Comparison | Culture Amp |
| 15f_096 | "Betterworks vs Leapsome for OKR and goal tracking — pricing and capabilities for a 300-person company" | cfo | Comparison | Leapsome |
| 15f_097 | "Betterworks vs Leapsome for leadership development and manager training — which is better for mid-market?" | vp_talent | Comparison | Leapsome |
| 15f_098 | "Leapsome vs Workleap for peer recognition and real-time feedback — replacing our current tool, which do teams actually adopt?" | vp_talent | Comparison | Workleap |
| 15f_099 | "Lattice vs Workleap for engagement surveys — switching from a tool that measures engagement but doesn't help us improve it" | vp_people_ops | Comparison | Lattice |
| 15f_100 | "Culture Amp vs Betterworks for performance reviews — pros and cons for a mid-market company with 300+ employees" | vp_talent | Comparison | Culture Amp |
| 15f_101 | "We're thinking about switching — Betterworks vs Culture Amp, which has better people analytics and flight risk prediction?" | chro | Comparison | Culture Amp |
| 15f_102 | "Lattice vs Culture Amp for compensation management — switching from spreadsheets, which ties pay to performance data better?" | vp_people_ops | Comparison | Lattice |
| 15f_104 | "Culture Amp customer complaints — does it help teams improve engagement or just generate reports nobody acts on?" | chro | Validation | No Vendor Mentioned |
| 15f_105 | "Common complaints about Lattice from HR operations teams — what do they struggle with most?" | vp_people_ops | Validation | No Vendor Mentioned |
| 15f_107 | "Leapsome negative reviews — what do customers complain about most?" | vp_people_ops | Validation | No Clear Winner |
| 15f_108 | "Culture Amp integration issues — any known problems syncing with Workday or other enterprise HRIS platforms?" | hr_technology_director | Validation | No Clear Winner |
| 15f_110 | "Is Leapsome too new for a mid-market company making a multi-year platform investment — what are the risks?" | hr_technology_director | Validation | No Clear Winner |
| 15f_112 | "Hidden costs of Lattice that mid-market companies don't expect — implementation fees, module add-ons, pricing surprises" | cfo | Validation | No Clear Winner |
| 15f_120 | "Common failure modes when rolling out OKR software — what makes teams stop using it within six months?" | vp_people_ops | Validation | No Vendor Mentioned |
| 15f_122 | "How accurate are AI-powered flight risk predictions — do people analytics tools actually predict employee turnover?" | chro | Validation | No Vendor Mentioned |
| 15f_123 | "Do employee recognition tools actually sustain engagement improvements, or do people stop using them after a month?" | vp_talent | Validation | No Vendor Mentioned |
| 15f_124 | "What goes wrong when companies roll out structured 1:1 check-ins — manager resistance, low adoption, what to watch for?" | vp_talent | Validation | No Vendor Mentioned |
| 15f_125 | "Biggest risks of automating compensation decisions with software — what can go wrong with pay equity analysis?" | chro | Validation | No Vendor Mentioned |
| 15f_128 | "How to justify replacing annual reviews with continuous performance management to a board that doesn't see HR as strategic" | chro | Consensus | No Vendor Mentioned |
| 15f_129 | "Business case for connecting compensation management to performance reviews — impact on pay equity and retention" | chro | Consensus | No Vendor Mentioned |
| 15f_130 | "How do you build the business case for people analytics when the CEO doesn't trust HR data?" | vp_people_ops | Consensus | No Vendor Mentioned |
| 15f_131 | "Risk argument for investing in talent calibration — what happens when you lose top performers because you didn't identify them early enough?" | vp_people_ops | Consensus | No Vendor Mentioned |
| 15f_132 | "Total cost of HR data fragmentation — how much does manual reconciliation between disconnected HR systems cost a 300-person company?" | hr_technology_director | Consensus | No Vendor Mentioned |
| 15f_133 | "Impact of switching from annual reviews to continuous recognition and feedback — what do the studies and case studies show?" | hr_technology_director | Consensus | No Clear Winner |
| 15f_134 | "Average cost of replacing an employee vs. cost of retention tools — making the math work for HR tech budget" | cfo | Consensus | No Vendor Mentioned |
| 15f_135 | "Lattice vs Culture Amp — which platform has stronger ROI evidence for mid-market performance management?" | cfo | Consensus | Lattice |
| 15f_136 | "How to convince leadership that the current approach to goal setting isn't working and we need a dedicated OKR platform" | vp_talent | Consensus | No Vendor Mentioned |
| 15f_138 | "What's the typical payback period for investing in a manager coaching platform — how fast do companies see results?" | vp_talent | Consensus | No Vendor Mentioned |
| 15f_139 | "Draft an RFP for a continuous performance management and employee engagement platform for a 400-person mid-market company" | chro | Artifact | No Vendor Mentioned |
| 15f_142 | "Build a TCO model for implementing performance management software at a 300-person company over 3 years — licensing, implementation, training, and change management" | cfo | Artifact | No Vendor Mentioned |
| 15f_143 | "Draft evaluation criteria for employee engagement platforms that will be reviewed by an executive team — focus on actionability and ROI" | chro | Artifact | No Vendor Mentioned |
| 15f_144 | "Create a business case one-pager for investing in an AI-powered manager coaching platform at a mid-market company" | vp_talent | Artifact | No Vendor Mentioned |
| 15f_145 | "Write a security and compliance questionnaire for evaluating people analytics platforms — SOC 2, GDPR, data residency, and access controls" | hr_technology_director | Artifact | No Vendor Mentioned |
| 15f_146 | "Draft requirements document for a compensation management module that connects to our existing performance review process" | vp_people_ops | Artifact | No Vendor Mentioned |
| 15f_148 | "Build a weighted scoring rubric for evaluating talent calibration and 9-box assessment tools" | hr_technology_director | Artifact | No Vendor Mentioned |
| 15f_150 | "Create a change management plan for rolling out structured 1:1s and continuous check-ins to a team of 300+ with no current formal check-in process" | vp_talent | Artifact | No Vendor Mentioned |
Queries where 15Five is mentioned but a competitor is positioned more favorably.
| ID | Query | Persona | Buying Job | Winner | 15Five Position |
|---|---|---|---|---|---|
| 15f_011 | "How do you identify which employees are high-potential and at risk of leaving before they hand in their notice?" | vp_talent | Problem ID | No Vendor Mentioned | Brief Mention |
| 15f_016 | "We're replacing our ad-hoc 1:1 process — what's the real difference between dedicated check-in platforms and just using meeting agenda templates?" | vp_people_ops | Solution Exp. | No Clear Winner | Mentioned In List |
| 15f_019 | "How do performance management platforms typically integrate with HRIS systems like Workday, BambooHR, and ADP?" | hr_technology_director | Solution Exp. | No Clear Winner | Mentioned In List |
| 15f_021 | "Open source vs. commercial OKR tools — real tradeoffs for a company with 200-500 employees" | hr_technology_director | Solution Exp. | No Clear Winner | Mentioned In List |
| 15f_022 | "We've outgrown SurveyMonkey for employee engagement — what does a modern performance management tech stack look like for 300+ employees?" | hr_technology_director | Solution Exp. | Culture Amp | Mentioned In List |
| 15f_024 | "Our current review process doesn't connect to any business outcomes — how do companies move from annual reviews to something measurable?" | cfo | Solution Exp. | No Vendor Mentioned | Mentioned In List |
| 15f_026 | "What types of HR technology actually move the needle on reducing voluntary turnover at mid-market companies?" | vp_talent | Solution Exp. | No Vendor Mentioned | Mentioned In List |
| 15f_031 | "Key requirements for evaluating performance review platforms for a 400-person company moving away from annual reviews" | vp_people_ops | Req. Building | No Clear Winner | Brief Mention |
| 15f_033 | "We want continuous feedback between review cycles — what capabilities actually matter in a recognition and feedback tool?" | vp_people_ops | Req. Building | No Clear Winner | Brief Mention |
| 15f_034 | "Integration requirements for evaluating performance management software — HRIS sync, SSO, SCIM provisioning, API access, webhook support" | hr_technology_director | Req. Building | No Vendor Mentioned | Mentioned In List |
| 15f_037 | "We've tried and failed with spreadsheet-based OKRs — what features in a dedicated OKR tool actually make goal cascading work?" | hr_technology_director | Req. Building | No Vendor Mentioned | Brief Mention |
| 15f_039 | "Evaluation criteria for performance management platforms from a finance perspective — ROI metrics, implementation costs, time to value" | cfo | Req. Building | No Vendor Mentioned | Brief Mention |
| 15f_042 | "We need structured 1:1 tools that connect manager check-ins to company goals — what capabilities should we prioritize?" | vp_talent | Req. Building | No Vendor Mentioned | Mentioned In List |
| 15f_044 | "Which employee engagement platforms actually help you act on survey results, not just collect engagement scores?" | chro | Shortlisting | Culture Amp | Mentioned In List |
| 15f_045 | "We've outgrown our current performance tool — best platforms for continuous check-ins and manager coaching at a 350-person company" | chro | Shortlisting | Betterworks | Mentioned In List |
| 15f_048 | "Best compensation management tools for mid-market companies trying to connect pay to performance data" | chro | Shortlisting | No Vendor Mentioned | Mentioned In List |
| 15f_049 | "Top continuous performance review platforms for replacing spreadsheet-based annual reviews at a 200-500 person company" | vp_people_ops | Shortlisting | Lattice | Strong 2nd |
| 15f_050 | "alternatives to our current performance management tool for a 350-person SaaS company focused on reducing regrettable turnover" | vp_people_ops | Shortlisting | Lattice | Mentioned In List |
| 15f_054 | "performance management platforms with reliable BambooHR and Workday integration — replacing a tool that doesn't sync properly" | vp_people_ops | Shortlisting | Lattice | Mentioned In List |
| 15f_055 | "Best performance management platforms with native HRIS integrations — Workday, ADP, BambooHR sync without custom middleware" | hr_technology_director | Shortlisting | Lattice | Mentioned In List |
| 15f_057 | "looking to replace our current review tool with a continuous performance platform that supports 360-degree feedback and custom review cycles" | hr_technology_director | Shortlisting | Lattice | Mentioned In List |
| 15f_058 | "replacing our standalone engagement survey tool — need a platform with real-time dashboards, API access, and data export for an analytics-driven HR team" | hr_technology_director | Shortlisting | Culture Amp | Mentioned In List |
| 15f_062 | "OKR platforms affordable enough for mid-market but robust enough to actually make goals stick across departments" | cfo | Shortlisting | No Vendor Mentioned | Strong 2nd |
| 15f_065 | "Best OKR tools for companies where goal cascading has never worked — switching from spreadsheets to a dedicated platform" | vp_talent | Shortlisting | No Clear Winner | Mentioned In List |
| 15f_066 | "Which engagement platforms are best at connecting survey data to retention outcomes for mid-market companies?" | vp_talent | Shortlisting | Lattice | Mentioned In List |
| 15f_067 | "Top tools for developing managers who've never had formal leadership training — practical coaching, not just theory" | vp_talent | Shortlisting | Culture Amp | Mentioned In List |
| 15f_070 | "We're moving from annual reviews — how does Lattice compare to other platforms for making that transition smooth?" | chro | Comparison | Lattice | Strong 2nd |
| 15f_074 | "How does Culture Amp handle continuous check-ins and manager enablement compared to dedicated check-in platforms?" | vp_people_ops | Comparison | Culture Amp | Strong 2nd |
| 15f_075 | "Switching from our current review tool — how does Lattice compare for making performance reviews less painful?" | vp_people_ops | Comparison | Lattice | Strong 2nd |
| 15f_076 | "How does Workleap's recognition and feedback functionality compare to more comprehensive performance management platforms?" | vp_people_ops | Comparison | Workleap | Mentioned In List |
| 15f_077 | "We're considering switching our engagement tool — how does Culture Amp's benchmarking compare to other platforms' action-planning features?" | vp_people_ops | Comparison | Culture Amp | Mentioned In List |
| 15f_078 | "How does Lattice's integration architecture compare to other performance platforms for HRIS sync, APIs, and webhooks?" | hr_technology_director | Comparison | Lattice | Mentioned In List |
| 15f_082 | "We're replacing spreadsheet-based comp decisions — how does Lattice's compensation module compare for linking pay to performance?" | cfo | Comparison | Lattice | Brief Mention |
| 15f_084 | "How does Betterworks' total cost compare to mid-market alternatives — implementation, training, and per-seat pricing?" | cfo | Comparison | Betterworks | Strong 2nd |
| 15f_085 | "How does Lattice's talent calibration and 9-box feature compare to other performance management platforms?" | vp_talent | Comparison | Lattice | Strong 2nd |
| 15f_086 | "How does Leapsome's continuous feedback compare to other 1:1 tools — which one do managers actually adopt?" | vp_talent | Comparison | Leapsome | Mentioned In List |
| 15f_087 | "How does Workleap's engagement surveys compare to more analytics-heavy platforms for a 200-person company?" | vp_talent | Comparison | Workleap | Strong 2nd |
| 15f_103 | "Lattice implementation problems when migrating from another performance management tool at a mid-market company" | chro | Validation | No Vendor Mentioned | Brief Mention |
| 15f_106 | "We're evaluating Culture Amp as a replacement — what are the biggest downsides of their performance review features?" | vp_people_ops | Validation | No Clear Winner | Brief Mention |
| 15f_109 | "Betterworks analytics and reporting limitations — what can't it do that other platforms handle?" | hr_technology_director | Validation | No Clear Winner | Brief Mention |
| 15f_111 | "Betterworks reviews from mid-market companies — is it worth the enterprise-level pricing?" | cfo | Validation | No Clear Winner | Mentioned In List |
| 15f_113 | "Is Workleap too basic for a growing mid-market company — will we outgrow it in two years?" | cfo | Validation | No Clear Winner | Brief Mention |
| 15f_114 | "Workleap Officevibe limitations — what are the biggest feature gaps compared to more comprehensive platforms?" | vp_talent | Validation | No Clear Winner | Mentioned In List |
| 15f_119 | "15Five talent management and performance calibration — how does it compare to dedicated talent review platforms?" | vp_talent | Validation | No Clear Winner | Primary Recommendation |
| 15f_121 | "Biggest risks of switching to continuous performance management from annual reviews at a mid-market company" | hr_technology_director | Validation | No Vendor Mentioned | Mentioned In List |
| 15f_127 | "Case studies of mid-market companies that improved manager effectiveness after switching to continuous performance management" | chro | Consensus | Lattice | Mentioned In List |
| 15f_137 | "Case studies of companies that reduced regrettable turnover after switching from annual reviews to continuous performance management" | vp_talent | Consensus | No Vendor Mentioned | Mentioned In List |
| 15f_140 | "Create a vendor comparison scorecard for 15Five, Lattice, Culture Amp, Betterworks, and Leapsome focused on integration capabilities and data architecture" | hr_technology_director | Artifact | Lattice | Strong 2nd |
| 15f_141 | "Build an evaluation template for comparing continuous performance management platforms — weighted scoring for reviews, check-ins, engagement, and analytics" | vp_people_ops | Artifact | No Vendor Mentioned | Mentioned In List |
| 15f_147 | "Create a comparison matrix for OKR and goal tracking features across 15Five, Betterworks, Lattice, and Leapsome" | chro | Artifact | No Clear Winner | Mentioned In List |
| 15f_149 | "Draft an executive summary comparing recognition and continuous feedback platforms for a leadership team — focus on retention impact" | vp_talent | Artifact | No Vendor Mentioned | Mentioned In List |
Who’s winning when 15Five isn’t — and who controls the narrative at each buying stage.
[TL;DR] 15Five ranks #3 in Share of Voice with a 30W–28L head-to-head record across 9 competitors.
15Five holds its own against Lattice (tied 8-8 head-to-head) and dominates Leapsome and Engagedly, but loses clearly to Culture Amp (3-6) and Betterworks (2-6) — the difference is content depth, not product capability. The 67% win rate when Kona, 15Five's AI coaching capability, is surfaced is the single most underexploited competitive signal in the data, confirming that AI coaching is a differentiator buyers respond to when they can find it.
| Company | Mentions | Share |
|---|---|---|
| Lattice | 90 | 21.3% |
| Culture Amp | 73 | 17.3% |
| 15Five | 69 | 16.4% |
| Leapsome | 50 | 11.8% |
| Betterworks | 41 | 9.7% |
| Quantum Workplace | 30 | 7.1% |
| PerformYard | 28 | 6.6% |
| Workleap | 24 | 5.7% |
| Engagedly | 15 | 3.5% |
| Reflektive | 2 | 0.5% |
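The share column is mechanical: each vendor's mentions divided by total mentions across all vendors. A quick sketch of the arithmetic behind the table above:

```python
# Mention counts taken from the share-of-voice table above.
mentions = {
    "Lattice": 90, "Culture Amp": 73, "15Five": 69, "Leapsome": 50,
    "Betterworks": 41, "Quantum Workplace": 30, "PerformYard": 28,
    "Workleap": 24, "Engagedly": 15, "Reflektive": 2,
}

total = sum(mentions.values())  # 422 mentions across the competitive set

# Share of voice: a vendor's mentions as a percentage of all mentions.
share = {vendor: round(100 * count / total, 1) for vendor, count in mentions.items()}

print(total)            # 422
print(share["15Five"])  # 16.4
```

Note the rounding: 69/422 is 16.35%, which is why the hero metric reports 16.35% while the table shows 16.4%.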
For the 81 queries where 15Five is completely absent: vendors appearing in responses that are not in 15Five's defined competitive set.
[Synthesis] The competitive picture reveals a split field. 15Five is exactly tied with Lattice on a per-match basis (8W-8L-44T) — the primary competitor cannot pull ahead when they co-appear — but loses clearly to Culture Amp (3W-6L) and Betterworks (2W-6L) when those competitors appear in the same response. Culture Amp wins on engagement survey and analytics positioning; Betterworks wins on performance review platform queries where buyers are actively switching. 15Five's strongest competitive record is against Leapsome (6W-2L-24T), suggesting the brand has developed effective comparison positioning when Leapsome is the alternative. The 66.7% manager coaching win rate when visible is the single strongest competitive signal in the data: when Kona is cited, 15Five wins two-thirds of the time — confirming that Kona is an under-marketed differentiator. The BambooHR surprise (20 mentions in the evaluation space) indicates AI systems increasingly position BambooHR as a performance management alternative — a competitive threat not addressed in 15Five's current comparison content.
What AI reads and trusts in this category.
[TL;DR] 15Five had 71 unique pages cited across buyer queries, ranking #3 among all cited domains. 10 high-authority domains cite competitors but not 15Five.
15Five's #3 citation rank confirms a credible content footprint, but the 10-query third-party gap — where AI systems cited Gallup, SHRM, or academic sources instead of 15Five — maps precisely to the thin content areas targeted by the L3 NIO plan. Closing these content gaps converts third-party citations to owned citations, shifting 15Five from vendor-that-has-features to authority-that-defines-the-category.
[Synthesis] 15Five's 71 unique cited pages and #3 citation rank confirm the brand has established a content footprint that AI systems recognize — this is a foundation to build on, not a failure state. The alignment between #3 SOV rank and #3 citation rank is mechanically consistent: citation share and mention share correlate because the same content quality gaps that suppress SOV also suppress citation frequency. The 10-query third-party gap identifies the most actionable opportunity: for these 10 queries, AI systems cited Gallup, SHRM, academic publications, or consulting firm reports instead of 15Five's content. These third-party citations concentrate in people analytics methodology, OKR implementation guidance, and talent calibration frameworks — exactly the topics mapped to NIO 001 (people analytics), NIO 003 (OKR), and NIO 004 (talent calibration). The implication is that 15Five's current content on these topics is either absent or not authoritative enough to displace neutral third-party sources, confirming that the L3 NIO content investments are the path to citation capture, not optimization of existing pages.
Three layers of recommendations ranked by commercial impact and implementation speed.
[TL;DR] 132 total gaps: 81 invisibility + 51 positioning. Six L1 technical fixes unblock everything else; of the gaps themselves, 74 can be addressed by optimizing existing content (L2) and 58 require new content creation (L3).
Six L1 technical fixes are prerequisites for all other improvements — specifically the sitemap expansion and comparison URL repair, which unblock AI crawler access to new content and restore authority to the comparison URL structure. Execute L1 first, then proceed through 74 L2 page edits and 58 L3 new content items in priority order; the single highest-ROI action is creating the 4 comparison landing pages in NIO 002, which addresses 24 comparison-stage queries currently producing near-zero 15Five wins.
Priority numbering: Numbers reflect unified ranking across all layers, ordered by commercial impact × implementation speed. Lower number = higher priority. Gaps in numbering indicate actions assigned to other layers.
Configuration and infrastructure changes. Owner: Engineering / DevOps. Timeline: Days to weeks.
Issue: The sitemap at https://www.15five.com/sitemap.xml contains exactly 19 URLs, all of which are blog posts or resource thank-you pages with lastmod timestamps of November-December 2025. Zero product pages, zero solution pages, zero pricing pages, zero integration pages, zero comparison-redirect pages, and zero feature subpages appear in the sitemap. No sitemap index file exists (sitemap_index.xml and hs-sitemap.xml both return 404). Core commercial pages such as /products, /products/perform, /products/engage, /products/kona, /products/perform/compensation, /pricing, /integrations, /solutions/reduce-regrettable-turnover, and /why-15five are all entirely absent from any known sitemap.
Fix: Expand the sitemap to include all commercial pages — product pages, feature subpages, solution pages, pricing, integrations, comparison-redirect pages, and customer stories. Add accurate lastmod timestamps. If HubSpot CMS is in use (suggested by robots.txt Disallow patterns for /_hcms/ paths), verify sitemap page-type inclusion settings in Settings > Website > Pages > Sitemap and ensure all page types are enabled. Submit the updated sitemap to Google Search Console and Bing Webmaster Tools.
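As an illustration, an expanded sitemap covering the missing commercial pages could look like the sketch below. The URLs are the ones named in the finding above; the lastmod values are placeholders and must come from actual CMS modification dates:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Commercial pages currently absent from the sitemap -->
  <url>
    <loc>https://www.15five.com/products/perform</loc>
    <lastmod>2025-11-18</lastmod> <!-- placeholder date -->
  </url>
  <url>
    <loc>https://www.15five.com/pricing</loc>
    <lastmod>2025-11-18</lastmod>
  </url>
  <url>
    <loc>https://www.15five.com/integrations</loc>
    <lastmod>2025-11-18</lastmod>
  </url>
  <url>
    <loc>https://www.15five.com/solutions/reduce-regrettable-turnover</loc>
    <lastmod>2025-11-18</lastmod>
  </url>
</urlset>
```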
Issue: Three URLs that appear in search engine results as dedicated competitor comparison pages — /15five-vs-lattice, /15five-vs-cultureamp/, and /15five-vs-leapsome/ — all redirect to the generic /why-15five page. The /why-15five page contains no competitor-specific content: it does not mention Lattice, Culture Amp, or Leapsome by name, and carries only generic brand messaging ('The new ERA OF HR'). Fetching each comparison URL confirmed that the canonical page is /why-15five and that the full page content is identical across all three. Web search results still index these URLs with competitor-specific titles (e.g., '15Five vs Culture Amp | Comparing Employee Management...'), meaning buyers and AI crawlers who follow these URLs from search results land on a page that does not address the query that brought them there.
Fix: Either (a) create dedicated comparison landing pages at the existing URLs with substantive head-to-head content for each competitor, or (b) if comparison pages are not being maintained, implement 301 redirects from these URLs to the blog posts that do contain comparison content (e.g., /15five-vs-lattice → /blog/heres-why-people-choose-15five-over-lattice). Option (a) is strongly preferred: dedicated comparison pages with feature matrices, use-case differentiation, and migration guides are among the highest-ROI content types for AI citation in competitive evaluation queries. At minimum, create comparison pages for the top 3 primary competitors: Lattice, Culture Amp, and Betterworks.
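If option (b) is chosen as an interim measure, the 301s are straightforward. A minimal sketch assuming an nginx front end (HubSpot CMS manages redirects through its settings UI instead, and the Culture Amp and Leapsome target paths below are hypothetical placeholders; only the Lattice blog URL is confirmed in the finding above):

```nginx
# Interim 301s: send the indexed comparison URLs to the blog posts
# that actually contain comparison content, instead of /why-15five.
location = /15five-vs-lattice {
    return 301 /blog/heres-why-people-choose-15five-over-lattice;
}
# Placeholder targets below; substitute the real comparison posts.
location = /15five-vs-cultureamp/ {
    return 301 /blog/15five-vs-culture-amp-comparison;
}
location = /15five-vs-leapsome/ {
    return 301 /blog/15five-vs-leapsome-comparison;
}
```

Permanent (301) rather than temporary (302) redirects matter here: they tell crawlers to consolidate the indexed URLs' authority onto the comparison content.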
Issue: All product pages, solution pages, the why-15five page, and the pricing page have no visible last-updated dates and are absent from the sitemap — meaning no lastmod signal is available from any source. Freshness could not be determined for 17 of 30 pages analyzed. While blog posts in the sitemap carry lastmod timestamps (November-December 2025), these appear to be bulk-refreshed timestamps rather than per-post content modification dates: several blog posts show sitemap lastmod of 2025-11-25 or 2025-11-26 regardless of their original publication date (some were written in 2017-2019).
Fix: Add accurate lastmod timestamps to all commercial pages in the sitemap (requires first adding them to the sitemap per finding sitemap_missing_commercial_pages). Ensure sitemap lastmod values reflect actual content modification dates, not bulk publish dates. Consider adding visible 'Last updated: [date]' metadata to product and solution pages. Audit the bulk sitemap refresh — verify that pages with Nov 2025 lastmod were actually updated in November 2025 vs. a CMS auto-update.
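The bulk-refresh audit can be partially automated: if many URLs share an identical lastmod value, the dates are likely CMS-generated rather than real modification dates. A rough sketch, using an inline XML string as a stand-in for the fetched sitemap:

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Inline stand-in for the fetched file; in practice, download
# https://www.15five.com/sitemap.xml and parse that instead.
sitemap_xml = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/blog/a</loc><lastmod>2025-11-25</lastmod></url>
  <url><loc>https://example.com/blog/b</loc><lastmod>2025-11-25</lastmod></url>
  <url><loc>https://example.com/blog/c</loc><lastmod>2025-11-26</lastmod></url>
</urlset>"""

ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(sitemap_xml)
lastmods = [u.findtext("sm:lastmod", namespaces=ns) for u in root.findall("sm:url", ns)]

# Count how many URLs share each lastmod date; heavy clustering on one
# or two dates suggests a bulk refresh rather than genuine updates.
clusters = Counter(lastmods)
suspicious = {date: n for date, n in clusters.items() if n > 1}
print(suspicious)  # {'2025-11-25': 2}
```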
Issue: The Kreg Tool case study page at /resources/case-studies/how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20 returned almost exclusively navigation and footer markup with negligible body content — only the headline metric ('reduced turnover by over 20%') and a download button were accessible. Related customer stories presented as blog posts (Pendo, Auror) returned full body content normally. The case study format on this URL appears to use a gated download model (PDF behind a form), which renders the page's substantive content inaccessible to AI crawlers.
Fix: Convert the highest-value case studies from gated PDF format to fully accessible HTML pages with inline outcome metrics, challenge/solution narrative, and specific product features used. Keep the formatted PDF as a downloadable bonus for users who want it. This approach makes the content available to both AI crawlers and human readers without sacrificing lead capture (the form can be offered as an optional 'download full report' CTA within the page). Priority case studies to convert: Kreg Tool, TrustRadius, and any others using the gated format.
Issue: This analysis was conducted using rendered page content (web_fetch returns markdown, not raw HTML), so JSON-LD schema blocks, meta tags, and OG tags are not visible in any of the 30 pages analyzed. Whether product pages carry Product or SoftwareApplication schema, blog posts carry Article schema with datePublished/dateModified, pricing pages carry Offer schema, or FAQ sections carry FAQPage schema cannot be determined from this analysis method.
Fix: Audit schema implementation using Google's Rich Results Test (https://search.google.com/test/rich-results) or a Screaming Frog structured data crawl. Priority items: (1) blog posts — verify Article/BlogPosting schema with author, datePublished, dateModified; (2) pricing page — verify Offer/PriceSpecification schema; (3) product FAQ sections — add FAQPage schema; (4) comparison pages — add WebPage schema with about properties referencing competitor entities once comparison content is restored.
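For priority item (1), the target shape is roughly the following BlogPosting JSON-LD. All values are illustrative placeholders, not 15Five's actual metadata:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "Example blog post title",
  "author": { "@type": "Person", "name": "Author Name" },
  "datePublished": "2019-03-12",
  "dateModified": "2025-11-25"
}
</script>
```

The datePublished/dateModified pair is the key signal: it lets crawlers distinguish a genuinely updated 2019 post from one swept up in a bulk sitemap refresh.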
Issue: Meta descriptions and Open Graph tags (og:description, og:image, og:title) are not accessible via rendered markdown analysis. None of the 30 pages analyzed had visible meta description or OG tag content in the fetched output.
Fix: Audit meta descriptions and OG tags using Screaming Frog, Ahrefs site audit, or browser view-source. Confirm every commercial page has a unique meta description (140-160 characters) with a specific capability claim. For the /why-15five page (which currently serves as the redirect destination for three competitor comparison URLs), ensure the meta description explicitly positions 15Five against named competitors to preserve some competitive signal.
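For reference, the head markup being audited for would look roughly like the following. The copy, titles, and og:image path are illustrative placeholders for the /why-15five page, not 15Five's actual tags:

```html
<!-- Placeholder copy; the real description should name competitors explicitly -->
<meta name="description"
      content="See how 15Five compares to Lattice, Culture Amp, and Betterworks on continuous performance management, engagement, and manager coaching.">
<meta property="og:title" content="Why 15Five | 15Five vs. Lattice and Culture Amp">
<meta property="og:description"
      content="Head-to-head comparison of 15Five against leading performance management platforms.">
<meta property="og:image" content="https://www.15five.com/og-image.png">
```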
Existing pages that need restructuring or deepening. Owner: Content Team. Timeline: Weeks.
- The /products/engage page presents engagement features as a capabilities list but has no problem-framing section — queries asking about warning signs of employee attrition (15f_001) or how to close the loop on engagement surveys (15f_006) cannot be answered by citing a product feature page.
- The page lacks an outcome evidence block — it claims engagement improvements but provides no quantified customer results (response rate improvements, action-plan completion rates, turnover reduction data) that AI systems can extract as citable claims.
- The page does not address the pulse-vs-annual survey tradeoff that appears in 3 queries (15f_017, 15f_041, 15f_114) — Culture Amp's comparable page wins these queries by including explicit 'when to use pulse vs. annual' guidance.
Queries affected: 15f_001, 15f_006, 15f_017, 15f_022, 15f_028, 15f_041, 15f_052, 15f_058, 15f_066, 15f_104, 15f_114, 15f_121, 15f_143
- The /products/perform page has no switching or migration narrative — queries like 15f_049 ('Top continuous performance review platforms for replacing spreadsheet-based annual reviews') and 15f_057 ('replacing our current review tool — support for 360-degree feedback and custom review cycles') lose to Lattice because Lattice's comparable page includes explicit 'migrating from spreadsheets' language and a migration guide.
- The page lacks customer outcome evidence tied specifically to the performance review feature — the Auror and Pendo case study data exists on blog posts but is not integrated into the product page narrative, making the page non-citable for 'does continuous PM actually produce better outcomes?' queries.
- The page does not include a structured 'Continuous vs. Annual Reviews: Key Structural Differences' comparison that AI systems can extract for the educational solution-exploration queries (15f_013, 15f_024) where no vendor is recommended but a structural comparison would surface 15Five as the page host.
Queries affected: 15f_004, 15f_013, 15f_016, 15f_024, 15f_030, 15f_031, 15f_040, 15f_042, 15f_049, 15f_057, 15f_103, 15f_105, 15f_124, 15f_127, 15f_128, 15f_137, 15f_141, 15f_150
- The /products/kona page describes Kona AI Coach as a product but provides no evidence of effectiveness — queries 15f_025 ('AI coaching tools for managers — is there evidence they actually improve manager effectiveness?') and 15f_046 (shortlisting, no_clear_winner) cannot cite this page because it makes no verifiable outcome claims with data.
- The page contains no explanation of how the AI coaching works — there is no methodology section covering what data Kona uses, how it generates coaching recommendations, and what differentiates it from generic AI prompting — making it non-citable for 'how do AI coaching tools work?' queries (15f_015, 15f_025).
- The page does not address the 'AI coaching vs. external coaching programs vs. training platforms' comparison framing that appears in 4 queries (15f_015, 15f_025, 15f_067, 15f_138) — buyers evaluating manager development approaches need this comparison to justify AI coaching selection.
Queries affected: 15f_003, 15f_005, 15f_015, 15f_025, 15f_032, 15f_046, 15f_067, 15f_107, 15f_110, 15f_138, 15f_144
- The /products/perform/compensation/ page does not include pay equity compliance specifics — queries 15f_038 ('What should I look for in compensation management software that supports pay equity compliance?') and 15f_125 ('Biggest risks of automating compensation decisions — what can go wrong with pay equity analysis?') cannot cite this page because compliance capabilities are not documented.
- The page lacks a buyer evaluation checklist or evaluation criteria framework — queries 15f_038 (requirements building) and 15f_048 (shortlisting) need a page that helps buyers evaluate compensation management tools, not just a feature description.
- The page does not describe the performance-rating-to-compensation data flow — the defining value proposition ('connect pay decisions to performance data without spreadsheets') is stated but not illustrated with a step-by-step process that AI systems can extract as a citable workflow.
Queries affected: 15f_010, 15f_027, 15f_038, 15f_048, 15f_112, 15f_125, 15f_129, 15f_146
- The /integrations directory at https://www.15five.com/integrations lists supported HRIS platforms but contains zero content about integration architecture — queries 15f_034 ('Integration requirements for evaluating performance management software — HRIS sync, SSO, SCIM provisioning, API access, webhook support') and 15f_108 ('Culture Amp integration issues — any known problems syncing with Workday or other enterprise HRIS platforms?') cannot cite this page because technical architecture is not documented.
- The page has no customer integration success stories — shortlisting queries 15f_054 and 15f_055 (both winner=lattice) require evidence that integrations work reliably at scale with named HRIS platforms, not just confirmation that integrations exist.
- The page lacks any comparison framing against competitor integration ecosystems — query 15f_140 ('Create a vendor comparison scorecard for 15Five, Lattice, Culture Amp — integration capabilities and data architecture', winner=lattice) loses because Lattice has comparison-ready integration documentation that 15Five's directory cannot provide.
Queries affected: 15f_007, 15f_019, 15f_034, 15f_108, 15f_132, 15f_140
- The /pricing page at https://www.15five.com/pricing lists plan prices and feature tiers but has no ROI framing — CFO queries about the cost of poor PM processes (15f_009) and evaluation ROI metrics (15f_039) cannot be answered by citing a pricing page, and routing these queries to /pricing as a coverage fallback confirms the content gap rather than filling it.
- The /customer-stories/ page has case studies with outcome data but formats them as narrative blog posts rather than extractable ROI metrics — the Pendo (21% turnover reduction) and Auror (94% retention) outcomes are buried in story prose rather than surfaced as structured, AI-extractable claims.
- The routing of 15f_009 and 15f_039 to /pricing reveals the absence of any dedicated business-case or ROI content on the site — the CFO's question 'how much does poor performance management cost?' has no home anywhere in 15Five's content inventory.
Queries affected: 15f_009, 15f_039, 15f_111, 15f_113, 15f_134, 15f_135
- The /blog/check-ins-and-1-on-1s/ page explains how to run check-ins but lacks outcome evidence — queries 15f_014 ('Does real-time employee recognition actually reduce turnover, or is it a feel-good feature?') and 15f_123 ('Do employee recognition tools actually sustain engagement improvements?') require citable evidence connecting recognition frequency to retention outcomes, which is absent from this methodology guide.
- The page structure is optimized for human reading, not AI extraction — headings describe rather than answer ('How to run effective check-ins' instead of 'What are the most important capabilities in a continuous feedback tool?'), reducing the probability of passage extraction for requirements-building queries (15f_033).
- The recognition_feedback cluster has no dedicated product landing page — this blog post is the primary coverage for all 6 queries in the cluster, but a blog post format cannot compete with Workleap's dedicated recognition product page, which includes feature comparisons, adoption data, and customer outcome statistics.
Queries affected: 15f_014, 15f_033, 15f_068, 15f_123, 15f_133, 15f_149
- The /solutions/reduce-regrettable-turnover page makes retention claims but doesn't explain the mechanism — queries like 15f_026 ('What types of HR technology actually move the needle on reducing voluntary turnover?') need a page that explains which features drive which retention outcomes, not just a claim that 15Five reduces turnover.
- The page has insufficient customer outcome density — Lattice's equivalent page (winner on 15f_050) includes 5+ named company outcomes with specific retention percentages; 15Five's page references Auror and Pendo outcomes but does not present them at a structured, scannable density that AI systems can extract as a recommendation signal.
- The page lacks a buyer evaluation resource — RFP-creation query 15f_139 ('Draft an RFP for a continuous performance management platform') routes to this page but finds no RFP template, evaluation criteria, or downloadable reference content.
Queries affected: 15f_026, 15f_050, 15f_139
Net new content addressing visibility and positioning gaps. Owner: Content Strategy. Timeline: Months.
People analytics is 15Five's highest-leverage underdocumented capability: when AMAYA appears in AI responses, 15Five wins half the time — but it appears in only 2 of 16 analytics-focused queries. The 15 L3 gaps in this cluster span every buying stage from problem identification through artifact creation, indicating that buyers who start their journey asking about flight risk prediction, workforce insights, or CFO-facing HR data never encounter 15Five as a solution. CHRO and CFO veto-holders are the primary affected personas, making this a deal-blocking invisibility gap. Closing this content deficit would make AMAYA a cited solution at the exact moment buyers form their vendor consideration set.
- ChatGPT (medium): Analytics methodology queries (15f_002, 15f_020, 15f_023) produced educational responses citing Gallup, SHRM, and academic sources with no vendor mention — ChatGPT defaults to research citations for ROI/methodology queries. A 15Five page with embedded research citations and quantified outcome data would compete for this citation pattern.
- Perplexity (high): Shortlisting queries (15f_047, 15f_056) where Lattice won confirm Perplexity cites vendor product pages for 'top platforms' queries; AMAYA needs a standalone product page with structured feature claims to appear in these recommendation lists.
Comparison is the highest commercial-intent buying stage before final vendor selection. 15Five is invisible or losing on 24 of 34 comparison-stage queries (47% visibility, 31% win rate, metrics.visibility.by_buying_job comparison.rate/win_rate), and the root cause is structural: no comparison pages exist anywhere on the site. Competitors who win on comparison queries — Lattice (7 wins), Culture Amp (5 wins), Leapsome (4 wins), Workleap (3 wins) — each have dedicated comparison or versus landing pages that AI systems extract for head-to-head evaluations. This is the highest-volume NIO cluster and maps directly to L1 finding comparison_urls_redirect_to_generic_page, meaning the fix addresses a confirmed technical debt item and a content gap simultaneously. At comparison-stage visibility of 47%, 15Five is present in nearly half of these conversations — but absent content means presence without recommendation.
- ChatGPT (high): ChatGPT comparison queries (15f_075, 15f_082, 15f_095) produced structured competitor comparisons citing competitor product pages and blog content — confirming that dedicated comparison landing pages would be cited once they exist and are discoverable by GPTBot.
- Perplexity (high): Perplexity consistently cites dedicated 'vs.' or comparison pages in competitive evaluation queries; the existing comparison URL structure at 15Five would receive Perplexity citations as soon as the redirect-to-generic-page issue (L1: comparison_urls_redirect_to_generic_page) is resolved and content is added.
OKR and goal tracking is a category where 15Five is frequently present but rarely wins: the platform appears in 6 of 10 OKR-focused queries but captures only 1 win, with Betterworks, Leapsome, and Lattice — all of which have dedicated OKR methodology content — winning the majority. The 9 L3 queries span the full buying journey from 'our quarterly OKRs don't cascade below VP level' (problem identification) through 'convince leadership we need a dedicated OKR platform' (consensus creation) and 'compare OKR features across 15Five, Betterworks, Lattice, Leapsome' (artifact creation), indicating systematic content absence rather than a stage-specific weakness. Because OKR tracking is an expansion module that increases deal value from engagement-only to full-platform contracts, closing this gap affects both initial conversion and deal size.
- ChatGPT (medium): OKR methodology queries (15f_012, 15f_021) produced vendor-neutral educational responses — ChatGPT treats OKR questions as management methodology, not software selection. Content combining OKR methodology authority with product recommendation (positioning 15Five as the implementation vehicle) would need to be distinctly authoritative to break this pattern.
- Perplexity (medium): Shortlisting queries (15f_062, 15f_065) produced mixed vendor lists where 15Five was absent; Perplexity responded favorably to Betterworks's and Leapsome's OKR-focused product pages. A dedicated 15Five OKR page structured around mid-market cascading methodology would appear in these recommendation lists.
Talent calibration sits at the intersection of 15Five's strongest commercial proposition (preventing top talent flight risk) and its weakest content coverage. The VP of Talent Management persona — who most frequently searches talent calibration queries — converts only 7.14% of visible queries into wins (1 win, metrics.visibility.by_persona.vp_talent.win_rate = 0.0714) despite 48% visibility across the persona's 29 queries, meaning 15Five appears in VP Talent conversations but consistently loses the recommendation. The 7 L3 queries span from 'how does talent calibration work in practice?' (solution exploration) through 'technical requirements for a talent calibration tool' (requirements building) to 'how does 15Five's calibration compare to dedicated talent review platforms?' (validation), indicating that buyers who actively search 15Five's calibration capabilities cannot find sufficient product evidence to justify selection. Because top_talent_flight_risk is a deal-closing pain point for both CHRO and VP Talent veto-holders, this content gap directly affects close rates on full-platform deals.
- ChatGPT (low): Talent calibration queries (15f_018, 15f_011) produced vendor-neutral educational responses about calibration methodology — ChatGPT treats these as HR methodology questions rather than software selection questions. Content must blend methodology authority with product evidence to shift citation patterns; a standalone product feature page without methodology context will not be cited.
- Perplexity (medium): Validation query 15f_119 ('15Five talent management and performance calibration — how does it compare to dedicated talent review platforms?') resulted in no_clear_winner, suggesting Perplexity found insufficient structured comparison data on either side. A dedicated talent calibration page with structured feature-level claims and a comparison table vs. standalone talent review platforms would shift this to a 15Five win.
This cluster addresses a lower-volume but high-commercial-weight gap: technical evaluators researching integration reliability and CFOs building 3-year total cost of ownership models — both late-stage evaluation queries that, when unanswered, can cause a shortlisted vendor to be eliminated. 15Five loses both HRIS integration shortlisting queries to Lattice (15f_054, 15f_055, winner=lattice) not because integrations fail, but because Lattice has case studies and technical documentation proving they work at scale with BambooHR and Workday. The CFO TCO query (15f_142) has completely missing coverage — no content exists to answer 'build a 3-year TCO model for implementing performance management software' — leaving 15Five absent from a query that surfaces during CFO final approval stages.
- ChatGPT (low): The TCO query (15f_142) produced no vendor mentions — ChatGPT treated it as a financial modeling exercise. Integration shortlisting queries cited Lattice based on integration case study content that ChatGPT indexed. 15Five's integration pages would need to include named customer outcomes to shift this citation pattern.
- Perplexity (medium): Integration shortlisting queries (15f_054, 15f_055) cited Lattice as the winner based on integration-specific landing pages that Perplexity indexed; similar HRIS-specific integration success pages on 15Five's domain would appear in the same citation pattern given Perplexity's known preference for page-type-matched content.
All recommendations across all three layers, ranked by commercial impact × implementation speed.
All product pages, solution pages, the why-15five page, and the pricing page have no visible last-updated dates and are absent from the sitemap — meaning no lastmod signal is available from any source. Freshness could not be determined for 17 of 30 pages analyzed. While blog posts in the sitemap carry lastmod timestamps (November-December 2025), these appear to be bulk-refreshed timestamps rather than per-post content modification dates: several blog posts show sitemap lastmod of 2025-11-25 or 2025-11-26 regardless of their original publication date (some were written in 2017-2019).
The sitemap at https://www.15five.com/sitemap.xml contains exactly 19 URLs, all of which are blog posts or resource thank-you pages with lastmod timestamps of November-December 2025. Zero product pages, zero solution pages, zero pricing pages, zero integration pages, zero comparison-redirect pages, and zero feature subpages appear in the sitemap. No sitemap index file exists (sitemap_index.xml and hs-sitemap.xml both return 404). Core commercial pages such as /products, /products/perform, /products/engage, /products/kona, /products/perform/compensation, /pricing, /integrations, /solutions/reduce-regrettable-turnover, and /why-15five are all entirely absent from any known sitemap.
15Five has zero dedicated competitor comparison landing pages: the three indexed comparison URLs (/15five-vs-lattice, /15five-vs-cultureamp/, /15five-vs-leapsome/) all redirect to the generic /why-15five page that contains no competitor-specific content. All 24 comparison-stage queries in this cluster carry 'AFFINITY OVERRIDE: buying_job=comparison requires page types [comparison] but found [blog/feature/product]' routing flags.
15Five's AMAYA people analytics feature has thin content coverage across all buying stages: visibility rate = 12.5% (2/16 queries, metrics.visibility.by_feature.people_analytics.rate) despite a 50% win rate when cited, meaning the product wins when it appears but is missing from 87.5% of analytics queries.
The /pricing page at https://www.15five.com/pricing lists plan prices and feature tiers but has no ROI framing — CFO queries about the cost of poor PM processes (15f_009) and evaluation ROI metrics (15f_039) cannot be answered by citing a pricing page, and routing these queries to /pricing as a coverage fallback confirms the content gap rather than filling it.
15Five's OKR and goal tracking feature has thin content coverage with a 60% visibility rate (6/10 queries, metrics.visibility.by_feature.okr_goal_tracking.rate) but only a 16.7% win rate (1 win across the 6 queries where it appears) — present in AI responses but rarely winning, because the content that surfaces it doesn't address OKR methodology, cascading implementation, or comparison to standalone OKR tools.
15Five's talent calibration and 9-box assessment capability has thin content coverage; visibility rate = 50% (4/8 queries, metrics.visibility.by_feature.talent_calibration.rate) but win rate = 25% (1 win), with Lattice winning 2 comparison queries on talent calibration framing and the validation query 15f_119 ('how does 15Five's calibration compare to dedicated talent review platforms?') returning no_clear_winner.
The /products/kona page describes Kona AI Coach as a product but provides no evidence of effectiveness — queries 15f_025 ('AI coaching tools for managers — is there evidence they actually improve manager effectiveness?') and 15f_046 (shortlisting, no_clear_winner) can't cite this page because it makes no verifiable outcome claims with data.
The /products/perform page has no switching or migration narrative — queries like 15f_049 ('Top continuous performance review platforms for replacing spreadsheet-based annual reviews') and 15f_057 ('replacing our current review tool — support for 360-degree feedback and custom review cycles') lose to Lattice because Lattice's comparable page includes explicit 'migrating from spreadsheets' language and a migration guide.
The /products/perform/compensation/ page does not include pay equity compliance specifics — query 15f_038 ('What should I look for in compensation management software that supports pay equity compliance?') and 15f_125 ('Biggest risks of automating compensation decisions — what can go wrong with pay equity analysis?') cannot cite this page because compliance capabilities are not documented.
The /products/engage page presents engagement features as a capabilities list but has no problem-framing section — queries asking about warning signs of employee attrition (15f_001) or how to close the loop on engagement surveys (15f_006) cannot be answered by citing a product feature page.
The /solutions/reduce-regrettable-turnover page makes retention claims but doesn't explain the mechanism — queries like 15f_026 ('What types of HR technology actually move the needle on reducing voluntary turnover?') need a page that explains WHICH features drive WHICH retention outcomes, not just a claim that 15Five reduces turnover.
The Kreg Tool case study page at /resources/case-studies/how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20 returned almost exclusively navigation and footer markup with negligible body content — only the headline metric ('reduced turnover by over 20%') and a download button were accessible. Related customer stories presented as blog posts (Pendo, Auror) returned full body content normally. The case study format on this URL appears to use a gated download model (PDF behind a form), which renders the page's substantive content inaccessible to AI crawlers.
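Gated pages like this one can be flagged programmatically before they are routed as citation targets. A minimal sketch of the idea: count prose words in fetched markdown after skipping navigation-only lines. The nav-skipping heuristic and the 120-word threshold are assumptions for illustration, not part of the audit methodology.

```python
def looks_gated(markdown: str, min_body_words: int = 120) -> bool:
    """Heuristic: flag a fetched page whose prose content is negligible.

    Counts words on lines that are not blank, headings, or pure markdown
    links (typical of navigation/footer markup). Threshold is arbitrary.
    """
    body_words = 0
    for line in markdown.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue
        # Skip lines that are only a markdown link, e.g. "[Pricing](/pricing)"
        if stripped.startswith("[") and stripped.endswith(")"):
            continue
        body_words += len(stripped.split())
    return body_words < min_body_words
```

Run against the 30 fetched pages, a check like this would have separated the Kreg Tool page (headline plus download button) from the Pendo and Auror stories, which returned full body content.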
Three URLs that appear in search engine results as dedicated competitor comparison pages — /15five-vs-lattice, /15five-vs-cultureamp/, and /15five-vs-leapsome/ — all redirect to the generic /why-15five page. The /why-15five page contains no competitor-specific content: it does not mention Lattice, Culture Amp, or Leapsome by name, and contains only generic brand messaging ('The new ERA OF HR'). Fetching each comparison URL confirmed the canonical page is /why-15five and the full page content is identical across all three. Web search results still index these URLs with competitor-specific titles (e.g., '15Five vs Culture Amp | Comparing Employee Management...'), meaning buyers and AI crawlers who follow these URLs from search results land on a page that does not address the query that brought them there.
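Redirects of this kind can be detected without manual browsing. A minimal sketch of the check, assuming the landed URL has already been resolved by following redirects; the function name and the generic-path default are illustrative:

```python
def is_generic_redirect(requested: str, landed: str,
                        generic_path: str = "/why-15five") -> bool:
    """True when a competitor-comparison URL lands on the generic brand page.

    `landed` is the final URL after following redirects; trailing slashes
    are normalized so /15five-vs-lattice/ and /15five-vs-lattice compare equal.
    """
    req, fin = requested.rstrip("/"), landed.rstrip("/")
    return req != fin and fin.endswith(generic_path)
```

In practice `landed` comes from following redirects (e.g., `urllib.request.urlopen(url).geturl()`); every URL the check flags is a candidate for a dedicated comparison page that reclaims the indexed URL's authority.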
15Five's /integrations page is an integration directory listing supported platforms without architecture details, success stories, or technical specifications. Two shortlisting queries where buyers need integration proof points route to this directory and lose to Lattice due to affinity override (shortlisting requires case_study/landing_page types but found only integration directory). One CFO artifact query (TCO model) has completely missing coverage — no matching page exists anywhere on the site.
The /integrations directory at https://www.15five.com/integrations lists supported HRIS platforms but contains zero content about integration architecture — queries 15f_034 ('Integration requirements for evaluating performance management software — HRIS sync, SSO, SCIM provisioning, API access, webhook support') and 15f_108 ('Culture Amp integration issues — any known problems syncing with Workday or other enterprise HRIS platforms?') cannot cite this page because technical architecture is not documented.
The /blog/check-ins-and-1-on-1s/ page explains how to run check-ins but lacks outcome evidence — query 15f_014 ('Does real-time employee recognition actually reduce turnover, or is it a feel-good feature?') and 15f_123 ('Do employee recognition tools actually sustain engagement improvements?') require citable evidence connecting recognition frequency to retention outcomes, which is absent from this methodology guide.
Meta descriptions, Open Graph tags (og:title, og:description, og:image), and JSON-LD schema blocks could not be audited: this analysis used rendered page content (web_fetch returns markdown, not raw HTML), so head-level markup is invisible across all 30 pages analyzed. Whether product pages carry Product or SoftwareApplication schema, blog posts carry Article schema with datePublished/dateModified, the pricing page carries Offer schema, or FAQ sections carry FAQPage schema cannot be determined with this method.
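If raw HTML is available (a direct HTTP fetch rather than rendered markdown), the missing head-level signals can be audited directly. A regex-based triage sketch, not a production parser: it assumes double-quoted attributes and conventional attribute order, and a real audit should use an HTML parser.

```python
import json
import re

def extract_head_signals(html: str) -> dict:
    """Pull meta description, OG tags, and JSON-LD @type values from raw HTML."""
    signals = {"meta_description": None, "og": {}, "jsonld_types": []}

    m = re.search(r'<meta\s+name="description"\s+content="([^"]*)"', html, re.I)
    if m:
        signals["meta_description"] = m.group(1)

    for prop, content in re.findall(
        r'<meta\s+property="(og:[^"]+)"\s+content="([^"]*)"', html, re.I
    ):
        signals["og"][prop] = content

    for block in re.findall(
        r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
        html, re.I | re.S,
    ):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue  # malformed JSON-LD is itself a finding
        items = data if isinstance(data, list) else [data]
        for item in items:
            t = item.get("@type") if isinstance(item, dict) else None
            if t:
                signals["jsonld_types"].append(t)

    return signals
```

Running this over the same 30 URLs would turn the open question above (which pages carry SoftwareApplication, Article, Offer, or FAQPage schema) into a concrete per-page inventory.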
All three workstreams (L1 technical fixes, L2 page edits, and L3 new content) can start this week.
[Synthesis] The 138-action plan is structured as a value-ordered execution sequence with a mandatory sequencing rule: L1 technical fixes execute first regardless of their commercial priority because they unblock downstream improvements. Without the sitemap expansion (L1: sitemap_missing_commercial_pages), new L3 content pages may not be discovered by AI crawlers; without the comparison URL fix (L1: comparison_urls_redirect_to_generic_page), new comparison pages cannot leverage the existing URL authority. L2 edits to existing pages come second — 74 optimizations across engagement, performance, manager coaching, compensation, integration, and ROI content deepen the pages buyers already find. L3 new content creates the structural assets that are currently absent: the 5 NIOs address the two highest-volume gap types (comparison content architecture at 24 queries, and thin-feature content at 31 queries across people analytics, OKR, talent calibration, and integration proof). The NIO with the highest single-action impact is nio_002 (Comparison Content Architecture Gap) at 24 queries — resolving the comparison URL redirect and creating 4 comparison landing pages would directly convert 15Five from absent to competitive across the highest-intent buying stage.
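The sitemap precondition in the sequencing rule above can be verified mechanically: parse the sitemap and report which commercial paths are absent from its `<loc>` entries. A sketch; the example path list and function name are illustrative, not the audit's actual target set.

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def missing_from_sitemap(sitemap_xml: str, required_paths: list[str]) -> list[str]:
    """Return the required paths with no matching <loc> entry in the sitemap."""
    root = ET.fromstring(sitemap_xml)
    locs = {loc.text.rstrip("/") for loc in root.iter(f"{SITEMAP_NS}loc") if loc.text}
    return [p for p in required_paths
            if not any(u.endswith(p.rstrip("/")) for u in locs)]

# Illustrative commercial paths drawn from pages discussed in this report
COMMERCIAL_PATHS = ["/pricing", "/products/perform", "/integrations"]
```

Re-running the check after the L1 sitemap expansion ships gives a binary pass/fail gate before any L3 content work begins, which is exactly the dependency the sequencing rule encodes.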
Gap coverage note: 129 of 132 gap queries (98%) are assigned to an L2 or L3 action item. The remaining 3 are unrouted; they may be edge cases that don't cluster neatly or that fell below the LLM's grouping threshold.