AI Visibility Audit

15Five
Visibility Report

Competitive intelligence for AI-mediated buying decisions. Where 15Five wins, where it loses, and a prioritized three-layer execution plan — built from 150 buyer queries across ChatGPT + Perplexity.

150 Buyer Queries
5 Personas
8 Buying Jobs
ChatGPT + Perplexity
March 2026

TL;DR


Metric Definitions
Query vs Response
A query is one buyer question. A response is one platform’s answer. With 2 platforms, each query produces 2 responses. Headline metrics are per-query (union-deduped); platform columns are per-response.
Visible / Invisible
A query is visible if the client appears in any platform’s response. Invisible means the client was absent from all responses for that query.
Win / Positioning Gap
A win means the client is the primary recommendation. A positioning gap means the client is visible but not recommended first.
Share of Voice (SOV)
A brand's share of all brand mentions across the query set — its mention count divided by total mentions for all tracked brands. Measures mindshare across the competitive set.
Unique Pages Cited / Citation Instances
Unique pages cited = distinct URLs after normalization. Citation instances = raw count of times a domain is cited across all responses (a domain with 112 instances may map to fewer unique URLs).
Mentions
Query-level, union-deduped count of how many buyer queries reference a brand by name (with or without a URL citation).
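To make the query-versus-response arithmetic concrete, the sketch below (toy data, Python, not the audit's actual pipeline) shows how per-response platform results are union-deduped into per-query visibility, and how share of voice is derived as a brand's share of all brand mentions.

```python
# Toy illustration of the metric definitions above — not the audit pipeline.
from collections import defaultdict

# One record per (query, platform) response; brand lists are hypothetical.
responses = [
    {"query": "q1", "platform": "chatgpt",    "brands": ["Lattice"]},
    {"query": "q1", "platform": "perplexity", "brands": ["Lattice", "15Five"]},
    {"query": "q2", "platform": "chatgpt",    "brands": []},
    {"query": "q2", "platform": "perplexity", "brands": ["Culture Amp"]},
]

# Union-dedupe: a query is "visible" for a brand if ANY platform's response mentions it.
brands_by_query = defaultdict(set)
for r in responses:
    brands_by_query[r["query"]].update(r["brands"])

visibility = sum("15Five" in b for b in brands_by_query.values()) / len(brands_by_query)

# Mentions: number of queries referencing each brand (union-deduped).
mentions = defaultdict(int)
for brands in brands_by_query.values():
    for brand in brands:
        mentions[brand] += 1

# Share of Voice: a brand's mentions as a share of all brands' mentions.
total = sum(mentions.values())
sov = {brand: count / total for brand, count in mentions.items()}

print(f"visibility={visibility:.0%}", {b: f"{s:.0%}" for b, s in sov.items()})
```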
Section 1
Visible at Shortlist, Invisible at Discovery
46%
Overall Visibility
69 of 150 queries
34.7%
High-Intent Win Rate
Shortlist + Compare + Validate
#3
Share of Voice
Behind Lattice, Culture Amp
8–8–44
vs. Lattice H2H
8 wins, 8 losses, 44 ties
28pp
Role Type Gap
DM 44% vs Eval 15.9%

[Narrative] 15Five enters AI-mediated vendor evaluation with a split identity: strong brand recall at the shortlist stage but near-invisible where enterprise buying conversations actually begin. Across 150 queries modeled on five buyer personas and eight buying jobs, 15Five achieved 46% overall visibility (metrics.hero.overall_visibility.rate = 0.46) and ranks #3 in share of voice at 16.35% (69 mentions, metrics.competitive.share_of_voice rank 3), behind Lattice (21.33%) and Culture Amp (17.3%). At the shortlisting stage — where buyers request tool recommendations — visibility climbs to 81% and the win rate reaches 38%, confirming the brand is competitive when it is named in a shortlist context.

The problem is upstream. Across the three early-funnel buying stages (problem identification, solution exploration, and requirements building), 15Five was invisible in 69% of queries. For every 10 buyers who describe their HR performance management problem to an AI assistant, nearly 7 receive a response that does not include 15Five — creating a shortlist deficit before the brand even reaches the evaluation stage. The platform's 34.69% high-intent win rate (metrics.hero.high_intent_win_rate.rate = 0.3469) and 24-percentage-point visibility-to-win gap (metrics.hero.key_gap.vis_to_win_gap_pp = 24) signal that even when 15Five appears, it frequently plays catch-up to competitors who framed the category first.

The root causes are structural: 15Five has no dedicated competitor comparison pages (all 24 comparison-stage queries carry affinity override flags); underdocumented capabilities such as people analytics and manager coaching post 50%+ win rates when they do surface but remain too thin to appear consistently; and six L1 technical issues persist, including a sitemap that omits all commercial pages. The fix is a 138-action plan — 6 L1 technical fixes, 74 L2 content optimizations, and 58 L3 new content items — that systematically closes both the early-funnel discovery gap and the comparison-stage content architecture gap. Repairing these gaps positions 15Five to earn top-of-funnel visibility before its competitors do, then convert that visibility at the comparison stage, where it currently loses to better-documented competitors.

[Core Insight] 15Five wins the shortlist but loses the discovery conversation. AI systems surface 15Five in 81% of shortlisting queries (metrics.visibility.by_buying_job shortlisting.rate = 0.8077), but the platform is invisible in 69% of queries across the early-funnel buying jobs — problem identification, solution exploration, and requirements building — where buyers form the vendor universe they later shortlist from. Competitors who appear at the discovery stage define the evaluation framework; 15Five arrives after the shortlist is already shaped.

Section 2
Visibility Analysis

Where 15Five appears and where it doesn't — across personas, buying jobs, and platforms.

[TL;DR] 15Five appears in 46% of buyer queries and wins 26.1% of those. Converting visibility to wins is the primary challenge (a 20pp gap). High-intent visibility runs higher at 59.0%.

15Five's brand equity is highest at the shortlist but non-existent at the start of the buyer journey — 81% shortlisting visibility vs. 69% early-funnel invisibility means buyers arrive at the shortlisting stage already predisposed toward competitors who framed the category first. The 28pp win-rate gap between decision-makers and evaluators shows that technical-buyer content is the specific lever for improving overall win rates.

Platform Visibility Table

Dimension · Combined · ChatGPT · Perplexity · Delta
All Queries · 46% · 28.7% · 40.9% · Perplexity +12pp
Response count · 150 queries · 150 responses · 149 responses
By Persona
Chief Financial Officer · 52.2% · 34.8% · 47.8% · Perplexity +13pp
Chief People Officer · 38.2% · 26.5% · 38.2% · Perplexity +12pp
Director of HR Technology & People Analytics · 46.9% · 31.2% · 37.5% · Perplexity +6pp
VP of People Operations · 46.9% · 37.5% · 37.5% · 0pp
VP of Talent Management · 48.3% · 13.8% · 46.4% · Perplexity +33pp
By Buying Job
Artifact Creation · 33.3% · 16.7% · 36.4% · Perplexity +20pp
Comparison · 47.1% · 41.2% · 38.2% · ChatGPT +3pp
Consensus Creation · 23.1% · 0% · 23.1% · Perplexity +23pp
Problem Identification · 8.3% · 0% · 8.3% · Perplexity +8pp
Requirements Building · 40% · 13.3% · 33.3% · Perplexity +20pp
Shortlisting · 80.8% · 53.8% · 76.9% · Perplexity +23pp
Solution Exploration · 40% · 20% · 26.7% · Perplexity +7pp
Validation · 52.2% · 34.8% · 47.8% · Perplexity +13pp

Visibility by Buying Job

Artifact Creation · 33.3% (4/12)
Comparison · 47.1% (16/34)
Consensus Creation · 23.1% (3/13)
Problem Identification · 8.3% (1/12)
Requirements Building · 40% (6/15)
Shortlisting · 80.8% (21/26)
Solution Exploration · 40% (6/15)
Validation · 52.2% (12/23)
High-intent visibility (Shortlist + Compare + Validate) · 59.0% (49/83)
High-intent win rate · 34.7% (17/49)
Visibility-to-win gap · −24pp

Visibility & Win Rate by Persona

Chief Financial Officer · 52.2% vis · 41.7% win (12/23)
Chief People Officer · 38.2% vis · 46.2% win (13/34)
Director of HR Technology & People Analytics · 46.9% vis · 20% win (15/32)
VP of People Operations · 46.9% vis · 20% win (15/32)
VP of Talent Management · 48.3% vis · 7.1% win (14/29)
Decision-maker win rate (cfo + chro) · 44% (11/25 visible)
Evaluator win rate (hr_technology_director + vp_people_ops + vp_talent) · 15.9% (7/44 visible)
Role type gap · 28pp

Visibility by Feature Focus

Compensation Management · 27.3% vis (3/11) · 33.3% win (1/3)
Continuous Check-ins · 57.1% vis (8/14) · 25% win (2/8)
Employee Engagement Surveys · 47.6% vis · 20% win (N=21)
HRIS Integrations · 63.6% vis (7/11) · 14.3% win (1/7)
Manager Coaching · 20% vis (3/15) · 66.7% win (2/3)
OKR Goal Tracking · 60% vis (6/10) · 16.7% win (1/6)
People Analytics · 12.5% vis (2/16) · 50% win (1/2)
Performance Reviews · 61.9% vis · 15.4% win (N=21)
Recognition Feedback · 44.4% vis (4/9) · 25% win (1/4)
Talent Calibration · 50% vis (4/8) · 25% win (1/4)

Visibility by Pain Point

Annual Review Burden · 69.2% vis (9/13) · 11.1% win (1/9)
Goal Misalignment · 66.7% vis (4/6) · 25% win (1/4)
HR ROI Proof · 45.8% vis · 45.5% win (N=24)
Ineffective Managers · 38.9% vis (7/18) · 28.6% win (2/7)
Low Engagement No Action · 25% vis (2/8) · 50% win (1/2)
Regrettable Turnover · 57.9% vis · 36.4% win (N=19)
Siloed HR Data · 33.3% vis (2/6) · 50% win (1/2)
Top Talent Flight Risk · 27.3% vis (3/11) · 33.3% win (1/3)

[Data] Overall: 46% (69/150 queries, metrics.hero.overall_visibility). Shortlisting: 81% visibility / 38% win rate. Comparison: 47% / 31%. Validation: 52% / 33%. Problem identification: 8% visibility (1/12 queries). Solution exploration: 40% (6/15). Requirements building: 40% (6/15). Early-funnel invisibility: 69% across these three stages (metrics.funnel_metrics.early_funnel_invisibility_rate). Decision-maker win rate: 44%. Evaluator win rate: 15.9%. Role gap: 28pp (metrics.hero.key_gap.role_gap_pp). ChatGPT 12pp lower than Perplexity (metrics.visibility.platform_delta.value_pp). [Synthesis] The 69% early-funnel invisibility rate is not a sampling artifact — it reflects a systematic content gap at the stages where buyers define what 'good' looks like before they name vendors. At problem identification specifically, 15Five appears in only 8% of queries (1/12), meaning buyers describing their HR pain in their own words almost never receive 15Five as a contextual reference. The 28-point role gap between decision-maker win rate (44%) and evaluator win rate (15.9%) reveals a secondary dimension: evaluators — the Director of HR Technology and VP of People Operations who conduct technical due diligence — produce the weakest win rates despite having above-average visibility. These are the personas who ask integration architecture, analytics depth, and talent calibration questions that 15Five's current content cannot answer. The 12pp ChatGPT disadvantage vs. Perplexity suggests ChatGPT's more conservative citation patterns are suppressing 15Five in educational query formats that dominate the early funnel.

Invisibility Gaps — 81 Queries Where 15Five Doesn’t Appear

Queries where 15Five is not mentioned in any AI response.

ID · Query · Persona · Buying Job · Winner
15f_001"What are the warning signs that good employees are about to leave — and how do mid-market companies catch this early?"chroProblem IDNo Vendor Mentioned
15f_002"How do you prove to a skeptical CFO that people programs actually reduce turnover and save money?"chroProblem IDNo Vendor Mentioned
15f_003"What are mid-market companies doing about managers who were promoted for technical skills but can't actually manage people?"chroProblem IDNo Vendor Mentioned
15f_004"Our annual reviews take two months of admin time and managers dread them — what are other companies doing instead?"vp_people_opsProblem IDNo Clear Winner
15f_005"Biggest challenges with first-time managers who were promoted for individual performance, not people skills"vp_people_opsProblem IDNo Vendor Mentioned
15f_006"We've been running engagement surveys for two years and employees keep asking what actually changed — how do you close the loop?"vp_people_opsProblem IDNo Vendor Mentioned
15f_007"What are the main approaches to unifying performance, engagement, and compensation data when they live in separate HR systems?"hr_technology_directorProblem IDNo Vendor Mentioned
15f_008"We have performance data in one system, engagement in another, and comp in spreadsheets — how do other mid-market companies fix this?"hr_technology_directorProblem IDNo Vendor Mentioned
15f_009"How much does a poor performance management process actually cost a mid-market company in turnover and lost productivity?"cfoProblem IDNo Vendor Mentioned
15f_010"We're linking compensation to performance using spreadsheets and it's becoming a liability — how do other companies handle this?"cfoProblem IDNo Vendor Mentioned
15f_012"Our company sets quarterly OKRs but nobody below the VP level can explain what their goals are — is there a better way to cascade them?"vp_talentProblem IDNo Vendor Mentioned
15f_013"What's the difference between continuous performance management and traditional annual reviews — does continuous actually produce better outcomes?"chroSolution Exp.No Vendor Mentioned
15f_014"Does real-time employee recognition actually reduce turnover, or is it a feel-good feature that fades after a month?"chroSolution Exp.No Vendor Mentioned
15f_015"Main approaches to developing managers at scale — external coaching, training programs, or AI coaching tools?"chroSolution Exp.No Vendor Mentioned
15f_017"Annual engagement surveys vs. real-time pulse checks — which actually drives improvement at a mid-market company?"vp_people_opsSolution Exp.No Vendor Mentioned
15f_018"How does talent calibration work in practice — is it worth the administrative effort for a 300-person company?"vp_people_opsSolution Exp.No Vendor Mentioned
15f_020"Build vs. buy for people analytics — when should a mid-market company invest in a vendor vs. building dashboards in Tableau or Looker?"hr_technology_directorSolution Exp.No Vendor Mentioned
15f_023"What workforce data should HR be reporting to the board, and what tools make that easier than building custom reports?"cfoSolution Exp.No Vendor Mentioned
15f_025"AI coaching tools for managers — how do they work and is there evidence they actually improve manager effectiveness?"vp_talentSolution Exp.No Vendor Mentioned
15f_027"How are mid-market companies connecting pay decisions to performance data without turning compensation into a political battle?"vp_talentSolution Exp.No Vendor Mentioned
15f_028"Must-have vs. nice-to-have features in an employee engagement platform for a mid-market company with 300+ employees"chroReq. BuildingNo Vendor Mentioned
15f_029"What data should a people analytics tool connect to predict which employees are flight risks — performance scores, engagement trends, compensation?"chroReq. BuildingNo Vendor Mentioned
15f_030"We're replacing our current check-in process — what features matter most in a continuous performance management tool for hybrid teams?"chroReq. BuildingNo Vendor Mentioned
15f_032"What questions should I ask vendors about manager coaching and development features — what separates the good tools from the bad ones?"vp_people_opsReq. BuildingNo Vendor Mentioned
15f_035"What analytics capabilities should I evaluate in a performance management platform — custom dashboards, data exports, natural language queries, or all three?"hr_technology_directorReq. BuildingNo Vendor Mentioned
15f_036"Technical requirements for a talent calibration tool — flexible rating scales, bias detection, manager override audit trails, integration with existing review workflows"hr_technology_directorReq. BuildingNo Vendor Mentioned
15f_038"What should I look for in compensation management software that ties pay decisions to performance data and supports pay equity compliance?"cfoReq. BuildingNo Vendor Mentioned
15f_040"What makes a 360-degree feedback tool effective vs. just creating busywork — which features actually develop people?"vp_talentReq. BuildingNo Vendor Mentioned
15f_041"What engagement survey features matter most for predicting turnover — benchmarking, trend analysis, or manager-level breakdowns?"vp_talentReq. BuildingNo Vendor Mentioned
15f_046"Top AI coaching platforms for developing first-time managers at mid-market companies"chroShortlistingNo Clear Winner
15f_047"Which people analytics platforms let you ask plain-English questions about workforce data and get actionable answers?"chroShortlistingNo Vendor Mentioned
15f_052"switching from annual engagement surveys to a platform with real-time pulse and stronger benchmarking for predicting turnover"vp_people_opsShortlistingPerformYard
15f_056"Top people analytics platforms with AI-powered flight risk detection for mid-market companies"hr_technology_directorShortlistingLattice
15f_068"We need a recognition tool people will actually use — replacing a system nobody adopted at our 250-person company"vp_talentShortlistingNo Vendor Mentioned
15f_072"How does Leapsome's manager development compare to platforms with dedicated AI coaching features?"chroComparisonLeapsome
15f_079"How does Culture Amp's analytics compare to platforms with AI-powered people analytics for workforce insights?"hr_technology_directorComparisonCulture Amp
15f_080"Lattice vs Culture Amp — which has more flexible performance review workflows for complex org structures?"hr_technology_directorComparisonCulture Amp
15f_088"We're replacing our current engagement tool — Culture Amp vs Lattice, which is better for mid-market retention strategies?"chroComparisonCulture Amp
15f_089"Lattice vs Leapsome for manager coaching and development features at a mid-market company"vp_people_opsComparisonLattice
15f_090"Culture Amp vs Leapsome for continuous check-ins and pulse surveys — which drives better manager habits?"vp_people_opsComparisonLeapsome
15f_091"Betterworks vs Lattice analytics — switching from a platform with limited reporting, which has stronger people insights?"hr_technology_directorComparisonLattice
15f_092"Culture Amp vs Workleap for engagement surveys — analytics depth vs. simplicity for smaller HR teams"hr_technology_directorComparisonWorkleap
15f_093"Leapsome vs Betterworks for HRIS integrations and data architecture — switching from a platform with poor API support"hr_technology_directorComparisonLeapsome
15f_094"Lattice vs Betterworks pricing for a 300-person company — which is more cost-effective at mid-market scale?"cfoComparisonLattice
15f_095"Culture Amp vs Betterworks for performance reviews — which do mid-market companies prefer after switching from spreadsheets?"cfoComparisonCulture Amp
15f_096"Betterworks vs Leapsome for OKR and goal tracking — pricing and capabilities for a 300-person company"cfoComparisonLeapsome
15f_097"Betterworks vs Leapsome for leadership development and manager training — which is better for mid-market?"vp_talentComparisonLeapsome
15f_098"Leapsome vs Workleap for peer recognition and real-time feedback — replacing our current tool, which do teams actually adopt?"vp_talentComparisonWorkleap
15f_099"Lattice vs Workleap for engagement surveys — switching from a tool that measures engagement but doesn't help us improve it"vp_people_opsComparisonLattice
15f_100"Culture Amp vs Betterworks for performance reviews — pros and cons for a mid-market company with 300+ employees"vp_talentComparisonCulture Amp
15f_101"We're thinking about switching — Betterworks vs Culture Amp, which has better people analytics and flight risk prediction?"chroComparisonCulture Amp
15f_102"Lattice vs Culture Amp for compensation management — switching from spreadsheets, which ties pay to performance data better?"vp_people_opsComparisonLattice
15f_104"Culture Amp customer complaints — does it help teams improve engagement or just generate reports nobody acts on?"chroValidationNo Vendor Mentioned
15f_105"Common complaints about Lattice from HR operations teams — what do they struggle with most?"vp_people_opsValidationNo Vendor Mentioned
15f_107"Leapsome negative reviews — what do customers complain about most?"vp_people_opsValidationNo Clear Winner
15f_108"Culture Amp integration issues — any known problems syncing with Workday or other enterprise HRIS platforms?"hr_technology_directorValidationNo Clear Winner
15f_110"Is Leapsome too new for a mid-market company making a multi-year platform investment — what are the risks?"hr_technology_directorValidationNo Clear Winner
15f_112"Hidden costs of Lattice that mid-market companies don't expect — implementation fees, module add-ons, pricing surprises"cfoValidationNo Clear Winner
15f_120"Common failure modes when rolling out OKR software — what makes teams stop using it within six months?"vp_people_opsValidationNo Vendor Mentioned
15f_122"How accurate are AI-powered flight risk predictions — do people analytics tools actually predict employee turnover?"chroValidationNo Vendor Mentioned
15f_123"Do employee recognition tools actually sustain engagement improvements, or do people stop using them after a month?"vp_talentValidationNo Vendor Mentioned
15f_124"What goes wrong when companies roll out structured 1:1 check-ins — manager resistance, low adoption, what to watch for?"vp_talentValidationNo Vendor Mentioned
15f_125"Biggest risks of automating compensation decisions with software — what can go wrong with pay equity analysis?"chroValidationNo Vendor Mentioned
15f_128"How to justify replacing annual reviews with continuous performance management to a board that doesn't see HR as strategic"chroConsensusNo Vendor Mentioned
15f_129"Business case for connecting compensation management to performance reviews — impact on pay equity and retention"chroConsensusNo Vendor Mentioned
15f_130"How do you build the business case for people analytics when the CEO doesn't trust HR data?"vp_people_opsConsensusNo Vendor Mentioned
15f_131"Risk argument for investing in talent calibration — what happens when you lose top performers because you didn't identify them early enough?"vp_people_opsConsensusNo Vendor Mentioned
15f_132"Total cost of HR data fragmentation — how much does manual reconciliation between disconnected HR systems cost a 300-person company?"hr_technology_directorConsensusNo Vendor Mentioned
15f_133"Impact of switching from annual reviews to continuous recognition and feedback — what do the studies and case studies show?"hr_technology_directorConsensusNo Clear Winner
15f_134"Average cost of replacing an employee vs. cost of retention tools — making the math work for HR tech budget"cfoConsensusNo Vendor Mentioned
15f_135"Lattice vs Culture Amp — which platform has stronger ROI evidence for mid-market performance management?"cfoConsensusLattice
15f_136"How to convince leadership that the current approach to goal setting isn't working and we need a dedicated OKR platform"vp_talentConsensusNo Vendor Mentioned
15f_138"What's the typical payback period for investing in a manager coaching platform — how fast do companies see results?"vp_talentConsensusNo Vendor Mentioned
15f_139"Draft an RFP for a continuous performance management and employee engagement platform for a 400-person mid-market company"chroArtifactNo Vendor Mentioned
15f_142"Build a TCO model for implementing performance management software at a 300-person company over 3 years — licensing, implementation, training, and change management"cfoArtifactNo Vendor Mentioned
15f_143"Draft evaluation criteria for employee engagement platforms that will be reviewed by an executive team — focus on actionability and ROI"chroArtifactNo Vendor Mentioned
15f_144"Create a business case one-pager for investing in an AI-powered manager coaching platform at a mid-market company"vp_talentArtifactNo Vendor Mentioned
15f_145"Write a security and compliance questionnaire for evaluating people analytics platforms — SOC 2, GDPR, data residency, and access controls"hr_technology_directorArtifactNo Vendor Mentioned
15f_146"Draft requirements document for a compensation management module that connects to our existing performance review process"vp_people_opsArtifactNo Vendor Mentioned
15f_148"Build a weighted scoring rubric for evaluating talent calibration and 9-box assessment tools"hr_technology_directorArtifactNo Vendor Mentioned
15f_150"Create a change management plan for rolling out structured 1:1s and continuous check-ins to a team of 300+ with no current formal check-in process"vp_talentArtifactNo Vendor Mentioned

Positioning Gaps — 51 Queries Where 15Five Appears But Loses

Queries where 15Five is mentioned but a competitor is positioned more favorably.

ID · Query · Persona · Buying Job · Winner · 15Five Position
15f_011"How do you identify which employees are high-potential and at risk of leaving before they hand in their notice?"vp_talentProblem IDNo Vendor MentionedBrief Mention
15f_016"We're replacing our ad-hoc 1:1 process — what's the real difference between dedicated check-in platforms and just using meeting agenda templates?"vp_people_opsSolution Exp.No Clear WinnerMentioned In List
15f_019"How do performance management platforms typically integrate with HRIS systems like Workday, BambooHR, and ADP?"hr_technology_directorSolution Exp.No Clear WinnerMentioned In List
15f_021"Open source vs. commercial OKR tools — real tradeoffs for a company with 200-500 employees"hr_technology_directorSolution Exp.No Clear WinnerMentioned In List
15f_022"We've outgrown SurveyMonkey for employee engagement — what does a modern performance management tech stack look like for 300+ employees?"hr_technology_directorSolution Exp.Culture AmpMentioned In List
15f_024"Our current review process doesn't connect to any business outcomes — how do companies move from annual reviews to something measurable?"cfoSolution Exp.No Vendor MentionedMentioned In List
15f_026"What types of HR technology actually move the needle on reducing voluntary turnover at mid-market companies?"vp_talentSolution Exp.No Vendor MentionedMentioned In List
15f_031"Key requirements for evaluating performance review platforms for a 400-person company moving away from annual reviews"vp_people_opsReq. BuildingNo Clear WinnerBrief Mention
15f_033"We want continuous feedback between review cycles — what capabilities actually matter in a recognition and feedback tool?"vp_people_opsReq. BuildingNo Clear WinnerBrief Mention
15f_034"Integration requirements for evaluating performance management software — HRIS sync, SSO, SCIM provisioning, API access, webhook support"hr_technology_directorReq. BuildingNo Vendor MentionedMentioned In List
15f_037"We've tried and failed with spreadsheet-based OKRs — what features in a dedicated OKR tool actually make goal cascading work?"hr_technology_directorReq. BuildingNo Vendor MentionedBrief Mention
15f_039"Evaluation criteria for performance management platforms from a finance perspective — ROI metrics, implementation costs, time to value"cfoReq. BuildingNo Vendor MentionedBrief Mention
15f_042"We need structured 1:1 tools that connect manager check-ins to company goals — what capabilities should we prioritize?"vp_talentReq. BuildingNo Vendor MentionedMentioned In List
15f_044"Which employee engagement platforms actually help you act on survey results, not just collect engagement scores?"chroShortlistingCulture AmpMentioned In List
15f_045"We've outgrown our current performance tool — best platforms for continuous check-ins and manager coaching at a 350-person company"chroShortlistingBetterworksMentioned In List
15f_048"Best compensation management tools for mid-market companies trying to connect pay to performance data"chroShortlistingNo Vendor MentionedMentioned In List
15f_049"Top continuous performance review platforms for replacing spreadsheet-based annual reviews at a 200-500 person company"vp_people_opsShortlistingLatticeStrong 2nd
15f_050"alternatives to our current performance management tool for a 350-person SaaS company focused on reducing regrettable turnover"vp_people_opsShortlistingLatticeMentioned In List
15f_054"performance management platforms with reliable BambooHR and Workday integration — replacing a tool that doesn't sync properly"vp_people_opsShortlistingLatticeMentioned In List
15f_055"Best performance management platforms with native HRIS integrations — Workday, ADP, BambooHR sync without custom middleware"hr_technology_directorShortlistingLatticeMentioned In List
15f_057"looking to replace our current review tool with a continuous performance platform that supports 360-degree feedback and custom review cycles"hr_technology_directorShortlistingLatticeMentioned In List
15f_058"replacing our standalone engagement survey tool — need a platform with real-time dashboards, API access, and data export for an analytics-driven HR team"hr_technology_directorShortlistingCulture AmpMentioned In List
15f_062"OKR platforms affordable enough for mid-market but robust enough to actually make goals stick across departments"cfoShortlistingNo Vendor MentionedStrong 2nd
15f_065"Best OKR tools for companies where goal cascading has never worked — switching from spreadsheets to a dedicated platform"vp_talentShortlistingNo Clear WinnerMentioned In List
15f_066"Which engagement platforms are best at connecting survey data to retention outcomes for mid-market companies?"vp_talentShortlistingLatticeMentioned In List
15f_067"Top tools for developing managers who've never had formal leadership training — practical coaching, not just theory"vp_talentShortlistingCulture AmpMentioned In List
15f_070"We're moving from annual reviews — how does Lattice compare to other platforms for making that transition smooth?"chroComparisonLatticeStrong 2nd
15f_074"How does Culture Amp handle continuous check-ins and manager enablement compared to dedicated check-in platforms?"vp_people_opsComparisonCulture AmpStrong 2nd
15f_075"Switching from our current review tool — how does Lattice compare for making performance reviews less painful?"vp_people_opsComparisonLatticeStrong 2nd
15f_076"How does Workleap's recognition and feedback functionality compare to more comprehensive performance management platforms?"vp_people_opsComparisonWorkleapMentioned In List
15f_077"We're considering switching our engagement tool — how does Culture Amp's benchmarking compare to other platforms' action-planning features?"vp_people_opsComparisonCulture AmpMentioned In List
15f_078"How does Lattice's integration architecture compare to other performance platforms for HRIS sync, APIs, and webhooks?"hr_technology_directorComparisonLatticeMentioned In List
15f_082"We're replacing spreadsheet-based comp decisions — how does Lattice's compensation module compare for linking pay to performance?"cfoComparisonLatticeBrief Mention
15f_084"How does Betterworks' total cost compare to mid-market alternatives — implementation, training, and per-seat pricing?"cfoComparisonBetterworksStrong 2nd
15f_085"How does Lattice's talent calibration and 9-box feature compare to other performance management platforms?"vp_talentComparisonLatticeStrong 2nd
15f_086"How does Leapsome's continuous feedback compare to other 1:1 tools — which one do managers actually adopt?"vp_talentComparisonLeapsomeMentioned In List
15f_087"How does Workleap's engagement surveys compare to more analytics-heavy platforms for a 200-person company?"vp_talentComparisonWorkleapStrong 2nd
15f_103"Lattice implementation problems when migrating from another performance management tool at a mid-market company"chroValidationNo Vendor MentionedBrief Mention
15f_106"We're evaluating Culture Amp as a replacement — what are the biggest downsides of their performance review features?"vp_people_opsValidationNo Clear WinnerBrief Mention
15f_109"Betterworks analytics and reporting limitations — what can't it do that other platforms handle?"hr_technology_directorValidationNo Clear WinnerBrief Mention
15f_111"Betterworks reviews from mid-market companies — is it worth the enterprise-level pricing?"cfoValidationNo Clear WinnerMentioned In List
15f_113"Is Workleap too basic for a growing mid-market company — will we outgrow it in two years?"cfoValidationNo Clear WinnerBrief Mention
15f_114"Workleap Officevibe limitations — what are the biggest feature gaps compared to more comprehensive platforms?"vp_talentValidationNo Clear WinnerMentioned In List
15f_119"15Five talent management and performance calibration — how does it compare to dedicated talent review platforms?"vp_talentValidationNo Clear WinnerPrimary Recommendation
15f_121"Biggest risks of switching to continuous performance management from annual reviews at a mid-market company"hr_technology_directorValidationNo Vendor MentionedMentioned In List
15f_127"Case studies of mid-market companies that improved manager effectiveness after switching to continuous performance management"chroConsensusLatticeMentioned In List
15f_137"Case studies of companies that reduced regrettable turnover after switching from annual reviews to continuous performance management"vp_talentConsensusNo Vendor MentionedMentioned In List
15f_140"Create a vendor comparison scorecard for 15Five, Lattice, Culture Amp, Betterworks, and Leapsome focused on integration capabilities and data architecture"hr_technology_directorArtifactLatticeStrong 2nd
15f_141"Build an evaluation template for comparing continuous performance management platforms — weighted scoring for reviews, check-ins, engagement, and analytics"vp_people_opsArtifactNo Vendor MentionedMentioned In List
15f_147"Create a comparison matrix for OKR and goal tracking features across 15Five, Betterworks, Lattice, and Leapsome"chroArtifactNo Clear WinnerMentioned In List
15f_149"Draft an executive summary comparing recognition and continuous feedback platforms for a leadership team — focus on retention impact"vp_talentArtifactNo Vendor MentionedMentioned In List
Section 3
Competitive Position

Who’s winning when 15Five isn’t — and who controls the narrative at each buying stage.

[TL;DR] 15Five ranks #3 in Share of Voice with a 30W–28L head-to-head record across 9 competitors.

15Five holds its own against Lattice (tied H2H 8-8) and dominates Leapsome and Engagedly, but loses clearly to Culture Amp (3-6) and Betterworks (2-6) — the difference is content depth, not product capability. The 67% win rate on manager-coaching queries where Kona is visible is the single most underexploited competitive signal in the data, confirming AI coaching is a differentiator that buyers respond to when they can find it.

Share of Voice

Company · Mentions · Share
Lattice · 90 · 21.3%
Culture Amp · 73 · 17.3%
15Five · 69 · 16.4%
Leapsome · 50 · 11.8%
Betterworks · 41 · 9.7%
Quantum Workplace · 30 · 7.1%
PerformYard · 28 · 6.6%
Workleap · 24 · 5.7%
Engagedly · 15 · 3.5%
Reflektive · 2 · 0.5%

Head-to-Head Records

vs. Lattice · 8W – 8L – 44T (60 co-appear)
vs. Culture Amp · 3W – 6L – 30T (39 co-appear)
vs. Betterworks · 2W – 6L – 16T (24 co-appear)
vs. Leapsome · 6W – 2L – 24T (32 co-appear)
vs. Workleap · 2W – 2L – 16T (20 co-appear)
vs. Quantum Workplace · 1W – 1L – 15T (17 co-appear)
vs. Engagedly · 4W – 1L – 5T (10 co-appear)
vs. PerformYard · 3W – 2L – 14T (19 co-appear)
vs. Reflektive · 1W – 0L – 1T (2 co-appear)

Invisible Query Winners

For the 81 queries where 15Five is completely absent:

No AI Coverage · 53 wins (65%)
Culture Amp · 7 wins (9%)
No Clear Winner · 7 wins (9%)
Lattice · 5 wins (6%)
Betterworks · 4 wins (5%)
Leapsome · 3 wins (4%)
PerformYard · 1 win (1%)
Workleap · 1 win (1%)

Surprise Competitors

Vendors appearing in responses not in 15Five’s defined competitive set.

BambooHR — 4.7% SOV · Flagged
Perceptyx — 1.7% SOV · Flagged
beqom — 1.4% SOV · Flagged
HiBob — 1.4% SOV · Flagged
WorkTango — 1.4% SOV · Flagged
Deel — 1.4% SOV · Flagged
Workday — 1.2% SOV · Flagged
Workhuman — 1.2% SOV · Flagged
Visier — 1.2% SOV · Flagged
Paycor — 1.2% SOV · Flagged

[Synthesis] The competitive picture reveals a split field. 15Five is exactly tied with Lattice on a per-match basis (8W-8L-44T) — the primary competitor cannot pull ahead when they co-appear — but loses clearly to Culture Amp (3W-6L) and Betterworks (2W-6L) when those competitors appear in the same response. Culture Amp wins on engagement survey and analytics positioning; Betterworks wins on performance review platform queries where buyers are actively switching. 15Five's strongest competitive record is against Leapsome (6W-2L-24T), suggesting the brand has developed effective comparison positioning when Leapsome is the alternative. The 66.7% manager coaching win rate when visible is the single strongest competitive signal in the data: when Kona is cited, 15Five wins two-thirds of the time — confirming that Kona is an under-marketed differentiator. The BambooHR surprise (20 mentions in the evaluation space) indicates AI systems increasingly position BambooHR as a performance management alternative — a competitive threat not addressed in 15Five's current comparison content.

Section 4
Citation & Content Landscape

What AI reads and trusts in this category.

[TL;DR] 15Five had 71 unique pages cited across buyer queries, ranking #3 among all cited domains. 10 high-authority domains cite competitors but not 15Five.

15Five's #3 citation rank confirms a credible content footprint, but the 10-query third-party gap — where AI systems cited Gallup, SHRM, or academic sources instead of 15Five — maps precisely to the thin content areas targeted by the L3 NIO plan. Closing these content gaps converts third-party citations to owned citations, shifting 15Five from vendor-that-has-features to authority-that-defines-the-category.

Top Cited Domains (citation instances)

lattice.com · 165
cultureamp.com · 126
15five.com · 112 (#3)
betterworks.com · 99
leapsome.com · 77
g2.com · 75
success.15five.com · 68 (#7)
peoplemanagingpeople.com · 54
linkedin.com · 53
quantumworkplace.com · 51

15Five URL Citations by Page

www.15five.com16
success.15five.com/hc/en-us/articles/1997683054...3
www.15five.com/blog/pendo-reduces-turnover-by-2...3
www.15five.com/solutions/reduce-regrettable-tur...2
www.15five.com/blog/guide-to-performance-manage...2
www.15five.com/products/perform/ai-assisted-rev...2
success.15five.com/hc/en-us/articles/3600523467...2
success.15five.com/hc/en-us/articles/1390263345...2
success.15five.com/hc/en-us/articles/1392119953...2
success.15five.com/hc/en-us/articles/3177987475...2
success.15five.com/hc/en-us/articles/3600026995...2
success.15five.com/hc/en-us/articles/3600026996...2
www.15five.com/products/perform2
www.15five.com/partners/technology-partners/int...2
www.15five.com/products/perform/okrs-and-goals2
www.15five.com/blog/ai-predictive-analytics-for...1
www.15five.com/blog/trustradius-how-using-15fiv...1
success.15five.com/hc/en-us/articles/3090774315...1
success.15five.com/hc/en-us/articles/3085435206...1
success.15five.com/hc/en-us/articles/3028541446...1
www.15five.com/resources/on-demand/performance-...1
www.15five.com/products/15five-ai1
success.15five.com/hc/en-us/articles/3605404832...1
www.15five.com/resources/on-demand/the-ai-compa...1
success.15five.com/hc/en-us/articles/1581797015...1
www.15five.com/resources/research/reviewing-the...1
success.15five.com/hc/en-us/articles/3600065766921
www.15five.com/solutions/improve-manager-effect...1
success.15five.com/hc/en-us/articles/3600065766...1
www.15five.com/blog/empowered-education1
www.15five.com/blog/how-to-implement-impactful-...1
www.15five.com/blog/workplace-challenges1
www.15five.com/blog/top-hr-issues-20211
www.15five.com/blog/creating-a-pip-performance-...1
www.15five.com/blog/career-hub-employee-growth1
www.15five.com/blog/best-self-kickoff1
www.15five.com/blog/6-steps-to-better-onboardin...1
www.15five.com/blog/4-hidden-challenges-that-ho...1
www.15five.com/blog/continuous-employee-feedback1
success.15five.com/hc/en-us/articles/3600517782...1
www.15five.com/blog/the-benefits-of-integrating...1
success.15five.com/hc/en-us/articles/1710639436...1
success.15five.com/hc/en-us/articles/3600026995...1
success.15five.com/hc/en-us/articles/3600206958...1
www.15five.com/security1
success.15five.com/hc/en-us/articles/3086753652...1
success.15five.com/hc/en-us/articles/1181684228...1
www.15five.com/hubfs/Content/E-Books/15Five_202...1
www.15five.com/solutions/increase-employee-enga...1
www.15five.com/blog/how-15five-can-help-improve...1
www.15five.com/hubfs/Content/E-Books/15Five_Emp...1
www.15five.com/blog/employee-engagement-roi-cal...1
www.15five.com/blog/a-case-for-increasing-your-...1
www.15five.com/resources/on-demand/role-of-enga...1
success.15five.com/hc/en-us/articles/4404620478...1
success.15five.com/hc/en-us/articles/3600571794...1
success.15five.com/hc/en-us/articles/4404620505...1
www.15five.com/blog/ensure-fair-and-consistent-...1
success.15five.com/hc/en-us/articles/2386021413...1
success.15five.com/hc/en-us/articles/4404623881...1
www.15five.com/products/perform/calibrations1
www.15five.com/blog/kreg-tool1
www.15five.com/blog/state-of-employee-turnover1
www.15five.com/blog/what-is-continuous-performa...1
www.15five.com/blog/the-impact-of-regrettable-t...1
www.15five.com/winter-2026-product-release1
success.15five.com/hc/en-us/articles/3600256000...1
success.15five.com/hc/en-us/articles/3600026989...1
success.15five.com/hc/en-us/articles/3600026821...1
www.15five.com/blog/using-15fives-performance-m...1
www.15five.com/blog/5-must-have-features-to-loo...1
Total 15Five unique pages cited · 71
15Five domain rank · #3

Competitor URL Citations

Lattice · 188 URL citations
Culture Amp · 176 URL citations
Betterworks · 116 URL citations
Leapsome · 90 URL citations
Workleap · 61 URL citations
Quantum Workplace · 52 URL citations
PerformYard · 36 URL citations
Engagedly · 12 URL citations
Reflektive · 1 URL citation

Key Third-Party Gaps

lattice.com (165 domain citations) · 15Five not present
cultureamp.com (126 domain citations) · 15Five not present
betterworks.com (99 domain citations) · 15Five not present
leapsome.com (77 domain citations) · 15Five not present
peoplemanagingpeople.com (54 domain citations) · 15Five not present

[Synthesis] 15Five's 71 unique cited pages and #3 citation rank confirm the brand has established a content footprint that AI systems recognize — this is a foundation to build on, not a failure state. The alignment between #3 SOV rank and #3 citation rank is mechanically consistent: citation share and mention share correlate because the same content quality gaps that suppress SOV also suppress citation frequency. The 10-query third-party gap identifies the most actionable opportunity: for these 10 queries, AI systems cited Gallup, SHRM, academic publications, or consulting firm reports instead of 15Five's content. These third-party citations concentrate in people analytics methodology, OKR implementation guidance, and talent calibration frameworks — exactly the topics mapped to NIO 001 (people analytics), NIO 003 (OKR), and NIO 004 (talent calibration). The implication is that 15Five's current content on these topics is either absent or not authoritative enough to displace neutral third-party sources, confirming that the L3 NIO content investments are the path to citation capture, not optimization of existing pages.

Section 5
Prioritized Action Plan

Three layers of recommendations ranked by commercial impact and implementation speed.

[TL;DR] 132 total gaps: 81 invisibility + 51 positioning. Of these, 74 can be addressed by optimizing existing content (L2) and 58 require new content creation (L3); 6 L1 technical fixes are prerequisites for both.

Six L1 technical fixes are prerequisites for all other improvements — specifically the sitemap expansion and comparison URL repair, which unblock AI crawler access to new content and restore authority to the comparison URL structure. Execute L1 first, then proceed through 74 L2 page edits and 58 L3 new content items in priority order; the single highest-ROI action is creating the 4 comparison landing pages in NIO 002, which addresses 24 comparison-stage queries currently producing near-zero 15Five wins.

Priority numbering: Numbers reflect unified ranking across all layers, ordered by commercial impact × implementation speed. Lower number = higher priority. Gaps in numbering indicate actions assigned to other layers.

Layer 1 Technical Fixes

Configuration and infrastructure changes. Owner: Engineering / DevOps. Timeline: Days to weeks.

XML Sitemap Contains Only 19 Blog URLs — All Commercial Pages Absent

Priority 2
Impact: Medium · Time: 1-3 days

Issue: The sitemap at https://www.15five.com/sitemap.xml contains exactly 19 URLs, all of which are blog posts or resource thank-you pages with lastmod timestamps of November-December 2025. Zero product pages, zero solution pages, zero pricing pages, zero integration pages, zero comparison-redirect pages, and zero feature subpages appear in the sitemap. No sitemap index file exists (sitemap_index.xml and hs-sitemap.xml both return 404). Core commercial pages such as /products, /products/perform, /products/engage, /products/kona, /products/perform/compensation, /pricing, /integrations, /solutions/reduce-regrettable-turnover, and /why-15five are all entirely absent from any known sitemap.

Fix: Expand the sitemap to include all commercial pages — product pages, feature subpages, solution pages, pricing, integrations, comparison-redirect pages, and customer stories. Add accurate lastmod timestamps. If HubSpot CMS is in use (suggested by robots.txt Disallow patterns for /_hcms/ paths), verify sitemap page-type inclusion settings in Settings > Website > Pages > Sitemap and ensure all page types are enabled. Submit the updated sitemap to Google Search Console and Bing Webmaster Tools.
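A minimal sketch of what the expanded sitemap entries could look like, assuming a hand-maintained page inventory; the URLs and dates below are illustrative, and in practice the list should be generated from the CMS (e.g., HubSpot's sitemap settings) rather than by hand:

```python
# Illustrative only: emit a supplemental sitemap for commercial pages with real lastmod dates.
from xml.etree import ElementTree as ET

# Hypothetical inventory — replace with the actual CMS page list and true modification dates.
commercial_pages = {
    "https://www.15five.com/products/perform": "2026-02-10",
    "https://www.15five.com/products/engage": "2026-02-10",
    "https://www.15five.com/pricing": "2026-01-28",
    "https://www.15five.com/integrations": "2026-01-15",
}

urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for loc, lastmod in commercial_pages.items():
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = loc
    ET.SubElement(url, "lastmod").text = lastmod  # content-modification date, not a bulk publish date

ET.ElementTree(urlset).write("sitemap-commercial.xml", xml_declaration=True, encoding="utf-8")
```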

Competitor Comparison URLs Redirect to Generic Brand Page With No Competitor Content

Priority 14
Impact: Medium · Time: 1-2 weeks

Issue: Three URLs that appear in search engine results as dedicated competitor comparison pages — /15five-vs-lattice, /15five-vs-cultureamp/, and /15five-vs-leapsome/ — all redirect to the generic /why-15five page. The /why-15five page contains no competitor-specific content: it does not mention Lattice, Culture Amp, or Leapsome by name, and contains only generic brand messaging ('The new ERA OF HR'). Fetching each comparison URL confirmed the canonical page is /why-15five and the full page content is identical across all three. Web search results still index these URLs with competitor-specific titles (e.g., '15Five vs Culture Amp | Comparing Employee Management...'), meaning buyers and AI crawlers who follow these URLs from search results land on a page that does not address the query that brought them there.

Fix: Either (a) create dedicated comparison landing pages at the existing URLs with substantive head-to-head content for each competitor, or (b) if comparison pages are not being maintained, implement 301 redirects from these URLs to the blog posts that do contain comparison content (e.g., /15five-vs-lattice → /blog/heres-why-people-choose-15five-over-lattice). Option (a) is strongly preferred: dedicated comparison pages with feature matrices, use-case differentiation, and migration guides are among the highest-ROI content types for AI citation in competitive evaluation queries. At minimum, create comparison pages for the top 3 primary competitors: Lattice, Culture Amp, and Betterworks.
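Before choosing between option (a) and option (b), it is worth re-verifying where the indexed comparison URLs currently resolve. A quick check along these lines (Python with the requests package; a sketch, not a required tool) reproduces the finding and can be re-run after the fix ships:

```python
# Sketch: confirm each indexed comparison URL's redirect chain and final destination.
import requests

comparison_urls = [
    "https://www.15five.com/15five-vs-lattice",
    "https://www.15five.com/15five-vs-cultureamp/",
    "https://www.15five.com/15five-vs-leapsome/",
]

for url in comparison_urls:
    resp = requests.get(url, allow_redirects=True, timeout=15)
    chain = [r.url for r in resp.history] + [resp.url]
    print(f"{url}\n  status={resp.status_code}  chain: {' -> '.join(chain)}")
```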

No Date Signals on Any Product or Solution Page

Priority 1
Impact: Medium · Time: 1-3 days

Issue: All product pages, solution pages, the why-15five page, and the pricing page have no visible last-updated dates and are absent from the sitemap — meaning no lastmod signal is available from any source. Freshness could not be determined for 17 of 30 pages analyzed. While blog posts in the sitemap carry lastmod timestamps (November-December 2025), these appear to be bulk-refreshed timestamps rather than per-post content modification dates: several blog posts show sitemap lastmod of 2025-11-25 or 2025-11-26 regardless of their original publication date (some were written in 2017-2019).

Fix: Add accurate lastmod timestamps to all commercial pages in the sitemap (requires first adding them to the sitemap per finding sitemap_missing_commercial_pages). Ensure sitemap lastmod values reflect actual content modification dates, not bulk publish dates. Consider adding visible 'Last updated: [date]' metadata to product and solution pages. Audit the bulk sitemap refresh — verify that pages with Nov 2025 lastmod were actually updated in November 2025 vs. a CMS auto-update.
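One way to audit the suspected bulk refresh, sketched below assuming the requests package and an arbitrary clustering threshold, is to pull the live sitemap and check whether lastmod values cluster on a single day:

```python
# Sketch: flag lastmod dates that cluster heavily, which suggests a bulk/CMS auto-refresh.
from collections import Counter
from xml.etree import ElementTree as ET
import requests

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
xml = requests.get("https://www.15five.com/sitemap.xml", timeout=15).text
root = ET.fromstring(xml)

lastmods = [el.text[:10] for el in root.findall(".//sm:lastmod", NS) if el.text]
for day, count in Counter(lastmods).most_common():
    suspect = "  <- possible bulk refresh" if count > max(1, len(lastmods) // 2) else ""
    print(f"{day}: {count} URLs{suspect}")
```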

Case Study Page Returns Minimal Body Content — Verify Gating or CSR

Priority 13
Impact: Medium · Time: 1-3 days

Issue: The Kreg Tool case study page at /resources/case-studies/how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20 returned almost exclusively navigation and footer markup with negligible body content — only the headline metric ('reduced turnover by over 20%') and a download button were accessible. Related customer stories presented as blog posts (Pendo, Auror) returned full body content normally. The case study format on this URL appears to use a gated download model (PDF behind a form), which renders the page's substantive content inaccessible to AI crawlers.

Fix: Convert the highest-value case studies from gated PDF format to fully accessible HTML pages with inline outcome metrics, challenge/solution narrative, and specific product features used. Keep the formatted PDF as a downloadable bonus for users who want it. This approach makes the content available to both AI crawlers and human readers without sacrificing lead capture (the form can be offered as an optional 'download full report' CTA within the page). Priority case studies to convert: Kreg Tool, TrustRadius, and any others using the gated format.

Schema Markup: Manual Verification Required

Priority 19
Impact: Low · Time: 1-3 days

Issue: This analysis was conducted using rendered page content (web_fetch returns markdown, not raw HTML), so JSON-LD schema blocks, meta tags, and OG tags are not visible in any of the 30 pages analyzed. Whether product pages carry Product or SoftwareApplication schema, blog posts carry Article schema with datePublished/dateModified, pricing pages carry Offer schema, or FAQ sections carry FAQPage schema cannot be determined from this analysis method.

Fix: Audit schema implementation using Google's Rich Results Test (https://search.google.com/test/rich-results) or a Screaming Frog structured data crawl. Priority items: (1) blog posts — verify Article/BlogPosting schema with author, datePublished, dateModified; (2) pricing page — verify Offer/PriceSpecification schema; (3) product FAQ sections — add FAQPage schema; (4) comparison pages — add WebPage schema with about properties referencing competitor entities once comparison content is restored.
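As a lightweight complement to the Rich Results Test, a crawl-style spot check along these lines (a sketch assuming the requests package; the page list is illustrative) reports which JSON-LD @type values each page actually declares:

```python
# Sketch: list declared JSON-LD @type values per page (empty = no structured data found).
import json, re, requests

pages = [
    "https://www.15five.com/pricing",
    "https://www.15five.com/products/perform",
]

jsonld_blocks = re.compile(r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>', re.S | re.I)

for url in pages:
    html = requests.get(url, timeout=15).text
    types = []
    for block in jsonld_blocks.findall(html):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue
        items = data if isinstance(data, list) else [data]
        types += [i.get("@type") for i in items if isinstance(i, dict)]
    print(f"{url}: {types or 'no JSON-LD found'}")
```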

Meta Descriptions and OG Tags: Manual Verification Required

Priority 18
Impact: Low · Time: 1-3 days

Issue: Meta descriptions and Open Graph tags (og:description, og:image, og:title) are not accessible via rendered markdown analysis. None of the 30 pages analyzed had visible meta description or OG tag content in the fetched output.

Fix: Audit meta descriptions and OG tags using Screaming Frog, Ahrefs site audit, or browser view-source. Confirm every commercial page has a unique meta description (140-160 characters) with a specific capability claim. For the /why-15five page (which currently serves as the redirect destination for three competitor comparison URLs), ensure the meta description explicitly positions 15Five against named competitors to preserve some competitive signal.
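The same kind of crawl pass can flag missing or out-of-range meta descriptions and absent OG tags; this sketch assumes the requests and beautifulsoup4 packages, with an illustrative page list:

```python
# Sketch: report meta description length and any missing Open Graph tags per page.
import requests
from bs4 import BeautifulSoup

for url in ["https://www.15five.com/why-15five", "https://www.15five.com/pricing"]:
    soup = BeautifulSoup(requests.get(url, timeout=15).text, "html.parser")
    desc_tag = soup.find("meta", attrs={"name": "description"})
    desc = desc_tag.get("content", "").strip() if desc_tag else ""
    missing_og = [p for p in ("og:title", "og:description", "og:image")
                  if soup.find("meta", attrs={"property": p}) is None]
    note = "missing" if not desc else f"{len(desc)} chars" + ("" if 140 <= len(desc) <= 160 else " (outside 140-160)")
    print(f"{url}\n  meta description: {note}\n  missing OG tags: {missing_og or 'none'}")
```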

Layer 2 Existing Content Optimization

Existing pages that need restructuring or deepening. Owner: Content Team. Timeline: Weeks.

Deepen /products/engage for Early-Funnel Action-Planning and Turnover-Prediction Queries

Priority 11
Currently: covered. The /products/engage page covers the product but lacks: (1) a 'Survey → Action → Outcome' workflow that answers 'how do you close the loop on engagement data?'; (2) benchmarking or outcome evidence; (3) pulse-vs-annual comparison framing; (4) buyer-language evaluation criteria for engagement platforms. Culture Amp wins on queries like 15f_044 and 15f_058 by providing this action-planning and analytics framing; PerformYard wins 15f_052 with benchmark comparison content.

The /products/engage page presents engagement features as a capabilities list but has no problem-framing section — queries asking about warning signs of employee attrition (15f_001) or how to close the loop on engagement surveys (15f_006) cannot be answered by citing a product feature page. The /products/engage page lacks an outcome evidence block — it claims engagement improvements but provides no quantified customer results (response rate improvements, action-plan completion rates, turnover reduction data) that AI systems can extract as citable claims. The /products/engage page does not address the pulse-vs-annual survey tradeoff that appears in 3 queries (15f_017, 15f_041, 15f_114) — Culture Amp's comparable page wins these queries by including explicit 'when to use pulse vs. annual' guidance.

Queries affected: 15f_001, 15f_006, 15f_017, 15f_022, 15f_028, 15f_041, 15f_052, 15f_058, 15f_066, 15f_104, 15f_114, 15f_121, 15f_143

Add Migration Evidence and Outcome Proof to /products/perform for Continuous PM Switching Queries

Priority 9
Currently: covered. The /products/perform page covers the product but lacks: (1) a migration narrative for buyers switching from annual reviews or incumbent tools; (2) quantified outcome evidence connected to the performance review feature specifically; (3) a continuous vs. annual comparison table that AI systems can extract; (4) requirements-building content for buyers evaluating replacement platforms. Lattice wins 15f_049, 15f_057, 15f_070, 15f_075, 15f_127 by providing 'why teams switch to Lattice' sections and migration guides.

The /products/perform page has no switching or migration narrative — queries like 15f_049 ('Top continuous performance review platforms for replacing spreadsheet-based annual reviews') and 15f_057 ('replacing our current review tool — support for 360-degree feedback and custom review cycles') lose to Lattice because Lattice's comparable page includes explicit 'migrating from spreadsheets' language and a migration guide. The /products/perform page lacks customer outcome evidence tied specifically to the performance review feature — the Auror and Pendo case study data exists on blog posts but is not integrated into the product page narrative, making the page non-citable for 'does continuous PM actually produce better outcomes?' queries. The /products/perform page does not include a structured 'Continuous vs. Annual Reviews: Key Structural Differences' comparison that AI systems can extract for the educational solution-exploration queries (15f_013, 15f_024) where no vendor is recommended but a structural comparison would surface 15Five as the page host.

Queries affected: 15f_004, 15f_013, 15f_016, 15f_024, 15f_030, 15f_031, 15f_040, 15f_042, 15f_049, 15f_057, 15f_103, 15f_105, 15f_124, 15f_127, 15f_128, 15f_137, 15f_141, 15f_150

Add AI Coaching Evidence and Methodology to /products/kona for Manager Effectiveness Queries

Priority 8
Currently: covered. The /products/kona page covers the product's existence but lacks: (1) outcome evidence showing Kona improves manager effectiveness; (2) a methodology explainer describing how AI coaching works; (3) a head-to-head comparison with external coaching programs, training platforms, and generic AI tools; (4) an ROI framework. Culture Amp wins 15f_067; Leapsome wins 15f_072; both competitors present manager development content with methodology and outcome framing that Kona's page currently lacks.

The /products/kona page describes Kona AI Coach as a product but provides no evidence of effectiveness — queries 15f_025 ('AI coaching tools for managers — is there evidence they actually improve manager effectiveness?') and 15f_046 (shortlisting, no_clear_winner) can't cite this page because it makes no verifiable outcome claims with data. The /products/kona page contains no explanation of HOW the AI coaching works — there is no methodology section covering what data Kona uses, how it generates coaching recommendations, and what differentiates it from generic AI prompting — making it non-citable for 'how do AI coaching tools work?' queries (15f_015, 15f_025). The /products/kona page does not address the 'AI coaching vs. external coaching programs vs. training platforms' comparison framing that appears in 4 queries (15f_015, 15f_025, 15f_067, 15f_138) — buyers evaluating manager development approaches need this comparison to justify AI coaching selection.

Queries affected: 15f_003, 15f_005, 15f_015, 15f_025, 15f_032, 15f_046, 15f_067, 15f_107, 15f_110, 15f_138, 15f_144

Add Pay Equity Compliance and Evaluation Criteria to /products/perform/compensation/

Priority 10
Currently: covered. The /products/perform/compensation/ page covers the feature but is missing: (1) pay equity compliance specifics (audit trails, bias detection, EEOC-compatible reporting); (2) structured evaluation criteria for buyers comparing compensation management modules; (3) an explicit data flow narrative from performance ratings to compensation recommendations to manager override documentation. Lattice wins comparison queries on compensation (15f_082, 15f_102) by providing a 'Pay for performance' framework with described data flow.

The /products/perform/compensation/ page does not include pay equity compliance specifics — query 15f_038 ('What should I look for in compensation management software that supports pay equity compliance?') and 15f_125 ('Biggest risks of automating compensation decisions — what can go wrong with pay equity analysis?') cannot cite this page because compliance capabilities are not documented. The /products/perform/compensation/ page lacks a buyer evaluation checklist or evaluation criteria framework — queries 15f_038 (requirements building) and 15f_048 (shortlisting) need a page that helps buyers evaluate compensation management tools, not just a feature description. The /products/perform/compensation/ page does not describe the performance-rating-to-compensation data flow — the defining value proposition ('connect pay decisions to performance data without spreadsheets') is stated but not illustrated with a step-by-step process that AI systems can extract as a citable workflow.

Queries affected: 15f_010, 15f_027, 15f_038, 15f_048, 15f_112, 15f_125, 15f_129, 15f_146

Rebuild /integrations from Directory to Integration Evidence Hub with Technical Architecture and Success Stories

Priority 16
Currently: covered. The /integrations page lists supported integrations (Workday, BambooHR, ADP, etc.) but provides no: (1) technical integration architecture documentation (SSO types, SCIM provisioning, API access, webhooks); (2) customer success stories demonstrating integration reliability at scale; (3) comparison data vs. Lattice's integration ecosystem. Lattice wins the integration vendor comparison artifact (15f_140) by providing integration-specific case studies and API documentation. The shortlisting affinity override flags confirm the directory format doesn't match the case_study/landing_page content type required for shortlisting queries.

The /integrations directory at https://www.15five.com/integrations lists supported HRIS platforms but contains zero content about integration architecture — queries 15f_034 ('Integration requirements for evaluating performance management software — HRIS sync, SSO, SCIM provisioning, API access, webhook support') and 15f_108 ('Culture Amp integration issues — any known problems syncing with Workday or other enterprise HRIS platforms?') cannot cite this page because technical architecture is not documented. The /integrations page has no customer integration success stories — shortlisting queries 15f_054 and 15f_055 (both winner=lattice) require evidence that integrations work reliably at scale with named HRIS platforms, not just confirmation that integrations exist. The /integrations page lacks any comparison framing against competitor integration ecosystems — query 15f_140 ('Create a vendor comparison scorecard for 15Five, Lattice, Culture Amp — integration capabilities and data architecture', winner=lattice) loses because Lattice has comparison-ready integration documentation that 15Five's directory cannot provide.

Queries affected: 15f_007, 15f_019, 15f_034, 15f_108, 15f_132, 15f_140

Create CFO ROI Content Hub to Replace /pricing Fallback for Business-Case and Cost-of-Inaction Queries

Priority 5
Currently: partial. The /pricing page covers platform costs but is missing: (1) cost-of-inaction framing (what does bad PM actually cost in turnover?); (2) ROI metrics from PM investment; (3) competitor pricing comparison transparency; (4) payback period modeling. Lattice wins 15f_135 (CFO, consensus creation, ROI evidence comparison) by having ROI calculator content and outcome statistics on their pricing and customer evidence pages. Queries 15f_009, 15f_039, 15f_134 are routed to /pricing only as a fallback because no dedicated ROI/business-case content exists.

The /pricing page at https://www.15five.com/pricing lists plan prices and feature tiers but has no ROI framing — CFO queries about the cost of poor PM processes (15f_009) and evaluation ROI metrics (15f_039) cannot be answered by citing a pricing page, and routing these queries to /pricing as a coverage fallback confirms the content gap rather than filling it. The /customer-stories/ page has case studies with outcome data but formats them as narrative blog posts rather than extractable ROI metrics — the Pendo (21% turnover reduction) and Auror (94% retention) outcomes are buried in story prose rather than surfaced as structured, AI-extractable claims. The routing of 15f_009 and 15f_039 to /pricing reveals the absence of any dedicated business-case or ROI content on the site — the CFO's question 'how much does poor performance management cost?' has no home anywhere in 15Five's content inventory.

Queries affected: 15f_009, 15f_039, 15f_111, 15f_113, 15f_134, 15f_135

Restructure /blog/check-ins-and-1-on-1s/ for AI Extractability on Recognition and Continuous Feedback Queries

Priority 17
Currently: covered. Both blog posts cover check-in and feedback methodology but lack: (1) outcome evidence connecting recognition/check-ins to retention improvement; (2) structured capability comparison data for buyer evaluation queries; (3) AI-extractable heading hierarchy for specific buyer questions. Workleap wins 15f_076 and 15f_098 with recognition-specific product pages that include feature comparisons and adoption evidence. The feature (recognition_feedback) has no dedicated product landing page — coverage depends entirely on methodology blog posts.

The /blog/check-ins-and-1-on-1s/ page explains how to run check-ins but lacks outcome evidence — queries 15f_014 ('Does real-time employee recognition actually reduce turnover, or is it a feel-good feature?') and 15f_123 ('Do employee recognition tools actually sustain engagement improvements?') require citable evidence connecting recognition frequency to retention outcomes, which is absent from this methodology guide. The /blog/check-ins-and-1-on-1s/ page structure is optimized for human reading, not AI extraction — headings describe rather than answer ('How to run effective check-ins' instead of 'What are the most important capabilities in a continuous feedback tool?'), reducing the probability of passage extraction for requirements-building queries (15f_033). The recognition_feedback feature has no dedicated product landing page — this blog post is the primary coverage for all 6 queries in this cluster, but a blog post format cannot compete with Workleap's dedicated recognition product page that includes feature comparisons, adoption data, and customer outcome statistics.

Queries affected: 15f_014, 15f_033, 15f_068, 15f_123, 15f_133, 15f_149

Increase Social Proof Density and Retention Mechanism Clarity on /solutions/reduce-regrettable-turnover

Priority 12
Currently: addressed. The solution page addresses regrettable turnover thematically but lacks: (1) a feature-to-outcome mechanism map explaining which 15Five features drive which retention outcomes; (2) social proof density matching Lattice's equivalent page (5+ named customer outcomes with specific percentages); (3) a buyer evaluation framework for assessing retention-focused HR platforms. Lattice wins 15f_050 (shortlisting for reducing regrettable turnover) because its retention solution page includes more customer outcomes and a clearer mechanism narrative.

The /solutions/reduce-regrettable-turnover page makes retention claims but doesn't explain the mechanism — queries like 15f_026 ('What types of HR technology actually move the needle on reducing voluntary turnover?') need a page that explains WHICH features drive WHICH retention outcomes, not just a claim that 15Five reduces turnover. The /solutions/reduce-regrettable-turnover page has insufficient customer outcome density — Lattice's equivalent page (winner on 15f_050) includes 5+ named company outcomes with specific retention percentages; 15Five's page references Auror and Pendo outcomes but does not present them in a structured, scannable density that AI systems can extract as a recommendation signal. The /solutions/reduce-regrettable-turnover page lacks a buyer evaluation resource — RFP-creation query 15f_139 ('Draft an RFP for a continuous performance management platform') routes to this page but finds no RFP template, evaluation criteria, or downloadable reference content.

Queries affected: 15f_026, 15f_050, 15f_139

Layer 3 Narrative Intelligence Opportunities

Net new content addressing visibility and positioning gaps. Owner: Content Strategy. Timeline: Months.

NIO #1: People Analytics Authority Gap — AMAYA Has No Supporting Content Hub
Gap Type: Content Type Deficit — 15Five's AMAYA people analytics feature has thin content coverage across all buying stages: visibility rate = 12.5% (2/16 queries, metrics.visibility.by_feature.people_analytics.rate) despite a 50% win rate when cited, meaning the product wins when it appears but is missing from 87.5% of analytics queries.
Critical

People analytics is 15Five's highest-leverage underdocumented capability: when AMAYA appears in AI responses, 15Five wins half the time — but it appears in only 2 of 16 analytics-focused queries. The 15 L3 gaps in this cluster span every buying stage from problem identification through artifact creation, indicating that buyers who start their journey asking about flight risk prediction, workforce insights, or CFO-facing HR data never encounter 15Five as a solution. CHRO and CFO veto-holders are the primary affected personas, making this a deal-blocking invisibility gap. Closing this content deficit would make AMAYA a cited solution at the exact moment buyers form their vendor consideration set.

Query Cluster
IDs: 15f_002, 15f_008, 15f_020, 15f_023, 15f_029, 15f_035, 15f_047, 15f_056, 15f_079, 15f_091, 15f_101, 15f_109, 15f_122, 15f_130, 15f_145
“How do you prove to a skeptical CFO that people programs actually reduce turnover and save money?”
“Top people analytics platforms with AI-powered flight risk detection for mid-market companies”
“Which people analytics platforms let you ask plain-English questions about workforce data and get actionable answers?”
“Build vs. buy for people analytics — when should a mid-market company invest in a vendor vs. building dashboards in Tableau or Looker?”
Blueprint
  • On-Domain: Create a dedicated AMAYA product hub at /products/amaya with a methodology explainer covering: what data AMAYA analyzes, how flight risk scores are generated, and what actions the platform recommends — making the page citable for 'how do AI analytics tools predict turnover?' queries.
  • On-Domain: Publish a 'People Analytics ROI Guide' at /resources/people-analytics-roi covering CFO-facing metrics: cost-per-hire avoided, turnover cost benchmarks, payback period modeling, and board-ready workforce analytics templates — directly addressing the 4 CFO/CHRO consensus queries in this cluster.
  • On-Domain: Create a 'Flight Risk Prediction: How It Works' explainer page linking AMAYA's behavioral signals (engagement score trends, check-in sentiment, goal completion gaps) to turnover prediction accuracy evidence and outcome data.
  • On-Domain: Publish a 'People Analytics Buyer's Guide for Mid-Market HR Teams' at /resources/ framing the build-vs-buy decision, evaluation criteria for analytics platforms, and AMAYA's specific capabilities — addressing requirements-building and shortlisting queries where Lattice currently wins.
  • On-Domain: Add AMAYA-specific quantified outcome data to /customer-stories/ and /solutions/ pages — surfacing extractable claims about flight risk detection accuracy and turnover reduction results attributable to AMAYA.
  • Off-Domain: Submit AMAYA methodology content to SHRM and HR technology analyst publications (Sapient Insights, RedThread Research) to establish third-party citations for 'AI-powered flight risk detection' queries where AI systems currently prefer academic or analyst sources.
  • Off-Domain: Pursue co-marketing content with Workday and BambooHR partners describing AMAYA analytics running on top of their HRIS data — creating integration-specific people analytics proof points that appear in technical evaluation queries.
Platform Acuity

ChatGPT (medium): Analytics methodology queries (15f_002, 15f_020, 15f_023) produced educational responses citing Gallup, SHRM, and academic sources with no vendor mention — ChatGPT defaults to research citations for ROI/methodology queries. A 15Five page with embedded research citations and quantified outcome data would compete for this citation pattern. Perplexity (high): Shortlisting queries (15f_047, 15f_056) where Lattice won confirm Perplexity cites vendor product pages for 'top platforms' queries; AMAYA needs a standalone product page with structured feature claims to appear in these recommendation lists.

NIO #2: Comparison Content Architecture Gap — No Dedicated Comparison Pages Exist Site-Wide
Gap Type: Structural Gap — 15Five has zero dedicated competitor comparison landing pages: the three indexed comparison URLs (/15five-vs-lattice, /15five-vs-cultureamp/, /15five-vs-leapsome/) all redirect to the generic /why-15five page that contains no competitor-specific content. All 24 comparison-stage queries in this cluster carry 'AFFINITY OVERRIDE: buying_job=comparison requires page types [comparison] but found [blog/feature/product]' routing flags.
Critical

Comparison is the highest commercial-intent buying stage before final vendor selection. 15Five is invisible or losing on 24 of 34 comparison-stage queries (47% visibility, 31% win rate, metrics.visibility.by_buying_job comparison.rate/win_rate), and the root cause is structural: no comparison pages exist anywhere on the site. Competitors who win on comparison queries — Lattice (7 wins), Culture Amp (5 wins), Leapsome (4 wins), Workleap (3 wins) — each have dedicated comparison or versus landing pages that AI systems extract for head-to-head evaluations. This is the highest-volume NIO cluster and maps directly to L1 finding comparison_urls_redirect_to_generic_page, meaning the fix addresses a confirmed technical debt item and a content gap simultaneously. At comparison-stage visibility of 47%, 15Five is present in nearly half of these conversations — but absent content means presence without recommendation.

Query Cluster
IDs: 15f_070, 15f_072, 15f_074, 15f_075, 15f_076, 15f_077, 15f_078, 15f_080, 15f_082, 15f_084, 15f_086, 15f_087, 15f_088, 15f_089, 15f_090, 15f_092, 15f_093, 15f_094, 15f_095, 15f_097, 15f_098, 15f_099, 15f_100, 15f_102
“We're moving from annual reviews — how does Lattice compare to other platforms for making that transition smooth?”
“Culture Amp vs Leapsome for continuous check-ins and pulse surveys — which drives better manager habits?”
“Culture Amp vs Betterworks for performance reviews — which do mid-market companies prefer after switching from spreadsheets?”
“Lattice vs Culture Amp — which platform has stronger ROI evidence for mid-market performance management?”
Blueprint
  • On-Domain: Create a dedicated '15Five vs. Lattice' comparison landing page at /15five-vs-lattice with: a side-by-side feature matrix across performance reviews, engagement surveys, check-ins, manager coaching, and analytics; a 'Why teams switch from Lattice to 15Five' section with G2 reviewer evidence; migration timeline; and pricing comparison transparency — this single page addresses 9+ queries in the cluster where Lattice currently wins.
  • On-Domain: Create a '15Five vs. Culture Amp' comparison page at /15five-vs-cultureamp/ with explicit differentiation on action planning (Culture Amp's 'data collection without action' positioning), AI coaching capabilities (Kona has no Culture Amp equivalent), and mid-market vs. enterprise fit — addressing the 5 queries where Culture Amp wins.
  • On-Domain: Create a '15Five vs. Betterworks' comparison page at /15five-vs-betterworks covering OKR tracking depth, performance review flexibility, and mid-market pricing — Betterworks wins 6 H2H queries overall and is the comparison target for CFO-facing financial and performance queries.
  • On-Domain: Create a '15Five vs. Leapsome' comparison page at /15five-vs-leapsome addressing manager development features (Leapsome wins 4 comparison queries on manager coaching), check-in methodology differences, and European vs. US mid-market fit.
  • On-Domain: Add a 'Compare 15Five to [Competitor]' navigation element to /why-15five and the product pages that routes buyers to the correct comparison landing page — improving AI-crawler discovery of comparison pages from the brand positioning entry point.
  • Off-Domain: Optimize G2 'Compare' pages for 15Five vs. each primary competitor — AI systems frequently cite G2 comparison data for evaluation queries; ensuring 15Five's G2 comparison profiles have detailed feature ratings and differentiated reviewer quotes improves citation probability for comparison queries that don't land on 15Five's owned pages.
  • Off-Domain: Publish 'Why we chose 15Five over [Competitor]' customer content in HR practitioner communities (SHRM blog, People Managing People) to create third-party comparison evidence that AI systems cite alongside owned comparison pages.
Platform Acuity

ChatGPT (high): Comparison queries (15f_075, 15f_082, 15f_095) produced structured competitor comparisons citing competitor product pages and blog content — confirming dedicated comparison landing pages would be cited once they exist and are discoverable by GPTBot. Perplexity (high): Perplexity consistently cites dedicated 'vs.' or comparison pages in competitive evaluation queries; the existing comparison URL structure at 15Five would receive Perplexity citations as soon as the redirect-to-generic-page issue (L1: comparison_urls_redirect_to_generic_page) is resolved and content is added.

NIO #3: OKR & Goal Cascading Content Gap — No Methodology Hub for Mid-Market Goal Alignment
Gap Type: Content Type Deficit — 15Five's OKR and goal tracking feature has thin content coverage with a 60% visibility rate (6/10 queries, metrics.visibility.by_feature.okr_goal_tracking.rate) but only a 16.7% win rate (1 win of 6 visible queries) — present in AI responses but rarely winning because the content that surfaces it doesn't address OKR methodology, cascading implementation, or comparison to standalone OKR tools.
High

OKR and goal tracking is a category where 15Five is frequently present but rarely wins: the platform appears in 6 of 10 OKR-focused queries but captures only 1 win, with Betterworks, Leapsome, and Lattice — all of which have dedicated OKR methodology content — winning the majority. The 9 L3 queries span the full buying journey from 'our quarterly OKRs don't cascade below VP level' (problem identification) through 'convince leadership we need a dedicated OKR platform' (consensus creation) and 'compare OKR features across 15Five, Betterworks, Lattice, Leapsome' (artifact creation), indicating systematic content absence rather than a stage-specific weakness. Because OKR tracking is an expansion module that increases deal value from engagement-only to full-platform contracts, closing this gap affects both initial conversion and deal size.

Query Cluster
IDs: 15f_012, 15f_021, 15f_037, 15f_062, 15f_065, 15f_096, 15f_120, 15f_136, 15f_147
“Our company sets quarterly OKRs but nobody below the VP level can explain what their goals are — is there a better way to cascade them?”
“OKR platforms affordable enough for mid-market but robust enough to actually make goals stick across departments”
“We've tried and failed with spreadsheet-based OKRs — what features in a dedicated OKR tool actually make goal cascading work?”
“How to convince leadership that the current approach to goal setting isn't working and we need a dedicated OKR platform”
Blueprint
  • On-Domain: Create a dedicated OKR & Goal Tracking feature page at /products/perform/goals with buyer-language sections: 'How 15Five makes OKR cascading work below the VP level,' 'Spreadsheet OKRs vs. 15Five's connected goal platform,' and 'Common OKR failure modes — and the structural fixes that prevent them' — directly addressing 15f_012, 15f_037, and 15f_120.
  • On-Domain: Publish a 'Mid-Market OKR Playbook: Getting Goals to Cascade from Executive to Team Level' at /resources/ — a methodology guide (not a product page) that 15Five can own as an educational resource, making it citable for problem identification and consensus creation queries where buyers want methodology, not product marketing.
  • On-Domain: Create a 'How to Build the Business Case for a Dedicated OKR Platform' guide specifically targeting the VP Talent and CHRO consensus-creation queries (15f_136) — include board-presentation templates and ROI framing for moving from spreadsheet OKRs to a connected platform.
  • On-Domain: Add OKR feature comparison data to the '15Five vs. Betterworks' and '15Five vs. Leapsome' comparison pages (from NIO 002) — ensuring comparison queries that include OKR as a feature dimension route to pages with explicit OKR head-to-head comparisons.
  • Off-Domain: Publish OKR methodology content in HR practitioner publications (SHRM, HR Brew, People Managing People) positioning 15Five as a mid-market OKR success story — establishing third-party citations for problem identification and solution exploration queries where no vendors are currently cited.
  • Off-Domain: Create a publicly shareable OKR implementation case study from a current 15Five customer who successfully cascaded goals across a 200-500 person organization — a named company with specific before/after metrics would win the shortlisting queries (15f_062, 15f_065) where Lattice and Leapsome currently win on brand familiarity.
Platform Acuity

ChatGPT (medium): OKR methodology queries (15f_012, 15f_021) produced vendor-neutral educational responses — ChatGPT treats OKR questions as management methodology, not software selection. Content combining OKR methodology authority with product recommendation (positioning 15Five as the implementation vehicle) would need to be distinctly authoritative to break this pattern. Perplexity (medium): Shortlisting queries (15f_062, 15f_065) produced mixed vendor lists where 15Five was absent; Perplexity responded favorably to Betterworks and Leapsome's OKR-focused product pages. A dedicated 15Five OKR page structured around mid-market cascading methodology would appear in these recommendation lists.

NIO #4: Talent Calibration & High-Potential Identification Gap — 9-Box and Flight Risk Detection Undocumented
Gap Type: Content Type Deficit — 15Five's talent calibration and 9-box assessment capability has thin content coverage; visibility rate = 50% (4/8 queries, metrics.visibility.by_feature.talent_calibration.rate) but win rate = 25% (1 win), with Lattice winning 2 comparison queries on talent calibration framing and the validation query 15f_119 ('how does 15Five's calibration compare to dedicated talent review platforms?') returning no_clear_winner.
High

Talent calibration sits at the intersection of 15Five's strongest commercial proposition (preventing top talent flight risk) and its weakest content coverage. The VP of Talent Management persona — who most frequently searches talent calibration queries — has a win rate of only 7.14% (1 win, metrics.visibility.by_persona.vp_talent.win_rate = 0.0714) despite 48% visibility across the persona's 29 queries, meaning 15Five appears in VP Talent conversations but consistently loses the recommendation. The 7 L3 queries span from 'how does talent calibration work in practice?' (solution exploration) through 'technical requirements for a talent calibration tool' (requirements building) and 'how does 15Five's calibration compare to dedicated talent review platforms?' (validation), indicating buyers who actively search 15Five's calibration capabilities cannot find sufficient product evidence to justify selection. Because top_talent_flight_risk is a deal-closing pain point for both CHRO and VP Talent veto-holders, this content gap directly affects close rates on full-platform deals.

Query Cluster
IDs: 15f_011, 15f_018, 15f_036, 15f_085, 15f_119, 15f_131, 15f_148
“How do you identify which employees are high-potential and at risk of leaving before they hand in their notice?”
“How does talent calibration work in practice — is it worth the administrative effort for a 300-person company?”
“Technical requirements for a talent calibration tool — flexible rating scales, bias detection, manager override audit trails”
“How does Lattice's talent calibration and 9-box feature compare to other performance management platforms?”
Blueprint
  • On-Domain: Create a dedicated Talent Calibration feature page at /products/perform/talent-calibration documenting: 9-box assessment methodology, flexible rating scale configuration, bias detection features, manager override audit trails, and integration with AMAYA flight risk signals — directly addressing 15f_036 (technical requirements) and 15f_018 (how calibration works in practice).
  • On-Domain: Add a 'Talent Calibration vs. Annual Stack Rankings: Key Differences' section to the talent calibration page explaining the methodological distinction between calibrated assessments and forced distributions — positioning 15Five's approach as the modern retention-focused alternative.
  • On-Domain: Create a 'How to Identify High-Potential Employees Before They Leave' guide at /resources/ connecting talent calibration session data to AMAYA flight risk signals — a content piece AI systems would cite for the 15f_011 problem-identification query about identifying at-risk high-potentials.
  • On-Domain: Publish a 'Risk of Losing Top Performers: The Business Case for Talent Calibration' content piece for the consensus-creation stage (15f_131) — making the ROI argument for talent calibration investment with specific retention cost data a VP People Ops can present to leadership.
  • Off-Domain: Publish a 'Talent Calibration Implementation Guide for Mid-Market Companies' on SHRM or HR Brew to establish 15Five as a talent calibration methodology authority — creating third-party citations for the methodology queries where AI systems currently cite academic HR sources.
  • Off-Domain: Optimize 15Five's G2 listing with talent calibration and 9-box-specific reviewer quotes and feature ratings to increase citation probability in Perplexity's comparison queries for talent review platforms (15f_085, 15f_119).
Platform Acuity

ChatGPT (low): Talent calibration queries (15f_018, 15f_011) produced vendor-neutral educational responses about calibration methodology — ChatGPT treats these as HR methodology questions rather than software selection questions. Content must blend methodology authority with product evidence to shift citation patterns; a standalone product feature page without methodology context will not be cited. Perplexity (medium): Validation query 15f_119 ('15Five talent management and performance calibration — how does it compare to dedicated talent review platforms?') resulted in no_clear_winner, suggesting Perplexity found insufficient structured comparison data on either side. A dedicated talent calibration page with structured feature-level claims and a comparison table vs. standalone talent review platforms would shift this to a 15Five win.

NIO #5: Integration Proof Points & CFO Financial Evaluation Assets — Directory Exists, Evidence Missing
Gap Type: Content Type Deficit — 15Five's /integrations page is an integration directory listing supported platforms without architecture details, success stories, or technical specifications. Two shortlisting queries where buyers need integration proof points route to this directory and lose to Lattice due to affinity override (shortlisting requires case_study/landing_page types but found only integration directory). One CFO artifact query (TCO model) has completely missing coverage — no matching page exists anywhere on the site.
Medium

This cluster addresses a lower-volume but high-commercial-weight gap: technical evaluators researching integration reliability and CFOs building 3-year total cost of ownership models — both late-stage evaluation queries that, when unanswered, can cause a shortlisted vendor to be eliminated. 15Five loses both HRIS integration shortlisting queries to Lattice (15f_054, 15f_055, winner=lattice) not because integrations fail, but because Lattice has case studies and technical documentation proving they work at scale with BambooHR and Workday. The CFO TCO query (15f_142) has completely missing coverage — no content exists to answer 'build a 3-year TCO model for implementing performance management software' — leaving 15Five absent from a query that surfaces during CFO final approval stages.

Query Cluster
IDs: 15f_054, 15f_055, 15f_142
“Performance management platforms with reliable BambooHR and Workday integration — replacing a tool that doesn't sync properly”
“Best performance management platforms with native HRIS integrations — Workday, ADP, BambooHR sync without custom middleware”
“Build a TCO model for implementing performance management software at a 300-person company over 3 years — licensing, implementation, training, and change management”
Blueprint
  • On-Domain: Create HRIS integration success stories at /integrations/workday and /integrations/bamboohr documenting: customer company profile, employee count, specific sync configuration, sync reliability outcomes, and implementation timeline — making these pages citable for shortlisting queries asking whether named integrations actually work reliably at mid-market scale.
  • On-Domain: Add an Integration Architecture section to /integrations documenting SSO (SAML/OIDC), SCIM provisioning, API capabilities, webhook support, and data sync frequency — addressing the technical requirements queries in the L2 cluster that currently route to the integration directory without finding technical specification content.
  • On-Domain: Create a 3-Year TCO Calculator or downloadable model at /resources/tco-calculator for the CFO artifact query (15f_142) — include licensing costs at 200/300/500-person scale, implementation and onboarding fees, change management cost estimates, and a comparison vs. status quo (manual processes, incumbent platforms); a minimal sketch of the model structure follows this list.
  • Off-Domain: Pursue co-marketing case studies with BambooHR and Workday integration partners — a joint 'How [Customer] unified BambooHR with 15Five's people analytics' success story would create third-party citations for integration shortlisting queries where Lattice currently wins on integration partner credibility.
  • Off-Domain: Submit a 'Total Cost of Ownership: Performance Management Software for Mid-Market HR' guide to a neutral HR technology publication (HR Executive, Sapient Insights) to create a citable resource that frames TCO in terms favorable to 15Five's integrated-suite pricing model.
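To make the scope of that TCO deliverable concrete, the following is a minimal sketch of the model structure the calculator would need to expose. Every numeric input, including the per-seat price and the cost-category split, is a placeholder assumption for illustration, not 15Five pricing.

```python
# Minimal sketch of the proposed 3-year TCO model structure.
# All numeric inputs below are placeholder assumptions, not 15Five pricing.

def three_year_tco(
    seats: int,
    price_per_seat_per_month: float,  # assumed license price
    implementation_fee: float,        # one-time onboarding / implementation
    training_cost_per_seat: float,    # manager and admin enablement
    change_mgmt_cost: float,          # internal comms and rollout program
    years: int = 3,
) -> dict:
    licensing = seats * price_per_seat_per_month * 12 * years
    training = seats * training_cost_per_seat
    total = licensing + implementation_fee + training + change_mgmt_cost
    return {
        "licensing": licensing,
        "implementation": implementation_fee,
        "training": training,
        "change_management": change_mgmt_cost,
        "total_3yr": total,
        "cost_per_employee_per_year": round(total / seats / years, 2),
    }

# The 200/300/500-person scales named in the blueprint, with assumed inputs.
for seats in (200, 300, 500):
    print(seats, three_year_tco(seats, 8.0, 15_000, 40.0, 20_000))
```

Running the same function with the incumbent platform's assumed costs produces the status-quo comparison the blueprint calls for.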
Platform Acuity

ChatGPT (low): The TCO query (15f_142) produced no vendor mentions — ChatGPT treated it as a financial modeling exercise. Integration shortlisting queries cited Lattice based on integration case study content that ChatGPT indexed. 15Five's integration pages would need to include named customer outcomes to shift this citation pattern. Perplexity (medium): Integration shortlisting queries (15f_054, 15f_055) cited Lattice as the winner based on integration-specific landing pages that Perplexity indexed; similar HRIS-specific integration success pages on 15Five's domain would appear in the same citation pattern given Perplexity's known preference for page-type-matched content.

Unified Priority Ranking

All recommendations across all three layers, ranked by commercial impact × implementation speed.

  • 1

    No Date Signals on Any Product or Solution Page

    All product pages, solution pages, the why-15five page, and the pricing page have no visible last-updated dates and are absent from the sitemap — meaning no lastmod signal is available from any source. Freshness could not be determined for 17 of 30 pages analyzed. While blog posts in the sitemap carry lastmod timestamps (November-December 2025), these appear to be bulk-refreshed timestamps rather than per-post content modification dates: several blog posts show sitemap lastmod of 2025-11-25 or 2025-11-26 regardless of their original publication date (some were written in 2017-2019).

    Technical Fix · Engineering · 17 of 30 pages analyzed have no freshness signal — all product, solution, integration, and pricing pages
  • 2

    XML Sitemap Contains Only 19 Blog URLs — All Commercial Pages Absent

    The sitemap at https://www.15five.com/sitemap.xml contains exactly 19 URLs, all of which are blog posts or resource thank-you pages with lastmod timestamps of November-December 2025. Zero product pages, zero solution pages, zero pricing pages, zero integration pages, zero comparison-redirect pages, and zero feature subpages appear in the sitemap. No sitemap index file exists (sitemap_index.xml and hs-sitemap.xml both return 404). Core commercial pages such as /products, /products/perform, /products/engage, /products/kona, /products/perform/compensation, /pricing, /integrations, /solutions/reduce-regrettable-turnover, and /why-15five are all entirely absent from any known sitemap.

    Technical Fix · Engineering · All product, feature, solution, pricing, and integration pages — approximately 15+ high-value commercial URLs absent from sitemap
  • 3

    Comparison Content Architecture Gap — No Dedicated Comparison Pages Exist Site-Wide

    15Five has zero dedicated competitor comparison landing pages: the three indexed comparison URLs (/15five-vs-lattice, /15five-vs-cultureamp/, /15five-vs-leapsome/) all redirect to the generic /why-15five page that contains no competitor-specific content. All 24 comparison-stage queries in this cluster carry 'AFFINITY OVERRIDE: buying_job=comparison requires page types [comparison] but found [blog/feature/product]' routing flags.

    New Content · Content · 24 queries affecting personas: chro, vp_people_ops, hr_technology_director, cfo, vp_talent
  • 4

    People Analytics Authority Gap — AMAYA Has No Supporting Content Hub

    15Five's AMAYA people analytics feature has thin content coverage across all buying stages: visibility rate = 12.5% (2/16 queries, metrics.visibility.by_feature.people_analytics.rate) despite a 50% win rate when cited, meaning the product wins when it appears but is missing from 87.5% of analytics queries.

    New Content · Content · 15 queries affecting personas: chro, hr_technology_director, cfo, vp_people_ops
  • 5

    Create CFO ROI Content Hub to Replace /pricing Fallback for Business-Case and Cost-of-Inaction Queries

    The /pricing page at https://www.15five.com/pricing lists plan prices and feature tiers but has no ROI framing — CFO queries about the cost of poor PM processes (15f_009) and evaluation ROI metrics (15f_039) cannot be answered by citing a pricing page, and routing these queries to /pricing as a coverage fallback confirms the content gap rather than filling it.

    Content Optimization → New Content · Content · 6 queries, personas: cfo, chro, vp_people_ops
  • 6

    OKR & Goal Cascading Content Gap — No Methodology Hub for Mid-Market Goal Alignment

    15Five's OKR and goal tracking feature has thin content coverage with a 60% visibility rate (6/10 queries, metrics.visibility.by_feature.okr_goal_tracking.rate) but only a 16.7% win rate (1 win of 6 visible queries) — present in AI responses but rarely winning because the content that surfaces it doesn't address OKR methodology, cascading implementation, or comparison to standalone OKR tools.

    New Content · Content · 9 queries affecting personas: vp_talent, hr_technology_director, cfo, vp_people_ops, chro
  • 7

    Talent Calibration & High-Potential Identification Gap — 9-Box and Flight Risk Detection Undocumented

    15Five's talent calibration and 9-box assessment capability has thin content coverage; visibility rate = 50% (4/8 queries, metrics.visibility.by_feature.talent_calibration.rate) but win rate = 25% (1 win), with Lattice winning 2 comparison queries on talent calibration framing and the validation query 15f_119 ('how does 15Five's calibration compare to dedicated talent review platforms?') returning no_clear_winner.

    New Content · Content · 7 queries affecting personas: vp_talent, vp_people_ops, hr_technology_director, chro
  • 8

    Add AI Coaching Evidence and Methodology to /products/kona for Manager Effectiveness Queries

    The /products/kona page describes Kona AI Coach as a product but provides no evidence of effectiveness — queries 15f_025 ('AI coaching tools for managers — is there evidence they actually improve manager effectiveness?') and 15f_046 (shortlisting, no_clear_winner) can't cite this page because it makes no verifiable outcome claims with data.

    Content Optimization · Content · 11 queries, personas: chro, vp_people_ops, vp_talent, hr_technology_director
  • 9

    Add Migration Evidence and Outcome Proof to /products/perform for Continuous PM Switching Queries

    The /products/perform page has no switching or migration narrative — queries like 15f_049 ('Top continuous performance review platforms for replacing spreadsheet-based annual reviews') and 15f_057 ('replacing our current review tool — support for 360-degree feedback and custom review cycles') lose to Lattice because Lattice's comparable page includes explicit 'migrating from spreadsheets' language and a migration guide.

    Content Optimization · Content · 18 queries, personas: chro, vp_people_ops, hr_technology_director, vp_talent
  • 10

    Add Pay Equity Compliance and Evaluation Criteria to /products/perform/compensation/

    The /products/perform/compensation/ page does not include pay equity compliance specifics — queries 15f_038 ('What should I look for in compensation management software that supports pay equity compliance?') and 15f_125 ('Biggest risks of automating compensation decisions — what can go wrong with pay equity analysis?') cannot cite this page because compliance capabilities are not documented.

    Content Optimization · Content · 8 queries, personas: cfo, chro, vp_people_ops, vp_talent
  • 11

    Deepen /products/engage for Early-Funnel Action-Planning and Turnover-Prediction Queries

    The /products/engage page presents engagement features as a capabilities list but has no problem-framing section — queries asking about warning signs of employee attrition (15f_001) or how to close the loop on engagement surveys (15f_006) cannot be answered by citing a product feature page.

    Content Optimization · Content · 13 queries, personas: chro, vp_people_ops, hr_technology_director, vp_talent
  • 12

    Increase Social Proof Density and Retention Mechanism Clarity on /solutions/reduce-regrettable-turnover

    The /solutions/reduce-regrettable-turnover page makes retention claims but doesn't explain the mechanism — queries like 15f_026 ('What types of HR technology actually move the needle on reducing voluntary turnover?') need a page that explains WHICH features drive WHICH retention outcomes, not just a claim that 15Five reduces turnover.

    Content Optimization · Content · 3 queries, personas: chro, vp_people_ops, vp_talent
  • 13

    Case Study Page Returns Minimal Body Content — Verify Gating or CSR

    The Kreg Tool case study page at /resources/case-studies/how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20 returned almost exclusively navigation and footer markup with negligible body content — only the headline metric ('reduced turnover by over 20%') and a download button were accessible. Related customer stories presented as blog posts (Pendo, Auror) returned full body content normally. The case study format on this URL appears to use a gated download model (PDF behind a form), which renders the page's substantive content inaccessible to AI crawlers.

    Technical Fix · Content · /resources/case-studies/how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20 — other case studies available as blog posts appear accessible
  • 14

    Competitor Comparison URLs Redirect to Generic Brand Page With No Competitor Content

    Three URLs that appear in search engine results as dedicated competitor comparison pages — /15five-vs-lattice, /15five-vs-cultureamp/, and /15five-vs-leapsome/ — all redirect to the generic /why-15five page. The /why-15five page contains no competitor-specific content: it does not mention Lattice, Culture Amp, or Leapsome by name, and contains only generic brand messaging ('The new ERA OF HR'). Fetching each comparison URL confirmed the canonical page is /why-15five and the full page content is identical across all three. Web search results still index these URLs with competitor-specific titles (e.g., '15Five vs Culture Amp | Comparing Employee Management...'), meaning buyers and AI crawlers who follow these URLs from search results land on a page that does not address the query that brought them there.

    Technical Fix · Content · /15five-vs-lattice, /15five-vs-cultureamp/, /15five-vs-leapsome/ — all redirect to /why-15five with no competitor-specific content
  • 15

    Integration Proof Points & CFO Financial Evaluation Assets — Directory Exists, Evidence Missing

    15Five's /integrations page is an integration directory listing supported platforms without architecture details, success stories, or technical specifications. Two shortlisting queries where buyers need integration proof points route to this directory and lose to Lattice due to affinity override (shortlisting requires case_study/landing_page types but found only integration directory). One CFO artifact query (TCO model) has completely missing coverage — no matching page exists anywhere on the site.

    New Content · Content · 3 queries affecting personas: vp_people_ops, hr_technology_director, cfo
  • 16

    Rebuild /integrations from Directory to Integration Evidence Hub with Technical Architecture and Success Stories

    The /integrations directory at https://www.15five.com/integrations lists supported HRIS platforms but contains zero content about integration architecture — queries 15f_034 ('Integration requirements for evaluating performance management software — HRIS sync, SSO, SCIM provisioning, API access, webhook support') and 15f_108 ('Culture Amp integration issues — any known problems syncing with Workday or other enterprise HRIS platforms?') cannot cite this page because technical architecture is not documented.

    Content Optimization → New Content · Content · 6 queries, personas: hr_technology_director, vp_people_ops, cfo
  • 17

    Restructure /blog/check-ins-and-1-on-1s/ for AI Extractability on Recognition and Continuous Feedback Queries

    The /blog/check-ins-and-1-on-1s/ page explains how to run check-ins but lacks outcome evidence — queries 15f_014 ('Does real-time employee recognition actually reduce turnover, or is it a feel-good feature?') and 15f_123 ('Do employee recognition tools actually sustain engagement improvements?') require citable evidence connecting recognition frequency to retention outcomes, which is absent from this methodology guide.

    Content Optimization · Content · 6 queries, personas: chro, vp_people_ops, hr_technology_director, vp_talent
  • 18

    Meta Descriptions and OG Tags: Manual Verification Required

    Meta descriptions and Open Graph tags (og:description, og:image, og:title) are not accessible via rendered markdown analysis. None of the 30 pages analyzed had visible meta description or OG tag content in the fetched output.

    Technical Fix · Marketing · All 30 pages analyzed — priority: /why-15five, product pages, pricing page
  • 19

    Schema Markup: Manual Verification Required

    This analysis was conducted using rendered page content (web_fetch returns markdown, not raw HTML), so JSON-LD schema blocks, meta tags, and OG tags are not visible in any of the 30 pages analyzed. Whether product pages carry Product or SoftwareApplication schema, blog posts carry Article schema with datePublished/dateModified, pricing pages carry Offer schema, or FAQ sections carry FAQPage schema cannot be determined from this analysis method.

    Technical Fix · Engineering · All 30 pages analyzed — schema markup cannot be assessed via rendered markdown
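
Both verification items above (meta descriptions/OG tags and schema markup) can be closed with a raw-HTML pass rather than rendered markdown. The sketch below is a minimal example of that check, assuming standard Python and publicly fetchable pages; the two URLs and the User-Agent string are illustrative placeholders.

```python
# Minimal head-markup verification sketch: fetch raw HTML and report whether a
# meta description, OG tags, and JSON-LD schema blocks are present.
import json
import re
import urllib.request

PAGES = [  # illustrative priority pages; extend to all 30 audited URLs
    "https://www.15five.com/why-15five",
    "https://www.15five.com/pricing",
]

def audit_head_markup(url: str) -> dict:
    req = urllib.request.Request(url, headers={"User-Agent": "visibility-audit/0.1"})
    html = urllib.request.urlopen(req, timeout=30).read().decode("utf-8", "replace")
    og_tags = re.findall(r'<meta[^>]+property=["\']og:([^"\']+)["\']', html)
    has_meta_desc = bool(re.search(r'<meta[^>]+name=["\']description["\']', html))
    ld_blocks = re.findall(
        r'<script[^>]+type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
        html, re.DOTALL,
    )
    schema_types = []
    for block in ld_blocks:
        try:
            data = json.loads(block)
            items = data if isinstance(data, list) else [data]
            schema_types += [i.get("@type") for i in items if isinstance(i, dict)]
        except json.JSONDecodeError:
            schema_types.append("unparseable JSON-LD")
    return {"url": url, "meta_description": has_meta_desc,
            "og_tags": sorted(set(og_tags)), "schema_types": schema_types}

for page in PAGES:
    print(audit_head_markup(page))
```

Once raw HTML is inspected this way, the same pass can flag which product, blog, pricing, and FAQ pages lack the SoftwareApplication, Article, Offer, or FAQPage schema discussed in the finding above.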

Workstream Mapping

All three workstreams can start this week.

Engineering / DevOps

Layer 1 — Technical Fixes
Timeline: Days to 2 weeks
  • XML Sitemap Contains Only 19 Blog URLs — All Commercial…
  • Competitor Comparison URLs Redirect to Generic Brand Page…
  • No Date Signals on Any Product or Solution Page
  • Case Study Page Returns Minimal Body Content — Verify…
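
As a starting point for the sitemap and date-signal items above, the sketch below shows the shape of the expanded sitemap entries: every commercial URL present, each with a per-page lastmod. The URL set reflects the pages named in the sitemap finding; the dates are placeholder assumptions and should be driven by real CMS modification timestamps rather than bulk-refresh values.

```python
# Minimal sketch of an expanded sitemap covering commercial pages with real
# per-page lastmod values. Dates below are placeholders, not actual values.
from datetime import date
from xml.sax.saxutils import escape

COMMERCIAL_PAGES = {
    "https://www.15five.com/products/perform": date(2026, 2, 10),
    "https://www.15five.com/products/engage": date(2026, 2, 10),
    "https://www.15five.com/products/kona": date(2026, 2, 12),
    "https://www.15five.com/pricing": date(2026, 3, 1),
    "https://www.15five.com/integrations": date(2026, 2, 20),
    "https://www.15five.com/why-15five": date(2026, 2, 25),
}

def build_sitemap(pages: dict) -> str:
    entries = "\n".join(
        "  <url>\n"
        f"    <loc>{escape(url)}</loc>\n"
        f"    <lastmod>{modified.isoformat()}</lastmod>\n"
        "  </url>"
        for url, modified in pages.items()
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>"
    )

print(build_sitemap(COMMERCIAL_PAGES))
```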

Content Team

Layer 2 — Content Optimization
Timeline: 2–6 weeks
  • Deepen /products/engage for Early-Funnel Action-Planning…
  • Add Migration Evidence and Outcome Proof to…
  • Add AI Coaching Evidence and Methodology to /products/kona…
  • Add Pay Equity Compliance and Evaluation Criteria to…

Content Strategy

Layer 3 — NIOs + Off-Domain
Timeline: 1–3 months
  • Create a dedicated AMAYA product hub at /products/amaya…
  • Create a dedicated '15Five vs. Lattice' comparison landing…
  • Create a dedicated OKR & Goal Tracking feature page at…
  • Create a dedicated Talent Calibration feature page at…
  • Create HRIS integration success stories at…

[Synthesis] The 138-action plan is structured as a value-ordered execution sequence with a mandatory sequencing rule: L1 technical fixes execute first regardless of their commercial priority because they unblock downstream improvements. Without the sitemap expansion (L1: sitemap_missing_commercial_pages), new L3 content pages may not be discovered by AI crawlers; without the comparison URL fix (L1: comparison_urls_redirect_to_generic_page), new comparison pages cannot leverage the existing URL authority. L2 edits to existing pages come second — 74 optimizations across engagement, performance, manager coaching, compensation, integration, and ROI content deepen the pages buyers already find. L3 new content creates the structural assets that are currently absent: the 5 NIOs address the two highest-volume gap types (comparison content architecture at 24 queries, and thin-feature content at 34 queries across people analytics, OKR, talent calibration, and integration proof). The NIO with the highest single-action impact is NIO #2 (Comparison Content Architecture Gap) at 24 queries — resolving the comparison URL redirect and creating 4 comparison landing pages would directly convert 15Five from absent to competitive across the highest-intent buying stage.

Gap coverage note: 129 of 132 gap queries (98%) are assigned to an L2 or L3 action item. 3 gap queries remain unrouted — these may represent edge-case queries that don’t cluster neatly or fall below the LLM’s grouping threshold.

Methodology
Audit Methodology

Query Construction

150 queries constructed from persona × buying job × feature focus × pain point matrix
Every query carries four metadata fields assigned at creation time
Note: the 150 queries span the full buying journey.
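
For reference, a minimal sketch of the query record this construction method implies; the field names and example values are illustrative, not the audit's internal schema.

```python
# Illustrative query record: the four metadata fields assigned at creation
# time, plus an ID and the query text. Values are examples, not audit data.
from dataclasses import dataclass

@dataclass
class BuyerQuery:
    query_id: str       # e.g. "15f_XXX"
    persona: str        # e.g. "cfo", "vp_talent"
    buying_job: str     # one of the 8 jobs listed below
    feature_focus: str  # e.g. "people_analytics", "okr_goal_tracking"
    pain_point: str     # e.g. "top_talent_flight_risk"
    text: str

HIGH_INTENT_JOBS = {"shortlisting", "comparison", "validation"}

example = BuyerQuery(
    query_id="15f_XXX",
    persona="vp_talent",
    buying_job="shortlisting",
    feature_focus="talent_calibration",
    pain_point="top_talent_flight_risk",
    text="Which platforms identify high-potential employees at risk of leaving?",
)
print(example.buying_job in HIGH_INTENT_JOBS)  # True: counts toward high-intent metrics
```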

Personas

Chief People Officer · Decision Maker
VP of People Operations · Evaluator
Director of HR Technology & People Analytics · Evaluator
Chief Financial Officer · Decision Maker
VP of Talent Management · Evaluator

Buying Jobs Framework

8 non-linear buying jobs: Artifact Creation, Comparison, Consensus Creation, Problem Identification, Requirements Building, Shortlisting, Solution Exploration, Validation
High-intent jobs (Shortlisting + Comparison + Validation): 55% of queries (83 of 150)

Competitive Set

Primary: Lattice, Culture Amp, Betterworks, Leapsome, Workleap
Secondary: Quantum Workplace, Engagedly, PerformYard, Reflektive
Surprise: BambooHR — flagged for review

Platforms & Scoring

Platforms: ChatGPT + Perplexity
Visibility: Binary — does the client appear in the response?
Win rate: Of visible queries, is the client the primary recommendation?

Cross-Platform Counting (Union Method)

When a query is run on multiple platforms, union logic is applied: a query counts as “visible” if the client appears on any platform, not each platform separately.
Winner resolution: When platforms disagree on the winner, majority vote is used. Vendor names are preferred over meta-values (e.g. “no clear winner”). True ties resolve to “no clear winner.”
Share of Voice: Each entity is counted once per query across platforms (union dedup), preventing double-counting when both platforms mention the same company.
This approach ensures headline metrics reflect real buyer-query outcomes rather than inflated per-platform counts.
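
A minimal sketch of these counting rules, using hypothetical response records and illustrative field names rather than the audit's internal data model:

```python
# Union-dedup visibility and majority-vote winner resolution, as defined above.
from collections import Counter

def query_visible(responses: list[dict], client: str) -> bool:
    """Visible if the client appears in ANY platform's response for this query."""
    return any(client in r["mentioned_brands"] for r in responses)

def resolve_winner(responses: list[dict]) -> str:
    """Majority vote; vendor names beat meta-values; true ties -> no clear winner."""
    votes = Counter(r["winner"] for r in responses)
    vendor_votes = {w: n for w, n in votes.items() if w != "no_clear_winner"}
    if not vendor_votes:
        return "no_clear_winner"
    top = max(vendor_votes.values())
    leaders = [w for w, n in vendor_votes.items() if n == top]
    return leaders[0] if len(leaders) == 1 else "no_clear_winner"

responses = [  # one query, two platforms (hypothetical records)
    {"platform": "chatgpt", "mentioned_brands": {"Lattice"}, "winner": "lattice"},
    {"platform": "perplexity", "mentioned_brands": {"15Five", "Lattice"}, "winner": "no_clear_winner"},
]
print(query_visible(responses, "15Five"))  # True: union across platforms
print(resolve_winner(responses))           # "lattice": vendor preferred over meta-value
```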

Terminology

Mentions: Query-level visibility count. A company receives one mention per query where it appears in any platform response (union-deduped). This is the numerator for Share of Voice.
Unique Pages Cited: Count of distinct client page URLs cited across all platform responses, after URL normalization (stripping tracking parameters). The footer total in the Citation section uses this measure.
Citation Instances (Top Cited Domains): Raw count of citation occurrences per domain across all responses. A single domain can accumulate multiple citation instances from different queries and platforms. The Top Cited Domains table uses this measure.
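
The distinction between the two citation measures, sketched with hypothetical citations; the tracking-parameter list is an assumption about what URL normalization strips:

```python
# Unique pages cited (distinct URLs after normalization) vs. citation instances
# (raw per-domain occurrence counts). Example URLs are hypothetical.
from collections import Counter
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "utm_term", "utm_content", "gclid"}

def normalize(url: str) -> str:
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept), fragment=""))

citations = [
    "https://www.15five.com/pricing?utm_source=chatgpt",
    "https://www.15five.com/pricing",
    "https://www.15five.com/products/perform",
]

unique_pages = {normalize(u) for u in citations}
instances_by_domain = Counter(urlparse(u).netloc for u in citations)

print(len(unique_pages))    # 2 unique pages after stripping tracking params
print(instances_by_domain)  # Counter({'www.15five.com': 3}) citation instances
```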