AI Visibility Audit

15Five
Visibility Report

Competitive intelligence for AI-mediated buying decisions. Where 15Five wins, where it loses, and a prioritized three-layer execution plan — built from 150 buyer queries across ChatGPT + Perplexity.

150 Buyer Queries
5 Personas
8 Buying Jobs
ChatGPT + Perplexity
February 27, 2026

TL;DR

46%
Visibility
69 of 150 queries
12%
Win Rate
18 wins of 150 queries
81
Invisible
queries where 15Five is absent
19
Recommendations
targeting 138 gap queries (+ 5 near-rebuild optimizations)
Section 1
Visible but Unchosen: Closing 15Five's AI Recommendation Gap

15Five appears in 46% of AI buyer queries but wins only 12% — visible but rarely the choice. Across 150 buyer queries on ChatGPT and Perplexity, 15Five is visible in 46% of responses (69/150) but claims the primary recommendation in only 12% (18/150). That 34-point gap between being seen and being chosen is the defining metric of this audit: 15Five enters AI buyer conversations but cedes the recommendation to Lattice or Culture Amp in the majority of queries where it is even present.

Early Funnel — Where 15Five is visible but not winning
Problem Identification
4.2%
Requirements Building
23.3%
Solution Exploration
23.3%
Late Funnel — Where 15Five competes
Shortlisting
65.4%
Validation
41.3%
Comparison
39.7%
Artifact Creation
26.1%
Consensus Creation
11.5%

[Mechanism] 15Five participated in a 150-query GEO visibility audit across ChatGPT and Perplexity, covering five buyer personas, eight buying jobs, and ten platform features in the mid-market performance management and employee engagement category. Overall, 15Five is visible in 46% of buyer queries (69/150) but claims the primary recommendation in only 12% (18/150). That 34-point visibility-to-win gap defines the strategic picture: 15Five enters AI buyer conversations but loses the recommendation to Lattice (35 wins) or Culture Amp (20 wins) in the majority of queries where it is even present. Total client citations reached 102 across both platforms — ranking 3rd in domain citation volume behind Lattice (188) and Culture Amp (176). The funnel pattern is more alarming than the overall numbers suggest.

At problem identification — where buyers first encounter challenges with turnover, manager quality, and siloed HR data — 15Five is visible in only 4.2% of queries (1/24). By Shortlisting, visibility rises to 65.4% (34/52). This top-funnel invisibility means AI assistants educate buyers about their problems without 15Five in the room: competitors set the category mental model before 15Five enters the conversation. This is not primarily a content strategy failure. 15Five has existing pages matching the majority of buyer queries.

The barriers are structural. An XML sitemap that excludes all commercial pages (sitemap_missing_commercial_pages) reduces crawler priority for 15Five's best content. Three Comparison URLs redirect to a generic brand page with no competitor specifics (comparison_urls_redirect_to_generic_page).

A key case study is gated and AI-inaccessible (case_study_gating_or_csr), blocking the Kreg Tool 20%+ turnover reduction data from appearing in AI citations. Below these technical barriers sit content depth gaps: the people analytics (AMAYA, AI Manager Assist), OKR tracking, and talent calibration pages are rated thin against the specific questions buyers ask — not because the features are absent, but because the documentation does not match buyer query intent. One query (15f_142: CFO TCO model) has no matching 15Five content at all — a complete gap at the deal-closing stage with the veto-holding CFO persona. The prescription is sequenced: L1 technical fixes first (days to two weeks each), then content depth in three feature areas, then one new TCO resource for the CFO.

Layer 1
Technical Fixes
Fix crawlability and indexing issues that block AI platforms from discovering content.
4 fixes + 2 checks · Days to 2 weeks
Layer 2
Content Optimization
Optimize existing pages to improve positioning in AI responses.
9 recommendations · 2–6 weeks
Layer 3
New Content
Create new pages targeting buying stages where no content exists.
1 recommendation · 1–3 months

[Synthesis] Execution is sequenced by reversibility and expected citation impact. L1 technical fixes execute first: the sitemap expansion (sitemap_missing_commercial_pages, 1–3 days) immediately increases AI crawler discovery priority for all 15+ commercial pages currently excluded; the comparison URL restoration (comparison_urls_redirect_to_generic_page, 1–2 weeks) addresses the highest-intent buying-stage content gap — these URLs receive competitor-specific search traffic and serve no competitive content; and the case study ungating (case_study_gating_or_csr, 1–3 days) unlocks the Kreg Tool 20%+ turnover reduction evidence for AI citation.

L2 improvements proceed in commercial-weight order. l2_001 (AMAYA (AI Manager Assist)/people analytics) and l2_003 (talent calibration) address veto-holder-adjacent evaluator queries with the highest-severity thin coverage. l2_005 (competitor comparison pages) is the content complement to the comparison_urls_redirect L1 fix — the technical redirect fix enables the URL; the content investment gives it competitive substance. l2_004 (CFO ROI content) and nio_001 (CFO TCO model, the sole L3 gap) address the same veto-holder persona in one content investment phase. l2_006 (engagement survey positioning), l2_007 (Kona differentiation), and l2_009 (customer evidence structuring) carry high commercial weight but lower urgency than the thin-coverage items. l2_002 (OKR depth) and l2_008 (HRIS integration documentation) are medium priority.

Reference
How to Read This Report

Visibility

Whether 15Five is mentioned at all in an AI response to a buyer query. Being visible does not mean being recommended — it just means 15Five appeared somewhere in the answer.

Win Rate

Of the queries where 15Five is visible, the percentage where it is the primary recommendation — the vendor the AI tells the buyer to evaluate first.

Share of Voice (SOV)

How often a vendor is mentioned by AI across all 150 buyer queries. Measures brand presence in AI-generated answers, not ad spend or traditional media.

Buying Jobs

The 8 non-linear tasks buyers perform during a purchase: Problem Identification, Solution Exploration, Requirements Building, Shortlisting, Comparison, Validation, Consensus Creation, and Artifact Creation.

NIO

Narrative Intelligence Opportunity — a cluster of related buyer queries where 15Five has no content. Each NIO includes a blueprint of on-domain pages and off-domain actions to close the gap.

L1 / L2 / L3

The three execution layers. L1 = technical infrastructure fixes. L2 = optimization of existing pages. L3 = new content creation and off-domain authority building.

Citation

When an AI tool references a specific webpage as its source. AI systems build recommendations from cited pages — if your pages aren't cited, your content didn't influence the answer.

Invisible Query

A buyer query where 15Five does not appear in the AI response at all. Distinct from a positioning gap, where 15Five appears but is not the recommended vendor.
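The arithmetic behind these metrics is simple. The sketch below shows how visibility, win rate, win-when-visible, and share of voice relate, using a toy result set; the field layout and example rows are hypothetical stand-ins, not the audit's actual export.

```python
# Illustrative only: a toy query-level result set, not the audit's actual data.
from collections import Counter

results = [
    # (query_id, vendors_mentioned, primary_recommendation)
    ("q1", ["15Five", "Lattice"], "Lattice"),
    ("q2", ["Lattice", "Culture Amp"], "Culture Amp"),
    ("q3", ["15Five"], "15Five"),
    ("q4", [], None),
]

brand = "15Five"
total = len(results)

visible = [r for r in results if brand in r[1]]   # brand appears anywhere in the answer
wins = [r for r in results if r[2] == brand]      # brand is the primary recommendation

visibility = len(visible) / total                 # e.g. 69/150 -> 46%
win_rate = len(wins) / total                      # e.g. 18/150 -> 12%
win_when_visible = len(wins) / len(visible) if visible else 0.0   # e.g. 18/69 -> 26.1%

# Share of voice: each vendor mention counts once per query response.
mentions = Counter(v for _, vendors, _ in results for v in vendors)
sov = {v: n / sum(mentions.values()) for v, n in mentions.items()}

print(f"visibility={visibility:.0%} win_rate={win_rate:.0%} "
      f"win_when_visible={win_when_visible:.1%} sov={sov}")
```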
Section 2
Visibility Analysis

Where 15Five appears and where it doesn't — across personas, buying jobs, and platforms.

[TL;DR] 15Five is visible in 46% of buyer queries but wins only 12%. Converting visibility to wins is the primary challenge (a 34-percentage-point gap).

15Five's 65% Shortlisting visibility collapses to 4% at problem identification — AI assistants educate buyers about their performance management challenges without 15Five present, meaning competitors set the mental frame before 15Five enters the shortlist conversation.

Platform Visibility

12 percentage points
ChatGPT trails Perplexity overall
33 percentage points
VP of Talent Management — widest persona swing (ChatGPT lower)
23 percentage points
Consensus Creation — widest stage swing (ChatGPT lower)
Dimension · Combined · Platform Delta
All Queries · 46% · Perplexity +12 percentage points
By Persona
Chief Financial Officer · 41.3% · Perplexity +13 percentage points
Chief People Officer · 32.4% · Perplexity +12 percentage points
Director of HR Technology & People Analytics · 34.4% · Perplexity +6 percentage points
VP of People Operations · 37.5% · Even
VP of Talent Management · 29.8% · Perplexity +33 percentage points
By Buying Job
Artifact Creation · 26.1% · Perplexity +20 percentage points
Comparison · 39.7% · Even
Consensus Creation · 11.5% · Perplexity +23 percentage points
Problem Identification · 4.2% · Perplexity +8 percentage points
Requirements Building · 23.3% · Perplexity +20 percentage points
Shortlisting · 65.4% · Perplexity +23 percentage points
Solution Exploration · 23.3% · Perplexity +7 percentage points
Validation · 41.3% · Perplexity +13 percentage points
Per-platform breakdown (ChatGPT vs. Perplexity raw %)
Dimension · ChatGPT · Perplexity
All Queries · 28.7% · 40.9%
By Persona
Chief Financial Officer · 34.8% · 47.8%
Chief People Officer · 26.5% · 38.2%
Director of HR Technology & People Analytics · 31.2% · 37.5%
VP of People Operations · 37.5% · 37.5%
VP of Talent Management · 13.8% · 46.4%
By Buying Job
Artifact Creation · 16.7% · 36.4%
Comparison · 41.2% · 38.2%
Consensus Creation · 0% · 23.1%
Problem Identification · 0% · 8.3%
Requirements Building · 13.3% · 33.3%
Shortlisting · 53.8% · 76.9%
Solution Exploration · 20% · 26.7%
Validation · 34.8% · 47.8%

Visibility by Buying Job

Artifact Creation · 26.1% (6/23)
Comparison · 39.7% (27/68)
Consensus Creation · 11.5% (3/26)
Problem Identification · 4.2% (1/24)
Requirements Building · 23.3% (7/30)
Shortlisting · 65.4% (34/52)
Solution Exploration · 23.3% (7/30)
Validation · 41.3% (19/46)
High-intent visibility (Shortlist + Compare + Validate) · 48.2% (80/166)
High-intent win rate · 33.8% (27/80)
Appearance → win conversion · 33.8% (27/80)

Visibility & Win Rate by Persona

Chief Financial Officer · 41.3% vis · 42.1% win (8/19)
Chief People Officer · 32.4% vis · 45.5% win (10/22)
Director of HR Technology & People Analytics · 34.4% vis · 22.7% win (5/22)
VP of People Operations · 37.5% vis · 16.7% win (4/24)
VP of Talent Management · 29.8% vis · 5.9% win (1/17)
Decision-maker win rate (Chief Financial Officer + Chief People Officer) · 43.9% (18/41 visible)
Evaluator win rate (Director of HR Technology & People Analytics + VP of People Operations + VP of Talent Management) · 15.9% (10/63 visible)
Role type gap · 28 percentage points

Visibility by Feature Focus

Compensation Management · 13.6% vis (3/22) · 33.3% win (1/3)
Continuous Check-ins · 44.4% vis · 25% win (N=27)
Employee Engagement Surveys · 35.7% vis · 20% win (N=42)
HRIS Integrations · 54.5% vis · 16.7% win (N=22)
Manager Coaching · 13.3% vis (4/30) · 75% win (3/4)
OKR Goal Tracking · 45% vis (9/20) · 22.2% win (2/9)
People Analytics · 9.4% vis (3/32) · 66.7% win (2/3)
Performance Reviews · 47.6% vis · 20% win (N=42)
Recognition Feedback · 27.8% vis (5/18) · 20% win (1/5)
Talent Calibration · 31.2% vis (5/16) · 20% win (1/5)

Visibility by Pain Point

Annual Review Burden · 53.8% vis · 14.3% win (N=26)
Goal Misalignment · 50% vis (6/12) · 33.3% win (2/6)
HR ROI Proof · 37.5% vis · 44.4% win (N=48)
Ineffective Managers · 27.8% vis · 30% win (N=36)
Low Engagement, No Action · 25% vis (4/16) · 50% win (2/4)
Regrettable Turnover · 44.7% vis · 35.3% win (N=38)
Siloed HR Data · 25% vis (3/12) · 66.7% win (2/3)
Top Talent Flight Risk · 18.2% vis (4/22) · 25% win (1/4)

[Data] Query-level visibility: 46% (69/150). Win rate: 12% (18/150). Win-when-visible rate: 26.1% (18/69).

ChatGPT visibility: 28.7% (43/150). Perplexity visibility: 40.9% (61/149). Platform gap: 12.2pp (ChatGPT lower).

By buying stage: Problem Identification 4.2% (1/24), Solution Exploration 23.3% (7/30), Requirements Building 23.3% (7/30), Consensus Creation 11.5% (3/26), Shortlisting 65.4% (34/52), Validation 41.3% (19/46), Comparison ~39.7% (27/68 responses).

[Synthesis] 15Five's visibility is structurally bifurcated by buying stage. At Shortlisting (65.4%) and Validation (41.3%), 15Five reaches buyers competitively. At problem identification (4.2%) and consensus creation (11.5%), it is functionally absent — the stages where buyers form category preferences and build internal business cases.

This asymmetry means buyers encounter 15Five only after competitors have already framed the problem space; it enters Shortlisting as the third or fourth name rather than the category anchor. The 12.2-point ChatGPT visibility gap (28.7% vs. Perplexity's 40.9%) is the platform-specific urgency signal. ChatGPT is the dominant first-touch AI tool for buyer research and draws on training data for educational queries — precisely the stages (Problem Identification, Solution Exploration) where 15Five's content depth is thinnest.

Perplexity's higher visibility confirms the domain is technically accessible; ChatGPT's lower share is a content authority deficit, not a crawlability issue. Requirements Building (23.3% visible, 0 wins) and Consensus Creation (11.5% visible, 0 wins on ChatGPT) confirm that 15Five loses the stages where buyers are building evaluation frameworks and internal approval materials.

Invisibility Gaps — 81 Queries Where 15Five Doesn’t Appear

Queries where 15Five is not mentioned in any AI response.

ID · Query · Persona · Stage · Winner
15f_001 · "What are the warning signs that good employees are about to leave — and how do mid-market companies catch this early?" · Chief People Officer · Problem Identification · No Vendor Mentioned
15f_002 · "How do you prove to a skeptical CFO that people programs actually reduce turnover and save money?" · Chief People Officer · Problem Identification · No Vendor Mentioned
15f_003 · "What are mid-market companies doing about managers who were promoted for technical skills but can't actually manage people?" · Chief People Officer · Problem Identification · No Vendor Mentioned
15f_004 · "Our annual reviews take two months of admin time and managers dread them — what are other companies doing instead?" · VP of People Operations · Problem Identification · No Clear Winner
15f_005 · "Biggest challenges with first-time managers who were promoted for individual performance, not people skills" · VP of People Operations · Problem Identification · No Vendor Mentioned
15f_006 · "We've been running engagement surveys for two years and employees keep asking what actually changed — how do you close the loop?" · VP of People Operations · Problem Identification · No Vendor Mentioned
15f_007 · "What are the main approaches to unifying performance, engagement, and compensation data when they live in separate HR systems?" · Director of HR Technology & People Analytics · Problem Identification · No Vendor Mentioned
15f_008 · "We have performance data in one system, engagement in another, and comp in spreadsheets — how do other mid-market companies fix this?" · Director of HR Technology & People Analytics · Problem Identification · No Vendor Mentioned
15f_009 · "How much does a poor performance management process actually cost a mid-market company in turnover and lost productivity?" · Chief Financial Officer · Problem Identification · No Vendor Mentioned
15f_010 · "We're linking compensation to performance using spreadsheets and it's becoming a liability — how do other companies handle this?" · Chief Financial Officer · Problem Identification · No Vendor Mentioned

Remaining competitor wins: Lattice ×7, Culture Amp ×6, Leapsome ×5, Workleap ×2, PerformYard ×1. 6 queries with no clear winner. 44 queries with no vendor mentioned. Full query-level data available in the analysis export.

Positioning Gaps — 51 Queries Where 15Five Appears But Loses

Queries where 15Five is mentioned but a competitor is positioned more favorably.

ID · Query · Persona · Buying Job · Winner · 15Five Position
15f_011 · "How do you identify which employees are high-potential and at risk of leaving before they hand in their notice?" · VP of Talent Management · Problem Identification · No Vendor Mentioned · Brief Mention
15f_016 · "We're replacing our ad-hoc 1:1 process — what's the real difference between dedicated check-in platforms and just using meeting agenda templates?" · VP of People Operations · Solution Exploration · No Clear Winner · Mentioned In List
15f_019 · "How do performance management platforms typically integrate with HRIS systems like Workday, BambooHR, and ADP?" · Director of HR Technology & People Analytics · Solution Exploration · No Clear Winner · Mentioned In List
15f_021 · "Open source vs. commercial OKR tools — real tradeoffs for a company with 200-500 employees" · Director of HR Technology & People Analytics · Solution Exploration · No Clear Winner · Mentioned In List
15f_022 · "We've outgrown SurveyMonkey for employee engagement — what does a modern performance management tech stack look like for 300+ employees?" · Director of HR Technology & People Analytics · Solution Exploration · Culture Amp · Mentioned In List
15f_024 · "Our current review process doesn't connect to any business outcomes — how do companies move from annual reviews to something measurable?" · Chief Financial Officer · Solution Exploration · No Vendor Mentioned · Mentioned In List
15f_026 · "What types of HR technology actually move the needle on reducing voluntary turnover at mid-market companies?" · VP of Talent Management · Solution Exploration · No Vendor Mentioned · Mentioned In List
15f_031 · "Key requirements for evaluating performance review platforms for a 400-person company moving away from annual reviews" · VP of People Operations · Requirements Building · No Clear Winner · Brief Mention
15f_033 · "We want continuous feedback between review cycles — what capabilities actually matter in a recognition and feedback tool?" · VP of People Operations · Requirements Building · No Clear Winner · Brief Mention
15f_034 · "Integration requirements for evaluating performance management software — HRIS sync, SSO, SCIM provisioning, API access, webhook support" · Director of HR Technology & People Analytics · Requirements Building · No Vendor Mentioned · Mentioned In List
15f_037"We've tried and failed with spreadsheet-based OKRs — what features in a dedicated OKR tool actually make goal cascading work?"Director of HR Technology & People AnalyticsRequirements BuildingNo Vendor MentionedBrief Mention
15f_039"Evaluation criteria for performance management platforms from a finance perspective — ROI metrics, implementation costs, time to value"Chief Financial OfficerRequirements BuildingNo Vendor MentionedBrief Mention
15f_042"We need structured 1:1 tools that connect manager check-ins to company goals — what capabilities should we prioritize?"VP of Talent ManagementRequirements BuildingNo Vendor MentionedMentioned In List
15f_044"Which employee engagement platforms actually help you act on survey results, not just collect engagement scores?"Chief People OfficerShortlistingCulture AmpMentioned In List
15f_045"We've outgrown our current performance tool — best platforms for continuous check-ins and manager coaching at a 350-person company"Chief People OfficerShortlistingBetterworksMentioned In List
15f_048"Best compensation management tools for mid-market companies trying to connect pay to performance data"Chief People OfficerShortlistingNo Vendor MentionedMentioned In List
15f_049"Top continuous performance review platforms for replacing spreadsheet-based annual reviews at a 200-500 person company"VP of People OperationsShortlistingLatticeStrong 2nd
15f_050"alternatives to our current performance management tool for a 350-person SaaS company focused on reducing regrettable turnover"VP of People OperationsShortlistingLatticeMentioned In List
15f_054"performance management platforms with reliable BambooHR and Workday integration — replacing a tool that doesn't sync properly"VP of People OperationsShortlistingLatticeMentioned In List
15f_055"Best performance management platforms with native HRIS integrations — Workday, ADP, BambooHR sync without custom middleware"Director of HR Technology & People AnalyticsShortlistingLatticeMentioned In List
15f_057"looking to replace our current review tool with a continuous performance platform that supports 360-degree feedback and custom review cycles"Director of HR Technology & People AnalyticsShortlistingLatticeMentioned In List
15f_058"replacing our standalone engagement survey tool — need a platform with real-time dashboards, API access, and data export for an analytics-driven HR team"Director of HR Technology & People AnalyticsShortlistingCulture AmpMentioned In List
15f_062"OKR platforms affordable enough for mid-market but robust enough to actually make goals stick across departments"Chief Financial OfficerShortlistingNo Vendor MentionedStrong 2nd
15f_065"Best OKR tools for companies where goal cascading has never worked — switching from spreadsheets to a dedicated platform"VP of Talent ManagementShortlistingNo Clear WinnerMentioned In List
15f_066"Which engagement platforms are best at connecting survey data to retention outcomes for mid-market companies?"VP of Talent ManagementShortlistingLatticeMentioned In List
15f_067"Top tools for developing managers who've never had formal leadership training — practical coaching, not just theory"VP of Talent ManagementShortlistingCulture AmpMentioned In List
15f_070"We're moving from annual reviews — how does Lattice compare to other platforms for making that transition smooth?"Chief People OfficerComparisonLatticeStrong 2nd
15f_074"How does Culture Amp handle continuous check-ins and manager enablement compared to dedicated check-in platforms?"VP of People OperationsComparisonCulture AmpStrong 2nd
15f_075"Switching from our current review tool — how does Lattice compare for making performance reviews less painful?"VP of People OperationsComparisonLatticeStrong 2nd
15f_076"How does Workleap's recognition and feedback functionality compare to more comprehensive performance management platforms?"VP of People OperationsComparisonWorkleapMentioned In List
15f_077"We're considering switching our engagement tool — how does Culture Amp's benchmarking compare to other platforms' action-planning features?"VP of People OperationsComparisonCulture AmpMentioned In List
15f_078"How does Lattice's integration architecture compare to other performance platforms for HRIS sync, APIs, and webhooks?"Director of HR Technology & People AnalyticsComparisonLatticeMentioned In List
15f_082"We're replacing spreadsheet-based comp decisions — how does Lattice's compensation module compare for linking pay to performance?"Chief Financial OfficerComparisonLatticeBrief Mention
15f_084"How does Betterworks' total cost compare to mid-market alternatives — implementation, training, and per-seat pricing?"Chief Financial OfficerComparisonBetterworksStrong 2nd
15f_085"How does Lattice's talent calibration and 9-box feature compare to other performance management platforms?"VP of Talent ManagementComparisonLatticeStrong 2nd
15f_086"How does Leapsome's continuous feedback compare to other 1:1 tools — which one do managers actually adopt?"VP of Talent ManagementComparisonLeapsomeMentioned In List
15f_087"How does Workleap's engagement surveys compare to more analytics-heavy platforms for a 200-person company?"VP of Talent ManagementComparisonWorkleapStrong 2nd
15f_103"Lattice implementation problems when migrating from another performance management tool at a mid-market company"Chief People OfficerValidationNo Vendor MentionedBrief Mention
15f_106"We're evaluating Culture Amp as a replacement — what are the biggest downsides of their performance review features?"VP of People OperationsValidationNo Clear WinnerBrief Mention
15f_109"Betterworks analytics and reporting limitations — what can't it do that other platforms handle?"Director of HR Technology & People AnalyticsValidationNo Clear WinnerBrief Mention
15f_111"Betterworks reviews from mid-market companies — is it worth the enterprise-level pricing?"Chief Financial OfficerValidationNo Clear WinnerMentioned In List
15f_113"Is Workleap too basic for a growing mid-market company — will we outgrow it in two years?"Chief Financial OfficerValidationNo Clear WinnerBrief Mention
15f_114"Workleap Officevibe limitations — what are the biggest feature gaps compared to more comprehensive platforms?"VP of Talent ManagementValidationNo Clear WinnerMentioned In List
15f_119"15Five talent management and performance calibration — how does it compare to dedicated talent review platforms?"VP of Talent ManagementValidationNo Clear WinnerPrimary Recommendation
15f_121"Biggest risks of switching to continuous performance management from annual reviews at a mid-market company"Director of HR Technology & People AnalyticsValidationNo Vendor MentionedMentioned In List
15f_127"Case studies of mid-market companies that improved manager effectiveness after switching to continuous performance management"Chief People OfficerConsensus CreationLatticeMentioned In List
15f_137"Case studies of companies that reduced regrettable turnover after switching from annual reviews to continuous performance management"VP of Talent ManagementConsensus CreationNo Vendor MentionedMentioned In List
15f_140"Create a vendor Comparison scorecard for 15Five, Lattice, Culture Amp, Betterworks, and Leapsome focused on integration capabilities and data architecture"Director of HR Technology & People AnalyticsArtifact CreationLatticeStrong 2nd
15f_141"Build an evaluation template for comparing continuous performance management platforms — weighted scoring for reviews, check-ins, engagement, and analytics"VP of People OperationsArtifact CreationNo Vendor MentionedMentioned In List
15f_147"Create a Comparison matrix for OKR and goal tracking features across 15Five, Betterworks, Lattice, and Leapsome"Chief People OfficerArtifact CreationNo Clear WinnerMentioned In List
15f_149"Draft an executive summary comparing recognition and continuous feedback platforms for a leadership team — focus on retention impact"VP of Talent ManagementArtifact CreationNo Vendor MentionedMentioned In List
Section 3
Competitive Position

Who’s winning when 15Five isn’t — and who controls the narrative at each buying stage.

[TL;DR] 15Five wins 12% of queries (18/150), ranks #3 in SOV — H2H record: 30W–28L across 9 competitors.

Lattice leads by 35 wins to 18 and 188 citations to 102 — the gap is documentation depth in Shortlisting and Comparison stages, not brand disadvantage; it closes with 60–90 days of focused content investment in the three features (analytics, calibration, OKR) where Lattice's documentation currently exceeds 15Five's.

Share of Voice

Company · Mentions · Share
Lattice · 90 · 21.3%
Culture Amp · 73 · 17.3%
15Five · 69 · 16.4%
Leapsome · 50 · 11.8%
Betterworks · 41 · 9.7%
Quantum Workplace · 30 · 7.1%
PerformYard · 28 · 6.6%
Workleap · 24 · 5.7%
Engagedly · 15 · 3.5%
Reflektive · 2 · 0.5%

Head-to-Head Records

When 15Five and a competitor both appear in the same response, who gets the recommendation? One query with multiple competitors generates a matchup against each — so H2H totals will exceed the query count.

Win = 15Five was the primary recommendation (cross-platform majority). Loss = the competitor was the primary recommendation. Tie = neither was, or a third party won.

vs. Lattice · 8W – 8L – 44T (60 mentioned together)
vs. Culture Amp · 3W – 6L – 30T (39 mentioned together)
vs. Betterworks · 2W – 6L – 16T (24 mentioned together)
vs. Leapsome · 6W – 2L – 24T (32 mentioned together)
vs. Workleap · 2W – 2L – 16T (20 mentioned together)
vs. Quantum Workplace · 1W – 1L – 15T (17 mentioned together)
vs. Engagedly · 4W – 1L – 5T (10 mentioned together)
vs. PerformYard · 3W – 2L – 14T (19 mentioned together)
vs. Reflektive · 1W – 0L – 1T (2 mentioned together)
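
A minimal sketch of how these head-to-head records can be tallied from query-level results, following the rules above; the data structure and example rows are hypothetical stand-ins for the audit export.

```python
# Illustrative sketch of the head-to-head tally described above.
from collections import defaultdict

results = [
    # (vendors_mentioned, primary_recommendation)
    ({"15Five", "Lattice", "Leapsome"}, "Lattice"),
    ({"15Five", "Lattice"}, "15Five"),
    ({"15Five", "Culture Amp"}, None),       # no clear winner -> tie
]

brand = "15Five"
record = defaultdict(lambda: {"W": 0, "L": 0, "T": 0})

for vendors, winner in results:
    if brand not in vendors:
        continue                             # only queries where both vendors appear
    for competitor in vendors - {brand}:     # one matchup per co-mentioned competitor
        if winner == brand:
            record[competitor]["W"] += 1
        elif winner == competitor:
            record[competitor]["L"] += 1
        else:                                # neither, or a third party, won
            record[competitor]["T"] += 1

for competitor, r in record.items():
    total = sum(r.values())
    print(f"vs. {competitor}: {r['W']}W – {r['L']}L – {r['T']}T ({total} mentioned together)")
```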

Invisible Query Winners

For the 81 queries where 15Five is completely absent:

Culture Amp · 7 wins (8.6%)
Lattice · 5 wins (6.2%)
Betterworks · 4 wins (4.9%)
Leapsome · 3 wins (3.7%)
PerformYard · 1 win (1.2%)
Workleap · 1 win (1.2%)
Uncontested (no winner) · 53 queries (65.4%)

Surprise Competitors

Vendors appearing in responses not in 15Five’s defined competitive set.

BambooHR — 4.7% SOV · Flagged
Perceptyx — 1.7% SOV · Flagged
beqom — 1.4% SOV · Flagged
HiBob — 1.4% SOV · Flagged
WorkTango — 1.4% SOV · Flagged
Deel — 1.4% SOV · Flagged
Workday — 1.2% SOV · Flagged
Workhuman — 1.2% SOV · Flagged
Visier — 1.2% SOV · Flagged
Paycor — 1.2% SOV · Flagged

[Synthesis] Lattice leads the field with 35 query wins to 15Five's 18 and 188 combined domain citations to 15Five's 102 — a 1.85x citation gap that reflects documentation depth, not brand superiority. The performance gap is concentrated in Comparison and Shortlisting buying jobs, where Lattice's feature documentation depth drives AI retrieval and Culture Amp's content volume dominates engagement analytics queries. Both competitive leads are content investment advantages, not structural brand disadvantages. Lattice wins through documentation authority (Lattice.com + help.Lattice.com cited 188 times combined) and Culture Amp through content volume (176 combined citations).

Neither gap requires product changes to close — both respond directly to focused content investment over 60–90 days. BambooHR's 20 surprise appearances signal an expanding category risk: AI assistants are recommending a broad HCM platform for queries where performance-management-first tools should dominate. This category boundary expansion — HCM platforms competing for performance management mindshare — makes it more urgent, not less, for 15Five to establish a clear mid-market performance management identity.

Section 4
Citation & Content Landscape

What AI reads and trusts in this category.

[TL;DR] 15Five pages were cited 102 times across buyer queries. 10 high-authority domains cite competitors but not 15Five.

15Five's 102 client citations rank 3rd in the competitive field; Perplexity cites the domain more frequently than ChatGPT, confirming crawlability is not the issue — building third-party citation authority through analyst and HR media placements that name AMAYA (AI Manager Assist) and Kona would move both platforms simultaneously.

Top Cited Domains (citation instances)

lattice.com · 165
cultureamp.com · 126
15five.com · 112
betterworks.com · 99
leapsome.com · 77
g2.com · 75
success.15five.com · 68
peoplemanagingpeople.com · 54
linkedin.com · 53
quantumworkplace.com · 51
workleap.com · 51
support.cultureamp.com · 50
performyard.com · 36
reddit.com · 35
aihr.com · 32
capterra.com · 32
en.wikipedia.org · 30
bamboohr.com · 26
youtube.com · 24
gallup.com · 23

15Five URL Citations by Page

www.15five.com · 16
success.15five.com/hc/en-us/articles/1997683054... · 3
www.15five.com/blog/pendo-reduces-turnover-by-2... · 3
www.15five.com/solutions/reduce-regrettable-tur... · 2
www.15five.com/blog/guide-to-performance-manage... · 2
www.15five.com/products/perform/ai-assisted-rev... · 2
success.15five.com/hc/en-us/articles/3600523467... · 2
success.15five.com/hc/en-us/articles/1390263345... · 2
success.15five.com/hc/en-us/articles/1392119953... · 2
success.15five.com/hc/en-us/articles/3177987475... · 2
success.15five.com/hc/en-us/articles/3600026995... · 2
success.15five.com/hc/en-us/articles/3600026996... · 2
www.15five.com/products/perform · 2
www.15five.com/partners/technology-partners/int... · 2
www.15five.com/products/perform/okrs-and-goals · 2
www.15five.com/blog/ai-predictive-analytics-for... · 1
www.15five.com/blog/trustradius-how-using-15fiv... · 1
success.15five.com/hc/en-us/articles/3090774315... · 1
success.15five.com/hc/en-us/articles/3085435206... · 1
success.15five.com/hc/en-us/articles/3028541446... · 1
www.15five.com/resources/on-demand/performance-... · 1
www.15five.com/products/15five-ai · 1
success.15five.com/hc/en-us/articles/3605404832... · 1
www.15five.com/resources/on-demand/the-ai-compa... · 1
success.15five.com/hc/en-us/articles/1581797015... · 1
www.15five.com/resources/research/reviewing-the... · 1
success.15five.com/hc/en-us/articles/360006576692 · 1
www.15five.com/solutions/improve-manager-effect... · 1
success.15five.com/hc/en-us/articles/3600065766... · 1
www.15five.com/blog/empowered-education · 1
www.15five.com/blog/how-to-implement-impactful-... · 1
www.15five.com/blog/workplace-challenges · 1
www.15five.com/blog/top-hr-issues-2021 · 1
www.15five.com/blog/creating-a-pip-performance-... · 1
www.15five.com/blog/career-hub-employee-growth · 1
www.15five.com/blog/best-self-kickoff · 1
www.15five.com/blog/6-steps-to-better-onboardin... · 1
www.15five.com/blog/4-hidden-challenges-that-ho... · 1
www.15five.com/blog/continuous-employee-feedback · 1
success.15five.com/hc/en-us/articles/3600517782... · 1
www.15five.com/blog/the-benefits-of-integrating... · 1
success.15five.com/hc/en-us/articles/1710639436... · 1
success.15five.com/hc/en-us/articles/3600026995... · 1
success.15five.com/hc/en-us/articles/3600206958... · 1
www.15five.com/security · 1
success.15five.com/hc/en-us/articles/3086753652... · 1
success.15five.com/hc/en-us/articles/1181684228... · 1
www.15five.com/hubfs/Content/E-Books/15Five_202... · 1
www.15five.com/solutions/increase-employee-enga... · 1
www.15five.com/blog/how-15five-can-help-improve... · 1
www.15five.com/hubfs/Content/E-Books/15Five_Emp... · 1
www.15five.com/blog/employee-engagement-roi-cal... · 1
www.15five.com/blog/a-case-for-increasing-your-... · 1
www.15five.com/resources/on-demand/role-of-enga... · 1
success.15five.com/hc/en-us/articles/4404620478... · 1
success.15five.com/hc/en-us/articles/3600571794... · 1
success.15five.com/hc/en-us/articles/4404620505... · 1
www.15five.com/blog/ensure-fair-and-consistent-... · 1
success.15five.com/hc/en-us/articles/2386021413... · 1
success.15five.com/hc/en-us/articles/4404623881... · 1
www.15five.com/products/perform/calibrations · 1
www.15five.com/blog/kreg-tool · 1
www.15five.com/blog/state-of-employee-turnover · 1
www.15five.com/blog/what-is-continuous-performa... · 1
www.15five.com/blog/the-impact-of-regrettable-t... · 1
www.15five.com/winter-2026-product-release · 1
success.15five.com/hc/en-us/articles/3600256000... · 1
success.15five.com/hc/en-us/articles/3600026989... · 1
success.15five.com/hc/en-us/articles/3600026821... · 1
www.15five.com/blog/using-15fives-performance-m... · 1
www.15five.com/blog/5-must-have-features-to-loo... · 1
Total 15Five page citations · 102
15Five domain rank · 3rd by citation instances (behind lattice.com and cultureamp.com)

Competitor URL Citations

Note: Domain-level citation counts (above) tally instances per individual domain. Competitor-level counts (below) aggregate across all domains owned by a single vendor, which may include subdomains.
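
To make that distinction concrete, a small sketch of the two counting levels follows; the domain-to-vendor ownership mapping shown is illustrative and may not match the audit's exact list.

```python
# Minimal sketch of the two counting levels described in the note above.
from collections import Counter

citations = ["lattice.com", "help.lattice.com", "lattice.com", "cultureamp.com",
             "support.cultureamp.com", "15five.com", "success.15five.com"]

vendor_domains = {
    "Lattice": {"lattice.com", "help.lattice.com"},
    "Culture Amp": {"cultureamp.com", "support.cultureamp.com"},
    "15Five": {"15five.com", "success.15five.com"},
}

domain_counts = Counter(citations)                 # domain-level tally (table above)

vendor_counts = {                                  # vendor-level tally (list below)
    vendor: sum(domain_counts[d] for d in domains)
    for vendor, domains in vendor_domains.items()
}

print(domain_counts)
print(vendor_counts)
```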

Lattice · 188 URL citations
Culture Amp · 176 URL citations
Betterworks · 116 URL citations
Leapsome · 90 URL citations
Workleap · 61 URL citations
Quantum Workplace · 52 URL citations
PerformYard · 36 URL citations
Engagedly · 12 URL citations
Reflektive · 1 URL citation

Third-Party Citation Gaps

Domains citing competitors but not 15Five — off-domain authority opportunities.

These domains cited competitors but did not cite 15Five pages in the queries analyzed. This reflects citation patterns in AI responses, not overall platform presence.

lattice.com · 165 citations · 15Five not cited
cultureamp.com · 126 citations · 15Five not cited
betterworks.com · 99 citations · 15Five not cited
leapsome.com · 77 citations · 15Five not cited
peoplemanagingpeople.com · 54 citations · 15Five not cited

[Synthesis] 15Five's 102 citations place it 3rd in overall domain citation volume — behind Lattice and Culture Amp but ahead of all other competitors. The gap to Lattice (86 additional citations) reflects documentation depth, and the gap to Culture Amp (74 additional citations) reflects content volume across educational and research queries. The platform citation split is the most actionable finding: success.15five.com (help documentation) contributes 36 of 102 citations, suggesting AI assistants actively retrieve 15Five's technical help content when they find it. The main 15five.com domain contributes 66 citations.

Together, this confirms that 15Five's content is technically accessible — robots.txt allows all major AI crawlers — and the citation gap is an authority and depth problem, not a technical access problem. The 343 third-party domains cited with no 15Five presence identify the structural off-domain gap. For educational queries, AI assistants draw primarily from industry publications (aihr.com, gallup.com), media (peoplemanagingpeople.com), and professional networks (linkedin.com) — sources that have not yet established 15Five as a named reference for people analytics, OKR implementation, or talent calibration topics. An off-domain publication strategy would move both ChatGPT and Perplexity citations in parallel.

Section 5
Prioritized Action Plan

Three layers of recommendations ranked by commercial impact and implementation speed.

[TL;DR] 19 priority recommendations (plus 5 near-rebuild optimizations) targeting 138 queries where 15Five is currently invisible or losing: 4 L1 technical fixes + 2 verification checks, 9 content optimizations (L2), and 1 new content initiative (L3).

Sequence: L1 technical fixes first (sitemap expansion and case study ungating in 1–3 days each; comparison URL restoration in 1–2 weeks), then L2 content depth in people analytics (AMAYA (AI Manager Assist)), talent calibration, and competitor comparison pages, then the CFO TCO resource (nio_001) — together addressing the 54% of queries where 15Five is currently invisible.

Reading the priority numbers: Recommendations are ranked 1–16 across all three layers by commercial impact × implementation speed. Within each layer, items appear in priority order. Gaps in the sequence (e.g., Layer 1 shows #9–#12, then #15) mean the missing priorities belong to a different layer.

Layer 1 Technical Fixes

Configuration and infrastructure changes. Owner: Engineering / DevOps. Timeline: Days to weeks.

Priority · Finding · Impact · Timeline
#9 · Case Study Page Returns Minimal Body Content — Verify Gating or CSR · Medium · 1–3 days

Issue: The Kreg Tool case study page at /resources/case-studies/how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20 returned almost exclusively navigation and footer markup with negligible body content — only the headline metric ('reduced turnover by over 20%') and a download button were accessible. Related customer stories presented as blog posts (Pendo, Auror) returned full body content normally. The case study format on this URL appears to use a gated download model (PDF behind a form), which renders the page's substantive content inaccessible to AI crawlers.

Fix: Convert the highest-value case studies from gated PDF format to fully accessible HTML pages with inline outcome metrics, challenge/solution narrative, and specific product features used. Keep the formatted PDF as a downloadable bonus for users who want it. This approach makes the content available to both AI crawlers and human readers without sacrificing lead capture (the form can be offered as an optional 'download full report' CTA within the page). Priority case studies to convert: Kreg Tool, TrustRadius, and any others using the gated format.
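
One way to spot-check whether a case study exposes substantive body text to a non-JavaScript crawler is to fetch the raw HTML and count the visible words; the sketch below assumes the Kreg Tool URL cited in the issue, and the 200-word threshold is an arbitrary illustrative cutoff rather than a standard.

```python
# Rough spot-check: does a case-study URL return substantive body text to a plain
# HTTP client (no JavaScript)? Very thin results suggest gating or client-side rendering.
import re
import requests

URL = ("https://www.15five.com/resources/case-studies/"
       "how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20")

html = requests.get(URL, timeout=30).text
# Drop script/style blocks and all tags, then count the words that remain.
text = re.sub(r"(?s)<(script|style).*?</\1>", " ", html)
text = re.sub(r"(?s)<[^>]+>", " ", text)
words = len(text.split())

print(f"{words} words of visible text")
if words < 200:   # illustrative threshold, not a standard
    print("Likely gated or client-side rendered: little body content for AI crawlers.")
```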

#10 · Competitor Comparison URLs Redirect to Generic Brand Page With No Competitor Content · Medium · 1–2 weeks

Issue: Three URLs that appear in search engine results as dedicated competitor comparison pages — /15five-vs-lattice, /15five-vs-cultureamp/, and /15five-vs-leapsome/ — all redirect to the generic /why-15five page. The /why-15five page contains no competitor-specific content: it does not mention Lattice, Culture Amp, or Leapsome by name, and contains only generic brand messaging ('The new ERA OF HR'). Fetching each comparison URL confirmed the canonical page is /why-15five and the full page content is identical across all three. Web search results still index these URLs with competitor-specific titles (e.g., '15Five vs Culture Amp | Comparing Employee Management...'), meaning buyers and AI crawlers who follow these URLs from search results land on a page that does not address the query that brought them there.

Fix: Either (a) create dedicated comparison landing pages at the existing URLs with substantive head-to-head content for each competitor, or (b) if comparison pages are not being maintained, implement 301 redirects from these URLs to the blog posts that do contain comparison content (e.g., /15five-vs-lattice → /blog/heres-why-people-choose-15five-over-lattice). Option (a) is strongly preferred: dedicated comparison pages with feature matrices, use-case differentiation, and migration guides are among the highest-ROI content types for AI citation in competitive evaluation queries. At minimum, create comparison pages for the top 3 primary competitors: Lattice, Culture Amp, and Betterworks.
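
A quick verification sketch for this fix, assuming the lowercase URL paths cited in the finding above; the competitor-keyword check is a simple heuristic, not a full content audit.

```python
# Check that each comparison URL resolves to competitor-specific content instead of
# redirecting to the generic /why-15five page.
import requests

checks = {
    "https://www.15five.com/15five-vs-lattice": "lattice",
    "https://www.15five.com/15five-vs-cultureamp/": "culture amp",
    "https://www.15five.com/15five-vs-leapsome/": "leapsome",
}

for url, competitor in checks.items():
    resp = requests.get(url, allow_redirects=True, timeout=30)
    mentions_competitor = competitor in resp.text.lower()
    print(f"{url}\n  -> {resp.url}  mentions '{competitor}': {mentions_competitor}")
    if "/why-15five" in resp.url or not mentions_competitor:
        print("  STILL GENERIC: no competitor-specific comparison content served")
```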

#11 · No Date Signals on Any Product or Solution Page · Medium · 1–3 days

Issue: All product pages, solution pages, the why-15five page, and the pricing page have no visible last-updated dates and are absent from the sitemap — meaning no lastmod signal is available from any source. Freshness could not be determined for 17 of 30 pages analyzed. While blog posts in the sitemap carry lastmod timestamps (November-December 2025), these appear to be bulk-refreshed timestamps rather than per-post content modification dates: several blog posts show sitemap lastmod of 2025-11-25 or 2025-11-26 regardless of their original publication date (some were written in 2017-2019).

Fix: Add accurate lastmod timestamps to all commercial pages in the sitemap (requires first adding them to the sitemap per finding sitemap_missing_commercial_pages). Ensure sitemap lastmod values reflect actual content modification dates, not bulk publish dates. Consider adding visible 'Last updated: [date]' metadata to product and solution pages. Audit the bulk sitemap refresh — verify that pages with Nov 2025 lastmod were actually updated in November 2025 vs. a CMS auto-update.
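
A sketch of the bulk-refresh audit suggested above, using only the standard library: if most lastmod values in the sitemap share one or two dates, they are probably CMS refresh timestamps rather than real content-modification dates.

```python
# Tally <lastmod> dates in the sitemap; a heavy concentration on one or two dates
# is the bulk-refresh signature described above.
from collections import Counter
from urllib.request import urlopen
import xml.etree.ElementTree as ET

SITEMAP = "https://www.15five.com/sitemap.xml"
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

with urlopen(SITEMAP, timeout=30) as resp:
    tree = ET.parse(resp)

dates = Counter(
    el.text[:10]                                   # keep only the YYYY-MM-DD part
    for el in tree.getroot().findall(".//sm:lastmod", NS)
    if el.text
)

for date, count in dates.most_common():
    print(f"{date}: {count} URLs")
```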

#12 · XML Sitemap Contains Only 19 Blog URLs — All Commercial Pages Absent · Medium · 1–3 days

Issue: The sitemap at https://www.15five.com/sitemap.xml contains exactly 19 URLs, all of which are blog posts or resource thank-you pages with lastmod timestamps of November-December 2025. Zero product pages, zero solution pages, zero pricing pages, zero integration pages, zero Comparison-redirect pages, and zero feature subpages appear in the sitemap. No sitemap index file exists (sitemap_index.xml and hs-sitemap.xml both return 404). Core commercial pages such as /products, /products/perform, /products/engage, /products/kona, /products/perform/compensation, /pricing, /integrations, /solutions/reduce-regrettable-turnover, and /why-15five are all entirely absent from any known sitemap.

Fix: Expand the sitemap to include all commercial pages — product pages, feature subpages, solution pages, pricing, integrations, Comparison-redirect pages, and customer stories. Add accurate lastmod timestamps. If HubSpot CMS is in use (suggested by robots.txt Disallow patterns for /_hcms/ paths), verify sitemap page-type inclusion settings in Settings > Website > Pages > Sitemap and ensure all page types are enabled. Submit the updated sitemap to Google Search Console and Bing Webmaster Tools.
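
A minimal sketch of what the expanded sitemap entries could look like when generated programmatically; the page list is drawn from the pages named in the issue above, and the lastmod dates are placeholders that a real implementation would pull from the CMS.

```python
# Generate sitemap <url> entries for commercial pages with per-page lastmod dates.
import xml.etree.ElementTree as ET

pages = {  # placeholder lastmod values; use actual content-modification dates
    "https://www.15five.com/pricing": "2026-02-01",
    "https://www.15five.com/products/perform": "2026-01-15",
    "https://www.15five.com/products/engage": "2026-01-15",
    "https://www.15five.com/integrations": "2025-12-10",
}

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
urlset = ET.Element("urlset", xmlns=NS)
for loc, lastmod in pages.items():
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = loc
    ET.SubElement(url, "lastmod").text = lastmod   # real modification date, not a bulk refresh

ET.indent(urlset)                                  # Python 3.9+
ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
print(ET.tostring(urlset, encoding="unicode"))
```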

Verification Checks

Items requiring manual review before determining if action is needed.

Priority · Finding · Impact · Timeline
#15 · Meta Descriptions and OG Tags: Manual Verification Required · Low · 1–3 days

Issue: Meta descriptions and Open Graph tags (og:description, og:image, og:title) are not accessible via rendered markdown analysis. None of the 30 pages analyzed had visible meta description or OG tag content in the fetched output.

Fix: Audit meta descriptions and OG tags using Screaming Frog, Ahrefs site audit, or browser view-source. Confirm every commercial page has a unique meta description (140-160 characters) with a specific capability claim. For the /why-15five page (which currently serves as the redirect destination for three competitor Comparison URLs), ensure the meta description explicitly positions 15Five against named competitors to preserve some competitive signal.
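
A sketch of the raw-HTML audit described above; the two URLs are examples taken from this report, and the check covers only the meta description and core Open Graph properties.

```python
# Pull <meta name="description"> and Open Graph tags from raw HTML for a page list.
import requests
from html.parser import HTMLParser

class MetaCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}
    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        key = a.get("name") or a.get("property")
        if key in ("description", "og:title", "og:description", "og:image"):
            self.meta[key] = a.get("content") or ""

for url in ["https://www.15five.com/why-15five", "https://www.15five.com/pricing"]:
    parser = MetaCollector()
    parser.feed(requests.get(url, timeout=30).text)
    desc = parser.meta.get("description", "")
    print(f"{url}\n  description ({len(desc)} chars): {desc[:80]}")
    for og in ("og:title", "og:description", "og:image"):
        print(f"  {og}: {parser.meta.get(og, 'MISSING')}")
```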

#16 · Schema Markup: Manual Verification Required · Low · 1–3 days

Issue: This analysis was conducted using rendered page content (web_fetch returns markdown, not raw HTML), so JSON-LD schema blocks, meta tags, and OG tags are not visible in any of the 30 pages analyzed. Whether product pages carry Product or SoftwareApplication schema, blog posts carry Article schema with datePublished/dateModified, pricing pages carry Offer schema, or FAQ sections carry FAQPage schema cannot be determined from this analysis method.

Fix: Audit schema implementation using Google's Rich Results Test (https://search.google.com/test/rich-results) or a Screaming Frog structured data crawl. Priority items: (1) blog posts — verify Article/BlogPosting schema with author, datePublished, dateModified; (2) pricing page — verify Offer/PriceSpecification schema; (3) product FAQ sections — add FAQPage schema; (4) Comparison pages — add WebPage schema with about properties referencing competitor entities once Comparison content is restored.
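
A lightweight structured-data check along the same lines is sketched below; the URLs and the expected-type list are illustrative assumptions, and a dedicated tool such as the Rich Results Test remains the authoritative check.

```python
# Extract JSON-LD blocks from raw HTML and report their @type values.
import json
import re
import requests

EXPECTED = {"Article", "BlogPosting", "FAQPage", "Offer", "SoftwareApplication"}

for url in ["https://www.15five.com/pricing", "https://www.15five.com/products/perform"]:
    html = requests.get(url, timeout=30).text
    blocks = re.findall(
        r'<script[^>]+type="application/ld\+json"[^>]*>(.*?)</script>', html, re.S)
    types = set()
    for block in blocks:
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue
        items = data if isinstance(data, list) else [data]
        for item in items:
            if not isinstance(item, dict):
                continue
            t = item.get("@type")
            if t:
                types.update(t if isinstance(t, list) else [t])
    print(f"{url}\n  JSON-LD types found: {types or 'none'}")
    print(f"  expected but missing: {EXPECTED - types}")
```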


Layer 2 Existing Content Optimization

Existing pages that need restructuring or deepening. Owner: Content Team. Timeline: Weeks.

CFO/Finance Buyer ROI Content: Partial Coverage Across 6 Pricing-Adjacent Queries

Priority 2
Currently: partial. The /pricing page provides tier information but contains no cost-of-inaction analysis, no industry benchmark replacement cost data, and no time-to-ROI estimates for mid-market companies. CFO-persona queries require financial framing — cost per employee at 300-person scale, payback period estimates, comparison against the cost of doing nothing with spreadsheets — that is completely absent from the current pricing page and all solution pages. Competitor pricing comparison queries (15f_094: Lattice vs. Betterworks pricing; 15f_111: Betterworks reviews; 15f_113: Workleap limitations) represent missed opportunities where 15Five should insert itself as the preferred alternative.

• /pricing contains tier descriptions but no quantitative ROI framing — the CFO cannot estimate payback period, cost-per-employee TCO, or a cost-of-inaction comparison from the current page
• Competitor pricing comparison queries (15f_094, 15f_111, 15f_113) are shortlist-stage queries where 15Five is not mentioned — buyers comparing Lattice vs. Betterworks pricing never consider 15Five because no positioning content targets those comparison patterns
• The 'average cost of replacing an employee' business case framing (15f_134) is not connected to 15Five's customer outcomes in any existing page

Queries affected: 15f_009, 15f_039, 15f_094, 15f_111, 15f_113, 15f_134

Competitor Comparison Pages: Three High-Intent URLs Serve Generic Brand Content With No Competitor Specifics

Priority 3
Currently: covered. The three comparison URLs (/15five-vs-lattice, /15five-vs-cultureamp/, /15five-vs-leapsome/) are indexed in search results with competitor-specific titles but redirect to /why-15five — a page containing only generic brand messaging with no competitor references (L1 finding: comparison_urls_redirect_to_generic_page). The only substantive Lattice comparison content is a December 2025 blog post (/blog/heres-why-people-choose-15five-over-lattice) that is not linked from the comparison URL. Culture Amp and Leapsome have no substantive comparison content anywhere on the site. This is simultaneously an L1 technical issue (redirect) and an L2 content issue (even if the redirect were fixed, there is no competitor-specific content to serve).

• All three comparison URLs redirect to /why-15five, which contains zero competitor-specific content — buyers who arrive via 'vs Lattice' search intent immediately encounter a page that does not address their query, confirming what AI assistants already know: 15Five has no comparison content at these URLs
• Lattice wins 7 comparison/shortlisting queries (15f_049, 15f_057, 15f_070, 15f_075, 15f_102, 15f_127, 15f_135) because Lattice has substantive feature documentation while 15Five has only one blog post
• Culture Amp wins 5 engagement-adjacent comparison queries (15f_080, 15f_088, 15f_095, 15f_100) because no 15Five content directly positions against Culture Amp's feature set in a head-to-head format

Queries affected: 15f_049, 15f_057, 15f_070, 15f_075, 15f_080, 15f_088, 15f_095, 15f_100, 15f_102, 15f_103, 15f_127, 15f_135

People Analytics & AMAYA (AI Manager Assist) Platform Documentation: Expand Thin Coverage Across 14 Buyer Queries

Priority 4
Currently: thin. The matching pages address retention outcomes and manager effectiveness narratively but do not surface AMAYA (AI Manager Assist) by name, do not describe natural language query (NLQ) functionality, do not address flight risk prediction specifically, and do not provide the board-ready metric examples or build-vs-buy decision frameworks that 14 of these queries explicitly request. /solutions/reduce-regrettable-turnover is a solution page with no analytics capability specifics — it cannot satisfy queries about 'what analytics capabilities should I evaluate' or 'which platforms support AI-powered flight risk detection.'

• AMAYA (AI Manager Assist) is not named or described on either solution page, making it invisible to AI assistants indexing 15Five for 'AI people analytics' or 'natural language workforce queries'
• No page currently addresses the 'build vs. buy for people analytics' question — a requirements-stage query that routes HR technology directors to Tableau/Looker framing rather than 15Five
• Board-level reporting capabilities are absent from all existing pages; the 'What HR data should we report to the board?' query (15f_023) has no answerable 15Five content

Queries affected: 15f_002, 15f_008, 15f_020, 15f_023, 15f_029, 15f_035, 15f_047, 15f_056, 15f_079, 15f_091, 15f_101, 15f_122, 15f_130, 15f_145

Talent Calibration & 9-Box Documentation: Thin Coverage Across 8 Buyer Queries

Priority 5
Currently: thin. The /products/perform/assessments page covers performance reviews and assessment cycles but does not specifically document the 9-box talent matrix, calibration session workflow, bias detection capabilities, manager override audit trails, or calibration-to-flight-risk linking. Technical requirements for calibration tools (15f_036 and 15f_148 query this explicitly) are not documented. Lattice's calibration feature documentation depth appears to exceed 15Five's, explaining its win in 15f_085.

• No existing page documents 9-box calibration workflow mechanics — how calibration sessions are structured, who participates, how ratings are normalized across managers — making AI assistants unable to compare 15Five's calibration against Lattice's
• Technical evaluation requirements (flexible rating scales, bias detection, manager override audit trails, integration with review workflows) requested in 15f_036 and 15f_148 are not answered by any existing 15Five page
• The business case for talent calibration at 300-person scale (15f_018: 'is it worth the administrative effort?') is not addressed with evidence from 15Five customers

Queries affected: 15f_011, 15f_018, 15f_036, 15f_085, 15f_109, 15f_119, 15f_131, 15f_148

Customer Evidence Accessibility: Gated and Blog-Format Case Studies Limit AI Citation of 15Five's Strongest ROI Proof

Priority 6
Currently: covered. 15Five has strong customer outcome data — Auror 94% retention, Pendo 21% turnover reduction, Kreg Tool 20%+ turnover decrease — but this data is inaccessible or insufficiently structured for AI citation. The Kreg Tool case study returns only navigation/footer markup (gated or CSR-rendered, as documented in L1 finding case_study_gating_or_csr). The Pendo and Auror stories are accessible blog posts but present outcomes in narrative prose rather than the discrete, AI-extractable metric format that AI assistants prefer when generating shortlisting recommendations.

• The Kreg Tool case study (20%+ turnover reduction) is inaccessible to AI crawlers due to gating or CSR rendering — the highest-specificity turnover ROI metric 15Five has is invisible to AI systems
• Lattice wins 15f_050 ('alternatives to our current performance management tool focused on reducing regrettable turnover') and 15f_127 ('case studies of companies that improved manager effectiveness') despite 15Five having directly relevant customer evidence — blog prose format is insufficient for AI citation in shortlisting contexts
• Customer outcome statistics appear embedded in narrative paragraphs rather than in extractable callout formats — AI assistants cite discrete metric claims ('Company X reduced turnover by 21%'), not buried narrative prose

Queries affected: 15f_026, 15f_050, 15f_127, 15f_137, 15f_139, 15f_143

Engagement Survey Positioning: Culture Amp and Workleap Win 11 Shortlisting and Comparison Queries Despite Covered 15Five Pages

Priority 7
Currently: covered. The /products/engage page covers the engagement survey product but does not differentiate 15Five's engagement-to-action workflow against Culture Amp's analytics-heavy model. AI assistants default to Culture Amp for 'engagement analytics' queries because Culture Amp's content explicitly claims analytics leadership. 15Five's action-planning differentiator ('from survey results to manager action') is not documented in a format AI assistants can cite. Workleap wins 'simplicity for smaller HR teams' queries because 15Five's pages don't address the depth-vs-simplicity trade-off positioning.

• Culture Amp wins 5 queries (15f_022, 15f_044, 15f_058, 15f_077, 15f_099) by being positioned as the analytics-first engagement platform; 15Five's pages don't differentiate on actionability vs. measurement — Culture Amp measures more, 15Five acts faster, but this is not documented
• Workleap wins 3 simplicity-focused queries (15f_087, 15f_092) — 15Five's pages don't address the simplicity-vs-depth positioning that frames 15Five as the right choice for mid-market companies that need depth without enterprise complexity
• The 'surveys generate data but HR teams lack the time or tools to act on it' pain point is not directly answered by any existing page with a workflow walkthrough of how 15Five solves this specific problem

Queries affected: 15f_022, 15f_041, 15f_044, 15f_052, 15f_058, 15f_066, 15f_077, 15f_087, 15f_092, 15f_099, 15f_114, 15f_121

Kona AI Manager Coaching: Covered Pages Losing Comparison Queries to Leapsome and Culture Amp

Priority 8
Currently: covered. The /products/kona page covers Kona's AI coaching capability but does not specifically differentiate against Leapsome's manager development workflows, does not address the 'dedicated AI coaching' category that buyers are increasingly searching for, and does not provide the payback period or ROI evidence that validation-stage buyers need to justify a manager coaching investment. AI assistants default to Leapsome for manager coaching comparison queries because Leapsome's documentation is more detailed at the feature-differentiation level.

• Leapsome wins 3 manager coaching comparison queries (15f_072, 15f_089, 15f_097) by being positioned as the dedicated manager development platform; 15Five's Kona page does not differentiate against Leapsome's specific features at the head-to-head level
• Culture Amp wins 15f_067 ('top tools for developing managers who've never had formal training') — the first-time manager development use case that Kona was built for is not documented with scenario-specific examples that AI can cite
• The payback period question for manager coaching investment (15f_138: 'what's the typical payback period?') is not answered with 15Five Kona customer data anywhere on the site

Queries affected: 15f_025, 15f_046, 15f_067, 15f_072, 15f_089, 15f_097, 15f_107, 15f_110, 15f_138, 15f_144

OKR & Goal Cascading Content Depth: Thin Coverage Across 9 Buyer Queries

Priority 13
Currently: thin. The /products/perform page is a broad product overview without dedicated goal-cascading mechanics, OKR failure mode analysis, or mid-market scalability specifics. /products/perform/okrs-and-goals/ appears to exist but does not surface in AI citations for the specific pain points in this cluster ('goal cascading has never worked,' 'spreadsheet OKRs failed,' 'convince leadership current approach isn't working'). Buyer-language pain point framing is absent from all existing pages.

• No existing page addresses the OKR failure mode question (15f_120: 'Common failure modes when rolling out OKR software') — a validation-stage query that competitors with explicit failure analysis content win by default
• The /products/perform/okrs-and-goals/ page lacks mid-market scalability framing (15f_062: 'OKR platforms affordable enough for mid-market but robust enough to actually make goals stick')
• Goal cascading mechanics — how 15Five ensures VP-level OKRs reach individual contributors — are not documented in a way AI assistants can extract as capability claims

Queries affected: 15f_012, 15f_021, 15f_037, 15f_062, 15f_065, 15f_096, 15f_120, 15f_136, 15f_147

HRIS Integration Technical Documentation: /integrations Is Too Thin for Deep Evaluation Queries

Priority 14
Currently: covered. The /integrations page lists supported connectors but does not document integration depth: sync frequency, field mapping, bi-directional vs. one-way data flow, SSO/SCIM provisioning, webhook support, or API access per connector. The HR Technology Director evaluation criteria (15f_034: 'HRIS sync, SSO, SCIM provisioning, API access, webhook support') are not answered anywhere on the main /integrations page. Lattice wins by having help documentation that is more detailed and more frequently cited by Perplexity than 15Five's connector list page.

  • The /integrations page lists connectors without documenting technical depth — buyers evaluating SSO, SCIM, bi-directional sync, and webhook support (15f_034) cannot answer their evaluation checklist from the existing page
  • Lattice wins 4 integration queries (15f_054, 15f_055, 15f_078) by help documentation depth; help.Lattice.com is cited 23+ times in integration queries while 15Five's success.15five.com integration articles appear in fewer Comparison citations
  • The 'Culture Amp integration issues with Workday' Validation query (15f_108) offers an insertion opportunity — buyers vetting Culture Amp's integration weaknesses should find 15Five positioned as the alternative with stronger Workday/ADP integration, but no content makes this case

Queries affected: 15f_007, 15f_019, 15f_034, 15f_054, 15f_055, 15f_078, 15f_093, 15f_108, 15f_132

Layer 3 Narrative Intelligence Opportunities

Net new content addressing visibility and positioning gaps. Owner: Content Strategy. Timeline: Months.

NIO #1: CFO Financial Modeling Absence: No 15Five Presence in 3-Year TCO Artifact Query
Gap Type: Invisibility Gap — 15Five is entirely absent from 1 Artifact Creation query (15f_142) in which the CFO persona — a veto-holding decision-maker — explicitly requests a 3-year total cost of ownership model for performance management software. Winner is 'No Vendor Mentioned' on both ChatGPT and Perplexity: no 15Five pricing data, implementation cost breakdowns, or ROI content was cited in any AI-generated response to this query.
Critical

When CFOs build purchase justifications for performance management software, they construct 3-year cost models covering licensing, implementation, training, and change management. 15Five has no AI-readable content at this level of financial specificity, making it invisible at the moment a veto-holding decision-maker formalizes budget commitment. The Artifact Creation buying job signals that financial sign-off is actively being prepared — the closest point in the buying cycle to a signed contract. Critically, the winner is 'No Vendor Mentioned' on both platforms: no competitor has claimed this content territory either. A purpose-built TCO resource would earn Perplexity citations in financial-modeling responses and enter ChatGPT-generated cost summaries precisely when procurement approval is imminent, with zero competition for the position.

Query Cluster
IDs: 15f_142
“Build a TCO model for implementing performance management software at a 300-person company over 3 years — licensing, implementation, training, and change management”
Blueprint
  • On-Domain: Create a dedicated '3-Year Performance Management TCO Guide' page targeting CFO and finance-buyer search intent. Structure it as a complete cost model with: (1) per-seat licensing costs at 300-person scale across tiers, (2) implementation and onboarding fee ranges, (3) training cost per manager cohort, (4) change management effort in hours and dollar equivalents, and (5) an opportunity-cost comparison against the status quo spreadsheet or legacy tool. Use AI-extractable headings and discrete numeric claims throughout (a worked cost-model sketch follows this Blueprint list).
  • On-Domain: Publish a downloadable TCO worksheet (Google Sheets or Excel template) linked from the guide — structured artifacts are highly citable by Perplexity and retrievable by ChatGPT's search plugin. A CFO-formatted PDF output creates a shareable internal budget document that embeds 15Five's cost narrative.
  • On-Domain: Expand /pricing to include a visible 'Cost of Inaction' section with quantified turnover cost benchmarks (average replacement cost 50–200% of annual salary) linked to verified 15Five customer outcomes: Auror 94% retention rate, Pendo 21% turnover reduction, Kreg Tool 20%+ turnover reduction.
  • On-Domain: Add explicit ROI payback-period language to /solutions/reduce-regrettable-turnover and /solutions/improve-manager-effectiveness — formatted as discrete, AI-extractable sentences (e.g., 'Customers report payback within 9–14 months at 300-person scale') rather than embedded in general marketing prose.
  • Off-Domain: Pitch a contributed article to HR Executive or CFO Dive titled 'How to Build a 3-Year Performance Management Business Case Your CFO Will Approve' — third-party publication authority generates the citation depth that AI assistants prefer over on-domain vendor content for financial modeling queries.
  • Off-Domain: Submit the TCO guide for inclusion in G2's ROI data section and request review from Sapient Insights or Josh Bersin Group for coverage in annual HR technology cost-of-ownership buyer guides — analyst citations are the highest-authority source for CFO-facing AI responses.
  • Off-Domain: Develop a mid-market CFO customer reference story explicitly framed around the financial ROI calculation used to justify 15Five — this first-person finance-leader narrative is the exact artifact format AI assistants cite when answering cost-justification queries.
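To make the cost-model structure concrete, here is a minimal sketch of the 3-year TCO arithmetic. Every input (per-seat price, fees, hours, headcounts) is an illustrative placeholder rather than 15Five pricing; the published guide would substitute verified figures.

```python
# Illustrative 3-year TCO sketch for a 300-person rollout.
# All inputs are hypothetical placeholders, not 15Five pricing.

EMPLOYEES = 300
MANAGERS = 40                      # assumed manager count at this scale
PER_SEAT_MONTHLY = 8.00            # placeholder per-seat license price (USD)
IMPLEMENTATION_FEE = 7_500         # one-time onboarding estimate
TRAINING_HOURS_PER_MANAGER = 3     # enablement time per manager
CHANGE_MGMT_HOURS = 120            # HR/ops effort in year one
BLENDED_HOURLY_COST = 65           # loaded internal hourly cost

def three_year_tco() -> dict:
    licensing = PER_SEAT_MONTHLY * 12 * EMPLOYEES * 3
    training = TRAINING_HOURS_PER_MANAGER * MANAGERS * BLENDED_HOURLY_COST
    change_mgmt = CHANGE_MGMT_HOURS * BLENDED_HOURLY_COST
    total = licensing + IMPLEMENTATION_FEE + training + change_mgmt
    return {
        "licensing_3yr": licensing,
        "implementation": IMPLEMENTATION_FEE,
        "training": training,
        "change_management": change_mgmt,
        "total_3yr": total,
        "cost_per_employee_per_year": round(total / EMPLOYEES / 3, 2),
    }

if __name__ == "__main__":
    for line_item, amount in three_year_tco().items():
        print(f"{line_item:<28} ${amount:,.2f}")
```

The published page would present each of these line items as a discrete numeric claim under its own heading, and the downloadable worksheet described above is the same model in spreadsheet form.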
Platform Acuity

ChatGPT (high): ChatGPT responded to 15f_142 with a structured educational TCO framework citing no vendors (winner: No Vendor Mentioned). ChatGPT generates only 25 client citations across 150 responses (2.5% share) — demonstrating it cites vendor content only when it is specifically authoritative for the query. A structured TCO guide with numeric headings and a downloadable template would be retrievable via ChatGPT's search plugin for cost-modeling queries.
Perplexity (high): Perplexity generates 2,343 total citations across 149 responses (15.7 average per response vs. ChatGPT's 6.6) — demonstrating citation-heavy behavior for artifact queries. 15Five already earns 115 Perplexity citations (4.9% share), confirming the domain is indexed and crawlable. The TCO gap is one of content specificity, not technical access.

Unified Priority Ranking

All recommendations across all three layers, ranked by commercial impact × implementation speed.

  • 1

    CFO Financial Modeling Absence: No 15Five Presence in 3-Year TCO Artifact Query

    15Five is entirely absent from 1 Artifact Creation query (15f_142) in which the CFO persona — a veto-holding decision-maker — explicitly requests a 3-year total cost of ownership model for performance management software. Winner is 'No Vendor Mentioned' on both ChatGPT and Perplexity: no 15Five pricing data, implementation cost breakdowns, or ROI content was cited in any AI-generated response to this query.

    New Content · Content · 1 query, personas: Chief Financial Officer
  • 2

    CFO/Finance Buyer ROI Content: Partial Coverage Across 6 Pricing-Adjacent Queries

    /pricing contains tier descriptions but no quantitative ROI framing — the CFO cannot estimate payback period, cost-per-employee TCO, or a cost-of-inaction comparison from the current page

    Content Optimization (Near-Rebuild) · Content · 6 queries, personas: Chief Financial Officer
  • 3

    Competitor Comparison Pages: Three High-Intent URLs Serve Generic Brand Content With No Competitor Specifics

    All three Comparison URLs redirect to /why-15five, which contains zero competitor-specific content — buyers who arrive via 'vs Lattice' search intent immediately encounter a page that does not address their query, confirming what AI assistants already know: 15Five has no Comparison content at these URLs

    Content Optimization (Near-Rebuild) · Content · 12 queries, personas: Chief People Officer, VP of People Operations, Director of HR Technology & People Analytics, Chief Financial Officer, VP of Talent Management
  • 4

    People Analytics & AMAYA (AI Manager Assist) Platform Documentation: Expand Thin Coverage Across 14 Buyer Queries

    AMAYA (AI Manager Assist) is not named or described on either solution page, making it invisible to AI assistants indexing 15Five for 'AI people analytics' or 'natural language workforce queries'

    Content Optimization (Near-Rebuild) · Content · 14 queries, personas: Chief People Officer, Director of HR Technology & People Analytics, Chief Financial Officer, VP of People Operations
  • 5

    Talent Calibration & 9-Box Documentation: Thin Coverage Across 8 Buyer Queries

    No existing page documents 9-box calibration workflow mechanics — how calibration sessions are structured, who participates, how ratings are normalized across managers — making AI assistants unable to compare 15Five's calibration against Lattice's

    Content Optimization (Near-Rebuild) · Content · 8 queries, personas: VP of Talent Management, VP of People Operations, Director of HR Technology & People Analytics
  • 6

    Customer Evidence Accessibility: Gated and Blog-Format Case Studies Limit AI Citation of 15Five's Strongest ROI Proof

    The Kreg Tool case study (20%+ turnover reduction) is inaccessible to AI crawlers due to gating or CSR rendering — the highest-specificity turnover ROI metric 15Five has is invisible to AI systems

    Content Optimization · Content · 6 queries, personas: Chief People Officer, VP of People Operations, VP of Talent Management, Chief Financial Officer
  • 7

    Engagement Survey Positioning: Culture Amp and Workleap Win 11 Shortlisting and Comparison Queries Despite Covered 15Five Pages

    Culture Amp wins 5 queries (15f_022, 15f_044, 15f_058, 15f_077, 15f_099) by being positioned as the analytics-first engagement platform; 15Five's pages don't differentiate on actionability vs. measurement — Culture Amp measures more, 15Five acts faster, but this is not documented

    Content Optimization · Content · 12 queries, personas: Chief People Officer, VP of People Operations, Director of HR Technology & People Analytics, VP of Talent Management
  • 8

    Kona AI Manager Coaching: Covered Pages Losing Comparison Queries to Leapsome and Culture Amp

    Leapsome wins 3 manager coaching Comparison queries (15f_072, 15f_089, 15f_097) by being positioned as the dedicated manager development platform; 15Five's Kona page does not differentiate against Leapsome's specific features at the head-to-head level

    Content Optimization · Content · 10 queries, personas: Chief People Officer, VP of People Operations, VP of Talent Management, Director of HR Technology & People Analytics
  • 9

    Case Study Page Returns Minimal Body Content — Verify Gating or CSR

    The Kreg Tool case study page at /resources/case-studies/how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20 returned almost exclusively navigation and footer markup with negligible body content — only the headline metric ('reduced turnover by over 20%') and a download button were accessible. Related customer stories presented as blog posts (Pendo, Auror) returned full body content normally. The case study format on this URL appears to use a gated download model (PDF behind a form), which renders the page's substantive content inaccessible to AI crawlers.

    Technical Fix · Content · /resources/case-studies/how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20 — other case studies available as blog posts appear accessible
  • 10

    Competitor Comparison URLs Redirect to Generic Brand Page With No Competitor Content

    Three URLs that appear in search engine results as dedicated competitor Comparison pages — /15five-vs-Lattice, /15five-vs-cultureamp/, and /15five-vs-Leapsome/ — all redirect to the generic /why-15five page. The /why-15five page contains no competitor-specific content: it does not mention Lattice, Culture Amp, or Leapsome by name, and contains only generic brand messaging ('The new ERA OF HR'). Fetching each Comparison URL confirmed the canonical page is /why-15five and the full page content is identical across all three. Web search results still index these URLs with competitor-specific titles (e.g., '15Five vs Culture Amp | Comparing Employee Management...'), meaning buyers and AI crawlers who follow these URLs from search results land on a page that does not address the query that brought them there.

    Technical Fix · Content · /15five-vs-Lattice, /15five-vs-cultureamp/, /15five-vs-Leapsome/ — all redirect to /why-15five with no competitor-specific content
  • 11

    No Date Signals on Any Product or Solution Page

    All product pages, solution pages, the why-15five page, and the pricing page have no visible last-updated dates and are absent from the sitemap — meaning no lastmod signal is available from any source. Freshness could not be determined for 17 of 30 pages analyzed. While blog posts in the sitemap carry lastmod timestamps (November-December 2025), these appear to be bulk-refreshed timestamps rather than per-post content modification dates: several blog posts show sitemap lastmod of 2025-11-25 or 2025-11-26 regardless of their original publication date (some were written in 2017-2019).

    Technical Fix · Engineering · 17 of 30 pages analyzed have no freshness signal — all product, solution, integration, and pricing pages
  • 12

    XML Sitemap Contains Only 19 Blog URLs — All Commercial Pages Absent

    The sitemap at https://www.15five.com/sitemap.xml contains exactly 19 URLs, all of which are blog posts or resource thank-you pages with lastmod timestamps of November-December 2025. Zero product pages, zero solution pages, zero pricing pages, zero integration pages, zero Comparison-redirect pages, and zero feature subpages appear in the sitemap. No sitemap index file exists (sitemap_index.xml and hs-sitemap.xml both return 404). Core commercial pages such as /products, /products/perform, /products/engage, /products/kona, /products/perform/compensation, /pricing, /integrations, /solutions/reduce-regrettable-turnover, and /why-15five are all entirely absent from any known sitemap.

    Technical Fix · Engineering · All product, feature, solution, pricing, and integration pages — approximately 15+ high-value commercial URLs absent from sitemap
  • 13

    OKR & Goal Cascading Content Depth: Thin Coverage Across 9 Buyer Queries

    No existing page addresses the OKR failure mode question (15f_120: 'Common failure modes when rolling out OKR software') — a Validation-stage query that competitors with explicit failure analysis content win by default

    Content Optimization (Near-Rebuild) · Content · 9 queries, personas: VP of Talent Management, Director of HR Technology & People Analytics, Chief Financial Officer, VP of People Operations, Chief People Officer
  • 14

    HRIS Integration Technical Documentation: /integrations Is Too Thin for Deep Evaluation Queries

    The /integrations page lists connectors without documenting technical depth — buyers evaluating SSO, SCIM, bi-directional sync, and webhook support (15f_034) cannot answer their evaluation checklist from the existing page

    Content Optimization · Content · 9 queries, personas: Director of HR Technology & People Analytics, VP of People Operations
  • 15

    Meta Descriptions and OG Tags: Manual Verification Required

    Meta descriptions and Open Graph tags (og:description, og:image, og:title) are not accessible via rendered markdown analysis. None of the 30 pages analyzed had visible meta description or OG tag content in the fetched output.

    Technical Fix · Marketing · All 30 pages analyzed — priority: /why-15five, product pages, pricing page
  • 16

    Schema Markup: Manual Verification Required

    This analysis was conducted using rendered page content (web_fetch returns markdown, not raw HTML), so JSON-LD schema blocks, meta tags, and OG tags are not visible in any of the 30 pages analyzed. Whether product pages carry Product or SoftwareApplication schema, blog posts carry Article schema with datePublished/dateModified, pricing pages carry Offer schema, or FAQ sections carry FAQPage schema cannot be determined from this analysis method; a minimal raw-HTML verification sketch follows this list.

    Technical Fix · Engineering · All 30 pages analyzed — schema markup cannot be assessed via rendered markdown
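Because rendered-markdown analysis cannot see meta tags or JSON-LD (items 15 and 16 above), the quickest verification is a raw-HTML check. The sketch below is one way to do this with the Python standard library; the URL list, User-Agent string, and tag patterns are illustrative assumptions, and a crawl or browser devtools pass would serve equally well.

```python
# Minimal raw-HTML check for meta descriptions, OG tags, and JSON-LD blocks.
# Standard library only; URLs below are example pages from this audit.
import re
import urllib.request

PAGES = [
    "https://www.15five.com/why-15five/",
    "https://www.15five.com/pricing/",
    "https://www.15five.com/products/perform/",
]

def audit_page(url: str) -> dict:
    req = urllib.request.Request(url, headers={"User-Agent": "visibility-audit/0.1"})
    html = urllib.request.urlopen(req, timeout=15).read().decode("utf-8", errors="replace")
    return {
        "url": url,
        "meta_description": bool(re.search(r'<meta[^>]+name=["\']description["\']', html, re.I)),
        "og_title": bool(re.search(r'property=["\']og:title["\']', html, re.I)),
        "og_description": bool(re.search(r'property=["\']og:description["\']', html, re.I)),
        "og_image": bool(re.search(r'property=["\']og:image["\']', html, re.I)),
        "json_ld_blocks": len(re.findall(r'application/ld\+json', html, re.I)),
    }

if __name__ == "__main__":
    for page in PAGES:
        print(audit_page(page))
```

If OG tags and JSON-LD are also absent from the raw HTML, these two items become server-rendering work for Engineering; if they are present, the items only need the manual verification flagged here.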

Workstream Mapping

All three workstreams can start this week.

Engineering / DevOps

Layer 1 — Technical Fixes
Timeline: Days to 2 weeks
  • XML Sitemap Contains Only 19 Blog URLs — All Commercial…
  • Competitor Comparison URLs Redirect to Generic Brand Page…
  • No Date Signals on Any Product or Solution Page
  • Case Study Page Returns Minimal Body Content — Verify…

Content Team

Layer 2 — Content Optimization
Timeline: 2–6 weeks
  • People Analytics & AMAYA (AI Manager Assist) Platform Documentation: Expand…
  • OKR & Goal Cascading Content Depth: Thin Coverage Across 9…
  • Talent Calibration & 9-Box Documentation: Thin Coverage…
  • CFO/Finance Buyer ROI Content: Partial Coverage Across 6…

Content Strategy

Layer 3 — NIOs + Off-Domain
Timeline: 1–3 months
  • Create a dedicated '3-Year Performance Management TCO…

[Synthesis] Execution is sequenced by reversibility and expected citation impact. L1 technical fixes execute first: the sitemap expansion (sitemap_missing_commercial_pages, 1–3 days) immediately increases AI crawler discovery priority for the 15+ commercial pages currently excluded; the Comparison URL restoration (comparison_urls_redirect_to_generic_page, 1–2 weeks) addresses the highest-intent buying-stage content gap — these URLs receive competitor-specific search traffic yet serve no competitive content; and the case study ungating (case_study_gating_or_csr, 1–3 days) unlocks the Kreg Tool 20%+ turnover reduction evidence for AI citation. L2 improvements proceed in commercial-weight order: l2_001 (AMAYA (AI Manager Assist)/people analytics) and l2_003 (talent calibration) address veto-holder-adjacent evaluator queries with the highest-severity thin coverage; l2_005 (competitor Comparison pages) is the content complement to the comparison_urls_redirect L1 fix — the technical redirect fix enables the URL, and the content investment gives it competitive substance; l2_004 (CFO ROI content) and nio_001 (CFO TCO model, the sole L3 gap) address the same veto-holder persona in one content investment phase; l2_006 (engagement survey positioning), l2_007 (Kona differentiation), and l2_009 (customer evidence structuring) carry high commercial weight but lower urgency than the thin-coverage items; l2_002 (OKR depth) and l2_008 (HRIS integration documentation) are medium priority.
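As one way to execute the sitemap expansion and the missing-lastmod fix in the same pass, a minimal sketch follows. The URL list reuses commercial pages named in this audit, but the lastmod dates are placeholders that should come from real per-page modification timestamps in the CMS, and the generated file would be registered in a sitemap index alongside the existing blog sitemap.

```python
# Minimal supplemental sitemap generator for commercial pages.
# lastmod values are placeholders; pull real modification dates from the CMS.
from xml.etree import ElementTree as ET

COMMERCIAL_PAGES = {
    "https://www.15five.com/products/perform/": "2026-02-01",
    "https://www.15five.com/products/engage/": "2026-02-01",
    "https://www.15five.com/products/kona/": "2026-02-01",
    "https://www.15five.com/pricing/": "2026-02-15",
    "https://www.15five.com/integrations/": "2026-01-20",
    "https://www.15five.com/why-15five/": "2026-02-10",
}

def build_sitemap(pages: dict) -> bytes:
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for loc, lastmod in pages.items():
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        ET.SubElement(url, "lastmod").text = lastmod
    return ET.tostring(urlset, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    with open("sitemap-commercial.xml", "wb") as f:
        f.write(build_sitemap(COMMERCIAL_PAGES))
```

The lastmod fields only help if they track genuine content changes; bulk-refreshed dates like those observed on the blog sitemap dilute the freshness signal rather than strengthen it.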

Methodology
Audit Methodology

Query Construction

150 queries constructed from a persona × buying job × feature focus × pain point matrix, spanning the full buying journey
Every query carries four metadata fields assigned at creation time

Personas

Chief People Officer · Decision Maker
VP of People Operations · Evaluator
Director of HR Technology & People Analytics · Evaluator
Chief Financial Officer · Decision Maker
VP of Talent Management · Evaluator

Buying Jobs Framework

8 non-linear buying jobs: Artifact Creation, Comparison, Consensus Creation, Problem Identification, Requirements Building, Shortlisting, Solution Exploration, Validation
High-intent jobs (Shortlisting + Comparison + Validation): 55% of queries (83 of 150)

Competitive Set

Primary: Lattice, Culture Amp, Betterworks, Leapsome, Workleap
Secondary: Quantum Workplace, Engagedly, PerformYard, Reflektive
Surprise: BambooHR — flagged for review

Platforms & Scoring

Platforms: ChatGPT + Perplexity
Visibility: Binary — does the client appear in the response?
Win rate: Of visible queries, is the client the primary recommendation?

Cross-Platform Counting (Union Method)

When a query is run on multiple platforms, union logic is applied: a query counts as “visible” if the client appears on any platform, not each platform separately.
Winner resolution: When platforms disagree on the winner, majority vote is used. Vendor names are preferred over meta-values (e.g. “no clear winner”). True ties resolve to “no clear winner.”
Share of Voice: Each entity is counted once per query across platforms (union dedup), preventing double-counting when both platforms mention the same company.
This approach ensures headline metrics reflect real buyer-query outcomes rather than inflated per-platform counts.
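
A minimal sketch of this union counting and winner resolution is below; the record shape (query_id, platform, client_visible, winner) is an assumed structure for the audit's raw results, not an existing schema.

```python
# Union-method visibility and majority-vote winner resolution (sketch).
# Each record is one platform response to one query; field names are assumed.
from collections import Counter

META_VALUES = {"no clear winner", "no vendor mentioned"}

def query_visible(records: list[dict]) -> bool:
    # Union logic: visible if the client appears on any platform for this query.
    return any(r["client_visible"] for r in records)

def resolve_winner(records: list[dict]) -> str:
    # Majority vote across platforms; vendor names beat meta-values; true ties fall back.
    winners = [r["winner"] for r in records if r.get("winner")]
    vendor_votes = [w for w in winners if w.lower() not in META_VALUES]
    counts = Counter(vendor_votes or winners)
    if not counts:
        return "no clear winner"
    (top, top_n), *rest = counts.most_common()
    if rest and rest[0][1] == top_n:
        return "no clear winner"   # true tie
    return top

# Example: one query run on both platforms.
records = [
    {"query_id": "15f_142", "platform": "chatgpt", "client_visible": False, "winner": "No Vendor Mentioned"},
    {"query_id": "15f_142", "platform": "perplexity", "client_visible": False, "winner": "No Vendor Mentioned"},
]
print(query_visible(records), resolve_winner(records))
```

Run on the 15f_142 example, this returns not-visible and 'No Vendor Mentioned', matching the outcome reported in NIO #1.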

Terminology

Mentions: Query-level visibility count. A company receives one mention per query where it appears in any platform response (union-deduped). This is the numerator for Share of Voice.
Unique Pages Cited: Count of distinct client page URLs cited across all platform responses, after URL normalization (stripping tracking parameters). The footer total in the Citation section uses this measure.
Citation Instances (Top Cited Domains): Raw count of citation occurrences per domain across all responses. A single domain can accumulate multiple citation instances from different queries and platforms. The Top Cited Domains table uses this measure.
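
To show how the three measures relate, here is a sketch of the counting logic, including the URL normalization behind Unique Pages Cited; field names and the tracking-parameter list are assumptions, not the audit's actual implementation.

```python
# Sketch: Mentions, Unique Pages Cited, and Citation Instances from raw citation rows.
# Row fields (query_id, domain, url, mentions_client) are assumed names.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode
from collections import Counter

TRACKING_PARAMS = ("utm_", "gclid", "fbclid")  # assumed normalization rules

def normalize_url(url: str) -> str:
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if not k.startswith(TRACKING_PARAMS)]
    return urlunsplit((parts.scheme, parts.netloc.lower(), parts.path.rstrip("/"),
                       urlencode(kept), ""))

def summarize(rows: list[dict], client_domain: str = "15five.com") -> dict:
    # Mentions: one per query where the client appears on any platform (union-deduped).
    mentions = len({r["query_id"] for r in rows if r["mentions_client"]})
    # Unique Pages Cited: distinct client URLs after stripping tracking parameters.
    client_pages = {normalize_url(r["url"]) for r in rows if client_domain in r["domain"]}
    # Citation Instances: raw occurrence count per domain, no deduplication.
    citation_instances = Counter(r["domain"] for r in rows)
    return {"mentions": mentions,
            "unique_pages_cited": len(client_pages),
            "citation_instances": dict(citation_instances)}
```

The key distinction to preserve is that Mentions dedupes at the query level, Unique Pages Cited dedupes at the normalized-URL level, and Citation Instances deliberately does not dedupe at all.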