Engagement Foundation Review

15Five Audit Foundation

Before we run the audit, we need to make sure we're asking the right questions about the right competitors to the right buyers. This document presents what we've learned about 15Five's market — your job is to tell us what we got right, what we got wrong, and what we missed.

Prepared February 2026
www.15five.com
Continuous Performance Management & Employee Engagement
01 — Foundation

How This Works

Read this document before your validation call. Flag anything that's wrong, missing, or mischaracterized.

WHAT THIS IS This document presents the knowledge model driving your audit — the competitive set, buyer personas, feature taxonomy, and pain point map that will generate 150–200 queries across ChatGPT and Perplexity. The audit tests how prominently 15Five surfaces when buyers research the continuous performance management and employee engagement category through AI assistants. Everything in this document is a hypothesis until you validate it.

WHAT TO DO For each section, confirm or correct: Are the right people in the persona set? Are the competitors listed the ones you actually lose deals to? Are the feature strength ratings accurate relative to what you hear in competitive calls? Are the pain points phrased the way your buyers describe them — not the way your marketing describes them? Your answers directly change which queries get run and how results are interpreted. The validation call is where this becomes a 15Five-specific audit, not a generic performance management benchmark.

CONFIDENCE BADGES Every item carries a confidence badge: High = sourced directly from G2 reviews, product pages, or category listings with corroboration; Medium = partially sourced or inferred — needs your confirmation; Low = placeholder requiring replacement before query generation. Medium-confidence items are the ones most worth scrutinizing on the call.

02 — Profile

Company Profile

Foundational data about 15Five used to configure query targeting, entity disambiguation, and category context.

15Five

Company name 15Five High
Domain www.15five.com High
Name variants 15 Five · FifteenFive · 15Five Inc · 15Five.com High
Category Continuous performance management & employee engagement High
Segment Mid-market High
Key products 15Five Perform · 15Five Engage · 15Five Compensation · Kona AI · AMAYA High
Positioning (from site) AI-powered platform that develops high-performing employees, turns managers into leaders, and transforms HR into a strategic function

VALIDATION QUESTION 15Five ships five named products under one roof: Perform, Engage, Compensation, Kona AI, and AMAYA. The platform spans two distinct buying conversations — performance management (reviews, goals, calibration) and employee engagement (surveys, retention analytics) — plus two AI products that are relatively recent additions. In a typical 15Five deal, is this a unified platform sale where one buyer evaluates everything together, or do different stakeholders own the performance vs. engagement evaluation? If buyers in your deals separate these conversations — for example, HR Ops owns the Perform evaluation while a People Analytics function evaluates AMAYA independently — the query set needs separate clusters for each conversation, and we may need to add personas to cover the engagement analytics side of that divide. The answer also affects whether Kona AI and AMAYA get their own query clusters or are bundled under the core platform.

03 — Personas

Buyer Personas

These five roles represent the people who evaluate, champion, and approve a continuous performance management platform purchase. Each persona's distinct search vocabulary drives a separate query cluster in Step 4 — which is why getting the right roles with the right influence levels matters more than having a complete list.

CRITICAL REVIEW AREA Personas are the highest-leverage input in the audit. A role that doesn't appear in your actual deal cycles generates queries that dilute coverage of the roles that do. A missing evaluator creates a blind spot in exactly the searches buyers run during real evaluations. Please flag roles that don't appear in deals, roles that significantly overlap, and any key evaluators we've missed.

DATA SOURCING NOTE Persona names are illustrative. The role, department, seniority, influence level, veto power, and technical level fields are sourced from G2 review titles and case study attribution (review_mining) or inferred from category patterns (llm_inference) — the source is noted on each card. Primary buying jobs and query focus areas are synthesized from role context; they reflect what this type of buyer typically searches for, not confirmed behavioral data from 15Five's specific deal history. Treat synthesized fields as hypotheses.

Dana Osei
Chief People Officer
Decision-maker High
The executive sponsor of HR technology investment. At mid-market companies, the CPO owns the people strategy and controls the HR tech budget. They evaluate platforms on whether the solution will demonstrably reduce regrettable turnover, improve manager effectiveness, and generate the data to make the case for people programs to the CEO and board.
Veto power: Yes — controls HR budget and signs off on vendor selection
Technical level: Low — evaluates on outcomes and ease of rollout, not on integration architecture or API capability
Primary buying jobs: Define the people strategy problem requiring a platform; align the HR tech investment to business outcomes; champion vendor selection with the CEO and CFO; establish success metrics for the board
Query focus areas: Best performance management platforms for HR leaders, how to reduce regrettable turnover, employee engagement ROI, 15Five vs. alternatives, continuous feedback software for mid-market
Source: G2 review titles and case study attribution (review_mining)

At mid-market companies (200–1,000 employees), does the CPO/CHRO typically control HR tech budget directly, or does budget authority sit with Finance and require a separate CFO approval step? We've classified this role as Decision-maker based on title and G2 review evidence, but if the CPO primarily builds the business case and the CFO independently holds the budget gate, the query strategy should split: more CPO-level queries around outcomes and strategy, and a separate CFO query cluster around ROI proof and cost justification. If the CPO controls budget end-to-end and the CFO only reviews the contract, we can weight the CPO cluster much more heavily. Which pattern matches your actual deals?

Maya Reyes
VP of People Operations
Evaluator High
The primary day-to-day evaluator and implementation owner. This role translates the CPO's strategic goals into a vendor shortlist — conducting demos, running pilots, assessing integration requirements, and managing the internal rollout. In most mid-market companies, VP People Ops is the person who lives in the platform after go-live and owns the ongoing relationship with the vendor.
Veto power: No — recommends vendor to CPO; does not hold final budget approval
Technical level: Medium — understands HRIS integrations and workflow configuration, but defers schema and API work to IT or HR Technology
Primary buying jobs: Build and manage vendor shortlist; evaluate feature completeness and UX; assess integration fit with existing HRIS stack; run pilot and gather manager feedback; synthesize recommendation for the CPO
Query focus areas: 15Five vs. Lattice comparison, best performance management software for 500 employees, continuous feedback software reviews, HRIS integration for performance tools, check-in software for managers
Source: G2 review titles and case study attribution (review_mining)

In your deals, does VP People Ops drive initial vendor discovery — actively searching and building the longlist — or does the CPO initiate and hand the shortlist research down to People Ops? This changes where we concentrate query volume. If VP People Ops starts the search, their feature-comparison and G2-review-style queries should carry the most weight in the discovery cluster. If they receive a CPO-initiated shortlist and focus on due diligence, query weight should be heavier at the CPO level during discovery and shift to People Ops at the evaluation and comparison stage. Which pattern describes most 15Five deals?

Tariq Mensah
Director of HR Technology & People Analytics
Evaluator High
The technical evaluator responsible for integration architecture, data governance, and HR system stack decisions. This role assesses API capability, SSO/SCIM provisioning, data export flexibility, and whether the platform can serve as a reliable data source for a broader people analytics stack. Often the primary contact for implementation and post-go-live admin.
Veto power: No — technical recommender; can block a vendor on integration or security grounds but does not hold budget authority
Technical level: High — evaluates API documentation, integration reliability, data model, and security certifications (SOC 2, GDPR)
Primary buying jobs: Assess integration compatibility with Workday/BambooHR/ADP stack; evaluate data export and reporting flexibility; confirm SSO and SCIM provisioning; assess AI feature data handling and privacy; scope implementation timeline and IT resource requirements
Query focus areas: 15Five API documentation, 15Five Workday integration, 15Five HRIS integrations list, AMAYA data governance, performance management platform security certifications
Source: G2 review titles and case study attribution (review_mining)

A dedicated Director of HR Technology role typically exists at companies of 500+ employees. If 15Five's primary mid-market segment includes companies of 200–500 employees, does this persona appear in most deals — or mainly in the upper band of your mid-market accounts? If smaller accounts handle technical evaluation through the VP People Ops or a general IT Manager (rather than a dedicated HRTech function), we should reduce Tariq's query cluster weight relative to Maya's and reattribute some of the technical queries to the IT Manager role. If this role appears in fewer than half of your deals, it should be reclassified from evaluator to influencer. Does a dedicated HR Technology Director show up consistently in 15Five evaluations?

Sandra Liu
Chief Financial Officer
Decision-maker Medium
The budget authority for HR technology spend, particularly relevant when the platform is positioned as a retention tool with quantifiable ROI. This role evaluates through a cost-justification lens: what is the total cost of ownership, what is the measurable return relative to the cost of regrettable turnover, and does the platform reduce headcount risk at a number the board can see?
Veto power: Yes — controls budget and can block investment if the ROI case does not close
Technical level: Low — evaluates on financial outcomes, TCO, and contract terms; not involved in feature evaluation
Primary buying jobs: Evaluate total cost of ownership vs. cost of regrettable turnover; review contract terms and multi-year pricing; assess whether HR tech ROI can be demonstrated to the board; approve or block budget allocation
Query focus areas: ROI of performance management software, cost of employee turnover calculator, HR tech business case template, 15Five pricing, performance management software TCO
Source: Inferred from category patterns (llm_inference) — not directly sourced from 15Five review data. Only medium-confidence persona in set.

This persona was inferred by LLM reasoning, not sourced from 15Five reviews — it's the only persona in the set without direct evidence. Does your CFO actively engage with the 15Five evaluation, or do they only see the contract at signature? If the CFO participates pre-shortlist (reviewing the business case framing, approving budget for a pilot), we should add ROI-calculator and cost-of-turnover queries that target finance-side searches. If the CFO only appears at contract stage and never independently searches the category, adding CFO-specific queries wastes query capacity on a role that won't be influenced by your visibility in AI answers. What actually happens in your deals — is Finance involved in the evaluation or just the closing?

James Abernathy
VP of Talent Management
Influencer Medium
The internal stakeholder responsible for succession planning, talent calibration, high-potential identification, and development programming. This role influences vendor selection when platform capabilities touch talent pipeline management — specifically the 9-box calibration matrix, goal alignment tools, and recognition features that surface who is ready for promotion.
Veto power: No — provides functional input on talent management capabilities; does not hold budget approval
Technical level: Low — evaluates on feature usability and output quality, not on technical implementation
Primary buying jobs: Evaluate talent calibration and succession planning features; assess whether the platform surfaces high-potential employees; provide input on recognition and development workflows; validate that the OKR/goal module supports performance-based promotion decisions
Query focus areas: Talent calibration software, 9-box grid performance management, high-potential employee identification tools, succession planning software, performance review platform for talent development
Source: G2 review titles and case study attribution (review_mining) — medium confidence on role distinctiveness from VP People Ops

At mid-market companies, VP of Talent Management and VP of People Operations sometimes describe the same person, or the talent function rolls up under People Operations. If both James and Maya are genuinely distinct roles in your deals, their query clusters should stay separate — James focused on succession and calibration, Maya on systems and operational reviews. But if one person typically does both jobs at the company sizes 15Five targets, merging them into a single VP-level HR persona concentrates query weight more accurately and avoids splitting coverage. Are these two separate roles in a typical 15Five evaluation, or does one person wear both hats?

MISSING PERSONAS? Three roles plausibly present in performance management evaluations but absent from this set: Head of Learning & Development — if Kona AI's manager coaching is evaluated as an L&D investment rather than a people-ops tool, a dedicated L&D function may run a parallel evaluation with meaningfully different query behavior; IT Director or VP of Engineering — HRIS integration, SSO provisioning, and data security reviews often require an IT sign-off at mid-market companies that don't have a dedicated HR Technology Director, and this role searches differently than Tariq (security-first, not people-data-first); Business Unit VP or Department Head — 15Five's published case studies (Kreg Tool, Auror) show turnover pain that originates at the business unit level before it reaches HR, and in some deals the BU leader sponsors the evaluation and brings People Ops in as the implementer rather than the initiator. Do any of these show up in your deals? Who else do we need to add?

04 — Competitive Set

Competitors

Nine vendors define the competitive landscape for the query set. Primary competitors appear directly in your deals — they're who buyers compare you against when they search "15Five vs." Secondary competitors are adjacent or occasional — they appear in category shortlists but less frequently in direct head-to-head evaluations.

Primary Competitors

Lattice

Primary High
lattice.com
The most frequently compared alternative to 15Five — combines performance reviews, OKRs, engagement surveys, and compensation in a single mid-market platform. Wins on customization depth and 360-degree reviews that include external stakeholders, but costs more as modules stack up and requires more implementation effort than 15Five.
Source: G2 category listings and "vs" search data (category_listing)

Culture Amp

Primary High
cultureamp.com
Leads with survey science and engagement benchmarking backed by 1.4 billion survey responses. Wins when the buyer's primary concern is engagement analytics and research-grade insights, but weaker than 15Five on continuous check-ins and manager coaching tools.
Source: G2 category listings and "vs" search data (category_listing)

Betterworks

Primary High
betterworks.com
Enterprise-focused OKR and performance enablement platform built for complex goal-alignment needs at large organizations. Stronger than 15Five on OKR rigor and enterprise analytics, but overkill for most mid-market buyers and more expensive at scale.
Source: G2 category listings and "vs" search data (category_listing)

Leapsome

Primary High
leapsome.com
Combines performance management, engagement surveys, and a built-in LMS in a single platform. Stronger than 15Five on learning path integration and multilingual support for international teams, but a newer vendor with a smaller customer base and less established track record in North America.
Source: G2 category listings and "vs" search data (category_listing)

Workleap

Primary High
workleap.com
Formerly Officevibe — targets smaller teams with simpler, more affordable continuous feedback and engagement tools. Competes on price and ease of adoption for teams without dedicated HR departments, but lacks the analytics depth and feature breadth of 15Five at mid-market scale.
Source: G2 category listings and "vs" search data (category_listing)

Secondary Competitors

Quantum Workplace

Secondary Medium
quantumworkplace.com
Comprehensive engagement and performance platform with one of the largest engagement benchmarking databases in the industry. Appears in enterprise shortlists but is less frequently evaluated head-to-head against 15Five in mid-market deals — deal presence unconfirmed.
Source: G2 category listings (category_listing) — mid-market deal presence unconfirmed

Engagedly

Secondary Medium
engagedly.com
Unified performance management, engagement, and LMS platform with agentic AI guidance. Overlaps with 15Five's product surface but skews toward companies wanting LMS-integrated performance workflows — deal overlap with 15Five's buyer profile unconfirmed.
Source: G2 category listings (category_listing)

PerformYard

Secondary Medium
performyard.com
Highly customizable performance review platform supporting a wide range of review structures and rating scales. Appeals to companies with non-standard review processes, but lacks 15Five's engagement survey depth and manager coaching capabilities.
Source: G2 category listings (category_listing)

Reflektive

Secondary Medium
reflektive.com
Real-time feedback and recognition platform focused on continuous improvement cultures. Competes with 15Five's feedback and recognition features but has a narrower product surface and smaller market presence — current market status should be verified given reported ownership changes.
Source: G2 category listings (category_listing) — verify current market status

VALIDATION QUESTIONS Three items need confirmation before query generation: First, all four secondary competitors carry medium confidence — Quantum Workplace, Engagedly, PerformYard, and Reflektive were sourced from category listings without deal-specific evidence. Do any of these names appear in your actual deal cycles — in RFP responses, buyer comparison emails, or shortlist conversations? If a vendor doesn't appear in deals, head-to-head query budget spent on them is wasted. Second, Reflektive specifically has undergone ownership changes and some analyst reports indicate declining market presence. Do buyers still mention Reflektive as a consideration, or has it largely exited the conversation in the accounts you target? If it has exited, remove it from the competitive set before query generation. Third, are any primary competitors missing? Specifically: does Workday Peakon, LinkedIn Glint, or any HCM suite's embedded performance module appear in deals — either as a competitor or as the incumbent system 15Five is asked to replace?

05 — Feature Taxonomy

Feature Taxonomy

Ten buyer-level capabilities define the query surface for 15Five's platform. Each feature maps to buyer-language searches — the way a prospect describes what they need during evaluation, not how the product team describes what it does. Strength ratings determine which capabilities anchor the query set offensively and which are tested defensively.

Performance Reviews & 360° Feedback Strong High

Run structured performance reviews — including self-reviews, manager reviews, and 360-degree peer feedback — without living in spreadsheets or chasing down responses

Continuous Check-ins & Structured 1:1s Strong High

Keep a weekly pulse on every employee with guided check-ins and 1:1 agendas so managers know what's happening before problems escalate

Employee Engagement Surveys & Pulse Checks Strong High

Measure employee engagement with lifecycle surveys and pulse checks, then get AI-driven recommendations for which levers to pull to improve retention

OKR & Goal Tracking Moderate High

Set company, team, and individual goals that cascade and connect so employees understand how their work maps to business priorities

Manager Development & AI Coaching (Kona AI) Strong High

Give managers an AI coach that preps them for 1:1s, flags team risk signals, and surfaces micro-learning content to build the skills most managers never got formal training on

AI-Powered People Analytics & Insights (AMAYA) Moderate Medium

Ask plain-English questions about your workforce data and get answers that connect performance, engagement, and retention into a single view without building dashboards from scratch

Compensation Management & Pay Equity Weak Medium

Link pay decisions to performance data, run compensation cycles with real-time market benchmarks, and give employees a clear view of their total rewards

HRIS Ecosystem Integrations Strong High

Sync employee data automatically with Workday, BambooHR, ADP, and other HRIS systems so HR admins aren't manually updating rosters or reconciling headcount discrepancies

Employee Recognition & Continuous Feedback Strong High

Enable managers and peers to give real-time recognition and feedback between review cycles so good work doesn't go unnoticed until the annual review

Talent Calibration & Performance Matrix Moderate Medium

Calibrate performance ratings across managers with a 9-box or performance matrix to reduce bias and identify high-potential employees ready for development or promotion

VALIDATION QUESTIONS Three features need direct input before we finalize query weights: OKR & Goal Tracking is rated moderate based on multiple G2 reviews noting the goal module is functional but less advanced than Betterworks or Lattice for complex hierarchical goal structures. When you're in a competitive eval against Betterworks specifically, how do you position the OKR module — as a known trade-off ("15Five is simpler but sufficient for mid-market") or as a capability you actively compete on? If the module has significantly improved since those reviews, revising the rating to strong changes the query strategy from defensive to offensive on goal-alignment searches. AMAYA (AI-Powered People Analytics) carries medium confidence because it's a recently launched product — feature maturity can't be reliably assessed from outside. Is AMAYA generally available to all 15Five customers, or still in limited access? If it's in beta, we should hold AMAYA-specific query clusters until it's fully launched, because driving AI citations to a capability buyers can't yet access creates a credibility problem. Compensation Management is rated weak based on G2 reviews that reflect an earlier version of the module. Has the comp module reached competitive parity with Lattice's compensation offering since those reviews were written? If yes, the rating should be revised to moderate before building comp-focused queries.

06 — Pain Points

Pain Point Taxonomy

Eight buyer pains shape how queries are phrased and weighted. The buyer language fields are how we write the queries — they need to sound like a frustrated HR leader describing their problem in a search bar, not like a vendor describing their solution.

Regrettable Turnover High High

"We keep losing our best people and only find out they were unhappy after they've already accepted another offer — by then it's too late"
Personas: Chief People Officer · VP of People Operations · VP of Talent Management

Ineffective Managers High High

"Our managers are great individual contributors but terrible people managers — they don't know how to have hard conversations, develop their teams, or spot burnout before it becomes a resignation"
Personas: Chief People Officer · VP of People Operations · VP of Talent Management

Annual Review Burden High High

"Our annual reviews take two months of HR admin time, managers dread them, employees feel blindsided, and then the ratings are useless for comp decisions because everyone gets a 3 out of 5"
Personas: VP of People Operations · Chief People Officer · Director of HR Technology

Siloed HR Data High High

"I have performance data in one system, engagement surveys in another, and comp in a spreadsheet — I can't tell you which managers are driving turnover because the data is never in the same place"
Personas: Director of HR Technology · VP of People Operations · Chief People Officer

Engagement Survey — No Action Medium High

"We run engagement surveys twice a year and employees keep asking what we actually did with the results — and honestly I can't tell them because we never know where to start"
Personas: VP of People Operations · Chief People Officer · VP of Talent Management

HR ROI Proof High High

"Every year I have to fight the CFO for HR tech budget and I can't show a hard number for what our performance management programs actually return — so I keep losing to other priorities"
Personas: Chief People Officer · Chief Financial Officer

Goal Misalignment Medium Medium

"We announce company OKRs every quarter but ask any individual contributor what their goals are and they'll give you a blank stare — the strategy never makes it past the manager layer"
Personas: Chief People Officer · VP of People Operations · VP of Talent Management

Top Talent Flight Risk High High

"Our top performers are the ones most likely to get recruited away and we have no visibility into who is quietly disengaged until they hand in their notice"
Personas: Chief People Officer · VP of People Operations · VP of Talent Management

VALIDATION QUESTIONS Three items to verify before query generation: Regrettable Turnover and Top Talent Flight Risk cover adjacent territory — both describe high performers leaving unexpectedly, but one is reactive (we already lost them) and one is proactive (we can see them leaving). If buyers search for "how to identify flight risk employees" and "how to reduce regrettable turnover" as separate query intents, two distinct pain points are the right model and the query clusters should remain separate. If those searches collapse into the same behavior in your category, merging them into a single higher-weight pain point concentrates query volume more accurately. What do your discovery calls suggest about how buyers frame this? Goal Misalignment carries medium confidence — it's the only pain point with uncertainty about how central it is to 15Five's selling motion. Is OKR/goal alignment a genuine entry-point pain that drives people to evaluate platforms, or is it a benefit buyers discover after they've already decided to buy for turnover or manager effectiveness reasons? If it's the latter, a lower query weight is appropriate. Three pain points not in the current set that may be relevant for this category: (1) Performance rating inflation — managers rate everyone "meets expectations" to avoid conflict, making ratings useless for promotion and comp decisions; (2) Remote/hybrid engagement gap — engagement scores diverge sharply between in-office and distributed employees and managers don't know how to close it; (3) Manager accountability gap — managers commit to development goals in 1:1s and then nothing is tracked, with no visibility for HR between cycles. Do any of these drive deals for 15Five?

07 — Technical Findings

Site Findings

Layer 1 analysis of www.15five.com — 30 pages reviewed for AI crawler accessibility, structural discoverability, and content signal quality. All six major AI crawlers tested are currently permitted with no blocking rules.

ENGINEERING ACTION ITEMS No critical or high-severity blocking issues found — GPTBot, ChatGPT-User, ClaudeBot, PerplexityBot, Google-Extended, and Googlebot are all permitted in robots.txt. Three medium-severity technical items are actionable before the validation call and will improve baseline visibility before queries run: (1) Expand the XML sitemap — currently only 19 blog URLs appear in sitemap.xml; all product pages, solution pages, pricing, and integration pages are completely absent, forcing crawlers to rely on link-following alone for the highest-value commercial content; (2) Investigate the Kreg Tool case study page at /resources/case-studies/how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20 — load it with JavaScript disabled to determine whether the near-empty body content reflects a client-side rendering (CSR) issue or intentional PDF gating; the 20%+ turnover reduction outcome on that page is exactly the type of evidence LLMs cite in vendor evaluation queries; (3) Run Google's Rich Results Test on three product pages and two blog posts to establish a schema markup baseline before the audit runs queries. These items don't require waiting for the validation call.
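The crawler-permission claim above can be spot-checked with Python's standard-library robot parser. The robots.txt body below is a simplified stand-in (only the /_hcms/ Disallow pattern noted in this report is assumed); substitute the live file before drawing conclusions:

```python
from urllib.robotparser import RobotFileParser

# Simplified stand-in for www.15five.com/robots.txt (hypothetical content;
# only the /_hcms/ Disallow pattern is taken from the audit findings).
ROBOTS_TXT = """\
User-agent: *
Disallow: /_hcms/
"""

# The six AI crawlers named in the action items above.
AI_CRAWLERS = [
    "GPTBot", "ChatGPT-User", "ClaudeBot",
    "PerplexityBot", "Google-Extended", "Googlebot",
]

def crawler_access(robots_txt: str, url: str) -> dict:
    """Map each AI crawler user-agent to whether it may fetch `url`."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {ua: parser.can_fetch(ua, url) for ua in AI_CRAWLERS}
```

Run against a commercial page such as /pricing, every crawler should come back permitted, matching the no-blocking finding; any future Disallow rule targeting an AI user-agent flips the corresponding entry to False, which makes this a cheap regression check.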

🔵 XML Sitemap Contains Only 19 Blog URLs — All Commercial Pages Absent

What we found: The sitemap at sitemap.xml contains exactly 19 URLs — all blog posts or resource thank-you pages with lastmod timestamps from November–December 2025. Zero product pages, zero solution pages, zero pricing pages, zero integration pages, and zero feature subpages appear in any sitemap. No sitemap index file exists (sitemap_index.xml and hs-sitemap.xml both return 404). Core commercial pages including /products/perform, /products/engage, /products/kona, /products/perform/compensation, /pricing, /integrations, /solutions/reduce-regrettable-turnover, and /why-15five are entirely absent from any known sitemap.

Why it matters: AI crawlers and search engines use sitemaps to discover pages and prioritize recrawl frequency using lastmod timestamps. When the highest-value commercial pages are absent, crawlers rely entirely on link-following — pages linked only from secondary navigation may be discovered less frequently than blog content explicitly listed in the sitemap. This is especially significant for recently launched product pages (AMAYA, Kona AI), where prompt crawler discovery helps AI systems learn about new capabilities quickly. The current sitemap actively deprioritizes product and solution content in favor of older blog posts.

Recommended fix: Expand the sitemap to include all commercial pages — product pages, feature subpages, solution pages, pricing, integrations, comparison-redirect pages, and customer stories — with accurate lastmod timestamps. If HubSpot CMS is in use (indicated by /_hcms/ Disallow patterns in robots.txt), verify sitemap page-type inclusion settings in Settings > Website > Pages > Sitemap and ensure all page types are enabled. Submit the updated sitemap to Google Search Console and Bing Webmaster Tools.

Impact: Medium Effort: 1–3 days Owner: Engineering Affected: 15+ high-value commercial URLs absent from the sitemap
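The recommended sitemap expansion can be sketched with the standard library. The URL list below is an illustrative subset of the missing commercial pages named in the finding, and the lastmod values are placeholders; real dates should reflect actual content changes, not bulk refreshes:

```python
import xml.etree.ElementTree as ET
from datetime import date

# Illustrative subset of the commercial URLs the finding reports as
# missing from sitemap.xml; lastmod values are placeholders.
MISSING_PAGES = {
    "https://www.15five.com/products/perform": date(2026, 1, 15),
    "https://www.15five.com/products/engage": date(2026, 1, 15),
    "https://www.15five.com/pricing": date(2026, 2, 1),
    "https://www.15five.com/integrations": date(2026, 2, 1),
}

def build_sitemap(pages: dict) -> str:
    """Serialize {url: lastmod} into sitemap-protocol XML."""
    urlset = ET.Element(
        "urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
    )
    for loc in sorted(pages):
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        ET.SubElement(url, "lastmod").text = pages[loc].isoformat()
    return ET.tostring(urlset, encoding="unicode")
```

Write the output to sitemap.xml (with an XML declaration prepended) and submit it to Google Search Console and Bing Webmaster Tools as described in the fix; if HubSpot CMS generates the sitemap automatically, its page-type inclusion settings are the right lever instead of a hand-built file.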

🔵 Competitor Comparison URLs Redirect to Generic Brand Page With No Competitor Content

What we found: Three URLs indexed in search results as dedicated comparison pages — /15five-vs-lattice, /15five-vs-cultureamp/, and /15five-vs-leapsome/ — all redirect to the generic /why-15five page. The /why-15five page contains no competitor-specific content: it does not mention Lattice, Culture Amp, or Leapsome by name and contains only generic brand messaging. Buyers who follow these comparison URLs from search results land on a page that does not address the query that brought them there. The only substantive Lattice comparison content on the site is a December 2025 blog post that is not linked from the /15five-vs-lattice URL. Culture Amp and Leapsome have no substantive comparison content anywhere on the site.

Why it matters: Competitor comparison pages are among the most frequently cited content types in LLM vendor evaluation queries. When an AI system is asked "How does 15Five compare to Lattice?" it looks for pages that explicitly address that comparison. The three indexed comparison URLs may receive crawler visits for high-intent competitive queries, but the content served provides zero differentiated positioning — wasting whatever link equity and indexing priority those URLs have accumulated.

Recommended fix: Either (a) create dedicated comparison landing pages at the existing URLs with substantive head-to-head content for each competitor, or (b) if comparison pages are not being maintained, implement 301 redirects to blog posts that do contain comparison content (e.g., /15five-vs-lattice → /blog/heres-why-people-choose-15five-over-lattice). Option (a) is strongly preferred. At minimum, create comparison pages for the three most frequently searched primary competitors: Lattice, Culture Amp, and Betterworks.
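If option (b) is chosen as an interim measure, the change is one redirect rule per URL at the web-server or CDN layer. A hypothetical nginx sketch is below; a HubSpot-hosted site would configure the same mapping in HubSpot's URL Redirects tool rather than in server config. Only the Lattice target path exists today — equivalent targets for Culture Amp and Leapsome would first need to be written:

```nginx
# Interim 301: point the indexed comparison URL at the page that
# actually contains comparison content, instead of generic /why-15five.
location = /15five-vs-lattice {
    return 301 /blog/heres-why-people-choose-15five-over-lattice;
}
```

This preserves the accumulated link equity of the comparison URL while dedicated comparison pages (option a) are built.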

Impact: Medium Effort: 1–2 weeks Owner: Content Affected: /15five-vs-lattice · /15five-vs-cultureamp/ · /15five-vs-leapsome/

🔵 No Date Signals on Any Product or Solution Page

What we found: All product pages, solution pages, the /why-15five page, and the pricing page have no visible last-updated dates and are absent from the sitemap — meaning no lastmod signal is available from any source. Freshness could not be determined for 17 of 30 pages analyzed. Blog posts in the sitemap carry lastmod timestamps, but these appear to be bulk-refreshed: multiple posts show sitemap lastmod of 2025-11-25 or 2025-11-26 regardless of original publication date, including posts written in 2017–2019.

Why it matters: AI systems that factor content freshness (including Perplexity's recency weighting and Google's freshness signals for AI Overviews) cannot assess whether 15Five's product and solution pages reflect current capabilities. This matters particularly for recently launched features — AMAYA and Kona AI are new additions, but without freshness signals the pages describing them carry no recency advantage over older competitor pages. Bulk sitemap timestamp refreshes that don't reflect actual content changes may also be detected as inaccurate by crawlers.

Recommended fix: Add accurate lastmod timestamps to all commercial pages in the sitemap once the sitemap expansion (finding #1) is complete. Ensure lastmod values reflect actual content modification dates, not bulk CMS publish dates. Consider adding visible "Last updated: [date]" metadata to product and solution pages. Audit the November 2025 bulk sitemap refresh to verify which pages were actually updated vs. which received automated timestamp reassignment.
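One quick way to audit the November 2025 bulk refresh is to count how many sitemap entries share an identical lastmod date — a large cluster on a single day is a strong hint of automated timestamp reassignment rather than real edits. A standard-library sketch (the XML below is inline sample data, not the live sitemap):

```python
from collections import Counter
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def lastmod_clusters(sitemap_xml: str) -> Counter:
    """Count sitemap URLs per lastmod date to flag bulk refreshes."""
    root = ET.fromstring(sitemap_xml)
    dates = [
        el.text.strip()[:10]                  # keep YYYY-MM-DD only
        for el in root.findall(".//sm:lastmod", NS)
        if el.text
    ]
    return Counter(dates)

# Sample data: three posts bulk-stamped 2025-11-25, one genuinely fresh.
SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/blog/a</loc><lastmod>2025-11-25</lastmod></url>
  <url><loc>https://example.com/blog/b</loc><lastmod>2025-11-25</lastmod></url>
  <url><loc>https://example.com/blog/c</loc><lastmod>2025-11-25</lastmod></url>
  <url><loc>https://example.com/blog/d</loc><lastmod>2026-01-10</lastmod></url>
</urlset>"""

print(lastmod_clusters(SAMPLE).most_common(1))  # → [('2025-11-25', 3)]
```

Run against the live sitemap, any date accounting for a large share of all entries marks the pages whose lastmod values need manual verification.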

Impact: Medium Effort: 1–3 days Owner: Engineering Affected: 17 of 30 pages analyzed — all product, solution, integration, and pricing pages

🔵 Case Study Page Returns Minimal Body Content — Verify Gating or CSR

What we found: The Kreg Tool case study page at /resources/case-studies/how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20 returned almost exclusively navigation and footer markup with negligible body content — only the headline metric ("reduced turnover by over 20%") and a download button were accessible. Related customer stories presented as blog posts (Pendo, Auror) returned full body content normally. The case study format on this URL appears to use a gated download model (PDF behind a form), rendering the substantive content inaccessible to AI crawlers.

Why it matters: Customer outcome data — specific turnover reduction percentages, engagement improvements, ROI metrics — is the content type LLMs most commonly cite when answering vendor evaluation queries ("How much does 15Five reduce turnover?"). A gated case study means high-credibility, citable evidence is invisible to AI systems. 15Five's published outcomes (Auror 94% retention, Kreg Tool 20%+ turnover reduction, TrustRadius 88% turnover decrease) are powerful competitive differentiators that AI systems cannot access in their current format.

Recommended fix: First, verify with JavaScript disabled whether the page renders body content, to confirm client-side rendering (CSR) vs. intentional gating. Then convert the highest-value case studies from gated PDF format to fully accessible HTML pages with inline outcome metrics, a challenge/solution narrative, and the specific product features used. Keep the formatted PDF as a downloadable bonus — the form becomes an optional "download full report" CTA within the page. This makes the content available to AI crawlers without sacrificing lead capture. Priority case studies to convert: Kreg Tool, TrustRadius, and any others using the gated format.
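The CSR-vs-gating check can be scripted: fetch the raw HTML (without executing JavaScript, as most AI crawlers do) and measure how much visible text survives outside script/style markup. A near-zero word count on a page that renders a full case study in the browser indicates CSR. A standard-library sketch — the sample HTML and any threshold you choose are illustrative:

```python
from html.parser import HTMLParser

class VisibleText(HTMLParser):
    """Collect text outside script/style/noscript from raw, unrendered HTML."""
    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self.depth = 0       # nesting level inside skipped tags
        self.words = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0:
            self.words.extend(data.split())

def visible_word_count(html: str) -> int:
    parser = VisibleText()
    parser.feed(html)
    return len(parser.words)

# A page that serves almost no body text without JS is either CSR or gated.
GATED = ("<html><body><script>renderApp()</script>"
         "<h1>Reduced turnover by over 20%</h1></body></html>")
print(visible_word_count(GATED))  # → 5 (only the headline survives)
```

Comparing this count for the Kreg Tool page against a known-good blog post (Pendo or Auror) quantifies how much content is invisible to crawlers.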

Impact: Medium Effort: 1–3 days Owner: Content Affected: /resources/case-studies/how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20

🔵 Schema Markup: Manual Verification Required

What we found: This analysis was conducted using rendered page content (web fetch returns markdown, not raw HTML), so JSON-LD schema blocks, meta tags, and OG tags are not visible in any of the 30 pages analyzed. Whether product pages carry SoftwareApplication schema, blog posts carry Article schema with datePublished/dateModified, the pricing page carries Offer schema, or FAQ sections carry FAQPage schema cannot be determined from this analysis method.

Why it matters: Structured data helps AI crawlers understand page purpose and entity relationships, enabling richer citations in AI-powered search results. Article schema with accurate dateModified enables AI systems to correctly assess content freshness. FAQPage schema on product pages surfaces Q&A content in AI Overviews and Perplexity answers. HubSpot CMS (indicated by robots.txt patterns) includes basic Organization/WebPage schema by default but does not automatically add page-type-specific schema — each page type requires explicit configuration.

Recommended fix: Audit schema implementation using Google's Rich Results Test or a Screaming Frog structured data crawl. Priority items: (1) blog posts — verify Article/BlogPosting schema with author, datePublished, dateModified; (2) pricing page — verify Offer/PriceSpecification schema; (3) product FAQ sections — add FAQPage schema; (4) comparison pages — add WebPage schema with about properties referencing competitor entities once comparison content is restored.
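A sketch of the Article/BlogPosting schema targeted in item (1), embedded as a JSON-LD block in the page head. All values here are illustrative — the headline echoes the existing Lattice comparison post, and the dates must come from the CMS's real modification records, not the bulk sitemap refresh:

```json
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "Here's Why People Choose 15Five Over Lattice",
  "author": { "@type": "Organization", "name": "15Five" },
  "datePublished": "2025-12-01",
  "dateModified": "2025-12-01",
  "publisher": {
    "@type": "Organization",
    "name": "15Five",
    "url": "https://www.15five.com"
  }
}
```

If individual authors are credited on posts, an "@type": "Person" author node is preferable to the organization-level placeholder shown here.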

Impact: Low Effort: 1–3 days Owner: Engineering Affected: All 30 pages analyzed — schema cannot be assessed via rendered markdown

🔵 Meta Descriptions and OG Tags: Manual Verification Required

What we found: Meta descriptions and Open Graph tags (og:description, og:image, og:title) are not accessible via rendered markdown analysis. None of the 30 pages analyzed had visible meta description or OG tag content in the fetched output.

Why it matters: Meta descriptions serve as the AI-readable page summary that crawlers reference when generating snippets for search results and AI citations. Missing or duplicated meta descriptions increase the chance that LLMs generate inaccurate page summaries. On the /why-15five page — which currently serves as the redirect destination for three competitor comparison URLs — a well-crafted meta description that names specific differentiators helps AI systems correctly categorize the page's competitive intent even without dedicated comparison content.

Recommended fix: Audit meta descriptions and OG tags using Screaming Frog, an Ahrefs site audit, or browser view-source. Confirm every commercial page has a unique meta description (140–160 characters) with a specific capability claim. For the /why-15five page, ensure the meta description explicitly positions 15Five against named competitors (Lattice, Culture Amp) to preserve competitive signal until dedicated comparison pages are built.
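A sketch of the head-tag set to confirm on /why-15five. The description copy below is placeholder positioning, not approved messaging, and the og:image path is hypothetical:

```html
<!-- Unique, capability-specific summary that crawlers use for snippets -->
<meta name="description"
      content="See why teams choose 15Five over Lattice and Culture Amp:
               structured 1:1s, Kona AI manager coaching, and engagement
               surveys in one platform.">
<meta property="og:title"
      content="Why 15Five | Compared to Lattice &amp; Culture Amp">
<meta property="og:description"
      content="Compare 15Five's continuous performance management against
               Lattice and Culture Amp.">
<meta property="og:image"
      content="https://www.15five.com/og/why-15five.png">
```

Naming competitors in the description preserves the competitive signal for the three comparison URLs that currently redirect here.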

Impact: Low Effort: 1–3 days Owner: Marketing Affected: All 30 pages analyzed — priority: /why-15five, product pages, pricing page

Site Analysis Summary

Total pages analyzed: 30
Commercially relevant pages: 20
Avg. heading hierarchy score: 0.70
Avg. content depth score: 0.66 (below target)
Avg. freshness score: 0.53 (17 pages unscored)
Avg. schema coverage: unable to assess (30 pages unscored)
Avg. passage extractability: 0.67 (below target)
Critical findings: 0
High findings: 0

PARTIAL ASSESSMENT NOTE Freshness scores are unavailable for 17 of 30 pages analyzed (57%) — all product, solution, integration, and pricing pages carry no date signal from any source, so the 0.53 freshness average reflects only the 13 blog posts where a sitemap lastmod was detectable. Schema coverage is 0/30 assessed — this reflects a methodology limitation (rendered markdown doesn't expose JSON-LD blocks), not confirmed absence of schema. The Rich Results Test audit will establish the actual schema baseline. Content depth (0.66) and passage extractability (0.67) are both in the warning range and are expected to improve once commercial pages are added to the sitemap, freshness signals are established, and schema is verified.

08 — Next Steps

What Happens Next

Three steps from here to your full GEO visibility audit.

01

Validation Call

45–60 minutes to walk through this document together. Confirm the persona set and influence levels, validate the competitive tiers, correct any feature or pain point inaccuracies, and complete the client signoff below. The query set is generated immediately after the call using your confirmed inputs.

02

Query Generation & Execution

150–200 queries run across ChatGPT and Perplexity covering persona-driven discovery queries, head-to-head competitor comparisons (Lattice, Culture Amp, Betterworks, Leapsome, Workleap), feature capability queries, pain point searches, and category evaluation queries. Results are recorded and processed against the confirmed knowledge graph.

03

Full Audit Delivery

Visibility analysis across all query categories, competitive citation benchmarking against the five primary competitors, and a three-layer action plan — technical fixes, content authority gaps, and competitive positioning opportunities — all prioritized by which gaps actually cost 15Five citations in the query results.

START NOW — ENGINEERING Three technical tasks your engineering team can begin before the validation call. These don't depend on the rest of the audit and will improve 15Five's baseline visibility before we measure it: (1) Expand the XML sitemap to include all product pages (/products/perform, /products/engage, /products/kona, /products/perform/compensation), solution pages, pricing, and integration pages — currently 15+ high-value commercial URLs are completely absent from sitemap.xml. If HubSpot CMS manages the sitemap, check Settings > Website > Pages > Sitemap and verify page-type inclusion settings. (2) Investigate the Kreg Tool case study page at /resources/case-studies/how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20 with JavaScript disabled — a near-empty body suggests either a CSR rendering gap or intentional PDF gating; the 20%+ turnover reduction outcome on that page is exactly the kind of evidence LLMs cite in vendor comparisons, and it's currently invisible to crawlers. (3) Run Google's Rich Results Test on /products/perform, /products/engage, and two recent blog posts to establish the schema markup baseline — this is a 30-minute task that surfaces any missing Article, FAQPage, or SoftwareApplication schema before the audit runs.

09 — Signoff

Client Signoff

Complete this section during the validation call. Your answers directly configure the query set built in Step 02, Query Generation & Execution.

COMPETITIVE SET & TIERING
Primary competitors (confirmed head-to-head, ~15–20 H2H queries each pending query generation):
Lattice  ·  Culture Amp  ·  Betterworks  ·  Leapsome  ·  Workleap

Secondary competitors (adjacent/occasional, ~6–10 H2H queries each — all pending validation of deal presence):
Quantum Workplace  ·  Engagedly  ·  PerformYard  ·  Reflektive

All four secondary competitors carry medium confidence. Verify deal presence before the call — any vendor not appearing in deals should be removed or moved to a watch list. Reflektive's current market status should be verified given reported ownership changes. Confirm whether any HCM suite embedded modules (Workday Peakon, LinkedIn Glint) should be added.
PERSONA SET & INFLUENCE LEVELS
Decision-makers: Dana Osei (Chief People Officer)  ·  Sandra Liu (CFO — medium confidence, confirm deal presence and timing of involvement)

Evaluators: Maya Reyes (VP of People Operations)  ·  Tariq Mensah (Director of HR Technology — confirm presence across company size range)

Influencers: James Abernathy (VP of Talent Management — medium confidence, confirm distinctness from VP People Ops role in your deals)

Key questions for the call: Does the CFO engage pre-shortlist or only at contract signature? Is VP Talent Management a separate role from VP People Ops in your typical deal? Does Director HR Technology appear in smaller mid-market accounts (<300 employees) or mainly in larger ones? Are Head of L&D or IT Director/VP Engineering roles present in evaluations?
FEATURE & PAIN POINT PRIORITIES
Features to overweight in query set (strong rating linked to high-severity pain points with broad persona coverage):
(1) Continuous Check-ins & Structured 1:1s — strong rating, links to three high-severity pain points (regrettable turnover, ineffective managers, annual review burden) across the broadest persona spread of any feature; this is 15Five's core differentiation vs. Culture Amp, which lacks this depth of structured check-in tooling
(2) Performance Reviews & 360° Feedback — strong rating, directly addresses the annual review burden pain that drives most initial evaluations in this category
(3) Manager Development & AI Coaching (Kona AI) — strong rating, uniquely differentiating in the competitive set (neither Culture Amp nor Workleap offers AI coaching at comparable depth); directly answers the ineffective managers pain and creates query clusters with low competitive density

Pain points to overweight in query set (selected by severity × persona breadth):
(1) Ineffective Managers — high severity, 3 personas affected, uniquely activates Kona AI differentiation queries not available to Culture Amp or Workleap
(2) Annual Review Burden — high severity, 3 personas including Director HR Technology; the most searched operational pain in the performance management category
(3) HR ROI Proof — high severity, activates both CPO and CFO in the same pain; if the CFO persona is confirmed as deal-relevant, this pain anchors the highest-stakes queries in the set

Confirm on the call: Is OKR & Goal Tracking a primary selling point or a secondary feature benefit? Confirm AMAYA general availability status before including in AMAYA-specific query clusters. Confirm whether Compensation Management rating reflects current module capability.
CONSTRAINTS & EXCLUSIONS
To be completed during the validation call.
Examples: geographic markets to exclude from queries, competitor names to avoid (acquisitions, partnerships), products still in beta that should not be included in query clusters, features or product lines 15Five has deprecated or not yet launched in general availability.
CONFIRMATION
By confirming this document, 15Five validates that the competitive set, persona set, and taxonomy are accurate enough to generate the query set. Changes after this point require a KG update step and may affect query generation timing.

Confirmed by: _____________________________________     Title: _____________________________________

Date: _____________________________________     Notes: _____________________________________

WHY THIS MATTERS The query set is generated directly from this document. A persona we generate queries for who doesn't appear in your deals wastes query capacity on searches that don't reflect real buyer behavior. A competitor we exclude who actually wins deals against you means a blind spot in your audit results. Ten minutes of corrections here multiplies into more accurate coverage across all 150–200 queries.