Before we run the audit, we need to make sure we're asking the right questions about the right competitors to the right buyers. This document presents what we've learned about 15Five's market — your job is to tell us what we got right, what we got wrong, and what we missed.
Read this document before your validation call. Flag anything that's wrong, missing, or mischaracterized.
WHAT THIS IS This document presents the knowledge model driving your audit — the competitive set, buyer personas, feature taxonomy, and pain point map that will generate 150–200 queries across ChatGPT and Perplexity. The audit tests how well 15Five appears when buyers search in the continuous performance management and employee engagement category. Everything in this document is a hypothesis until you validate it.
WHAT TO DO For each section, confirm or correct: Are the right people in the persona set? Are the competitors listed the ones you actually lose deals to? Are the feature strength ratings accurate relative to what you hear in competitive calls? Are the pain points phrased the way your buyers describe them — not the way your marketing describes them? Your answers directly change which queries get run and how results are interpreted. The validation call is where this becomes a 15Five-specific audit, not a generic performance management benchmark.
CONFIDENCE BADGES Every item carries a confidence badge: High = sourced directly from G2 reviews, product pages, or category listings with corroboration; Medium = partially sourced or inferred — needs your confirmation; Low = placeholder requiring replacement before query generation. Medium-confidence items are the ones most worth scrutinizing on the call.
Foundational data about 15Five used to configure query targeting, entity disambiguation, and category context.
VALIDATION QUESTION 15Five ships five named products under one roof: Perform, Engage, Compensation, Kona AI, and AMAYA. The platform spans two distinct buying conversations — performance management (reviews, goals, calibration) and employee engagement (surveys, retention analytics) — plus two AI products that are relatively recent additions. In a typical 15Five deal, is this a unified platform sale where one buyer evaluates everything together, or do different stakeholders own the performance vs. engagement evaluation? If buyers in your deals separate these conversations — for example, HR Ops owns the Perform evaluation while a People Analytics function evaluates AMAYA independently — the query set needs separate clusters for each conversation, and we may need to add personas to cover the engagement analytics side of that divide. The answer also affects whether Kona AI and AMAYA get their own query clusters or are bundled under the core platform.
These five roles represent the people who evaluate, champion, and approve a continuous performance management platform purchase. Each persona's distinct search vocabulary drives a separate query cluster in Step 4 — which is why getting the right roles with the right influence levels matters more than having a complete list.
CRITICAL REVIEW AREA Personas are the highest-leverage input in the audit. A role that doesn't appear in your actual deal cycles generates queries that dilute coverage of the roles that do. A missing evaluator creates a blind spot in exactly the searches buyers run during real evaluations. Please flag roles that don't appear in deals, roles that significantly overlap, and any key evaluators we've missed.
DATA SOURCING NOTE Persona names are illustrative. The role, department, seniority, influence level, veto power, and technical level fields are sourced from G2 review titles and case study attribution (review_mining) or inferred from category patterns (llm_inference) — the source is noted on each card. Primary buying jobs and query focus areas are synthesized from role context; they reflect what this type of buyer typically searches for, not confirmed behavioral data from 15Five's specific deal history. Treat synthesized fields as hypotheses.
→ At mid-market companies (200–1,000 employees), does the CPO/CHRO typically control HR tech budget directly, or does budget authority sit with Finance and require a separate CFO approval step? We've classified this role as Decision-maker based on title and G2 review evidence, but if the CPO primarily builds the business case and the CFO independently holds the budget gate, the query strategy should split: more CPO-level queries around outcomes and strategy, and a separate CFO query cluster around ROI proof and cost justification. If the CPO controls budget end-to-end and the CFO only reviews the contract, we can weight the CPO cluster much more heavily. Which pattern matches your actual deals?
→ In your deals, does VP People Ops drive initial vendor discovery — actively searching and building the longlist — or does the CPO initiate and hand the shortlist research down to People Ops? This changes where we concentrate query volume. If VP People Ops starts the search, their feature-comparison and G2-review-style queries should carry the most weight in the discovery cluster. If they receive a CPO-initiated shortlist and focus on due diligence, query weight should be heavier at the CPO level during discovery and shift to People Ops at the evaluation and comparison stage. Which pattern describes most 15Five deals?
→ A dedicated Director of HR Technology role typically exists at companies of 500+ employees. If 15Five's primary mid-market segment includes companies of 200–500 employees, does this persona appear in most deals — or mainly in the upper band of your mid-market accounts? If smaller accounts handle technical evaluation through the VP People Ops or a general IT Manager (rather than a dedicated HRTech function), we should reduce Tariq's query cluster weight relative to Maya's and reattribute some of the technical queries to the IT Manager role. If this role appears in fewer than half of your deals, it warrants influencer classification rather than evaluator. Does a dedicated HR Technology Director show up consistently in 15Five evaluations?
→ This persona was inferred by LLM reasoning, not sourced from 15Five reviews — it's the only persona in the set without direct evidence. Does your CFO actively engage with the 15Five evaluation, or do they only see the contract at signature? If the CFO participates pre-shortlist (reviewing the business case framing, approving budget for a pilot), we should add ROI-calculator and cost-of-turnover queries that target finance-side searches. If the CFO only appears at contract stage and never independently searches the category, adding CFO-specific queries wastes query capacity on a role that won't be influenced by your visibility in AI answers. What actually happens in your deals — is Finance involved in the evaluation or just the closing?
→ At mid-market companies, VP of Talent Management and VP of People Operations sometimes describe the same person, or the talent function rolls up under People Operations. If both James and Maya are genuinely distinct roles in your deals, their query clusters should stay separate — James focused on succession and calibration, Maya on systems and operational reviews. But if one person typically does both jobs at the company sizes 15Five targets, merging them into a single VP-level HR persona concentrates query weight more accurately and avoids splitting coverage. Are these two separate roles in a typical 15Five evaluation, or does one person wear both hats?
MISSING PERSONAS? Three roles plausibly present in performance management evaluations but absent from this set: (1) Head of Learning & Development — if Kona AI's manager coaching is evaluated as an L&D investment rather than a people-ops tool, a dedicated L&D function may run a parallel evaluation with meaningfully different query behavior; (2) IT Director or VP of Engineering — HRIS integration, SSO provisioning, and data security reviews often require an IT sign-off at mid-market companies that don't have a dedicated HR Technology Director, and this role searches differently than Tariq (security-first, not people-data-first); (3) Business Unit VP or Department Head — 15Five's published case studies (Kreg Tool, Auror) show turnover pain that originates at the business unit level before it reaches HR, and in some deals the BU leader sponsors the evaluation and brings People Ops in as the implementer rather than the initiator. Do any of these show up in your deals? Who else do we need to add?
Nine vendors define the competitive landscape for the query set. Primary competitors appear directly in your deals — they're who buyers compare you against when they search "15Five vs." Secondary competitors are adjacent or occasional — they appear in category shortlists but less frequently in direct head-to-head evaluations.
Primary Competitors
Secondary Competitors
VALIDATION QUESTIONS Three items need confirmation before query generation: First, all four secondary competitors carry medium confidence — Quantum Workplace, Engagedly, PerformYard, and Reflektive were sourced from category listings without deal-specific evidence. Do any of these names appear in your actual deal cycles — in RFP responses, buyer comparison emails, or shortlist conversations? If a vendor doesn't appear in deals, H2H query budget spent on them is wasted. Second, Reflektive specifically has undergone ownership changes and some analyst reports indicate declining market presence. Do buyers still mention Reflektive as a consideration, or has it largely exited the conversation in the accounts you target? If so, it should be removed from the competitive set before query generation. Third, are any primary competitors missing? Specifically: does Workday Peakon, LinkedIn Glint, or any HCM suite's embedded performance module appear in deals — either as a competitor or as the incumbent system 15Five is asked to replace?
Ten buyer-level capabilities define the query surface for 15Five's platform. Each feature maps to buyer-language searches — the way a prospect describes what they need during evaluation, not how the product team describes what it does. Strength ratings determine which capabilities anchor the query set offensively and which are tested defensively.
Run structured performance reviews — including self-reviews, manager reviews, and 360-degree peer feedback — without living in spreadsheets or chasing down responses
Keep a weekly pulse on every employee with guided check-ins and 1:1 agendas so managers know what's happening before problems escalate
Measure employee engagement with lifecycle surveys and pulse checks, then get AI-driven recommendations for which levers to pull to improve retention
Set company, team, and individual goals that cascade and connect so employees understand how their work maps to business priorities
Give managers an AI coach that preps them for 1:1s, flags team risk signals, and surfaces micro-learning content to build the skills most managers never got formal training on
Ask plain-English questions about your workforce data and get answers that connect performance, engagement, and retention into a single view without building dashboards from scratch
Link pay decisions to performance data, run compensation cycles with real-time market benchmarks, and give employees a clear view of their total rewards
Sync employee data automatically with Workday, BambooHR, ADP, and other HRIS systems so HR admins aren't manually updating rosters or reconciling headcount discrepancies
Enable managers and peers to give real-time recognition and feedback between review cycles so good work doesn't go unnoticed until the annual review
Calibrate performance ratings across managers with a 9-box or performance matrix to reduce bias and identify high-potential employees ready for development or promotion
VALIDATION QUESTIONS Three features need direct input before we finalize query weights: OKR & Goal Tracking is rated moderate based on multiple G2 reviews noting the goal module is functional but less advanced than Betterworks or Lattice for complex hierarchical goal structures. When you're in a competitive eval against Betterworks specifically, how do you position the OKR module — as a known trade-off ("15Five is simpler but sufficient for mid-market") or as a capability you actively compete on? If the module has significantly improved since those reviews, revising the rating to strong changes the query strategy from defensive to offensive on goal-alignment searches. AMAYA (AI-Powered People Analytics) carries medium confidence because it's a recently launched product — feature maturity can't be reliably assessed from outside. Is AMAYA generally available to all 15Five customers, or still in limited access? If it's in beta, we should hold AMAYA-specific query clusters until it's fully launched, because driving AI citations to a capability buyers can't yet access creates a credibility problem. Compensation Management is rated weak based on G2 reviews that reflect an earlier version of the module. Has the comp module reached competitive parity with Lattice's compensation offering since those reviews were written? If yes, the rating should be revised to moderate before building comp-focused queries.
Eight buyer pains shape how queries are phrased and weighted. The buyer language fields are how we write the queries — they need to sound like a frustrated HR leader describing their problem in a search bar, not like a vendor describing their solution.
VALIDATION QUESTIONS Three items to verify before query generation: Regrettable Turnover and Top Talent Flight Risk cover adjacent territory — both describe high performers leaving unexpectedly, but one is reactive (we already lost them) and one is proactive (we can see them leaving). If buyers search for "how to identify flight risk employees" and "how to reduce regrettable turnover" as separate query intents, two distinct pain points are the right model and the query clusters should remain separate. If those searches collapse into the same behavior in your category, merging them into a single higher-weight pain point concentrates query volume more accurately. What do your discovery calls suggest about how buyers frame this? Goal Misalignment carries medium confidence — it's the only pain point with uncertainty about how central it is to 15Five's selling motion. Is OKR/goal alignment a genuine entry-point pain that drives people to evaluate platforms, or is it a benefit buyers discover after they've already decided to buy for turnover or manager effectiveness reasons? If it's the latter, a lower query weight is appropriate. Three pain points not in the current set that may be relevant for this category: (1) Performance rating inflation — managers rate everyone "meets expectations" to avoid conflict, making ratings useless for promotion and comp decisions; (2) Remote/hybrid engagement gap — engagement scores diverge sharply between in-office and distributed employees and managers don't know how to close it; (3) Manager accountability gap — managers commit to development goals in 1:1s and then nothing is tracked, with no visibility for HR between cycles. Do any of these drive deals for 15Five?
Layer 1 analysis of www.15five.com — 30 pages reviewed for AI crawler accessibility, structural discoverability, and content signal quality. All six major AI crawlers tested are currently permitted with no blocking rules.
ENGINEERING ACTION ITEMS No critical or high-severity blocking issues found — GPTBot, ChatGPT-User, ClaudeBot, PerplexityBot, Google-Extended, and Googlebot are all permitted in robots.txt. Three medium-severity technical items are actionable before the validation call and will improve baseline visibility before queries run: (1) Expand the XML sitemap — sitemap.xml currently lists only 19 URLs, all blog posts or resource thank-you pages; product pages, solution pages, pricing, and integration pages are completely absent, forcing crawlers to rely on link-following alone for the highest-value commercial content; (2) Investigate the Kreg Tool case study page at /resources/case-studies/how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20 — load it with JavaScript disabled to determine whether the near-empty body content reflects a client-side rendering (CSR) issue or intentional PDF gating; the 20%+ turnover reduction outcome on that page is exactly the type of evidence LLMs cite in vendor evaluation queries; (3) Run Google's Rich Results Test on three product pages and two blog posts to establish a schema markup baseline before the audit runs queries. These items don't require waiting for the validation call.
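If your engineering team wants to reproduce the crawler-access check independently, a minimal sketch using Python's standard library is below. The sample URLs are illustrative commercial pages, and the user-agent strings are the six crawlers named above; this is a quick verification aid, not part of the audit tooling.

```python
# Minimal sketch: confirm none of the six AI crawlers is blocked by robots.txt.
# Sample URLs are illustrative; any page on the site can be substituted.
from urllib.robotparser import RobotFileParser

CRAWLERS = ["GPTBot", "ChatGPT-User", "ClaudeBot",
            "PerplexityBot", "Google-Extended", "Googlebot"]
SAMPLE_URLS = ["https://www.15five.com/",
               "https://www.15five.com/products/perform"]

rp = RobotFileParser()
rp.set_url("https://www.15five.com/robots.txt")
rp.read()  # fetch and parse robots.txt

for agent in CRAWLERS:
    for url in SAMPLE_URLS:
        verdict = "allowed" if rp.can_fetch(agent, url) else "BLOCKED"
        print(f"{agent:<16} {url}: {verdict}")
```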
What we found: The sitemap at sitemap.xml contains exactly 19 URLs — all blog posts or resource thank-you pages with lastmod timestamps from November–December 2025. Zero product pages, zero solution pages, zero pricing pages, zero integration pages, and zero feature subpages appear in any sitemap. No sitemap index file exists (sitemap_index.xml and hs-sitemap.xml both return 404). Core commercial pages including /products/perform, /products/engage, /products/kona, /products/perform/compensation, /pricing, /integrations, /solutions/reduce-regrettable-turnover, and /why-15five are entirely absent from any known sitemap.
Why it matters: AI crawlers and search engines use sitemaps to discover pages and prioritize recrawl frequency using lastmod timestamps. When the highest-value commercial pages are absent, crawlers rely entirely on link-following — pages linked only from secondary navigation may be discovered less frequently than blog content explicitly listed in the sitemap. This is especially significant for recently launched product pages (AMAYA, Kona AI), where prompt crawler discovery helps AI systems learn about new capabilities sooner. The current sitemap actively deprioritizes product and solution content in favor of older blog posts.
Recommended fix: Expand the sitemap to include all commercial pages — product pages, feature subpages, solution pages, pricing, integrations, comparison-redirect pages, and customer stories — with accurate lastmod timestamps. If HubSpot CMS is in use (indicated by /_hcms/ Disallow patterns in robots.txt), verify sitemap page-type inclusion settings in Settings > Website > Pages > Sitemap and ensure all page types are enabled. Submit the updated sitemap to Google Search Console and Bing Webmaster Tools.
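Once the expanded sitemap ships, coverage can be spot-checked with the hedged sketch below: it fetches sitemap.xml and reports which of the commercial paths named in this finding are present. The expected-path list mirrors this finding and should be adjusted to the final URL inventory.

```python
# Sketch: check whether key commercial paths appear in sitemap.xml.
# The EXPECTED list is drawn from the pages cited in this finding.
import requests
import xml.etree.ElementTree as ET

SITEMAP = "https://www.15five.com/sitemap.xml"
EXPECTED = [
    "/products/perform", "/products/engage", "/products/kona",
    "/products/perform/compensation", "/pricing", "/integrations",
    "/solutions/reduce-regrettable-turnover", "/why-15five",
]

ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(requests.get(SITEMAP, timeout=30).content)
listed = {loc.text.strip() for loc in root.findall(".//sm:loc", ns) if loc.text}

for path in EXPECTED:
    present = any(url.rstrip("/").endswith(path) for url in listed)
    print(f"{'OK     ' if present else 'MISSING'} {path}")
```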
What we found: Three URLs indexed in search results as dedicated comparison pages — /15five-vs-lattice, /15five-vs-cultureamp/, and /15five-vs-leapsome/ — all redirect to the generic /why-15five page. The /why-15five page contains no competitor-specific content: it does not mention Lattice, Culture Amp, or Leapsome by name and contains only generic brand messaging. Buyers who follow these comparison URLs from search results land on a page that does not address the query that brought them there. The only substantive Lattice comparison content on the site is a December 2025 blog post that is not linked from the /15five-vs-lattice URL. Culture Amp and Leapsome have no substantive comparison content anywhere on the site.
Why it matters: Competitor comparison pages are among the most frequently cited content types in LLM vendor evaluation queries. When an AI system is asked "How does 15Five compare to Lattice?" it looks for pages that explicitly address that comparison. The three indexed comparison URLs may receive crawler visits for high-intent competitive queries, but the content served provides zero differentiated positioning — wasting whatever link equity and indexing priority those URLs have accumulated.
Recommended fix: Either (a) create dedicated comparison landing pages at the existing URLs with substantive head-to-head content for each competitor, or (b) if comparison pages are not being maintained, implement 301 redirects to blog posts that do contain comparison content (e.g., /15five-vs-lattice → /blog/heres-why-people-choose-15five-over-lattice). Option (a) is strongly preferred. At minimum, create comparison pages for the three most frequently searched primary competitors: Lattice, Culture Amp, and Betterworks.
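To confirm the current behavior, and to re-verify after either option is implemented, the sketch below follows each comparison URL's redirect chain. The three URLs are the ones cited in this finding; the expected destinations depend on which option is chosen, so the script only reports what it observes.

```python
# Sketch: trace where each indexed comparison URL currently resolves.
import requests

COMPARISON_URLS = [
    "https://www.15five.com/15five-vs-lattice",
    "https://www.15five.com/15five-vs-cultureamp/",
    "https://www.15five.com/15five-vs-leapsome/",
]

for url in COMPARISON_URLS:
    resp = requests.get(url, allow_redirects=True, timeout=30)
    chain = " -> ".join(r.url for r in resp.history) or "(no redirect)"
    print(f"{url}\n  redirect chain: {chain}\n  final: {resp.url} [{resp.status_code}]")
```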
What we found: All product pages, solution pages, the /why-15five page, and the pricing page have no visible last-updated dates and are absent from the sitemap — meaning no lastmod signal is available from any source. Freshness could not be determined for 17 of 30 pages analyzed. Blog posts in the sitemap carry lastmod timestamps, but these appear to be bulk-refreshed: multiple posts show sitemap lastmod of 2025-11-25 or 2025-11-26 regardless of original publication date, including posts written in 2017–2019.
Why it matters: AI systems that factor content freshness (including Perplexity's recency weighting and Google's freshness signals for AI Overviews) cannot assess whether 15Five's product and solution pages reflect current capabilities. This matters particularly for recently launched features — AMAYA and Kona AI are new additions, but without freshness signals the pages describing them carry no recency advantage over older competitor pages. Bulk sitemap timestamp refreshes that don't reflect actual content changes can also teach crawlers to distrust and ignore the site's lastmod values entirely.
Recommended fix: Add accurate lastmod timestamps to all commercial pages in the sitemap once the sitemap expansion (finding #1) is complete. Ensure lastmod values reflect actual content modification dates, not bulk CMS publish dates. Consider adding visible "Last updated: [date]" metadata to product and solution pages. Audit the November 2025 bulk sitemap refresh to verify which pages were actually updated vs. which received automated timestamp reassignment.
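As a starting point for that audit, the sketch below groups sitemap URLs by lastmod date; a large cluster on a single date (like the 2025-11-25/26 pattern described above) is the signature of automated timestamp reassignment rather than genuine content updates. The five-URLs-per-date threshold is an arbitrary assumption, not a standard.

```python
# Sketch: count how many sitemap URLs share each lastmod date to surface
# bulk refreshes that likely don't reflect real content changes.
import requests
import xml.etree.ElementTree as ET
from collections import Counter

SITEMAP = "https://www.15five.com/sitemap.xml"
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(requests.get(SITEMAP, timeout=30).content)

dates = Counter(
    el.text.strip()[:10]                     # keep the YYYY-MM-DD portion
    for el in root.findall(".//sm:lastmod", ns)
    if el.text
)
for date, count in dates.most_common():
    flag = "  <- possible bulk refresh" if count >= 5 else ""
    print(f"{date}: {count} URLs{flag}")
```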
What we found: The Kreg Tool case study page at /resources/case-studies/how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20 returned almost exclusively navigation and footer markup with negligible body content — only the headline metric ("reduced turnover by over 20%") and a download button were accessible. Related customer stories presented as blog posts (Pendo, Auror) returned full body content normally. The case study format on this URL appears to use a gated download model (PDF behind a form), rendering the substantive content inaccessible to AI crawlers.
Why it matters: Customer outcome data — specific turnover reduction percentages, engagement improvements, ROI metrics — is the content type LLMs most commonly cite when answering vendor evaluation queries ("How much does 15Five reduce turnover?"). A gated case study means high-credibility, citable evidence is invisible to AI systems. 15Five's published outcomes (Auror 94% retention, Kreg Tool 20%+ turnover reduction, TrustRadius 88% turnover decrease) are powerful competitive differentiators that AI systems cannot access in their current format.
Recommended fix: First, verify with JavaScript disabled whether the page renders body content, to confirm whether the gap is a CSR issue or intentional gating. Then convert the highest-value case studies from gated PDF format to fully accessible HTML pages with inline outcome metrics, challenge/solution narrative, and the specific product features used. Keep the formatted PDF as a downloadable bonus — the form can be offered as an optional "download full report" CTA within the page. This makes the content available to AI crawlers without sacrificing lead capture. Priority case studies to convert: Kreg Tool, TrustRadius, and any others using the gated format.
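The verification step can be scripted rather than done manually in a browser: the sketch below fetches the case study URL without executing JavaScript (the requests library never runs scripts) and counts the extractable body text, approximating what a non-rendering crawler sees. The 300-word threshold is a rough assumption, not a standard.

```python
# Sketch: estimate how much body text the Kreg Tool case study serves
# without JavaScript execution.
import re
import requests

URL = ("https://www.15five.com/resources/case-studies/"
       "how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20")

html = requests.get(URL, timeout=30).text
# Strip scripts, styles, and tags to approximate crawler-visible text.
text = re.sub(r"(?is)<(script|style).*?</\1>", " ", html)
text = re.sub(r"(?s)<[^>]+>", " ", text)
words = text.split()

print(f"Raw HTML: {len(html):,} chars; extractable text: {len(words):,} words")
print("Likely CSR or gated" if len(words) < 300 else "Body content is server-rendered")
```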
What we found: This analysis was conducted using rendered page content (web fetch returns markdown, not raw HTML), so JSON-LD schema blocks, meta tags, and OG tags are not visible in any of the 30 pages analyzed. Whether product pages carry SoftwareApplication schema, blog posts carry Article schema with datePublished/dateModified, the pricing page carries Offer schema, or FAQ sections carry FAQPage schema cannot be determined from this analysis method.
Why it matters: Structured data helps AI crawlers understand page purpose and entity relationships, enabling richer citations in AI-powered search results. Article schema with accurate dateModified enables AI systems to correctly assess content freshness. FAQPage schema on product pages surfaces Q&A content in AI Overviews and Perplexity answers. HubSpot CMS (indicated by robots.txt patterns) includes basic Organization/WebPage schema by default but does not automatically add page-type-specific schema — each page type requires explicit configuration.
Recommended fix: Audit schema implementation using Google's Rich Results Test or a Screaming Frog structured data crawl. Priority items: (1) blog posts — verify Article/BlogPosting schema with author, datePublished, dateModified; (2) pricing page — verify Offer/PriceSpecification schema; (3) product FAQ sections — add FAQPage schema; (4) comparison pages — add WebPage schema with about properties referencing competitor entities once comparison content is restored.
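As a complement to the Rich Results Test, the sketch below pulls JSON-LD blocks out of raw HTML (the layer the rendered-markdown fetch could not see) and reports each block's @type, which is enough to establish a quick baseline. The three page URLs are examples drawn from pages discussed in this document; extend the list to cover all 30 audited URLs.

```python
# Sketch: list JSON-LD @type values found on a handful of pages.
import json
import re
import requests

PAGES = [
    "https://www.15five.com/products/perform",
    "https://www.15five.com/products/engage",
    "https://www.15five.com/pricing",
]

for url in PAGES:
    html = requests.get(url, timeout=30).text
    blocks = re.findall(
        r'(?is)<script[^>]+type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
        html)
    types = []
    for raw in blocks:
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue
        items = data if isinstance(data, list) else [data]
        types += [str(item.get("@type", "?")) for item in items if isinstance(item, dict)]
    print(f"{url}: {types or 'no JSON-LD found'}")
```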
What we found: Meta descriptions and Open Graph tags (og:description, og:image, og:title) are not accessible via rendered markdown analysis. None of the 30 pages analyzed had visible meta description or OG tag content in the fetched output.
Why it matters: Meta descriptions serve as the AI-readable page summary that crawlers reference when generating snippets for search results and AI citations. Missing or duplicated meta descriptions increase the chance that LLMs generate inaccurate page summaries. On the /why-15five page — which currently serves as the redirect destination for three competitor comparison URLs — a well-crafted meta description that names specific differentiators helps AI systems correctly categorize the page's competitive intent even without dedicated comparison content.
Recommended fix: Audit meta descriptions and OG tags using Screaming Frog, an Ahrefs site audit, or browser view-source. Confirm every commercial page has a unique meta description (140–160 characters) with a specific capability claim. For the /why-15five page, ensure the meta description explicitly positions 15Five against named competitors (Lattice, Culture Amp) to preserve competitive signal until dedicated comparison pages are built.
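The same raw-HTML pass covers this item: the sketch below reads the meta description and Open Graph tags from each page using Python's standard-library HTML parser. The two URLs are illustrative; Screaming Frog or an Ahrefs crawl will do the same at scale.

```python
# Sketch: extract meta description and OG tags from a few pages.
from html.parser import HTMLParser
import requests

class MetaCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = {}
    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        key = a.get("name") or a.get("property")
        if key in ("description", "og:title", "og:description", "og:image"):
            self.tags[key] = a.get("content", "")

for url in ["https://www.15five.com/why-15five", "https://www.15five.com/pricing"]:
    parser = MetaCollector()
    parser.feed(requests.get(url, timeout=30).text)
    print(url)
    for key in ("description", "og:title", "og:description", "og:image"):
        print(f"  {key}: {parser.tags.get(key, 'MISSING')}")
```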
PARTIAL ASSESSMENT NOTE Freshness scores are unavailable for 17 of 30 pages analyzed (57%) — all product, solution, integration, and pricing pages carry no date signal from any source, so the 0.53 freshness average reflects only the 13 blog posts where a sitemap lastmod was detectable. Schema coverage is 0/30 assessed — this reflects a methodology limitation (rendered markdown doesn't expose JSON-LD blocks), not confirmed absence of schema. The Rich Results Test audit will establish the actual schema baseline. Content depth (0.66) and passage extractability (0.67) are both in the warning range and are expected to improve once commercial pages are added to the sitemap, freshness signals are established, and schema is verified.
Three steps from here to your full GEO visibility audit.
45–60 minutes to walk through this document together. Confirm the persona set and influence levels, validate the competitive tiers, correct any feature or pain point inaccuracies, and complete the client signoff below. The query set is generated immediately after the call using your confirmed inputs.
150–200 queries run across ChatGPT and Perplexity covering persona-driven discovery queries, head-to-head competitor comparisons (Lattice, Culture Amp, Betterworks, Leapsome, Workleap), feature capability queries, pain point searches, and category evaluation queries. Results are recorded and processed against the confirmed knowledge graph.
Visibility analysis across all query categories, competitive citation benchmarking against the five primary competitors, and a three-layer action plan — technical fixes, content authority gaps, and competitive positioning opportunities — all prioritized by which gaps actually cost 15Five citations in the query results.
START NOW — ENGINEERING Three technical tasks your engineering team can begin before the validation call. These don't depend on the rest of the audit and will improve 15Five's baseline visibility before we measure it: (1) Expand the XML sitemap to include all product pages (/products/perform, /products/engage, /products/kona, /products/perform/compensation), solution pages, pricing, and integration pages — currently 15+ high-value commercial URLs are completely absent from sitemap.xml. If HubSpot CMS manages the sitemap, check Settings > Website > Pages > Sitemap and verify page-type inclusion settings. (2) Investigate the Kreg Tool case study page at /resources/case-studies/how-kreg-tool-skyrocketed-engagement-and-reduced-turnover-by-over-20 with JavaScript disabled — a near-empty body suggests either a CSR rendering gap or intentional PDF gating; the 20%+ turnover reduction outcome on that page is exactly the kind of evidence LLMs cite in vendor comparisons, and it's currently invisible to crawlers. (3) Run Google's Rich Results Test on /products/perform, /products/engage, and two recent blog posts to establish the schema markup baseline — this is a 30-minute task that surfaces any missing Article, FAQPage, or SoftwareApplication schema before the audit runs.
Complete this section during the validation call. Your answers directly configure the query set for Step 4.
WHY THIS MATTERS The query set is generated directly from this document. Generating queries for a persona who never appears in your deals wastes query capacity on searches that don't reflect real buyer behavior. Excluding a competitor who actually wins deals against you leaves a blind spot in your audit results. Ten minutes of corrections here multiplies into more accurate coverage across all 150–200 queries.