Before we run the audit, we need to make sure we're asking the right questions about the right competitors to the right buyers. This document presents what we've learned about Undaunted's market — your job is to tell us what we got right, what we got wrong, and what we missed.
Before we measure citation visibility in the autonomous robotic security space, these three signals tell us whether AI crawlers can access and trust Undaunted's site content.
AI search is reshaping how commercial property operators and construction firms discover autonomous robotic security solutions — and the category is early enough that no vendor has established dominant GEO visibility. Companies that build citation presence now will compound that advantage as AI platforms learn to trust their domain. Undaunted's quadruped robotic patrol differentiator positions it distinctly in a landscape where most competitors offer wheeled or stationary platforms, but that positioning only matters if AI platforms can find, index, and cite the content that communicates it.
This Foundation Review presents the competitive landscape that shapes how buyer queries will be constructed, the buyer personas whose search patterns determine query intent, and the technical baseline that determines whether AI platforms can access Undaunted's content at all. The sections are designed to be validated together — the competitive set determines head-to-head matchups, the personas drive the query architecture, and the technical findings tell us what engineering can fix before the audit measures anything.
The validation call is a decision-making session with real stakes. Two types of decisions need to happen: first, input validation — are the right personas, competitors, and features in the right tiers to drive accurate buyer queries? Second, engineering triage — which of the structural findings should engineering prioritize before the audit runs, and which can wait? The specific items are in the Pre-Call Checklist at the end of this document.
Three things to know before you start.
What this is: This document presents our outside-in understanding of Undaunted's market in the autonomous robotic security space — the competitors buyers compare you against, the personas who drive purchase decisions, the capabilities they evaluate, and the frustrations that trigger their search. Every element here feeds into the buyer query set that powers the GEO audit.
What you need to do: Look for the purple question boxes throughout the document. These are the points where your knowledge of your market matters most. Each question names a specific entity and explains what changes in the audit if the answer differs from what we've assumed.
Confidence badges: Every data point carries a confidence badge. High means sourced from public data with strong corroboration; Medium means inferred from patterns or limited sources; Low means a best guess that needs validation. Medium and Low items are your top review priorities.
The company profile anchors entity resolution across AI platforms — how search engines and LLMs identify and categorize Undaunted when buyers ask about robotic security solutions.
→ Validate The knowledge graph (KG) categorizes Undaunted as serving "commercial properties" broadly, but the site emphasizes construction sites and industrial facilities specifically. Does Undaunted actively sell into other verticals — commercial real estate, retail, government, or residential communities? If additional verticals are active, we add persona clusters and query types for each distinct buying conversation, potentially expanding the query set by 30-40%.
5 personas: 3 decision-makers, 1 evaluator, 1 influencer. These personas drive the buyer query set — each one searches differently for autonomous robotic security solutions.
Critical Review Area: Personas have the highest impact on audit accuracy. Each persona generates a distinct query cluster — wrong personas mean wrong queries, which mean misleading visibility scores. Review each card and flag any that don't match who actually shows up in your deals.
Data Sourcing Note: Role, department, seniority, influence level, veto power, and technical level are sourced from the knowledge graph. Buying jobs and query focus areas are synthesized from the persona's role and the client's category to illustrate how each persona's search behavior differs. All 5 personas are inferred from industry patterns — no G2 reviews or published case studies exist for this pre-seed startup.
→ Does the VP of Operations hold direct budget authority for security vendor selection, or does security purchasing report through Tamika Williams' security function? If Ops doesn't own the line item, we reclassify Marcus as an evaluator and reduce validation-stage queries.
→ On construction sites, does the Director of Construction initiate the robotic security evaluation, or does the general contractor make that decision? If GCs drive the purchase, Danielle becomes an influencer and we add a General Contractor persona to the query set.
→ For your typical customer, does the COO personally evaluate security vendors, or is this delegated to directors? If the COO only signs off on pre-vetted recommendations, we remove early-funnel executive queries and focus Robert's query cluster on validation and approval-stage searches.
→ Is the Director of Security the primary budget holder for robotic patrol, or does budget authority sit with Operations (Marcus Delgado)? If Tamika owns the budget, she becomes the primary persona driving 40%+ of the query architecture and we weight technical evaluation queries heavily.
→ Does the regional property manager influence vendor selection, or only manage the post-deployment relationship? If Jason is post-sale only, we remove him from the buyer query set entirely and reallocate those queries to the decision-makers.
→ Missing Personas? Who else shows up in your deals? Plausible missing roles: CFO or Finance Director (if ROI justification is a distinct approval gate separate from operations), General Contractor or Construction Project Manager (if GC firms are the actual purchaser on job sites rather than property owners), Risk or Insurance Manager (if liability reduction and workers' comp savings are the primary buying driver). What roles are we missing?
5 primary + 3 secondary competitors identified. Tier assignments determine which head-to-head matchups appear in the buyer query set.
Tier Impact: Getting these tiers right determines which queries test direct competitive differentiation vs. category awareness. Queries like "Knightscope vs Undaunted" and "best robotic security patrol for construction sites" are built from the primary set — approximately 30-40 head-to-head queries depend on these assignments. Three primary competitors — Asylon Robotics, Robotic Assistance Devices (RAD), and SMP Robotics — are medium-confidence tier assignments. If any rarely appear in actual deals, moving them to secondary would shift approximately 6-8 queries per competitor out of the head-to-head set.
→ Validate Three primary competitors — Asylon Robotics, RAD, and SMP Robotics — are medium-confidence tier assignments based on category listings, not deal data. Do any of these rarely appear in actual competitive evaluations? Are there traditional security guard companies (Allied Universal, Securitas) that buyers are comparing Undaunted against for the same budget line? Are there drone-based security companies we're missing? Who else shows up in your deals?
10 buyer-level capabilities mapped. These features determine which capability queries are tested in the audit — what buyers search for when comparing autonomous robotic security solutions.
• Robotic security guard that autonomously patrols my property 24/7 without human staffing
• Security cameras with thermal imaging that can detect intruders in complete darkness
• Trained operators watching my property feeds and calling police in real time when something happens
• Security system that can talk to trespassers and warn them off before they commit a crime
• Security robot that can handle stairs, curbs, hills, dirt, and rough construction site terrain
• Security solution I can deploy on a new site within 24 hours without running wires or internet
• Smart motion detection that can tell the difference between a real threat and a false alarm
• Robotic security that works with my existing cameras, access control, and alarm systems
• Dashboard showing patrol data, incident reports, and security metrics I can share with stakeholders
• Security robot that can patrol inside my building lobbies, hallways, and parking garages
→ Validate Six features are rated "strong" for a pre-seed startup — are these ratings accurate relative to Knightscope's and Cobalt Robotics' mature, funded offerings? Is "Integration with Existing Security Systems" (rated moderate, low confidence) actually available today, or should it be rated absent? Is "AI-Powered Threat Detection" (moderate) comparable to competitors' detection capabilities, or is moderate too generous given the startup stage? Are there buyer-level capabilities we're missing — compliance certifications, fleet management, or insurance integration?
8 pain points: 5 high, 3 medium severity. Pain point buyer language is how queries will be phrased — these are the frustrations that trigger a buyer to search for autonomous robotic security solutions.
→ Validate Is construction theft truly the highest-severity pain for your buyers, or does guard shortage and cost drive more initial conversations? Does the buyer language accurately reflect how your prospects describe these frustrations? Are there pains we missed — regulatory compliance requirements (OSHA site safety mandates, insurance carrier requirements for active monitoring), drone or aerial intrusion concerns, or seasonal security scaling for construction projects that ramp up and down? What pain point opens the most deals?
8 findings from the technical site analysis. No critical or high-severity blockers — all findings are medium or low severity structural items that engineering can address.
Engineering Action: No critical blockers, but several medium-severity structural gaps affect how AI crawlers discover and assess Undaunted's content. Engineering should prioritize: create a sitemap.xml with lastmod timestamps (currently missing — crawlers have no sitemap to guide discovery), fix the 'About Us' navigation link if it points to the broken /about-us path (which returns 404), and verify schema markup using Google's Rich Results Test. These tasks are independent of the audit and can start immediately.
What we found: The main site navigation displays an 'About Us' label, while the path https://www.getundaunted.com/about-us returns a 404 error. The actual about page lives at /why-undaunted. If the navigation links to /about-us rather than /why-undaunted, this is a broken link on every page of the site.
Why it matters: A broken page linked from the main navigation is visible to every AI crawler and search engine that indexes the site. Googlebot and AI crawlers encountering 404s from primary navigation links may reduce crawl confidence for the entire domain. Additionally, an About Us page is a high-value entity resolution page — AI platforms use company background pages to build entity profiles that inform citation decisions.
Recommended fix: Verify whether the navigation 'About Us' link points to /about-us or /why-undaunted using browser DevTools. If it points to /about-us, either update the link to /why-undaunted or implement a 301 redirect from /about-us to /why-undaunted.
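If the navigation link itself is the culprit, the fix is a one-line change in the Webflow Designer. If a redirect is the better option (for example, if external pages already link to /about-us), Webflow supports 301 redirects natively. A minimal sketch, assuming the current Webflow UI (the exact menu location and field labels vary by plan and version):

```text
Webflow: Site settings > Publishing > 301 redirects
  Old path:    /about-us
  Redirect to: /why-undaunted
```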
What we found: No sitemap.xml exists at https://www.getundaunted.com/sitemap.xml (returned 404). The site has only 12 discoverable commercial pages, and without a sitemap, AI crawlers must rely entirely on link-following from the homepage to discover content.
Why it matters: Relying on link discovery alone, crawlers cannot determine page priority or freshness — two signals that influence citation probability. As the site grows with blog posts, case studies, and comparison pages, undiscovered content will be invisible to AI platforms.
Recommended fix: Generate an XML sitemap including all commercially relevant pages with accurate lastmod dates. For a Webflow-hosted site, enable the auto-generated sitemap in Site Settings > SEO. Submit the sitemap to Google Search Console and Bing Webmaster Tools.
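For reference, a minimal sitemap covering two of the known pages could look like the sketch below. The lastmod values are placeholders, and whether Webflow's auto-generated sitemap emits lastmod values should be verified after enabling it:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.getundaunted.com/</loc>
    <lastmod>2026-02-23</lastmod><!-- placeholder: use the page's real last-edit date -->
  </url>
  <url>
    <loc>https://www.getundaunted.com/why-undaunted</loc>
    <lastmod>2026-02-23</lastmod><!-- placeholder -->
  </url>
  <!-- one <url> entry per commercially relevant page, 12 in total -->
</urlset>
```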
What we found: Only 1 of 12 pages (the investment announcement dated February 23, 2026) displays a visible publication or last-updated date. The remaining 11 pages — including all product and industry pages — lack any detectable on-page freshness signal, and with no sitemap providing lastmod timestamps, there is no freshness signal available for them at all.
Why it matters: AI platforms deprioritize content with no freshness signals. Research shows 76.4% of AI-cited pages were updated within 30 days. The complete absence of freshness signals across the site — no sitemap lastmod, no visible dates — means AI crawlers have no way to determine whether content reflects current product capabilities.
Recommended fix: Add lastmod timestamps to the sitemap (primary fix — addresses all pages at once). For blog/press content, display visible publication dates. Consider adding a 'Last updated' date to the pricing page and how-it-works page, as these are most sensitive to staleness.
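Where a visible 'Last updated' date is added, marking it up with a machine-readable <time> element gives crawlers an unambiguous signal. An illustrative snippet, with a placeholder date:

```html
<p>Last updated: <time datetime="2026-02-23">February 23, 2026</time></p>
```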
What we found: The homepage primary heading appears to be a marketing tagline ('Coverage at Half the Cost') rather than a descriptive heading that communicates what Undaunted is and does. Similarly, the pricing page H1 is 'Zero Maintenance. Simple Setup.' — a benefit statement rather than a heading that signals this is a pricing page.
Why it matters: AI models use H1 headings as the primary signal for page topic classification. A marketing tagline as H1 means AI platforms may not correctly categorize the homepage when responding to queries about robotic security, autonomous patrol, or commercial property security solutions.
Recommended fix: Update the homepage H1 to a descriptive heading that includes key terms: company name, product category, and primary value proposition. Example: 'Undaunted — Autonomous Robotic Security Patrols for Commercial Properties.' Update the pricing page H1 to 'Undaunted Pricing — Robotic Security Plans.' Keep marketing taglines as H2s or subheadings.
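In HTML terms the change is small. A sketch of the homepage heading structure, using the example wording above (the tagline is kept, just demoted to a subheading):

```html
<!-- Descriptive H1 carries the entity name and product category -->
<h1>Undaunted — Autonomous Robotic Security Patrols for Commercial Properties</h1>
<!-- Marketing tagline moves to a subheading -->
<h2>Coverage at Half the Cost</h2>
```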
What we found: Our analysis method returns rendered page content as markdown, which does not include JSON-LD structured data blocks. We cannot confirm whether the site implements schema markup (Organization, Product, LocalBusiness, FAQ, or Article schemas) on any of its 12 pages.
Why it matters: Schema markup provides explicit semantic signals to AI platforms and search engines. Pages with appropriate schema types are more likely to be accurately categorized and cited by LLMs. For a startup competing against established players like Knightscope and Cobalt Robotics, schema markup is a low-effort way to improve signal quality.
Recommended fix: Test all commercially relevant pages using Google's Rich Results Test or Schema.org Validator. Implement at minimum: Organization schema on the homepage, Service schema on the how-it-works and industry pages, PriceSpecification on the pricing page, and Article schema on the investment announcement.
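As a starting point, a minimal Organization schema for the homepage could look like the sketch below. The description is drawn from the positioning in this document and should be replaced with approved copy; logo and sameAs links can be added once confirmed:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Undaunted",
  "url": "https://www.getundaunted.com",
  "description": "Autonomous quadruped robotic security patrols for construction sites and commercial properties."
}
</script>
```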
What we found: No robots.txt file exists at https://www.getundaunted.com/robots.txt. All crawlers are implicitly allowed to access all pages, but the site has no explicit crawler access policy.
Why it matters: Without a robots.txt, the site cannot selectively manage crawler access or reference its sitemap location. An explicit policy allows the company to permit known AI crawlers while blocking unwanted scrapers. A robots.txt with a Sitemap directive also helps crawlers discover the sitemap faster.
Recommended fix: Create a robots.txt file that explicitly allows major AI crawlers (GPTBot, ChatGPT-User, ClaudeBot, PerplexityBot, Googlebot) and references the sitemap location once created.
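A starting-point robots.txt, assuming the goal is to welcome the named AI crawlers and advertise the sitemap (the default rule for all other agents is a policy decision to confirm on the call):

```text
User-agent: GPTBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Googlebot
Allow: /

# Default: allow everything else (tighten later if scraper traffic becomes a problem)
User-agent: *
Allow: /

Sitemap: https://www.getundaunted.com/sitemap.xml
```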
What we found: Our analysis returns rendered text content, not raw HTML. We cannot confirm whether pages include meta descriptions, Open Graph tags, or Twitter Card markup.
Why it matters: Meta descriptions influence how AI platforms summarize page content in citations. OG tags control how pages appear when shared. Missing or generic meta descriptions can lead to AI platforms generating inaccurate summaries of the company's positioning.
Recommended fix: Verify meta descriptions and OG tags using a social preview tool or browser DevTools. Ensure every commercial page has a unique, descriptive meta description (under 160 characters) and complete OG tags.
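For each commercial page, the head tags to verify look roughly like this. All copy and the image URL below are placeholders to be replaced with the page's actual positioning and assets:

```html
<meta name="description" content="Undaunted deploys autonomous robotic security patrols for construction sites and commercial properties, 24/7, without human staffing.">
<meta property="og:title" content="Undaunted | Autonomous Robotic Security">
<meta property="og:description" content="24/7 robotic security patrols for construction sites and commercial properties.">
<meta property="og:type" content="website">
<meta property="og:url" content="https://www.getundaunted.com/">
<meta property="og:image" content="https://www.getundaunted.com/og-image.png"><!-- placeholder asset -->
```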
What we found: The site appears to be built on Webflow (based on cookie consent patterns and page structure). While Webflow sites are typically server-rendered and AI-crawler-friendly, we cannot confirm client-side rendering (CSR) status from rendered output alone. All pages returned substantive text content, suggesting no major rendering issues.
Why it matters: AI crawlers like GPTBot and ClaudeBot do not execute JavaScript. If any critical content is loaded via JavaScript after initial page load (testimonial carousels, statistics counters, video sections), it would be invisible to most AI indexing systems.
Recommended fix: Test the site with JavaScript disabled in Chrome DevTools to verify all critical content renders without JS. Pay particular attention to testimonials, statistics, and video sections. If using Webflow (likely), this is typically not an issue but worth confirming.
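Beyond the DevTools check, a quick script can approximate what a non-JS crawler receives: fetch the raw HTML and confirm key copy appears in it. A minimal sketch using only the Python standard library; the phrases are placeholders to swap for exact strings from the rendered pages:

```python
# Approximate a non-JS crawler: fetch raw HTML (no JavaScript executed)
# and check that critical copy is present in the initial response.
import urllib.request

URL = "https://www.getundaunted.com/"
EXPECTED_PHRASES = [  # placeholders: use exact copy from the rendered page
    "robotic security",
    "Coverage at Half the Cost",
]

req = urllib.request.Request(URL, headers={"User-Agent": "Mozilla/5.0"})
with urllib.request.urlopen(req) as resp:
    html = resp.read().decode("utf-8", errors="replace")

for phrase in EXPECTED_PHRASES:
    status = "FOUND" if phrase.lower() in html.lower() else "MISSING"
    print(f"{status}: {phrase!r}")
```

Any MISSING result flags content that is likely injected by JavaScript and therefore invisible to most AI indexing systems.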
Partial Assessment: Freshness and schema coverage scores are unreliable. 11 of 12 pages have no detectable freshness signal, and schema markup could not be verified through our analysis method. Engineering should verify both manually; these scores will be more meaningful after a sitemap with lastmod timestamps is in place.
Why Now
• AI search adoption is accelerating — buyer discovery patterns for security solutions are shifting quarter over quarter as ChatGPT, Perplexity, and Google AI Overviews become default research tools
• Early citations compound: domains that AI platforms learn to trust now get cited more frequently as training data accumulates
• Competitors who establish GEO visibility first create a structural disadvantage for late movers — Knightscope and Cobalt Robotics already have deeper content footprints
• Autonomous robotic security is still early-innings in GEO optimization — acting now means competing against inaction, not against entrenched strategies
The full audit will measure Undaunted's citation visibility across buyer queries like "best robotic security for construction sites," "autonomous patrol robot vs security guards cost," and "robotic security companies for commercial properties" — revealing exactly which queries return results that include Knightscope, Cobalt, or Asylon but not Undaunted, and what it would take to appear. Fixing the structural issues identified in Layer 1 — sitemap creation, freshness signals, descriptive H1 headings — improves the technical baseline before the audit measures anything, giving Undaunted the strongest possible foundation to build on.
45-60 minute session to walk through this document. We'll confirm personas, competitors, features, and pain points — and lock in the inputs that drive the buyer query set.
Buyer queries are generated from the validated KG and executed across selected AI platforms — ChatGPT, Perplexity, Claude, Google AI Overviews — to measure citation visibility.
Visibility analysis, competitive positioning, content gap prioritization, and a three-layer action plan — technical fixes, content creation priorities, and strategic positioning recommendations.
Start Now — Engineering: These don't depend on the rest of the audit and will improve your baseline visibility before we even measure it:
• Create sitemap.xml — Enable Webflow's auto-generated sitemap with accurate lastmod dates. This is the single highest-impact structural fix for AI crawler discoverability across all 12 pages.
• Fix the broken /about-us navigation link — Either update the nav link to point to /why-undaunted or implement a 301 redirect. This broken link appears on every page.
• Create robots.txt — Add a robots.txt that explicitly allows AI crawlers (GPTBot, ClaudeBot, PerplexityBot) and references the new sitemap location.
• Verify schema markup — Test all pages with Google's Rich Results Test. Implement Organization, Service, and PriceSpecification schemas as needed.
Two jobs before we meet. The questions on the left require your judgment — no one knows your business better than you. The engineering tasks on the right don't require the call at all.