Competitive intelligence for AI-mediated buying decisions. Where D2L wins, where it loses, and a prioritized three-layer execution plan — built from 150 buyer queries across ChatGPT + Perplexity.
D2L's content infrastructure is optimized for buyers who already know Brightspace exists — not for the 88.9% of early-funnel moments where buyer criteria are formed.
[Mechanism] Three compounding structural gaps create D2L's visibility pattern. First, five feature categories (mobile, collaboration, integrations, administration, assessment) have no dedicated content pages — when buyers search for these capabilities, AI platforms have no D2L-authored content to cite, leaving competitors or third-party sources to fill the gap. Second, D2L's strongest existing pages (learning analytics, competency tracking, adaptive learning) are written in product-marketing language rather than buyer-comparison language, making them invisible to AI platforms constructing feature-specific answers at the requirements-building and validation stages.
Third, the comparison page architecture is incomplete — D2L has comparison pages for Canvas and Blackboard (now stale) but no feature-specific comparison content for Docebo, Moodle's authoring tools, or Canvas's analytics — leaving Docebo to win 5 of 13 comparison-stage queries by content default, not product superiority.
[Synthesis] Layer 1 technical fixes must precede L2 and L3 work because four stale Comparison pages and three stale product pages (including /brightspace/performance/ dated September 2022) are already receiving depressed AI freshness scores — publishing new or deepened content on pages with stale modification signals wastes the content investment. Specifically, updating the visible modification dates and sitemap lastmod timestamps on /brightspace/performance/ and /brightspace/achievement/ before adding L2 content ensures that AI crawlers register the new content as current rather than inheriting the stale page's freshness penalty.
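The lastmod portion of this fix is mechanical enough to script. A minimal sketch of bumping sitemap `<lastmod>` values for the two stale Brightspace paths — the sitemap contents below are invented for illustration, and this is not D2L's actual tooling:

```python
from xml.etree import ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
ET.register_namespace("", NS)  # serialize without an ns prefix

def bump_lastmod(sitemap_xml: str, paths: set, new_date: str) -> str:
    """Set <lastmod> to new_date for every <url> whose <loc> ends with one of `paths`."""
    root = ET.fromstring(sitemap_xml)
    for url in root.findall(f"{{{NS}}}url"):
        loc = url.find(f"{{{NS}}}loc")
        if loc is None or not any(loc.text.rstrip("/").endswith(p.rstrip("/")) for p in paths):
            continue
        lastmod = url.find(f"{{{NS}}}lastmod")
        if lastmod is None:  # page had no freshness signal at all
            lastmod = ET.SubElement(url, f"{{{NS}}}lastmod")
        lastmod.text = new_date
    return ET.tostring(root, encoding="unicode")

# Hypothetical sitemap fragment mirroring the two stale pages named above
sample = f"""<urlset xmlns="{NS}">
  <url><loc>https://www.d2l.com/brightspace/performance/</loc><lastmod>2022-09-01</lastmod></url>
  <url><loc>https://www.d2l.com/brightspace/achievement/</loc></url>
</urlset>"""

updated = bump_lastmod(sample, {"/brightspace/performance/", "/brightspace/achievement/"}, "2025-06-01")
```

The date should only be bumped when the page content genuinely changes; AI crawlers discount sitemaps whose lastmod values churn without corresponding content updates.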
Where D2L appears and where it doesn't — across personas, buying jobs, and platforms.
[TL;DR] D2L is visible in 29% of buyer queries but wins only 6%; converting visibility into wins is the primary challenge (a 23-point gap). High-intent queries run higher, at 41% visibility.
D2L's 28.7% (43/150) overall visibility masks a structural split: a moderate presence at the comparison stage (56.3%) against near-zero visibility at discovery — meaning D2L shapes almost none of the buyer thinking that precedes a vendor shortlist.
| Dimension | Combined | Platform Delta |
|---|---|---|
| All Queries | 28.7% | Perplexity +3 percentage points |
| By Persona | ||
| Chief Information Officer | 25.8% | Even |
| Director of Online Learning & Instructional Design | 30.3% | Perplexity +3 percentage points |
| LMS Administrator / Educational Technologist | 42.9% | Perplexity +7 percentage points |
| Provost / Chief Academic Officer | 30% | Perplexity +3 percentage points |
| VP of Learning & Development | 14.3% | Perplexity +4 percentage points |
| By Buying Job | ||
| Artifact Creation | 33.3% | ChatGPT +25 percentage points |
| Comparison | 56.3% | Perplexity +6 percentage points |
| Consensus Creation | 8.3% | Perplexity +8 percentage points |
| Problem Identification | 23.1% | Perplexity +8 percentage points |
| Requirements Building | 0% | Even |
| Shortlisting | 36% | Perplexity +8 percentage points |
| Solution Exploration | 12.5% | Perplexity +6 percentage points |
| Validation | 25% | Perplexity +4 percentage points |
| Dimension | ChatGPT | Perplexity |
|---|---|---|
| All Queries | 20.7% | 24% |
| By Persona | ||
| Chief Information Officer | 19.4% | 19.4% |
| Director of Online Learning & Instructional Design | 21.2% | 24.2% |
| LMS Administrator / Educational Technologist | 28.6% | 35.7% |
| Provost / Chief Academic Officer | 23.3% | 26.7% |
| VP of Learning & Development | 10.7% | 14.3% |
| By Buying Job | ||
| Artifact Creation | 33.3% | 8.3% |
| Comparison | 46.9% | 53.1% |
| Consensus Creation | 0% | 8.3% |
| Problem Identification | 7.7% | 15.4% |
| Requirements Building | 0% | 0% |
| Shortlisting | 24% | 32% |
| Solution Exploration | 6.2% | 12.5% |
| Validation | 16.7% | 20.8% |
[Data] Overall visibility: 28.7% (43/150 queries). Early-funnel invisibility: 88.9% (40/45 queries across problem identification, solution exploration, requirements building). Requirements building: 0% visible (0/16 queries).
High-intent stages (Comparison + Shortlisting + Validation): 40.7% visible (33/81 queries), 9.9% win rate (8/81). Collaboration tools: 0% visible (0/11). Mobile experience: 53.8% visible (7/13), 0% win rate (0/7).
[Synthesis] D2L's visibility curve is inverted relative to buyer behavior: lowest at discovery (0–23%), where buyers form criteria, and moderate at comparison (56.3%), where criteria are already set. The requirements-building stage — where buyers write RFPs and define must-have features — has zero D2L visibility, meaning every RFP written without D2L input is shaped by competitors. The mobile and collaboration feature zeros are especially significant: these are top LMS switch triggers, and D2L's zero wins on mobile and zero visibility on collaboration mean buyers who prioritize these criteria never encounter Brightspace as the recommended option.
22 queries won by named competitors · 11 no clear winner · 74 no vendor mentioned
Sorted by competitive damage — competitor-winning queries first.
| ID | Query | Persona | Buying Job | Winner |
|---|---|---|---|---|
| ⚑ Competitor Wins — 22 queries where a named competitor captures the buyer | ||||
| d2l_055 | "Best AI-powered enterprise learning platforms for corporate training and professional development" | VP of Learning & Development | Shortlisting | Docebo |
| d2l_071 | "How hard is it to maintain Moodle integrations compared to using a commercial SaaS LMS?" | Director of Online Learning & Instructional Design | Comparison | Moodle |
| d2l_076 | "Canvas vs Blackboard — which has stronger learning analytics and predictive student insights?" | Chief Information Officer | Comparison | Canvas LMS |
| d2l_080 | "Moodle content authoring vs commercial LMS tools — is the open-source approach good enough for building online courses?" | Director of Online Learning & Instructional Design | Comparison | Moodle |
| d2l_082 | "Docebo AI capabilities for personalized learning — how do they compare to other enterprise learning platforms?" | VP of Learning & Development | Comparison | Docebo |
| d2l_083 | "Moodle collaboration plugins vs built-in LMS collaboration tools — which approach works better for online programs?" | Director of Online Learning & Instructional Design | Comparison | Moodle |
| d2l_086 | "Schoology's PowerSchool SIS integration vs other LMS platforms for K-12 districts" | LMS Administrator / Educational Technologist | Comparison | Schoology |
| d2l_088 | "Our company uses Blackboard for compliance training and it's painful — what are better alternatives for enterprise L&D?" | VP of Learning & Development | Comparison | Docebo |
| d2l_090 | "Moodle mobile app — is it good enough for students or should we invest in a commercial LMS with a better app?" | Chief Information Officer | Comparison | Canvas LMS |
| d2l_093 | "Schoology vs Canvas for K-12 — which has better tools for student collaboration and parent communication?" | Provost / Chief Academic Officer | Comparison | Schoology |
Remaining competitor wins: Canvas LMS ×5, Docebo ×3, Schoology ×2, Moodle ×2. 11 queries with no clear winner. 74 queries with no vendor mentioned. Full query-level data available in the analysis export.
Queries where D2L is mentioned but a competitor is positioned more favorably.
| ID | Query | Persona | Buying Job | Winner | D2L Position |
|---|---|---|---|---|---|
| d2l_002 | "Why do so many faculty refuse to use their university's LMS for course delivery?" | Provost / Chief Academic Officer | Problem Identification | No Clear Winner | Listed |
| d2l_006 | "Students keep complaining our LMS mobile experience is terrible — is it worth switching platforms over this?" | Director of Online Learning & Instructional Design | Problem Identification | No Vendor Mentioned | Listed |
| d2l_008 | "Our LMS crashes every finals week when thousands of students submit at once — should we be looking at more reliable platforms?" | LMS Administrator / Educational Technologist | Problem Identification | No Vendor Mentioned | Listed |
| d2l_015 | "Build versus buy for AI tutoring in higher education — should we add AI to our LMS or use a separate tool?" | Director of Online Learning & Instructional Design | Solution Exploration | No Clear Winner | Listed |
| d2l_021 | "Tired of our LMS mobile app being clunky and unreliable — does any platform actually deliver a good mobile experience for students?" | LMS Administrator / Educational Technologist | Solution Exploration | Canvas LMS | Listed |
| d2l_046 | "Best LMS platforms for large universities with complex SIS and third-party integration needs" | Chief Information Officer | Shortlisting | Canvas LMS | Listed |
| d2l_047 | "Which learning management system is easiest for non-technical faculty to build courses in?" | Provost / Chief Academic Officer | Shortlisting | Canvas LMS | Listed |
| d2l_048 | "Our LMS has no real adaptive learning — looking for a platform that personalizes paths based on student performance" | Director of Online Learning & Instructional Design | Shortlisting | D2L | Listed |
| d2l_049 | "Which enterprise LMS platforms are easiest to administer for a university with 30,000+ students?" | LMS Administrator / Educational Technologist | Shortlisting | Canvas LMS | Listed |
| d2l_050 | "Best learning management systems that handle both academic programs and corporate training on a single platform" | VP of Learning & Development | Shortlisting | D2L | Listed |
| ID | Query | Persona | Buying Job | Winner | D2L Position |
|---|---|---|---|---|---|
| d2l_051 | "LMS platforms with the best-rated mobile apps for students — which ones actually work well on phones?" | Chief Information Officer | Shortlisting | Canvas LMS | Strong 2nd |
| d2l_052 | "Which LMS platforms have the most useful AI features for course design and learner support?" | Director of Online Learning & Instructional Design | Shortlisting | Moodle | Strong 2nd |
| d2l_054 | "Top LMS platforms with flexible assessment tools — rubrics, competency-based grading, and weighted gradebooks" | LMS Administrator / Educational Technologist | Shortlisting | Canvas LMS | Strong 2nd |
| d2l_075 | "Docebo vs D2L for corporate training programs — which platform is better for enterprise L&D?" | VP of Learning & Development | Comparison | Docebo | Strong 2nd |
| d2l_079 | "Canvas mobile app vs Brightspace mobile app — switching LMS and need to know which is actually better for students" | LMS Administrator / Educational Technologist | Comparison | Canvas LMS | Strong 2nd |
| d2l_081 | "Blackboard accessibility compliance — how does it compare to newer cloud-based LMS platforms?" | Chief Information Officer | Comparison | No Clear Winner | Strong 2nd |
| d2l_084 | "Moodle total cost of ownership vs commercial LMS platforms for a university with a small IT team" | Chief Information Officer | Comparison | No Clear Winner | Mentioned In List |
| d2l_085 | "Switching from Blackboard — how do Canvas and D2L compare on gradebook flexibility and assessment tools?" | Provost / Chief Academic Officer | Comparison | Canvas LMS | Strong 2nd |
| d2l_087 | "How does Canvas analytics compare to Brightspace for measuring student engagement and learning outcomes?" | Director of Online Learning & Instructional Design | Comparison | Canvas LMS | Strong 2nd |
| d2l_091 | "Moving off Blackboard — which LMS has the fastest faculty learning curve for course creation?" | Director of Online Learning & Instructional Design | Comparison | Canvas LMS | Strong 2nd |
| d2l_094 | "Docebo vs D2L for skills tracking and competency mapping in enterprise L&D programs" | VP of Learning & Development | Comparison | Docebo | Mentioned In List |
| d2l_095 | "Migrating off Moodle to a commercial LMS — which platforms have the best migration support and content import tools?" | Chief Information Officer | Comparison | Docebo | Mentioned In List |
| d2l_098 | "Canvas accessibility for students with disabilities — how does it compare to other LMS platforms?" | Provost / Chief Academic Officer | Comparison | Canvas LMS | Brief Mention |
| d2l_100 | "Blackboard mobile experience — is it really as bad as reviews say compared to newer LMS apps?" | LMS Administrator / Educational Technologist | Comparison | Canvas LMS | Strong 2nd |
| d2l_106 | "Thinking about leaving Blackboard — is administration really easier on other enterprise LMS platforms?" | LMS Administrator / Educational Technologist | Validation | Canvas LMS | Mentioned In List |
| d2l_108 | "Our students hate the Moodle mobile app — would switching to a SaaS LMS actually fix the mobile experience?" | Chief Information Officer | Validation | No Clear Winner | Mentioned In List |
| d2l_109 | "Are LMS AI features genuinely useful for instructional designers or mostly marketing hype right now?" | Director of Online Learning & Instructional Design | Validation | No Vendor Mentioned | Mentioned In List |
| d2l_115 | "Universities that switched from Blackboard — did faculty adoption actually improve with the new LMS?" | Provost / Chief Academic Officer | Validation | Canvas LMS | Primary Recommendation |
| d2l_125 | "Hidden costs of migrating from Blackboard — what budget surprises should universities plan for?" | Chief Information Officer | Validation | No Clear Winner | Mentioned In List |
| d2l_126 | "LMS mobile app reviews from students — which platforms consistently get the worst ratings on app stores?" | LMS Administrator / Educational Technologist | Validation | Blackboard Learn | Mentioned In List |
| d2l_130 | "Business case for switching corporate LMS — how do we convince the CFO the migration cost is worth it?" | VP of Learning & Development | Consensus Creation | No Vendor Mentioned | Brief Mention |
| d2l_142 | "Build a vendor Comparison scorecard for LMS AI features — we're replacing our current platform and need to evaluate tutoring, automated feedback, and content recommendations" | Director of Online Learning & Instructional Design | Artifact Creation | No Clear Winner | Mentioned In List |
| d2l_144 | "Create a technical evaluation matrix for LMS assessment capabilities — question types, rubrics, gradebook flexibility, proctoring integration, and auto-grading" | LMS Administrator / Educational Technologist | Artifact Creation | No Clear Winner | Mentioned In List |
| d2l_148 | "Draft a migration plan template for moving from Blackboard to a new LMS, covering data migration, faculty training, content transfer, and timeline" | LMS Administrator / Educational Technologist | Artifact Creation | No Clear Winner | Brief Mention |
Who’s winning when D2L isn’t — and who controls the narrative at each buying stage.
[TL;DR] D2L wins 6% of queries (9/150), ranks #2 in SOV — H2H record: 15W–15L across 8 competitors.
D2L's #2 SOV position holds in volume but not in conversion — the 14pp gap between decision-maker and evaluator win rates signals that technical evaluators are where D2L loses deals, often on feature categories (mobile, collaboration, integrations) where content coverage is absent.
| Company | Mentions | Share |
|---|---|---|
| Canvas LMS | 46 | 24.7% |
| D2L | 33 | 17.7% |
| Blackboard Learn | 31 | 16.7% |
| Moodle | 30 | 16.1% |
| Docebo | 17 | 9.1% |
| Schoology | 10 | 5.4% |
| TalentLMS | 8 | 4.3% |
| Absorb LMS | 7 | 3.8% |
| Google Classroom | 4 | 2.2% |
When D2L and a competitor both appear in the same response, who gets the recommendation? One query with multiple competitors generates a matchup against each — so H2H totals will exceed the query count.
Win = D2L was the primary recommendation (by cross-platform majority). Loss = the competitor was. Tie = neither was, or a third party took the recommendation.
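The matchup arithmetic can be sketched in a few lines. This is illustrative only — the query records below are hypothetical, not rows from this analysis:

```python
from collections import Counter

def h2h_record(queries, vendor="D2L"):
    """Tally win/loss/tie for `vendor` against each co-mentioned competitor.

    Each query is a dict: {"mentioned": [...], "winner": name-or-None}.
    A query with N co-mentioned competitors yields N matchups, so H2H
    totals can exceed the query count.
    """
    record = Counter()
    for q in queries:
        if vendor not in q["mentioned"]:
            continue  # no matchup unless both sides appear in the response
        for rival in q["mentioned"]:
            if rival == vendor:
                continue
            if q["winner"] == vendor:
                record[(rival, "win")] += 1
            elif q["winner"] == rival:
                record[(rival, "loss")] += 1
            else:  # no winner, or a third party was recommended
                record[(rival, "tie")] += 1
    return record

# Hypothetical example: two queries producing three matchups
sample = [
    {"mentioned": ["D2L", "Canvas LMS"], "winner": "Canvas LMS"},
    {"mentioned": ["D2L", "Docebo", "Moodle"], "winner": None},
]
rec = h2h_record(sample)
```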
For the 107 queries where D2L is completely absent:
Vendors appearing in responses not in D2L’s defined competitive set.
[Synthesis] D2L's #2 SOV position masks two competitive vulnerabilities. First, query-level win rate is 9.9% (8/81 high-intent queries) — meaning the majority of high-intent queries D2L appears in are won by competitors or produce no recommendation. Second, Docebo — a smaller competitor with 9.1% SOV — defeats D2L in all 4 directly contested H2H queries, specifically in corporate training and AI features categories where D2L has strong product capabilities but weak content positioning.
The 14pp decision-maker advantage (decision makers: 29.4% conditional win rate vs. evaluators: 15.4%) is structurally positive but fragile — the evaluators who surface technical objections are where D2L loses the most ground.
What AI reads and trusts in this category.
[TL;DR] D2L had 22 unique pages cited across buyer queries, ranking #2 among all cited domains. 10 high-authority domains cite competitors but not D2L.
D2L's 22 uniquely cited pages and #2 citation rank confirm that AI platforms do cite Brightspace content when it exists and is structured for extraction — the citation infrastructure works, but only covers the features with existing pages.
Note: Domain-level citation counts (above) tally instances per individual domain. Competitor-level counts (below) aggregate across all domains owned by a single vendor, which may include subdomains.
Non-competitor domains citing other vendors but not D2L — off-domain authority opportunities.
These domains cited competitors but did not cite D2L pages in the queries analyzed. This reflects citation patterns in AI responses, not overall platform presence.
[Synthesis] D2L's #2 citation rank confirms that when AI platforms have D2L content available, they use it — 23 citation instances across 22 unique pages is a meaningful existing citation footprint. The problem is the third-party gap: ten high-authority domains cite competitors but not d2l.com, indicating that for many buyer queries, third-party review sites (G2, Capterra, eLearning Industry) are treated as the authoritative sources rather than D2L's own pages. This gap is addressable through the L2 page-deepening work — AI platforms cite vendor pages when those pages contain buyer-specific, extractable answers that third-party sources don't have.
Three layers of recommendations ranked by commercial impact and implementation speed.
[TL;DR] 18 recommendations targeting 146 queries where D2L is currently invisible: 4 L1 technical fixes plus 1 verification check, 7 content optimizations (L2), and 6 new content initiatives (L3).
The 18 recommendations follow a dependency chain: fix staleness first (L1), deepen existing high-coverage pages next (L2, across 7 pages), then build the structural content voids that account for early-funnel invisibility (L3, across 6 new-content initiatives) — each layer enabling the next.
Reading the priority numbers: recommendations are ranked 1–18 across all three layers by commercial impact × implementation speed. Within each layer, items appear in priority order. Gaps in the sequence (e.g., L1 shows 1, 2, then 14) mean the intervening higher-priority items belong to a different layer.
Configuration and infrastructure changes. Owner: Engineering / DevOps. Timeline: Days to weeks.
| Priority | Finding | Impact | Timeline |
|---|---|---|---|
| #1 | Key Product Pages Show Stale Modification Dates | High | 1-3 days |
| #2 | Stale Competitor Comparison Pages | High | 1-2 weeks |
| #14 | Meta Descriptions and Open Graph Tags Cannot Be Verified | Medium | 1-3 days |
| #15 | Schema Markup Cannot Be Verified — Manual Check Recommended | Medium | 1-3 days |
Items requiring manual review before determining if action is needed.
| Priority | Finding | Impact | Timeline |
|---|---|---|---|
| #18 | Client-Side Rendering Status Should Be Verified | Low | < 1 day |
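Findings #14, #15, and #18 can be spot-checked without a full crawl. A hedged sketch using only the standard library — the sample HTML below is invented; a real check would run against the live page's raw HTML and, for the client-side-rendering question, compare it with the JavaScript-rendered version:

```python
from html.parser import HTMLParser

class HeadAudit(HTMLParser):
    """Collect the meta description, Open Graph tags, and JSON-LD blocks from raw HTML."""
    def __init__(self):
        super().__init__()
        self.meta_description = None
        self.og_tags = {}
        self.json_ld = []
        self._in_json_ld = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta":
            if a.get("name") == "description":
                self.meta_description = a.get("content")
            elif a.get("property", "").startswith("og:"):
                self.og_tags[a["property"]] = a.get("content")
        elif tag == "script" and a.get("type") == "application/ld+json":
            self._in_json_ld = True  # schema markup block starts

    def handle_data(self, data):
        if self._in_json_ld:
            self.json_ld.append(data)

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_json_ld = False

# Invented sample page head for illustration
sample = """<html><head>
<meta name="description" content="Brightspace LMS overview">
<meta property="og:title" content="Brightspace">
<script type="application/ld+json">{"@type": "SoftwareApplication"}</script>
</head><body></body></html>"""

audit = HeadAudit()
audit.feed(sample)
```

If the raw (unrendered) HTML comes back with an empty head or body while the rendered page is populated, that is the client-side-rendering condition finding #18 asks to verify.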
Existing pages that need restructuring or deepening. Owner: Content Team. Timeline: Weeks.
Three gaps on /brightspace/performance/:
- No FERPA or privacy-governance coverage, leaving the page unable to answer CIO and director queries about 'FERPA risks with LMS learning analytics' (d2l_121) — a required disclosure in any analytics tool evaluation for US higher education.
- No structured comparison of Brightspace analytics versus Canvas Analytics and Moodle's third-party reporting stack — causing D2L to lose direct comparison queries (d2l_087 framing: 'How does Canvas analytics compare to Brightspace?') to Canvas's more comparison-ready content.
- Marketing-prose descriptions of analytics capabilities without extractable, specific claims — such as the number of built-in reports, self-service dashboard features, or data freshness intervals — making it difficult for AI platforms to extract authoritative comparison points.
Queries affected: d2l_003, d2l_022, d2l_027, d2l_034, d2l_062, d2l_105, d2l_121, d2l_127, d2l_147
Three gaps on /brightspace/achievement/:
- No accreditation-reporting coverage — missing the regional accreditation body mappings (HLC, SACSCOC) and accreditation evidence generation workflows that provosts need for 'requirements for LMS supporting CBE and accreditation reporting' queries (d2l_037).
- No differentiation of Brightspace's competency tracking from 'just checkbox tracking' — the specific skepticism expressed in d2l_114 ('Docebo competency tracking — does it actually work or is it just checkbox tracking?') — leaving the page unable to answer D2L's own version of this objection.
- No enterprise L&D competency tracking specifications for VP L&D buyers — missing workforce skills mapping, certification expiry and auto-renewal tracking, manager-facing skills dashboards, and HRIS integration, the features that separate corporate from academic competency use cases.
Queries affected: d2l_011, d2l_024, d2l_037, d2l_045, d2l_058, d2l_068, d2l_094, d2l_114, d2l_123, d2l_128, d2l_135, d2l_149
Three gaps on /solutions/corporate/:
- Does not address the LMS vs. LXP question ('What's the difference between an LMS and an LXP — do we need both?', d2l_029), leaving VP L&D buyers who ask it without a D2L answer to cite.
- Describes corporate training capabilities in marketing terms without specific compliance training features — missing named regulatory standards (OSHA, HIPAA, SOX compliance tracking), certification management workflows, and manager-facing dashboard specifications that evaluators use in RFP scoring.
- Does not directly address Docebo's hidden costs, limitations, or specific feature gaps versus Brightspace — /blog/Docebo-alternatives/ exists, but it is a blog post rather than a structured, AI-extractable comparison that supports corporate evaluators at the shortlisting stage.
Queries affected: d2l_007, d2l_020, d2l_029, d2l_035, d2l_050, d2l_061, d2l_107, d2l_122, d2l_130, d2l_138, d2l_143
Three gaps on /brightspace/creator/:
- No faculty adoption outcomes from Blackboard migration scenarios — missing the 'did faculty adoption actually improve after switching to Brightspace?' evidence (d2l_115) that provosts and deans require before recommending a disruptive migration to their institutions.
- No content standards or migration portability specifics — missing SCORM 1.2/2004, xAPI/Tin Can, LTI 1.3, and AICC support documentation and content import workflow descriptions that 'what should we require for content portability?' queries (d2l_043) require.
- No 'what to test during LMS evaluation' content for authoring tools — missing the honest, buyer-perspective guidance that 'biggest regrets after choosing an LMS' (d2l_120) and 'what did institutions wish they'd tested?' queries require to position D2L as a transparent, evaluation-ready vendor rather than a promotional one.
Queries affected: d2l_002, d2l_019, d2l_026, d2l_032, d2l_043, d2l_047, d2l_060, d2l_104, d2l_115, d2l_120, d2l_133, d2l_141
Three gaps on /brightspace/accessibility-plus/:
- Does not link to or summarize Brightspace's VPAT — the single most commonly requested document in LMS accessibility procurement for US public universities — leaving CIO buyers (d2l_145: 'draft an accessibility compliance checklist covering WCAG 2.1 AA, Section 508, and VPAT requirements') without the reference they need from D2L's own page.
- Claims accessibility compliance without mapping to named WCAG 2.1 AA success criteria (e.g., 1.1.1 Non-text Content, 1.3.1 Info and Relationships) that institutional accessibility officers verify during procurement.
- Does not address 'common LMS accessibility failures universities discover after signing the contract' (d2l_112) — a validation-stage concern rooted in previous vendor disappointment — leaving this objection-handling opportunity to third-party review sites rather than D2L's own authoritative content.
Queries affected: d2l_012, d2l_018, d2l_031, d2l_057, d2l_067, d2l_112, d2l_145
Three gaps on /lumi/:
- Describes Lumi's features without measurable outcome data — missing the learning improvement metrics (time-to-competency reduction, course completion rate changes, learner engagement increases) that the 'case studies with measurable results' query (d2l_129) and the 'are AI features useful or hype?' query (d2l_109) require to trust vendor claims.
- Does not address the 'build vs. buy for AI tutoring in higher education' question (d2l_015) — failing to provide the structured build/buy analysis that Directors of Online Learning need when weighing external AI tools against Brightspace's native Lumi capabilities.
- Lacks enterprise L&D AI capability specifications — missing the personalization algorithm descriptions, skills-gap identification features, and content recommendation engine details that VP L&D buyers need to compare Lumi against Docebo's AI positioning for enterprise training (d2l_055, d2l_082).
Queries affected: d2l_010, d2l_015, d2l_028, d2l_041, d2l_052, d2l_055, d2l_109, d2l_129, d2l_142
Three gaps on /solutions/higher-education/competency-based-education/:
- Does not address 'does adaptive learning actually improve outcomes or just sound good in vendor demos?' (d2l_118) — the most common validation-stage concern about adaptive learning — leaving this critical objection unanswered on D2L's most relevant page.
- Lacks specific scale evidence: how many students have used Brightspace's adaptive and personalized learning features, what enrollment sizes institutions have deployed at, and what performance improvements have been measured — making it unable to support shortlisting queries (d2l_063: 'LMS that auto-adjusts content based on student performance').
- Does not differentiate 'adaptive learning' from 'personalized learning paths' — a distinction that buyers (d2l_016) and AI platforms need in order to correctly position Brightspace's capability in responses.
Queries affected: d2l_009, d2l_016, d2l_036, d2l_048, d2l_063, d2l_118, d2l_134
Net new content addressing visibility and positioning gaps. Owner: Content Strategy. Timeline: Months.
When buyers search for LMS mobile app quality — the most common objection raised during shortlisting — D2L has no page to surface. Canvas wins 5 of the 13 mobile-related queries not because Brightspace lacks a mobile app, but because D2L has never published a dedicated mobile content asset for AI platforms to cite. This is an existential gap: at the moment a CIO or LMS admin asks 'which platform has the best mobile app?', D2L is absent from the answer. Creating a single, authoritative mobile feature page would immediately unlock citation opportunities across all 13 queries spanning problem identification through shortlisting.
ChatGPT (high): Mobile-specific shortlisting queries (d2l_051, d2l_064, d2l_079) show Canvas winning via product page citations. ChatGPT responds to structured mobile feature pages with specific specs — a /brightspace/mobile/ page with named app features would directly compete with Canvas's mobile marketing content. Perplexity (high): Perplexity cites third-party sources heavily for mobile queries (app store reviews, G2 mobile ratings). Publishing outcome-backed mobile content with labeled headings and a comparison table would make the page highly extractable for Perplexity's structured response format.
Collaboration tools — group workspaces, peer review, discussion forums, video-integrated learning — are standard evaluation criteria for directors of online learning and provosts building fully online programs. Yet D2L has zero presence across all 11 collaboration-focused queries. Moodle's plugin ecosystem and Canvas's discussion infrastructure are consistently cited while Brightspace's collaboration capabilities are invisible. Because the pain point 'group collaboration, peer review, and discussion tools feel clunky and outdated' registers 0% visibility and 0 wins, buyers evaluating collaboration see D2L's competitors — not D2L — as the answer. A dedicated collaboration content hub with honest feature comparisons and outcome data would move D2L from absent to present across the entire collaboration buying journey.
ChatGPT (medium): Collaboration queries (d2l_083, d2l_113) show Moodle and Canvas winning via community-generated content. ChatGPT cites well-structured feature pages — a /brightspace/collaboration/ page with specific native tool specs would compete with Moodle plugin documentation currently dominating these responses. Perplexity (high): Perplexity pulls from discussion forums, eLearning blogs, and G2 category pages for collaboration queries. Third-party content citing Brightspace's collaboration capabilities (submissions to G2, eLearning Industry, and institutional case studies) would significantly improve citation rates on Perplexity.
The CIO — D2L's veto-holding decision maker — cares most about integration architecture: LTI 1.3 support, SIS connectors, SSO, FERPA-compliant data flows, and API quality. Yet D2L lacks a structured integration hub where AI platforms can extract authoritative claims about Brightspace's ecosystem. With 14 integration-focused queries and 0 wins, every CIO who asks 'which LMS has the best SIS integration?' or 'what are the non-negotiable LTI requirements?' receives an answer that names Canvas or Moodle — never D2L. The structural fix is not content depth on existing pages; it is creating the architectural content type that buyers and AI platforms expect: an integration directory.
ChatGPT (high): CIO-persona integration queries (d2l_030, d2l_044, d2l_140) show ChatGPT producing standards-focused responses where vendor documentation is cited. A structured /brightspace/integrations/ hub with LTI 1.3, SAML, and SIS connector specifications would be directly citable in these responses. Perplexity (high): Integration queries on Perplexity (d2l_042, d2l_103, d2l_116) pull from technical documentation, Educause publications, and vendor integration pages. Perplexity's structured response format is highly compatible with a directory-style integration hub — each integration entry becomes an extractable passage.
CIOs and LMS administrators evaluating enterprise LMS platforms need to see how Brightspace handles multi-campus administration, role-based access, FERPA security controls, and total cost of ownership versus alternatives like Moodle. D2L's current content surfaces Brightspace as a learning platform but not as an enterprise IT system — a critical framing gap for CIO-led evaluations. Canvas wins comparison queries on administration because its documentation is IT-structured and extractable. Building a dedicated IT/admin content hub would reframe Brightspace for the CIO buyer and capture the 13 queries across the full administration buying journey.
ChatGPT (high): CIO-persona admin queries (d2l_044, d2l_066, d2l_099) show ChatGPT citing technical documentation and security compliance pages. A structured IT hub with FERPA, SSO, and multi-tenancy specifications would be directly citable against the Canvas IT documentation that currently wins these queries. Perplexity (medium): Admin-complexity and TCO queries (d2l_084, d2l_117) on Perplexity pull from institutional IT blogs and Educause publications rather than vendor pages. Third-party co-authored content with IT leaders would improve Perplexity citation rates more than vendor page depth alone.
Assessment and grading tools — rubrics, question banks, auto-grading, weighted gradebooks, anti-cheating tools, and certification tracking — are tier-1 evaluation criteria for LMS Administrators and Directors of Online Learning. Canvas consistently wins assessment Comparison queries because its feature documentation is specific and Comparison-ready. D2L's assessment content describes capabilities in product-marketing language rather than buyer-Comparison language: specific question type counts, rubric customization depth, and proctoring integration specifics are missing. Building assessment content with the same specificity as competitors' documentation would convert D2L from a visible but losing option to a winning option in this high-intent feature category.
ChatGPT (high): Assessment Comparison queries (d2l_085, d2l_102) show ChatGPT citing Canvas's feature documentation with specific tool names (SpeedGrader, Quiz Engine). A /brightspace/assessment/ page with equivalent named-feature specificity would directly compete for these citations. Perplexity (high): Peak-performance queries (d2l_111) and rubric/gradebook queries (d2l_054) on Perplexity pull from LMS administrator forums, eLearning industry blogs, and G2 reviews. Outcome-backed case study content on assessment reliability would generate strong Perplexity citation surface.
D2L's existing Comparison pages cover Canvas and Blackboard (and are now stale — see L1 finding stale_comparison_pages), but buyers comparing Brightspace against Docebo for corporate L&D, Moodle for course authoring, or Canvas for learning analytics find no structured Comparison content from D2L's perspective. At the Comparison stage, AI platforms cite the most specific, structured Comparison content available — and currently that content belongs to competitors or third-party review sites. The structural fix is expanding D2L's Comparison page architecture to cover feature-specific pairings where Docebo and Canvas are winning by content default, not product superiority.
ChatGPT (high): Comparison queries (d2l_076, d2l_087, d2l_094) show ChatGPT citing vendor Comparison pages and G2 category comparisons. D2L's existing Comparison pages for Canvas and Blackboard demonstrate that ChatGPT does cite D2L's Comparison content when it exists — the gap is coverage of additional pairings. Perplexity (high): Perplexity pulls from G2 Comparison pages, eLearning Industry head-to-heads, and vendor Comparison pages for all 13 queries in this cluster. New Comparison pages on d2l.com would be immediately indexable and citable — Perplexity's freshness weighting favors newly published structured Comparison content.
All recommendations across all three layers, ranked by commercial impact × implementation speed.
The main Brightspace product page (/brightspace/) shows a last modification date of May 7, 2025 — approximately 10 months ago. The Achievement+ page (/brightspace/achievement/) shows a publication date of July 9, 2024 with no visible update — over 20 months old. The Performance+ page (/brightspace/performance/) shows only a September 2022 publication date with no visible recent modification.
4 of 6 dedicated Comparison pages have not been updated in over 12 months. D2L Brightspace vs. Moodle was last modified August 2024 (~18 months ago). D2L Brightspace vs. Schoology was published February 2024 (~25 months). D2L Brightspace vs. Google Classroom was published January 2023 (~38 months). D2L Brightspace vs. Sakai was published March 2024 (~24 months). Only the Canvas and Blackboard Comparison pages have been updated within the last 90 days.
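A staleness audit like the one above can be automated from the sitemap alone. The sketch below (Python, standard library only) flags any URL whose `lastmod` is missing or older than a chosen cutoff. The example.com URLs, the dates, and the 365-day threshold are illustrative assumptions, not D2L's actual sitemap entries.

```python
from datetime import date, datetime
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
STALE_DAYS = 365  # assumed threshold; pick whatever freshness window the audit uses

def stale_urls(sitemap_xml: str, today: date) -> list:
    """Return sitemap URLs whose <lastmod> is missing or older than STALE_DAYS."""
    root = ET.fromstring(sitemap_xml)
    flagged = []
    for url in root.findall("sm:url", NS):
        loc = url.findtext("sm:loc", default="", namespaces=NS)
        lastmod = url.findtext("sm:lastmod", default=None, namespaces=NS)
        if lastmod is None:
            flagged.append(loc)  # no freshness signal published at all
            continue
        modified = datetime.fromisoformat(lastmod).date()
        if (today - modified).days > STALE_DAYS:
            flagged.append(loc)
    return flagged

# Hypothetical sitemap entries; a real run would fetch d2l.com's sitemap.xml.
sample = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/brightspace-vs-moodle/</loc><lastmod>2024-08-01</lastmod></url>
  <url><loc>https://example.com/brightspace-vs-canvas/</loc><lastmod>2026-01-15</lastmod></url>
</urlset>"""

print(stale_urls(sample, today=date(2026, 3, 1)))
# → ['https://example.com/brightspace-vs-moodle/']
```

Running this on a schedule would catch the next Comparison page drifting past the freshness window before AI platforms start discounting it.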
D2L achieves 0% visibility (0/11 queries) and 0 wins for the Collaboration & Communication Tools feature — the only feature with complete invisibility across all queries. All 11 collaboration queries are L3 gaps because d2l.com has no dedicated collaboration feature page (coverage_status='thin' or insufficient). Moodle wins 2 of the 11 queries and Schoology and Canvas each claim 1 — competitors win by default in D2L's absence.
D2L has no integration directory, LTI marketplace page, or SIS connector hub — causing all 14 integration-ecosystem queries to route to L3 (coverage_status='thin' across all). D2L achieves only 21.4% visibility (3/14) on integration queries and wins 0 of those 3 (0% conditional win rate on visible integration queries). Canvas, Moodle, and Schoology each claim victories on integration-specific Comparison queries where D2L's absence lets competitors define the narrative.
d2l.com has no dedicated mobile learning feature page (coverage_status='missing' across all 13 Mobile Learning Experience queries). Despite appearing as a passive mention in 7 of 13 (53.8%) mobile queries, D2L wins 0 of those 7 (0% conditional win rate on mobile queries), with Canvas claiming 5 of 13 victories by default. The root cause is not underperformance — it is complete content absence for the mobile buying job.
13 of 14 Administration & Role Management queries route to L3 because d2l.com lacks dedicated IT-facing content on multi-tenant administration, security governance, and enterprise admin complexity. D2L achieves 35.7% visibility (5/14) across all admin queries but wins only 1 of 5 visible (20% conditional win rate), with Canvas winning 4 of the contested admin Comparison queries. All 13 L3 admin queries have coverage_status='thin' — content exists in fragments across product pages but never cohesively addresses the IT administrator or CIO buyer.
11 of 12 Assessment & Grading queries are L3 gaps because d2l.com's assessment content is thin (coverage_status='thin' across all 11) and lacks Comparison-ready specificity. D2L achieves 41.7% visibility (5/12) across all assessment queries but wins only 1 of 5 visible (20% conditional win rate), with Canvas winning 3 of the 5 directly contested assessment Comparison queries. The root cause: Brightspace's gradebook and assessment tools are not documented in a format that allows AI platforms to extract specific capability claims against Canvas's well-documented assessment infrastructure.
13 Comparison-buying-job queries across six feature categories (learning analytics, course creation, AI features, corporate training, competency outcomes, accessibility) are L3 gaps because d2l.com's content inventory contains feature landing pages but not Comparison page types for these specific pairings. All 13 received affinity overrides during routing: existing pages have adequate feature depth but the wrong page structure for Comparison-stage buyers. Docebo wins 5 of these 13 Comparison queries and Canvas wins 4 — both by virtue of having Comparison content that Brightspace has not yet published.
The /brightspace/performance/ page does not address FERPA data handling or privacy governance, leaving it unable to answer CIO and director queries about 'FERPA risks with LMS learning analytics' (d2l_121) — a required disclosure in any analytics tool evaluation for US higher education.
The /brightspace/achievement/ page does not address accreditation reporting requirements — missing the specific regional accreditation body mappings (HLC, SACSCOC) and accreditation evidence generation workflows that provosts need for 'requirements for LMS supporting CBE and accreditation reporting' queries (d2l_037).
The /solutions/corporate/ page does not address the LMS vs. LXP question ('What's the difference between an LMS and an LXP — do we need both?', d2l_029), leaving VP L&D buyers who ask this specific question without a D2L answer to cite.
The /brightspace/creator/ page does not address faculty adoption outcomes from Blackboard migration scenarios — missing the specific 'did faculty adoption actually improve after switching to Brightspace?' evidence (d2l_115) that provosts and deans require before recommending a disruptive migration to their institutions.
The /brightspace/accessibility-plus/ page does not link to or summarize Brightspace's VPAT — the single most commonly requested document in LMS accessibility procurement for US public universities — leaving CIO buyers (d2l_145: 'draft an accessibility compliance checklist covering WCAG 2.1 AA, Section 508, and VPAT requirements') without the reference they need from D2L's own page.
Meta descriptions and Open Graph (OG) tags could not be assessed from the rendered page output. These HTML head elements are not visible in the rendered markdown content returned by our analysis method.
JSON-LD structured data (schema.org markup) could not be assessed across any of the 42 analyzed pages. Our analysis method returns rendered page content as markdown, which strips HTML-embedded schema blocks. We cannot confirm whether appropriate schema types (Product, Article, FAQ, HowTo, Organization) are implemented on commercially relevant pages.
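Both head-element gaps above stem from auditing rendered markdown rather than raw HTML, so a follow-up pass over the raw HTML source could confirm or rule them out. The sketch below is a minimal, standard-library-only version of that check; the sample head fragment is hypothetical, standing in for a real d2l.com page.

```python
import json
import re

def audit_head_elements(raw_html: str) -> dict:
    """Check raw HTML (not rendered markdown) for meta description,
    Open Graph tags, and JSON-LD schema types."""
    report = {
        "meta_description": bool(
            re.search(r'<meta[^>]+name=["\']description["\']', raw_html, re.I)
        ),
        "og_tags": re.findall(
            r'<meta[^>]+property=["\'](og:[^"\']+)["\']', raw_html, re.I
        ),
        "schema_types": [],
    }
    # Pull every JSON-LD block and record its declared schema.org @type.
    for block in re.findall(
        r'<script[^>]+type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
        raw_html,
        re.I | re.S,
    ):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue  # malformed JSON-LD is itself a finding worth logging
        items = data if isinstance(data, list) else [data]
        report["schema_types"] += [
            item.get("@type") for item in items if isinstance(item, dict)
        ]
    return report

# Hypothetical page head; a real audit would fetch each of the 42 pages' raw HTML.
sample = """<head>
<meta name="description" content="Compare Brightspace and Canvas feature by feature.">
<meta property="og:title" content="Brightspace vs. Canvas" />
<script type="application/ld+json">{"@type": "FAQPage"}</script>
</head>"""

print(audit_head_elements(sample))
```

Regex extraction is a rough heuristic, not an HTML parser, but it is enough to turn "could not be assessed" into a concrete per-page inventory of which schema types (Product, Article, FAQ, HowTo, Organization) are actually present.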
The /lumi/ page describes Lumi's features without any measurable outcome data — missing the specific learning improvement metrics (time-to-competency reduction, course completion rate changes, learner engagement increases) that the 'case studies with measurable results' query (d2l_129) and the 'are AI features useful or hype?' query (d2l_109) require to trust vendor claims.
The /solutions/higher-education/competency-based-education/ page does not address the 'does adaptive learning actually improve outcomes or just sound good in vendor demos?' question (d2l_118) — the most common Validation concern about adaptive learning — leaving this critical objection unanswered on D2L's most relevant page.
All 42 fetched pages returned substantial rendered content, suggesting the site is primarily server-rendered (consistent with WordPress). However, client-side rendering (CSR) detection signals — such as framework-specific div containers, noscript fallback content, and JavaScript bundle analysis — are not available through our analysis method. We cannot definitively confirm that all page content is accessible without JavaScript execution.
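If raw HTML becomes available, the missing CSR signals could be approximated with simple heuristics. The sketch below checks three of them: an empty framework mount div, a noscript fallback, and the text-to-markup ratio. The sample pages and the framework id list (root, app, __next) are illustrative assumptions, and no single signal is conclusive on its own.

```python
import re

def csr_signals(raw_html: str) -> dict:
    """Heuristic client-side-rendering signals from raw (pre-JavaScript) HTML."""
    visible_text = re.sub(r"<[^>]+>", " ", raw_html)
    visible_text = re.sub(r"\s+", " ", visible_text).strip()
    return {
        # Empty app-mount divs are typical of React/Vue/Next CSR shells.
        "framework_root": bool(
            re.search(r'<div[^>]+id=["\'](root|app|__next)["\'][^>]*>\s*</div>', raw_html)
        ),
        # A <noscript> fallback implies some content depends on JS execution.
        "noscript_warning": "<noscript>" in raw_html.lower(),
        # A very low text-to-markup ratio suggests a JS-hydrated shell.
        "text_ratio": round(len(visible_text) / max(len(raw_html), 1), 3),
    }

# Hypothetical pages: one server-rendered, one a bare CSR shell.
server_rendered = (
    "<html><body><h1>Brightspace</h1><p>" + "Full product copy. " * 50 + "</p></body></html>"
)
csr_shell = '<html><body><div id="root"></div><noscript>Enable JavaScript</noscript></body></html>'

print(csr_signals(server_rendered))
print(csr_signals(csr_shell))
```

A page flagged on two or more signals would be a candidate for manual confirmation that its content is reachable without JavaScript, closing the gap this finding leaves open.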
All three workstreams can start this week.
[Synthesis] Layer 1 technical fixes execute first because stale modification signals on /brightspace/performance/, /brightspace/achievement/, and four Comparison pages actively suppress AI platform freshness scoring for the pages that L2 and L3 improvements will build from — fixing staleness before deepening content avoids publishing high-quality content onto pages that AI crawlers are already deprioritizing. L2 remediations on seven existing pages unlock 66 query wins by converting D2L's existing coverage into AI-extractable, Comparison-ready content. L3 NIO content closes the structural gaps that account for D2L's 88.9% early-funnel invisibility — specifically, three critical-priority NIOs (mobile, collaboration, integrations) covering features with zero or near-zero wins across all queries.