AI Visibility Audit

D2L
Visibility Report

Competitive intelligence for AI-mediated buying decisions. Where D2L wins, where it loses, and a prioritized three-layer execution plan — built from 150 buyer queries across ChatGPT + Perplexity.

150 Buyer Queries
5 Personas
8 Buying Jobs
ChatGPT + Perplexity
March 7, 2026

TL;DR

28.7%
Visibility
43 of 150 queries
6%
Win Rate
9 wins of 150 queries
107
Invisible
queries where D2L absent
18
Recommendations
targeting 146 gap queries
Three things to know
D2L is #2 in AI presence but converts only 9.9% of high-intent queries into wins
D2L's 33 AI mentions rank it second among all LMS competitors — above Blackboard (31), Moodle (30), and Docebo (17). Yet across 81 high-intent queries (Comparison, Shortlisting, Validation), D2L wins only 8, producing a 9.9% win rate (8/81) against a 40.7% visibility rate (33/81). The 31pp gap between being seen and being chosen reflects a consistent pattern: D2L's content appears in AI responses but lacks the specific, comparison-ready claims that convert mentions into recommendations.
31pp conversion gap · high-intent queries
Four stale Comparison pages are suppressing AI freshness scoring at the exact moment buyers compare vendors
The L1 audit identifies 4 comparison pages (vs. Moodle, Schoology, Google Classroom, Sakai) with modification dates ranging from roughly 19 to 38 months ago. Research shows 76.4% of AI-cited pages were updated within 30 days — stale pages are actively deprioritized. These comparison pages target the Comparison buying job where D2L achieves its highest visibility (56.3%, 18/32 queries), meaning the staleness penalty is concentrated at D2L's most competitive stage. Refreshing these 4 pages is a 1–2 week fix with direct impact on D2L's highest-value buying job.
4 stale pages · Comparison buying job
D2L wins 0 of 11 collaboration queries — not because the product is weak, but because no content exists
The Collaboration & Communication Tools feature registers 0% visibility and 0 wins across all 11 collaboration-focused queries — the only feature in the audit with complete invisibility. Moodle and Canvas win collaboration queries by default because d2l.com has no dedicated collaboration feature page to cite. Brightspace has native collaboration capabilities, but with coverage_status='thin' or missing across all collaboration queries, AI platforms have no D2L content to surface. A single /brightspace/collaboration/ page would unlock all 11 queries currently lost to competitors with no product advantage.
Content void · 11 collaboration queries
Section 1
The Late Arrival Problem: How D2L's #2 AI Presence Collapses Before Buyers Start Comparing

D2L's content infrastructure is optimized for buyers who already know Brightspace exists — not for the 88.9% of early-funnel moments where buyer criteria are formed.

Early Funnel — Where D2L is visible but not winning
Requirements Building
0%
Solution Exploration
12.5%
Problem Identification
23.1%
Late Funnel — Where D2L competes
Comparison
56.3%
Shortlisting
36%
Artifact Creation
33.3%
Validation
25%
Consensus Creation
8.3%

[Mechanism] Three compounding structural gaps create D2L's visibility pattern. First, five feature categories (mobile, collaboration, integrations, administration, assessment) have no dedicated content pages — when buyers search for these capabilities, AI platforms have no D2L-authored content to cite, leaving competitors or third-party sources to fill the gap. Second, D2L's strongest existing pages (learning analytics, competency tracking, adaptive learning) are written in product-marketing language rather than buyer-comparison language, making them invisible to AI platforms constructing feature-specific answers at the Requirements Building and Validation stages.

Third, the comparison page architecture is incomplete — D2L has comparison pages for Canvas and Blackboard (the only two refreshed within the last 90 days) plus four stale ones, but no feature-specific comparison content for Docebo, Moodle's authoring tools, or Canvas's analytics — leaving Docebo winning 5 of 13 Comparison queries by content default, not product superiority.

Layer 1
Fix Content Staleness
4 technical fixes — stale modification dates on the main /brightspace/ page, /brightspace/achievement/, /brightspace/performance/, and 4 comparison pages, plus meta-tag and schema verification — and 1 client-side-rendering check ensure AI platforms assign correct freshness signals to the pages that L2 deepening and L3 content creation will depend on.
4 fixes + 1 check · Days to 2 weeks
Layer 2
Deepen Existing Pages
7 L2 optimization recommendations across 7 existing pages (learning analytics, corporate training, AI features, adaptive learning, competency tracking, accessibility, course creation) convert D2L's existing coverage into AI-extractable, comparison-ready content targeting the 66 queries where D2L has a matched page but loses the citation.
7 recommendations · 2–6 weeks
Layer 3
Build Structural Gaps
75 L3 new content items across 6 NIO clusters create the content infrastructure for feature categories and comparison page types that currently have zero D2L presence — targeting the 88.9% early-funnel invisibility that prevents D2L from shaping buyer criteria before the Comparison stage begins.
6 recommendations · 1–3 months

[Synthesis] Layer 1 technical fixes must precede L2 and L3 work because four stale comparison pages and three stale product pages (including /brightspace/performance/ dated September 2022) are already receiving depressed AI freshness scores — publishing new or deepened content on pages with stale modification signals wastes the content investment. Specifically, updating the visible modification dates and sitemap lastmod timestamps on /brightspace/performance/ and /brightspace/achievement/ before adding L2 content ensures that AI crawlers register the new content as current rather than inheriting the stale page's freshness penalty.

Reference
How to Read This Report

Visibility

Whether D2L is mentioned at all in an AI response to a buyer query. Being visible does not mean being recommended — it just means D2L appeared somewhere in the answer.

Win Rate

Of the queries where D2L is visible, the percentage where it is the primary recommendation — the vendor the AI tells the buyer to evaluate first.

Share of Voice (SOV)

How often a vendor is mentioned by AI across all 150 buyer queries. Measures brand presence in AI-generated answers, not ad spend or traditional media.

Buying Jobs

The 8 non-linear tasks buyers perform during a purchase: Problem Identification, Solution Exploration, Requirements Building, Shortlisting, Comparison, Validation, Consensus Creation, and Artifact Creation.

NIO

Narrative Intelligence Opportunity — a cluster of related buyer queries where D2L has no content. Each NIO includes a blueprint of on-domain pages and off-domain actions to close the gap.

L1 / L2 / L3

The three execution layers. L1 = technical infrastructure fixes. L2 = optimization of existing pages. L3 = new content creation and off-domain authority building.

Citation

When an AI tool references a specific webpage as its source. AI systems build recommendations from cited pages — if your pages aren't cited, your content didn't influence the answer.

Invisible Query

A buyer query where D2L does not appear in the AI response at all. Distinct from a positioning gap, where D2L appears but is not the recommended vendor.
Section 2
Visibility Analysis

Where D2L appears and where it doesn't — across personas, buying jobs, and platforms.

[TL;DR] D2L is visible in 29% of buyer queries but wins only 6%. Converting visibility to wins is the primary challenge (23pp gap). High-intent visibility runs higher at 40.7%.

D2L's 28.7% (43/150) overall visibility masks a structural split: moderate presence at Comparison (56.3%) collapses to near-zero at discovery — meaning D2L shapes almost none of the buyer thinking that precedes a vendor shortlist.

Platform Visibility

+3 percentage points
Perplexity leads ChatGPT overall
+7 percentage points
LMS Administrator / Educational Technologist — widest persona swing
+25 percentage points
Artifact Creation — widest stage swing (ChatGPT leads)
Dimension · Combined · Platform Delta
All Queries · 28.7% · Perplexity +3 percentage points
By Persona
Chief Information Officer · 25.8% · Even
Director of Online Learning & Instructional Design · 30.3% · Perplexity +3 percentage points
LMS Administrator / Educational Technologist · 42.9% · Perplexity +7 percentage points
Provost / Chief Academic Officer · 30% · Perplexity +3 percentage points
VP of Learning & Development · 14.3% · Perplexity +4 percentage points
By Buying Job
Artifact Creation · 33.3% · ChatGPT +25 percentage points
Comparison · 56.3% · Perplexity +6 percentage points
Consensus Creation · 8.3% · Perplexity +8 percentage points
Problem Identification · 23.1% · Perplexity +8 percentage points
Requirements Building · 0% · Even
Shortlisting · 36% · Perplexity +8 percentage points
Solution Exploration · 12.5% · Perplexity +6 percentage points
Validation · 25% · Perplexity +4 percentage points
Per-platform breakdown (ChatGPT vs Perplexity raw %)
Dimension · ChatGPT · Perplexity
All Queries · 20.7% · 24%
By Persona
Chief Information Officer · 19.4% · 19.4%
Director of Online Learning & Instructional Design · 21.2% · 24.2%
LMS Administrator / Educational Technologist · 28.6% · 35.7%
Provost / Chief Academic Officer · 23.3% · 26.7%
VP of Learning & Development · 10.7% · 14.3%
By Buying Job
Artifact Creation · 33.3% · 8.3%
Comparison · 46.9% · 53.1%
Consensus Creation · 0% · 8.3%
Problem Identification · 7.7% · 15.4%
Requirements Building · 0% · 0%
Shortlisting · 24% · 32%
Solution Exploration · 6.2% · 12.5%
Validation · 16.7% · 20.8%

Visibility by Buying Job

Artifact Creation · 33.3% (4/12)
Comparison · 56.3% (18/32)
Consensus Creation · 8.3% (1/12)
Problem Identification · 23.1% (3/13)
Requirements Building · 0% (0/16)
Shortlisting · 36% (9/25)
Solution Exploration · 12.5% (2/16)
Validation · 25% (6/24)
High-intent visibility
Shortlisting + Comparison + Validation
40.7% (33/81)
High-intent win rate · 9.9% (8/81)
Appearance → win conversion · 24.2% (8/33)
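
Two denominators drive the win-rate figures above — all 81 high-intent queries versus only the 33 where D2L appeared. A worked check against the audit's own counts:

$$\text{visibility} = \tfrac{33}{81} \approx 40.7\%,\qquad \text{win rate (all high-intent)} = \tfrac{8}{81} \approx 9.9\%,\qquad \text{conversion (visible only)} = \tfrac{8}{33} \approx 24.2\%$$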

Visibility & Win Rate by Persona

Chief Information Officer · 25.8% vis · 12.5% win (1/8)
Director of Online Learning & Instructional Design · 30.3% vis · 20% win (2/10)
LMS Administrator / Educational Technologist · 42.9% vis · 16.7% win (2/12)
Provost / Chief Academic Officer · 30% vis · 44.4% win (4/9)
VP of Learning & Development · 14.3% vis · 0% win (0/4)
Decision-maker win rate
Chief Information Officer + Provost / Chief Academic Officer
29.4% (5/17 visible)
Evaluator win rate
Director of Online Learning & Instructional Design + LMS Administrator / Educational Technologist + VP of Learning & Development
15.4% (4/26 visible)
Role type gap · 14 percentage points

Visibility by Feature Focus

Accessibility Compliance · 22.2% vis (2/9) · 0% win (0/2)
Adaptive Learning Paths · 40% vis (4/10) · 75% win (3/4)
Admin User Management · 35.7% vis (5/14) · 20% win (1/5)
AI Features · 41.7% vis (5/12) · 20% win (1/5)
Assessment Grading · 41.7% vis (5/12) · 20% win (1/5)
Collaboration Tools · 0% vis (0/11) · 0% win (0/0)
Competency Outcomes · 15.4% vis (2/13) · 50% win (1/2)
Corporate Training · 21.4% vis (3/14) · 0% win (0/3)
Course Creation Authoring · 33.3% vis (5/15) · 20% win (1/5)
Integrations Ecosystem · 21.4% vis (3/14) · 0% win (0/3)
Learning Analytics · 15.4% vis (2/13) · 50% win (1/2)
Mobile Experience · 53.8% vis (7/13) · 0% win (0/7)

Visibility by Pain Point

Complex Administration · 42.9% vis (3/7) · 33.3% win (1/3)
Faculty Adoption Resistance · 57.1% vis (4/7) · 25% win (1/4)
Integration Friction · 12.5% vis (1/8) · 0% win (0/1)
LMS Migration Risk · 50% vis (4/8) · 0% win (0/4)
Peak Performance Issues · 66.7% vis (2/3) · 50% win (1/2)
Poor Mobile Experience · 53.8% vis (7/13) · 0% win (0/7)
Reporting Complexity · 25% vis (2/8) · 50% win (1/2)
Steep Learning Curve · 25% vis (1/4) · 0% win (0/1)
Weak Collaboration Features · 0% vis (0/8) · 0% win (0/0)

[Data] Overall visibility: 28.7% (43/150 queries). Early-funnel invisibility: 88.9% (40/45 queries across Problem Identification, Solution Exploration, and Requirements Building). Requirements Building: 0% visible (0/16 queries).

High-intent stages (Comparison + Shortlisting + Validation): 40.7% visible (33/81 queries), 9.9% win rate (8/81). Collaboration tools: 0% visible (0/11). Mobile experience: 53.8% visible (7/13), 0% win rate (0/7).

[Synthesis] D2L's visibility curve is inverted relative to buyer behavior: lowest at discovery (0–23%) where buyers form criteria, moderate at Comparison (56.3%) where criteria are already set. The Requirements Building stage — where buyers write RFPs and define must-have features — has zero D2L visibility, meaning every RFP written without D2L input is shaped by competitors. The mobile and collaboration feature zeros are especially significant: these are top LMS switch triggers, and D2L's complete absence means buyers who prioritize these criteria never encounter Brightspace as an option.

Invisibility Gaps — 107 Queries Where D2L Doesn’t Appear

22 queries won by named competitors · 11 no clear winner · 74 no vendor mentioned

Sorted by competitive damage — competitor-winning queries first.

ID · Query · Persona · Stage · Winner
⚑ Competitor Wins — 22 queries where a named competitor captures the buyer
d2l_055 · "Best AI-powered enterprise learning platforms for corporate training and professional development" · VP of Learning & Development · Shortlisting · Docebo
d2l_071 · "How hard is it to maintain Moodle integrations compared to using a commercial SaaS LMS?" · Director of Online Learning & Instructional Design · Comparison · Moodle
d2l_076 · "Canvas vs Blackboard — which has stronger learning analytics and predictive student insights?" · Chief Information Officer · Comparison · Canvas LMS
d2l_080 · "Moodle content authoring vs commercial LMS tools — is the open-source approach good enough for building online courses?" · Director of Online Learning & Instructional Design · Comparison · Moodle
d2l_082 · "Docebo AI capabilities for personalized learning — how do they compare to other enterprise learning platforms?" · VP of Learning & Development · Comparison · Docebo
d2l_083 · "Moodle collaboration plugins vs built-in LMS collaboration tools — which approach works better for online programs?" · Director of Online Learning & Instructional Design · Comparison · Moodle
d2l_086 · "Schoology's PowerSchool SIS integration vs other LMS platforms for K-12 districts" · LMS Administrator / Educational Technologist · Comparison · Schoology
d2l_088 · "Our company uses Blackboard for compliance training and it's painful — what are better alternatives for enterprise L&D?" · VP of Learning & Development · Comparison · Docebo
d2l_090 · "Moodle mobile app — is it good enough for students or should we invest in a commercial LMS with a better app?" · Chief Information Officer · Comparison · Canvas LMS
d2l_093 · "Schoology vs Canvas for K-12 — which has better tools for student collaboration and parent communication?" · Provost / Chief Academic Officer · Comparison · Schoology

Remaining competitor wins: Canvas LMS ×5, Docebo ×3, Schoology ×2, Moodle ×2. 11 queries with no clear winner. 74 queries with no vendor mentioned. Full query-level data available in the analysis export.

Positioning Gaps — 34 Queries Where D2L Appears But Loses

Queries where D2L is mentioned but a competitor is positioned more favorably.

ID · Query · Persona · Buying Job · Winner · D2L Position
d2l_002 · "Why do so many faculty refuse to use their university's LMS for course delivery?" · Provost / Chief Academic Officer · Problem Identification · No Clear Winner · Listed
d2l_006 · "Students keep complaining our LMS mobile experience is terrible — is it worth switching platforms over this?" · Director of Online Learning & Instructional Design · Problem Identification · No Vendor Mentioned · Listed
d2l_008 · "Our LMS crashes every finals week when thousands of students submit at once — should we be looking at more reliable platforms?" · LMS Administrator / Educational Technologist · Problem Identification · No Vendor Mentioned · Listed
d2l_015 · "Build versus buy for AI tutoring in higher education — should we add AI to our LMS or use a separate tool?" · Director of Online Learning & Instructional Design · Solution Exploration · No Clear Winner · Listed
d2l_021 · "Tired of our LMS mobile app being clunky and unreliable — does any platform actually deliver a good mobile experience for students?" · LMS Administrator / Educational Technologist · Solution Exploration · Canvas LMS · Listed
d2l_046 · "Best LMS platforms for large universities with complex SIS and third-party integration needs" · Chief Information Officer · Shortlisting · Canvas LMS · Listed
d2l_047 · "Which learning management system is easiest for non-technical faculty to build courses in?" · Provost / Chief Academic Officer · Shortlisting · Canvas LMS · Listed
d2l_048 · "Our LMS has no real adaptive learning — looking for a platform that personalizes paths based on student performance" · Director of Online Learning & Instructional Design · Shortlisting · d2l · Listed
d2l_049 · "Which enterprise LMS platforms are easiest to administer for a university with 30,000+ students?" · LMS Administrator / Educational Technologist · Shortlisting · Canvas LMS · Listed
d2l_050 · "Best learning management systems that handle both academic programs and corporate training on a single platform" · VP of Learning & Development · Shortlisting · d2l · Listed
d2l_051"LMS platforms with the best-rated mobile apps for students — which ones actually work well on phones?"Chief Information OfficerShortlistingCanvas LMSStrong 2nd
d2l_052"Which LMS platforms have the most useful AI features for course design and learner support?"Director of Online Learning & Instructional DesignShortlistingMoodleStrong 2nd
d2l_054"Top LMS platforms with flexible assessment tools — rubrics, competency-based grading, and weighted gradebooks"LMS Administrator / Educational TechnologistShortlistingCanvas LMSStrong 2nd
d2l_075"Docebo vs D2L for corporate training programs — which platform is better for enterprise L&D?"VP of Learning & DevelopmentComparisonDoceboStrong 2nd
d2l_079"Canvas mobile app vs Brightspace mobile app — switching LMS and need to know which is actually better for students"LMS Administrator / Educational TechnologistComparisonCanvas LMSStrong 2nd
d2l_081"Blackboard accessibility compliance — how does it compare to newer cloud-based LMS platforms?"Chief Information OfficerComparisonNo Clear WinnerStrong 2nd
d2l_084"Moodle total cost of ownership vs commercial LMS platforms for a university with a small IT team"Chief Information OfficerComparisonNo Clear WinnerMentioned In List
d2l_085"Switching from Blackboard — how do Canvas and D2L compare on gradebook flexibility and assessment tools?"Provost / Chief Academic OfficerComparisonCanvas LMSStrong 2nd
d2l_087"How does Canvas analytics compare to Brightspace for measuring student engagement and learning outcomes?"Director of Online Learning & Instructional DesignComparisonCanvas LMSStrong 2nd
d2l_091"Moving off Blackboard — which LMS has the fastest faculty learning curve for course creation?"Director of Online Learning & Instructional DesignComparisonCanvas LMSStrong 2nd
d2l_094"Docebo vs D2L for skills tracking and competency mapping in enterprise L&D programs"VP of Learning & DevelopmentComparisonDoceboMentioned In List
d2l_095"Migrating off Moodle to a commercial LMS — which platforms have the best migration support and content import tools?"Chief Information OfficerComparisonDoceboMentioned In List
d2l_098"Canvas accessibility for students with disabilities — how does it compare to other LMS platforms?"Provost / Chief Academic OfficerComparisonCanvas LMSBrief Mention
d2l_100"Blackboard mobile experience — is it really as bad as reviews say compared to newer LMS apps?"LMS Administrator / Educational TechnologistComparisonCanvas LMSStrong 2nd
d2l_106"Thinking about leaving Blackboard — is administration really easier on other enterprise LMS platforms?"LMS Administrator / Educational TechnologistValidationCanvas LMSMentioned In List
d2l_108"Our students hate the Moodle mobile app — would switching to a SaaS LMS actually fix the mobile experience?"Chief Information OfficerValidationNo Clear WinnerMentioned In List
d2l_109"Are LMS AI features genuinely useful for instructional designers or mostly marketing hype right now?"Director of Online Learning & Instructional DesignValidationNo Vendor MentionedMentioned In List
d2l_115"Universities that switched from Blackboard — did faculty adoption actually improve with the new LMS?"Provost / Chief Academic OfficerValidationCanvas LMSPrimary Recommendation
d2l_125"Hidden costs of migrating from Blackboard — what budget surprises should universities plan for?"Chief Information OfficerValidationNo Clear WinnerMentioned In List
d2l_126"LMS mobile app reviews from students — which platforms consistently get the worst ratings on app stores?"LMS Administrator / Educational TechnologistValidationBlackboard LearnMentioned In List
d2l_130"Business case for switching corporate LMS — how do we convince the CFO the migration cost is worth it?"VP of Learning & DevelopmentConsensus CreationNo Vendor MentionedBrief Mention
d2l_142"Build a vendor Comparison scorecard for LMS AI features — we're replacing our current platform and need to evaluate tutoring, automated feedback, and content recommendations"Director of Online Learning & Instructional DesignArtifact CreationNo Clear WinnerMentioned In List
d2l_144"Create a technical evaluation matrix for LMS assessment capabilities — question types, rubrics, gradebook flexibility, proctoring integration, and auto-grading"LMS Administrator / Educational TechnologistArtifact CreationNo Clear WinnerMentioned In List
d2l_148"Draft a migration plan template for moving from Blackboard to a new LMS, covering data migration, faculty training, content transfer, and timeline"LMS Administrator / Educational TechnologistArtifact CreationNo Clear WinnerBrief Mention
Section 3
Competitive Position

Who’s winning when D2L isn’t — and who controls the narrative at each buying stage.

[TL;DR] D2L wins 6% of queries (9/150), ranks #2 in SOV — H2H record: 15W–15L across 8 competitors.

D2L's #2 SOV position holds in volume but not in conversion — the 14pp gap between decision-maker and evaluator win rates signals that technical evaluators are where D2L loses deals, often on feature categories (mobile, collaboration, integrations) where content coverage is absent.

Share of Voice

Company · Mentions · Share
Canvas LMS · 46 · 24.7%
D2L · 33 · 17.7%
Blackboard Learn · 31 · 16.7%
Moodle · 30 · 16.1%
Docebo · 17 · 9.1%
Schoology · 10 · 5.4%
TalentLMS · 8 · 4.3%
Absorb LMS · 7 · 3.8%
Google Classroom · 4 · 2.1%
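
Share here is each vendor's mention count over all mentions tallied in the table (186 total):

$$\text{SOV}_{\text{D2L}} = \tfrac{33}{186} \approx 17.7\%$$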

Head-to-Head Records

When D2L and a competitor both appear in the same response, who gets the recommendation? One query with multiple competitors generates a matchup against each — so H2H totals will exceed the query count.

Win = D2L was the primary recommendation (cross-platform majority). Loss = the competitor was. Tie = neither was, or a third party won.

vs. Canvas LMS · 7W – 10L – 9T (26 mentioned together)
vs. Blackboard Learn · 4W – 1L – 13T (18 mentioned together)
vs. Moodle · 2W – 0L – 14T (16 mentioned together)
vs. Docebo · 0W – 4L – 3T (7 mentioned together)
vs. Schoology · 1W – 0L – 3T (4 mentioned together)
vs. Absorb LMS · 0W – 0L – 5T (5 mentioned together)
vs. Google Classroom · 0W – 0L – 2T (2 mentioned together)
vs. TalentLMS · 1W – 0L – 5T (6 mentioned together)
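
A minimal sketch of how these records can be tallied from per-query results. The record schema (`competitors_present`, `primary`) is an illustrative assumption, not the audit's actual export format:

```python
# Tally head-to-head records from per-query results (hypothetical schema).
from collections import defaultdict

# One record per query: vendors co-mentioned with D2L, and the primary
# recommendation (cross-platform majority), or None for no clear winner.
queries = [
    {"competitors_present": ["Canvas LMS", "Moodle"], "primary": "Canvas LMS"},
    {"competitors_present": ["Docebo"], "primary": None},
    {"competitors_present": ["Blackboard Learn"], "primary": "D2L"},
]

records = defaultdict(lambda: {"W": 0, "L": 0, "T": 0})
for q in queries:
    for rival in q["competitors_present"]:
        # Each co-mentioned rival yields one matchup, which is why H2H
        # totals exceed the raw query count.
        if q["primary"] == "D2L":
            records[rival]["W"] += 1
        elif q["primary"] == rival:
            records[rival]["L"] += 1
        else:
            records[rival]["T"] += 1  # neither won, or a third party did

for rival, r in sorted(records.items()):
    print(f"vs. {rival}: {r['W']}W – {r['L']}L – {r['T']}T")
```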

Invisible Query Winners

For the 107 queries where D2L is completely absent:

Canvas LMS · 7 wins (6.5%)
Docebo · 6 wins (5.6%)
Moodle · 5 wins (4.7%)
Schoology · 4 wins (3.7%)
Uncontested (no winner) · 85 queries (79.4%)

Surprise Competitors

Vendors appearing in responses not in D2L’s defined competitive set.

— 19.4% SOV · Flagged
SAP Litmos — 3.2% SOV · Flagged
Cornerstone — 3.2% SOV · Flagged
360Learning — 2.7% SOV · Flagged
Cornerstone OnDemand — 2.7% SOV · Flagged
LearnUpon — 2.1% SOV · Flagged
LearnWorlds — 2.1% SOV · Flagged
Google Classroom — 2.1% SOV · Flagged
iSpring Learn — 1.6% SOV · Flagged
Edsby — 1.6% SOV · Flagged
CYPHER Learning — 1.6% SOV · Flagged

[Synthesis] D2L's #2 SOV position masks two competitive vulnerabilities. First, query-level win rate is 9.9% (8/81 high-intent queries) — meaning the majority of high-intent queries D2L appears in are won by competitors or produce no recommendation. Second, Docebo — a smaller competitor with 9.1% SOV — defeats D2L in all 4 directly contested H2H queries, specifically in corporate training and AI features categories where D2L has strong product capabilities but weak content positioning.

The 14pp decision-maker advantage (decision makers: 29.4% conditional win rate vs. evaluators: 15.4%) is structurally positive but fragile — the evaluators who surface technical objections are where D2L loses the most ground.

Section 4
Citation & Content Landscape

What AI reads and trusts in this category.

[TL;DR] D2L had 22 unique pages cited across buyer queries, ranking #2 among all cited domains. 10 high-authority domains cite competitors but not D2L.

D2L's 22 uniquely cited pages and #2 citation rank confirm that AI platforms do cite Brightspace content when it exists and is structured for extraction — the citation infrastructure works, but only covers the features with existing pages.

Top Cited Domains (citation instances)

reddit.com · 42
d2l.com · 31
instructure.com · 20
elearningindustry.com · 18
docebo.com · 16
research.com · 11
selecthub.com · 11
en.wikipedia.org · 10
teachfloor.com · 9
linkedin.com · 8
paradisosolutions.com · 7
cypherlearning.com · 7
g2.com · 7
pmc.ncbi.nlm.nih.gov · 7
lmsportals.com · 7
techradar.com · 6
uc.powerschool-docs.com · 6
community.instructure.com · 6
docs.moodle.org · 6
mdpi.com · 6

D2L URL Citations by Page

www.d2l.com/blog/ai-lms · 3
www.d2l.com/blog/lms-platforms-advanced-analytics · 2
www.d2l.com/why-d2l/compare/d2l-brightspace-vs-... · 2
community.d2l.com/brightspace/kb/articles/3541-... · 1
www.d2l.com/why-d2l/why-faculty-prefer-brightspace · 1
www.d2l.com/why-d2l/compare · 1
www.d2l.com/blog/five-reasons-why-you-need-to-s... · 1
www.d2l.com/blog/blackboard-learn-vs-d2l-... · 1
www.d2l.com/blog/brightspace-ranked-1-as-the-ea... · 1
community.d2l.com/brightspace/kb/articles/33146... · 1
www.d2l.com/blog/best-lms-platforms · 1
www.d2l.com/newsroom/d2l_advances_breakthrough_... · 1
www.d2l.com/newsroom/new-d2l-lumi-features-enha... · 1
www.d2l.com/en-eu/brightspace/the-best-mobile-l... · 1
www.d2l.com/accessibility/standards · 1
www.d2l.com/brightspace/accessibility-plus · 1
community.d2l.com/brightspace/kb/articles/33113... · 1
www.d2l.com/brightspace/performance-plus · 1
www.d2l.com/blog/moodle-alternatives · 1
www.d2l.com/blog/uscs-successful-transition-fro... · 1
www.d2l.com/blog/leaving-uncertainty-behind-rea... · 1
www.d2l.com/blog/blended-learning-lms · 1
Total D2L unique pages cited · 22
D2L domain rank · #2

Competitor URL Citations

Note: Domain-level citation counts (above) tally instances per individual domain. Competitor-level counts (below) aggregate across all domains owned by a single vendor, which may include subdomains.

Docebo · 12 URL citations
Canvas LMS · 12 URL citations
Moodle · 6 URL citations
Blackboard Learn · 5 URL citations
Schoology · 4 URL citations
Absorb LMS · 1 URL citation

Third-Party Citation Gaps

Non-competitor domains citing other vendors but not D2L — off-domain authority opportunities.

These domains cited competitors but did not cite D2L pages in the queries analyzed. This reflects citation patterns in AI responses, not overall platform presence.

reddit.com · 42 citations · D2L not cited
elearningindustry.com · 18 citations · D2L not cited
research.com · 11 citations · D2L not cited
selecthub.com · 11 citations · D2L not cited
en.wikipedia.org · 10 citations · D2L not cited

[Synthesis] D2L's #2 citation rank confirms that when AI platforms have D2L content available, they use it — 23 citation instances on www.d2l.com plus 3 on community.d2l.com, across 22 unique pages, is a meaningful existing citation footprint. The problem is the third-party gap: 10 high-authority domains cite competitors but never cite D2L, indicating that for many buyer queries, third-party review sites (G2, Capterra, eLearning Industry) are the authoritative sources, not d2l.com. This gap is addressable through the L2 page deepening work — AI platforms cite vendor pages when those pages contain buyer-specific, extractable answers that third-party sources don't have.

Section 5
Prioritized Action Plan

Three layers of recommendations ranked by commercial impact and implementation speed.

[TL;DR] 18 recommendations targeting 146 queries where D2L is currently invisible: 4 L1 technical fixes + 1 verification check, 7 content optimizations (L2), and 6 new content initiatives (L3).

The 18 recommendations follow a dependency chain: fix staleness first (L1), deepen existing high-coverage pages next (L2 across 7 pages), then build the structural content voids that account for early-funnel invisibility (L3 across 6 NIOs) — each layer enabling the next.

Reading the priority numbers: Recommendations are ranked 1–18 across all three layers by commercial impact × implementation speed. Within each layer, items appear in priority order. Gaps in the sequence (e.g., L1 shows 1, 2, then 14) mean higher-priority items belong to a different layer.

Layer 1 Technical Fixes

Configuration and infrastructure changes. Owner: Engineering / DevOps. Timeline: Days to weeks.

Priority · Finding · Impact · Timeline
#1 · Key Product Pages Show Stale Modification Dates · High · 1–3 days

Issue: The main Brightspace product page (/brightspace/) shows a last modification date of May 7, 2025 — approximately 10 months ago. The Achievement+ page (/brightspace/achievement/) shows a publication date of July 9, 2024 with no visible update — over 20 months old. The Performance+ page (/brightspace/performance/) shows only a September 2022 publication date with no visible recent modification.

Fix: Review and update the main /brightspace/ product page, /brightspace/achievement/, and /brightspace/performance/ pages with current product capabilities, recent customer metrics, and updated award recognitions. Ensure visible dates on the page reflect the update (e.g., 'Last updated March 2026'). Also verify that sitemap lastmod timestamps are being set correctly for these pages.
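
A minimal sketch of the sitemap half of this fix, assuming a standard single-file sitemap.xml; the paths and date below are illustrative, and lastmod should only be bumped when the page content has genuinely changed:

```python
# Update <lastmod> for refreshed pages in a standard sitemap.xml.
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
ET.register_namespace("", NS)  # keep the default namespace on write

# Pages refreshed in this fix (illustrative list).
REFRESHED = ("/brightspace", "/brightspace/achievement", "/brightspace/performance")
UPDATE_DATE = "2026-03-07"  # must match the actual content-update date

tree = ET.parse("sitemap.xml")
for url in tree.getroot().findall(f"{{{NS}}}url"):
    loc = url.findtext(f"{{{NS}}}loc", default="").rstrip("/")
    if any(loc.endswith(path) for path in REFRESHED):
        lastmod = url.find(f"{{{NS}}}lastmod")
        if lastmod is None:
            lastmod = ET.SubElement(url, f"{{{NS}}}lastmod")
        lastmod.text = UPDATE_DATE

tree.write("sitemap.xml", xml_declaration=True, encoding="utf-8")
```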

#2 · Stale Competitor Comparison Pages · High · 1–2 weeks

Issue: 4 of 6 dedicated comparison pages have not been updated in over 12 months. D2L Brightspace vs. Moodle was last modified August 2024 (~19 months ago). D2L Brightspace vs. Schoology was published February 2024 (~25 months). D2L Brightspace vs. Google Classroom was published January 2023 (~38 months). D2L Brightspace vs. Sakai was published March 2024 (~24 months). Only the Canvas and Blackboard comparison pages have been updated within the last 90 days.

Fix: Update all 4 stale comparison pages with current G2 data (Spring 2026 or latest available), refresh feature comparison tables, and add recent customer migration stories. Prioritize the Moodle page given Moodle's large market share and D2L's strong blog content (moodle-alternatives) that could cross-link.

#14 · Meta Descriptions and Open Graph Tags Cannot Be Verified · Medium · 1–3 days

Issue: Meta descriptions and Open Graph (OG) tags could not be assessed from the rendered page output. These HTML head elements are not visible in the rendered markdown content returned by our analysis method.

Fix: Verify meta descriptions and OG tags across all key pages using browser developer tools (View Source → search for 'meta') or a tool like Screaming Frog. Ensure each commercially relevant page has a unique, descriptive meta description (150-160 characters) that includes the page's primary value proposition. Confirm OG title, description, and image tags are set for social sharing and AI citation contexts.
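
A standard-library sketch of that spot-check — fetch each page and report whether the description and OG tags are present and whether the description falls in the 150–160 character band (the URL list is illustrative):

```python
# Spot-check meta descriptions and Open Graph tags on key pages.
from html.parser import HTMLParser
from urllib.request import urlopen

TRACKED = ("description", "og:title", "og:description", "og:image")

class MetaAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.found = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        key = a.get("name") or a.get("property")  # name= for description, property= for og:*
        if key in TRACKED:
            self.found[key] = a.get("content", "")

for url in ["https://www.d2l.com/brightspace/"]:  # illustrative page list
    audit = MetaAudit()
    audit.feed(urlopen(url).read().decode("utf-8", errors="replace"))
    desc = audit.found.get("description", "")
    missing = [t for t in TRACKED if t not in audit.found]
    note = "OK" if 150 <= len(desc) <= 160 else f"description is {len(desc)} chars"
    print(url, "| missing:", missing or "none", "|", note)
```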

#15 · Schema Markup Cannot Be Verified — Manual Check Recommended · Medium · 1–3 days

Issue: JSON-LD structured data (schema.org markup) could not be assessed across any of the 42 analyzed pages. Our analysis method returns rendered page content as markdown, which strips HTML-embedded schema blocks. We cannot confirm whether appropriate schema types (Product, Article, FAQ, HowTo, Organization) are implemented on commercially relevant pages.

Fix: Audit schema markup across all commercially relevant pages using Google's Rich Results Test or Schema.org Validator. Verify that product pages carry Product or SoftwareApplication schema, blog posts carry Article schema with author and dateModified, Comparison pages carry appropriate structured data, and the organization-level Organization schema is present on the homepage. This is a WordPress site (Yoast SEO detected in sitemap), which likely provides some baseline schema — verify it is correctly configured and sufficiently detailed.
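
A rough companion sketch: pull JSON-LD blocks straight out of the HTML and list their @type and dateModified values. Adequate for spot-checks, though not a substitute for the validators named above; the URL is illustrative:

```python
# Extract JSON-LD blocks from a page and report their schema types.
import json
import re
from urllib.request import urlopen

JSONLD = re.compile(
    r'<script[^>]+type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

html = urlopen("https://www.d2l.com/brightspace/").read().decode("utf-8", errors="replace")
blocks = JSONLD.findall(html)
print(f"{len(blocks)} JSON-LD block(s) found")
for block in blocks:
    try:
        data = json.loads(block)
    except json.JSONDecodeError:
        print("  malformed JSON-LD — itself a finding worth fixing")
        continue
    for item in data if isinstance(data, list) else [data]:
        print("  @type:", item.get("@type"), "| dateModified:", item.get("dateModified"))
```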

Verification Checks

Items requiring manual review before determining if action is needed.

Priority · Finding · Impact · Timeline
#18 · Client-Side Rendering Status Should Be Verified · Low · < 1 day

Issue: All 42 fetched pages returned substantial rendered content, suggesting the site is primarily server-rendered (consistent with WordPress). However, client-side rendering (CSR) detection signals — such as framework-specific div containers, noscript fallback content, and JavaScript bundle analysis — are not available through our analysis method. We cannot definitively confirm that all page content is accessible without JavaScript execution.

Fix: Verify by loading key product and Comparison pages in a browser with JavaScript disabled (Chrome DevTools → Settings → Disable JavaScript). If all primary content, headings, and navigation are visible without JavaScript, no action is needed. Pay particular attention to any interactive content sections, pricing calculators, or dynamically loaded testimonials.
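
As a scripted complement to the browser check, a crude sketch that measures how much visible text the raw HTML carries with no JavaScript executed — a near-empty count relative to the rendered page is the CSR warning sign (assumption: a plain HTTP fetch approximates a no-JS browser):

```python
# Count visible text in raw (JavaScript-free) HTML as a CSR smoke test.
from html.parser import HTMLParser
from urllib.request import urlopen

class TextCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.skip_depth = 0  # inside <script>/<style>
        self.chars = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth:
            self.chars += len(data.strip())

url = "https://www.d2l.com/brightspace/"  # illustrative page
counter = TextCounter()
counter.feed(urlopen(url).read().decode("utf-8", errors="replace"))
# Thousands of characters suggests server rendering; a few hundred suggests CSR.
print(f"{url}: {counter.chars} visible characters without JavaScript")
```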


Layer 2 Existing Content Optimization

Existing pages that need restructuring or deepening. Owner: Content Team. Timeline: Weeks.

Deepen Learning Analytics Authority on /brightspace/performance/ — Add FERPA, Competitive Data, and Self-Service Proof

Priority 9
Currently: partial. The /brightspace/performance/ page describes Brightspace analytics capabilities in product-marketing language without the comparative specificity or compliance proof that buyers at the Requirements Building and Shortlisting stages require. Missing: FERPA-compliant data handling explanation, self-service vs. IT-dependent reporting distinction, a specific comparison vs. Canvas Analytics and Moodle's third-party tools, and predictive modeling depth for student success use cases.

The /brightspace/performance/ page does not address FERPA data handling or privacy governance, leaving it unable to answer CIO and director queries about 'FERPA risks with LMS learning analytics' (d2l_121) — a required disclosure in any analytics tool evaluation for US higher education. The /brightspace/performance/ page lacks a structured comparison of Brightspace analytics versus Canvas Analytics and Moodle's third-party reporting stack — causing D2L to lose direct comparison queries (d2l_087 equivalent framing: 'How does Canvas analytics compare to Brightspace?') to Canvas's more comparison-ready content. The /brightspace/performance/ page uses marketing-prose descriptions of analytics capabilities without extractable, specific claims — such as the number of built-in reports, self-service dashboard features, or data freshness intervals — making it difficult for AI platforms to extract authoritative comparison points.

Queries affected: d2l_003, d2l_022, d2l_027, d2l_034, d2l_062, d2l_105, d2l_121, d2l_127, d2l_147

Expand Competency Tracking Depth on /brightspace/achievement/ — Add ROI Models, Accreditation Claims, and Certification Automation Specifics

Priority 10
Currently: partial. The /brightspace/achievement/ page describes Achievement+ features but does not address: accreditation body reporting standards (HLC, SACSCOC, regional accreditors), direct comparison vs. Docebo's competency and skills tracking for enterprise L&D buyers, scalability evidence for large cohorts, ROI modeling for competency-based LMS investment, or the distinction between checkbox completion tracking and genuine skills development evidence that buyers explicitly ask about.

The /brightspace/achievement/ page does not address accreditation reporting requirements — missing the specific regional accreditation body mappings (HLC, SACSCOC) and accreditation evidence generation workflows that provosts need for 'requirements for LMS supporting CBE and accreditation reporting' queries (d2l_037). The /brightspace/achievement/ page does not differentiate Brightspace's competency tracking from 'just checkbox tracking' — the specific skepticism expressed in d2l_114 ('Docebo competency tracking — does it actually work or is it just checkbox tracking?') — leaving the page unable to answer D2L's own version of this objection. The /brightspace/achievement/ page lacks enterprise L&D competency tracking specifications for VP L&D buyers — missing workforce skills mapping, certification expiry and auto-renewal tracking, manager-facing skills dashboards, and integration with HRIS systems that separate corporate from academic competency use cases.

Queries affected: d2l_011, d2l_024, d2l_037, d2l_045, d2l_058, d2l_068, d2l_094, d2l_114, d2l_123, d2l_128, d2l_135, d2l_149

Expand Corporate Training Proof on /solutions/corporate/ — Add Compliance Depth, Docebo Comparison, and ROI Evidence

Priority 11
Currently: partial. The /solutions/corporate/ page positions D2L broadly for corporate learning but does not address: the LMS vs. LXP question that VP L&D buyers consistently ask, specific compliance training and certification management capabilities with named industry standards, direct Docebo comparison with commercial outcome data, or CFO-ready ROI models for LMS migration. An existing /blog/docebo-alternatives/ page covers some of this ground but is blog-format rather than structured comparison content.

The /solutions/corporate/ page does not address the LMS vs. LXP question ('What's the difference between an LMS and an LXP — do we need both?', d2l_029), leaving VP L&D buyers who ask this specific question without a D2L answer to cite. The /solutions/corporate/ page describes corporate training capabilities in marketing terms without specific compliance training features — missing named regulatory standards (OSHA, HIPAA, SOX compliance tracking), certification management workflows, and manager-facing dashboard specifications that evaluators use in RFP scoring. The /solutions/corporate/ page does not directly address Docebo's hidden costs, limitations, or specific feature gaps versus Brightspace — despite the existence of /blog/docebo-alternatives/, that page is a blog post rather than a structured, AI-extractable comparison that supports corporate evaluators at the Shortlisting stage.

Queries affected: d2l_007, d2l_020, d2l_029, d2l_035, d2l_050, d2l_061, d2l_107, d2l_122, d2l_130, d2l_138, d2l_143

Reframe /brightspace/creator/ for Faculty Adoption Evidence, Migration Support Proof, and Change Management Framing

Priority 12
Currently: partial. The /brightspace/creator/ page describes Creator+ features without: faculty adoption success data from migration scenarios, SCORM/xAPI/LTI standards specificity for content portability, change management and training support documentation, honest 'what to test during evaluation' content, or comparative evidence against Canvas (the dominant winner for faculty usability comparison queries). The page reads as a product announcement rather than an evidence package for evaluators in the middle of a competitive replacement decision.

The /brightspace/creator/ page does not address faculty adoption outcomes from Blackboard migration scenarios — missing the specific 'did faculty adoption actually improve after switching to Brightspace?' evidence (d2l_115) that provosts and deans require before recommending a disruptive migration to their institutions. The /brightspace/creator/ page does not address content standards and migration portability specifics — missing SCORM 1.2/2004, xAPI/Tin Can, LTI 1.3, AICC support documentation and content import workflow descriptions that 'what should we require for content portability?' queries (d2l_043) require. The /brightspace/creator/ page does not contain 'what to test during LMS evaluation' content for authoring tools — missing the honest, buyer-perspective guidance that 'biggest regrets after choosing LMS' (d2l_120) and 'what did institutions wish they'd tested?' queries require to position D2L as a transparent, evaluation-ready vendor rather than a promotional one.

Queries affected: d2l_002, d2l_019, d2l_026, d2l_032, d2l_043, d2l_047, d2l_060, d2l_104, d2l_115, d2l_120, d2l_133, d2l_141

Strengthen Accessibility Claims on /brightspace/accessibility-plus/ — Add VPAT Detail, WCAG Evidence, and Institutional Risk Prevention Content

Priority 13
Currently: partial. The /brightspace/accessibility-plus/ page covers D2L's accessibility offering but does not provide: WCAG 2.1 AA conformance level documentation with specific success criteria coverage, Section 508 compliance mapping, VPAT (Voluntary Product Accessibility Template) download or summary, or the institutional risk content ('common LMS accessibility failures universities discover after signing') that Validation-stage buyers require to trust vendor claims.

The /brightspace/accessibility-plus/ page does not link to or summarize Brightspace's VPAT — the single most commonly requested document in LMS accessibility procurement for US public universities — leaving CIO buyers (d2l_145: 'draft an accessibility compliance checklist covering WCAG 2.1 AA, Section 508, and VPAT requirements') without the reference they need from D2L's own page. The /brightspace/accessibility-plus/ page does not address WCAG 2.1 AA success criteria at a specific level — it claims accessibility compliance without mapping to named success criteria (e.g., 1.1.1 Non-text Content, 1.3.1 Info and Relationships) that institutional accessibility officers verify during procurement. The /brightspace/accessibility-plus/ page does not address 'common LMS accessibility failures universities discover after signing the contract' (d2l_112) — a specific Validation concern that buyers have after previous vendor disappointment — leaving this objection-handling opportunity to third-party review sites rather than D2L's own authoritative content.

Queries affected: d2l_012, d2l_018, d2l_031, d2l_057, d2l_067, d2l_112, d2l_145

Add AI Outcome Evidence and Validation Framing to /lumi/ — Move Beyond Product Description to Proof

Priority 16
Currently: partial. The /lumi/ page describes Lumi's AI features in product-announcement language without the validation evidence, use-case specificity, or instructional designer workflow integration that evaluators need. Missing: specific AI capability technical specs (model types, data privacy), measurable learning outcome improvements from Brightspace customers, build-vs-buy analysis content for AI tutoring, and a comparison of Lumi vs. Docebo's AI personalization for enterprise L&D buyers.

The /lumi/ page describes Lumi's features without any measurable outcome data — missing the specific learning improvement metrics (time-to-competency reduction, course completion rate changes, learner engagement increases) that the 'case studies with measurable results' query (d2l_129) and the 'are AI features useful or hype?' query (d2l_109) require to trust vendor claims. The /lumi/ page does not address the 'build vs buy for AI tutoring in higher education' question (d2l_015) — failing to provide the structured build/buy analysis that Directors of Online Learning need when evaluating whether to integrate external AI tools versus using Brightspace's native Lumi capabilities. The /lumi/ page lacks enterprise L&D AI capability specifications — missing the personalization algorithm descriptions, skills-gap identification features, and content recommendation engine details that VP L&D buyers need to compare Lumi against Docebo's AI positioning for enterprise training (d2l_055, d2l_082).

Queries affected: d2l_010, d2l_015, d2l_028, d2l_041, d2l_052, d2l_055, d2l_109, d2l_129, d2l_142

Deepen Adaptive Learning Evidence on /solutions/higher-education/competency-based-education/ — Add Outcome Data, Scale Proof, and Institutional Results

Priority 17
Currently: partial. Adaptive learning content exists but is fragmented and insufficiently specific for AI citation. The /solutions/higher-education/competency-based-education/ page describes CBE capabilities without the specific algorithmic description, scale proof (student counts with adaptive features enabled), or independent outcome validation that skeptical buyers need. The SNHU customer page exists but is isolated rather than systematically linked as outcome evidence for adaptive learning claims.

The /solutions/higher-education/competency-based-education/ page does not address the 'does adaptive learning actually improve outcomes or just sound good in vendor demos?' question (d2l_118) — the most common Validation concern about adaptive learning — leaving this critical objection unanswered on D2L's most relevant page. The /solutions/higher-education/competency-based-education/ page lacks specific scale evidence: it does not state how many students have used Brightspace's adaptive and personalized learning features, what enrollment sizes institutions have deployed at, or what performance improvements have been measured — making it unable to support Shortlisting queries (d2l_063: 'LMS that auto-adjusts content based on student performance'). The /solutions/higher-education/competency-based-education/ page does not differentiate between 'adaptive learning' and 'personalized learning paths' — a distinction that buyers (d2l_016) and AI platforms need to understand to correctly position Brightspace's capability in responses.

Queries affected: d2l_009, d2l_016, d2l_036, d2l_048, d2l_063, d2l_118, d2l_134

Layer 3 Narrative Intelligence Opportunities

Net new content addressing visibility and positioning gaps. Owner: Content Strategy. Timeline: Months.

NIO #1: Mobile Experience Content Void — No Dedicated Page Exists
Gap Type: Content Type Deficit — d2l.com has no dedicated mobile learning feature page (coverage_status='missing' across all 13 Mobile Learning Experience queries). Despite appearing as a passive mention in 7 of 13 (53.8%) mobile queries, D2L wins 0 of those 7 (0% conditional win rate on mobile queries), with Canvas claiming 5 of 13 victories by default. The root cause is not underperformance — it is complete content absence for the mobile buying job.
Critical

When buyers search for LMS mobile app quality — the most common objection raised during Shortlisting — D2L has no page to surface. Canvas wins 5 of the 13 mobile-related queries not because Brightspace lacks a mobile app, but because D2L has never published a dedicated mobile content asset for AI platforms to cite. This is an existential gap: at the moment a CIO or LMS Admin asks 'which platform has the best mobile app?', D2L is absent from the answer. Creating a single, authoritative mobile feature page would immediately unlock citation opportunities across all 13 queries spanning Problem Identification through Shortlisting.

Query Cluster
IDs: d2l_006, d2l_013, d2l_021, d2l_039, d2l_051, d2l_064, d2l_079, d2l_090, d2l_100, d2l_108, d2l_119, d2l_126, d2l_137
“our current LMS mobile experience is terrible — which platforms have the best mobile app for higher ed?”
“LMS platforms with the best-rated mobile apps for students — which ones actually work well on phones?”
“Canvas mobile app vs Brightspace mobile app — switching LMS and need to know which is actually better for students”
“How much does a poor LMS mobile experience really hurt student satisfaction and retention rates?”
Blueprint
  • On-Domain: Create /brightspace/mobile/ as a standalone feature page covering native iOS/Android app capabilities, offline learning support, push notification workflows, and accessibility on mobile — structured with extractable bullet-point claims for AI citation.
  • On-Domain: Include a direct mobile app comparison table (Brightspace vs Canvas vs Blackboard) with app store ratings, feature parity, and student adoption data from existing Brightspace customer universities.
  • On-Domain: Add a 'Mobile by the Numbers' section with verifiable student engagement metrics from institutions using Brightspace mobile (e.g., percentage of course access via mobile, completion rate comparisons).
  • On-Domain: Cross-link the new mobile page from the main /brightspace/ product page, the /solutions/higher-education/ hub, and the stale /why-d2l/compare/ pages once those are refreshed.
  • Off-Domain: Publish a third-party-cited blog post or research brief on 'LMS Mobile App Quality: What Students Actually Need' — referencing D2L's Brightspace mobile as a named example with metrics, to build citation surface on Perplexity and ChatGPT for mobile Comparison queries.
  • Off-Domain: Submit updated Brightspace mobile feature data to G2, Capterra, and TrustRadius category pages where mobile experience is rated — these third-party pages are frequently cited by AI platforms for mobile Comparison queries.
Platform Acuity

ChatGPT (high): Mobile-specific Shortlisting queries (d2l_051, d2l_064, d2l_079) show Canvas winning via product page citations. ChatGPT responds to structured mobile feature pages with specific specs — a /brightspace/mobile/ page with named app features would directly compete with Canvas's mobile marketing content. Perplexity (high): Perplexity cites third-party sources heavily for mobile queries (app store reviews, G2 mobile ratings). Publishing outcome-backed mobile content with labeled headings and a Comparison table would make the page highly extractable for Perplexity's structured response format.

NIO #2: Collaboration Tools Content Void — 0% Visibility Across All 11 Queries
Gap Type: Content Type Deficit — D2L achieves 0% visibility (0/11 queries) and 0 wins for the Collaboration & Communication Tools feature — the only feature with complete invisibility across all queries. All 11 collaboration queries are L3 gaps because d2l.com has no dedicated collaboration feature page (coverage_status='thin' or insufficient). Moodle wins 2 of the 11 queries and Schoology and Canvas each claim 1 — competitors win by default in D2L's absence.
Critical

Collaboration tools — group workspaces, peer review, discussion forums, video-integrated learning — are standard evaluation criteria for directors of online learning and provosts building fully online programs. Yet D2L has zero presence across all 11 collaboration-focused queries. Moodle's plugin ecosystem and Canvas's discussion infrastructure are consistently cited while Brightspace's collaboration capabilities are invisible. Because the pain point 'Group collaboration, peer review, and discussion tools feel clunky and outdated' registers 0% visibility and 0 wins, buyers evaluating collaboration see D2L's competitors — not D2L — as the answer. A dedicated collaboration content hub with honest feature comparison and outcome data would move D2L from absent to present across the entire collaboration buying journey.

Query Cluster
IDs: d2l_005, d2l_023, d2l_040, d2l_056, d2l_069, d2l_083, d2l_093, d2l_113, d2l_124, d2l_139, d2l_150
“How can we improve student engagement and collaboration in online university courses?”
“Should we bolt Zoom and Slack onto our LMS or find a platform with better built-in collaboration tools?”
“What collaboration features should we require in an LMS for fully online graduate programs?”
“Fed up with our LMS discussion boards — which platforms have the best collaboration tools for online courses?”
Blueprint
  • On-Domain: Create /brightspace/collaboration/ as a dedicated feature page covering discussion forums, group workspaces, peer review workflows, video conferencing integrations (Zoom, Teams, Kaltura), and asynchronous collaboration tools — with extractable specifications formatted for AI citation.
  • On-Domain: Include an explicit 'Built-in vs. Plugin-based' comparison section that directly addresses the Moodle plugin ecosystem question ('Should we bolt Zoom onto our LMS or get built-in collaboration?') with a structured comparison of D2L's native tools vs. competitor plugin approaches.
  • On-Domain: Add outcome data from existing Brightspace customers showing student engagement improvements tied to specific collaboration features (e.g., discussion participation rates, peer review completion rates, group project completion).
  • On-Domain: Build an RFP requirements section ('What to Require in an LMS for Online Collaboration') that positions Brightspace's capabilities against buyer RFP checklists — directly targeting the Artifact Creation queries (d2l_150).
  • Off-Domain: Contribute to or sponsor a third-party comparison post on G2 or eLearning Industry covering 'Best LMS Collaboration Tools for Online Education' — with Brightspace positioned as a named platform with verifiable ratings.
  • Off-Domain: Engage instructional design communities (ID communities, eLearning Guild) to generate third-party content citing Brightspace's collaboration feature set, creating citation sources that Perplexity and ChatGPT can reference beyond vendor-owned content.
Platform Acuity

ChatGPT (medium): Collaboration queries (d2l_083, d2l_113) show Moodle and Canvas winning via community-generated content. ChatGPT cites well-structured feature pages — a /brightspace/collaboration/ page with specific native tool specs would compete with Moodle plugin documentation currently dominating these responses. Perplexity (high): Perplexity pulls from discussion forums, eLearning blogs, and G2 category pages for collaboration queries. Third-party content citing Brightspace's collaboration capabilities (submissions to G2, eLearning Industry, and institutional case studies) would significantly improve citation rates on Perplexity.

NIO #3: Integrations Ecosystem Structural Hub — No Directory, No LTI Showcase, No Migration Content
Gap Type: Structural Gap — D2L has no integration directory, LTI marketplace page, or SIS connector hub — causing all 14 integration-ecosystem queries to route to L3 (coverage_status='thin' across all). D2L achieves only 21.4% visibility (3/14) on integration queries and wins 0 of those 3 (0% conditional win rate on visible integration queries). Canvas, Moodle, and Schoology each claim victories on integration-specific Comparison queries where D2L's absence lets competitors define the narrative.
Critical

The CIO — D2L's veto-holding decision maker — cares most about integration architecture: LTI 1.3 support, SIS connectors, SSO, FERPA-compliant data flows, and API quality. Yet D2L lacks a structured integration hub where AI platforms can extract authoritative claims about Brightspace's ecosystem. With 14 integration-focused queries and 0 wins, every CIO who asks 'which LMS has the best SIS integration?' or 'what are the non-negotiable LTI requirements?' receives an answer that names Canvas or Moodle — never D2L. The structural fix is not content depth on existing pages; it is creating the architectural content type that buyers and AI platforms expect: an integration directory.

Query Cluster
IDs: d2l_004, d2l_014, d2l_030, d2l_042, d2l_046, d2l_059, d2l_071, d2l_086, d2l_095, d2l_103, d2l_116, d2l_125, d2l_132, d2l_140
“Why is connecting our LMS to other campus systems always a multi-month IT project?”
“LTI 1.3, SIS integration, SAML SSO — what integration standards should be non-negotiable in an LMS evaluation?”
“LMS platforms with the largest LTI app marketplace and most third-party integrations”
“How hard is it to maintain Moodle integrations compared to using a commercial SaaS LMS?”
Blueprint
  • On-Domain: Create /brightspace/integrations/ as a structured integration hub covering LTI 1.3 certification, SIS connector catalog (Banner, Colleague, PeopleSoft, Workday), SSO standards (SAML 2.0, OAuth), FERPA-compliant data handling, and API documentation links — formatted as a scannable directory for AI extraction.
  • On-Domain: Add a 'Moodle vs. Managed Integrations' comparison section directly addressing the 'multi-month IT project' pain point with specific Brightspace onboarding timelines and support model descriptions.
  • On-Domain: Create a dedicated migration support page (/brightspace/migration/ or similar) covering content import standards, LMS-to-LMS data portability, and migration partner ecosystem — targeting the 'migrating from one LMS to another is extremely costly, disruptive, and risky' pain-point queries (d2l_095, d2l_125) and the RFP artifact query (d2l_140).
  • On-Domain: Include a CIO-facing 'Integration Security & Compliance' section covering FERPA data residency, SOC 2 certification, and data governance controls — directly addressing d2l_044 (security requirements) and d2l_132 (ROI of native vs. custom integrations).
  • Off-Domain: Publish a third-party-cited integration comparison brief ('LTI 1.3 Implementation: Commercial LMS vs. Open Source') that positions Brightspace's managed integration model against Moodle's self-managed plugin approach — targeting Perplexity's tendency to cite technical comparison content for CIO-persona queries.
  • Off-Domain: Submit integration capability data to Educause's annual LMS review and IMS Global's LTI implementation directory to generate authoritative third-party citations for integration queries on both ChatGPT and Perplexity.
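One way to make the integration hub "scannable for AI extraction," as the first On-Domain item above suggests, is schema.org ItemList markup, where each connector becomes a discrete, citable entry. The sketch below is illustrative only: the connector names, descriptions, and page title are placeholder assumptions, and the audit does not prescribe a particular schema type.

```python
import json

# Hypothetical connector records -- the real catalog (Banner, Colleague,
# PeopleSoft, Workday) would come from D2L's own integration inventory.
connectors = [
    {"name": "Banner SIS Connector",
     "desc": "Real-time enrollment and grade sync via LTI Advantage."},
    {"name": "Workday Student Connector",
     "desc": "SIS data exchange with SAML 2.0 SSO support."},
]

# One schema.org ItemList for the directory page; each connector becomes
# an extractable ListItem that AI crawlers can cite as a discrete claim.
item_list = {
    "@context": "https://schema.org",
    "@type": "ItemList",
    "name": "Brightspace Integration Directory",  # assumed page title
    "itemListElement": [
        {
            "@type": "ListItem",
            "position": i + 1,
            "item": {
                "@type": "SoftwareApplication",
                "name": c["name"],
                "description": c["desc"],
            },
        }
        for i, c in enumerate(connectors)
    ],
}

# Emit the JSON-LD block that would sit in a <script
# type="application/ld+json"> tag on /brightspace/integrations/.
print(json.dumps(item_list, indent=2))
```

The same pattern extends to the SSO and API entries; the point is that each row of the directory becomes an independently extractable passage rather than a paragraph AI platforms must paraphrase.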
Platform Acuity

ChatGPT (high): CIO-persona integration queries (d2l_030, d2l_044, d2l_140) show ChatGPT producing standards-focused responses where vendor documentation is cited. A structured /brightspace/integrations/ hub with LTI 1.3, SAML, and SIS connector specifications would be directly citable in these responses. Perplexity (high): Integration queries on Perplexity (d2l_042, d2l_103, d2l_116) pull from technical documentation, Educause publications, and vendor integration pages. Perplexity's structured response format is highly compatible with a directory-style integration hub — each integration entry becomes an extractable passage.

NIO #4: Administration & Multi-Tenancy Architecture — Missing IT-Facing Content Hub
Gap Type: Content Type Deficit — 13 of 14 Administration & Role Management queries route to L3 because d2l.com lacks dedicated IT-facing content on multi-tenant administration, security governance, and enterprise admin complexity. D2L achieves 35.7% visibility (5/14) across all admin queries but wins only 1 of 5 visible (20% conditional win rate), with Canvas winning 4 of the contested admin Comparison queries. All 13 L3 admin queries have coverage_status='thin' — content exists in fragments across product pages but never cohesively addresses the IT administrator or CIO buyer.
High

CIOs and LMS Administrators evaluating enterprise LMS platforms need to see how Brightspace handles multi-campus administration, role-based access, FERPA security controls, and total cost of ownership versus alternatives like Moodle. D2L's current content surfaces Brightspace as a learning platform but not as an enterprise IT system — a critical framing gap for CIO-led evaluations. Canvas wins Comparison queries on administration because its documentation is IT-structured and extractable. Building a dedicated IT/admin content hub would reframe Brightspace for the CIO buyer and capture the 13 queries across the full administration buying journey.

Query Cluster
IDs: d2l_001, d2l_017, d2l_033, d2l_044, d2l_049, d2l_066, d2l_070, d2l_084, d2l_099, d2l_103, d2l_106, d2l_117, d2l_131, d2l_148
“How are universities managing LMS admin complexity when their IT teams are already stretched thin?”
“Security and data governance requirements for LMS selection — FERPA compliance, SSO, data residency, encryption standards”
“Enterprise LMS with the best multi-tenant architecture for a large state university system managing multiple campuses”
“Moodle total cost of ownership vs commercial LMS platforms for a university with a small IT team”
Blueprint
  • On-Domain: Create /brightspace/administration/ or /brightspace/it-infrastructure/ as an IT-facing hub covering multi-tenant architecture, role-based access controls, data residency options, SOC 2 compliance, FERPA governance, SSO standards, and system uptime SLAs — structured with AI-extractable technical specifications.
  • On-Domain: Add a 'Moodle TCO vs Brightspace' page directly addressing the open-source vs. managed SaaS administration cost question (d2l_084), with total cost of ownership modeling across implementation, maintenance, security patching, and upgrade cycles.
  • On-Domain: Build an 'LMS Implementation & Migration' resource section covering typical migration timelines, data import standards, content portability guarantees, and managed migration service options — targeting d2l_131 (migration cost/timeline) and d2l_148 (migration plan template).
  • On-Domain: Create an 'LMS Evaluation RFP: Administration & Security Requirements' downloadable template that positions Brightspace's capabilities as the reference standard — targeting Artifact Creation queries from LMS Admins.
  • Off-Domain: Publish a co-authored brief with an institutional IT partner (e.g., a CISO or CIO from a Brightspace customer university) on 'Enterprise LMS Security: What Higher Ed IT Teams Actually Need' — creating third-party citation surface for CIO-persona queries on Perplexity.
  • Off-Domain: Submit Brightspace's FERPA compliance documentation and security certifications to Educause's LMS security evaluation resources and IMS Global's certification registry to build authoritative third-party citations for security and governance queries.
Platform Acuity

ChatGPT (high): CIO-persona admin queries (d2l_044, d2l_066, d2l_099) show ChatGPT citing technical documentation and security compliance pages. A structured IT hub with FERPA, SSO, and multi-tenancy specifications would be directly citable against Canvas's IT documentation, which currently wins these queries. Perplexity (medium): Admin-complexity and TCO queries (d2l_084, d2l_117) on Perplexity pull from institutional IT blogs and Educause publications rather than vendor pages. Third-party co-authored content with IT leaders would improve Perplexity citation rates more than vendor page depth alone.

NIO #5: Assessment & Grading Feature Depth — Losing to Canvas on Comparison Queries
Gap Type: Content Type Deficit — 11 of 12 Assessment & Grading queries are L3 gaps because d2l.com's assessment content is thin (coverage_status='thin' across all 11) and lacks Comparison-ready specificity. D2L achieves 41.7% visibility (5/12) across all assessment queries but wins only 1 of 5 visible (20% conditional win rate), with Canvas winning 3 of the 5 directly contested assessment Comparison queries. The root cause: Brightspace's gradebook and assessment tools are not documented in a format that allows AI platforms to extract specific capability claims against Canvas's well-documented assessment infrastructure.
High

Assessment and grading tools — rubrics, question banks, auto-grading, weighted gradebooks, anti-cheating tools, and certification tracking — are tier-1 evaluation criteria for LMS Administrators and Directors of Online Learning. Canvas consistently wins assessment Comparison queries because its feature documentation is specific and Comparison-ready. D2L's assessment content describes capabilities in product-marketing language rather than buyer-Comparison language: specific question type counts, rubric customization depth, and proctoring integration specifics are missing. Building assessment content with the same specificity as competitors' documentation would convert D2L from a visible but losing option to a winning option in this high-intent feature category.

Query Cluster
IDs: d2l_008, d2l_025, d2l_038, d2l_054, d2l_065, d2l_085, d2l_102, d2l_110, d2l_111, d2l_136, d2l_144
“Our LMS crashes every finals week when thousands of students submit at once — should we be looking at more reliable platforms?”
“What assessment capabilities matter most when choosing an LMS — question types, rubrics, auto-grading, anti-cheating?”
“Top LMS platforms with flexible assessment tools — rubrics, competency-based grading, and weighted gradebooks”
“Canvas quiz and assessment engine vs Moodle exam tools — which is more flexible for programs with lots of certification testing?”
Blueprint
  • On-Domain: Create /brightspace/assessment/ as a dedicated assessment and grading feature page with specific capability claims: question types supported, rubric customization options, gradebook configuration depth, auto-grading capabilities, Respondus and third-party proctoring integrations, and peak-load performance guarantees.
  • On-Domain: Include a direct 'Canvas vs. Brightspace Assessment Tools' Comparison section addressing the specific queries buyers ask (d2l_085, d2l_102) — with feature-by-feature Comparison on rubrics, gradebook flexibility, question bank size, and proctoring integration depth.
  • On-Domain: Add a 'Peak Performance' section specifically addressing the LMS-crash-at-finals pain point (d2l_008, d2l_111) with D2L's uptime SLA, CDN architecture, and documented performance during high-concurrency exam events — this is a specific competitive differentiation point.
  • On-Domain: Build an 'Assessment Evaluation Criteria' resource (targeting d2l_038, d2l_144) that provides a buyer's checklist structured around Brightspace's strengths — positioning D2L as the reference platform for assessment evaluation.
  • Off-Domain: Publish an outcome-backed case study from an institution that migrated from Canvas or Blackboard to Brightspace specifically for assessment reliability — with specific exam concurrency data and gradebook migration outcomes, for third-party citation on Perplexity.
  • Off-Domain: Submit Brightspace assessment feature ratings to G2's LMS assessment category and encourage existing customers to rate Brightspace's gradebook and rubric tools specifically — improving third-party citation quality for assessment Comparison queries on ChatGPT and Perplexity.
Platform Acuity

ChatGPT (high): Assessment Comparison queries (d2l_085, d2l_102) show ChatGPT citing Canvas's feature documentation with specific tool names (SpeedGrader, Quiz Engine). A /brightspace/assessment/ page with equivalent named-feature specificity would directly compete for these citations. Perplexity (high): Peak-performance queries (d2l_111) and rubric/gradebook queries (d2l_054) on Perplexity pull from LMS administrator forums, eLearning industry blogs, and G2 reviews. Outcome-backed case study content on assessment reliability would generate strong Perplexity citation surface.

NIO #6: Comparison Page Architecture Deficit — Missing Feature-Level Head-to-Head Pages
Gap Type: Structural Gap — 13 Comparison-buying-job queries across six feature categories (learning analytics, course creation, AI features, corporate training, competency outcomes, accessibility) are L3 gaps because d2l.com's content inventory contains feature landing pages but not Comparison page types for these specific pairings. All 13 received affinity overrides during routing: existing pages have adequate feature depth but the wrong page structure for Comparison-stage buyers. Docebo wins 5 of these 13 Comparison queries and Canvas wins 4 — both by virtue of having Comparison content that Brightspace has not yet published.
High

D2L's existing Comparison pages cover Canvas and Blackboard (and are now stale — see L1 finding stale_comparison_pages), but buyers comparing Brightspace against Docebo for corporate L&D, Moodle for course authoring, or Canvas for learning analytics find no structured Comparison content from D2L's perspective. At the Comparison stage, AI platforms cite the most specific, structured Comparison content available — and currently that content belongs to competitors or third-party review sites. The structural fix is expanding D2L's Comparison page architecture to cover feature-specific pairings where Docebo and Canvas are winning by content default, not product superiority.

Query Cluster
IDs: d2l_075, d2l_076, d2l_080, d2l_081, d2l_082, d2l_087, d2l_088, d2l_091, d2l_094, d2l_096, d2l_097, d2l_098, d2l_101
“Docebo vs D2L for corporate training programs — which platform is better for enterprise L&D?”
“Canvas vs Blackboard — which has stronger learning analytics and predictive student insights?”
“How does Canvas analytics compare to Brightspace for measuring student engagement and learning outcomes?”
“Docebo AI capabilities for personalized learning — how do they compare to other enterprise learning platforms?”
Blueprint
  • On-Domain: Create 'Brightspace vs Docebo for Corporate L&D' Comparison page targeting d2l_075, d2l_088, d2l_101 — positioning D2L's cross-segment (academic + corporate) capability vs. Docebo's pure-play corporate focus, with specific compliance training, skills tracking, and certification management feature comparisons.
  • On-Domain: Create 'Brightspace Learning Analytics vs Canvas Analytics' Comparison page targeting d2l_076, d2l_087, d2l_097 — with specific Comparison on predictive modeling, FERPA-compliant reporting, self-service dashboards, and institutional effectiveness metrics.
  • On-Domain: Create 'Brightspace vs Moodle for Course Creation' Comparison page targeting d2l_080, d2l_091 — addressing the managed authoring vs. plugin-based approach tradeoff with faculty usability data and content migration support specifics.
  • On-Domain: Create 'Brightspace AI vs Docebo AI' Comparison page targeting d2l_082, d2l_096 — positioning Lumi's integrated AI capabilities vs. Docebo's AI personalization, with specific use cases for instructional designers and L&D teams.
  • On-Domain: Refresh the four stale existing Comparison pages (vs. Moodle, Schoology, Google Classroom, Sakai) in parallel with new page creation — the L1 finding stale_comparison_pages directly amplifies this structural gap.
  • Off-Domain: Ensure Brightspace's positioning appears in third-party Comparison articles on eLearning Industry, Capterra, and G2 for the specific pairings where Comparison pages are now being created — particularly 'Docebo vs D2L' and 'Brightspace vs Canvas analytics', which currently have no D2L-authored Comparison content.
  • Off-Domain: Submit Brightspace to G2's 'Compare' feature for each specific pairing and encourage customers to complete comparative reviews — G2 Comparison pages are high-authority third-party sources frequently cited by both Perplexity and ChatGPT for Comparison buying job queries.
Platform Acuity

ChatGPT (high): Comparison queries (d2l_076, d2l_087, d2l_094) show ChatGPT citing vendor Comparison pages and G2 category comparisons. D2L's existing Comparison pages for Canvas and Blackboard demonstrate that ChatGPT does cite D2L's Comparison content when it exists — the gap is coverage of additional pairings. Perplexity (high): Perplexity pulls from G2 Comparison pages, eLearning Industry head-to-heads, and vendor Comparison pages for all 13 queries in this cluster. New Comparison pages on d2l.com would be immediately indexable and citable — Perplexity's freshness weighting favors newly published structured Comparison content.

Unified Priority Ranking

All recommendations across all three layers, ranked by commercial impact × implementation speed.

  • 1

    Key Product Pages Show Stale Modification Dates

    The main Brightspace product page (/brightspace/) shows a last modification date of May 7, 2025 — approximately 10 months ago. The Achievement+ page (/brightspace/achievement/) shows a publication date of July 9, 2024 with no visible update — over 20 months old. The Performance+ page (/brightspace/performance/) shows only a September 2022 publication date with no visible recent modification.

    Technical Fix · Marketing · /brightspace/, /brightspace/achievement/, /brightspace/performance/
  • 2

    Stale Competitor Comparison Pages

    4 of 6 dedicated Comparison pages have not been updated in over 12 months. D2L Brightspace vs. Moodle was last modified August 2024 (~18 months ago). D2L Brightspace vs. Schoology was published February 2024 (~25 months). D2L Brightspace vs. Google Classroom was published January 2023 (~38 months). D2L Brightspace vs. Sakai was published March 2024 (~24 months). Only the Canvas and Blackboard Comparison pages have been updated within the last 90 days.

    Technical Fix · Marketing · /why-d2l/compare/d2l-brightspace-vs-Moodle/, /why-d2l/compare/d2l-brightspace-vs-Schoology/, /why-d2l/compare/d2l-brightspace-vs-google-classroom/, /why-d2l/compare/d2l-brightspace-vs-Sakai/
  • 3

    Collaboration Tools Content Void — 0% Visibility Across All 11 Queries

    D2L achieves 0% visibility (0/11 queries) and 0 wins for the Collaboration & Communication Tools feature — the only feature with complete invisibility across all queries. All 11 collaboration queries are L3 gaps because d2l.com has no dedicated collaboration feature page (coverage_status='thin' or insufficient). Moodle wins 2 of the 11 queries and Schoology and Canvas each claim 1 — competitors win by default in D2L's absence.

    New Content · Content · 11 queries affecting personas: Director of Online Learning & Instructional Design, Provost / Chief Academic Officer, LMS Administrator / Educational Technologist
  • 4

    Integrations Ecosystem Structural Hub — No Directory, No LTI Showcase, No Migration Content

    D2L has no integration directory, LTI marketplace page, or SIS connector hub — causing all 14 integration-ecosystem queries to route to L3 (coverage_status='thin' across all). D2L achieves only 21.4% visibility (3/14) on integration queries and wins 0 of those 3 (0% conditional win rate on visible integration queries). Canvas, Moodle, and Schoology each claim victories on integration-specific Comparison queries where D2L's absence lets competitors define the narrative.

    New Content · Content · 14 queries affecting personas: Chief Information Officer, LMS Administrator / Educational Technologist, Director of Online Learning & Instructional Design
  • 5

    Mobile Experience Content Void — No Dedicated Page Exists

    d2l.com has no dedicated mobile learning feature page (coverage_status='missing' across all 13 Mobile Learning Experience queries). Despite appearing as a passive mention in 7 of 13 (53.8%) mobile queries, D2L wins 0 of those 7 (0% conditional win rate on mobile queries), with Canvas claiming 5 of 13 victories by default. The root cause is not underperformance — it is complete content absence for the mobile buying job.

    New Content · Content · 13 queries affecting personas: Chief Information Officer, LMS Administrator / Educational Technologist, Director of Online Learning & Instructional Design, Provost / Chief Academic Officer
  • 6

    Administration & Multi-Tenancy Architecture — Missing IT-Facing Content Hub

    13 of 14 Administration & Role Management queries route to L3 because d2l.com lacks dedicated IT-facing content on multi-tenant administration, security governance, and enterprise admin complexity. D2L achieves 35.7% visibility (5/14) across all admin queries but wins only 1 of 5 visible (20% conditional win rate), with Canvas winning 4 of the contested admin Comparison queries. All 13 L3 admin queries have coverage_status='thin' — content exists in fragments across product pages but never cohesively addresses the IT administrator or CIO buyer.

    New Content · Content · 13 queries affecting personas: Chief Information Officer, LMS Administrator / Educational Technologist, VP of Learning & Development
  • 7

    Assessment & Grading Feature Depth — Losing to Canvas on Comparison Queries

    11 of 12 Assessment & Grading queries are L3 gaps because d2l.com's assessment content is thin (coverage_status='thin' across all 11) and lacks Comparison-ready specificity. D2L achieves 41.7% visibility (5/12) across all assessment queries but wins only 1 of 5 visible (20% conditional win rate), with Canvas winning 3 of the 5 directly contested assessment Comparison queries. The root cause: Brightspace's gradebook and assessment tools are not documented in a format that allows AI platforms to extract specific capability claims against Canvas's well-documented assessment infrastructure.

    New Content · Content · 11 queries affecting personas: LMS Administrator / Educational Technologist, Director of Online Learning & Instructional Design, VP of Learning & Development, Provost / Chief Academic Officer
  • 8

    Comparison Page Architecture Deficit — Missing Feature-Level Head-to-Head Pages

    13 Comparison-buying-job queries across six feature categories (learning analytics, course creation, AI features, corporate training, competency outcomes, accessibility) are L3 gaps because d2l.com's content inventory contains feature landing pages but not Comparison page types for these specific pairings. All 13 received affinity overrides during routing: existing pages have adequate feature depth but the wrong page structure for Comparison-stage buyers. Docebo wins 5 of these 13 Comparison queries and Canvas wins 4 — both by virtue of having Comparison content that Brightspace has not yet published.

    New Content · Content · 13 queries affecting personas: VP of Learning & Development, Director of Online Learning & Instructional Design, Chief Information Officer, Provost / Chief Academic Officer, LMS Administrator / Educational Technologist
  • 9

    Deepen Learning Analytics Authority on /brightspace/performance/ — Add FERPA, Competitive Data, and Self-Service Proof

    The /brightspace/performance/ page does not address FERPA data handling or privacy governance, leaving it unable to answer CIO and director queries about 'FERPA risks with LMS learning analytics' (d2l_121) — a required disclosure in any analytics tool evaluation for US higher education.

    Content Optimization · Content · 9 queries, personas: Director of Online Learning & Instructional Design, Chief Information Officer, Provost / Chief Academic Officer
  • 10

    Expand Competency Tracking Depth on /brightspace/achievement/ — Add ROI Models, Accreditation Claims, and Certification Automation Specifics

    The /brightspace/achievement/ page does not address accreditation reporting requirements — missing the specific regional accreditation body mappings (HLC, SACSCOC) and accreditation evidence generation workflows that provosts need for 'requirements for LMS supporting CBE and accreditation reporting' queries (d2l_037).

    Content Optimization · Content · 12 queries, personas: Provost / Chief Academic Officer, VP of Learning & Development, Director of Online Learning & Instructional Design
  • 11

    Expand Corporate Training Proof on /solutions/corporate/ — Add Compliance Depth, Docebo Comparison, and ROI Evidence

    The /solutions/corporate/ page does not address the LMS vs. LXP question ('What's the difference between an LMS and an LXP — do we need both?', d2l_029), leaving VP L&D buyers who ask this specific question without a D2L answer to cite.

    Content Optimization · Content · 11 queries, personas: VP of Learning & Development, Director of Online Learning & Instructional Design
  • 12

    Reframe /brightspace/creator/ for Faculty Adoption Evidence, Migration Support Proof, and Change Management Framing

    The /brightspace/creator/ page does not address faculty adoption outcomes from Blackboard migration scenarios — missing the specific 'did faculty adoption actually improve after switching to Brightspace?' evidence (d2l_115) that provosts and deans require before recommending a disruptive migration to their institutions.

    Content Optimization · Content · 12 queries, personas: Director of Online Learning & Instructional Design, Provost / Chief Academic Officer, LMS Administrator / Educational Technologist
  • 13

    Strengthen Accessibility Claims on /brightspace/accessibility-plus/ — Add VPAT Detail, WCAG Evidence, and Institutional Risk Prevention Content

    The /brightspace/accessibility-plus/ page does not link to or summarize Brightspace's VPAT — the single most commonly requested document in LMS accessibility procurement for US public universities — leaving CIO buyers (d2l_145: 'draft an accessibility compliance checklist covering WCAG 2.1 AA, Section 508, and VPAT requirements') without the reference they need from D2L's own page.

    Content Optimization · Content · 7 queries, personas: Chief Information Officer, Provost / Chief Academic Officer, LMS Administrator / Educational Technologist
  • 14

    Meta Descriptions and Open Graph Tags Cannot Be Verified

    Meta descriptions and Open Graph (OG) tags could not be assessed from the rendered page output. These HTML head elements are not visible in the rendered markdown content returned by our analysis method. A verification sketch covering this item, item 15, and item 18 follows the ranking.

    Technical Fix · Marketing · All 42 analyzed pages
  • 15

    Schema Markup Cannot Be Verified — Manual Check Recommended

    JSON-LD structured data (schema.org markup) could not be assessed across any of the 42 analyzed pages. Our analysis method returns rendered page content as markdown, which strips HTML-embedded schema blocks. We cannot confirm whether appropriate schema types (Product, Article, FAQ, HowTo, Organization) are implemented on commercially relevant pages.

    Technical Fix · Engineering · All 42 analyzed pages — particularly /brightspace/, /lumi/, /brightspace/creator/, /brightspace/performance/, /brightspace/achievement/, and the 6 Comparison pages
  • 16

    Add AI Outcome Evidence and Validation Framing to /lumi/ — Move Beyond Product Description to Proof

    The /lumi/ page describes Lumi's features without any measurable outcome data — missing the specific learning improvement metrics (time-to-competency reduction, course completion rate changes, learner engagement increases) that the 'case studies with measurable results' query (d2l_129) and the 'are AI features useful or hype?' query (d2l_109) require to trust vendor claims.

    Content Optimization · Content · 9 queries, personas: Director of Online Learning & Instructional Design, VP of Learning & Development
  • 17

    Deepen Adaptive Learning Evidence on /solutions/higher-education/competency-based-education/ — Add Outcome Data, Scale Proof, and Institutional Results

    The /solutions/higher-education/competency-based-education/ page does not address the 'does adaptive learning actually improve outcomes or just sound good in vendor demos?' question (d2l_118) — the most common Validation concern about adaptive learning — leaving this critical objection unanswered on D2L's most relevant page.

    Content Optimization · Content · 7 queries, personas: Provost / Chief Academic Officer, Director of Online Learning & Instructional Design, Chief Information Officer
  • 18

    Client-Side Rendering Status Should Be Verified

    All 42 fetched pages returned substantial rendered content, suggesting the site is primarily server-rendered (consistent with WordPress). However, client-side rendering (CSR) detection signals — such as framework-specific div containers, noscript fallback content, and JavaScript bundle analysis — are not available through our analysis method. We cannot definitively confirm that all page content is accessible without JavaScript execution.

    Technical Fix · Engineering · All pages — particularly interactive product demos, Comparison tables, and dynamically loaded content sections
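Items 14, 15, and 18 reduce to the same manual check: fetch the raw HTML before any JavaScript runs and inspect the head elements directly. A minimal sketch follows, assuming Python with requests and beautifulsoup4 installed; the URL is a placeholder for any of the 42 audited pages, and the visible-text length is only a rough server-rendering signal, not a definitive CSR verdict.

```python
import requests
from bs4 import BeautifulSoup

# Placeholder target -- repeat for each of the 42 audited pages.
url = "https://www.d2l.com/brightspace/"

# Fetch raw HTML without executing JavaScript, i.e. what a non-rendering
# crawler actually receives.
html = requests.get(url, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

# Item 14: meta description and Open Graph tags live in <head>.
meta_desc = soup.find("meta", attrs={"name": "description"})
og_tags = soup.find_all(
    "meta", attrs={"property": lambda p: p and p.startswith("og:")}
)
print("meta description:",
      meta_desc.get("content", "")[:80] if meta_desc else "MISSING")
print("Open Graph tags:", [t.get("property") for t in og_tags] or "MISSING")

# Item 15: JSON-LD schema blocks are <script type="application/ld+json">.
schema_blocks = soup.find_all("script", type="application/ld+json")
print("JSON-LD blocks found:", len(schema_blocks))

# Item 18 (rough signal only): substantial visible text in the no-JS HTML
# suggests server rendering; a near-empty body would point to client-side
# rendering and justify a deeper audit with a headless browser.
visible_text = soup.get_text(separator=" ", strip=True)
print("visible text length without JS:", len(visible_text))
```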

Workstream Mapping

All three workstreams can start this week.

Engineering / DevOps

Layer 1 — Technical Fixes
Timeline: Days to 2 weeks
  • Stale Competitor Comparison Pages
  • Key Product Pages Show Stale Modification Dates
  • Schema Markup Cannot Be Verified — Manual Check Recommended
  • Meta Descriptions and Open Graph Tags Cannot Be Verified

Content Team

Layer 2 — Content Optimization
Timeline: 2–6 weeks
  • Deepen Learning Analytics Authority on…
  • Expand Corporate Training Proof on /solutions/corporate/ —…
  • Add AI Outcome Evidence and Validation Framing to /lumi/ —…
  • Deepen Adaptive Learning Evidence on…

Content Strategy

Layer 3 — NIOs + Off-Domain
Timeline: 1–3 months
  • Create /brightspace/mobile/ as a standalone feature page…
  • Create /brightspace/collaboration/ as a dedicated feature…
  • Create /brightspace/integrations/ as a structured…
  • Create /brightspace/administration/ or…
  • Create /brightspace/assessment/ as a dedicated assessment…

[Synthesis] Layer 1 technical fixes execute first because stale modification signals on /brightspace/performance/, /brightspace/achievement/, and four Comparison pages actively suppress AI platform freshness scoring for the pages that L2 and L3 improvements will build from — fixing staleness before deepening content avoids publishing high-quality content onto pages that AI crawlers are already deprioritizing. L2 remediations on seven existing pages unlock 66 query wins by converting D2L's existing coverage into AI-extractable, Comparison-ready content. L3 NIO content builds the structural gaps that account for D2L's 88.9% early-funnel invisibility — specifically, three critical-priority NIOs (mobile, collaboration, integrations) that cover features with zero or near-zero wins across all queries.

Audit Methodology

Query Construction

150 queries constructed from persona × buying job × feature focus × pain point matrix
Every query carries four metadata fields assigned at creation time
Note: 150 queries across full buying journey.
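
To make the four-field tagging concrete, here is a minimal sketch of how one tagged query can be represented. The field values shown are illustrative: the query text and persona come from this report's administration cluster, but the buying-job and pain-point labels are assumptions, not the audit's actual assignments.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BuyerQuery:
    """One audited query plus the four metadata fields assigned at creation."""
    query_id: str       # e.g. "d2l_044"
    text: str
    persona: str        # one of the 5 personas listed below
    buying_job: str     # one of the 8 buying jobs
    feature_focus: str  # feature category the query probes
    pain_point: str     # underlying buyer pain

# Illustrative tagging only -- buying_job and pain_point are assumed here.
q = BuyerQuery(
    query_id="d2l_044",
    text=("Security and data governance requirements for LMS selection -- "
          "FERPA compliance, SSO, data residency, encryption standards"),
    persona="Chief Information Officer",
    buying_job="Requirements Building",
    feature_focus="Administration & Role Management",
    pain_point="Security and compliance risk",
)

# High-intent jobs drive the 81-of-150 figure cited in this methodology.
HIGH_INTENT_JOBS = {"Shortlisting", "Comparison", "Validation"}
print(q.query_id, "high-intent:", q.buying_job in HIGH_INTENT_JOBS)
```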

Personas

Chief Information Officer · Decision Maker
Provost / Chief Academic Officer · Decision Maker
Director of Online Learning & Instructional Design · Evaluator
VP of Learning & Development · Evaluator
LMS Administrator / Educational Technologist · Evaluator

Buying Jobs Framework

8 non-linear buying jobs: Artifact Creation, Comparison, Consensus Creation, Problem Identification, Requirements Building, Shortlisting, Solution Exploration, Validation
High-intent jobs (Shortlisting + Comparison + Validation): 54% of queries (81 of 150)

Competitive Set

Primary: Canvas LMS, Blackboard Learn, Moodle, Docebo, Schoology
Secondary: Absorb LMS, Google Classroom, Sakai, TalentLMS
Surprise: SAP Litmos, Cornerstone, 360Learning, Cornerstone OnDemand — flagged for review

Platforms & Scoring

Platforms: ChatGPT + Perplexity
Visibility: Binary — does the client appear in the response?
Win rate: Of visible queries, is the client the primary recommendation?

Cross-Platform Counting (Union Method)

When a query is run on multiple platforms, union logic is applied: a query counts as “visible” if the client appears on any platform, not each platform separately.
Winner resolution: When platforms disagree on the winner, majority vote is used. Vendor names are preferred over meta-values (e.g. “no clear winner”). True ties resolve to “no clear winner.”
Share of Voice: Each entity is counted once per query across platforms (union dedup), preventing double-counting when both platforms mention the same company.
This approach ensures headline metrics reflect real buyer-query outcomes rather than inflated per-platform counts.
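
A minimal sketch of these counting rules follows, assuming a simple per-platform result structure; the dictionary layout and vendor labels are placeholders rather than the audit's internal schema. A win is counted only when the query is visible and the resolved winner is the client.

```python
from collections import Counter

# Per-platform results for one query; structure and values are illustrative.
platform_results = {
    "chatgpt":    {"mentions": {"D2L", "Canvas", "Moodle"}, "winner": "Canvas"},
    "perplexity": {"mentions": {"Canvas", "Moodle"},        "winner": "Canvas"},
}

CLIENT = "D2L"
META_VALUES = {"no clear winner"}

# Union visibility: the client is visible if it appears on ANY platform.
visible = any(CLIENT in r["mentions"] for r in platform_results.values())

# Winner resolution: majority vote, preferring vendor names over
# meta-values; a true tie between vendors resolves to "no clear winner".
votes = Counter(r["winner"] for r in platform_results.values()
                if r["winner"] not in META_VALUES)
if not votes:
    winner = "no clear winner"
else:
    ranked = votes.most_common()
    tied = len(ranked) > 1 and ranked[0][1] == ranked[1][1]
    winner = "no clear winner" if tied else ranked[0][0]

# A win requires both visibility and being the resolved recommendation.
win = visible and winner == CLIENT

# Share of Voice: each entity counts once per query (union dedup), so a
# company mentioned on both platforms is not double-counted.
sov_mentions = set().union(*(r["mentions"] for r in platform_results.values()))

print(f"visible={visible}, winner={winner}, win={win}")
print("SoV entities:", sorted(sov_mentions))
```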

Terminology

Mentions: Query-level visibility count. A company receives one mention per query where it appears in any platform response (union-deduped). This is the numerator for Share of Voice.
Unique Pages Cited: Count of distinct client page URLs cited across all platform responses, after URL normalization (stripping tracking parameters). The footer total in the Citation section uses this measure.
Citation Instances (Top Cited Domains): Raw count of citation occurrences per domain across all responses. A single domain can accumulate multiple citation instances from different queries and platforms. The Top Cited Domains table uses this measure.
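
For reference, URL normalization of the kind described under Unique Pages Cited can be sketched as follows; the tracking-parameter list is an assumption, as the audit does not enumerate which parameters it strips.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Assumed tracking parameters; the audit's actual strip list may differ.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "utm_term",
                   "utm_content", "gclid", "fbclid"}

def normalize_url(url: str) -> str:
    """Strip tracking parameters so citation variants of one page collapse
    to a single entry in the Unique Pages Cited count."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), parts.fragment))

# Two citation variants of the same page normalize to the same URL.
a = normalize_url("https://www.d2l.com/brightspace/?utm_source=chatgpt")
b = normalize_url("https://www.d2l.com/brightspace/")
print(a == b)  # True
```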