Look at the adoption numbers and you can see a pattern. The LMS and SCORM are everywhere. They are the definition of mainstream. In the Guild's 2025 State of the Industry survey, more than 80 percent of organizations reported using LMS platforms as the backbone of their learning technology ecosystem. SCORM tracking was just as dominant. These systems are stable, familiar, and deeply embedded.
xAPI, on the other hand, tells a different story. It was launched more than a decade ago with the promise of expanding tracking beyond SCORM. In theory, it could capture any kind of learning experience, inside or outside the LMS. In practice, Guild research shows adoption hovering around 20 percent. A majority of those who have implemented xAPI report using it only for pilot projects or limited use cases. Despite its technical strength, it has never crossed the gap into widespread use.
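To make that promise concrete, an xAPI statement is just a small JSON document describing who did what, sent over HTTP to a Learning Record Store. The JavaScript sketch below shows roughly what that looks like; the endpoint, credentials, and activity IDs are placeholders, not a working configuration.

```javascript
// A minimal xAPI statement: actor, verb, object, expressed as plain JSON.
const statement = {
  actor: { mbox: "mailto:learner@example.com", name: "Sam Learner" },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/completed",
    display: { "en-US": "completed" }
  },
  object: {
    id: "https://example.com/activities/onboarding-simulation",
    definition: { name: { "en-US": "Onboarding Simulation" } }
  }
};

// Send it to a Learning Record Store (endpoint and credentials are placeholders).
await fetch("https://lrs.example.com/xAPI/statements", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-Experience-API-Version": "1.0.3",
    Authorization: "Basic " + btoa("username:password")
  },
  body: JSON.stringify(statement)
});
```

Nothing in that snippet depends on an LMS; the activity could be a podcast, a simulation, or a conversation with a mentor.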
That gap is not about whether xAPI works. It is about comfort. SCORM was familiar. LMS reporting was entrenched. xAPI required new workflows and new thinking, and the field largely resisted. The lesson is clear: technology that does not align with established habits and tools struggles to gain traction, even if it offers more capability.
Now set that beside how we are using AI. To make sense of the current state, it helps to think in terms of four levels of maturity for L&D and marketing:
Level 1 is AI as Assistant. This is the chatbot in the browser, used for brainstorming, summaries, or drafting. According to the Guild's 2025 survey, more than half of respondents experimenting with AI place it in this category, with 55 percent reporting use for content generation and 42 percent for image creation.
Level 2 is AI as Personal Tools. These are small helpers that fit your workflow, like an API script that auto-summarizes a transcript or a bot that tags survey results; a sketch of one such helper appears below. Only a minority of respondents reported building custom tools, with figures under 20 percent.
Level 3 is AI Embedded in Products. The AI shows up inside the tools your learners or customers already use, such as an LMS that personalizes practice questions or a CRM that drafts emails. Guild data shows fewer than 15 percent reporting any form of AI personalization inside platforms.
Level 4 is AI as Agent or Automation. At this level, AI coordinates across tools, sets goals, and completes steps with minimal input. Fewer than 10 percent reported anything resembling multi-step automation.
Most teams are living at Level 1, maybe reaching into Level 2. We open a chatbot to brainstorm, summarize, or draft. Some of us have built small helpers that automate routine tasks. A few vendors are starting to push Level 3, embedding AI inside tools we already know. Almost no one is building for Level 4, where agents coordinate across tools and take on work with minimal input.
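A Level 2 helper does not need to be elaborate. Here is a minimal sketch of one, assuming a Node.js environment and the OpenAI chat completions API; the model name, prompt wording, and file paths are illustrative choices, not a recommendation.

```javascript
// A small personal tool: summarize a meeting transcript with a hosted language model.
// Model name, prompt wording, and environment variable are illustrative assumptions.
import { readFile } from "node:fs/promises";

const transcript = await readFile("transcript.txt", "utf8");

const response = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`
  },
  body: JSON.stringify({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: "Summarize meeting transcripts as five concise bullet points." },
      { role: "user", content: transcript }
    ]
  })
});

const data = await response.json();
console.log(data.choices[0].message.content);
```

Twenty-odd lines of JavaScript, and a recurring chore becomes a command you run after every call. That is the whole distance between Level 1 and Level 2.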
Here is the danger. Are we repeating the same pattern? Are we bolting AI onto the tools we are most comfortable with? When we talk about AI-driven LMS dashboards, AI-powered SCORM modules, or xAPI tracking for AI activities, we are really talking about familiar moves. They feel safe because they are familiar. But those same moves kept xAPI from ever crossing the adoption gap.
The truth is, AI does not need the LMS. It does not need SCORM. It does not need xAPI. Those systems were designed for tracking, compliance, and uniformity. That focus brought with it certain constraints in how we design and deploy learning. They existed because we lacked the ability to personalize, adapt, and automate at scale. Now we have that ability. Forcing AI into those containers limits what it can do.
Think about the authoring tools that dominate our field, like Articulate Storyline and Adobe Captivate. They were built for a world where course design had to live inside an LMS and flow through SCORM or xAPI. But today, I can go to a service like Lovable, prompt my way to a front end, connect it to GitHub for version control, link it to Supabase for data, deploy through Vercel, secure a domain name, and even tie it to Stripe for payments. In short, I can build the equivalent of my own LMS and learning module through prompts, and get 80 percent of the way there before handing it to a developer.

Do not think for a second that others are not seeing the same opening. The learning industry is primed for disruption, and companies outside our space already know it. OpenAI and Gemini are already experimenting. The writing is on the wall: whatever gatekeeping is left around developing and deploying learning will face serious challenges in the near future.
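To give one hedged example of what a slice of that stack might look like: recording a completion event straight into a Supabase table from the front end, with no SCORM wrapper in sight. The project URL, key, and table name below are placeholders I am assuming for illustration.

```javascript
// One slice of a prompt-built "LMS": write a completion record to Supabase.
// Project URL, anon key, and table schema are placeholders for illustration.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  "https://your-project.supabase.co",
  "public-anon-key"
);

export async function recordCompletion(learnerEmail, moduleId, score) {
  const { error } = await supabase.from("completions").insert({
    learner_email: learnerEmail,
    module_id: moduleId,
    score,
    completed_at: new Date().toISOString()
  });
  if (error) console.error("Failed to record completion:", error.message);
}
```

The point is not that this particular table design is right. The point is that the plumbing SCORM once monopolized is now a few lines of commodity code.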
The Guild's data shows that while interest in AI is high, most reported use cases fall into surface-level categories: content generation, brainstorming, and image creation. These are Level 1 activities. Very few organizations report using AI for adaptive pathways, integrated personalization, or cross-system automation. In other words, we are following the same path as xAPI—lots of potential, little adoption beyond the early experiments.
We also need to acknowledge some realities. In highly regulated industries, compliance is not negotiable. Audit trails and defensible data are survival tools, and no AI system has yet proven itself ready to replace them. At the same time, adoption patterns tell us something important: organizations cling to familiar workflows. xAPI never crossed into the mainstream not because it was weak, but because it asked for too much change. Those are serious counterpoints, and they deserve attention.
But even if those points are true, they do not change the larger trajectory. Comfort has logic behind it, but it is not enough. Compliance may explain why some systems remain, but it should not set the ceiling for what we can build. Adoption history shows how habits shape decisions, but it also shows how those habits can trap us. Scale and integration are real strengths of incumbents, but they are no longer exclusive. SaaS tools outside L&D have already shown they can grow faster and integrate more deeply than most LMS vendors.
So this is the reality check. Are we preparing for Level 4 maturity, where AI acts as an agent and reshapes how learning actually happens? Or are we content to stay in Levels 2 and 3, patching AI into our existing stack because it feels safer?
The adoption numbers for xAPI are a warning. Just because a technology makes sense does not mean it gets used. Adoption happens when workflows change. If we are serious about AI in L&D, we cannot settle for comfort. We have to design for possibility.
That is why the four legs of the stool matter. API, Markdown, JSON, and JavaScript are not about keeping us inside old systems. They are about giving us the foundation to build outside them. The question is whether we will take that step.
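As a closing sketch of how those four legs stand on their own, here is what a minimal lesson page might look like: JavaScript fetching a Markdown lesson and a JSON progress record over an API and rendering both in the browser. The URLs, field names, and the use of the marked library are assumptions made for the example, not a prescribed stack.

```javascript
// The four legs together: an API call, JSON state, a Markdown lesson, JavaScript glue.
// URLs, field names, and the marked library are assumptions for illustration.
import { marked } from "marked";

async function renderLesson(lessonUrl, progressUrl) {
  // API + Markdown: fetch the lesson source as plain Markdown text.
  const lessonMarkdown = await (await fetch(lessonUrl)).text();

  // API + JSON: fetch the learner's progress record.
  const progress = await (await fetch(progressUrl)).json();

  // JavaScript: render both into the page. No LMS container, no SCORM wrapper.
  document.querySelector("#lesson").innerHTML = marked.parse(lessonMarkdown);
  document.querySelector("#status").textContent =
    `Completed ${progress.completedModules} of ${progress.totalModules} modules`;
}

renderLesson("/content/lesson-one.md", "/api/progress/current-learner");
```

None of this requires permission from an LMS vendor. That is what building outside the old containers looks like.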