AI-Assisted Learning and L&D – eLearning Industry

What Research Says About AI, Learning, and People

I came late to education in my career, and it humbled me in ways I never expected. There are more skills and areas of research involved than most people realize. The more research I read, especially regarding AI, the more convinced I became that we are looking at this the wrong way. There is a version of the conversation about AI in L&D that goes something like this: AI will handle the routine instruction, and L&D teams will focus on strategic work. It sounds reassuring. It is also too simple.

Research on AI-assisted learning tells a more complex and more interesting story. AI doesn't just handle routines. When properly designed, it can outperform conventional instruction by measurable margins. And when poorly designed, it delivers no benefit at all and can even produce negative results. That gap, between well-designed and poorly designed AI learning, is where the role of the L&D practitioner becomes more important, not less.

What Human-Led Instruction Still Does Best

Before exploring what AI can do, it's worth clarifying what it can't. A landmark meta-analysis by Roorda et al. (2017) found that the quality of the instructor-student relationship is one of the strongest predictors of engagement and learning outcomes. The reverse is equally true: negative relationships are even more detrimental to outcomes. These findings do not disappear in the workplace. Human facilitators and L&D professionals bring four things that AI cannot reliably replicate:

  1. Reading the room
    Detecting disengagement, resistance, or psychological safety issues in a group, which no model can reliably infer from interaction data alone.
  2. Contextual judgment
    Knowing which learning objectives matter given what is happening in the group or organization around them.
  3. Culture building
    Creating norms for how people learn together, challenge each other, and apply new skills in the specific context of their organization.
  4. Ethical accountability
    Making defensible decisions about evaluation, performance, and development that affect people's jobs.

The barrier to human-led L&D has never been motivation or expertise. It has always been scale. Providing personalized feedback and practice to every learner, at the pace each individually needs, is impossible without the help of AI.

What AI-Assisted Learning Can Actually Deliver

In 1984, Benjamin Bloom identified what he called the "2 Sigma Problem": students who receive one-to-one tutoring outperform their peers in conventional group instruction by two standard deviations [1]. The question ever since has been how that could be achieved at scale. Forty years later, AI is starting to provide a practical answer.

A 2025 randomized controlled trial published in Nature's Scientific Reports found that a research-based AI tutoring system outperformed in-class active learning on knowledge outcomes. Importantly, the benefit only emerged when the system was designed to encourage critical thinking and application, rather than simply providing answers on demand. Unguided AI access showed no measurable benefit. The design of the learning experience was everything.

A separate UK RCT (2024) testing Google's LearnLM reached a similar conclusion: students tutored by an AI model showed better knowledge transfer to novel problems than those who received human-led instruction alone [2]. Human facilitators in that study focused on motivation, encouragement, and social-emotional support. The hybrid model outperformed either approach on its own.

VanLehn's groundbreaking research on tutoring system design explains why this works when done right: effective AI tutoring systems turn assessment into instruction continuously, providing feedback at every step rather than only at the end of a module. That capability is even stronger now with Large Language Models that can respond to open-ended answers, not just multiple-choice selections.

However, AI-assisted learning has real failure modes that L&D professionals need to design around:

  1. Hallucinations
    AI models can generate fluent, confident, and inaccurate content. For compliance or technical skills training, this is a serious risk that requires human review.
  2. Dependence
    Always-on AI assistance can short-circuit the productive struggle that drives long-term learning. Desirable difficulties are a feature, not a bug.
  3. Bias
    Automated scoring and feedback need to be audited for differing error rates across learner groups, especially in organizations with diverse workforces.

Formative Vs. Summative: A Practical Framework For L&D

A very useful lens for deciding where to use AI in the learning process is the formative and summative distinction. For formative learning tasks (practice, reflection, knowledge checks, scenario responses), AI is often the clear winner. Learners get instant feedback, more opportunities to practice, and a low-stakes space to make and learn from mistakes. A 2025 systematic review in Frontiers in Education confirmed these benefits across 37 studies, while also noting that they depend on L&D professionals remaining active mediators of the experience, not passive suppliers of the tool [3].

For summative and high-stakes assessment, the calculus changes. Fairness, validity, and defensibility matter more than efficiency. A study by Litman et al. (2021) on AI-assisted scoring maps out where automated assessment can be trusted and where human review is non-negotiable, especially for written work, professional judgment tasks, and anything with performance management implications. In practical terms: let AI take charge of the formative. Keep humans in the loop on anything that affects a learner's standing in the organization.
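The routing rule described above can be sketched as a small decision function. This is a minimal illustration of the framework, not an implementation from the article; the task categories and field names are assumptions.

```python
# Formative task types where instant AI feedback is appropriate (illustrative set).
FORMATIVE = {"practice", "reflection", "knowledge_check", "scenario_response"}

def route_assessment(task_type: str, affects_standing: bool) -> str:
    """Route a learning task to AI feedback or mandatory human review.

    Anything summative, or anything that affects the learner's standing in
    the organization, stays with a human reviewer.
    """
    if affects_standing or task_type not in FORMATIVE:
        return "human_review"   # summative / high-stakes: humans in the loop
    return "ai_feedback"        # formative: AI can give instant feedback

# Examples:
# a practice scenario with no performance implications -> AI feedback
# the same task type, once it feeds a performance review -> human review
```

The point of encoding the rule explicitly is that it can be reviewed and challenged by stakeholders, rather than living implicitly in tool configuration.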

The L&D Practitioner In AI-Assisted Learning: Behaviors And Skills

The evidence points to a clear conclusion: the role of the L&D function is not diminishing in an AI-assisted learning environment. It is changing, and in some respects, becoming more essential. Here are the behaviors and skills that separate L&D practitioners who will use AI successfully from those who will struggle with it.

1. Learning Design Literacy: Knowing What AI Should And Shouldn't Do

The 2025 Nature RCT found that undirected use of AI produced no learning benefit. The practitioners who will get value from AI tools are those who understand learning design well enough to specify what the AI should do, when, and within what constraints.

This means going beyond curating content to designing learning architectures: sequencing AI practice with human reflection, building in retrieval practice intervals, and specifying what the AI should prompt the learner to work out rather than simply hand over.

2. Data Interpretation: Reading What The AI Is Telling You

AI-assisted learning platforms generate learner data at a scale and granularity previously unavailable. The L&D professional of the next decade needs to be comfortable asking: what does this pattern in the data tell me about engagement? Where are learners getting stuck? Which groups differ, and why? This is not a data science role, but it requires enough analytical fluency to move from dashboard to design decision.
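As a concrete illustration of that "dashboard to design decision" move, here is a minimal sketch of asking "where are learners getting stuck?" from raw platform events. The event schema (`learner_group`, `module`, `completed`) is a made-up assumption, not any real platform's export format.

```python
from collections import defaultdict

# Hypothetical event records an AI-assisted learning platform might export.
events = [
    {"learner_group": "new_hires", "module": "intro", "completed": True},
    {"learner_group": "new_hires", "module": "scenarios", "completed": False},
    {"learner_group": "new_hires", "module": "scenarios", "completed": False},
    {"learner_group": "managers", "module": "intro", "completed": True},
    {"learner_group": "managers", "module": "scenarios", "completed": True},
]

def completion_rates(events):
    """Completion rate per (group, module): a first pass at 'where do learners get stuck?'"""
    totals, done = defaultdict(int), defaultdict(int)
    for e in events:
        key = (e["learner_group"], e["module"])
        totals[key] += 1
        done[key] += e["completed"]
    return {k: done[k] / totals[k] for k in totals}

rates = completion_rates(events)
# A module with low completion for one group flags a design review,
# not an automatic conclusion about those learners.
stuck = [k for k, r in rates.items() if r < 0.5]
```

Even this toy version makes the practitioner's question answerable: the "scenarios" module is where one group stalls, which is a design question before it is a learner question.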

3. Prompt And System Design: Defining AI Behavior Precisely

Deploying an AI learning tool is not the same as configuring it properly. Successful practitioners will need to be able to write clear instruction briefs for AI systems: specifying the persona, the constraints, the types of feedback the AI should provide, and the escalation points where a human facilitator should step in. This is a new form of Instructional Design, and it is quickly becoming a core L&D competency.
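One way to make such an instruction brief concrete is to treat it as structured data that renders into a system prompt, so it can be versioned and reviewed like any other design artifact. Everything below (field names, wording, the rendering helper) is an illustrative assumption, not a prescribed format.

```python
# A hypothetical instruction brief for an AI practice coach.
brief = {
    "persona": "a patient coach for customer-service role-play practice",
    "constraints": [
        "never reveal the model answer before the learner has attempted one",
        "keep feedback under 120 words",
    ],
    "feedback_style": "ask one probing question, then give one concrete suggestion",
    "escalate_when": [
        "the learner reports distress or a workplace grievance",
        "the question concerns compliance or legal obligations",
    ],
}

def render_system_prompt(brief):
    """Turn the brief into a system prompt string for whatever LLM API is in use."""
    lines = [f"You are {brief['persona']}."]
    lines += [f"Constraint: {c}" for c in brief["constraints"]]
    lines.append(f"Feedback style: {brief['feedback_style']}")
    lines += [f"Hand off to a human facilitator if: {e}" for e in brief["escalate_when"]]
    return "\n".join(lines)

prompt = render_system_prompt(brief)
```

The design choice worth noting is the explicit `escalate_when` list: escalation points are part of the brief itself, not an afterthought left to the tool's defaults.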

4. Ethical Oversight: Auditing for Bias and Maintaining Defenses

As AI takes over formative assessment, L&D professionals take on a new responsibility: ensuring that automated feedback is fair, accurate, and does not systematically disadvantage certain groups of learners. This requires building auditing practices into the program cycle, not treating fairness as a one-time procurement checklist item. It also means maintaining the confidence to override AI recommendations when human judgment says something is wrong.

5. Facilitating What AI Can't Replicate

As AI absorbs more of the knowledge transfer and adaptive practice workload, the human facilitation that remains needs to be genuinely irreplaceable. That means leaning harder into the things research confirms matter most: psychological safety, motivational support, contextual challenge, and the kind of feedback that requires knowing the person, not just the performance. The L&D professionals who will thrive are the ones who see AI taking over repetitive, incremental work as an opportunity to do the human work better, not as a threat to their professional identity.

The research is clear on one thing above all: the design judgment of L&D professionals is what determines whether AI-assisted learning works or fails. That is not a diminished role. It is an elevated one. The organizations that get this right will be those that invest in developing their L&D function alongside their AI tools. Technology without skilled practitioners, as the evidence shows, is no better than no technology at all.

Over To You

Which of these skills are you already developing in your L&D team, and where are the biggest gaps? I would welcome feedback from practitioners working on this in the field.

References:

[1] The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring

[2] AI tutoring can safely and effectively support students: An exploratory RCT in UK classrooms

[3] Educators’ reflections on AI automation in higher education: an integrated systematic review of strengths, pitfalls, and ethical dimensions

Research Cited:

[1] Effective Teacher-Student Relationships and Student Engagement and Achievement: A Meta-Analytic Review and Examination of the Mediating Role of Engagement

[2] The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring

[3] The Behavior of Tutoring Systems

[4] Assessing the Validity of Automated Methods for Detecting the Use of Textual Evidence in Writing

[5] AI tutoring outperforms in-class active learning: An RCT introducing a novel research-based design in an authentic educational setting

[6] AI tutoring can safely and effectively support students: An exploratory RCT in UK classrooms

[7] Educators’ reflections on AI automation in higher education: an integrated systematic review of strengths, pitfalls, and ethical dimensions

[8] What research shows about generative AI in teaching
