Perpetual Onboarding: How AI Enables the Continuous Model

The onboarding ends. The need for development does not.

There is a document that almost every new hire receives in their first week. It goes by different names, but the structure is the same: 30 days to learn the product, 60 days to manage your first accounts, 90 days to work independently. The 30-60-90 plan is one of the most widely used onboarding tools in the professional world, and it’s not a bad framework. The problem is not the document. The problem is what organizations believe when it is finished. They believe this person is done.

I come from an L&D background and now lead a customer success (CS) function. That combination gives me an uncomfortable view of the same problem from both sides. On the L&D side, I understand why the event model persists: it’s measurable, deliverable, and gives the business something tangible to point to. On the CS side, I can see what it costs. New team members finish their 30-60-90s and are then left to navigate promotions, product changes, and increasingly complex accounts without any consistent support structure. The onboarding ends. The need for development does not.

The Metric Does Not Stop at Day 90

The pressure on L&D teams right now is significant. Managers want time-to-productivity and time-to-efficiency down. They want new hires who deliver quickly, ramp smoothly, and stay long. Those are legitimate commercial requirements, and they are worth measuring.

But time-to-efficiency is not a 90-day metric. It reappears at every moment of change in a professional’s working life. When someone is promoted, they enter a new level with new expectations and a new performance gap. When the product changes significantly, the entire team faces a version of the same gap. When a professional takes on a new type of account, a new market, or a new leadership responsibility, they are, in effect, onboarding again. The organization has a commercial interest in closing that gap every time, not just in the first 90 days of a person’s tenure.

The event model of onboarding is not just a learning problem. It is an ongoing performance loss that no one formally measures, because we stop counting after day 90.

Why The Right Model Has Been So Hard To Deliver

There is a name I would give the alternative: perpetual onboarding, the recognition that development is a continuous cycle, and that the support infrastructure built for new hires should, by design, work for every significant transition a professional makes during their tenure.

Most L&D practitioners understand this instinctively. The reason it is not the default model is not a lack of insight; it is capacity. Providing personalized, just-in-time support to everyone on a team, at every stage of their work, precisely when they need it, is a human-resource problem. A manager cannot be a continuous coach for six people at once, each at a different level, each facing different challenges. So organizations design programs for the middle of the curve, deliver them on a schedule, and measure completion, because completion is what can be counted.

The result is exactly what Josh Bersin’s research has consistently shown: completion rates go up, but performance outcomes do not follow. Learning infrastructure gets built around a metric that can be captured rather than an outcome the business actually cares about. I saw this from the L&D side for years. Sitting now in a CS leadership role, I feel it differently. The gap between what the onboarding program promised and what my team needed was not a content issue or a budget issue. It was a model problem.

What AI is Changing, Exactly

Artificial intelligence (AI) does not solve the problem of underinvestment in L&D. Anyone who tells you it does is selling something. What AI does is remove the human bottleneck that has made the perpetual onboarding model impractical at scale.

A well-designed AI coaching agent can be there when a practitioner is preparing for a high-stakes conversation with a client or key stakeholder. It can respond differently to a question from a new hire and the same question from an experienced practitioner, because the support those two people need is very different. It can recognize when someone is navigating material beyond their current experience and increase the scaffolding accordingly, without requiring a manager to notice and intervene. And it can do all of this simultaneously, for an entire team, at any hour.

That is not AI replacing human development. That is AI making the right model workable for the first time.

Building the Proof of Concept

Earlier this year, my team and I put this to the test. During our company’s hackathon, we built an AI coaching agent called CSM 360: a perpetual onboarding system designed to support customer success managers from their first day in the role through senior leadership.

The framework draws on Charles Jennings’ 70-20-10 model and Bersin’s capability academy research, but the most important design decision is simpler than any theoretical framework: the coach treats every significant transition as a new onboarding moment. A promotion is an onboarding moment. A major product release is an onboarding moment. A first enterprise account after years of mid-market experience is an onboarding moment. The 30-60-90 structure covers the first loop of the cycle, but the cycle does not end.
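The transition-as-onboarding idea can be sketched in a few lines of code. This is a minimal illustration, not the actual CSM 360 implementation: the event names and the coaching responses are hypothetical assumptions chosen to mirror the examples above.

```python
# Hypothetical sketch: treating career and product transitions as new
# onboarding moments. Event names and responses are illustrative only,
# not the real CSM 360 system.

ONBOARDING_TRIGGERS = {
    "new_hire": "Run the classic 30-60-90 loop.",
    "promotion": "Restart coaching against the next level's expectations.",
    "major_product_release": "Re-scaffold product knowledge for the whole team.",
    "first_enterprise_account": "Treat the new segment like day one.",
}

def coaching_response(event: str) -> str:
    """Map a transition event to a re-onboarding action; default to ongoing support."""
    return ONBOARDING_TRIGGERS.get(event, "No transition detected; continue ongoing support.")

print(coaching_response("promotion"))
# → Restart coaching against the next level's expectations.
```

The design point is the default branch: when no transition fires, support continues rather than stopping, which is the difference between an event model and a continuous one.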

The coach differentiates by level, drawing on our internal CS skills matrix to adjust not only the depth of its responses but the kind of support it provides. A new hire asking about an at-risk account gets scaffolding, process guidance, and reassurance that escalating is the right call. A senior CSM asking the same question is challenged to diagnose the root cause before any framework is offered. Same question, completely different answer, because the development need is completely different.
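Level differentiation of this kind is often implemented as prompt routing: the same question is wrapped in different coaching instructions depending on seniority. The sketch below is a hypothetical illustration under that assumption; the level names and prompt fragments are invented, not drawn from the real skills matrix.

```python
# Hypothetical sketch of level-differentiated coaching: one question,
# different support depending on seniority. Levels and instruction text
# are illustrative assumptions, not the actual CS skills matrix.

def build_coaching_prompt(question: str, level: str) -> str:
    """Wrap a question in level-appropriate coaching instructions."""
    styles = {
        "new_hire": (
            "Provide step-by-step process guidance with scaffolding, "
            "and reassure the user that escalating is a valid option."
        ),
        "senior": (
            "Do not offer a framework yet. First challenge the user to "
            "diagnose the root cause, then review their reasoning."
        ),
    }
    style = styles.get(level, "Answer directly and concisely.")
    return f"{style}\n\nQuestion: {question}"

question = "My account looks at-risk. What should I do?"
prompt = build_coaching_prompt(question, "senior")
```

Whatever model sits behind the agent, this routing step is where "same question, different answer" actually happens.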

We built this at a hackathon, with a small team. The point is not the specific agent. The point is that the model is viable, and that a small team with a deadline was able to prove it.

The Implication for L&D

The conversation around AI in L&D has so far focused largely on content generation and automated delivery. Those are real applications, but they are optimizations of the existing model: they make the event-based approach faster and a little cheaper. They do not change what the model is capable of.

Perpetual onboarding, supported by AI coaching embedded in the flow of work, is a different model entirely. It is the one that finally aligns what L&D builds with what the business actually measures: not completion but capability; not onboarded but ongoing. The professionals I manage do not stop developing at day 90. The managers I report to do not stop caring about time-to-efficiency at day 90. The question L&D must sit with is why the support infrastructure stops there.

If you work in L&D and own any part of the onboarding experience, or if you are a leader who cares about capability rather than completion, the starting point is simpler than building an agent from scratch. Take an AI tool your organization already has access to. Stop using it to polish emails. Start using it to close the efficiency gaps that open up every time someone on your team transitions, gets promoted, or faces a challenge their onboarding plan never anticipated. The infrastructure for perpetual onboarding is already within reach. The only thing missing is the decision to use it that way.
