Non-binary and not single-threaded
A two-day training does not make a practitioner
GenAI enablement is not binary, and it is not single-threaded.
Many companies are struggling to get meaningful returns from their GenAI initiatives. Part of the reason is how they think about “enablement”, the process of training and supporting people with GenAI tooling and ways of working. Too often, it is treated as a point-in-time activity. Something to check off and move past.
But enablement is not binary. You cannot go from “I have never interacted with ChatGPT” to “I run swarms of AI agents that push code straight to production” in one session, or even a handful.
It sounds ridiculous when stated that way. Yet many training programs effectively assume exactly that.
Day 1: Prompting fundamentals
Day 2: Context engineering
Day 3: Agentic workflows
Day 4: Onboard onto your project
Day 5: Increase velocity by 40% sprint over sprint
This is not verbatim, but it is not far off from what I have seen in Fortune 100 companies. Run a short training, then expect lift-off.
Let’s count the ways this approach falls short.
First, the initial training, while necessary, cannot be compressed into a few days. At best, it establishes a foundation. Cramming this much content into a short window leaves no time to absorb, experiment, or develop a personal perspective on how to apply it.
Second, this training is largely use-case agnostic. It does not consider which use cases the tooling will be applied to, or adjust the approach accordingly. Context engineering, for example, looks very different depending on whether you are doing UAT bug triage or writing user stories.
Third, it treats each capability as one-and-done. Spend a day on agentic workflows, and you are now “enabled”. That is like saying that once someone learns to drive, they can handle every vehicle and situation: no distinction between motorcycles, 18-wheelers, race cars, and dune buggies. You are now a “driver”, therefore you can drive anything.
Finally, it focuses on increased velocity, which is misguided on multiple levels.
Velocity, the number of story points completed per sprint, came out of Extreme Programming and Scrum in the late 1990s. It was never meant to measure productivity. It is a relative measure of a specific team’s capacity over time.
A story point does not represent just time, just complexity, just effort, or just uncertainty. It is a combination of all of those. And crucially, it is relative to a specific team.
If my team estimates a piece of work at 3 points, that only has meaning for my team. Another team might estimate the same work at 5 points and complete it in the same time with the same resources. Both are correct, because the scale is internal.
If points are team-specific, then velocity is also team-specific. That alone makes it unsuitable as a cross-team measure of productivity.
Worse, using velocity this way encourages bad behavior. If my team delivers 100 points per sprint and leadership expects 150, the easiest solution is not to work faster. It is to re-estimate. Turn every 3 into a 5, and suddenly we are delivering 160 points. Nothing has improved except the optics.
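The re-estimation trick is pure arithmetic. Here is a minimal sketch; the story mix is invented for illustration, chosen so the totals match the numbers above:

```python
# Hypothetical sprint: 30 three-point stories plus two five-point stories.
# These numbers are made up to illustrate the point, not taken from a real team.
sprint = [3] * 30 + [5] * 2

original_velocity = sum(sprint)
print(original_velocity)  # 100

# Re-estimate: every 3 becomes a 5. The underlying work is unchanged.
inflated = [5 if points == 3 else points for points in sprint]

inflated_velocity = sum(inflated)
print(inflated_velocity)  # 160
```

Sixty percent more “velocity”, zero additional work delivered. Any target expressed in points can be hit this way, which is exactly why points only carry meaning inside the team that assigned them.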
In the process, we lose the integrity of story points as a meaningful reflection of time, complexity, effort, risk, and uncertainty within a specific context.
So let’s recap.
We cannot meaningfully change how people work by compressing training into a few days.
We should not assume GenAI practices apply uniformly across use cases.
We should not assume all practitioners operate at the same level of proficiency.
We should not treat velocity as a universal benchmark.
So what should we do instead?
We need enablement motions that are continuous, contextual, and compounding.
Enablement is continuous. People do not learn GenAI the way they learn a tool. They learn it the way they learn a craft. Through repeated exposure, feedback, and iteration over time.
Enablement is contextual. The way a QA engineer uses GenAI should not look like the way a product manager uses it, which should not look like the way a platform engineer uses it. Even within the same role, the approach should evolve based on the problem at hand.
Enablement is compounding. Each layer builds on the one before it. Prompting leads to better context. Better context enables more reliable workflows. Reliable workflows make agentic systems possible. But none of these layers are ever “done”.
And most importantly, enablement is not single-threaded.
It does not happen in a classroom. It happens in the flow of work.
It does not happen once. It happens every day.
It does not happen the same way for everyone. It evolves based on the individual, the team, and the problem space.
The organizations that will get real returns from GenAI are not the ones that run the best training programs. They are the ones that build systems of learning.
Systems that embed experimentation into delivery. Systems that allow teams to share patterns and evolve practices. Systems that treat enablement as an ongoing capability, not a kickoff event.
Because GenAI is not a tool you roll out. It is a way of working that teams evolve into.
Enablement is not an event. It is an operating model.
In a follow-up, I will break down the specific roles and practices you can put in place to jumpstart that evolution and sustain it over time.



Say it all again, and say it louder, every chance you get. One aspect of the overall change pattern that draws my attention: there is a sometimes visible, sometimes unstated goal of collapsing roles, or said more kindly, evolving people, into broader contexts - a two-pizza team becomes a one-pizza team. To some degree, velocity focus is a substitute for that... The optimist in me says that should the evolution be achieved, the other half of the team will move on to higher-order (or at least new-order) work. I wonder what might need to be added to the enablement equation for that?