Since leadership training is part of what I do, my network and reading overlap somewhat with the L & D community, where there is currently something of an existential debate going on about the future of learning. In fairness this is probably justified, since AI is about to steamroller through traditional L & D practice, but beyond this I think there are also some good reasons to consider fundamental shifts in organisational and personal learning in the broadest sense, and the implications for the future of work and organisations.
The collapse of the knowledge-expertise distinction
It’s a generalisation, but traditionally learning has been about acquiring knowledge first and then developing true expertise through many years of application. AI completely shifts this dynamic by giving everyone access to seemingly expert-level answers to pretty much any question we choose to ask.
But there’s a subtlety here which is often missed. It’s tempting to believe that because of this we don’t need to teach knowledge anymore and that facts are obsolete. But as Daisy Christodoulou has pointed out, you can only evaluate something well if you know something about it. Looking something up, she argues, involves a cognitive cost, and without sufficient knowledge stored in long-term memory we lack the schema (mental frameworks, patterns of thought) to even understand what we look up, let alone assess it. Without domain knowledge, you can’t ask sensible questions (of the AI), can’t critique (AI) responses, and can’t judge whether the (AI) tone is appropriate for context. In this new age the scarce resource isn’t knowledge but the judgement to know which knowledge matters. But that judgement is itself dependent on a level of knowledge.
A deeper question, therefore, is what the motivation is for acquiring knowledge in the first place when AI can give you all the answers. Again, here we’re likely to see a subtle but fundamental shift. When AI can give you expert-level answers to any question, motivation exists in the gap between knowing the right answer and executing a consequential action. A surgeon may be able to ask an AI anything about anatomy, but as soon as they have a scalpel in their hands they are reliant on their own unique judgement. The motivation to learn comes from the weight of consequence. Now broaden this out to less obviously consequential domains and the same is true: a manager who needs to give feedback to defuse a tricky situation in the workplace, a salesperson who has to read the room and interpret client signals, a strategist who needs to cut through reams of consumer research to identify the most meaningful signals. We need to reframe the benefit of learning as the development of good judgement in a topic area rather than the simple acquisition of knowledge.
The acquisition of new capabilities (not just knowledge) is, in its own way, a powerful motivator. When someone learns to play the piano the goal is usually not just to produce piano sounds (we could use Spotify for that), it is to become a person who plays the piano. The same is true of learning a language or any new capability. It shapes who we are and how we think about ourselves. In the age of AI, learning should be closely tied to craft, identity, and the desire to become a certain kind of person (or what psychologists would call competence motivation). ‘Become the kind of person who…’ is a more fundamental motivational framing than ‘learn how to…’.
A personal view here, but I also think that there will be a growing premium on perspectives that emerge from genuine understanding rather than information retrieval. I think (or at least hope) that leaders will increasingly sense the difference between someone who has thought through a problem and someone who has merely looked up an answer. The former will carry with it a kind of authority and distinctiveness that the latter lacks. The question ‘what do you think?’ becomes more potent, not less, when AI can tell us what everyone else thinks.
The death of the curriculum as an organising principle
Where curricula assume a shared starting point and a predictable sequence, AI enables truly demand-driven learning in a way that we haven’t seen before. The opportunity is for learning that follows the contours of actual work problems rather than abstract competency frameworks. But the profound challenge inherent in this is developing people when everyone’s learning path is completely unique. To get good at this, organisations will need to shift from knowledge transfer to the design of learning environments where challenges based on different contexts emerge naturally and AI can support exploration.
In his book How People Learn (recommended), Nick Shackleton-Jones adds another dimension to this with what he describes as the ‘Affective Context Model’. The radical proposition at the heart of this is that we don’t actually encode knowledge in our memories; instead we encode the pattern of emotional reactions we have to experiences, and it is these that we later use to reconstruct knowledge. He makes the point that in order to learn about something we need to care about it. When people care about something they need resources that they can ‘pull’ on at point of need. When they don’t care about it they need experiences that ‘push’ them to care. The aim of this is not to help someone know something, but to help them actually do something.
Another way of thinking about this is the difference between productive and generative learning. The former is focused on the efficient acquisition and application of known information. An example of this might be coaching someone through a process that they haven’t done before, or surfacing relevant information at point of need. AI can supercharge productive learning by providing on-demand coaching and support for just about any task at the exact moment that the employee needs it. In this sense learning becomes indistinguishable from working. You are learning as you do the work, meaning that capability becomes a dynamic relationship between the employee, the tools they’re using, and the situational context.
If productive learning is suitable for ‘find-it-out’ situations, then generative learning should be designed for ‘figure-it-out’ use cases. This is about creating new understanding and meaning, perhaps through actively connecting new information to prior knowledge, or generating new insights for unknown or new situations. Perhaps less obviously but no less powerfully, AI can supercharge this too, not just by making it a lot easier to select relevant information, organise it into a coherent structure, and integrate it with existing knowledge, but by enabling an expedited and potentially deeper exploration. If we learn through emotional responses to experiences, we need to create the kind of learning experiences that elicit emotional reactions through greater agency, exploration and progress.
The ‘zone of proximal development’ (Vygotsky) is a well-known concept in educational psychology that expresses the gap between what a learner can do on their own and what they can achieve with guidance from a ‘More Knowledgeable Other’ (MKO). It’s a sweet spot for learning because the tasks are challenging but not impossible, and mastery can be achieved with the right support. In this new era, that support may increasingly come from AI, alongside the right level and type of human intervention.
Tacit knowledge and experiences that transform
Organisations contain huge reservoirs of tacit knowledge that exist largely in the unwritten, unspoken and less visible ways in which people work in the business. It’s an under-recognised but very powerful enabler of capability. In the same way that an apprentice learns a trade through proximity to an experienced practitioner, social learning can equip employees with a unique kind of understanding of what they need to do in different situations. AI presents an opportunity to observe patterns of work at scale, to codify this kind of tacit knowledge and make it far easier to access. But I think this needs very careful thought.
Nick Shackleton-Jones distinguishes between ‘resources’ (which help people perform tasks) and ‘experiences’ (which transform them). As an example, a pilot can’t learn how to fly just by reading a manual. They need simulator and real flying experiences to actually change who they are. AI may be able to codify the resource dimension of tacit knowledge but I suspect that the experiential dimension will remain stubbornly human. So AI might accelerate the resource side while making the experience side more valuable (and more deliberately designed). Either way, L & D’s role will need to shift more to becoming an architect of transformative experiences rather than a curator of content.
To draw together these strands and help define the right approach and the role of AI, I’ve created a 2 x 2 matrix which sets out the context on two dimensions: the stakes involved in the task (what happens if it goes wrong?), and its novelty (how predictable and repeatable is it?). This gives us four essential approaches which each have different levels of AI and human intervention:
Low stakes, low novelty = learning elimination. For routine, predictable tasks where errors are easily corrected and people can look up what they need (compliance refreshers, process documentation, standard operating procedures), the goal should be to eliminate the need to learn by providing resources at point of need (productive learning). AI should handle the cognitive load, and the requirement is performance, not development.
High stakes, low novelty = deliberate practice, then resources. For tasks that are predictable but consequential, capability needs to be internalised. Here, you build the foundational capability through deliberate practice first, then provide AI support as a backup and an efficiency layer for the already-capable.
Low stakes, high novelty = AI-supported exploration. For genuinely new situations where the cost of failure is manageable, AI becomes a powerful learning partner (generative learning, or figure-it-out situations), accelerating exploration, organising thinking, and enabling faster iteration. The learning happens through the AI-assisted work.
High stakes, high novelty = experiential transformation. For situations that are both unpredictable and consequential, people need transformative experiences in which they are changed by the intervention (simulations, guided exploration and learning, stretch assignments, real-world challenges). AI might support preparation and reflection, but the essential learning is human-driven.

A final point on this, returning to where I began this post. AI might provide the kind of support and cognitive scaffold articulated in Vygotsky’s ‘zone of proximal development’, but it’s clear that whilst AI accelerates performance for the capable, that capability and expertise must itself be built through fundamentally different means. Daisy Christodoulou argues for deliberate practice of component skills, assessed in the absence of AI. For what it’s worth, I still believe strongly in the power of face-to-face, experiential learning. You might say that I’m biased in this view since I run a lot of face-to-face, experiential workshops, and you’d be right, but I also believe that there is no reason why in-person experiences shouldn’t form part of a much wider portfolio of AI-driven and AI-augmented interventions and support.
AI is going to fundamentally change the economics of knowledge and learning, but not the fundamentals of human development. Whilst AI can handle the informational layer with increasing ease, this only heightens the need for what remains distinctly human: the judgement born of genuine understanding, the capability forged through deliberate practice, and the transformation that comes from experiences that matter.
A version of this post appeared on my weekly Substack covering AI, digital trends and transformation insights. To join our community of over thirteen thousand subscribers, you can sign up to that here.
To get posts like this delivered straight to your inbox, drop your email into the box below.
