Leading Human/AI Hybrid Teams

For most of human history, leadership has meant one thing – getting the best out of people. But what happens to leadership in an age where hybrid teams of humans and AI agents will increasingly be the norm? It’s a pretty big question. And one that I’ve been working to understand better (for a breakfast talk I’m giving to a bunch of agency leaders, and a trends webinar).

As AI agents take on increasingly meaningful work inside organisations, we are heading into genuinely unprecedented territory. Will your best (human) talent feel liberated or threatened? When an AI agent produces a poor output, who is accountable for that? Who owns the relationship with the AI agents? What happens when your humans and AI agents disagree? How do you know when an AI agent has been given too much autonomy? What does an onboarding process for AI agents look like? How do we manage the performance of AI agents? So many questions.

The cost of getting this wrong is not abstract or philosophical. At the operational level, poorly supervised AI agents introduce errors that can compound quietly, generating outputs that feed downstream decisions before any human has had the chance to check the work. Organisations that fail to manage integration thoughtfully risk alienating their best people. Skilled staff, whose judgement and institutional knowledge are precisely what AI cannot replicate, may feel bypassed, undervalued or uncertain about their future. As agents generate ever greater volumes of output, leaders can easily mistake activity for progress. And when no one is explicitly responsible for what AI agents do, responsibility can diffuse until it disappears entirely.

None of these risks are inevitable. But they are far more likely without a deliberate leadership approach. Thinking about what that deliberate approach might look like, there are a number of principles that are going to be critical:

  • Design for complementarity, not substitution. This means having clarity on mapping AI agents to tasks that they are naturally good at (those requiring speed, scale and consistency), and reserving humans for judgement, relationships and navigating situations that are likely to involve ambiguity. The goal is to create a human/AI system which is smarter than humans or AI alone.
  • Hold AI to the same performance standards. Define clear outputs, metrics and quality thresholds for AI agents just as you would for humans. Vague accountability produces vague results from any kind of employee, and different standards generate resentment.
  • Make oversight a deliberate role, not an afterthought. Humans should be assigned explicit responsibility for monitoring, correcting and learning from AI agent behaviour. Supervision must be structured rather than assumed.
  • Build trust through transparency. Teams need to understand what AI agents are doing and why. Murkiness breeds resistance but explainability supports confidence and more effective collaboration.
  • Treat integration as a cultural challenge, not a technical one. Redefining roles and dealing with pushback and concerns is likely to be a bigger challenge than the technical aspects of setting up AI agents. Be respectful of the cultural shifts this will entail.
  • Design clear escalation protocols. Better to know in advance when AI agents must defer to human judgement and to make those boundaries explicit. Understanding what happens when the AI hits the edge of its competence is essential to both trust and performance.
  • Build for continuous learning. Treat the human/AI team as a system that improves over time. In rapidly changing environments systems that are overly rigid will become brittle and fragile over time. Ones that have learning and adaptation at their core will remain resilient.

These principles can help set a hybrid team up for success. But to make a hybrid team truly high-performing we need to take this further into expectations, behaviours, and team dynamics in the way that we would with a human team. AI agents are NOT humans of course, and it would be a mistake to treat them as such. But at a team dynamics level we can draw on some foundational principles for how we can bring humans and agents together in a way that amplifies rather than undermines team performance.

So here I’m going to draw on a renowned framework for high-performing teams (in a human context) – Patrick Lencioni’s Five Dysfunctions of a Team, first published in 2002. It sets out five key attributes for team performance and behaviours but also focuses on how they inter-relate and how each enables (or negates) the others. There is of course lots of nuance around team performance that models can’t capture (as the statistician George Box said, ‘All models are wrong, but some are useful’, right?). But it’s a good way of framing the foundations of team performance, not just for human teams but also for hybrid human/AI teams.

Beginning at the bottom and going up the pyramid, the foundation of the model is trust (without it, high performance is impossible), and what trust enables is psychological safety, equality of contribution, and healthy debate to solve problems well. If team members don’t feel they have had the chance to actively contribute to the strategy and objectives, they are far less likely to be committed to the direction the team settles on. A lack of commitment means that team members don’t hold each other to account, which in turn means that everyone is concerned only with their own results, not those of the wider team.

Now let’s apply this to a context where we have humans working alongside AI agents (again, starting at the bottom and going up):

  • Trust (Dysfunction: Absence of Trust): Human-AI teams fail when people don’t understand what AI agents are doing or why. The job of leadership is to engineer transparency (explainable outputs, visible reasoning, honest acknowledgement of AI limitations). This ensures genuine confidence on the part of the humans rather than blind faith or even blanket suspicion.
  • Conflict (Dysfunction: Fear of Conflict): Healthy teams challenge each other, so leaders have to actively encourage humans to interrogate, question and push back on AI outputs rather than deferring to them. Productive friction between human judgement and AI recommendation is a useful feature if it results in better outcomes.
  • Commitment (Dysfunction: Lack of Commitment): AI agents can execute with consistency, but only toward the goals they’ve been given. Here leaders need to ensure that the whole team, human and AI, is oriented around clearly defined, shared objectives. Ambiguity in direction confuses people, but it can also compound at machine-driven speed.
  • Accountability (Dysfunction: Avoidance of Accountability): AI agents don’t hold themselves responsible. There is no natural machine-driven accountability (an AI agent operating outside its competence won’t push back, it will just continue producing outputs) meaning that accountability needs to come from humans. Leaders need to assign explicit ownership for monitoring, correcting and learning from AI behaviour.
  • Results (Dysfunction: Inattention to Results): The ultimate test of a human-AI hybrid team is outcomes rather than activity. AI agents can generate enormous volumes of output that can have very little impact. Leaders must keep the whole system (human and AI) focused on what actually matters and resist mistaking velocity for value.

With a human team, the dysfunctions may well surface through observable social signals, but in hybrid teams dysfunctions can easily be masked or at least far less visible. AI agents can create false impressions of alignment and accountability because they execute consistently and without conspicuous resistance.

We’re entering a very different world of leadership with Agentic AI, and these are not insignificant challenges for leaders. Get this right and we can have a team environment that draws on the best of human and AI in a compounding effect. Get it wrong and we risk alienating the very people on whose judgement, experience and sensibility we’ll need to rely for some time to come.

A version of this post appeared on my weekly Substack of AI and digital trends, and transformation insights. To join our community of over thirteen thousand subscribers you can sign up to that here.

