Designing your agentic AI system

There’s obviously a lot of talk about Agentic AI right now, but the reality I’ve experienced most in teams is that many of the agents built so far are simple, task-focused, sequential automations. There’s nothing wrong with this of course (it’s a logical place to start), but to paraphrase a point that Sangeet Paul Choudary has made very well: AI doesn’t just change tasks, it reshapes the whole way in which work happens, from how it’s organised, to where decisions sit, to enabling new forms of coordination. In other words, the opportunity is much bigger. So with that future in mind, how should we be designing for what’s coming?

An architecture for agents

The success (or otherwise) of developing an ecosystem of AI agents will depend largely on how it is set up. The best angle for this that I’ve seen comes from Craig Hepburn who set out a five-layer ‘intelligence stack’ that will be needed for agents to deliver real value.

  • Foundation (LLMs). These are the AI engines (Claude, Gemini, Copilot, ChatGPT) that power the stack, but this layer is not where the real competitive advantage lies. The models themselves are rapidly becoming commoditised, with the differences between them diminishing. The real choice here is whether to tightly couple with one model (good for integration, but potentially creating a dependency on the one layer where differentiation is disappearing fastest) or whether to retain flexibility.
  • Context. This is where it gets interesting and where, as I’ve written before, the real long-term advantage will be derived. But it’s also where most organisations are furthest behind. This layer is everything your business knows, made accessible to intelligence (data, documents, processes, institutional memory, customer history, strategic direction). It’s most of what makes a business distinctive, and is probably currently trapped in silos, people’s heads, email threads, and systems that don’t talk to each other. Jon Miller had a useful take on this, describing four types of ‘context-as-a-service’: Operational, or ‘how we do things’; Memory, or ‘what we’ve learned’; Governance, or ‘what rules must be enforced’; Brand and voice, or ‘how we look and sound’. An agent is only as good as the context it can draw on, so two organisations using the same model with different context layers will get dramatically different outcomes. The starting point here is not the technology but simply auditing where your knowledge actually lives. It’s harder to build, but it’s where much of the future value sits.
  • Orchestration. This is the layer that decides how intelligence gets used: routing requests to the right model, managing memory, coordinating agents, determining what context to retrieve and when. It acts as a judgement layer and is the home of open standards like Anthropic’s Model Context Protocol (MCP). The critical question for the organisation here isn’t only how this layer works; it’s who controls it. If it’s embedded inside a vendor’s platform you’re subject to their logic and limitations, but configuring and owning it may bring its own set of challenges. Either way, the degree of control you have over orchestration will define how much flexibility you have to evolve your agentic ecosystem over time.
  • Action. This is where agents actually live, operating across your tools rather than inside any single one. A lot of current ‘agentic’ implementations are really just automation within a single application, but an agent that is genuinely useful sits above this, pulling context from wherever it lives and taking action across whatever systems are needed.
  • Interface. Craig makes the point that this is now the thinnest and most disposable part of the stack, and will likely be rebuilt or swapped out faster than any other component. In the old world, the interface was the product (think Google search) but now the value has migrated down the stack into context and orchestration.
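To make the layering concrete, here is a minimal sketch of the stack as code. This is purely illustrative and not from the original post: every class and function name here (`ContextLayer`, `Orchestrator`, and so on) is an assumption invented for the example, not part of any real framework. The point it demonstrates is the one above: the foundation model is a swappable dependency, while context and orchestration are where the organisation-specific value lives.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the five-layer 'intelligence stack'.
# All names here are hypothetical, invented for this example.

@dataclass
class ContextLayer:
    """'Context-as-a-service': operational, memory, governance, brand/voice."""
    operational: dict = field(default_factory=dict)  # how we do things
    memory: dict = field(default_factory=dict)       # what we've learned
    governance: dict = field(default_factory=dict)   # what rules must be enforced
    brand: dict = field(default_factory=dict)        # how we look and sound

    def retrieve(self, request: str) -> dict:
        # A real implementation would do retrieval and ranking here.
        return {"operational": self.operational.get(request, "")}

class Orchestrator:
    """The judgement layer: decides which model runs and what context it sees."""
    def __init__(self, models: dict, context: ContextLayer):
        self.models = models    # foundation layer: swappable LLM backends
        self.context = context  # context layer: the durable differentiator

    def handle(self, request: str) -> str:
        model = self.models["default"]        # routing logic lives here, not in a vendor
        ctx = self.context.retrieve(request)  # context, not the model, differentiates
        return model(request, ctx)            # the action layer would execute the result

# Usage: swapping the foundation model is a one-line change, precisely
# because orchestration is owned here rather than inside a vendor platform.
models = {"default": lambda req, ctx: f"answered '{req}' using {list(ctx)}"}
stack = Orchestrator(models, ContextLayer(operational={"onboard": "steps..."}))
print(stack.handle("onboard"))
```

The design choice the sketch encodes is the one the stack argues for: two organisations could pass identical models into `Orchestrator` and still get dramatically different outcomes, because the `ContextLayer` they supply is different.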

The interesting shift worth highlighting again here is the move from a world where data was the byproduct of using a product to one where context is the product and the interface is the byproduct. But most organisations are not yet spending their time and focus on where the value will actually accumulate.

Design with intent

Moving from the architecture to the agents themselves, there’s a useful model (that I’ve used before) that maps the key roles AI can play in an organisation against the balance of human and machine involvement. It runs from full automation on the left (AI decides and implements with no human in the loop) through to illuminator and evaluator roles on the right (where humans are using AI to solve complex problems, explore, ideate, innovate, stress-test, simulate and so on).

Most organisations are currently focused on the left-hand side (automators and deciders), which makes sense as these are the easiest early wins and the clearest ROI cases. But once those initial productivity gains have been banked, the real opportunity shifts to the right.

This matters because good agentic design requires real clarity on the role an agent will play and how it interacts with human capability at each point in a process. That means thinking carefully not just about the steps in a workflow, but about the inputs the agent requires (data, context, human judgment), the boundaries within which it should operate, and what good looks like for the outputs. Agents on the left of the model can run with minimal human involvement. Agents on the right are designed either to perform specific tasks within a more complex process, or to make humans meaningfully better at the work that only humans can do. The human remains the fundamental driver, but with a quality of input and challenge that wasn’t previously possible.
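One way to force that clarity is to make the design questions explicit before any agent is built. The sketch below is an assumption of my own, not a real framework: the `Role` values, the `AgentSpec` fields and the example agent are all hypothetical. What it shows is the discipline the paragraph above describes: every agent must declare its role on the spectrum, its required inputs, its boundaries, and what good looks like for its outputs.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch: forcing the design questions (role, inputs,
# boundaries, definition of done) to be answered explicitly.

class Role(Enum):
    AUTOMATOR = "decides and implements, no human in the loop"
    DECIDER = "recommends, human approves"
    ILLUMINATOR = "explores and ideates alongside a human"
    EVALUATOR = "stress-tests and simulates human-led work"

@dataclass(frozen=True)
class AgentSpec:
    name: str
    role: Role
    inputs: tuple          # data, context, human judgement the agent requires
    boundaries: tuple      # what the agent must never do on its own
    success_criteria: str  # what 'good' looks like for the outputs

    @property
    def human_in_loop(self) -> bool:
        # Only pure automators on the far left of the model run unattended.
        return self.role is not Role.AUTOMATOR

# Example: a left-hand-side agent with tightly specified boundaries.
invoice_bot = AgentSpec(
    name="invoice-matcher",
    role=Role.AUTOMATOR,
    inputs=("purchase orders", "supplier master data"),
    boundaries=("never approve payments above the agreed threshold",),
    success_criteria="matched invoices with zero duplicate payments",
)
print(invoice_bot.human_in_loop)  # False: far left of the model
```

An evaluator agent built with the same spec would return `True` for `human_in_loop`, reflecting the point above that right-hand-side agents exist to make humans meaningfully better rather than to replace them.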

Redesigning the organisation for agentic

Getting agentic right isn’t only about the architecture and the agent design. It’s a significant organisational shift, and most of the potential friction is created not by the technology but by how organisations are structured around it.

Conway’s Law tells us that organisations design systems that mirror their own communication structures. The information flows, team boundaries, hand-offs, hierarchies and dependencies that have grown over time become embedded in the products and processes a company creates. Simply layering agents onto existing structures will reproduce the limitations of those structures. So escaping that trap requires redesigning how work happens, not just who (or what) does it.

Research from the Frontier Firm Initiative at Harvard Business School (combining Harvard’s organisational research with Microsoft’s deployment data) confirms this. They describe most large enterprises as being ‘pilot-rich but transformation-poor’. The technology works, and individual productivity gains are real, but those gains remain trapped inside specific workflows unless leadership intentionally redesigns the broader system. It’s no good an agent being able to produce something in a fraction of the time if the sign-off for that thing still takes weeks. You’ve just shifted the bottleneck somewhere else.

This plays out in several notable ways. First, getting trapped in efficiency. The initial framing of AI as a tool for cost-reduction constrains thinking about its bigger potential. And a negative ‘AI will take our jobs’ narrative is not conducive to genuine transformation. But there is also a subtler risk that initial productivity gains from agents give way to an unsustainable intensity of work. Some longer-term studies have shown that employees using AI extended work into previously protected hours, often voluntarily, because AI made doing more feel possible and rewarding. Leaders need to intentionally reallocate reclaimed time toward higher-value work and prevent it being absorbed into more meetings, emails, and low-value activity. You can’t assume that happens naturally – you need to design for it.

Then there’s what the FFI researchers called the process debt problem. Workflows grow complex over time, full of exceptions accumulated through acquisition or localisation, hand-offs, and individualised requirements. A bad process automated is still a bad process. Leaders need to think bigger, and ask themselves and their teams what the process would look like if it was designed from scratch for an agentic world (clean-sheet process redesign). Without doing that, the initial productivity gains stay imprisoned within individual workflows and the system-level transformation never happens.

Lastly there’s a kind of identity problem. For decades, expertise in the organisation was defined by being ‘the person who knew’. Proficiency and know-how confer status. In many functions the knowledge of how something gets done lives in the heads of the people who do it. It’s rarely documented. So alongside the kind of tangible knowledge architecture that can provide context for agents, there is the need for agents to access and benefit from all the institutional and tacit knowledge that sits within the business. And that means framing the shift as legacy building rather than as a threat or a change to status.

The crux of all of this is that Agentic AI is going to require architectural innovation, which means redesigning how work gets done rather than layering AI onto existing structures. That demands new levels of cross-functional collaboration, deliberate space outside existing org charts to think differently, and a fundamental rethinking of leadership itself, towards establishing and leading hybrid human-AI teams in ways that get the best from both without isolating or demeaning either.

A version of this post appeared on my weekly Substack on AI, digital trends and transformation insights. To join our community of over thirteen thousand subscribers you can sign up to that here.

