On the Agentic Organisation

The next wave of AI innovation is already upon us, and it’s the era of agentic AI. OpenAI’s CFO Sarah Friar has even said that ‘agentic’ will be the word of 2025. The pace of progression has been remarkably fast: from simple chatbots designed primarily to engage in conversation, answer questions, or guide users through a specific task, to assistants and Generative AI tools that have taken this to a whole new (and generative) level, with advanced natural language processing that lets them respond to multi-modal inputs, a much wider range of questions and more complex needs than a context-specific chatbot could handle.

And now the era of the AI agent is emerging – a more advanced, potentially autonomous system that can make decisions based on the input it receives, and use tools to perform tasks. Agents can interact with multiple systems, collect data from a variety of sources, operate on a broader scope, and take actions without continuous human guidance. The differences between agentic AI and chatbots or assistants are articulated in this BCG definition:

‘AI agents are artificial intelligence that use tools to accomplish goals. AI agents have the ability to remember across tasks and changing states; they can use one or more AI models to complete tasks; and they can decide when to access internal or external systems on a user’s behalf. This enables AI agents to make decisions and take actions autonomously with minimal human oversight’.

When Google launched their latest model, Gemini 2.0, Sundar Pichai described these more advanced versions of AI as ‘models that can understand more about the world around you, think multiple steps ahead and take action on your behalf, with your supervision’. And last week OpenAI launched ChatGPT scheduled tasks, which allow for automated and timed execution of specific functions such as sending reminders, generating reports, or performing recurring data analysis. They say that these tasks can be customised to meet deadlines or support workflow efficiency by handling routine activities at pre-set intervals, but this is another significant step towards a true AI agent.

So what happens when organisations are able to use sophisticated AI agents to perform any number of tasks and workflows? The implications of agentic AI are likely to be huge. Not just on the type of work that humans do and how they do it, but on organisational design, how strategy gets executed, skillsets that people will require, resources that businesses will need, management practices and plenty more besides.

Lee Bryant wrote some excellent thoughts setting out an expansive view on the changes this could bring. He makes the point that in spite of the world of work changing rapidly, the way in which work is coordinated and aggregated in most businesses has remained largely unchanged, and yet:

‘The expected arrival of enterprise AI at scale in 2025 presents a once-in-a-generation opportunity to reinvent management and work coordination in ways that could substantially reduce operating costs, whilst making organisations more agile, adaptive and automated.’

Lee talks about how management must transition from micro-level control to governance of AI agent ecosystems – setting clear rules for autonomy, ensuring AI accountability, and enabling seamless collaboration between human and AI teams. Leaders must create a culture of trust where agents operate effectively within ethical and organisational boundaries. He uses the phrase ‘programmable organisations’ to describe how businesses will be empowered by autonomous agents, but also the deliberate human design needed to shift from inflexible hierarchies to adaptive, network-driven frameworks that empower AI agents to take the lead in decision-making. It’s a massive shift.

This agentic era is likely to emerge in stages. In the early stages vertical AI agents will become the next iteration of Software-as-a-Service (SaaS), automating a much wider range of specialised, repetitive administrative tasks across various business functions. Their focus is likely to be on specific tasks (digital marketing, customer support, quality assurance for example) where they can deliver more effective and efficient solutions, leading to significant productivity gains.

But AI agents will rapidly move beyond task automation and narrow application to become orchestrators of entire workflows, and even culture, within programmable organisations. We’ll be able to give an agent an outcome and let it decide on the optimal way of achieving that outcome, rather than specifying what it needs to do. Agents will be able to dynamically allocate resources, optimise team structures, and even surface blind spots in organisational operations, enabling companies to function with unprecedented efficiency and foresight.
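The shift from specifying tasks to specifying outcomes can be sketched as a simple loop. This is an illustrative toy, not any vendor’s implementation: a real agent would use an AI model to choose its next action, whereas here a trivial `choose_tool` function stands in for that decision. All names are hypothetical.

```python
def run_agent(goal_reached, choose_tool, state, max_steps=10):
    """Outcome-driven loop: the caller states the goal; the agent
    decides which tool to apply at each step until the goal is met."""
    for _ in range(max_steps):
        if goal_reached(state):   # outcome achieved, stop autonomously
            return state
        tool = choose_tool(state)  # the agent, not the caller, decides
        state = tool(state)
    return state                   # give up after max_steps

# Toy example: reach a value of at least 10 using two 'tools'.
add3 = lambda s: s + 3
double = lambda s: s * 2
result = run_agent(
    goal_reached=lambda s: s >= 10,
    choose_tool=lambda s: double if s > 0 else add3,
    state=1,
)
```

The point of the shape is that the caller only supplies `goal_reached`; everything between the starting state and the outcome is delegated to the agent’s own tool selection.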

Vertical AI agents that can utilise specific domain expertise and data for narrow applications will be joined by horizontal agents that can handle general tasks and have much wider applicability, creating the need to orchestrate across both generalist and specialist AIs. As AI agents increasingly interact with one another on our behalf, careful management of multi-agent ecosystems will be required to ensure trust, oversight, accountability, and appropriate behaviour among autonomous agents. Agents can potentially act as catalysts for cultural transformation, making work more transparent, inclusive, and outcome-driven. We may even see ‘autonomous organisations’ in which AI systems handle all operational aspects.

The potential for all this to go awry is huge, which is why we need careful consideration and deliberate design around the implications for staffing and strategic risk, but also around the balance of how humans work with AI agents. The worst option is one where we simply fall into revolutionary change almost without noticing.

Several years ago BCG set out a simple framework for understanding the role that AI can play across an organisation, and recognising the subtle balance between AI and human capability that’s needed.

It’s high-level, but I’ve always liked this as a way of understanding the role of AI, not least because it accounts for different contexts. Situations characterised by known knowns, relatively stable environments and repetitive tasks, where the AI can draw on extensive data and/or knowledge and experience, can be automated. But as we move from left to right the AI has less and less context to work with (perhaps because of a lack of data, or high variability, newness and complexity), and so the level of human intervention increases. These kinds of subtle considerations are going to become more and more important for organisations looking to understand the right approach to designing for agentic applications.

In his post Lee also talks about how internal functions and processes need to be re-imagined as services (composable, and likely to be at least partially automated) so that other teams can access what they need, as and when they need it, via internal platforms. This reminded me of Amazon’s Service-Oriented Architecture (SOA), which I wrote about in my second book. As far back as 2002, Jeff Bezos issued a now-famous mandate concerning how software was to be built at Amazon: each team was to expose its data and functionality through service interfaces (APIs), and teams had to communicate with each other only through those interfaces. The externalisation of these APIs formed the basis of Amazon’s thinking around AWS and B2B externalised infrastructure and services (having third parties utilise your services through APIs adds scale and competitiveness, and also generates revenue). But this service-oriented architecture also brought another level of efficiency to internal collaboration. Reimagining functional and team outputs as services enables an SOA in which intelligent multi-agent systems can catalyse cross-team collaboration and operate with minimal dependencies or blockers to delivery.
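The service-interface idea can be made concrete with a minimal sketch. This is not Amazon’s architecture, just an illustration of the principle: a team exposes one capability behind a named interface, and any consumer (human team or AI agent) calls the interface rather than reaching into the team’s internals. The registry, service name and handler are all hypothetical.

```python
class ServiceRegistry:
    """Toy internal platform: teams expose capabilities by name,
    and consumers call them only through this interface."""

    def __init__(self):
        self._services = {}

    def expose(self, name, handler):
        self._services[name] = handler

    def call(self, name, **request):
        if name not in self._services:
            raise LookupError(f"no service named {name!r}")
        return self._services[name](**request)

registry = ServiceRegistry()

# A hypothetical 'pricing' team exposes one capability.
registry.expose("pricing.quote",
                lambda sku, qty: {"sku": sku, "total": qty * 9.99})

# Any other team, or an AI agent, consumes it without internal coupling.
quote = registry.call("pricing.quote", sku="ABC-1", qty=3)
```

The design point is the boundary: because every call goes through the interface, the pricing team can change (or automate) its internals without breaking consumers, which is exactly the property that makes such services composable building blocks for multi-agent systems.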

Yet these systems need to be deliberately designed, modelled and controlled. The potential here, if we get it right, is for the integration of AI agents to empower humans to step away from repetitive, low-value tasks and focus on areas where creativity, empathy, and critical thinking are indispensable. This reallocation of roles positions organisations to solve complex challenges where human ingenuity is critical, but to have that ingenuity and judgement super-charged with AI. AI agents should be workforce multipliers, empowering small teams and individuals to achieve great things.

The implications for organisational design, team structures, resourcing, workflows and job roles are huge. One thing is for sure – there’s a whole bunch of pretty fundamental change coming our way.
