The progression of AI agents

I truly believe that with AI agents we are on the cusp of huge change – in organisation design, strategy, operations, staffing, customer experience – just about any and every area of the organisation will be impacted. And as I wrote a few weeks back, the agentic organisation will operate a lot differently to the businesses and institutions that we know today.

AI is a true general purpose technology (pervasive, broad impact across economic and social structures, different industries, and sectors). Within that, AI agents are a true enabling or platform innovation in that they will facilitate a cascade of other innovations (simple example – the development of GPS technology enabled the creation of ride-sharing apps, location-based services, and a lot more). They create a platform upon which other innovations (in operations, processes, HR, customer experience) can be built.

The emergence of highly autonomous AI agents capable of advanced reasoning and self-directed actions will totally change how decisions get made, strategies get executed, and processes get completed. Things are moving fast of course, but it will still take a while for the full impact of agentic AI to be felt, which is why it’s useful to consider how this huge shift to AI agents will emerge – if only so that we can start planning for this change now in a deliberate way rather than allowing ourselves to fall into it. We are at the start of the S-curve with intelligent agents, but we’ve already seen and heard about general purpose agents such as OpenAI’s Operator, Google DeepMind’s Project Mariner, and Anthropic’s computer use, and there’s a bunch of task-oriented specialist agents emerging, so this is already happening.

So what might the progression of AI agents look like? And how are their capabilities likely to emerge and develop? To give us some context around this question I asked Stanford University’s free AI research tool Storm to create a research article for me on the topic. The article it created (fully referenced and drawing from a wide range of studies and sources) emphasised the need for strong governance and ethical consideration, and classified AI’s trajectory into five distinct levels of autonomy (based on an OpenAI classification), ‘ranging from simple chatbots that perform scripted interactions to sophisticated AI systems that can operate as independent entities capable of strategic decision-making’:

  1. Chatbots: basic form of AI, simple interactions, programmed responses
  2. Reasoners: the ability to analyse complex problems, apply logic, and learn from patterns. They can make decisions, but only within predefined parameters
  3. Agents: AI systems that can make decisions and take actions based on both their programming and their learned experience. They may still need human oversight in high-risk scenarios
  4. Innovators: An even higher level of autonomy, here the AI can set its own goals and then determine how to achieve them without direct human input (this is where deliberate and ethical design starts to become extremely important)
  5. Organisations: at the highest level AI systems ‘function as organizations, capable of complex decision-making and strategic planning. They can act autonomously across various domains, making decisions that can significantly impact human lives and societal structures’.

The article also classifies different types of agency, from assistants (performing tasks as instructed by humans), to proxy agency (AI representing human users within a defined range of authority), to independent agency (fully autonomous, acting without human intervention).
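The practical difference between these three types of agency comes down to when a human must sign off before the agent acts. A minimal sketch of that escalation logic, in Python (the class names, the spend-based threshold, and the `delegated_limit` parameter are all illustrative assumptions, not from the article):

```python
from enum import Enum

class Agency(Enum):
    ASSISTANT = 1    # performs tasks as instructed by humans
    PROXY = 2        # represents a human within a defined range of authority
    INDEPENDENT = 3  # fully autonomous, no human intervention required

def requires_human_approval(agency: Agency, action_cost: float,
                            delegated_limit: float = 100.0) -> bool:
    """Decide whether a human must sign off before the agent acts."""
    if agency is Agency.ASSISTANT:
        return True  # every action follows an explicit human instruction
    if agency is Agency.PROXY:
        return action_cost > delegated_limit  # escalate outside its mandate
    return False  # independent agents act without human intervention

# A proxy agent can approve a small spend but escalates a large one.
print(requires_human_approval(Agency.PROXY, 50.0))   # False
print(requires_human_approval(Agency.PROXY, 500.0))  # True
```

The point of the sketch is that ‘proxy agency’ is not a property of the model itself but of the boundary the organisation draws around it – which is exactly where governance design will live.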

Given that context, and a bunch of other research I’ve been doing, here’s my take on a progression model for how agentic AI will evolve, framed around five A’s (because I like a memorable acronym):

  1. Automation:
    1. Description: Early AI agents that perform narrow, rule-based tasks with human oversight.
    2. Capabilities: Task automation using rules-based systems (for example RPA, chatbot-based customer service, AI-assisted scheduling).
    3. Impact: Useful for automating repetitive, low-risk tasks, task specific efficiency gains, cost savings, improved customer experience. But there is limited adaptability, and potentially a need for extensive training and oversight. Here organisations will need to prioritise low-risk processes, build data pipelines, and staff familiarity and knowledge.
  2. Augmentation:
    1. Description: AI agents provide decision support, assisted intelligence and predictive modelling but still require human intervention with more complex scenarios.
    2. Capabilities: AI-driven analytics (for example forecasting, sentiment analysis, HR talent matching), AI copilots for research, marketing, sales, finance, and operations. Context-aware personal assistants integrated into workflows.
    3. Impact: Enhanced decision-making, faster workflows, semi-autonomous processes. Challenges around data integration, trust, and potential friction around human-AI collaboration. Upskilling will need to focus on AI collaboration, and understanding how to get the most out of the tools.
  3. Autonomy:
    1. Description: AI agents independently complete complex tasks with minimal human oversight.
    2. Capabilities: Autonomous customer support and operational management, AI-driven content creation, campaign management, and sales negotiations, real-time risk management, dynamic supply-chain optimisation. We’ll start to see cross-domain integration where AI agents handle multi-domain processes and learn adaptively, and multi-agent collaboration across different business functions.
    3. Impact: Reduced need for human intervention, although humans will still need to set goals and ensure proper controls, AI-driven operations, cost reduction. Significant changes required around compliance, governance, ethical concerns, workforce adaptation, the coordination of vertical and horizontally focused agents. Businesses will need to develop fail-safes, validation techniques and audit trails, and pilot in controlled environments.
  4. AI Ecosystem & orchestration:
    1. Description: AI agents coordinate and manage other AI systems within and across organisations.
    2. Capabilities: Multi-agent ecosystems (AI overseeing other AI systems), self-improving and self-optimising systems, AI-led project management and strategic planning, cross-company AI interaction, dynamic decision-making across broad areas based on real-time business data.
    3. Impact: AI-first business models, automated enterprise management, a requirement for the orchestration of agent ecosystems, interoperability standards and conflict resolution, goal-alignment frameworks, management of security risks and AI accountability, negotiation protocols for multi-agent systems.
  5. Agentic enterprise & AGI:
    1. Description: AI handles enterprise-wide strategy, initially by coordinating advanced narrow AI, until eventually AI agents exhibit near-human or human-level intelligence, capable of strategic reasoning and autonomous innovation.
    2. Capabilities: AI-generated business strategies and innovation pipelines, AI-led organisations with human oversight in critical areas, autonomous negotiations, policymaking, and business governance.
    3. Impact: Fully AI-driven enterprises, radical efficiency, new market structures, challenges around ethical risks, potential existential debates on AI governance and human roles.
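The biggest jump in the progression above is from stage 1 to stage 3: from fixed rules to a goal-driven loop in which the agent chooses its own next action. A toy contrast in Python (the ticket-routing rules, the two-action ‘planner’, and all function names are hypothetical stand-ins – a real stage-3 system would plan with a model rather than hard-coded logic):

```python
def automation_step(ticket: str) -> str:
    """Stage 1 (Automation): fixed rule-based routing, no adaptation."""
    rules = {"refund": "billing", "password": "it_support"}
    for keyword, queue in rules.items():
        if keyword in ticket.lower():
            return queue
    return "human_review"  # anything outside the rules needs a person

def autonomy_loop(goal: str, state: dict, max_steps: int = 5) -> list:
    """Stage 3 (Autonomy): the agent picks actions until the goal is met.

    `goal` is unused in this toy version; a real planner would reason
    over it. Here we hard-code a single sense-then-act path.
    """
    actions_taken = []
    for _ in range(max_steps):
        if state.get("resolved"):
            break
        action = "gather_context" if not state.get("context") else "resolve"
        actions_taken.append(action)
        if action == "gather_context":
            state["context"] = True
        else:
            state["resolved"] = True
    return actions_taken

print(automation_step("I need a refund"))        # billing
print(autonomy_loop("resolve ticket", {}))       # ['gather_context', 'resolve']
```

Note the asymmetry: the stage-1 function fails closed (unknown inputs go to a human), while the stage-3 loop decides for itself – which is why the fail-safes, validation techniques and audit trails mentioned under Autonomy become non-negotiable at that stage.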

Technological complexity and organisational impact scale up as AI agents become more mature.

The five stages may be broadly sequential but there is likely to be significant overlap as one stage blurs into the next. For example, we’re already seeing specific vertical, task-based agents being used alongside horizontal, general purpose agents. Glean is an example of the former – an internal knowledge agent that indexes, categorises and understands all the information that lives inside your company, so you can use it to answer any question related to internal data and knowledge.

ChatGPT, CoPilot, Claude and the like are examples of horizontal, general purpose agents that will be good enough at multiple tasks but perhaps lack the specific domain knowledge or context of a task-based agent. Ema is another horizontal agent, one that can be applied to workflows across different teams and oriented to optimise tasks around a particular persona or a specific vertical use case. This can help join up processes across different teams and make them more oriented to customer outcomes, for example.

There are a lot of big questions that we’ll need to answer along this transition, not least the ethical guardrails, questions around staffing and workforce skills, risk management, and the nature of competitive advantage. If Satya Nadella is to be believed, the whole way in which we interact with and relate to technology will change entirely, with agents interacting directly with data rather than being mediated by a software system with programmed business logic. In the words of Rita Gunther McGrath, as agentic AI matures ‘we’re moving toward a “Star Trek future” where we simply request what we need, and intelligent systems handle the rest’, potentially removing interface friction altogether.

One thing is for sure – we’re in for some huge change.

As always, interested to know what you think.

A version of this post appeared on my weekly Substack of AI and digital trends, and transformation insights. To join our community of over ten thousand subscribers you can sign up to that here.

