Action, instability and safe to fail

I enjoyed reading Doug Garnett’s thoughts on action and instability (courtesy JP Castlin). The premise of Doug’s piece is that, in his words, ‘certain actions in business are so unstable that even tiny errors do tremendous harm’:

‘Businesses tend to think of stability and instability in broad terms, summarized in questions like “is my business stable?”. The question may be important, but it ignores the ways businesses actually encounter instability – with specific actions.’

Doug uses a fantastic example of this – the disastrous introduction of contactless credit card payments in the US. When the technology was first brought in, payment terminals in the US told shoppers to ‘Tap to pay’. People took the instruction literally and tapped the edge of the card against the terminal, which of course didn’t work: for the connection to be made, the card actually needs to be laid flat on the terminal until the beep sounds. This simple choice of words may have seemed innocuous at the time, but it amplified an error across thousands, maybe millions, of transactions across the country, leaving a lot of frustrated shoppers. Unsurprisingly perhaps, contactless card payments did not take off as a result.

Doug illustrates this principle of small errors cascading into much larger problems with a simple diagram. With the concave shape on the left, the ball rests naturally in the centre – a force may cause it to move from side to side, but it will soon return to this central position. With the convex shape on the right, however, the ball is extremely unstable: the tiniest force will cause it to roll off, never to be seen again.

Image source

Doug’s point is that stability is often considered in the context of the wider business rather than the potential for smaller actions to cascade into much larger consequences. Governance processes and committees set up to ensure stability rarely consider the potential impact of individual decisions or actions (as was the case with the US contactless payment example). The use of heuristics in decision-making, he says, can lead to short-hand ways of reaching a conclusion which, whilst useful, can be problematic.

He also makes a good, and somewhat counterintuitive, point that instability is not entirely a bad thing: ‘Complexity shows that ecosystems and companies only become healthy because they encounter situations that are at least somewhat unstable.’ In other words, it is through dealing with instability that we learn. Similarly, I’ve written before about the idea of the ‘antifragile organisation’ (drawing on Nassim Nicholas Taleb’s book) and the difference between robust systems which can resist known threats, resilient systems which can absorb or react to disturbances by re-establishing themselves as close as possible to their original state, and antifragile systems that actively improve as a result of stresses and failures. Antifragile organisations, says Taleb, value continuous learning and view mistakes as opportunities for growth.

Doug Garnett visualises the course of a project (or, I guess, a strategy deployment or any course of action) as a ‘stability terrain’ with a series of peaks and valleys as time passes. The valleys are periods of high stability (where errors have a lesser effect, but the potential for outsized returns is also smaller); the peaks are periods of high instability (where the impact of errors could be much greater). Various factors along the way may contribute to greater stability or instability, not least the ability of the team to respond to challenges, learn about solutions and build experience of different scenarios.

Image source

These peaks and troughs constitute the terrain which a team will need to navigate. But Doug also makes the point that since companies and teams spend most of their time in relatively stable environments and scenarios, they typically have weak decision-making skills when facing potentially unstable situations or contexts (notably in judging the level of instability that individual decisions may introduce).

It’s a really interesting lens through which to view organisational contexts and strategies, but as I was reading it I was also thinking about corporate experimentation and how important safe-to-fail testing is. Having clarity about exactly what safe-to-fail experimentation means within a business (what can change, what can’t change, areas to focus on, areas to avoid) helps unlock greater levels of trying things out. It also gives the organisation some protection against consequences of that experimentation which might otherwise cascade into serious problems.

Photo by Markus Spiske on Unsplash

I write a weekly Substack of digital trends, transformation insights and quirkiness. To join our community of thousands of subscribers you can sign up to that here.

To get posts like this delivered straight to your inbox, drop your email into the box below.
