Gigantomania and why big projects go wrong


This week’s news of further delays and cost overruns on the UK’s HS2 high-speed rail project had a distinct air of inevitability about it. Government infrastructure initiatives and large corporate projects that stay on schedule and within budget are vanishingly rare. Oxford academic Bent Flyvbjerg, co-author of the excellent ‘How Big Things Get Done’, compiled a database of 16,000 such initiatives and found that only 8.5% of projects delivered on their original cost and time projections, and only 0.5% delivered on initial cost, time and benefit forecasts. Wow.

Institutions, large corporates and big consultancies often seem to have a preference for what you might call ‘gigantomania’ – centrally-controlled grand projects of significant scale, often supplemented by a desire to use the latest technology. Western observers in the first half of the 20th Century used the phrase to describe Stalin’s predilection for huge-scale industrial and engineering schemes in Soviet Russia. These infrastructure projects (including dams, hydroelectric and irrigation programmes) almost always vastly exceeded their projected time and budget, and led to high accident rates, poor-quality production and severe environmental impacts.

In the early 20th Century Russian engineer Peter Palchinsky was something of an outlier in advocating for a more scientific method within Russian industry. He believed that rather than seeing every problem as a technical one that could be solved using the latest technology, engineers should follow three simple rules for industrial design that would enable greater adaptability whilst mitigating risk:

  1. Variation: Actively seek out and try many different ideas and approaches, rather than committing to a single grand plan from the outset.
  2. Survivability: Experiment with new ideas on a small scale, where potential failures are ‘survivable’ and do not lead to catastrophic consequences for the entire system or population.
  3. Selection: Implement quick and effective feedback loops to learn from both successes and failures. This continuous learning allows for adjustments and improvements as projects progress.

Palchinsky’s principles championed a more iterative, human-centred and adaptive approach to problem-solving and large-scale projects which contrasted sharply with the gigantomania and disregard for human cost that characterised much of early Soviet industrialisation. He emphasised the importance of thorough research, data collection, and realistic assessment before embarking on massive projects. He sought to organise engineers into professional organisations to foster the exchange of ideas and ensure their voices were heard in decision-making. He was against allowing ideological zeal and grandiosity to override sound engineering principles, safety, and economic realities.

Many of today’s organisations can learn a lot from Palchinsky’s thinking. His three principles offer a pretty good set of guidelines for learning fast about the potential value of AI applications, and yet I’m sure we’ll see more than our fair share of AI gigantomania. In many ways Palchinsky’s ideas can be seen as a precursor to modern systems thinking, in that he understood that industrial projects were not isolated technical challenges but complex systems intertwined with social, environmental, and political factors. He looked at the broader context and long-term effects of decisions. And AI should be no different.

Yet if we are going to do big stuff we should do it well. I’m going to finish this post by returning to Bent Flyvbjerg and Dan Gardner’s excellent book and five essential conventions that I’ve taken from it that can help avoid the classic errors that often hinder big initiatives:

Don’t climb the wrong hill: Many projects go wrong before they even begin because the problem isn’t framed well. The book makes a great case for extended front-end planning and taking time to make sure we’re solving the right problem. Go slow to then go fast.

The planning fallacy should be treated as the rule, not the exception: Optimism bias (believing things will go better than they are likely to) and strategic misrepresentation (intentionally underestimating cost to get approval) are endemic in project planning. Use reference class forecasting (analysing data from similar past projects) rather than internal wishful thinking. Study patterns of failure and success in your domain. Your project is probably less novel than you think.
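Reference class forecasting boils down to a simple calculation: gather the overrun ratios (actual cost divided by original budget) of comparable past projects, then read your budget off a conservative percentile of that distribution rather than your own bottom-up estimate. A minimal sketch in Python; the ratios and the P80 threshold here are purely illustrative assumptions, not data from Flyvbjerg’s database:

```python
def reference_class_estimate(overrun_ratios, base_estimate, percentile=0.8):
    """Adjust a base cost estimate using the distribution of historical
    overrun ratios from a reference class of similar projects.

    Uses linear interpolation between ranked ratios to find the
    requested percentile, then scales the base estimate by it.
    """
    ranked = sorted(overrun_ratios)
    pos = percentile * (len(ranked) - 1)       # fractional rank position
    lo = int(pos)
    hi = min(lo + 1, len(ranked) - 1)
    ratio = ranked[lo] + (pos - lo) * (ranked[hi] - ranked[lo])
    return base_estimate * ratio

# Hypothetical overrun ratios for five comparable past projects:
# a project that came in on budget scores 1.0; one that doubled scores 2.0.
past_overruns = [1.0, 1.1, 1.2, 1.5, 2.0]

# A £100m internal estimate, budgeted at the 80th percentile of overruns.
budget = reference_class_estimate(past_overruns, 100.0)
print(f"P80 budget: £{budget:.0f}m")
```

The point of taking a high percentile rather than the mean is that it prices in the long tail of overruns that optimism bias tends to ignore.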

Scale is a risk multiplier: Big things amplify small errors. A 1% mistake in scope, budget, or sequencing can become a major issue at scale, so it can be useful to break projects into modular, repeatable units where possible. Modularity helps to reduce complexity, enhance predictability, and enable faster delivery.

Right people, right incentives: Misaligned incentives (especially among contractors, political sponsors, or consultants) breed dysfunction. Many cost overruns stem from human misalignment, not technical issues. Reward systems should prioritise long-term performance over short-term gains. Build in accountability and monitor alignment continuously.

Make the invisible visible: Small problems buried in complexity often sink big projects. Transparency means visualising interdependencies, making assumptions explicit, tracking progress rigorously. While planning should be considered, delivery should be fast and iterative. Producing something tangible quickly can help build momentum. Success often comes from managing perception, not just execution, and so shaping the story is important. Many project failures are political.

The Sydney Opera House was expected to take four years and cost $7 million. It famously took 14 years and cost $102 million. The design was approved before its technical feasibility was known, leaving builders to solve basic engineering problems mid-construction. The equally ambitious Guggenheim Bilbao, by contrast, opened on time and on budget. The difference was meticulous planning, highly aligned stakeholders, and tight contractual controls with clear incentives.

Successful big projects aren’t just well executed; they’re better designed from the start.

A version of this post appeared on my weekly Substack covering AI, digital trends, and transformation insights. To join our community of over ten thousand subscribers you can sign up to that here.

To get posts like this delivered straight to your inbox, drop your email into the box below.
