Business Critical Thinking

On the risks of blindly following technology in the age of AI

In the summer of 2009, 28-year-old nurse Alicia Sanchez was driving through Death Valley National Park with her six-year-old son Carlos when her GPS directed her onto an unmarked road. She followed it for 20 miles, continuing even as the road gradually deteriorated from tarmac to gravel and then to sand. Eventually her Jeep became hopelessly stuck and she was stranded in the middle of nowhere, in one of the most inhospitable environments on earth. A week later a park ranger found the vehicle, buried to its axles in the desert. On the windshield, Alicia had spelled out an SOS in medical tape. She had survived the ordeal but Carlos, tragically, had not.

This heartbreaking story is just one example of what rangers in Death Valley began calling ‘Death by GPS’, reflecting how often they found themselves rescuing park visitors who had followed their GPS devices into peril. But the phenomenon was not restricted to Death Valley. There was the couple who drove past concrete barriers, orange barrels and ‘road closed’ signs before driving off the edge of an unfinished bridge in Indiana. Another couple whose Waze app took them into a notorious district of Rio de Janeiro, where they were ambushed by gang members. A man whose GPS took him and his car down a steep, narrow path in the Pennines until he was left teetering on a cliff edge. And my personal favourite, the Swedish tourists who meant to visit the island of Capri but typed ‘Carpi’ into their GPS and drove 400 miles off course to the northern Italian town of that name.

The problem with all of these examples wasn’t that the GPS devices were wrong, but almost that they were too right: mapping a direct path to a desired destination without taking account of the state of the road, unmapped hazards, or an exceptionally challenging environment. The phenomenon also reflects some very human traits, like our tendency to continue with a course of action even when it’s no longer the best option (plan continuation bias, or what pilots call ‘get-there-itis’). And it reflects how often we blindly follow technology without due consideration (the idea of technocentrism, or as Michael Scott of The Office puts it: ‘The machine knows!’).

With the increasing ubiquity of AI, this is a growing risk. Think a little deeper about what is often happening when we work with AI and you realise that the machine is not just giving us answers; it is reshaping what we pay attention to. When AI handles summarisation, assimilation or analysis, for example, it is determining what gets foregrounded and what disappears into the background. The danger is that we end up making decisions based on a filtered version of reality without ever knowing what was filtered out. If a GPS leads us into a dicey situation, at least we have a chance to notice the environment changing outside the window. If an AI leads us to think about something in a certain way, we have no way of sensing what we’re not seeing.

Technology can also act to narrow our situational awareness. A study by Toru Ishikawa at the University of Tokyo found that people who used GPS to complete a navigation task had poorer recall of their surroundings along the route they had just navigated. Blindly following what the technology tells us can not only lead us into unforeseen situations, it can also reduce our awareness and recall of the landmarks that help us remember where we’re heading, and our feel for the terrain we’re navigating. The more the tool does the thinking for us, the less we understand about where we actually are.

Now imagine this happening at scale, across an organisation where every employee is using AI, and you can see why a focus on critical thinking is so badly needed. Organisations adopting AI without thought for critical thinking aren’t just delegating tasks to machines; they are delegating the question of what matters, and potentially creating a new kind of blindness that’s less conspicuous than following bad directions, because we don’t notice what we’ve stopped noticing.

The Gaussian Copula

My last thought here is about risk. Complex, technologically-driven decision-making mechanisms may work beautifully for extended periods of time whilst simultaneously making us blind to growing risk. The Gaussian copula is a great example of this: a mathematical formula that became the backbone of the global financial system in the years leading up to the 2008 financial crash. It seemed to offer an elegant way to measure the risk of complex financial products, particularly mortgage-backed securities and the even more complex derivatives built on top of them.

The core problem that the formula attempted to solve was correlation. If one mortgage defaults, how likely is it that others will default too? When thousands of mortgages are being bundled together and sold as a single product, this is an important thing to understand. The Gaussian copula gave investors a neat way to seemingly manage risk. Rather than trying to model all the messy, interconnected factors that might cause mortgages to fail together, the formula’s creator, David X. Li, used historical data on credit default swaps to infer correlation patterns. The past behaviour of markets became a proxy for understanding future risk.
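To make the correlation point concrete, here is a minimal sketch (purely illustrative, not Li’s actual pricing model) of a one-factor Gaussian copula, the standard simplification of this approach. Each loan defaults when a latent normal variable falls below a threshold, and loans are tied together through a shared ‘market’ factor. The function name, pool size and numbers are all invented for illustration; the point is simply that the same 5% per-loan default rate can imply anything from near-zero to very real risk of mass default, depending entirely on the assumed correlation.

```python
import math
import random
from statistics import NormalDist

# Illustrative one-factor Gaussian copula: each loan's latent variable
# mixes a shared market factor with idiosyncratic noise. A loan defaults
# when its latent variable falls below a fixed threshold.

def simulate_joint_defaults(n_loans=100, p_default=0.05, rho=0.3,
                            n_trials=20_000, seed=42):
    """Fraction of simulated scenarios in which at least half the pool
    defaults together -- a crude measure of catastrophic 'tail' events."""
    rng = random.Random(seed)
    threshold = NormalDist().inv_cdf(p_default)  # per-loan default threshold
    a = math.sqrt(rho)        # loading on the shared market factor
    b = math.sqrt(1 - rho)    # loading on the idiosyncratic factor
    tail_events = 0
    for _ in range(n_trials):
        m = rng.gauss(0, 1)   # one draw of the shared factor per scenario
        defaults = sum(
            1 for _ in range(n_loans)
            if a * m + b * rng.gauss(0, 1) < threshold
        )
        if defaults >= n_loans // 2:
            tail_events += 1
    return tail_events / n_trials

# Identical 5% marginal default rates, wildly different joint behaviour:
for rho in (0.1, 0.3, 0.6):
    print(f"rho={rho}: estimated chance of mass default = "
          f"{simulate_joint_defaults(rho=rho):.4f}")
```

The historical-data step in Li’s approach effectively amounts to choosing that correlation parameter, which is why the single assumption carried so much weight: a correlation calibrated on benign years says very little about a stressed one.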

And for a good while it actually worked really well. The formula was elegant and produced clear outputs that could be plugged into trading models and risk assessments. Banks used it to justify holding huge quantities of these products. Rating agencies used it to bestow mortgage-backed securities with AAA ratings. And the entire financial system came to rely on it.

Like the GPS examples, the problem wasn’t that the formula itself was wrong; it was that it couldn’t see what it couldn’t see. It treated correlation as relatively stable because historical data suggested that it was. But the historical data didn’t include a scenario in which housing prices fell nationally whilst interest rates rose and lending standards had been systematically eroded. The formula worked precisely as it was designed to, but it was blind to the huge ‘fat-tail’ risk that actually materialised, and all the while the system built on it was quietly accumulating systemic fragility.

This is exactly the kind of intrinsic blindness that we risk with AI. John Culkin, writing about Marshall McLuhan, famously observed that ‘we shape our tools, and thereafter our tools shape us’. We have a clear opportunity to get this right from the start, but the choice is ours. Critical thinking has become business critical.

A version of this post appeared on my weekly Substack covering AI, digital trends and transformation insights. To join our community of over thirteen thousand subscribers, you can sign up to that here.

To get posts like this delivered straight to your inbox, drop your email into the box below.

Photo by ben ali on Unsplash
