Moravec’s Paradox

I came across Moravec’s Paradox via a talk by Stephen Fry titled ‘AI: A Means to an End or a Means to Our End?’, delivered as the inaugural ‘Living Well With Technology’ lecture for King’s College London’s Digital Futures Institute. The concept was first articulated by AI researcher Hans Moravec in the 1980s. It highlights a counterintuitive aspect of AI development: tasks that humans find cognitively challenging (like playing chess or solving mathematical problems) are relatively easy for AI to replicate, while tasks that humans do effortlessly (walking, recognising faces, or navigating through space) are extremely difficult for AI to master.

Moravec posited that this paradox is rooted in the evolutionary history of human abilities. The ‘hard’ things we train computers to do rely on deliberate reasoning and problem-solving, cognitive functions that humans have developed relatively recently (over the last 100,000 years). The ‘easy’ capabilities we exercise without conscious thought were honed by evolution and natural selection over millions of years. It’s comparatively easy to codify high-level reasoning, but far harder to do the same for sensorimotor and unconscious tasks.

Fry quotes Donald Knuth: ‘AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do without thinking …’. Or as this article puts it: ‘the skills that humans have acquired recently in their history are easier to teach computers, but our skills get harder to teach as they go further back in the evolutionary history of humans and animals’.

Understanding Moravec’s paradox helps us recognise the limits of AI and the challenges in achieving AGI (Artificial General Intelligence). The interesting thing about the recent announcement of OpenAI o1 is that this new set of models has been deliberately designed to spend more time thinking and reasoning before responding: ‘Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes’.

That’s why it’s another (not insignificant) step towards AGI. Small wonder they decided to ‘reset the counter’ back to one and call this series OpenAI o1.





