
I came across Moravec’s Paradox via a talk Stephen Fry gave, titled ‘AI: A Means to an End or a Means to Our End?’, delivered as the inaugural ‘Living Well With Technology’ lecture for King’s College London’s Digital Futures Institute. The concept was first articulated by AI researcher Hans Moravec in the 1980s. It highlights a counterintuitive aspect of AI development: tasks that humans find cognitively challenging (like playing chess or solving mathematical problems) are relatively easy for AI to replicate, while tasks that humans do effortlessly (walking, recognizing faces, or navigating through space) are extremely difficult for AI to master.
Moravec posited that this paradox is rooted in the evolutionary history of human abilities. The ‘hard’ things we train computers to do rely on deliberate reasoning and problem-solving, cognitive functions that humans developed relatively recently (over the last 100,000 years). The ‘easy’ capabilities we perform without conscious thought are the product of evolution and natural selection, honed over millions of years. It’s comparatively easy to codify high-level reasoning, but far harder to do the same for sensorimotor or unconscious tasks.
Fry quotes Donald Knuth: ‘AI has by now succeeded in doing essentially everything that requires “thinking” but has failed to do most of what people and animals do without thinking …’. Or as this article puts it: ‘the skills that humans have acquired recently in their history are easier to teach computers, but our skills get harder to teach as they go further back in the evolutionary history of humans and animals’.
Understanding Moravec’s paradox helps us recognize the limits of AI and the challenges in achieving AGI (Artificial General Intelligence). The interesting thing about the recent announcement of OpenAI o1 is that the new models have been deliberately designed to spend more time thinking and reasoning before they respond: ‘Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes’.
That’s why it’s another (not insignificant) step towards AGI. Small wonder they decided to ‘reset the counter’ back to one and call this series OpenAI o1.
I write a weekly Substack of digital trends, transformation insights and quirkiness. To join our community of thousands of subscribers you can sign up to that here.
Photo by Vlad Sargu on Unsplash
