
A lot of the focus on the benefits of AI is (perhaps naturally) on efficiencies right now. But one of my favourite ways of using these tools is to challenge my own thinking, to open up new lines of exploration, and to generate new perspectives and ideas that I hadn’t even thought of.
If we are to avoid what has become known as ‘cognitive debt’ (the idea that we grow lazy in our thinking by outsourcing it to AI), we need to systematise approaches and habits that combine AI with human cognition in ways that generate new possibilities, not just make the most of the ones that already exist. For businesses, the competitive advantage of the future won’t only be about who can be the most efficient and productive; it will be about who can innovate and think in creative ways. And so we need to think about AI in that way as well.
But at a personal level, this is also about how AI can help me get to places that I couldn’t otherwise have reached on my own. And I’m finding it hugely beneficial in doing just that. So I thought I’d set out some of the ways in which I try to systematise this. I’m not a fan of ‘magic prompts’, as you probably know, but I have included example prompts here which may help kick-start the process:
Using perspective triangulation
In this Ian Leslie podcast the brilliant Jasmine Sun talks about creating a ‘syllabus’ for the AI to learn from, and it’s definitely a useful technique to carefully curate a collection of documents, studies and information which the GPT can use as source material. For example, you can collate this into the ‘project files’ section of ChatGPT (or Claude) Projects, or into a Google Drive folder and then set up a Gemini Gem which can use that folder for context (more about how I use AI Project spaces in strategy here). These features have persistent memory, which is wonderfully useful for deeper dives into a topic. Once you have this you can surface blind spots by forcing the model to contrast clashing viewpoints. The best way I’ve found to do this is to ask the GPT to set out 3-5 short excerpts that take clearly different positions on your topic (e.g. a status-quo defender is great at surfacing reasons why change may not happen, an optimistic viewpoint helps show what’s possible, a pessimistic one reveals potential barriers). Then ask it to list the key assumptions or questions behind each (ChatGPT visualised this for me below).

Anchoring the model in explicit tensions pushes it to reason across boundaries rather than averaging them.
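If it helps, a starting prompt for this technique might look something like this (the positions and wording here are just illustrative – adapt them to your own topic and source material):

```
Using only the documents in this project as source material, set out
3-5 short excerpts that take clearly different positions on [my topic]
– for example a status-quo defender, an optimist and a pessimist.
For each excerpt, list the key assumptions it rests on and the open
questions it raises. Then contrast the positions directly rather than
averaging them into a single view.
```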
Board of Brains
There’s another technique, shared recently by Lani Assaf at Anthropic, which I also really like: creating a ‘Board of Brains’, or setting up Claude (or other) Projects that can act as independent advisors or challengers, each bringing their own different perspective. To do this you create a new Project (“[Person’s Name] Brain”), upload as much context as possible (e.g. public writing, podcasts), and add some project instructions (“You embody [Person/Role]’s perspective. Review my work through their specific lens. Flag what they’d notice. Suggest what they’d recommend. Be direct.”). This could be respected individuals with an interesting point of view, or experts, or customer personas, but I liked what Lani said about using this technique:
‘What’s really cool about using Claude consistently in this way is the compounding effect. Over time, these other perspectives become interwoven with yours. You’re systematically borrowing someone else’s lens until it becomes part of how you see.’
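As a worked illustration (the persona here is my own example, not from Lani’s original), project instructions for a customer-persona brain might read:

```
You embody the perspective of [a sceptical enterprise buyer of our
product]. Review everything I share through their specific lens.
Flag what they'd notice first, what they'd push back on, and what
they'd recommend instead. Be direct – don't soften the critique.
```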
Role-play debate
Another similar technique is to simulate a live debate between people with different perspectives. Here you give the GPT a cast list and tell it to speak in numbered turns, with each role giving one paragraph on the topic per turn. You can then pause after each cycle to ask follow-up questions, go deeper on specific arguments or add a new ‘guest’. Seeing arguments collide in real time really opens up your mind to new perspectives and lets you notice assumptions you didn’t know you were making.
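A simple set-up prompt for this might look something like the below (the cast and topic are illustrative – swap in your own):

```
You are moderating a debate on [topic]. The cast: 1) a cost-focused
CFO, 2) a customer advocate, 3) an optimistic futurist, 4) a cautious
regulator. Speak in numbered turns, one paragraph per role per turn,
with each role responding to the previous speakers. After each full
cycle, pause and wait for my follow-up before continuing.
```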
Other prompting techniques
I wrote a whole thing a while back about challenging your thinking through prompting, but I’m going to highlight a couple of favourite techniques from there, plus some new ones that I’ve started using. You can, of course, include in your prompts a deliberate instruction to challenge common assumptions, or to avoid typical biases or echo-chamber thinking in the area. You can ask it to play devil’s advocate and force you to justify your thinking, or to flag when you are being vague and ask clarifying questions. You can even set the whole GPT up to always avoid lazy thinking (e.g. in ChatGPT this is under Settings > Personalisation > Custom Instructions). But there are also some more specific and deliberate techniques:
- Norm switching: explore the norms of a completely unrelated context (e.g. a totally different sector) and apply them to your own.
- Constraint reversal: sometimes challenging the model with a ‘ridiculous’ constraint (‘how would you do X with no budget?’) and asking it for five solutions can jolt it out of its patterns and force lateral moves it wouldn’t surface under standard feasibility filters.
- Counterfactual storyboard: this is useful for identifying the potential fragility of existing plans. Ask the model to create a short narrative in which the opposite of a key assumption is true, and then examine how the system adapts or fails. For example: ‘Imagine that instead of expanding, our target market shrinks by 50% in two years. Show the sequence of events month by month and identify three leverage points we could still exploit.’
- Forced analogy engine: here you preload a list of different domains, which could be completely random (principles of acting, conservation practices, jazz improvisation) or examples of businesses that are particularly good at a specific thing (like Apple for product design). Then tell the GPT to map each one to your challenge and extract at least one transferable principle.
- Negative inversion: ask something like ‘list ten ways this initiative could fail spectacularly’, and then invert each item into a preventative or mitigating action, or a new idea. This helps reveal and address potential points of failure, but the inversion can also open up new possibilities.
- Ladder up/ladder down: this is about toggling between the big-picture ‘why’ and the more executional ‘how’. Write your starting challenge as succinctly as possible. Then ladder up from it one abstraction level at a time (‘Why does this matter?’), repeating until the answers start becoming vague. This can help reveal implicit purpose and higher-order goals. Then go back to your original challenge and ladder down (‘What’s the very first observable action?’, then ‘What must happen immediately before that?’), again a couple more times. This helps reveal practical dependencies and hidden blockers. You can even compare the two ladders to see where the strategic intent (top) fails to translate into concrete actions (bottom); that gap is an interesting area to focus on. If it helps, I created a cheat sheet on this with a simple example which you can download here.
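To make one of these concrete, an illustrative prompt for negative inversion might be (swap in your own initiative):

```
List ten ways [this initiative] could fail spectacularly. Then invert
each failure into either a preventative action or a new idea we
haven't yet considered. Present the results as a two-column table:
failure mode on the left, inversion on the right.
```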
Using an AI tool like a search engine means you are potentially missing a huge opportunity for it to open up new lines of exploration and to take your thinking and ideas to places that you couldn’t have got to on your own. I believe that intellectual curiosity will only become more important in the era of AI. It’s about seeking out the right answer, not just the one in front of you. It’s what separates lazy thinking from deeper deliberation, and mediocre outputs from exceptional ones.
A version of this post appeared on my weekly Substack covering AI, digital trends and transformation insights. To join our community of over ten thousand subscribers you can sign up to that here.
To get posts like this delivered straight to your inbox, drop your email into the box below.
Photo by Glen Carrie on Unsplash
