
Thinking with AI

2025-04-28 Tommy Jepsen ✌️

I've always found Andy Clark and David Chalmers' concept of "The Extended Mind" incredibly fascinating, not least because it feels so relevant to how I navigate the world myself. The idea describes how we naturally integrate external tools — from simple notebooks to advanced computers — directly into our cognitive processes, effectively extending our minds beyond our biological brains.

For years, the internet, with its search engines and knowledge bases like Wikipedia, served as a powerful example of this extension, dramatically expanding our access to information. But now, we're facing large language models (LLMs) like ChatGPT. These introduce a far more interactive, and potentially more intrusive, form of cognitive extension – one that has the potential to fundamentally change the very nature of our thinking.

Before LLMs

With previous digital tools, such as websites and search engines, the user was the primary cognitive driver. We evaluated information based on many signals – domain, visual layout, linguistic correctness – beyond the text itself. The crucial mental synthesis, which created knowledge from these many inputs, was our own. The tool delivered information unaffected by our previous interactions or preferences, so the analysis remained firmly anchored in our own broader assessment.

LLMs function differently. They are not passive repositories of information, but active generators of language, ideas, and structure. When we interact with them, we engage in a form of dialogue, asking them to create, explain, summarize, or rephrase. This shifts the cognitive balance. Less time might be spent on initial information gathering, and more on formulating precise instructions (prompts) and critically evaluating the output the AI delivers. Thinking becomes an iterative process, a collaboration to shape and refine an externally generated product.

Challenges

However, this very generative nature introduces new, subtle challenges for our thinking, beyond mere factual errors. LLMs are trained on enormous amounts of text data from the internet and other sources, and they inevitably absorb the biases present in this data – whether cultural, social, political, or otherwise. When the AI generates text, these biases can be expressed, often in ways that are difficult to discern. A text might appear neutral, yet still reflect a specific viewpoint or reinforce existing stereotypes.

For the user's thinking, this means critical evaluation must go deeper than just fact-checking. We must now also actively consider: What underlying assumptions or biases might lie behind this formulation? Is the information presented in a fair and balanced way? If we uncritically accept the AI's output, we risk not only basing our reasoning on incorrect facts but also internalizing and perpetuating the biases the model has inherited from its training data. Our thinking can be unconsciously colored by the model's embedded worldview.

Furthermore, the way LLMs formulate information has a powerful impact. Their ability to produce fluent, well-structured, and often convincing text can affect our own perception and argumentation. We can be seduced by a well-written, but perhaps one-sided or simplistic, presentation. Our own capacity for nuanced thinking and independent formulation may be challenged if we become accustomed to outsourcing this part of the cognitive process. It requires a conscious effort to distinguish between a convincing formulation and a solid argument.

Finally, the interaction itself – the way we prompt – helps shape the output. Our own questions and instructions, which may be colored by our own expectations or biases, guide the AI. This can create a form of echo chamber, where the LLM primarily delivers what it "thinks" we want to hear, rather than challenging our assumptions. Our thinking risks being confirmed rather than expanded or corrected if we do not actively seek out conflicting perspectives, including in our dialogue with the AI.

Thinking with LLMs

Thinking with LLMs offers enormous possibilities for efficiency and creativity. But it requires a new form of cognitive vigilance. Our extended mind becomes more powerful, but we must actively develop a heightened critical sense towards bias, the power of formulations, and the persistent risk of misleading information, even when the AI seems to have access to "facts." Navigating this landscape requires us to constantly evaluate not only what the AI says, but also how and why.

Hello.

My name is Tommy. I'm a product designer and developer from Copenhagen, Denmark.

Connect with me on LinkedIn ✌️