I felt a little strange writing this over the last few days as violence escalated in Minneapolis and the United States hit this dark period in its civil conflict and state repression. But I also knew that there wasn’t much I could do other than boost the issue through my socials and send good vibes. Here in Canada, we’ve managed to escape the worst of the authoritarian movement so far. But tech giants pushing for authoritarian government is a big part of that crackdown. So my critical comments about the flagship product of our era’s Silicon Valley serve as something of a point of resistance, and an alternative, as well.
• • •
This is the first newsletter I’ve written that began as a TikTok post. My TikTok never used to be linked to my professional presence all that much. Mostly, it was me reading from and talking about books on my shelves or that I was reading at the moment. But I’m thinking of reformulating it as a video version of a lot of the content I put out on the newsletter, at least in part.
Honestly, I haven’t had a lot of success over the years marketing myself on social media. I’m not particularly viral or a clout chaser. I’m not really comfortable with the tone or methods most people use to create a lot of engagement and viral posts. I lean into thoughtfulness by nature, not reaction. I’m just me, and I hope that people find my insights and ideas valuable to them. It’s been a couple of years since I’ve really felt in control of my career in terms of which opportunities are open to me, and how I can develop my professional life. Here’s another instalment of those professional thoughts, and maybe you can learn something from them.
• • •
For the last few years, I’ve been developing different ideas in research and business around artificial intelligence. The catalyst, for me as for most of the world caught off guard by it, was the explosive launch of OpenAI’s ChatGPT and the many other generative AI chatbots that followed. My own intellectual research, stretching back about 20 years, actually gave me remarkable preparation to understand this hype wave and navigate it without falling for its false promises.
With this newsletter, I want to discuss some ideas that I’ve been digging through recently as I try to reposition my professional life in our new business environment. In particular, I’ve been digging back into some philosophical classics to find new ideas about how to understand how and why AI and LLMs are limited in what they can achieve, and figure out how to apply that knowledge to business development, organizations’ operations, and my own career path.
Let’s start by setting the scene a little, and then look at how my own recent research complements the critical ideas already floating around the world of the AI bubble.
A Clear Intellectual Heritage That Business Leaders Ignore
Philosophy as a research discipline can be incredibly useful to help navigate all the hype and complicated messaging around AI. This very insightful article at Noema by Anil Seth is a great example. Its overall argument cuts through a lot of the confusion that AI companies sow to encourage investment and adoption, by distinguishing concepts like intelligence and consciousness that the industry’s corporate messaging also blurs together.
That being said, Seth’s piece could use some more nuanced sources on the importance of consciousness and experience to human existence. The only figure he explicitly draws on is Thomas Nagel, whose hugely influential 1974 article “What Is It Like to Be a Bat?” was about that very thing. The problem is the clunkiness of Nagel’s language, which prevents his thinking from digging into the real nuance of what consciousness and experience are.
Nagel’s argument revolved around his claim that “there must be something that it is like” to experience the world as a consciousness. So in Nagel’s wake, many philosophers started using the phrase “what-it-is-like-ness” to describe the nature, character, and flavour of consciousness in intelligence. Seth’s piece, which I strongly encourage you to read after this newsletter, digs into a strong critique of computational functionalism: the conception of the mind as the brain’s software.
It’s an analogy that became especially powerful in the wake of Alan Turing’s major papers and the impact of the cybernetics research scene. But evolved organisms can’t sustain a strong separation of mind-software from the brain-hardware on which it runs, because such a separation is ridiculously energy-inefficient. The more we learn about the brain and nervous system, the more we understand how inseparable their functions are from the body.
I studied all this in my early years of graduate school, writing a master’s thesis on the computational theory of mind and its associated functionalism. But I also went beyond that disciplinary boundary. The problems Anil Seth is talking about in that Noema article in 2026 are ones I saw when I was first studying the field in 2005. And where he refers to philosophers of biology like Francisco Varela and the phenomenological tradition, the thesis I wrote when I was 23 explains how research from those fields either solves or dissolves the intractable problems of computational functionalism. So these aren’t new ideas.
Matter, Memory, and Mind
So I dug into another source of old ideas, from long before the world we live in could even really be imagined. That was Henri Bergson, particularly his 1896 book Matter & Memory. This isn’t going to be a full lesson on the history of Bergson. There are a good few of those out there already, particularly my old grad school supervisor Barry Allen’s book, Living in Time. Instead, I’m going to walk through the ideas I could find in Matter & Memory that are most relevant to understanding the ambitions of the men building artificial intelligence and the limitations of those systems and their infrastructure.
First, there’s an insight about the limits of language. OpenAI started talking about ChatGPT showing signs of human-like intelligence from the moment of their public launch at the end of 2022. A lot of the hype around AI jumps from a common (and mistaken) interpretation of the Turing Test: that once a machine can converse with a human in such a way that we could mistake it for a human, it’s intelligent. It’s a way of thinking about intelligence that’s very language-forward.
But Bergson makes a lot of very clear, sensible arguments that language is only a limited, partial sliver of what thought and intelligence can be. The full body of human thinking consists not only of propositions and language, but also of imagination, action, and the foresight to plan, predict, and prepare for action. An LLM deals only with the data and relationships among words in language, analyzing semantic data and responding to linguistic prompts.
That’s important for Bergson’s thinking about intelligence and thought, but what matters even more than this is memory. We often think of memory as being only about our past: remembering what has gone. However, memory is more than this. When Bergson talks about memory, at a basic level, he’s talking about recollections and habits. Specific things that we bring to mind that we’ve learned or experienced before, and the actions and ways of doing things that we’ve trained ourselves to do with deftness and skill.
But that isn’t just about remembering. Memory always informs our action in the present. We recollect the past to understand continuities and differences with the present. We use our habituated skills to act right now. And because our present action is always oriented toward a future we’re trying to influence with our actions, memory also informs our push into the future.
Machines With No Past Or Future
Whether we can say that an LLM has memory in anything like a human form is a messy question. In many ways, it can’t. Conceived as the habits and recollections of a life lived in a world, it certainly doesn’t. An LLM AI like Claude, ChatGPT, or Grok is a computer program whose computations are distributed across an enormous network of data centres. Google Gemini has no eyes to open, and if it says that it does, that’s a positive response to your prompt, not a sign of actual experience. What’s more, because an LLM stores no record of its own past inside itself, it’s literally impossible for it to have recollections. There’s nothing in its systems to recollect. So one aspect of this thick conception of human memory simply doesn’t apply.
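You can see this statelessness right at the level of the programming interface. Here’s a minimal sketch, assuming OpenAI’s official Python client (the model name and variable names are my own, purely for illustration): the model retains nothing between calls, so any “memory” of the conversation has to be kept by the caller and resent with every prompt.

```python
from openai import OpenAI  # assumes the official openai package, with OPENAI_API_KEY set in the environment

client = OpenAI()
history = []  # the only "memory" of the conversation lives here, on the caller's side

def ask(user_text: str) -> str:
    # The model retains nothing between requests, so the whole conversation
    # so far has to be resent along with every new prompt.
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("What did I just say to you?"))  # without the history list, it could never answer
```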
You could, though, consider an LLM as having habits. It’s quite literally a creature built on habits, since its responses are the result of statistically analyzing huge troves of human communication records as its training data. So your prompts activate habits that the algorithmic software has developed by replicating the patterns that emerge from that analysis. But those aren’t the kind of habits organisms form as they practice activities in the world. An LLM’s habits are responses to prompts, developed through statistical analysis of a huge body of information rather than through living practice.
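To make that contrast concrete, here’s a toy sketch of my own devising. It is nowhere near how a transformer-based LLM actually works inside, but it shows the general shape of the point: the “habits” are statistics fixed at training time, and prompting the model afterward only replays them, never revises them.

```python
import random
from collections import defaultdict

# Toy "language model": count which word follows which in a fixed corpus.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Training: accumulate follower counts for each word, once, up front.
follower_counts = defaultdict(lambda: defaultdict(int))
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def generate(prompt_word: str, length: int = 5) -> list[str]:
    """Generate by replaying the trained statistics; nothing here updates the counts."""
    output = [prompt_word]
    word = prompt_word
    for _ in range(length):
        followers = follower_counts.get(word)
        if not followers:
            break
        words, weights = zip(*followers.items())
        word = random.choices(words, weights=weights)[0]
        output.append(word)
    return output

print(generate("the"))
# However many times you "prompt" it, the counts never change: the model
# cannot form new habits through practice the way an organism does.
```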
The practical orientation that Bergson identifies as central to organic intelligence just isn’t part of LLM architecture. Memory is the accumulation of your activity across your whole history, all of it oriented toward action. What can we do now, given all that we’ve acquired and developed throughout our past, as we stride into the future?
An LLM, or any other software system, can never orient itself in such a way. It’s a computing system that responds to input with output. Silicon Valley culture defines intelligence as processing inputs to create outputs, interpreting and responding to information. There is no dynamic of action, and action is at the heart of human and organic intelligence.