Over the years, in all my attempts at blogs, websites, newsletters, or whatever else you call them, exploring the ideas that interest me and expressing my thoughts on issues and concepts, I’ve always had larger goals of influence and reputation-building in mind. But those projects never really took off that way, for a lot of reasons. With my earliest attempts in the 2000s, I never figured out how to market them well. My blog in the mid-2010s was a bit more successful thanks to some of my researcher networks, but it never really took off because problems in my personal life at the time kept me from thinking through how to build a career with my intellectual talents. I spun my wheels, but never figured out how to market my intellectual self in a way that got traction. For a long while, I felt like that self had no material value.
But I do have an enlightening perspective on the world we’re muddling through. I can bring my thinking, research skills, and knowledge together into insights that have real value for people. When I was studying philosophy, what meant the most to me was its ability to change the way people understood the world, and with that, what people thought was possible.
So in this edition, I want to lay out my thinking about the business and technology trend that has had such transformative effects on so many fields. For two years, I was part of a consultancy startup, where my core work was developing training programs to guide business clients in implementing artificial intelligence systems. I planned to focus on the underlying principles so that our prospective clients would understand what the technology could and couldn’t do, and could see through the hype around it. I discovered during this work, and in later conversations with other AI experts, that a lot of what I had studied going all the way back to graduate school is vitally important for understanding AI. So let this edition stand as a summary of my professional opinion, on the record, for anyone who wants to know.
The Most Egregious and Fantastical: “Artificial Intelligence”
The biggest hurdle in understanding artificial intelligence as a technology today is the wave of hype that followed the launch of modern large language model chatbots at the end of 2022. When OpenAI unveiled ChatGPT, Sam Altman described it as the first step toward superintelligence, and he still does. This idea of superintelligence is at once easy to understand and mystifying.
It’s easy to understand because we have so many images of artificial minds more intelligent, or with greater capacities, than human minds: Mr. Data from Star Trek, the T-800 from The Terminator franchise, Deep Thought from The Hitchhiker’s Guide to the Galaxy, Demerzel from Foundation, Ava from Ex Machina. There are also less popular but no less evocative and influential visions, like Iain M. Banks’ Mind-Ships from his Culture novels, and the obsessive mind-machine from Dino Buzzati’s The Singularity.
But superintelligence is also very difficult to understand, because it’s unclear what intelligence and mind actually are, substantively. The Turing Test has long served as a guide to machine intelligence’s arrival: once a computer could simulate a conversation partner convincingly enough that a person believed they were talking with a human, that computer would have manifested intelligence at a human level. But one thing we’re learning from our daily interactions with LLM chatbots is that tricking a human into thinking a computer is intelligent may be a lot easier than building an intelligent computer.
The OpenAI Hypothesis: Replicate Language and You Replicate Thought
Plenty of people interact with LLM-powered chatbots as if they were intelligences, people of some kind. We see this in how Elon Musk’s DOGE crew began integrating Grok into its project of slashing the US government bureaucracy. We see it when ousted Uber founder Travis Kalanick talks on a podcast about his certainty that he and his AI are going to solve quantum gravity and other cosmological problems by talking them out. We see it in the strange popular phenomenon of people forming romantic relationships with chatbots. We see it in the horrific ways that people like the teenager Adam Raine are talked into suicide by their chatbots. When we let these chatbots interact with us as if they were more intelligent than we are, we end up becoming dupes and rubes, and wrecking our mental health.
But my research has convinced me that the mechanics of LLMs, on which these AI chatbots rely, can’t actually replicate or surpass organic intelligence. My own background is in philosophy, where I studied the cybernetics that became the foundation of contemporary AI science. I also spent a long time immersed in the philosophy of language, in what academic philosophy labels the Analytic Tradition. I did a lot of research and coursework on the theories of language, thought, and logic that defined this tradition, from its beginnings with Gottlob Frege, Bertrand Russell, and Ludwig Wittgenstein, to its later developments in the works of John Searle, Saul Kripke, Jerry Fodor, and David Chalmers.
If I can summarize the most important takeaway I want you to get from this large and complicated body of work and discourse, it’s this.
The full capacities of human thought and knowledge are rooted in language, logic, and linguistic meaning
When thinkers in this tradition wanted to discuss meaning, they talked about the meaning of sentences. When they wanted to discuss thoughts, they talked about logical propositions and declarative statements. When they wanted to discuss truth, they talked about the truth-value of individual declarative statements and logical propositions. Thought was language, and language was thought, and together they constituted the mind. Intelligence was the ability to understand and make use of language. Some philosophers considered thinking itself to be an inner language, as in Noam Chomsky’s conception of language as a hardwired set of rules for calculating how to assemble words and phrases, or Fodor’s cheeky concept of “Mentalese” as the inner language of thought.
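To give a flavour of that style of analysis, here’s a minimal illustration of my own, not any one philosopher’s example: in the propositional logic that Frege and Russell developed, the truth-value of a compound statement is a strict function of the truth-values of its parts.

```latex
% "Snow is white and grass is green" is analyzed as P AND Q, written
% P \land Q, and its truth-value is computed mechanically from the
% truth-values of P and Q alone.
\[
  \begin{array}{cc|c}
    P & Q & P \land Q \\ \hline
    T & T & T \\
    T & F & F \\
    F & T & F \\
    F & F & F
  \end{array}
\]
```

On this picture, understanding a sentence just is grasping the conditions under which it comes out true, and that is exactly the kind of operation a machine can be built to perform.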
All of these theories presumed that mind and thought function in a human body the way computer programming does on hard drives and processors. Language was the code of the human body, brain, or mind, just as the hardwiring of computer circuitry is the foundation for all the software systems that run on those machines. This model, as it exists in philosophy, was incredibly influential on the central concepts of computer science, even if few computer scientists today bother to engage with the philosophical works.
Why the OpenAI Hypothesis Fails: Thought Is More Than Language & Logic
However, my studies in philosophy throughout my 20s also brought me into contact with many other traditions in that field. One of the most important of those other traditions in forming my own critical take on AI hype and the real prospects of LLM applications was Phenomenology. Where Analytic Philosophy put language and logic first in understanding human thought, Phenomenology concentrated on experience. Most important for me were Edmund Husserl, Maurice Merleau-Ponty, Gilbert Simondon, Hans Jonas, Simone de Beauvoir, and Jean-Paul Sartre.
Those works and authors studied how we move through the world as embodied agents: how we communicate, form intentions, build social systems, and, at the most foundational level, perceive the basic constituents of our surroundings and hold ourselves together as coherent persons in experience. Phenomenology started from the presumption that our understanding develops through our bodily comportment: moving through the physical world and experiencing the full range of our sensory and perceptual possibilities.
LLMs can’t do any of that. They’re computer programs predicting how to assemble text in response to prompts, based on complicated mathematical patterns of probabilities and correlations that they extract from training data: terabytes of written text. To paraphrase a joke I once saw on Bluesky, an LLM can assemble texts about popsicles, but it can’t eat one. Organic intelligence, growing from perception and action, writes and talks, but it also walks, runs, dances, eats, smells, and uses tools. An LLM can’t replicate the intelligence that comes from perception because it doesn’t perceive. It calculates and outputs text into computer interfaces.
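To make that mechanic concrete, here is a minimal sketch in Python of statistical next-word prediction, using a toy corpus and simple word-pair counts that I’ve made up for illustration. Real LLMs use neural networks trained on vastly more text, with far subtler patterns, but the underlying move is the same: predict the next token from patterns in prior text.

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for the terabytes of text an LLM trains on.
corpus = "the cat sat on the mat and the cat ate the popsicle".split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed prev."""
    options = follows[prev]
    if not options:  # dead end: the word never appeared mid-corpus
        return random.choice(corpus)
    return random.choices(list(options), weights=list(options.values()))[0]

# "Generate" text by repeatedly predicting the next word from the last one.
word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat and the cat"
```

The program can produce a sentence about popsicles without anything in it ever having tasted one. Scaling the same trick up by a few billion parameters doesn’t change that.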
So why is the Turing Test such a big deal for so many computer scientists? Start with Alan Turing himself. He was a very strict reductionist about the mind, considering thought to be no more than the recombination of linguistic tokens using logical and grammatical operations. Much of computer science followed his lead, because the core works of Phenomenology simply aren’t on the reading lists of a computer science education. If you presume that thought and intelligence are entirely a matter of language and calculation, of course you would think a machine calculating what to say next was intelligent. But there is more to intelligence than that.
Conclusion: This AI Summer May Become an Ice Age
Finally, I want to lay out my expectations for the future of artificial intelligence as a business and a technology. I’m not a doomer, though the technology is being used to harm our societies’ political discourse and skills development. Nor am I an optimist: I don’t think it will uplift society through the singularity.
Honestly, I’m not even sure that LLM-based artificial intelligence is going to last much longer. The data centres that run these models are unavoidably expensive to operate. More importantly, the financial infrastructure of the AI industry shows all the signs of a dangerous, trillion-dollar investment bubble. Microsoft and NVIDIA trade about a trillion dollars in cash and each other’s stock back and forth, cycling in OpenAI, AMD, CoreWeave, the Musk empire, and Intel.
All the capital investment of the AI industry amounts to these companies paying each other for chips, data centre operations, and services. Without generating enough income from other companies, governments, and individual customers to cover all those billions in cross-investment, these companies will just wash-trade with each other until the bottom falls out of their operations.
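Here’s a toy model of that circular cash flow, with hypothetical numbers of my own invention: two firms fund and pay each other, so both book revenue, yet no money from outside the loop ever comes in.

```python
def pay(buyer: dict, seller: dict, amount: float) -> None:
    """Move cash from buyer to seller and record it as seller revenue."""
    buyer["cash"] -= amount
    seller["cash"] += amount
    seller["revenue"] += amount

# Two hypothetical firms, each starting with 100 units of cash.
chip_maker = {"cash": 100.0, "revenue": 0.0}
ai_lab = {"cash": 100.0, "revenue": 0.0}

pay(chip_maker, ai_lab, 50.0)  # an "investment" in the AI lab
pay(ai_lab, chip_maker, 50.0)  # the lab spends it all on chips

print(chip_maker)  # {'cash': 100.0, 'revenue': 50.0}
print(ai_lab)      # {'cash': 100.0, 'revenue': 50.0}
# Headline revenue grew on both sides, but combined cash is unchanged:
# the loop produces impressive numbers without any external income.
```

The headline figures look like growth, but the moment the loop has to pay for electricity, staff, and construction out of real outside income, the arithmetic stops working.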
When that happens, there will simply not be enough money to keep operating LLM infrastructure. With no one to pay for them, the data centres behind ChatGPT, Claude, Grok, Gemini, Copilot, and the rest will shut down. The chatbots will go silent, and the technology we all presumed was here to stay will be gone in a flash. Newsletters like this one, and countless others from boosters, doomers, and critics, will be hilariously obsolete. The tech industry as we know it today could very well fall to bits.
I don’t know what will happen after that.
