
From bylo.ai. Prompt: “Two snakes are eating each other's tails, laid out in an ouroboros shape. One snake has the head of Sam Altman, and the other snake has the head of Larry Ellison.” I don’t think this gets to quite what I wanted.
Today, I want to reflect on some of what I’ve learned over the last few years of developing consultancy projects and pitches around training businesses, organizations, and employees in the best uses of AI products. My work in this corner of the business world began just under three years ago, when a media project I started with a colleague began growing into a consultancy. Though I now work on these projects on my own, this newsletter being one place to promote them, I’ve built up real professional experience in training people to use generative AI systems.
This includes some contract work for the federal government of Canada most recently, which I will say not to brag, but also to brag a little.
But this post is about research I’ve been doing lately that probes deeper into issues surrounding the AI industry than simply training on the tools. This post by the journalist and economics analyst Joey Politano makes one core problem of the industry very clear.

Politano is not the first to identify the problem of the AI investment bubble, and he sure won’t be the last.
By now, there’s a clear consensus among anyone observing the computer tech industry that there is a massive, dangerous investment bubble here. The same handful of companies are paying each other tens of billions of dollars to build, operate, and use data centres and AI platforms. There isn’t nearly enough money from corporate and consumer customers coming into this crazily expensive system to sustain the circle jerk. I wouldn’t be surprised if we see it tumble to bits before the end of Trump’s second term.*
So what can we do as users, as customers, as people this technology affects, to push for better employment of AI once this elite money circus falls to pieces? For that, I think ordinary people, businesspeople, professionals at all ranks of society need to talk with each other and build a common popular agenda for what that better use of AI could be.
Mass Adoption Built Real Knowledge
A major advantage that we have in 2026 that didn’t exist three years ago is that the scales have fallen from our eyes. You may not remember three years ago, as it feels like three thousand years. But when OpenAI launched ChatGPT, the hype surrounding it was that this was the first stage of creating genuine machine intelligence with powers equivalent to a human’s. Not only has that not happened, but most users understand that whatever technology may one day create Mr Data, it’s not the LLM.
The only people who do still seem to believe this are folks with genuinely severe mental health issues, bored billionaires who’ve cashed out of industry and have nothing to occupy their time, and easily-led young dipshits who’ve fallen into an insane billionaire’s fascist cult. The issues related to those I’d like to save for another time, because I want this post to lay out what I think are the positive ideas for how we could run AI systems better than the jerks who are currently in charge.
Because the ordinary common sense of most people has, without our even really knowing it, pushed back against the AI hype machine. We all followed Silicon Valley’s leaders lockstep into the future and built AI into our workplaces and everyday lives. We quickly discovered, despite occasional Eliza Effects, that its best purpose was producing work we didn’t want to do ourselves, or didn’t have the cash (or the desire) to pay someone else to do. It’s why the general term for generative AI is no longer “artificial intelligences.” Instead, we call them chatbots. That’s all.
The open question for most of us intelligent people who use AI tools is now, Can they do anything better than this? Can they be anything better than a chatbot?
What Powers the Software We Call AI
The foundational research of what became modern AI technology includes a programming discipline called “expert systems.” This was about developing software that could create and adjust its own macros, its own automatic actions, to assist with and take over repetitive tasks that humans find boring or mentally painful. Sounds pretty familiar.
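A classic expert system was, at its core, a pile of hand-written if-then rules plus an inference engine that chains them together. Here’s a minimal sketch of that forward-chaining idea; the rules and facts are my own invented toy examples, not from any real system:

```python
# Toy forward-chaining inference engine. Each rule says:
# "if all of these facts hold, conclude this new fact."
# Real expert systems of the 1970s-80s had thousands of such rules.
rules = [
    ({"invoice_received", "amount_under_limit"}, "auto_approve"),
    ({"auto_approve"}, "send_payment"),
]

def infer(initial_facts):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"invoice_received", "amount_under_limit"}))
```

Feed it the starting facts and it derives `auto_approve`, then chains that into `send_payment`: a macro for a boring clerical judgment, written entirely by hand.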
So what kind of boring stuff does generative AI do automatically to keep our minds from getting numb? You enter a description of something you want into a chatbot window, using clear but ordinary language. It then makes that thing. You edit the thing yourself, or with additional prompts to refine the output, so that what you’ve made is closer to what you envisioned. The most serious innovation in this technology was getting the computer producing that output to take ordinary language, not specialized code or graphical interface actions, as its input.
How computers came to understand ordinary human language was through a process called machine learning. We know what that is by now: statistically analyzing huge amounts of data to find consistent patterns across its enormous scale, then recognizing those patterns in fresh input and replicating appropriate responses in the output. Chatbots are built from training data of human language and communication, usually the text of most of the internet since the 1990s. These are Large Language Models (LLMs).
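As a toy illustration of that statistical idea (deliberately nothing like the transformer architecture real LLMs use), here’s a bigram model: it counts which word follows which in its training text, then continues fresh input by replaying the most common follower. The corpus is my own made-up example:

```python
from collections import Counter, defaultdict

# Toy "training data": a real LLM trains on terabytes of internet text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Learn the pattern: for each word, count what follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word, length=4):
    """Extend the input by repeatedly picking the most common follower."""
    out = [word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("the"))  # replays a pattern seen in training
```

No understanding anywhere, just counting and replaying correlations; an LLM does this at vastly greater scale and sophistication, but the knowledge is still pattern, not experience.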
If you’re an ordinary person who doesn’t understand this, you’ll think AI chatbots are actual humanlike intelligences. If you’re a software tech professional who thinks human intelligence consists only of recombining these pattern recognitions, you’ll think AI chatbots are actual humanlike intelligences.
Both of these groups are wrong. Chatbots are macros you direct in ordinary language. But they miss a lot of the knowledge that’s generated in worldly, perceptual experience. As a joke I once saw on the internet about AI said, an LLM can tell you all kinds of descriptive facts about popsicles, but it can never taste an actual popsicle.
Funny enough, earlier attempts to build artificial intelligence didn’t use this method of analysis and correlation seeking. Instead, they tried to replicate the logical and syntactic structure of human thought by programming it piece by piece. Computer scientists call this approach Good Old Fashioned AI (GOFAI), and it couldn’t work because it required too much programming and too little adaptation to sloppy or eccentric inputs. The correlational approach of machine learning is much more efficient, but it still relies on massively powerful and expensive computing hardware to work at all.
What Could Be Better Uses of AI?
One of the most difficult aspects of researching AI and its deployments over the last handful of years is that it’s easy to see the problems but difficult to figure out solutions. Alan Blackwell and his 2024 book Moral Codes informed a lot of the ideas in this post. He positioned Moral Codes as not only a critique of the AI industry and the technology’s social effects, but also a positive statement of how to deploy AI technology in uplifting, constructive ways.
But aside from a few remarks in the final chapters, he couldn’t manage it. There was an intriguing description of how a Maori community has leveraged AI technology in their home territory in Aotearoa/NZ to monitor ecological changes and electricity supplies. He drew from a Maori concept of stewardship that provides moral and communal responsibilities to maintain natural systems and community health. It’s quite similar to a concept that I see in Indigenous works from this continent. So there’s a firm ground there.
Yet that seems to be the challenge of so many authors and critics in the AI space. We can’t seem to get past our critiques of the systems and figure out more useful and beneficial ways of applying this technology. Do we have to wait for the investment bubble to burst before we get creative about how to employ these technologies and products?
* Or, you know, the other thing.
Further Reading
Most importantly for this post, I’ve drawn from Moral Codes by Alan Blackwell. I also recommend the work of Eryk Salvaggio, particularly his newsletter Cybernetic Forests. Another critical AI thinker who offers very insightful points about the shortcomings of the industry is Joy Buolamwini. Ed Zitron’s immensely overlong newsletters are also very informative about the financial tomfoolery fuelling the AI bubble.