AI Is a Toxic Menace
What we currently call “AI” is not intelligent, and won’t be. The current AI boom will end in a financial crash. The AI industry causes terrible ecological and social damage.
Under the hood, generative AI models are monstrosities, built from consuming previously unfathomable amounts of data, labor, computing power, and natural resources.
(Hao 2025)

Figure 1: AI winter is coming
“AI-first” open source makes me choke
I nearly lost my cool the other day. I was at a strategic summit for the Plone community I’m part of. Despite my ambivalence about flying, I had taken a plane to South Africa to attend a dedicated workshop, facilitated by local futures studies rock star Jon Cherry.

Figure 2: Group photo Stellenbosch Sprint Jan 2026
It was bad enough that one of the participants kept trying to offload our brainstorms to an AI chatbot. The whole point of the strategy session was the process of jointly converging on shared core values, and internalizing that convergence. If you short-circuit that process and outsource the outcome, the outcome would have no value. It would be just words, instead of a deeply felt conviction that we’re on a mission together.
What pushed me over the edge was another participant, who repeatedly insisted Plone should be an “AI-first CMS”. Surely we’ve all seen the massive community backlash when Firefox declared itself to be an “AI-first browser”? My blood was boiling. After the third such statement, I shouted: “AI is toxic!” He was clearly surprised by that. As I was surprised by his surprise. Because everybody knows that AI is toxic, right? It’s friggin obvious, innit?
Apparently not.
Let’s talk
Thankfully, the Plone people are a bunch of nice guys. Over lunch, my AI-first colleague and I got together and quickly found that despite our differences, we have a lot of common ground when it comes to AI in open source. We started working on a joint position statement, and figuring out a process to widen our conversation to the whole community.
This post documents where I’m coming from: the stuff I read which makes it so blindingly obvious to me that AI is toxic. The next post in this series tries to outline a way forward: how do we relate to the current AI hype, without damaging our community?
What we currently call “AI” is not intelligent, and won’t be
Claims around consciousness and sentience are a tactic to sell you on AI.
Our monkey minds are poorly equipped to navigate this complex high-tech world we’ve created. Our minds play tricks on us. We see things that aren’t actually there. We look at the clouds and see an epic battle between opposing forces. We have to reassure our kids that there is not actually a crocodile under the bed in the dark. The evolutionary pressure to survive means it’s better to see a lion when there’s only a stone, than to see a stone when there’s actually a lion. Better to be scared than to be eaten. We are hard-wired to see patterns everywhere (Dennett 1991).
This tendency is used against you. To make you see “intelligence” where there is none. Because a con man like Sam Altman can become a billionaire, by playing that game.

Figure 3: Whatever! Your dumb robot speak doesn’t scare me!
Let me cut through a whole tangle of philosophical and technical complications around consciousness and AI. The main thing you need to understand is the crucial difference between the following two concepts:
- (1) Large Language Models
- LLMs are text prediction machines. Statistics on steroids. Also called generative artificial intelligence, abbreviated to GenAI, though there is no intelligence involved, only statistics. (A toy sketch of what “text prediction” means follows this list.)
- (2) General AI
- AI that is reaching human-equivalent or superhuman levels of understanding. This would be the real deal, except that it has always been science fiction. Its imminent arrival has been heralded since the mid-20th century. We’re still waiting.
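To make “text prediction machine” concrete, here is a deliberately naive sketch: next-word prediction with nothing but bigram counts. A real LLM swaps the counting for a neural network with billions of parameters and a much longer context window, but what comes out is still a sample from a next-word probability distribution, nothing more. The toy corpus below is made up purely for illustration.

```python
from collections import Counter, defaultdict
import random

# A toy corpus standing in for "the internet". A real LLM ingests trillions of
# tokens; the principle is the same: learn which word tends to follow which context.
corpus = ("the cat sat on the mat the dog sat on the rug "
          "the dog ate the fish and the cat ate the mat").split()

# Count, for every word, which words follow it (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word` in the corpus."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate "plausible" text, one predicted word at a time.
word = "the"
generated = [word]
for _ in range(8):
    word = predict_next(word)
    generated.append(word)

print(" ".join(generated))  # plausible-looking word salad, no understanding anywhere
```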
If I just get you to understand that these things are not the same, I’ve already succeeded in my main mission in writing this article.
The AI con
In order to make you confuse the two, they’re also called, respectively:
- generative artificial intelligence: LLM bullshit engines; and
- artificial general intelligence: machine thinking approaching or surpassing human intelligence.
Notice the sleight of words here. Those phrases look eerily similar, while meaning completely different things. That’s not an accident. You’re supposed to subconsciously equate generative artificial intelligence —GenAI, LLM bullshit engines— with artificial general intelligence —AGI, machine gods. When actually GenAI has nothing to do with intelligence. And AGI is nowhere in sight.
This word play is part of the AI con job (Bender and Hanna 2025).
The fact that GenAI is nothing like AGI, and will not somehow become AGI either, has far-reaching implications. So let’s dive in a bit deeper.
Large Language Models are not “intelligent”
Language models thus represent nothing more than extensive information about what sets of words are similar and what words are likely to appear in what contexts. While this isn’t meaning or understanding, it is enough to produce plausible synthetic text, on just about any topic imaginable, which turns out to be quite dangerous: we encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed.
When we see language, we see consciousness. For tens of thousands of years, producing language has been the defining trait of human intelligence. If you encounter speech, there must be a human nearby.
Well, that has changed. We now have machines spinning words. That does not mean those machines are actually intelligent. We just have to overcome our monkey mind’s tendency to see patterns in noise, to see human intelligence in bullshit engines.
Vaporware
The illusion of intelligence is strongest when you engage in chat dialogue outside your own field of core competence. Start working within your own area of expertise, and the illusion quickly dissipates. The chatbot confidently hallucinates (sidenote: I should point out that even the term "hallucination" is controversial, since it suggests a consciousness that is not there, in support of the AI grifter's propaganda. (Klein 2023) ) utter bullshit. It misunderstands you. It goes off-track. It contradicts itself. It is unable to follow instructions.
What we have is only the surface appearance of intelligence, with nothing underneath. It’s like a sweet crust of cinnamon candy sugar, with no apple inside, just a hollow emptiness. Only a babble of words, not founded on any real understanding.
There is nothing connecting an LLM to actual reality. What it lacks is a world model: reasoning capability grounded in reality, something beyond next-word probability distributions.
Scaling LLMs does not bootstrap real machine intelligence
The idea — which has led us all the way into the apparent AI bubble — was that one could use “massive amounts of data – often derived from human behavior – as a substitute for intelligence.”
I said it would never work.
It didn’t.
And now, at last, the field is starting to realize it.
Gary Marcus is a prominent AI researcher. His Substack, Marcus on AI, has 100,000 subscribers. His voice has long been that of a lone prophet in the desert, pointing out how the emperor has no clothes.
His position is that scaling (throwing ever more data at ever more compute) will not, and cannot, deliver real breakthroughs towards human-equivalent artificial intelligence. Throwing ever more money at Large Language Models is not going to magically realize the holy grail of true machine intelligence.
AI winter is coming
Marcus’ idea is that we need “neuro-symbolic AI” instead of only deep learning: we need to augment large language models with semantic reasoning, or “world models”. Combine deep learning with good old-fashioned AI, if you will. Deep learning alone won’t cut it.
Let’s just say that position did not make him popular, while the whole current AI boom is about scaling LLMs in an attempt to reach AGI and conquer the universe.
He ran into fierce opposition. Yet now he is gleefully counting how all his opponents are coming over to his side.
- Rich Sutton.
- Ilya Sutskever.
- Sir Demis Hassabis becomes the latest to say that ChatGPT is a dead-end and that we must turn our focus to world models.
- Yann LeCun, longtime critic of neurosymbolic approaches, changes teams.
If you don’t know those names: this is the crème de la crème of the AI field. Turing Award winner (Sutton). OpenAI co-founder and chief scientist (Sutskever). Nobel laureate and Google DeepMind co-founder (Hassabis). Meta chief AI scientist (LeCun). They’ve all been forced to grudgingly accept Marcus’ thesis that scaling is a dead end.

Figure 4: Is another AI winter coming? (Image generated by ChatGPT) (Marcus 2025a)
You know things are bad, when even Sam Altman, the poster child of the AI boom, starts talking about a “bubble” after the GPT-5 launch turned out to be a dud.
In short: what we currently call “AI” is not actually intelligent. Nor will it somehow magically become intelligent if we throw more data and compute at LLMs. Which is a bit of a problem, given the amounts of money that have been staked on the losing side of this debate.
The current AI boom will end in a financial crash
We are in a bubble. Generative AI does not do the things that it’s being sold as doing, and the things it can actually do aren’t the kind of things that create business returns, automate labor, or really do much more than one extension of a cloud software platform. The money isn’t there, the users aren’t there, every company seems to lose money and some companies lose so much money that it’s impossible to tell how they’ll survive.
Of the organizations adopting AI into their workflows, 95% are getting zero return on their AI implementation efforts (Challapally et al. 2025).
Software developers think AI tooling makes them faster, but it actually slows them down (Becker et al. 2025). Using AI reduces cognitive understanding and impedes skill formation in software developers (Shen and Tamkin 2026).
While LLM use offers immediate convenience, this comes at the expense of persistent long-term cognitive costs. (Kosmyna et al. 2025)
Irrational exuberance
In short: AI seems to work, but the benefits aren’t actually there. Everybody knows this, which is why organizations are not paying serious money for AI subscriptions. This means the AI companies are losing money. Given the phenomenal costs of building out AI datacenters, the lack of actual revenue means they’re losing money at phenomenal rates.
The amounts of money invested imply an expectation of spectacular revenue growth. Those AI data centers were built in anticipation of a massive future breakthrough — AGI, or artificial general intelligence. The assumption was that this was a winner-take-all race for world dominance and unprecedented power.
That assumption was wrong. As I summarized above, there’s now an emerging consensus that scaling compute alone will not deliver AGI. AI research is running out of steam. The whole approach turns out to be a dead end.
In plain English: you can throw a mountain of money at this problem, but that is money down the drain.
Oops.
It was understandably tempting for the market to think that scaling would have been enough, because scaling is fundamentally a financial exercise and the market knows how to move money. Algorithmic innovation is more democratic and much less predictable.
(“plibin” on X, cited in Marcus 2025b)
Capitalism thought it could buy itself a machine god. It is now getting to grips with the hard reality that that ain’t working out. Plan B is to hope that God will give them a machine god, by providing the kind of breakthrough idea this field has been chasing since the Second World War.
Good luck with that. Sounds like a recipe for a new AI winter to me: a repeat of the boom/bust cycle that has characterized the AI field since its inception. We’ve heard predictions about the imminent arrival of AGI since the 1950s. Ain’t happened yet, and ain’t happening now.

Figure 5: Wile E. Coyote runs off a cliff
Brace for impact
OpenAI has sucked in so much money that the end of the bubble is in sight.
OpenAI needs more than a trillion dollars ($500bn in operational expenses and at least another $625bn-$800bn+ for data centers), and there is not enough private and venture capital to pay for it.
The size of the money mountains we’re talking about is difficult to comprehend for normal human beings like you and me. But it’s critical to get a grip on this, because our lack of understanding is part of what makes this bubble possible in the first place; and it leads us to underestimate the devastating fallout we’re about to face.
For me, a thousand dollars is significant money. A million dollars is about the limit of my imagination. A billion is a thousand million, and somewhere along the way there I have already fully blanked out. Which means I’m ill-equipped to think in terms of a trillion dollars: that’s a thousand billion, a million millions.
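One way to make that scale tangible is a back-of-the-envelope calculation. The spending rate below is an arbitrary assumption, chosen only for illustration.

```python
# How long does it take to burn through a trillion dollars at a rate
# a normal person can still picture? (Illustrative assumption: $1 million per day.)
thousand = 1_000
million = 1_000 * thousand
billion = 1_000 * million
trillion = 1_000 * billion                # a thousand billion, a million millions

burn_rate_per_day = 1 * million           # spend one million dollars, every single day
years = trillion / burn_rate_per_day / 365
print(f"{years:,.0f} years to spend it all")   # roughly 2,740 years
```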
Big bazooka
The scale of investment into GenAI we’re talking about is in the range of a trillion dollars. This is the same order of magnitude as the central bank “big bazooka” of the great financial crisis, or the one used to prop up the economy during the Covid crisis.
There’s not enough investment money in the world to build out all the data centers OpenAI and Oracle have promised to build. There’s not enough grid capacity in the US to power those data centers, even if they were to be built.
Puny revenues
The mountain of money spent on fast-depreciating, single-use AI infrastructure is totally out of whack compared with the relatively puny amounts of money actually being made.
The basic unit economics of generative AI are insane. Providers must purchase tens or hundreds of thousands of GPUs each costing $50,000 a piece, and hundreds of millions or billions of dollars of infrastructure for large clusters. And that’s without mentioning things like staffing, construction, power, or water.
Then you turn them on and start losing money. Despite hundreds of billions of dollars’ worth of GPUs sold, nobody seems to make any money, other than NVIDIA, the company that makes them, and resellers like Dell and Supermicro who buy the GPUs, put them in servers, and sell them to other people.
The costly Nvidia Blackwell chips that have been deployed are bleeding value by the minute. They have an end-of-life deadline of a few years max, cannot be repurposed for anything other than running AI inference, and the same is true for the data centers that house them. All that capital will have evaporated before the decade is over.
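To put rough numbers on that evaporation, here is a sketch using the $50,000-per-GPU figure quoted above; the cluster size and useful lifespan are assumptions, purely for illustration.

```python
# Back-of-the-envelope capex and depreciation for one large AI cluster.
# GPU price comes from the quote above; cluster size and useful life are assumptions.
gpu_price_usd = 50_000
gpus_in_cluster = 100_000          # assumption: one large training/inference cluster
useful_life_years = 4              # assumption: "an end-of-life deadline of a few years max"

gpu_capex = gpu_price_usd * gpus_in_cluster          # $5 billion in chips alone
yearly_write_down = gpu_capex / useful_life_years    # straight-line depreciation

print(f"GPU capex:           ${gpu_capex / 1e9:.1f}B")
print(f"Value lost per year: ${yearly_write_down / 1e9:.2f}B")
print(f"Value lost per day:  ${yearly_write_down / 365 / 1e6:.1f}M")
# And that is the chips alone, before buildings, power, cooling, and staff.
```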
When the bubble bursts, as it will, even if it lasts until 2027, it will take down Oracle with it (Zitron 2026). But it’s not just Oracle. The AI bubble is now underpinning the whole of tech.
Since ChatGPT, the six largest tech giants together have seen their market caps increase by $8 trillion. At the same time, more and more doubts have arisen about the true economic value of generative AI.
(Hao 2025)
That’s 8 trillion of market cap, resting on a yearly revenue — not profit; revenue! — of 50 billion (Zitron 2025b). Which sounds like an awful lot, until you translate it into: 0.05 trillion of revenue. To prop up 8 trillion in market cap.
I believe that the generative AI market is a $50 billion revenue industry masquerading as a $1 trillion one
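The mismatch fits in one line of arithmetic, using the figures already cited above (both rough estimates):

```python
# Industry-wide GenAI revenue versus the market cap Big Tech has added since ChatGPT.
genai_yearly_revenue = 50e9       # ~$50 billion per year (Zitron 2025b)
added_market_cap = 8e12           # ~$8 trillion (Hao 2025)

ratio = genai_yearly_revenue / added_market_cap
print(f"Revenue as a share of the added market cap: {ratio:.1%}")  # roughly 0.6%
# And that is revenue, not profit.
```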
There is only one way this can end, and it ain’t gonna be pretty.
A trillion dollars is a terrible thing to waste
This is not just a financial numbers game. Those numbers represent real human effort and real planetary resources, squandered on a wild goose chase. This is a massive opportunity loss. We could have ended world hunger for the same amount of money now spent on data centers. We could have made the Green New Deal work, with that level of investment.

Figure 6: Burning a pile of money
If that were all there was to it (a tech hype that does not deliver, causing a financial meltdown similar in magnitude to the Great Recession or the Covid-19 crisis), that would be bad enough. Our societies are still reeling from the fallout of those crises.
But there is an even more sinister side to our predicament.
The AI industry causes terrible ecological and social damage
“The first few months, I couldn’t sleep,” she says. “I would close my eyes and still see the screen loading.” Images followed her into her dreams: of fatal accidents, of losing family members, of sexual violence she could not stop or escape. On those nights, she says, her mother would wake and sit with her.
The work affected her personal life. “The idea of sex started to disgust me,” she says. She withdrew from intimacy and felt increasingly disconnected from her partner.
When Singh complained, the response was blunt: “your contract says data annotation – this is data annotation.” She left the job, but a year on, she says the thought of sex can trigger a sense of nausea or dissociation. “Sometimes, when I’m with my partner, I feel like a stranger in my own body. I want closeness, but my mind keeps pulling away.”
Can you please stop reading here for a moment, and just breathe?
Trauma and digital colonialism
That’s a young woman getting traumatized, watching snuff movies and sexual violence as a job, for $2 per hour. This is what we participate in, when using Large Language Models. Those LLMs may be trained using unsupervised deep learning algorithms—but the data fed into them must still be cleaned up, miscategorizations detected and corrected. That’s work offloaded to the global poor.
And no, you cannot opt out of that neocolonial abuse by using an “open source” LLM. Those need data cleaning, too.
Industrial scale copyright violations
As everybody knows, those internet-spanning data sets fed into the LLMs involve industrial-scale copyright violations of creators’ works, subsequently threatening the livelihoods of those very same creators by offering cheap knockoffs as a substitute for their work.
Ecological destruction
Meanwhile, doing something against our escalating climate crisis takes a back seat, because the money machine has chosen a different direction. AI data centers have an insatiable thirst for energy (to power up those NVIDIA GPUs) and water (to cool them down again). Google, which had been climate neutral since 2007, quietly dropped its net zero pledge in 2024. Because AI.

Figure 7: Data centers are energy vampires
By 2028, “AI alone could consume as much electricity annually as 22% of all US households.”
Some power companies that were planning to go all solar from now on, in order to avoid destroying the planet (even more than they already have), are now saying they will be bringing new natural-gas-fired plants online in order to keep up with the increased demand from LLM data centers.
Essentially, none of the new renewable electrical capacity the US has built in recent years is going to replace existing CO2-barfing coal, oil, and gas plants. It’s all getting eaten up by AI data centers. That of course drives up electricity rates across the country, which in turn means people are less likely to switch to electric appliances, which means even more CO2 produced.
Fascism and rampant disinformation
On top of all that, disinformation spreads like wildfire while we’re witnessing a surge of fascism globally. We’ve all seen the tech CEOs kneel for Trump. The days of starry-eyed utopian tech optimism are long gone. I bought Wired magazine in the 1990s and bought into that vibe. The internet was going to change the world! And it did, just not the way I hoped.

Figure 8: Musk performing a Hitler salute
That whole Silicon Valley ideology has revealed its dark truth. It is deeply misanthropic, steeped in eugenics (Gebru and Torres 2024) and race science (Bender and Hanna 2025). It’s fully aligned with the fascist agenda.
Class war
While executives suggest that AI is going to be a labor-saving device, in reality it is meant to be a labor-breaking one. It is intended to devalue labor by threatening workers with technology that can supposedly do their job at a fraction of the cost.
Never mind that AI does not actually work, and cannot deliver. It’s the threat that counts, not the reality. It’s the power shift that matters.
The AI can’t do your job, but an AI salesman can convince your boss to fire you and replace you with an AI that can’t do your job.
Not only do you then lose your job. The people who depended on your work suffer from the degradation of service, caused by you being replaced by a bot that cannot do your job. This is the Rot Economy in action (Zitron 2023), presided over by business idiots (Zitron 2025a), who only care about one thing: numba go up.
We all pay the price by living in an impoverished world, while our billionaire overlords prep for the end of civilization in their New Zealand bunker mansions (sidenote: If they're not partying on a pedo island. But I digress. ) .