
"AI" Is the New "Befehl Ist Befehl"

Reading time: 4 minutes

The recent mass murder of 175 Iranian schoolgirls was not caused by a rogue AI. It was the logical outcome of pursuing efficiency in the kill chain above all else. Claude had nothing to do with it.

This war crime was the system working as intended. All the talk about AI just deflects attention from the real people whose awful decisions led directly to this atrocity.

Kill chain

That, in short, is the upshot of an article in The Guardian, which was previously published as Kill Chain on the Artificial Bureaucracy Substack.

It’s a deep dive into the history of technology, specifically into the evolution of targeting processes in the military. Which sounds boring, until you translate it as: who decides whom to kill, and how fast. Kill kill kill.

Figure 1: Aegis combat system displays aboard USS Vincennes (CG-49), the warship whose crew misidentified Iran Air Flight 655 as an attacking fighter jet and shot it down in 1988, killing all 290 people aboard.


Organisations need human judgement

What looks like “friction” from an efficiency perspective, one that wants to optimize for killing people as fast as possible, is actually a manifestation of human judgement preventing war crimes.

What the next generation of reformers would measure as latency – the delay between identifying a target and striking it – was the window in which mistakes could be caught.

(Baker 2026)

Bureaucratic organisations run into a paradox: they derive their legitimacy from appearing to be inhuman, neutral, evidence-based, procedure-driven — as opposed to the fickleness and political nature of human decision-making.

But without humans in the loop, the procedures lose any grounding in objective reality. They start chasing their own tail. The only measure of outcome is the predefined frame of the process itself.

What remains is a bureaucracy that can execute its rules but with no one left to interpret them. Bureaucracy encoded in software does not bend. It shatters.

(Baker 2026)

A known failure mode of bureaucracy

We’ve seen a similar dynamic in the Netherlands, in a very different context: the rigid application of supposedly impersonal (but actually racist) criteria led to the wrecking of countless lives, in what has come to be known as de kinderopvang toeslagenaffaire: the child care benefits scandal. People driven into bankruptcy, children taken away, suicides.

I’m sure every country has its own recent example of bureaucratic processes running amok. The Grenfell Tower fire comes to mind in the UK context — a few bucks saved in the name of efficiency, people literally burned as a result.

Losing track of humanity and of wider outcomes is an inherent failure mode of bureaucratic organisations. Since it is inherent, it needs to be mitigated by deliberate management decisions. Not addressing that failure mode is itself a human decision.

After WW2, it was widely recognized that “Befehl ist Befehl” (“an order is an order”, i.e. “I was just following orders”) does not provide an excuse for participating in war crimes and crimes against humanity. Blaming immoral actions on “the computer” or “the AI” is just the current-day equivalent of that same lame excuse: cowardly hiding behind an organisation you’re part of.

The AI halo obfuscates human responsibility

Following the bombing, a lot of attention went to an online discussion about the role of Claude in this war crime, even though Claude had nothing to do with it. It’s an older class of machine-learning technologies that supports this kill chain.

The constitutional question of who authorised this war and the legal question of whether this strike constitutes a war crime were displaced by a technical question that is easier to ask and impossible to answer in the terms it set.

(Baker 2026)

Chasing efficiency and speed as primary values, within a system whose sole purpose is to kill people, means that the immoral decisions happen at the level of how the system is set up, rather than at the level of individual targeting choices.

Nobody specifically decided that killing 175 schoolgirls was a great plan. But many people made decisions that shaped the whole system in a direction where this was the natural outcome. Those people should be held to account for the war crime they participated in. That whole system is criminal: it’s called Maven, it’s provided by Palantir, and it’s funded by US taxpayers through the Department of War.

It has also occluded something deeper: the human decisions that led to the killing of between 175 and 180 people, most of them girls between the ages of seven and 12. Someone decided to compress the kill chain. Someone decided that deliberation was latency. Someone decided to build a system that produces 1,000 targeting decisions an hour and call them high-quality. Someone decided to start this war. Several hundred people are sitting on Capitol Hill, refusing to stop it. Calling it an “AI problem” gives those decisions, and those people, a place to hide.

(Baker 2026)

Go and read the source article. It’s really good.

References

Baker, Kevin T. 2026. “AI Got the Blame for the Iran School Bombing. The Truth Is Far More Worrying.” The Guardian, March. https://www.theguardian.com/news/2026/mar/26/ai-got-the-blame-for-the-iran-school-bombing-the-truth-is-far-more-worrying.