The looming accountability crisis


“A computer can never be held accountable. Therefore a computer must never make a management decision.” — Internal IBM training documents, 1979

You can’t sue an algorithm

On 2 October 2023, an autonomous electric vehicle dragged a San Francisco woman roughly 20 feet towards the curb at approximately 8 miles per hour, after she had first been struck by a human-driven vehicle1. The vehicle was a Chevrolet Bolt in General Motors’ Cruise autonomous vehicle fleet. The consequences were dire: the California Department of Motor Vehicles suspended Cruise LLC’s permits soon after2, and the incident triggered a purge of the unit’s key leadership3.

You can’t fire a neural network, or send a large language model to jail. When you need a neck on the chopping block, it’s gotta be made of human flesh. You won’t pacify angry customers just by tossing your robots into figurative compactors. People are gonna want to see bright red blood when they suffer losses because your product or service malfunctioned.

The sad truth is that the accountability exacted in the Cruise incident was the exception. The norm is a pattern of no accountability at all:

  • 2020: Despite false arrests and the wrongful incarceration of innocent people, no executives from the AI vendors responsible for these failures faced criminal charges4.
  • 2021: No executives were fired or faced criminal liability for the failure of Epic’s sepsis algorithm5, which missed 67% of sepsis cases while generating false alarms for 18% of all hospitalized patients6.
  • 2023: A class action lawsuit7 accused UnitedHealthcare of using an AI algorithm, nH Predict, that not only denied and overrode claims that elderly patients’ own doctors had approved, but also carried a staggering 90% error rate. In the same year, Cigna8 and Humana9 faced similar accusations in court.
  • 2025: Five of the seven counts in the UnitedHealthcare lawsuit were dismissed10. UnitedHealthcare denied that nH Predict was ever used to make coverage decisions.

Regarding that last sentence, I’ll concede that it’s all speculation and we’ll never know for sure. But contrast that with what we do know today: an increasing number of businesses are frantically trying to replace employees with AI11. And they’re proud of it, too.

The trend of unaccountability is there in plain sight, and it’s only going to get worse.

When AI becomes ‘good enough’

The standard response to these accountability failures is predictable: ‘AI will get better.’ And it will. But here’s what that narrative conveniently ignores: even with dramatically improved accuracy rates, the absolute number of cases humans have to review stays the same or grows, because the volume of automated decisions scales up at least as fast as the error rate falls. Add to this the growing trend of companies laying people off and delegating their work to AI, and we have fewer humans doing more review and validation work.

Let’s put it this way. Suppose you filed an insurance claim and an AI model that is 90% accurate denied your payout. Would you quietly accept your fate and walk away? Or would you try to get in touch with a human at the insurance company, only to find there’s no one left to pick up the call?
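To make the arithmetic concrete, here’s a toy calculation. Every volume and accuracy figure below is an illustrative assumption, not data from any of the cases above:

```python
# Toy model: absolute review burden as accuracy improves but volume grows.
# All figures are illustrative assumptions, not real-world data.

scenarios = [
    # (label, automated decisions per year, model accuracy)
    ("today",          1_000_000, 0.90),
    ("better model",  10_000_000, 0.99),
    ("near-perfect", 100_000_000, 0.999),
]

for label, volume, accuracy in scenarios:
    errors = volume * (1 - accuracy)  # expected wrong decisions
    print(f"{label:>12}: {volume:>11,} decisions at {accuracy:.1%} accuracy "
          f"-> {errors:,.0f} errors for humans to catch")
```

Each tenfold accuracy improvement is cancelled out by a tenfold volume increase: the pile of wrong decisions someone has to catch never shrinks.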

Can AI check itself?

Most people know by now that large language models are stochastic. Using stochastic models to validate other stochastic models, even if it worked well (hint: it doesn’t), does not resolve the conundrum I posed earlier.

It’s turtles all the way down: each additional AI system you add to the workflow is yet another output to review. That’s why you hear terms like ‘traceability’, ‘human-in-the-loop’ systems and audit trails floated as ways to ensure human accountability in AI-assisted systems.
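Here’s a toy model of that regress. It generously assumes each validation layer has an independent error rate; real models share failure modes, so reality looks worse:

```python
# Toy model: stacking stochastic validators on a stochastic generator.
# Error rates are illustrative assumptions; independence is generous,
# and false positives on good outputs are ignored entirely.

volume = 1_000_000           # outputs produced by the base model
base_error_rate = 0.10       # a 90% accurate generator
validator_miss_rate = 0.10   # each validator misses 10% of bad outputs

errors = volume * base_error_rate
outputs_to_audit = volume    # the generator's outputs themselves

for layer in range(1, 4):
    errors *= validator_miss_rate  # errors slipping past this validator
    outputs_to_audit += volume     # every validator emits verdicts to audit too
    print(f"after validator {layer}: ~{errors:,.0f} errors slip through, "
          f"{outputs_to_audit:,} outputs now exist to audit")
```

Residual errors shrink but never reach zero, while the stack of outputs someone must audit grows with every layer you bolt on.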

The math is not mathing

Let me reiterate the core problem: we’re facing a mathematical impossibility of human oversight at AI scale. Pundits and bright-eyed AI evangelists love to talk about the productivity, efficiency and accuracy gains from AI solutions, but rarely about how accountability is supposed to survive a shrinking workforce.

Remember when the CrowdStrike outage grounded thousands of flights worldwide12? With or without AI, software runs our world, our societies and our lives. Plenty of automated systems, including mission-critical ones, depend on well-engineered software. Today, AI models are capable of working non-stop in the background, generating billions of lines of code while we sleep. Great, but who’s gonna review all of it? I mean, who wants to take responsibility for the security incidents or service disruptions arising from AI-generated code? Bear in mind that there’s going to be a shit ton more code in our world in the near future!
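A quick back-of-envelope sketch. Both figures are assumptions: a billion generated lines a day, and a per-reviewer throughput well above what code-review research typically considers effective:

```python
# Back-of-envelope: human reviewers needed for AI-scale code generation.
# Both numbers are assumptions for the sake of the sketch.

lines_generated_per_day = 1_000_000_000  # "billions of lines while we sleep"
lines_reviewed_per_day = 2_000           # a very generous reviewer throughput

reviewers_needed = lines_generated_per_day / lines_reviewed_per_day
print(f"full-time reviewers needed: {reviewers_needed:,.0f}")  # -> 500,000
```

Even at that generous throughput, you’d need half a million full-time reviewers, and the headcount trend is pointing in the opposite direction.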

There aren’t even gonna be enough people for executives to throw under the bus, because most of them have already been laid off. And even if companies could churn through staff fast enough to hire the next willing victim, it’s only a matter of time before the business suffers a fatal hit to its credibility and reputation.

A silver lining

Thankfully, some companies are starting to come to their senses after getting drunk on the AI Kool-Aid. Klarna CEO Sebastian Siemiatkowski famously claimed that AI was doing the work of 700 laid-off employees. This year, in 2025, he told Bloomberg that the AI-focused strategy “wasn’t the right path” and that while AI customer service chatbots were cheaper than human staff, they produced “lower quality” output13.

Don’t get me wrong: I’m not an ‘anti-AI doomer’. I’m a big fan of Claude Code myself. It’s truly a game changer, and I’m personally excited about the future of AI-assisted software engineering. The pendulum is still swinging at full force towards replacing everything with AI, but at some point it will swing back. People will start to course-correct, and hopefully we’ll arrive at a beautiful balance where AI-assisted workflows genuinely improve our lives and the future of mankind.

The folks at IBM did us all a favour by leaving behind such a prescient line in the annals of software engineering. Perhaps it’s time to keep it at the back of our minds as we forge ahead towards a new frontier with our newfound artificial friends.