When AI Makes Life-and-Death Decisions in Seconds, Who's Really in Control?
In the first 24 hours of a hypothetical war with Iran, the United States struck a thousand targets. By week's end, the total exceeded 3,000, twice the volume of the 2003 Iraq invasion's opening phase. Artificial intelligence made this scale possible. U.S. Central Command maintains that humans approve every targeting decision, with AI serving as a tool to help them "make smarter decisions faster." But when systems operate at this pace, the actual role humans play becomes difficult to define.
Israel's military offers a cautionary example. An AI system called Lavender identified suspected militants in Gaza for targeting. Officials said humans assessed each case. One Lavender operator revealed a different reality: as trust in the system grew, human checks shrank to a single confirmation, verifying that the target was male. "I would invest 20 seconds for each target," the operator said. "I had zero added-value as a human, apart from being a stamp of approval."
Insurance Claims Follow the Same Pattern
The insurance industry shows how this dynamic plays out in a less visible domain. In 2023, ProPublica reported that Cigna deployed an algorithm to flag claims for denial. Physicians, legally required to exercise clinical judgment, signed off on algorithmic decisions in batches, spending an average of 1.2 seconds per case. One doctor denied more than 60,000 claims in a single month.
"We literally click and submit," a former Cigna doctor said. "It takes all of 10 seconds to do 50 at a time."
Twenty seconds to authorize a strike; 1.2 seconds to deny a claim. The human remains in the loop. Humanity does not.
Difficulty Serves a Purpose
Some decisions should be hard. Deciding to kill a person or deny someone healthcare should require time and mental effort. That difficulty is not inefficiency; it is a safeguard.
When decisions move quickly, institutions stop feeling the weight of what they do. They become numb. Friction in the decision-making process forces people to pause, question, and push back. Remove that friction, and you remove the mechanism that prevents harm.
AI promises to lift the burden of cognitively demanding work. In many fields, faster decisions are genuine progress. But some choices are important enough that we ought to feel them. The difficulty creates space for doubt, for second thoughts, for the human conscience to register what is happening.
When the human in the loop spends mere seconds on each decision, the distinction between autonomous systems and human-supervised ones becomes mostly semantic. Real human oversight requires that humans actually have time to oversee.
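What would meaningful oversight look like in practice? One option is to build the friction back in deliberately: a review interface that refuses sign-offs arriving faster than a human could plausibly have read the case. The Python sketch below is a hypothetical illustration, not a description of any system mentioned above; the CaseReview class, the 90-second floor, and the case IDs are all assumptions chosen for the example.

    import time
    from dataclasses import dataclass, field

    # Hypothetical floor: sign-offs faster than this are treated as rubber stamps.
    MIN_REVIEW_SECONDS = 90.0

    @dataclass
    class CaseReview:
        case_id: str
        opened_at: float = field(default_factory=time.monotonic)

        def approve(self) -> bool:
            """Accept the sign-off only if a minimum review time has elapsed."""
            elapsed = time.monotonic() - self.opened_at
            if elapsed < MIN_REVIEW_SECONDS:
                # 1.2 seconds of "review" fails here by design.
                print(f"{self.case_id}: rejected after {elapsed:.1f}s of review")
                return False
            return True

A dwell-time floor is a crude proxy, since a reviewer can open a case and walk away, so a real deployment would presumably pair it with sampling audits and per-reviewer throughput caps. But it makes the trade-off explicit: the gate limits each reviewer's throughput on purpose.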
What This Means for Your Work
If you work in insurance, this matters directly. AI for Insurance systems are already making decisions about claims at scale. Understanding how these systems work, and where human judgment can actually operate, is essential to your role.
The question isn't whether AI should help with these decisions. It's whether the humans involved have genuine time and authority to think critically about them. When approval becomes a rubber stamp, the system has crossed a line.
For healthcare professionals, AI for Healthcare raises the same questions about clinical judgment and institutional accountability.
The cost of meaningful human oversight is slower decisions and lower throughput. That is a cost worth paying when lives and health are at stake.