How to Think About Spacecraft Risk When Numbers Lie

Probability numbers on space missions look rigorous. Here's why they often aren't, and a framework for reading mission risk honestly.

Published by Sevs Armando

How to Think About Spacecraft Risk When the Numbers Don't Add Up

In 1986, a group of engineers at NASA's solid rocket booster contractor, Morton Thiokol, stayed up the night before the Challenger launch trying to stop it. They had data. They had charts. They had a documented concern about O-rings failing in cold temperatures. Their managers heard them out, then voted to fly anyway. Seven crew members died the next morning, 73 seconds into the flight.

The engineers weren't ignored because their data was wrong. They were ignored because the institution had already decided the mission was safe, and the numbers they brought to the table couldn't cut through a decision that had already been made emotionally.

That gap — between what the data says and what people in authority choose to hear — is the actual risk in any high-stakes engineering endeavor. Understanding it changes how you watch any spacecraft launch, not just this one.

The Certainty Illusion in Risk Assessment

There's a cognitive trap specific to how humans interpret probability in complex systems. Call it the Certainty Illusion: the belief that attaching a number to danger makes it more manageable, when it often just makes the danger feel more certain than it is.

Before the 2022 Artemis I uncrewed test flight, NASA put the probability of losing the Orion spacecraft at 1-in-125. That figure sounds rigorous. It isn't, exactly. It's a model built on assumptions, expert estimates for components that had never flown before, and historical data from systems that share some, but not all, characteristics with the hardware being assessed. The number is real in the sense that engineers built it honestly. It's fiction in the sense that no one can verify it without flying the mission dozens of times and averaging the results.
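
To make that concrete, here is a minimal sketch, not any agency's actual risk methodology, of how weakly a short flight record constrains a per-flight loss estimate. The 1-in-125 figure is the one cited above; the 1-in-30 comparison risk and the flight counts are hypothetical, chosen only to show how slowly the evidence accumulates.

```python
# A minimal sketch (not an actual probabilistic risk assessment): how little a
# handful of clean flights does to confirm or refute a per-flight loss estimate.
# Assumes each flight is an independent trial with a fixed loss probability.

def prob_all_clean(per_flight_loss: float, flights: int) -> float:
    """Probability of observing `flights` consecutive successes."""
    return (1.0 - per_flight_loss) ** flights

for flights in (1, 2, 20, 100):
    p_quoted = prob_all_clean(1 / 125, flights)  # the published-style estimate
    p_worse = prob_all_clean(1 / 30, flights)    # a hypothetical, much riskier vehicle
    ratio = p_quoted / p_worse                   # how strongly the record favors 1-in-125
    print(f"{flights:>3} clean flights: "
          f"P(record | 1-in-125) = {p_quoted:.3f}, "
          f"P(record | 1-in-30) = {p_worse:.3f}, "
          f"ratio = {ratio:.2f}")
```

After one or two flights the ratio barely clears 1, and even after twenty straight successes it stays under 2. A perfect record that short can't meaningfully separate a 1-in-125 vehicle from a 1-in-30 one, which is exactly why the number can be built honestly and still be unverifiable.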

When officials later declined to release a crew-loss probability for the crewed Artemis II mission, citing a lack of sufficient data, that was actually the more honest answer. The uncomfortable truth is that first and second flights of entirely new rocket systems don't have enough flight history to produce statistically meaningful risk numbers. They have engineering judgment, careful review, and educated confidence. That's different from certainty, and conflating the two is where things go wrong.

Every spacecraft carries unresolved questions into flight. The productive question isn't whether risk exists; it's whether the known unknowns have been honestly assessed. On Artemis II, the central known unknown is the heat shield. The Orion capsule returned from Artemis I with unexpected erosion on its ablative shield. NASA spent over a year studying the cause. The fix is not a new shield; it's a modified reentry trajectory. Understanding that distinction, mitigation through procedure rather than through redesign, tells you more about residual risk than any probability number.

Risk culture at a space agency is visible in small behavioral signals. When engineers express dissent before a launch and those concerns are addressed through documented resolution, that's a healthy process. When dissent is present and the agency proceeds without directly resolving the specific issue raised, that's a different signal. On Artemis II, some engineers who previously objected to flying with the existing heat shield have continued to object after the agency's additional analysis. Others say the new data addressed their concerns. That split is worth knowing, not because it means the mission will fail, but because it means the risk is genuinely contested, not resolved.

NASA's inspector general last week assessed the agency's risk threshold for lunar surface operations at roughly 1-in-40. That's a policy choice, not a scientific finding. Different agencies in different eras set different thresholds. The Apollo program flew missions with estimated crew-loss probabilities some engineers placed far higher, because the geopolitical stakes in the 1960s justified higher tolerance. Today's Artemis program operates in a different context: commercial competition, congressional oversight, international partners, and a public that has largely forgotten how difficult this is. The acceptable risk threshold is always partly a social and political judgment, not only a technical one. Knowing that lets you evaluate institutional decisions more clearly.
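
One way to see why a threshold like that is a judgment call rather than a derivation: the same per-mission number compounds very differently over a campaign. Here is a minimal sketch using the 1-in-40 and 1-in-125 figures mentioned above, hypothetical campaign lengths, and the simplifying assumption that missions are independent.

```python
# A minimal sketch of how per-mission risk compounds across a campaign.
# Assumes missions are independent with a constant per-mission loss probability;
# the campaign lengths are hypothetical, chosen only for illustration.

def campaign_loss_probability(per_mission_loss: float, missions: int) -> float:
    """Probability of at least one loss across `missions` independent flights."""
    return 1.0 - (1.0 - per_mission_loss) ** missions

for per_mission in (1 / 125, 1 / 40):
    label = f"1-in-{round(1 / per_mission)}"
    for missions in (1, 5, 10):
        p = campaign_loss_probability(per_mission, missions)
        print(f"{label:>9} per mission, {missions:>2} missions: "
              f"P(at least one loss) = {p:.1%}")
```

At 1-in-40 per mission, a hypothetical ten-mission campaign carries roughly a 22 percent chance of losing a crew somewhere along the way; at 1-in-125 it's under 8 percent. Neither number tells you which is acceptable. That's the part an agency, and the society funding it, has to decide.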

Optimism Bias and Why It Kills Smart People

Optimism Bias is the well-documented human tendency to believe that bad outcomes are more likely to happen to others than to ourselves or to projects we're emotionally invested in. Psychologists Daniel Kahneman and Amos Tversky documented the broader family of judgment biases it belongs to, including the closely related planning fallacy. In spaceflight, it shows up as the gradual normalization of known anomalies.

The Space Shuttle program flew 135 missions. By the time Columbia was lost in 2003, foam shedding from the external tank had become so routine that engineers had stopped flagging it as a serious concern. It had happened before. Nothing bad had happened yet. The institutional memory of "this anomaly is survivable" replaced the engineering question "have we actually determined why it happens and whether it could be worse?"

This pattern has a name in aerospace: normalization of deviance, coined by sociologist Diane Vaughan in her 1996 study of the Challenger disaster. The mechanism is simple. An unexpected thing happens. It doesn't cause failure. The team notes it, files it, and the mission continues. The next time it happens, it's slightly less surprising. Over enough iterations, the anomaly becomes part of the baseline, and the baseline gets treated as acceptable.

You see it everywhere outside of aerospace too — in financial risk management before 2008, in pharmaceutical trials that run past early warning signals, in infrastructure inspections that defer action on known cracks. The pattern isn't unique to rockets. Rockets just make the consequences impossible to hide.

The antidote isn't paranoia. It's a deliberate, institutionalized practice of asking: "We've seen this before and survived it. Have we actually explained it, or have we just survived it?" Those are not the same question.

If you're watching the Artemis II launch in April, you now know what to look for beyond the fire and the thunder. A successful flight would be genuinely historic. The questions raised before launch don't mean it will fail. They mean the risk was real, was contested, and was accepted anyway, which is exactly how every meaningful human endeavor into new territory has ever worked.

The honest version of courage isn't pretending the risk doesn't exist. It's understanding it clearly and deciding to go anyway.

This is exactly the kind of analysis we publish every week for The Science Impact subscribers — before it reaches mainstream news cycles. Subscribe free. Stay a step ahead.