Stop Crying Over Spilled Sample Bottles
What the Naval Nuclear Propulsion Program and Silicon Valley can learn from one another.
On a nuclear submarine, there is no such concept as “move fast and break things.”
The classic Silicon Valley mantra makes its priorities clear: speed of iteration outweighs the stability of what exists right now. In the Naval Nuclear Propulsion Program (NNPP), this would be downright heresy. The NNPP is slow. Burdensome. Boring. It is steeped in a rigid conservatism that permeates every aspect of design and operation, because the stakes are too high to tolerate anything else.
For a long time, I believed that reliable engineering required that conservatism. It is drilled into you from day one. Twice, the Navy put me through the nuclear power pipeline, and its indoctrination is clear: the perfect operator demonstrates integrity, maintains procedural compliance, takes full ownership of their station, provides forceful backup to their team, possesses a deep level of knowledge, communicates formally, and approaches every task with a questioning attitude.
The nuclear operator lives in a world of litanies, checklists, and pre-planned responses. They are not — except in the most extreme cases — fast.
Naval Reactors often explains its culture of reliability with slices of Swiss cheese. Alone, each slice (an operator) has holes — fatigue, gaps in knowledge, or simple human error. But stack them together, and the holes don’t line up. One operator’s weakness is covered by another’s strength. The lesson is simple: no individual is perfect, but the team, structured properly, can achieve near-perfect reliability.
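To put rough numbers on the model (my own back-of-the-envelope arithmetic, not Naval Reactors'), assume each slice misses a given fault 10% of the time, and assume the slices miss independently. The odds of a fault slipping through every slice collapse multiplicatively:

```python
# Illustrative numbers only: assumes each operator misses a given fault
# 10% of the time and that misses are independent. Real watch teams are
# neither identical nor independent -- structuring the team properly is
# the whole point -- but the multiplication shows why stacking helps.
p_miss = 0.10

for slices in (1, 2, 3, 4):
    p_slip = p_miss ** slices  # the fault gets past every slice
    print(f"{slices} slice(s): {p_slip:.4%} chance the fault slips through")
```

One operator misses one fault in ten; four stacked operators, under this toy assumption, miss one in ten thousand.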
The tech industry doesn’t believe in Swiss cheese. From what I’ve seen, it’s more of a Jenga tower. Each new feature, each quick iteration, is a block pulled from the base and placed on top. The tower grows higher, faster, flashier, but who knows when it will come crashing down? Even if it does, you can just build it up again. Sometimes, the world of hyper-iterative companies holds together brilliantly and builds among the stars… and other times, the SpaceX Falcon 9 explodes in a ball of hellfire on the launchpad.
It has been (half-metaphorically) beaten into me that the reckless abandon of SpaceX, Amazon, Google, and other high-flying tech companies cannot be trusted… and yet, I’d wager most of us have more confidence in our Amazon packages arriving on time than in a watch team making it through a deployment without causing a critique-worthy incident.
The Swiss cheese model works for the big things — in the 70+ years of nuclear-powered ships, American sailors have never experienced a single reactor accident. But that surety has a cost: time. Redundancy takes time to build, inspect, and maintain. Every audit, every drill, every lesson learned slows us down. I’m confident NNPP will never have a reactor accident if Naval Reactors is allowed to operate as it has for the past half-century — but I’m equally confident its technologies will continue to lag far behind civilian innovation.
In stark contrast, the Jenga model proves that new technologies emerge quickly when the priority is speed. In the past 25 years, Silicon Valley has delivered more transformation than most societies saw over centuries: smartphones, broadband internet, GPS navigation, generative AI, electric vehicles — just to name a few. Today, tech makes up more than 10% of the U.S. economy, trailing only healthcare and real estate.
This Jenga philosophy has fueled incredible growth, but it has also exposed deep cracks in reliability. In 2024, a faulty CrowdStrike update crashed 8.5 million Windows systems worldwide. Tesla’s Autopilot has been linked to more than 700 crashes and 17 fatalities since 2015. And Boeing’s 737 MAX — hobbled by flawed software design — suffered two fatal crashes that killed 346 people and grounded the entire fleet for nearly two years.
Neither culture is perfect, but I do think they can learn from each other.
For the entrepreneurs building something at the bleeding edge of human capability, keep in mind:
1. Reliability can’t be backfit.
When you’re building something new, experimentation is necessary — you try a bunch of different approaches until you land on something viable. That’s how you reach a minimum viable product. But too many companies stop there. Once a product is exposed to the real world, consistency matters just as much as novelty. If the stakes are high — lives, national security, or infrastructure — you must spend time thinking backwards: what could go wrong, and what would that failure cause? You have to proactively identify and solve your worst failure modes before they manifest in catastrophic fashion (one lightweight way to structure that exercise is sketched just after this list).
2. Teams, not individuals, are the unit of safety.
No one is perfect, and no single person on your team can guarantee protection. Not the software engineer, not the program manager, and not the CEO. Safety and reliability emerge only when everyone up and down the chain truly understands what they’re building and holds each other accountable. A culture without mutual accountability is one where even your best and brightest are bound to make costly mistakes.
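The backwards thinking from point 1 doesn’t need heavyweight process. A crude version of a classic failure-modes analysis works: write down what could go wrong, score each failure by severity and likelihood, and attack the worst first. Here is a minimal sketch, with every failure mode and score invented purely for illustration:

```python
# A toy "thinking backwards" exercise: list what could go wrong, score
# how bad and how likely each failure is, then fix the worst first.
# Every entry and number here is made up for illustration.
failure_modes = [
    # (what could go wrong,                severity 1-10, likelihood 1-10)
    ("bad deploy takes down production",    7,             6),
    ("silent corruption in backups",        9,             3),
    ("one engineer holds all the context",  6,             5),
]

# Rank by a crude risk score -- severity times likelihood -- worst first.
for what, sev, odds in sorted(failure_modes, key=lambda m: m[1] * m[2], reverse=True):
    print(f"risk {sev * odds:>2}: {what}")
```

The scoring is deliberately crude; the value is in forcing the conversation before the failure forces it for you.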
And for the high-reliability operators keeping our nuclear plants running safely each and every day, I’d advise you to remember:
1. Some bugs aren’t worth fixing right now.
Not every mistake on a nuclear plant deserves a fact-finding or a critique. Not every ambiguity needs a new standing order, checklist, or binder entry in the “lessons learned” archive. Sometimes, the corrective action required to fix a small bug creates more drag on the team than the bug itself. You have to know when to stop crying over spilled sample bottles.
2. Throw the book out when you need to.
Many of the solutions in NNPP are patchwork fixes to problems everyone has long since forgotten. Over time, those fixes become calcified into procedures that no longer serve the mission. If something isn’t working for your team, push to get rid of it through the proper channels. As one example: whether supervisors tour every six hours or every eight probably doesn’t change safety outcomes — but it does change whether your junior officers standing Engineering Duty Officer every three days get enough damn sleep to perform.
I’ve lived inside one of the slowest, most deliberate engineering cultures on Earth, and now I’m pivoting hard into one of the fastest. Both worlds have blind spots, but both have truths worth carrying forward: Silicon Valley’s speed has reshaped daily life in amazing ways, and Naval Reactors’ rigor has kept generations of sailors, ships, and reactors safe for more than seventy years.
As I approach my own transition out of the Navy, I often think of it as finding my own balance — bringing a mindset forged in high-reliability operations into a world that thrives on iteration.
If you’re building something where both speed and reliability matter, I’d love to be part of what you’re making. Feel free to reach me at ceo@thereactoriscritical.com if you’d like to see my resume or talk further.

