Jul 31, 2018
What do the 2007–2008 financial crisis, the Fukushima nuclear accident, Three Mile Island, and Deepwater Horizon all have in common?
The small things. Or rather, lots of tightly coupled small things that are overlooked, ignored or covered up.
Accidents waiting to happen.
In Deep Survival, Lawrence Gonzalez writes that accidents don’t just happen; they are assembled carefully, piece by piece. If even a single piece is missing, the accident simply doesn’t happen.
Risk is unavoidable but accidents aren't.
Our world is filled with countless near-misses and close calls, and the truth is, most of the time we never even know how close we came to this or that accident or disaster.
This is even truer at the organizational/institutional levels, where risk and complexity combine with organizational culture to increase both the likelihood and the impact of catastrophic failure.
My guest on this podcast is Chris Clearfield. Chris brings a novel approach to the study of the challenges posed by risk and complexity. He’s a science geek and reformed derivatives trader, and more recently he’s the founder of System Logic, an independent research and consulting firm dedicated to understanding risk and its interaction with organizational factors. He’s also the co-author, with András Tilcsik, of Meltdown: Why Our Systems Fail, and What We Can Do About It, which is the topic of our show today.
This isn't a conversation just about system failures and why they happen; it's also about what we can do: how we can better prepare for, and even prevent, many such accidents and failures.
“The same kind of culture and decision making that led to the financial crisis also led to BP" - Chris Clearfield
Complex systems generate risk (and fail) in ways that are fundamentally different from the kinds of risks and failures our species evolved to deal with over millions of years. This new risk landscape requires a new approach to risk management and, really, an entirely new organizational culture.
Chris offered real insight as he discussed the emergent properties of many system-wide failures. Many of these disasters were emergent in the same way the 2008 financial crisis was: “of the system and not an anomaly.”
“What would have to be in place for something really bad to happen?"
Checklists and Pre-mortems
After talking with Chris, I find myself thinking much more in terms of checklists, “pre-mortems,” and the like. It’s as if we spend most of our lives driving along a twisty mountain highway at night, totally clueless about just how close we came to the edge of the 500-foot cliff around that last turn. I’m reflecting more and more on what would have to be in place for something really bad to happen, whether driving a car or managing online bank accounts, and then mentally reverse-engineering and mitigating those pieces, one by one.
I hope you find my conversation with Chris as interesting as I did.
We covered many other interesting and useful subjects as well, and hopefully I've “salted” this intro enough to make you thirsty for the whole episode.