A Necessary Preoccupation with Failure: Safety Culture Averts Disaster

Added on Sep 20, 2016

By Robert Wedertz, CDR, USN (Ret.), Manager, Consulting             

High Reliability Organizations (HROs) are characteristically described as those that operate under very trying conditions and yet manage to prevent accidents and serious safety events. Weick and Sutcliffe, in Managing the Unexpected[1], identify naval aviation as one of very few High Reliability Organizations in the world (commercial aviation and nuclear power are others). They go on to write that “the signature of an HRO is not that it is error free, but that errors don’t disable it.”

On March 18 of this year, events unfolded aboard the aircraft carrier Dwight D. Eisenhower (CVN-69), operating off the coast of Virginia, that by many accounts should have disabled carrier aviation, if only for a moment and only on that ship, certainly on that day.

At 1:52 pm, an E-2C Hawkeye with Carrier Airborne Early Warning Squadron (VAW) 123 touched down on Ike (the colloquial term for the carrier) and caught the number 4 wire with its arresting hook.  At the very end of the landing, as the aircraft’s inertia and forward speed pulled out the arresting wire, the cable broke in two and an aircraft that was supposed to have been brought to a stop was still moving—too fast to stop using the brakes, and too slow to immediately take off again. 

As the aircraft trickled toward the end of the landing area, the aircrew of the E-2C realized that something had gone terribly wrong. They had not stopped the way they usually did, although the deceleration over the first part of the landing had seemed normal. The E-2 does not have ejection seats, so ejecting was not an option. Without conscious thought or committee-like discussion, their training took over: full power, raise the gear, open the ditching hatches, and lower the flaps. Miraculously, the aircraft flew perilously close to the water but eventually climbed away and diverted to Naval Air Station Norfolk.

The potential for tragic consequences was extremely high, yet the outcome was limited to serious injuries, with no loss of life. In hindsight, it is clear that naval aviation’s preoccupation with failure and its safety culture trumped what could have been a crippling human error.

What you will NOT see in the video of the event are the aftereffects of a steel cable, broken in the middle, recoiling unpredictably across a flight deck filled with dozens of people. There were several broken bones and lacerations, and a possible traumatic brain injury, as a result of the backlash of the wire. We do not keep score when it comes to events like this, but without question this incident could have been much worse.

Now that we know what happened, here is why it happened.

The Navy has taken significant strides to design safeguards into the mechanisms that control and monitor the arresting gear engines (located below the flight deck) in order to minimize the likelihood that human error will creep into a complex mechanical system. Despite its best efforts to simplify the requirements for operating the arresting gear (and, in this case, for reacting to faults discovered during its operation), the policies and protocols for correcting a fault were overly complex and not user friendly. That complexity led the sailors monitoring the arresting gear engines to misunderstand the mechanical condition of the system. So what are the lessons from the events of that day?

  • One of the defining characteristics of HROs, according to Weick and Sutcliffe, is a preoccupation with failure. Nowhere is this better represented than in the reaction of the aircrew when something that historically happens once every 15 years occurred, and they were supremely qualified to react. They had no doubt simulated this type of event and more than likely discussed it before their flight that day. (Procedures to execute for events like this are part of a naval aviator’s required knowledge, and aviators are repeatedly tested for recall.)
  • In the aftermath of this incident, an investigation was convened and the results were shared Navy-wide. We know about the human error element because the sailors involved openly admitted their mistakes, which facilitated corrective action. And because of the Navy’s fair and just safety-reporting system, investigators were able to look beyond the human error and identify a system failure that, if left uncorrected, could lead any competent, well-meaning technician to repeat the mistake.
  • Despite the Navy’s best efforts to design a system that could mitigate human error, the complexity of the policies and protocols invited misunderstanding and weakened the system. The sailors were forced into knowledge-based problem solving, which likely tripled their probability of error.

How can we apply these lessons in health care?  By committing to asking these questions.

  1. How well are we prepared to react to unplanned events in our service line? Are we discussing potential problem areas in our daily safety huddles, or do we find it easier to state, “nothing to report”? Leaders must take the initiative to help our staff think differently and to consider and discuss the challenges that may set us up for failure. When we learn to think ahead of the moment rather than in the moment, our performance will reflect that of a High Reliability Organization.
  2. How is reporting viewed in our system? If we do not have a fair and just culture, we will likely never have the opportunity to correct issues before they become safety events.
  3. Are our policies and protocols focused and simplified so that our people can use them to good effect? Whenever possible, we need to construct our policies and procedures with front-line staff in mind so that they are a tool, not a burden.


[1] Weick K, Sutcliffe K. Managing the Unexpected: Resilient Performance in an Age of Uncertainty. San Francisco, CA: Jossey-Bass; 2007.