
In early 2024, the mid-air blowout of a door panel on a Boeing 737 MAX shocked the world – less than five years after that model was grounded worldwide following two crashes that claimed 346 lives. The urgent question at the time was: how did that happen? But the more important question is: could these disasters have been prevented?
From the apex of the aerospace industry, the American aircraft manufacturer landed itself a troubling safety record and felony charges with a nearly US$1 billion price tag. But Boeing is certainly not the first (and probably not the last) company to experience such a rude awakening.
Cast your mind back to the 2008 financial crisis, when banks including Wells Fargo, Citigroup, Goldman Sachs and Bank of America were bailed out to prevent their failure from cascading into a global financial meltdown. More troubling still, some bank executives whose unprecedented risk-taking had made those bailouts necessary were quick to dispense lavish bonuses after receiving government money.
Are these corporate giants too big to fail – such that neither regulators nor markets have corrective power over them?
Anatomy of a man-made disaster
In our working paper “Icarus: On Operational Safety and Organisational Downfall”, we examined why and how performance – and in Boeing’s case, safety – gets eroded, and what can be done to avert such disasters.
Our findings underscore a general tendency for organisations to drift towards failure. Disasters are, in fact, avoidable. Both the genesis of a disaster and its prevention lie in day-to-day operations and accountability. Where accountability is concerned, governance and the operator-regulator relationship are key. When regulators do not pay sufficient attention to an organisation’s operations, it may be tempted to take risks – effectively with the regulator’s complicity.
Boeing, as one of only two major players in the commercial aircraft manufacturing space and an essential supplier to the United States defence services, will not be allowed to fail. Even after it admitted to a felony, the US government made it clear that it would continue to do business with the aircraft manufacturer. In the same vein, the state did not allow the aforementioned banks to fail during the 2008 crisis, for “the greater good”.
Being too big to fail sparks a moral hazard. Executives tend to tolerate or gloss over transgressions, secure in the knowledge that the organisation will be bailed out or will suffer minimal consequences even if it underperforms. Clearly, being too big to fail can have a direct impact on quality and safety.
Earlier studies show that being too big to fail does not automatically lead to adverse consequences. Our study, however, shows that it can intensify the harmful effects of the confidence trap.
Confidence trap meets “too big to fail”
When previous instances of risk-taking did not lead to failure, one may erroneously believe that continuing to take risks in the future will pay off – a phenomenon known as the “confidence trap”.
Our research indicates that top organisations often fall into the confidence trap, which could manifest as banks engaging in riskier lending or aircraft manufacturers cutting back on quality inspections. When issuing a riskier loan or skipping an inspection does not lead to a disaster – and might instead reward the executive or the organisation – the propensity to take risks increases.
The situation worsens in too-big-to-fail organisations, where executives enjoy the upside of risk but are shielded from the downside, secure in the knowledge that the organisation will not be allowed to fail. Former Boeing CEO David Calhoun continued taking risks even though his predecessor Dennis Muilenburg had been fired over the shoddy practices that led to two fatal crashes. Despite Boeing’s poor safety record, the regulator, the US Federal Aviation Administration (FAA), hardly “tightened the screws” on Boeing. The cap it imposed on production, among other steps, was no more than a slap on the wrist. Worse, these executives were handsomely rewarded.
If a company is bailed out rather than penalised by the regulator, and executives are not held accountable for risky decisions, unwarranted risk-taking will undoubtedly increase. Being too big to fail introduces a moral hazard that “supercharges” the confidence trap. In our study, we capture this relationship using system dynamics modelling, showing how the interaction of the confidence trap and being too big to fail results in disasters of greater frequency and magnitude.
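To make these feedback loops concrete, consider the deliberately minimal sketch below. It is illustrative only – the variable names, parameter values and functional forms are assumptions chosen for exposition, not the calibrated system dynamics model in our paper. Risk appetite grows whenever risk-taking pays off (the confidence trap); a disaster normally corrects it sharply downwards; and too-big-to-fail protection mutes that correction. The two policy levers discussed later – clawbacks and independent oversight – appear as parameters.

```python
import random

def simulate(tbtf: bool, clawback: bool = False, oversight: bool = False,
             steps: int = 500, seed: int = 1) -> int:
    """Count disasters over `steps` periods under illustrative dynamics."""
    rng = random.Random(seed)
    risk = 0.10        # stock: the organisation's propensity to take risk
    disasters = 0
    for _ in range(steps):
        if rng.random() < 0.05 * risk:  # riskier behaviour, likelier disaster
            disasters += 1
            # Corrective loop: a disaster cuts risk appetite, but
            # too-big-to-fail protection mutes the correction.
            risk *= 0.8 if tbtf else 0.4
        else:
            # Confidence trap: risk-taking that pays off reinforces itself.
            gain = 0.02
            if clawback:   # skin in the game tempers the chase for upside
                gain *= 0.5
            risk += gain * (1 - risk)
        if oversight:
            # Independent audits keep pulling risk back towards a norm (0.10).
            risk -= 0.01 * max(0.0, risk - 0.10)
    return disasters
```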
Together, Boeing and the FAA undoubtedly fell prey to the confidence trap, with the regulator allowing excessive self-regulation and exercising little oversight. Take the Manoeuvring Characteristics Augmentation System aboard the Boeing 737 MAX, which was at the centre of the 2018 and 2019 crashes. Not only was the system designed and certified by Boeing itself, but “incremental modifications” were made over time without regulatory oversight. In addition, the company drifted away from safety standards. Past a tipping point, a major disaster was bound to occur – and it did, taking 346 lives.
Preventing man-made disasters
So, what can be done? To attenuate the consequences of these dynamics, we suggest two policy levers. First, financial clawbacks could strip executives of the “too big to fail” protection. These contractual provisions ensure that executives have skin in the game, since they would have to pay back part of their remuneration in the case of non-performance – including operational failure and man-made disasters. Making them personally liable for disasters resulting from their imprudent decisions or actions could align the objectives of the operator and regulator, making disaster avoidance a shared priority.
Second, we propose separating (government) authority from expertise. In practice, it is almost impossible – or at least impractical – for regulators to keep up with the technical details of the organisations they oversee, as they may not have all the required expertise. While regulatory requirements are set by the relevant authority, regulatory oversight in the form of monitoring and checks can be outsourced to an independent specialist auditor to strengthen operational scrutiny.
In the aviation industry, the head of the FAA has acknowledged that the agency is trying to move to an “audit plus inspection” regime. More generally, the third-party auditor should be appointed by regulators but paid by the company being audited – simply a cost of doing business. In fact, companies in industries such as pharmaceuticals often hire external auditors to address problems flagged by regulators.
Our model shows that imposing clawbacks and instituting third-party oversight can boost performance, or in this case, reduce the occurrence of preventable disasters. Excessive risk-taking can be curtailed when an independent third party provides effective oversight and executives are incentivised to act more prudently instead of focusing only on short-term profit.
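Applying the two levers to the illustrative sketch above reproduces this qualitative pattern. The loop below (again, parameters are assumptions for exposition, not our calibrated model) compares the four regimes:

```python
# Compare the four regimes; exact counts depend on the assumed parameters
# and seed, but qualitatively: too-big-to-fail raises the disaster count,
# while the clawback and oversight levers lower it.
for tbtf in (False, True):
    for levers in (False, True):
        n = simulate(tbtf=tbtf, clawback=levers, oversight=levers)
        print(f"too big to fail={tbtf}, levers={levers}: {n} disasters")
```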
In addition, the operator-regulator relationship and governance structure are critical factors. Systemic consideration of these factors is needed to reduce the likelihood of man-made and preventable disasters. Importantly, the mechanisms underpinning disasters are not unique; they can be generalised to high-stakes safety contexts, be it in aviation, nuclear energy or pharmaceuticals.