Risk management, at least at first glance, would appear to be the discipline of identifying potential threats, estimating the likelihood that they will occur, and assessing the negative impact such events could have on the business.
What is often missing, however, is a second-level analysis of how the steps taken to manage risks also impact the business. Werner Heisenberg’s wisdom extends, at least loosely, to risk management: by mitigating one risk, you create another, which must also be evaluated and prioritized.
In a “think piece” on the dangers of worst-case scenario planning published this month at Risk Management magazine, London School of Economics professor Dylan Evans makes this evocative claim:
The long lines at airports caused by the introduction of new airport security procedures, for example, have led more people to drive rather than fly, and that in turn has led to thousands more road fatalities than would otherwise have occurred, because driving is so much more dangerous than flying.
The piece doesn’t offer any specific data to back up Evans’ assertion, but we’ll take it on good faith that it’s accurate. A worst-case thinker might argue persuasively that although there’s no way of knowing for sure, increased airline security may well have prevented an even greater number of deaths by deterring terrorism, at least on the airlines.
It’s the ultimate conundrum of risk management — it’s just really hard to prove why something did not happen.
But Evans makes an enlightening point — the impact of heightened airport security on other transportation systems was not fully evaluated, and certainly not presented to the public, as the new standards were debated and implemented. Secondary and tertiary impacts are real, and must be included in any risk plan.
The economics professor offers other examples of how, at least in his view, worst-case scenario thinking has skewed public risk management and planning: the accident at the Three Mile Island nuclear plant, which led to a 30-year moratorium on new plants even as fossil fuel plants did indisputable damage to the atmosphere; the “1 percent chance” stance on nuclear proliferation that dictated much of recent U.S. foreign policy; even the “paranoid parenting” mindset that results in freak-outs over 102-degree fevers. All are cited as examples of how fear has outweighed fact in dictating how society plans for and responds to risk.
It’s a fairly political piece, and it echoes a common criticism of the risk-management industry in general: that it engages in a bit of fear-mongering as it peddles its wares. (An item at the Huffington Post recently made a strained effort to tie the tragic Trayvon Martin shooting to an RM firm’s citation of hoodie-wearing as a marker of potentially at-risk personalities on college campuses. Fear-mongering takes all forms, it seems.)
Still, all this coverage does raise legitimate questions about how you prioritize and engineer against unlikely but dire threats facing your organization. At the very least, you must consider the secondary impact of your solution. Will tightly locking down physical access to the office on weekends materially impact employee efficiency or, even more infuriatingly, morale? Does stretching your Ops team for constant offsite backup damage your daily uptime standing?
Fortunately, business performance and compute cycles are easier to quantify than transportation fatalities, and your business is probably not quite as complex as national energy policy. At the very least, secondary impact must be considered as a performance metric in your risk management planning.
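One minimal way to make that metric concrete is to net a mitigation’s risk reduction against the expected cost of the secondary risks it introduces. The sketch below is purely illustrative: the function names, probabilities, and dollar figures are assumptions invented for the example (the weekend-lockdown scenario from above), not a standard risk-quantification methodology.

```python
# Hypothetical sketch: score a mitigation by netting its reduction in
# primary expected loss against the secondary risks it creates.
# All numbers here are invented for illustration.

def expected_loss(likelihood, impact):
    """Annualized expected loss: probability of the event times its cost."""
    return likelihood * impact

def net_benefit(primary, mitigated_primary, secondary_risks):
    """Reduction in primary expected loss, minus the expected losses
    from the secondary risks the mitigation introduces."""
    reduction = expected_loss(*primary) - expected_loss(*mitigated_primary)
    side_effects = sum(expected_loss(p, c) for p, c in secondary_risks)
    return reduction - side_effects

# Example: locking down weekend office access cuts break-in risk,
# but introduces productivity and morale costs (secondary risks).
primary = (0.10, 50_000)       # 10% chance of a $50k incident
mitigated = (0.02, 50_000)     # lockdown drops the likelihood to 2%
secondary = [
    (0.50, 4_000),             # lost weekend productivity
    (0.20, 10_000),            # morale-driven attrition cost
]

print(round(net_benefit(primary, mitigated, secondary), 2))  # → 0.0
```

With these (made-up) figures the mitigation nets out to roughly zero: the secondary impacts erase the headline risk reduction, which is exactly the second-level analysis the piece argues is usually skipped.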