Why the Risk Matrix Is Broken – and What to Use Instead
The risk matrix is an ill-suited tool for classifying information security risks. Bayesian statistics come to the rescue.
Risk assessment is at the heart of information security. The standard approach is to use a risk matrix to classify information security risks based on their probability and impact, then give each one a “risk score” by adding the two numbers together. You then rank the risks by score and address the top ones first.
The risk matrix is like an old, familiar friend. It’s been around for years and everyone uses it. It’s easy to understand and simple to use. And in terms of “getting to grips with risk,” it seems to work.
There’s only one problem: it doesn’t.
I should know, because at E-accent we used the risk matrix ourselves. It was baked into the information security management system (ISMS) we used every day for a long time. But I came to realize that it was actually completely unfit for purpose. In fact, it was doing more harm than good.
Garbage in, garbage out
Things go wrong from the very beginning, with the probability estimates you put into the risk matrix.
On the whole, human beings are not very good with non-linear risks. Our instincts evolved to help us deal with immediate physical dangers in our environment. So we can tell whether an oncoming car is likely to hit us, for example. But the more abstract the threat, and the more factors are involved, the less helpful our gut instinct is.
It’s extremely difficult to say how likely it is that an information security incident will actually occur. So most people rely on their gut instinct on the grounds that it’s better than nothing.
But if you ask someone to gauge the likelihood of an information security risk – even someone with very deep knowledge – they will be hard pressed to give you an answer. For instance, what’s the likelihood of an ex-employee with a grudge hacking your server? Is it low, medium or high? Why do you say that? How do you know?
It’s a similar story with impact. In theory, it’s easier to get a reasonably good idea of financial impact by thinking about management time, developer hours, lost sales and reputation damage. But people rarely bother, because the risk matrix is only asking for a simple assessment anyway.
Illusion of control
The problem with the risk matrix is that it feels scientific. It has what Stephen Colbert calls “truthiness” – a plausible air of being true. It promises a quick, simple solution to a wicked problem without taking up loads of time, or asking you to do too many hard sums.
Before, you had no idea about risks. But now, you’ve put them in neat little boxes and given them solid-sounding scores. You’ve “got to grips with risk”, or so it seems. But all you’ve really done is deceive yourself with a potentially dangerous illusion of control.
To make matters worse, the risk matrix is the foundation of almost all risk assessment in information security, and is recommended by security bodies such as NIST and OWASP. Your shonky guesstimates will ultimately determine how you prioritize risks, use resources and plan for disaster. But what’s the point of fortifying your castle if it’s built on sand?
When you don’t understand your taxes, you call your accountant. So why not call a risk management consultant? Because most of them are using this exact same method. You might feel reassured by involving an “expert”, but all you’ve done is move the wrongness from your desk to theirs – and pay for the privilege.
If you’re responsible for information security, you’re probably feeling pretty scared by now. I certainly was.
So I searched the web for hard evidence that risk matrices actually help manage risk. Was it possible that the whole thing was just snake oil?
What I found out was sobering. Not only is there no proof that risk matrices work, there’s actually proof of the opposite. Using the matrix actively hampers firms’ efforts to deal with risk, absorbing time, money and effort for no benefit at all.
The Bayesian way
What I did find, through Douglas Hubbard’s book How to Measure Anything in Cybersecurity Risk, was a better way to gauge probability. It’s called Bayesian statistics, and it’s been around for centuries.
Insurers, whose livelihoods depend on knowing probabilities as accurately as possible, use it all the time. For some reason, even though we’re in the business of “insuring” our data, it hasn’t penetrated the information security world. But it needs to.
The Bayesian approach uses hypotheses based on little or no data, which gradually become more accurate as you learn more. Don’t know how likely an event is? No problem. Start with related stuff that you do know something about, and work with it to sharpen up your probability estimate.
The mainstream, “big data” approach to probability is frequentist: based on very large quantities of data about events in the past. But that’s not much use in information security, where you’re concerned with future threats that have not materialized before, but could have very big impacts when they do. If you wait until then to assess their risks, it will be too late.
Hubbard also looks at the unavoidable biases that come in when you use gut feeling. Instead, he suggests using percentages for probability (0% = impossible, 100% = certain) and dollars for impact. This forces you to back up probability and impact with facts.
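To see what that buys you, here’s a minimal sketch, my own illustration rather than anything lifted from Hubbard’s book. Once probability is a percentage and impact is a dollar figure, every risk has an expected annual loss you can rank and argue about. All the names and numbers below are invented.

```python
# Hypothetical risks with made-up probabilities and dollar impacts,
# purely to illustrate ranking by expected annual loss.
risks = {
    "ex-employee hacks server": (0.05, 250_000),
    "laptop with customer data lost": (0.20, 40_000),
    "ransomware via phishing email": (0.10, 120_000),
}

# Expected annual loss = probability of the event in a year * cost if it happens.
for name, (probability, impact_usd) in sorted(
    risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True
):
    print(f"{name}: expected annual loss ~ ${probability * impact_usd:,.0f}")
```

Unlike a score in a matrix cell, each of these figures invites a challenge: where did that 5% come from, and why $250,000? That is exactly the conversation you want.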
From hunches to accuracy
Let’s see how it works in practice. For example, what is the probability that a server under your control will be hacked in the next 12 months? If you have no idea whatsoever, you begin by stating it as 50%.
Now, to sharpen up your figure, turn the question round and ask how you might be hacked. Like a detective, you need to consider means, motive and opportunity for different suspects. Search online for threat intelligence, studies and data. A foreign state? Unlikely for most businesses. An ex-employee with a grudge? More likely.
You still start with hunches, or whatever historical data you can get, and move forward from there. Heard about a new vulnerability that lets attackers compromise your server in seconds? Your risk of data theft just skyrocketed.
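To make that update concrete, here’s a minimal sketch of Bayes’ rule at work. The 50% starting point comes from the example above; the two likelihood figures are invented purely for illustration.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) given a prior and two likelihoods."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Hypothetical starting point: no idea, so 50% that the server is hacked
# within 12 months.
prior = 0.50

# Invented likelihoods: a critical vulnerability in your stack is more likely
# to turn up in the scenarios where you end up hacked than in the ones
# where you don't.
posterior = bayes_update(prior, p_evidence_if_true=0.6, p_evidence_if_false=0.2)
print(f"Updated probability of being hacked: {posterior:.0%}")  # 75%
```

The exact numbers matter less than the discipline: every time you learn something relevant, you have a mechanical way to fold it into your estimate, instead of simply feeling more or less worried.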
You can do the same for impact. Instead of trying to gauge the total cost of an event, break it down into, say, legal fees, fines, developer time, reputational damage, lost sales and so on. All these things can be researched and refined over time, which makes the final figure far more objective and accurate.
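Here’s the same idea sketched for impact, again with invented ranges. Give each cost component a low and a high estimate, simulate the total, and you get a distribution rather than a single guess.

```python
import random

# Invented low/high estimates (in dollars) for each cost component of a
# hypothetical server breach. Replace these with researched figures.
components = {
    "legal fees": (5_000, 40_000),
    "regulatory fines": (0, 100_000),
    "developer time": (10_000, 60_000),
    "lost sales": (5_000, 150_000),
    "reputation repair": (10_000, 80_000),
}

def simulate_total():
    # Crude model: draw each component uniformly between its low and high bound.
    return sum(random.uniform(low, high) for low, high in components.values())

totals = sorted(simulate_total() for _ in range(10_000))
median = totals[len(totals) // 2]
p90 = totals[int(len(totals) * 0.9)]
print(f"Median impact ~ ${median:,.0f}; 90th percentile ~ ${p90:,.0f}")
```

As you research each component, the ranges narrow and the total becomes more trustworthy.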
Now, the Bayesian approach is more work than making a risk matrix. But the question is, do you want an easy answer, or an accurate one?
In a world where cybersecurity risks are proliferating and getting worse, we can’t afford to get the numbers wrong. It is time for the risk matrix to go.
(This is a revision of an article that I initially published on e-accent.com in April 2017.)
Further reading
- Hubbard, Douglas W. and Seiersen, Richard (2016). How to Measure Anything in Cybersecurity Risk. Wiley. ISBN 9781119085294.