Risk Management for Cybersecurity

From the Founder’s Desk, Roderick Flores

Previously, I discussed how it is feasible to create a regular and timely threat forecast. However, that forecast is only the first step in developing a proper risk management strategy. You must also place threat forecasts into context: for a bad actor to effect a breach, they need more than motive – they also need an opportunity and the means to take advantage of it.

If we return to the natural disaster analogy, a building engineered to withstand a magnitude 6 earthquake is unlikely to be damaged in any significant way by a magnitude 5 event. Barring a construction flaw, the building was designed to have no vulnerability to the hazard (i.e., the threat). If, however, new intelligence indicates that a region could experience higher earthquake magnitudes than previously imagined, as was the case after the 2011 Virginia earthquake, then broad swaths of a nation might suddenly find that they have had significant vulnerabilities for some time. In reality, the vulnerabilities, and the risk associated with them, were always there.

This is where cybersecurity can be very different from natural disasters. In cybersecurity, the discovery of a new vulnerability includes the possibility that an organization has already been breached. In the physical arena, damage is typically readily apparent whether or not a vulnerability is ever discovered. Worse yet, cyber attacks do not rely on physical processes to initiate. Threats can attack at will, either as transient probes looking for an opportunity or as persistent actors lying in wait until a vulnerability arises.

Essentially, we have to undertake two new tasks. First, we have to create different recurrence intervals for persistent versus transient threats – straightforward enough given the abundance of transient attack data available. Second, we have to forecast recurrence intervals for vulnerabilities and weaknesses. NIST’s National Vulnerability Database, along with the Common Platform Enumeration, is a good resource for vulnerability recurrence intervals by technology (a sketch of the idea follows below). That said, weaknesses are significantly more complicated to model; I will have to save that discussion for another time.
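As a concrete, if minimal, sketch: here is how a vulnerability recurrence interval might be estimated for a single technology in Python. The publication dates below are hypothetical stand-ins for the CVE entries you would pull from the NVD for one CPE; a real analysis would fit a distribution rather than take a simple mean.

```python
from datetime import date

# Hypothetical CVE publication dates for one technology (CPE); in practice
# these would be pulled from NIST's National Vulnerability Database.
published = [
    date(2017, 1, 12), date(2017, 2, 3), date(2017, 3, 20),
    date(2017, 5, 1), date(2017, 5, 30), date(2017, 8, 14),
]

# Inter-arrival gaps, in days, between consecutive disclosures.
gaps = [(later - earlier).days for earlier, later in zip(published, published[1:])]

# The mean gap is a first-order recurrence interval, analogous to the
# recurrence intervals used for natural hazards.
print(f"Mean vulnerability recurrence interval: {sum(gaps) / len(gaps):.1f} days")
```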

Once we are able to forecast both threat activity and vulnerabilities, we can simulate when attacks will be successful. Let’s consider a contrived example of how this might work using a pair of dice. Imagine that rolling both dice gives the number of days before the next attack using a specific methodology, such as phishing, occurs against an operation; with a mean roll of 7, that is roughly 52 attacks per year. Next, we roll one die to set a threshold dictating whether an opportunity exists: if a roll of the second die exceeds that threshold, a breach occurs. A simple calculation (15 of the 36 equally likely pairs) tells us this happens nearly 42% of the time, so there will be approximately 22 breaches in a given year. If we then assume that an operation significantly improves its patching so that the bad actor now has to beat the threshold roll by at least 2 to effect a breach, the attacker succeeds only 28% of the time (10 of 36 pairs), and the operation can expect roughly 15 breaches a year. If we then multiply the number of expected breaches by typical value-at-risk figures for a given data type, as some companies in this space do, we can calculate a basic version of expected risk exposure.
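If you would like to check my arithmetic, here is a short Monte Carlo sketch of the dice model in Python. The function name, trial count, and structure are mine for illustration only, not anyone’s production model.

```python
import random

def breaches_per_year(margin=1, trials=10_000):
    """Average breaches per simulated year of the dice model.

    margin is how much the attacker's roll must exceed the threshold roll:
    1 reproduces the baseline model, 2 the improved-patching policy.
    """
    total = 0
    for _ in range(trials):
        day = 0
        while True:
            # Rolling both dice gives the days until the next attack (mean 7).
            day += random.randint(1, 6) + random.randint(1, 6)
            if day > 365:
                break
            threshold = random.randint(1, 6)  # does an opportunity exist?
            attack = random.randint(1, 6)     # the attacker's roll
            if attack - threshold >= margin:
                total += 1
    return total / trials

print(f"Baseline:          {breaches_per_year(margin=1):.1f} breaches/year")  # ~22
print(f"Improved patching: {breaches_per_year(margin=2):.1f} breaches/year")  # ~15
```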

Keep in mind that there is a great deal of variability in this simple model as well. We could get lucky and roll lots of high numbers with the attack dice so that attacks were less frequent. Similarly, high rolls on the threshold die would raise the bar an attacker must clear, making breaches less frequent. We could also get lucky and roll low numbers when testing whether the vulnerability is exposed. In other words, we could have good years and bad years. In fact, we will want to focus on the bad years in our analysis, because it is the fat tail of the loss distribution that produces the huge losses we most need to guard against.
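Recording each simulated year separately, rather than just the average, makes those good and bad years visible; the 95th percentile is one simple way to put a number on a bad year. Again, this is a sketch of the dice model, not a production analysis.

```python
import random
from statistics import mean, quantiles

def one_year(margin=1):
    """Breach count for a single simulated year of the dice model."""
    day, breaches = 0, 0
    while True:
        day += random.randint(1, 6) + random.randint(1, 6)  # days to next attack
        if day > 365:
            return breaches
        attack, threshold = random.randint(1, 6), random.randint(1, 6)
        if attack - threshold >= margin:
            breaches += 1

years = [one_year() for _ in range(10_000)]
p95 = quantiles(years, n=20)[-1]  # cut point for the worst 5% of years
print(f"Mean year: {mean(years):.1f} breaches; 95th-percentile year: {p95:.0f}")
```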

I am routinely told that this sort of calculation is too difficult to do systematically: “Cybersecurity is much more complicated than other fields!!!” And yet, I just demonstrated that it can be done. You can see how straightforward it is to create a model that tells us when threats and vulnerabilities align in such a way that a breach becomes possible. We also demonstrated how a policy choice can be modeled so that we can analyze its effect on risk.

To be fair, we can all agree that a pair of dice is not representative of threat attack recurrence intervals, the presence of a vulnerability at a given time, or patching practices. There are also a variety of variables influencing susceptibility to a threat and the impact of a breach that I did not bore you with. Rest assured that our software has a much more sophisticated model of means, opportunity, and mitigating factors. It also uses realistic probability distributions built from observed data and expert opinion rather than the 1-in-6 chance of rolling a particular number on a die. Simply put, we provide the only assessment that rigorously explores the risk equation: risk = threats × vulnerabilities × impact. Better still, we are the only company that truly aligns realistic forecasts of threat and vulnerability activity with one another to determine the probability of breaches going forward. This type of assessment is the basis for proper risk management.
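To close the loop on that equation with the toy numbers from earlier: the dice model’s expected breach count stands in for threats × vulnerabilities, and a per-breach value-at-risk stands in for impact. The dollar figure below is entirely hypothetical.

```python
# Hypothetical illustration of risk = threats x vulnerabilities x impact,
# using the baseline dice model's expected breach count.
expected_breaches = 21.7       # threats x vulnerabilities, per year (dice model)
impact_per_breach = 250_000    # hypothetical value-at-risk, dollars per breach
print(f"Expected annual risk exposure: ${expected_breaches * impact_per_breach:,.0f}")
```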
