The problem that must be solved

Despite leaps and bounds in cybersecurity technologies and capabilities, the number and severity of breaches have continued apace as if nothing has changed. As might be expected, adequate insurance is costly and difficult to obtain in this adversarial environment[i]. Around two-thirds of board members lack confidence in their companies’ ability to thwart attacks[ii]. Nearly 9 out of 10 security professionals do not believe their security programs meet their needs[iii]. And 3 out of 4 consumers are likely to stop purchasing from a company if a data breach is found to be linked to the board failing to prioritize cybersecurity[iv]. Clearly, boards are under significant pressure to get their organizations to control their risk exposures or face losing business.

 

Traditionally, other business lines have turned to proven risk management techniques to control their exposures, be they caused by crime, war, terrorism, or so-called acts of God. That has not been the case in the cybersecurity space; it is time it was.

 

The obvious question is: why don’t practitioners use proven risk management approaches in cybersecurity?

 

Simply put, it is very difficult to do well, resource-intensive, and expensive. In other fields, such as transport logistics, a risk manager could simply create a test group versus a control group to understand how different hazards affect operations. For instance, a trucking company could lengthen the maintenance schedules for a small percentage of its fleet. After a sufficient test period, the company can measure how downtime recurrence intervals change and therefore understand the impact on its business.
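To make the test-versus-control idea concrete, here is a minimal sketch of the inference described above. The fleet sizes, observation period, and breakdown counts are invented purely for illustration, not data from any real carrier.

```python
# Hypothetical sketch of the test-versus-control comparison described above.
# All counts and durations are made-up illustrative values.

control_trucks = 450          # trucks kept on the normal maintenance schedule
test_trucks = 50              # trucks moved to the lengthened schedule
control_downtime_events = 90  # breakdowns observed in the control group
test_downtime_events = 14     # breakdowns observed in the test group
period_days = 180             # length of the test period

# Downtime rate per truck for each group
control_rate = control_downtime_events / control_trucks
test_rate = test_downtime_events / test_trucks

# Recurrence interval: how long, on average, before a given truck goes down again
control_interval = period_days / control_rate
test_interval = period_days / test_rate

print(f"Control: {control_rate:.2f} events/truck, ~{control_interval:.0f} days between events")
print(f"Test:    {test_rate:.2f} events/truck, ~{test_interval:.0f} days between events")
print(f"Estimated change in downtime rate: {(test_rate - control_rate) / control_rate:+.0%}")
```

With numbers like these in hand, the risk manager can translate a change in maintenance policy directly into a change in expected downtime, and therefore into a business impact.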

 

However, when dealing with forces of nature (e.g., tornadoes) or human behavior (e.g., crime), it would be nearly impossible for a single organization to analyze the large volumes of complicated and rapidly changing data necessary to understand the recurrence intervals of these hazards. Instead, organizations typically rely upon insurers to create actuarial tables to help them understand their risks. Others, including the insurers, rely on huge government outlays to collect and process much of the data needed for these tables through entities such as the National Weather Service (NWS), the National Oceanic and Atmospheric Administration (NOAA), the Federal Bureau of Investigation, and the National Science Foundation.

 

In the cybersecurity field, insurers are in the same boat as those they insure. No government organization subsidizes the collection and processing of data in any significant way. Instead, an entire industry centered around the collection, aggregation, and analysis of threat intelligence has evolved. These products are quite expensive and very difficult to consume. Imagine what it would be like if the NWS did not exist and NOAA sold raw satellite and other weather data feeds to farmers, expecting them all to use that intelligence to figure out how the weather would affect their crops. This is not to say that it cannot be done: certain farming companies would do so to gain a competitive advantage over those who could not afford it. In fact, this can be seen in the financial industry with respect to cybersecurity. The top-tier banks (fewer than 10) spend hundreds of millions of dollars every year implementing robust cybersecurity programs, with some planning to go as high as half a billion dollars. For the vast majority of banks, this level of spending is well outside their capabilities. Today, they must do the best they can with the resources they can afford.

 

The next question is: how do the big banks do it, and is there a way to bring that capability to small and medium enterprises (SMEs) at a reasonable cost?

The answer to that is yes and no. The top-tier banks keep stables of cybersecurity experts at the ready to handle incident (breach) verification and management (clean-up) work. They also keep teams whose job is to put all of the raw threat data they purchase and collect into a context relevant to the organization’s business operations, and people whose job is to identify the areas where a bad actor is most likely to attempt a breach. They then focus on those problem areas, beefing up detection and defensive capabilities to match the current threats. Finally, they train their team members to detect and avoid hazards. All of this requires experts who, due to their limited supply, are very expensive and clearly do not scale.

 

Yet we can provide a better approach that will help close the gap between the armies of cybersecurity experts the big banks employ and the far more limited resources that SMEs can muster. Three things must be done.

  • Cybersecurity assessments must consider risk.

Most cybersecurity assessments do not take risk into account at all. Instead, they focus on hazard (i.e., threat) characterization or vulnerability reduction and do not consider how those facets interact to produce impact. While this approach can ultimately result in risk reduction, there is no way to measure how effective it is, let alone whether a return on investment can be realized. For instance, a recommendation for an e-commerce website might be to implement a next-generation firewall. These systems are expensive, and there is no guarantee that a commensurate reduction in risk will be achieved. Perhaps it would have been more cost-efficient to spend the same amount on broader insurance coverage.
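As a purely illustrative sketch, the snippet below compares those two ways of spending the same budget. The breach probability, loss magnitude, and the fractions by which the firewall and the insurance are assumed to help are all invented assumptions, not measured figures.

```python
# Hypothetical comparison of two ways to spend the same budget on an
# e-commerce site: a next-generation firewall versus broader insurance.
# Every figure here is an assumption made up for illustration.

breach_probability = 0.15         # assumed annual probability of a damaging breach
expected_breach_loss = 2_000_000  # assumed loss if a breach occurs (USD)
annual_budget = 150_000           # amount available to spend (USD)

baseline_ale = breach_probability * expected_breach_loss  # annual loss expectancy

# Option A: firewall assumed to cut breach probability by a third
firewall_ale = (breach_probability * (1 - 1 / 3)) * expected_breach_loss
firewall_benefit = baseline_ale - firewall_ale

# Option B: insurance assumed to transfer 60% of the loss if a breach occurs
insured_ale = breach_probability * expected_breach_loss * (1 - 0.60)
insurance_benefit = baseline_ale - insured_ale

for name, benefit in [("Firewall", firewall_benefit), ("Insurance", insurance_benefit)]:
    roi = (benefit - annual_budget) / annual_budget
    print(f"{name}: reduces expected annual loss by ${benefit:,.0f}, ROI {roi:+.0%}")
```

Which option wins depends entirely on the assumed inputs; the point is that unless the assessment quantifies risk, the comparison cannot even be attempted.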

 

  • Risk assessments must be quantitative.

In the instances when risk assessments are performed, they tend to be qualitative. For instance, risk is often assessed based upon how well an operation complies with a standard. The Titanic was the most safety-compliant ship of its day when it set sail on its maiden voyage; compliance is not a substitute for a proper risk assessment. Practitioners may also measure risk using qualitative techniques that rely upon expert ratings, which might range from extreme to low for impact, or from highly likely to unlikely for likelihood. For instance, an expert might say that the likelihood of an attack against an e-commerce site is high but the probability of the attack succeeding is low because the organization has a next-generation firewall in place.

 

This expert-ratings approach does not formalize the model the practitioner had in mind when making the ratings. As such, ratings for one component of an operation may not be equivalent to those for another: does a low likelihood of a vulnerability existing in Java, a notoriously buggy application layer, mean the same thing as a low likelihood for nginx, a web server with an unremarkable bug history? The inconsistency gets worse when multiple experts work on different aspects of the same operation: does high mean an 80% likelihood for one person while it means 65% for another? Experts’ mental models also change over time; as such, comparing risk across different time periods or from different teams makes no sense.
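The tiny sketch below makes the inconsistency visible. The label-to-probability mappings and the loss figure are invented for illustration; the point is only that two experts can mean very different numbers by the same word.

```python
# Illustration (with invented mappings) of how two experts can mean different
# numbers by the same qualitative label, making their ratings incomparable.

expert_a = {"low": 0.10, "medium": 0.40, "high": 0.80}
expert_b = {"low": 0.20, "medium": 0.50, "high": 0.65}

impact_if_breached = 1_000_000  # assumed loss for the operation under review (USD)

for label in ("low", "high"):
    loss_a = expert_a[label] * impact_if_breached
    loss_b = expert_b[label] * impact_if_breached
    print(f'"{label}" rating: expert A implies ${loss_a:,.0f}, expert B implies ${loss_b:,.0f}')
```

A "low" from one expert implies twice the expected loss of a "low" from the other, yet both ratings would look identical on a heat map.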

 

The qualitative approach to risk assessment also ignores the rapidly changing, variable, and uncertain nature of the cybersecurity space. An expert may indicate that a particular technology component has a low probability of a vulnerability because none has been discovered over a long stretch of time. However, a detailed analysis of the component’s history could easily show that disclosures tend to occur in clusters after a long hiatus; the likelihood might therefore be understated. An expert could also rate a particular attack vector as highly active because of a few high-profile breaches in the news, when in fact those attacks were persistent threats focused on particular organizations as targets. If the organization under review is not one of the threat’s primary targets, its risk would be overestimated.
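A small sketch of the first problem: the disclosure dates below are invented for a hypothetical component, but they show how a long quiet period can mask a clustered disclosure history.

```python
# Sketch of why a long quiet period can understate vulnerability likelihood
# when disclosures arrive in clusters. All dates are invented for illustration.

from datetime import date

# Hypothetical disclosure history: bursts separated by long gaps
disclosures = [
    date(2012, 3, 1), date(2012, 3, 20), date(2012, 4, 5),    # cluster
    date(2014, 9, 10), date(2014, 9, 25), date(2014, 10, 2),  # cluster after a long hiatus
    date(2016, 1, 15),
]

span_years = (disclosures[-1] - disclosures[0]).days / 365.25
average_rate = len(disclosures) / span_years  # disclosures per year over the whole span

# Gaps between consecutive disclosures, in days
gaps = [(later - earlier).days for earlier, later in zip(disclosures, disclosures[1:])]

print(f"Long-run average: {average_rate:.1f} disclosures/year")
print(f"Inter-arrival gaps (days): {gaps}")
# A "nothing lately, so low likelihood" rating ignores that the short gaps
# cluster together: once a new disclosure lands, more tend to follow quickly.
```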

 

In essence, qualitative approaches fail because humans are not Bayesian; we are terrible at drawing conclusions from past information. When we attempt to make inferences about probabilistic outcomes such as the likelihood of a breach, we “show systematic biases that are often attributed to heuristics or limitations in cognitive processes.”[v] Generally speaking, systematic biases are acceptable if they are properly documented and understood by everyone consuming results built on those assumptions. Unfortunately, qualitative methods lack the rigor needed to tease out what those biases are. The only hope is to have a large, heterogeneous group of experts whose opinions carry biases that cancel one another out. Otherwise, uncertainty and variability will make the risk measures suspect; worse yet, they cannot be compared against one another.

 

Quantitative approaches overcome these problems. The results are consistent, so different operations can be compared to one another to help organizations understand where budget dollars provide the most return on investment. Results can be recalculated when a new set of assumptions is put forth, allowing hypotheses to be tested without worrying about systematic biases; this is particularly important for measuring continuous improvement over time. Variability can be accommodated by exploring a wide range of outcomes across the possible scenarios, and uncertainty can be handled by examining how risk changes for different magnitudes of inputs. Finally, as the fidelity of the quantitative model improves, the results become more accurate.
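The sketch below shows what a minimal quantitative model might look like: a Monte Carlo simulation that turns an assumed breach frequency and assumed loss-magnitude parameters into an annual loss expectancy and a tail estimate. The rate and lognormal parameters are illustrative assumptions, not measured threat data; re-running the simulation with different inputs is exactly the kind of hypothesis testing described above.

```python
# Minimal Monte Carlo sketch of a quantitative cyber risk estimate.
# Event frequency and loss parameters are assumptions for illustration only.

import random

random.seed(7)

annual_breach_rate = 0.3         # assumed breaches per year (Poisson rate)
loss_mu, loss_sigma = 12.0, 1.0  # assumed lognormal parameters for loss per breach (USD)
trials = 100_000

def simulate_year() -> float:
    """Simulate one year's total breach losses under the assumed model."""
    events = 0
    # Draw a Poisson event count via exponential waiting times within one year
    t = random.expovariate(annual_breach_rate)
    while t < 1.0:
        events += 1
        t += random.expovariate(annual_breach_rate)
    # Sum a lognormal loss for each event
    return sum(random.lognormvariate(loss_mu, loss_sigma) for _ in range(events))

losses = sorted(simulate_year() for _ in range(trials))

ale = sum(losses) / trials        # annual loss expectancy
p95 = losses[int(0.95 * trials)]  # 95th-percentile "bad year"

print(f"Annual loss expectancy: ${ale:,.0f}")
print(f"95th-percentile annual loss: ${p95:,.0f}")
```

Because the assumptions are explicit parameters rather than opinions locked in an expert’s head, the same model can be rerun for a new control, a new threat rate, or a wider range of loss magnitudes, and the outputs remain comparable across operations and over time.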

Unfortunately, only a few risk assessments are quantitative, and those are, for the most part, done manually.

 

  • Risk assessments must be forward looking.

A risk assessment of hurricane damage for Miami performed on 1 May 2016 would have been zero: there were no hurricanes present or capable of forming on that date. Conversely, a risk assessment for Miami on 23 August 1992 would have been extreme. Unless a risk assessment looks forward in time, it will severely under- or overestimate impacts. In reality, there is a 10% chance of a category 3 or stronger storm hitting Miami in any given year. The range of storm impacts varies based upon many factors, but the annual loss expectancy, a common risk measure, for a house that is not built to withstand a hurricane works out to roughly 10% of the value of the home. Any mitigation effort that costs less than that amount will have a return on investment.
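Worked out with a hypothetical home value (and assuming, as the text implies, that an unhardened house is a total loss when hit), the arithmetic looks like this:

```python
# Worked version of the hurricane example above. The home value is a made-up
# input; the 10% annual probability and total-loss assumption follow the text.

home_value = 300_000             # hypothetical value of the home (USD)
annual_storm_probability = 0.10  # chance of a category 3+ storm in a given year
loss_fraction_if_hit = 1.0       # assume the unhardened house is a total loss

annual_loss_expectancy = annual_storm_probability * loss_fraction_if_hit * home_value
print(f"Annual loss expectancy: ${annual_loss_expectancy:,.0f}")  # 10% of the home's value

# Any mitigation (shutters, roof straps, etc.) with an annualized cost below
# this figure that meaningfully reduces the loss has a positive expected return.
```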

 

Risk assessments in cybersecurity must also be forward looking. They cannot rely solely on known vulnerabilities and current threats. Instead, vulnerability and threat recurrence rates must be used so that measures such as the annual loss expectancy can be calculated.

 

[i] National Association of Insurance Commissioners (NAIC) Center for Insurance Policy and Research (CIPR), Cybersecurity Issue Statement, 25 January 2016.

[ii] NYSE Governance Services & Veracode, A 2015 Survey of Cybersecurity in the Boardroom, 2015.

[iii] EY, Creating trust in the digital world: EY’s Global Information Security Survey, 2015.

[iv] Vanson Bourne & FireEye, Beyond the Bottom Line: The Real Cost of Data Breaches, May 2016.

[v] Soltani, Alireza, et al., Neural substrates of cognitive biases during probabilistic inference, Nature, 26 April 2016.
