This is the third post in our ongoing series on IT risk assessments. In our first post we established critical foundational concepts and considerations. In our previous post we discussed different frameworks and how to best make use of them. In today's post we will delve into the topic of qualitative versus quantitative risk assessment methods. This topic is important because there is much quackery in the industry: bad mathematics masquerading as quantitative analysis. We will get into some of the dos and don'ts of quant, including how you can start applying quantitative techniques now, regardless of program maturity.
Using Numbers Is Not Inherently Quant
Many tools, models, and methodologies like to claim that they provide a quantitative risk analysis capability, but there is a great deal of misunderstanding and misperception around what is and is not “quantitative analysis.” In fact, it is quite common to find that there isn’t anything truly quantitative happening, despite some rather complex calculations, all because the creators of the method or formula have failed to take into consideration foundational principles of statistical mathematics.
Just because your “assessment” (or, more often, your data collection tool) makes use of numbers does not mean that you are doing quantitative analysis. In fact, depending on the type and nature of the numbers being used, and the subsequent manipulation of those numbers, you might be breaking mathematical laws in addition to not doing quant analysis.
Specifically, an understanding of this topic must start from foundational concepts, such as understanding the difference between categorical data (for example, labels like high, medium, and low), ordinal data (such as used in ranking and prioritization, as in first, second, third), and real number data (either actual measured values or estimated measured values). Only the latter case (real number, or numerical, data) generally provides the basis for quantitative analysis. As a general rule, only numerical data can be acted upon using standard arithmetic.
For example, if I ask you to take a list of five attributes (categorical data) and rank them in order of importance from 1 to 5 (ordinal data), then we are most definitely not doing a quantitative analysis. We’re doing a simple ranking exercise. You can take all the ranked scores for each of the attributes and then average them out to help determine which attribute was deemed “most important” and so on. However, that’s about the extent of the arithmetic that you would be allowed to do on categorical and ordinal data.
Now… here is where things can start to get tricky. You would not take this data, add all the values together, and start multiplying them by arbitrary weighting factors. You wouldn't decide "Today, a 1 ("most important") is worth 100 points, whereas a 5 ("least important") is only worth 10 points." You would not then add and multiply and perform logarithmic derivations. You collected ordinal stack-rankings, not real number data, and to treat them otherwise violates important mathematical principles.
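To make this concrete, here is a small illustrative Python sketch (the attribute names, rankings, and point schemes are all made up) showing why arbitrary point values applied to ordinal ranks produce meaningless numbers:

```python
# Three reviewers rank five attributes from 1 (most important) to 5 (least).
rankings = {
    "confidentiality": [1, 2, 1],
    "integrity":       [2, 1, 3],
    "availability":    [3, 3, 2],
    "usability":       [4, 5, 4],
    "cost":            [5, 4, 5],
}

# Legitimate: a straight average of ordinal ranks to see what was deemed
# "most important" overall (lower mean rank means more important).
mean_rank = {k: sum(v) / len(v) for k, v in rankings.items()}

# Illegitimate: two equally arbitrary point schemes applied to the same ranks.
points_a = {1: 100, 2: 50, 3: 25, 4: 15, 5: 10}
points_b = {1: 5, 2: 4, 3: 3, 4: 2, 5: 1}

score_a = {k: sum(points_a[r] for r in v) for k, v in rankings.items()}
score_b = {k: sum(points_b[r] for r in v) for k, v in rankings.items()}

# The apparent ratio between attributes depends entirely on the invented
# scheme: under scheme A, "confidentiality" scores about 7x "cost";
# under scheme B, only about 3.5x. Neither ratio is real, because the
# underlying input was ordinal, not numerical.
```

The same ranked inputs yield whatever "score" the invented conversion table dictates, which is exactly why such scores carry no inherent meaning.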
Sadly, this is exactly what we see happening time and time again in all manner of “risk assessment” programs. We see categorical rankings like Critical, High, Medium, Low, and Very Low – that are then converted into arbitrary numerical values and acted upon arithmetically in violation of statistical rules. While it is ok to associate those labels with ordinal values in order to calculate a straight average (because there’s an implied ordinal ranking), you cannot arbitrarily assign real number values to these labels and then start applying quantitative analysis techniques using arithmetic.
This point is often very confusing to people. We have seen many examples of elaborate spreadsheets that collect variously ranked data and then perform absolutely confounding arithmetic, yielding things like a single arbitrary number that, ultimately, not only has no inherent meaning, but also reflects any number of biases (from assumptions) introduced into the calculations, often without meritorious explanation.
If there is one thing you take away from today's post, let it be this: Just because you are using numbers does not mean that you can perform standard arithmetic on those numbers. It is incredibly important to understand foundational statistics principles and realize that ordinal rankings are essentially a form of categorical data, which means you cannot rightly add, multiply, etc. After all, you would never say Ford + Chevy + Audi = 79, right? Nor would you take it a step further and say "3*Ford + 2*Chevy + 100*Audi = 79." These statements seem absurd, and yet if you look at many "risk assessment" methods in practice today, we see exactly this happening, except Ford is High, Chevy is Medium, and Audi is Low (or some such). Beware quantitative analysis claims!
Getting a (Real) Start With Quantitative Analysis
Now that you have been suitably warned about bad math masquerading as quantitative analysis, let’s now look at ways in which we can apply real, legitimate quantitative methods in a manner that will benefit your program, regardless of program maturity.
First and foremost, a great start for quantitative analysis is, in fact, to apply it during context setting and not in the risk assessment itself. Specifically, a key hurdle to clear in any risk management program is establishing a reasonable, rational basis for business impact. What's important to the business? What sort of (financial) losses can the business incur without experiencing "material harm" (a legally meaningful term)? What lines of business, applications, systems, or services provide the most and/or least revenue, and what is their tolerance for disruption?
Answering these questions can provide a valuable basis for starting with quantitative risk analysis. Note that we haven't even started to delve into the topic of probability estimates at this juncture. Keep it simple. Start establishing actual, ranged value estimates (ranges are always best – see Douglas Hubbard's book How to Measure Anything). Speak with people in the organization who can authoritatively answer these questions. Do not simply rely on your own best guess, nor should you stay within the IT department in hopes that techies can magically intuit actual business sensitivities (it turns out we're not very good at estimating business impact).
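As a minimal sketch of what this looks like in practice (the service names and dollar figures below are hypothetical), simply capturing ranged estimates as data already supports useful comparisons, with no probabilities in sight:

```python
# Hypothetical 90%-confidence range estimates for hourly outage impact,
# gathered from business owners who can answer authoritatively
# (lower bound, upper bound), in USD lost per hour of downtime.
impact_ranges = {
    "order-processing": (50_000, 250_000),
    "marketing-site":   (1_000, 10_000),
    "internal-wiki":    (0, 500),
}

# Even without probability estimates, ranges let us sort services by
# worst-case hourly impact and flag which ones deserve attention first.
by_worst_case = sorted(impact_ranges,
                       key=lambda s: impact_ranges[s][1],
                       reverse=True)
```

The point is not the trivial code; it is that the inputs are real (estimated) dollar values supplied by the business, so ordinary arithmetic and comparison are finally legitimate.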
Once you have successfully established an approach for collecting basic impact information, then and only then does it make sense to look at maturing practices to get into more advanced quantitative topics, including probability estimates. However, in moving onto these more advanced stages, we highly recommend having a good grounding in statistics and/or data science. You may find a method like Open FAIR to be of interest (as discussed in the last post), and the associated Open FAIR training (from The Open Group) may be useful. However, you need not adhere to any single method and are encouraged to thoroughly explore statistics and data science to better understand the correct ways to create, test, and refine quantitative models for your organization.
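For a taste of what those more advanced stages involve, here is a deliberately crude Monte Carlo sketch. This is not Open FAIR itself, and the frequency and loss ranges are invented, but it illustrates how ranged estimates and probabilities combine into a distribution of annual losses rather than a single arbitrary number:

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

def simulate_annual_loss(freq_low, freq_high, loss_low, loss_high,
                         trials=100_000):
    """Crude Monte Carlo: for each simulated year, sample how many loss
    events occur and how large each one is, using uniform draws within
    expert-supplied ranges. (Real models would use better-fitting
    distributions, e.g. lognormal for loss magnitude.)"""
    losses = []
    for _ in range(trials):
        events = random.randint(freq_low, freq_high)
        losses.append(sum(random.uniform(loss_low, loss_high)
                          for _ in range(events)))
    return losses

# Hypothetical inputs: 0 to 4 phishing-driven incidents per year,
# each costing between $10k and $200k.
losses = sorted(simulate_annual_loss(0, 4, 10_000, 200_000))
mean_loss = sum(losses) / len(losses)
p90 = losses[int(0.9 * len(losses))]  # 90th-percentile annual loss
```

Note that the output is a distribution: you can report a mean, a 90th percentile, or a full loss exceedance curve, which is far more honest than a single point score.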
Right-Sizing Risk Assessment Efforts: Do You Even Need Quant?
One question you might be asking at this point is just how much quantitative analysis is worthwhile, and whether it is worth doing at all. We think the answer is definitely yes, to a point, but perhaps falling short of full-fledged decision analysis and management (it's still fairly rare to see decision trees in action in the real world, for those who may have encountered them in academia).
The simple fact is that organizations have been muddling through without quantitative analysis all this time, and they seem to be surviving. In fact, this statement can be generalized and broadened to point out that, despite a lack of reasonable security protections and in the face of massive breaches, nobody is saying “Oh, what a pity seeing all those empty store fronts with the red bullseye logo.” or “Remember when we could go buy home improvement products from those large, orange-signed warehouse stores?” Despite the losses piling up, businesses are proving to be remarkably resilient, even if just out of sheer luck.
So… to the question at hand… do we even need quantitative analysis? How do we “right-size” our risk assessment activities?
The answer, simply, is this: You’re already doing risk assessment, whether or not it’s formalized. You’re weighing options. You’re roughly considering pros and cons. You’re trying to balance tradeoffs and hoping that your decisions are good ones that improve value while decreasing loss potential. You are likely considering business impact, albeit in a vaguely qualitative manner. For that matter, we do risk calculations in our heads every day. “Should I get on this airplane?” or “That fish smells funny, should I really eat it?” or “Let’s not drink the scummy green water that smells of petroleum byproducts.” are all examples of the kinds of risk management thoughts that pass through our brains every day. For the most part, we’re fairly good at making decisions.
The question, then, is if we can get better at making decisions, and how to best go about doing that without falling into “analysis paralysis” (being unable to make a decision), without making decisions worse (such as due to relying on bad assumptions), or creating processes that are so slow or unwieldy that they are bypassed or too inefficient to be worthwhile.
Yes, this can be done. No, it need not be excessive or inefficient. It may be as simple as establishing some baseline estimates for business impact in key areas, from which you can then drive short conversations that say things like “We know that if this application/service is down for an hour during peak business hours, it will cost us X dollars per hour. Thus, we should look at investing into the resilience of this application, up to ‘X’ dollars, to ensure that we are reasonably protected against downtime.” Notice, again, that at no point do we need to go down the rabbit hole of probabilities. Rather, it’s a better-informed conversation.
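That downtime conversation reduces to simple arithmetic on ranged estimates. A sketch, with hypothetical figures:

```python
# Hypothetical: business owners estimate peak-hour downtime costs this
# application $30k-$80k per hour, and history suggests 2-6 hours of
# unplanned downtime per year.
cost_per_hour = (30_000, 80_000)
hours_per_year = (2, 6)

# Best-case and worst-case annual downtime exposure. Still no
# probabilities, just arithmetic on ranged real-number estimates.
low_exposure = cost_per_hour[0] * hours_per_year[0]
high_exposure = cost_per_hour[1] * hours_per_year[1]

# A resilience investment well under the low end of that range is an
# easy "yes"; one above the high end deserves much harder scrutiny.
```

Two multiplications are enough to turn a vague "downtime is bad" assertion into a defensible spending ceiling.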
As we become comfortable with introducing basic quantitative (real number) values into a conversation in order to drive more rational decision-making, then and only then can we look into better formalizing processes and discussions, and then and only then can we start getting more elaborate in our calculations, likely leveraging tools to help speed data collection and calculations (including using various statistical models and methods). Until that point, begin with what you can, where you can. Slowly change unfounded “belief state” assertions to being fact-based, and then iterate and evolve from there.
In our upcoming fourth and final post in the series, we will conclude by looking at how to leverage platforms to improve risk management programs. We will take a look at common ad hoc practices (Excel! SharePoint!), evaluate pros and cons of using a platform, and end with a discussion about how leveraging platforms can lead to improved communication and visibility into risk states.
As breach remediation costs rise, seemingly in direct proportion to the number of attackers and attacks, what are you doing to manage your cybersecurity vulnerabilities and risks? There is ample evidence that how you respond to threats and breaches can have a significant impact on your business. For example, the 2021 Ponemon Institute Annual Cost of a Breach Report found that the average cost of a breach rose 10% to $4.24M, and that it took an average of 287 days to identify and contain a data breach.

Even if you can handle the reputation hit of a breach, and even if your insurer agrees to cover a portion of the damages, do you want to be on the hook for millions of dollars in remediation and restoration costs? Prevention is easier and less expensive. Your data and intellectual property (IP) are often the most valuable assets you own, and as such are deserving of all the resources your team can muster for effective security vulnerability and risk management. Read on to learn more about the cyber risks to watch out for in 2022 and how you can plan and prepare for them.

What types of cyberattacks can you expect?

The question may seem counterintuitive, because many organizations don't expect their network to be attacked, any more than they expect it to contain dangerous vulnerabilities. You want to believe those events happen to others, not you. Right? Except competent hackers can infiltrate your network and steal your data and IP while remaining undetected.

Ransomware attacks

For several years now, ransomware attacks have been the fastest growing segment of cybersecurity breaches. Typically, criminals breach an organization and encrypt its data, rendering it unusable. Inaccessible data leaves a firm unproductive and unprofitable for as long as the data remains locked up.
The Colonial Pipeline ransomware attack, for example, led to the shutdown of the largest fuel pipeline in the U.S., which in turn caused fuel shortages across the East Coast. Criminals also threaten to publicize intellectual property (IP) and customer information unless they receive a ransom. Although small-to-midsize businesses (SMBs) are at the greatest risk of criminal ransom demands, payouts can reach seven or eight figures. The highest ransom confirmed to have been paid is $40 million USD, by CNA Financial, in May 2021. Few SMBs can afford such extravagance.

Cloud vulnerabilities

The first researchers to discover and report on critical vulnerabilities in the cloud focused on Microsoft Azure infrastructure. In detailing the vulnerabilities, those researchers, who were with Check Point, "wanted to disprove the assumption that cloud infrastructures are secure." And did they ever disprove it: the discovered vulnerabilities included some that received the highest possible score of 10.0 (any score from 9.0 to 10.0 carries the qualitative severity ranking of "critical"). The vulnerabilities allowed malicious actors to compromise the applications and data of anyone using similar cloud infrastructure.

Firmware vulnerabilities

Firmware vulnerabilities expose not only the major computer manufacturers but also their customers. Undiscovered firmware vulnerabilities are especially damaging because they grant criminals free rein over any network on which the affected devices are installed, leaving networks open until the vulnerability is reported and patched. As the number of connected devices continues to grow, Internet of Things (IoT) security becomes increasingly important to analyze.

Software vulnerabilities

Applications contain vulnerabilities. According to Veracode, 75.2% of applications have security flaws, and 24% of those are considered high-severity. Common flaws include information leakage, Carriage Return and Line Feed (CRLF) injection, cryptographic challenges, code quality, and credentials management.

Insider threats

Insider theft and trading of secrets is another growing vulnerability area. As demonstrated by recent Cisco and GE breaches, employees with perceived grievances or bad intentions can choose to steal data or wreak all kinds of damage on their employers' data and networks. Carelessness and poor training also contribute to insider threats.

Cyber threats to healthcare

In recent years criminals have increasingly trained their sights on hospitals, insurers, clinics, and others in the healthcare industry. A 2016 report by IBM and the Ponemon Institute found that the frequency of healthcare industry data breaches has been rising since 2010, and the sector is now among the most targeted by cyberattacks globally. Whether or not the reputation is deserved, healthcare industry computer networks are often considered soft targets by malicious actors. In 2021 Armis discovered nine vulnerabilities in critical systems used by 80% of major North American hospitals. Additionally, rapid health device adoption has increased the number of available targets for malicious breachers, and numerous healthcare devices, including imaging equipment, suffer from security flaws. Added together, those factors point to an increase in attacks on healthcare institutions.

Attacks against healthcare networks threaten lives, not just productivity. Criminals might believe healthcare administrators are willing to pay ransoms faster to retrieve health data and help patients. That's not always the case, and ransomware allegedly led to the death of an infant and was initially thought responsible for the death of a German patient. Individual medical data – name, birth date, blood type, surgeries, diagnoses, and other personally identifiable information – is particularly interesting to criminals. Once compromised, it's impossible to restore patient privacy, just as it's impossible to reverse the social and psychological harm inflicted.
Forgotten cyber hygiene

When IT professionals are always in stressful firefighting mode, they can't be expected to remember everything. Sometimes patches fall through the cracks, and those vulnerabilities come back later to bite your network. Your IT department may be aware of old vulnerabilities but just hasn't gotten around to applying the necessary patches or closing open holes. A virtual private network (VPN) account that remained open, although no longer in use, was how criminals penetrated Colonial Pipeline; employees had previously used that account to access the company network remotely.

How can you uncover cybersecurity vulnerabilities and risks?

It's easy for consumers to learn what to watch for and what to avoid. They can download, for example, the Annual Data Breach Report from the Identity Theft Resource Center. You, on the other hand, have a network full of devices, endpoints, applications, and the weakest link in the security chain – users. Yes, you can lower the possibility of user negligence with cybersecurity training. Sure, you can find and read reports about currently existing threats. But without a comprehensive vulnerability management program that brings together every vulnerability scanning tool across your entire attack surface, it's almost impossible to know what's threatening your network right now.

How do you find a vulnerability in YOUR cybersecurity and IT environments?

Most organizations rely on several different vulnerability scanning tools to achieve full vulnerability assessment coverage over their IT environments. Most vulnerability scanning tools focus on only one specific aspect of your attack surface – network devices, web applications, open source components, cloud infrastructure, containers, IoT devices, etc. Vulnerability management teams are often left with the unenviable job of bringing these disconnected tools, and the incompatible data they deliver, together into cohesive and consistent programs.
Deploying Brinqa vulnerability management software to perform vulnerability enumeration, analysis, and prioritization allows you to effortlessly synchronize and orchestrate the best vulnerability scanning tools for your environment. The Brinqa platform is designed for data-driven, risk-based cybersecurity solutions. Brinqa includes risk models for cybersecurity problems like vulnerability management and application security, which are essentially data ontologies, developed from industry standards and best practices, that represent these cybersecurity challenges in terms of data.

Brinqa data models and risk scores are adaptive, open, and configurable, and include not just vulnerability data but also additional business context from within the organization, as well as external threat intelligence. For example, the data model automatically considers that an internal-facing server used for testing code differs in priority from an external-facing server that hosts an e-commerce site and contains customers' personal data. Similarly, if external threat intelligence reveals that a particular vulnerability has suddenly become popular among malicious actors and is being used to effect breaches, the data model automatically computes and assigns a higher risk score to that vulnerability.

First and foremost, we get you away from having to log into numerous different tools to bring all relevant information together and make it usable. Second, we streamline and automate your common vulnerability analysis, prioritization, and remediation use cases. That's the enormous benefit of Brinqa: the centralization is great, but once you start consolidating, enhancing, and contextualizing all of that data, you can provide a level of prioritization that takes your risk response to another level.
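As a rough illustration of the idea of context-aware prioritization (this is a simplified, hypothetical sketch, not Brinqa's actual data model), note that business context and threat intelligence can drive priority without inventing arithmetic on severity scores:

```python
# Hypothetical findings with business context attached. Field names and
# CVE identifiers are invented for illustration.
findings = [
    {"host": "test-01", "cve": "CVE-2021-0001", "cvss": 9.8,
     "internet_facing": False, "customer_data": False,
     "exploited_in_wild": False},
    {"host": "shop-01", "cve": "CVE-2021-0002", "cvss": 6.5,
     "internet_facing": True, "customer_data": True,
     "exploited_in_wild": True},
]

def priority(f):
    # Tuples sort lexicographically: active exploitation outranks
    # internet exposure, which outranks data sensitivity, which
    # outranks the raw severity score.
    return (f["exploited_in_wild"], f["internet_facing"],
            f["customer_data"], f["cvss"])

ranked = sorted(findings, key=priority, reverse=True)
# The internal test server's 9.8 ranks below the exploited,
# internet-facing e-commerce host's 6.5: context drives priority,
# not the raw score alone.
```

The design choice here is deliberate: context factors gate the ordering rather than being multiplied into an arbitrary composite number.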
Beginning with generic, out-of-the-box rules based on best practices, the environment gives every Brinqa customer the flexibility to tailor analysis to their needs, essentially providing a self-service mechanism to implement their own cybersecurity service level agreements (SLAs). The default rules are like templates or starting points, which you adjust and configure as necessary. It is ineffective and inefficient to make decisions about what should be fixed, and in what order, on an ad hoc, case-by-case basis. Once you implement Brinqa, your automated vulnerability remediation and cyber risk response processes deliver effective, consistent, and reliable results. Spend a little time (and no money) to see how simple solving a major headache can be, with a free trial.

Frequently Asked Questions

What is vulnerability scanning? Vulnerability scanning is the detection and classification of potentially exploitable points on network devices, computer systems, and applications.

What is vulnerability remediation? Vulnerability remediation includes the processes for determining, patching, and fixing cybersecurity weaknesses that have been detected in networks, data, hardware, and applications.

What is the NVD? The National Vulnerability Database (NVD) is the U.S. government repository of standards-based vulnerability management data, represented using the Security Content Automation Protocol (SCAP).

What is CVE? Common Vulnerabilities and Exposures (CVE) is a list of publicly disclosed cybersecurity vulnerabilities that is free to search, use, and incorporate into products and services.

What is CRLF injection? Carriage Return and Line Feed (CRLF) injection is an attack in which an attacker inserts unexpected CRLF control characters into an application's input.
Brinqa is actively investigating the impact of the Log4j library vulnerability CVE-2021-44228, disclosed on December 9, 2021, and the associated CVEs (CVE-2021-45046, CVE-2021-4104). This bulletin contains the latest information as it pertains to the impact of these vulnerabilities on Brinqa and will be updated as new information becomes available. We have been continuously monitoring for Log4j exploit attempts in our environment. At this time, we have not detected any successful Log4j exploit attempts in our systems or hosted solutions. We will continue to monitor our environment for new vulnerability instances and exploit attempts and will update this page as we learn more. The Cybersecurity and Infrastructure Security Agency (CISA) provides a useful summary of Log4j vulnerability guidance that customers may want to reference in addition to any product- and version-specific recommendations from your Brinqa customer success team. If you have any questions or concerns, please feel free to reach out to us at email@example.com
What does cybersecurity mean to your business? This might seem like an odd question, but how an enterprise responds to it can say a lot about the culture and practice of cybersecurity within that organization. There are many different ways to ask the same question: Which function does cybersecurity report to within the enterprise? Who are the internal clients of cybersecurity? Does cybersecurity leadership have a voice at the highest levels of corporate decision-making?

There are two main schools of thought about the role and orientation of cybersecurity within the enterprise. The traditional school places cybersecurity within the Information Technology (IT) function of a business. In this model cybersecurity reports to IT, IT is the internal client for cybersecurity, and the CISO might report up to the CTO or CIO. It's easy to see why one might make this association: IT and cybersecurity professionals often have similar or adjacent skillsets and overlapping educational and professional backgrounds, and both functions deal with highly technical, specialized, and complex information and processes.

However, the goals and KPIs of IT and cybersecurity are not only unaligned, they are often in direct conflict. The internal clients for IT are other business functions that essentially pay for the various technology assets (applications, servers, cloud instances, etc.) required to keep the enterprise running. IT performance is evaluated by how seamlessly, continuously, and cheaply those services are delivered. IT doesn't really have visibility into, or an understanding of, how these assets are being used by the business, what kind of data they process, or which critical business functions they support.
When cybersecurity comes to IT and reports that a particular technology asset or part of the IT infrastructure has problems or weaknesses that could be exploited by malicious actors, IT has to weigh the benefit (stopping a potential attack that may or may not happen) against the costs: resources allocated to fix the problem, unhappy internal clients while technology assets are unavailable during the fix, and valuable time spent fixing and validating the issue. This is a hard sell and essentially amounts to self-regulation. A significant percentage of breaches exploit known vulnerabilities and weaknesses within an organization; viewed through this lens, it's not difficult to see how such problems go unaddressed.

The modern school of thought recognizes cybersecurity as its own independent vertical within the enterprise, like sales, marketing, HR, or any other function whose purpose is to help the business function and thrive. In this model, cybersecurity has various business functions as internal clients, and the CISO might have a seat at the C-level table. Cybersecurity informs business stakeholders of the risks they face as a result of the technology infrastructure they utilize. The business stakeholders provide the context necessary for informed risk triage and collaborate with cybersecurity to identify which vulnerabilities or weaknesses pose the biggest threats to the part of the business they own. These prioritized risks are then sent to IT for remediation, with cybersecurity providing guidance on how IT may remediate or mitigate a particular problem. Since risk remediation or mitigation is being driven by the business stakeholders, IT is incentivized to fix these problems.

Risk-based cybersecurity is a methodology for program design that can help organizations put this modern approach into practice.
By putting an emphasis on incorporating business context in the risk analysis process and data models, and by ensuring that business stakeholders are involved in the decision chain, risk-based cybersecurity programs provide a shared space where IT, business, and cybersecurity can come together and collaborate.