Leveraging Standards for Risk Assessment
September 21, 2015 by Syed Abdur

In our first post in this series, we introduced core concepts of risk assessments, including where they fit within the overall risk management process and the exceptional importance of first completing the context-setting stage of the process. In that post, we looked at ISO 31000 as a reasonable model for an overall risk management process, but stopped short of diving into specific standards (frameworks and methodologies) for risk assessment.

In today’s post we will look at standards and how your organization can best leverage them to improve risk assessment efficiency and effectiveness. We will also highlight some common data collection standards that may be useful. The key takeaways from this article are how to determine whether a standard will benefit your organization, what sort of standard might be useful and how to pick one, and a brief survey of the major standards available.
Without further ado, let’s delve into the first question.

Do I Even Need a Standard?
At first, this may seem like a silly question. On the one hand, it may seem absurd that an organization would need a standard for something so basic and foundational. As ISO 31000 readily demonstrates, there is nothing “whiz bang” special about risk management and risk assessments: create a process, gather data, conduct analysis, and make the best decision possible. But is that true, or are we oversimplifying matters?

On the other hand, standards can seem quite logical and appropriate – maybe even appealing – as aids in constructing and operating our risk management programs. However, not all standards are created equal, nor do they all accomplish the same things. As we’ll discuss later in this post, some standards can be large and overwhelming, while others have a more focused purpose and may need to be used in conjunction with other standards.

The answer to whether a standard will be useful ends up being fairly straightforward: all standards should be approached as guides to help fill gaps in your overall risk management program. Standards can be useful in helping ensure you have the right steps in your overall process, and they can provide further value in helping you identify opportunities to refine and improve process definition and execution.

Choosing a Good Fit
Thankfully, most standards conform to the guidelines set forth in ISO 31000, which means picking a standard as a reference for risk management program development should not be scary (there are, of course, exceptions to this rule – COBIT 5, in particular). As such, your quest for a standard should be borne of a desire to find something that works for your organization, rather than something so diametrically opposed to corporate culture that it will almost assuredly result in failure.

That may seem scary, but at heart the point is this: Read several standards and find the one that sounds and/or feels most like your organization’s culture. How does your organization function? How do people interact? What is the nature of the business being conducted? What sort of backgrounds do people have (e.g., public vs. private sector)? To what regulations is your organization subject?

As is often the case with risk management, it is imperative that you know your organization, and know it well. Risk assessments are as much art as science, and thus require a good sense for how people think and behave. For example, consider the differences in culture, operations, and personnel among a stringent military organization, a manufacturer, a white-collar financial services company, a Wall Street trading house with highly sensitive real-time processing requirements, and the typically laid-back, laissez-faire environment of most higher education institutions.

Proper fit is key to success. If you want people to listen to you, hear you, and take you seriously, then you cannot present an approach that is radically different from how business is conducted, or worse, that interferes with their ability to complete their assigned duties.

Common Standards

  • Information Security Forum Information Risk Analysis Methodology (ISF IRAM): ISF IRAM is an interesting reference because it breaks down the process in a meaningful and useful manner. ISF’s overall approach to risk assessment starts with completing a business impact assessment (BIA), then performs a threat and vulnerability analysis, and then moves into control selection. This approach roughly approximates the ISO 31000 process (context-setting, assessment, and remediation), and the tooling support can be appealing. The biggest downsides to ISF IRAM are that organization membership can be expensive and the tools themselves may not easily integrate with a risk management or GRC platform. Nonetheless, studying their approach, and any open materials you can find on how they conduct the BIA, might help you refine your own approach.
  • ISACA COBIT 5 for Risk: A very commonly referenced standard, COBIT 5 itself can be incredibly overwhelming as it is intended to be a full-scale IT governance program and not just a risk management standard. ISACA has produced subsidiary documents specific to defining and conducting a risk management program, and that documentation can be useful and interesting as a reference. One of the largest challenges with COBIT 5 is learning enough about it to go through scoping, design, and implementation. Often, specialized resources are required to get through these steps. However, the average organization is not in financial services (the primary audience), and thus we recommend reading the COBIT 5 for Risk documentation, approaching it as a comprehensive reference, but not as a standard that any sane person might try to implement as-is.
  • ISO 27000 series: As has been noted before, ISO 31000 provides a generic guideline for the risk management process and its subsidiary components. This approach is further refined with more detail in ISO 27005, which is designed to align with the Information Security Management System described in ISO 27001 and ISO 27002. For those organizations with an international presence, those subject to frequent external audits, or those with a specific interest in acquiring an ISO certification as a liability shield, it is (obviously) useful to become acquainted with ISO 27005. Beyond that, the standard adds little to ISO 31000 itself, and thus may have limited reference value outside of a certification effort.
  • OCTAVE Allegro: A product of CERT’s Risk Resiliency Center, OCTAVE Allegro is the most recent risk management publication in the OCTAVE methodology series. Overall, OCTAVE can be a good fit for organizations that tend toward an analytical or engineering mindset. It includes supplemental worksheets that can be fairly easily integrated into risk analytics tools, and it has a reasonable amount of reference material that can help identify gaps and opportunities for improvement. Training is available from CERT for using OCTAVE, which could also provide value, especially for organizations that are just getting started with a formalized risk management program.
  • Open FAIR: In contrast to the other standards listed here, Open FAIR is not generally focused on the overall risk management process (not entirely true, but bear with us), but rather provides a discrete approach to conducting risk assessment and risk analysis. That said, following the entire Open FAIR approach from start to finish does take you through context-setting and risk assessment, and in some cases it may even be used for analyzing risk remediation options. One of the most important and valuable components of Open FAIR is the Risk Taxonomy, which takes the concept of “risk” (defined therein as “probable frequency and probable magnitude of future loss”) and factors it into easily understood components. Open FAIR is intended as a quantitative risk assessment methodology, which also makes it unique in this list (a brief sketch of the quantitative idea follows this list). However, the Risk Taxonomy itself can absolutely be used in a qualitative manner for a quick “back-of-the-napkin” assessment of a situation. Such a snap assessment can often be useful as an initial triage step before deciding whether an in-depth risk assessment is warranted. The Open FAIR standard is written in accessible language and can be a worthwhile resource for shaping your thinking and approach to risk assessment and risk management.
  • USG Standards: The United States Government’s National Institute of Standards and Technology provides a large number of freely available standards on a number of interesting and useful topics. Included among these is an entire series of publications on risk management and risk assessment that generally conform to the ISO 31000 guideline and provide worthwhile information on structuring an approach. Unsurprisingly, the NIST methodologies do tend to be a bit more bureaucratic in nature, but that attribute may fit well with some organizations. We recommend reviewing NIST Special Publications 800-39, 800-37r1, and 800-30r1.
  • Regulatory guidance or requirements: When designing and refining your approach to risk assessment, please be mindful that most regulations from the past decade include, to some degree, guidance or requirements for risk management or risk assessment. Be sure to account for any such requirements when designing your approach. You may find that certain standards align better with these stipulations than others.
  • Data collection tools: Most risk management platforms will include a reference library of questionnaires to aid in data collection. One common standard is the Shared Assessments SIG and SIG Lite set of questionnaires. If your organization works in or with financial services you may already be familiar with these tools. Even if your organization is not in financial services, or you do not foresee direct use of them, they can be worthwhile references in developing your own data collection tools. That said, please bear in mind the point made in our first post: data collection is not the same as risk assessment or risk analysis. Data is just the input, not the actual evaluation (the second sketch after this list illustrates the distinction).
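
To make the Open FAIR factoring concrete, here is a minimal Monte Carlo sketch in Python. This is not the standard’s prescribed procedure, just an illustration of the core idea: factor risk into loss event frequency and loss magnitude, then estimate annualized loss exposure. The triangular distributions and all parameter values are invented for illustration and would need to be calibrated against your own loss data.

```python
import random

def simulate_annual_loss(trials: int = 10_000) -> dict:
    """Monte Carlo estimate of annualized loss exposure, factoring risk
    into Loss Event Frequency (LEF) and Loss Magnitude (LM) in the
    spirit of the Open FAIR Risk Taxonomy ("probable frequency and
    probable magnitude of future loss")."""
    losses = []
    for _ in range(trials):
        # LEF: estimated loss events per year (min, max, most likely).
        # Hypothetical estimates; calibrate from your own data.
        events = round(random.triangular(0, 4, 1))
        # LM: estimated loss per event in dollars (min, max, most likely).
        annual_loss = sum(
            random.triangular(5_000, 250_000, 40_000) for _ in range(events)
        )
        losses.append(annual_loss)
    losses.sort()
    return {
        "mean": sum(losses) / trials,
        "median": losses[trials // 2],
        "p90": losses[int(trials * 0.9)],
    }

print(simulate_annual_loss())
```

The percentile outputs are what make a quantitative approach useful for communicating with decision-makers: instead of “high risk,” you can say “nine years out of ten, annual losses should stay below the 90th-percentile figure.”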

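To underscore the distinction between data collection and risk assessment, here is a toy sketch of questionnaire data in the same vein. The question IDs and wording are hypothetical, not actual Shared Assessments SIG content.

```python
# Hypothetical questionnaire responses: {id: (question, answer)}.
responses = {
    "POL-01": ("Is there a documented information security policy?", "Yes"),
    "ACC-04": ("Is multi-factor authentication required for remote access?", "No"),
    "BCP-07": ("Are backup restores tested at least annually?", "Yes"),
}

def flag_gaps(resp: dict) -> list:
    """Collect 'No' answers as candidate gaps. This is triage input for a
    risk assessment, not the assessment itself: an analyst must still weigh
    each gap against context (asset value, threats, compensating controls)."""
    return [qid for qid, (_question, answer) in resp.items() if answer == "No"]

print(flag_gaps(responses))  # ['ACC-04'] -- a data point, not a risk rating
```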

As we have discussed throughout this post, standards can provide value for defining and refining your risk management and risk assessment approach. Moreover, standards for data collection (such as from Shared Assessments) can provide additional value in improving overall performance. However, finding a standard can at times be daunting, and implementation can be soul-crushingly overwhelming.

It is thus important to approach standards with a learning mindset intent on investigating different theories on risk assessment, and then assimilating those pieces that best match with your organization’s culture, rather than necessarily seeking to make wholesale changes that may be at complete odds with how business is performed. As always, risk management must be nuanced and seek to integrate seamlessly with existing practices and processes in order to be successful. If not done well, the risk management process will get bypassed in the name of “getting work done” and, as a result, will falter (if not fail completely).

In our next post in this series we will be exploring how to “right-size” risk assessments, as well as discussing the pros and cons of qualitative versus quantitative risk assessments (including defining just what those terms mean). Our decisions can only be as good as the data we collect and analyze, which means it’s important to understand what both good and bad data look like. You may be surprised by what we have to share.
