Risk Models: Essential tools for analyzing, monitoring and managing risk
February 10, 2015 by Syed Abdur

There is perhaps no term in the vocabulary of a modern enterprise that causes more confusion and misunderstanding than ‘Risk Analytics’. ‘Risk Management’ fares slightly better, but its definition is also contested. Depending on the vertical you belong to and your role within the organization, these terms can mean very different things. A typical day in the life of a professional working on Financial Risk Management is very different from that of one working on Technology Risk Management, which in turn varies vastly from that of a professional working on Operational Risk Management. However, there are some common themes that risk professionals across these diverse areas can agree upon.

Risk management typically includes (but is not limited to) the following:

  • Defining and identifying risks
  • Identifying and monitoring the entities in the ecosystem impacted by risk
  • Defining the quantitative impact of risk (a minimal numeric sketch follows this list)
  • Identifying and defining the conditions that incur risk
  • Determining the data points required to evaluate the conditions for imminent risk
  • Establishing a consistent and sustained process for collecting and collating that data
  • Defining the measures to mitigate or remediate risk
  • Managing the process for implementing mitigation and remediation measures
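
To make the quantitative-impact point above concrete, here is a minimal sketch of the classic likelihood-times-impact view of risk. The Risk class, the 1-5 ordinal scales, and the example entries are illustrative assumptions, not taken from any specific framework or product.

```python
# Minimal sketch: quantifying a risk as likelihood x impact on a 1-5 ordinal scale.
# The Risk class, the scales, and the example entries are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    def score(self) -> int:
        """Quantitative impact of the risk: likelihood x impact, range 1..25."""
        return self.likelihood * self.impact


risks = [
    Risk("Unpatched public-facing server", likelihood=4, impact=5),
    Risk("Single supplier for a critical component", likelihood=2, impact=4),
]

# Rank risks so the most pressing ones surface first.
for risk in sorted(risks, key=lambda r: r.score(), reverse=True):
    print(f"{risk.name}: {risk.score()}")
```

Even a crude score like this gives you a consistent way to rank risks before investing in more sophisticated quantification.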

 

Risk Analytics attempts to build upon these generic risk management themes by leveraging the power of intelligent systems to deliver a data-driven and informed perspective on risk. These programs — typically driven by the analysis of large volumes of data points — aim to go beyond a reactive approach to risk. They try to engender a better understanding of the current state of the organization with respect to risk and attempt to predict, with some degree of confidence, how things will look further down the line if the risk environment stays the same or if certain factors change.

Risk analytics programs and systems typically involve (but are not limited to) the following:

  • A big data (since we’re on the topic of ambiguous terms…) backend for processing large volumes of data quickly and efficiently
  • Ability to correlate and analyze risk data from disparate sources
  • Factoring in business context and organizational bias or mandates to augment raw risk information
  • Identifying and representing key risk criteria
  • Metrics to define, evaluate, and monitor critical risk conditions
  • Historical representation of risk information
  • Application of mathematical and analytical libraries
  • Ability to define alerts and notifications based on current or imminent risk conditions (see the metric-and-alert sketch after this list)
  • Manual and automated asset classification
  • Data mining capabilities for analyzing existing risk data and providing recommendations
  • Clustering capabilities for discovering hidden relationships between relevant risk assets
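
As a rough illustration of the metrics and alerting items above, the following sketch shows how a single risk metric with a threshold might drive notifications. The metric definition, the threshold value, and the shape of the findings data are illustrative assumptions rather than any particular product's behavior.

```python
# Minimal sketch: defining a risk metric and raising alerts when it crosses a threshold.
# The metric, the threshold value, and the findings data shape are illustrative assumptions.

def critical_finding_ratio(findings: list[dict]) -> float:
    """Fraction of open findings rated 'critical'."""
    if not findings:
        return 0.0
    critical = sum(1 for f in findings if f["severity"] == "critical")
    return critical / len(findings)


def evaluate_alerts(findings: list[dict], threshold: float = 0.10):
    """Yield alert messages for metric values that exceed their thresholds."""
    ratio = critical_finding_ratio(findings)
    if ratio > threshold:
        yield f"ALERT: {ratio:.0%} of open findings are critical (threshold {threshold:.0%})"


findings = [
    {"id": "F-1", "severity": "critical"},
    {"id": "F-2", "severity": "medium"},
    {"id": "F-3", "severity": "low"},
]

for alert in evaluate_alerts(findings):
    print(alert)
```

In practice the same pattern scales out to many metrics evaluated on a schedule against data pulled from your collection pipeline.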

 

If any of the themes listed above seem familiar, and you are involved in initiatives within your organization targeted toward them, then (consciously or not) you are using risk models to achieve these goals. Whether your model is designed and maintained through manual processes using spreadsheets, specialized ETL, custom applications, etc., or through sophisticated data modeling tools, the success or failure of your risk management or analytics program is heavily predicated on the accuracy, efficiency, and performance of that model. If you are managing your program manually, without the help of a dedicated risk-modeling tool, and would prefer to continue to do so, it can still be beneficial to think about the design of the program as an exercise in risk modeling.

Good risk models share certain key characteristics and functions, including (but not limited to) the following:

  • Identify and accurately represent all relevant types of risk — depending on the industry you work in and the nature of the risks you are interested in evaluating, it is very likely that there are products or services that monitor and report relevant data. Some examples include geopolitical ratings for foreign investment risk, credit ratings for vendor or supplier risk, software vulnerabilities for technology risk, etc. Whatever the source of risk data, good risk models should represent and interpret this information accurately.
  • Identify and accurately represent all relevant risk entities — risk ecosystems are complex, fickle and infinitely diverse. It is highly unlikely that for any two distinct organizations, no matter how similar their risk management or analytics goals, the same risk model accurately captures all relevant risk criteria and mandates. By all means, see further by standing on the shoulders of giants (where they are offered), and learn from your peers, but it is imperative that you understand your organization’s risk ecosystem thoroughly and ensure that your risk model represents what is important to you as an organization. Within an organization itself, risk mandates and priorities change as you learn from your mistakes and react to the risk challenges the world poses, so it is crucial that your risk model is adaptive and capable of evolving.
  • Represent relationships and risk flows — risk originates from different sources within the organization and propagates until it impacts critical business entities and functions. Good risk models define chains of risk inheritance and flow, which allow you to preemptively understand the effect of specific events on the risk ecosystem and their impact on the business (a minimal propagation sketch follows this list).
  • Play favorites — identify critical assets and functions so you can focus on the most critical risk information at any given time. Not all applications that support the daily operations of an organization have the same importance. Every partner and supplier serves a distinct (and rarely equally important) function.
  • Make data collection painless — the accuracy of your risk model is directly affected by the efficiency and performance of your data collection processes. Spend time, effort and money to make data collection as painless as possible. Put measures in place to monitor the health of data collection processes and to catch and highlight errors and exceptions.
  • Make provisions for manual data collection — while automated risk data collection represents the ideal situation, there will be scenarios where the required information resides with individuals and cannot be collected through automated means. Develop and implement structured processes to collect this information and complete the risk picture.
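
To tie together several of these ideas (relevant entities, relationships and risk flows, and playing favorites), here is a minimal sketch of a risk model as a small graph. The entity names, criticality weights, and damping factor are illustrative assumptions; a real model would be far richer.

```python
# Minimal sketch: a tiny risk model as a graph of entities, with risk flowing
# along "supports" relationships and weighted by business criticality.
# Entity names, scores, weights, and the damping factor are illustrative assumptions.

entities = {
    # name: (intrinsic risk score 0..10, business criticality 0..1)
    "payment-db-server": (7.0, 1.0),
    "hr-test-server":    (7.0, 0.3),
    "ecommerce-app":     (2.0, 1.0),
}

# Directed edges: risk flows from the source entity to the entities it supports.
supports = {
    "payment-db-server": ["ecommerce-app"],
    "hr-test-server":    [],
}


def effective_risk(name: str, damping: float = 0.5, _seen=None) -> float:
    """Intrinsic risk plus damped risk inherited from supporting entities,
    scaled by the entity's business criticality."""
    _seen = set() if _seen is None else _seen
    if name in _seen:  # guard against cycles in the relationship graph
        return 0.0
    _seen.add(name)
    intrinsic, criticality = entities[name]
    inherited = sum(
        damping * effective_risk(source, damping, _seen)
        for source, targets in supports.items()
        if name in targets
    )
    return criticality * (intrinsic + inherited)


for name in entities:
    print(f"{name}: {effective_risk(name):.1f}")
```

Even this toy model shows how criticality weighting and inheritance let two entities with identical intrinsic findings end up with very different effective risk.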

Whatever your ultimate risk management or analytics goals, using an actual risk model to represent your organization’s ecosystem, or simply thinking about the design of your program in data modeling terms, can help you realize the significant benefits of a more formal and structured process while identifying and reducing inefficiencies and gaps.
