The Makings of Quantitative Risk Assessment
October 16, 2015 by Syed Abdur

This is the third post in our ongoing series on IT risk assessments. In our first post we established critical foundational concepts and considerations. In our previous post we discussed different frameworks and how to best make use of them. In today’s post we will delve into the topic of qualitative versus quantitative risk assessment methods. This topic is important because there is much quackery in the industry: bad mathematics masquerading as quantitative analysis. We will get into some of the dos and don’ts of quant, including how you can start applying quantitative techniques now, regardless of program maturity.

Using Numbers Is Not Inherently Quant

Many tools, models, and methodologies claim to provide a quantitative risk analysis capability, but there is a great deal of misunderstanding around what is and is not “quantitative analysis.” In fact, it is quite common to find that nothing truly quantitative is happening, despite some rather complex calculations, because the creators of the method or formula failed to account for foundational principles of statistics.

Just because your “assessment” (or, more often, your data collection tool) makes use of numbers does not mean that you are doing quantitative analysis. In fact, depending on the type and nature of the numbers being used, and the subsequent manipulation of those numbers, you might be breaking mathematical laws in addition to not doing quant analysis.

Specifically, an understanding of this topic must start from foundational concepts: the difference between categorical data (labels like high, medium, and low), ordinal data (used for ranking and prioritization, as in first, second, third), and real number data (actual or estimated measured values). Only the last of these (real number, or numerical, data) generally provides a basis for quantitative analysis. As a general rule, only numerical data can be acted upon with standard arithmetic.

For example, if I ask you to take a list of five attributes (categorical data) and rank them in order of importance from 1 to 5 (ordinal data), then we are most definitely not doing a quantitative analysis. We’re doing a simple ranking exercise. You can take the ranked scores for each attribute and average them to help determine which attribute was deemed “most important,” and so on. However, that’s about the extent of the arithmetic you can defensibly perform on categorical and ordinal data (strictly speaking, the median is the more appropriate summary statistic for ordinal data, though mean ranks are a common shortcut).
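As a minimal sketch of that ranking exercise (the attribute names and rankings below are invented for illustration), the permitted arithmetic on ordinal data really is this limited:

```python
# Three reviewers rank five attributes from 1 (most important) to 5 (least).
# Attribute names and rankings are invented for illustration only.
rankings = {
    "confidentiality": [1, 2, 1],
    "integrity":       [2, 1, 3],
    "availability":    [3, 3, 2],
    "auditability":    [4, 5, 4],
    "usability":       [5, 4, 5],
}

# Averaging the ranks to see which attribute was deemed "most important"
# is about as far as the arithmetic can defensibly go.
avg_rank = {attr: sum(r) / len(r) for attr, r in rankings.items()}
for attr in sorted(avg_rank, key=avg_rank.get):
    print(f"{attr}: mean rank {avg_rank[attr]:.2f}")
```

Anything beyond this (summing weighted ranks, multiplying them together) treats the ranks as measurements, which they are not.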

Now… here is where things can start to get tricky. You cannot take this data, add all the values together, and start multiplying them by arbitrary weighting factors. You cannot decide that today a 1 (“most important”) is worth 100 points while a 5 (“least important”) is worth only 10 points, and then add, multiply, and apply logarithms. You collected ordinal stack-rankings, not real number data, and treating them otherwise violates important mathematical principles.

Sadly, this is exactly what we see happening time and time again in all manner of “risk assessment” programs. We see categorical ratings like Critical, High, Medium, Low, and Very Low converted into arbitrary numerical values and manipulated arithmetically in violation of statistical rules. While it is fine to associate those labels with ordinal values in order to calculate a straight average (because there’s an implied ordinal ranking), you cannot arbitrarily assign real number values to these labels and then start applying arithmetic-based quantitative analysis techniques.

This point often confuses people. We have seen many elaborate spreadsheets that collect variously ranked data and then perform confounding arithmetic, producing single arbitrary numbers that not only carry no inherent meaning, but also reflect any number of biases (from unstated assumptions) introduced into the calculations, often without justification.

If there is one thing you take away from today’s post, let it be this: just because you are using numbers does not mean you can perform standard arithmetic on them. It is incredibly important to understand foundational statistics principles and realize that ordinal rankings are essentially a form of categorical data, which means you cannot rightly add, multiply, and so on. After all, you would never say Ford + Chevy + Audi = 79, right? Nor would you take it a step further and say 3*Ford + 2*Chevy + 100*Audi = 79. These statements seem absurd, and yet if you look at many “risk assessment” methods in practice today, we see exactly this happening, except Ford is High, Chevy is Medium, and Audi is Low (or some such). Beware quantitative analysis claims!
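To see concretely why this matters, consider a sketch (the findings, ratings, and encodings below are all invented, mirroring the arbitrary weightings we see in real spreadsheets): the same ordinal labels, under equally “plausible” numeric encodings, can make either finding look worse.

```python
# Two findings, each rated on two ordinal scales (labels only).
findings = {
    "finding_A": ["High", "Low"],
    "finding_B": ["Medium", "Medium"],
}

# Three arbitrary-but-plausible numeric encodings of the same labels.
encoding_1 = {"Low": 1, "Medium": 2, "High": 3}  # linear
encoding_2 = {"Low": 1, "Medium": 3, "High": 9}  # "severity grows fast"
encoding_3 = {"Low": 1, "Medium": 5, "High": 6}  # "Medium is nearly High"

def score(finding, encoding):
    """Sum the encoded ratings -- the kind of arithmetic that is NOT valid."""
    return sum(encoding[label] for label in findings[finding])

# Encoding 1: A = 3+1 = 4,  B = 2+2 = 4  -> a tie.
# Encoding 2: A = 9+1 = 10, B = 3+3 = 6  -> A looks worse.
# Encoding 3: A = 6+1 = 7,  B = 5+5 = 10 -> B looks worse.
# The ratings never changed; only the arbitrary encoding did.
```

When the conclusion depends entirely on an encoding nobody can justify, the “score” is telling you about the spreadsheet author, not the risk.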

Getting a (Real) Start With Quantitative Analysis

Now that you have been suitably warned about bad math masquerading as quantitative analysis, let’s look at ways to apply real, legitimate quantitative methods in a manner that will benefit your program, regardless of its maturity.

First and foremost, a great place to start with quantitative analysis is, in fact, during context setting rather than in the risk assessment itself. Specifically, a key hurdle to clear in any risk management program is establishing a reasonable, rational basis for business impact. What’s important to the business? What sort of (financial) losses can the business incur without experiencing “material harm” (a legally meaningful term)? Which lines of business, applications, systems, or services provide the most and least revenue, and what is their tolerance for disruption?

Answering these questions can provide a valuable basis for starting with quantitative risk analysis. Note that we haven’t even started to delve into probability estimates at this juncture. Keep it simple. Start establishing actual, ranged value estimates (ranges are always best – see Douglas Hubbard’s book How to Measure Anything). Speak with people in the organization who can authoritatively answer these questions. Do not rely on your own best guess, nor stay within the IT department in hopes that techies can magically intuit actual business sensitivities (it turns out we’re not very good at estimating business impact).
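A minimal sketch of what ranged impact estimates might look like once collected (the service names and dollar figures below are invented; real values must come from the business owners you interview):

```python
# Hypothetical 90%-confidence ranges for hourly revenue loss per service,
# elicited from business owners. All figures invented for illustration.
hourly_loss_usd = {
    "order_processing": (40_000, 120_000),
    "customer_portal":  (5_000, 25_000),
    "internal_wiki":    (0, 500),
}

# Even without any probability estimates, ranges alone support useful
# prioritization: sort services by the upper bound of their exposure.
by_exposure = sorted(hourly_loss_usd.items(),
                     key=lambda kv: kv[1][1], reverse=True)
for service, (low, high) in by_exposure:
    print(f"{service}: ${low:,} - ${high:,} per hour of downtime")
```

Note how a range honestly communicates uncertainty in a way a single made-up point value never can.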

Once you have established an approach for collecting basic impact information, then and only then does it make sense to mature your practices into more advanced quantitative topics, including probability estimates. In moving on to these more advanced stages, however, we highly recommend a good grounding in statistics and/or data science. You may find a method like Open FAIR to be of interest (as discussed in the previous post), and the associated Open FAIR training from The Open Group may be useful. However, you need not adhere to any single method, and you are encouraged to explore statistics and data science thoroughly to understand the correct ways to create, test, and refine quantitative models for your organization.

Right-Sizing Risk Assessment Efforts: Do You Even Need Quant?

One question you might be asking at this point is just how much quantitative analysis is worthwhile. We think the answer is some, definitely, up to a point, but perhaps falling short of full-fledged decision analysis and management (it’s still fairly rare to see decision trees in action in the real world, for those who may have encountered them in academia).

The simple fact is that organizations have been muddling through without quantitative analysis all this time, and they seem to be surviving. In fact, this observation can be broadened: despite a lack of reasonable security protections and in the face of massive breaches, nobody is saying “Oh, what a pity seeing all those empty storefronts with the red bullseye logo” or “Remember when we could buy home improvement products from those large, orange-signed warehouse stores?” Despite the losses piling up, businesses are proving remarkably resilient, even if just out of sheer luck.

So… to the question at hand… do we even need quantitative analysis? How do we “right-size” our risk assessment activities?

The answer, simply, is this: You’re already doing risk assessment, whether or not it’s formalized. You’re weighing options. You’re roughly considering pros and cons. You’re trying to balance tradeoffs and hoping that your decisions are good ones that improve value while decreasing loss potential. You are likely considering business impact, albeit in a vaguely qualitative manner. For that matter, we do risk calculations in our heads every day. “Should I get on this airplane?” or “That fish smells funny, should I really eat it?” or “Let’s not drink the scummy green water that smells of petroleum byproducts.” are all examples of the kinds of risk management thoughts that pass through our brains every day. For the most part, we’re fairly good at making decisions.

The question, then, is whether we can get better at making decisions, and how best to go about it without falling into “analysis paralysis” (being unable to make a decision), without making decisions worse (for example, by relying on bad assumptions), and without creating processes so slow or unwieldy that they are bypassed or too inefficient to be worthwhile.

Yes, this can be done. No, it need not be excessive or inefficient. It may be as simple as establishing baseline estimates for business impact in key areas, from which you can drive short conversations like: “We know that if this application or service is down for an hour during peak business hours, it will cost us X dollars per hour. Thus, we should look at investing in the resilience of this application, up to X dollars, to ensure we are reasonably protected against downtime.” Notice, again, that at no point do we need to go down the rabbit hole of probabilities. Rather, it’s simply a better-informed conversation.
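That conversation reduces to arithmetic simple enough for a whiteboard. A sketch, with invented numbers standing in for the ranged estimates gathered during context setting:

```python
# Invented figures for illustration; replace with estimates from the business.
loss_per_hour = 50_000        # revenue lost per hour of peak-time downtime (USD)
plausible_outage_hours = 4    # a credible worst-case outage duration

# A single outage of that length puts this much revenue at risk, which
# bounds how much resilience investment is worth discussing.
exposure = loss_per_hour * plausible_outage_hours
print(f"A single {plausible_outage_hours}-hour outage costs ~${exposure:,}; "
      f"resilience spend up to that order of magnitude merits discussion.")
```

No probabilities, no distributions: just a real dollar figure that turns a vague “this app is important” into a bounded investment conversation.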

As we become comfortable with introducing basic quantitative (real number) values into a conversation in order to drive more rational decision-making, then and only then can we look into better formalizing processes and discussions, and then and only then can we start getting more elaborate in our calculations, likely leveraging tools to help speed data collection and calculations (including using various statistical models and methods). Until that point, begin with what you can, where you can. Slowly change unfounded “belief state” assertions to being fact-based, and then iterate and evolve from there.

In our upcoming fourth and final post in the series, we will conclude by looking at how to leverage platforms to improve risk management programs. We will take a look at common ad hoc practices (Excel! SharePoint!), evaluate the pros and cons of using a platform, and end with a discussion of how leveraging platforms can lead to improved communication and visibility into risk states.
