Leveraging Standards for Risk Assessment
September 21, 2015 by Syed Abdur

In our first post in this series, we introduced core concepts of risk assessments, including where they fit within the overall risk management process and the exceptional importance of first completing the context-setting stage of the process. In that post, we looked at ISO 31000 as a reasonable model for an overall risk management process, but stopped short of diving into specific standards (frameworks and methodologies) for risk assessment.

In today’s post we will look at standards and how your organization can best leverage them to improve risk assessment efficiency and effectiveness. We will also highlight some common data collection standards that may be useful. The key takeaways for this article are: how to determine whether a standard will benefit your organization, what sorts of standards might be useful and how to pick one, and a brief survey of the major standards available.
Without further ado, let’s delve into the first question.

Do I Even Need a Standard?
At first, this may seem like a silly question. On the one hand, it may seem absurd that an organization would need a standard for something so basic and foundational. As ISO 31000 readily demonstrates, there is nothing “whiz bang” special about risk management and risk assessments: create a process, gather data, conduct analysis, and make the best decision possible. But is that true, or are we oversimplifying matters?

On the other hand, standards can seem quite logical and appropriate – maybe even appealing – to help us construct and operate our risk management programs. However, not all standards are created equal, nor in fact do they accomplish the same things. As we’ll discuss later in this post, some standards can be quite large and overwhelming, while other standards have a more focused purpose and may need to be used in conjunction with other standards.

The basic answer to whether a standard will be useful is fairly straightforward: all standards should be approached as guides that help fill gaps in your overall risk management program. Standards can help ensure you have the right steps in your overall process, and they can provide further value by helping you identify opportunities to refine and improve process definition and execution.

Choosing a Good Fit
Thankfully, most standards conform to the guidelines set forth in ISO 31000, which means picking a standard as a reference for risk management program development should not be scary (there are, of course, exceptions to this rule – COBIT 5, in particular). As such, your quest for a standard should focus on finding something that works for your organization without being so at odds with corporate culture that failure is all but assured.

That may seem scary, but at heart the point is this: Read several standards and find the one that sounds and/or feels most like your organization’s culture. How does your organization function? How do people interact? What is the nature of the business being conducted? What sort of backgrounds do people have (e.g., public vs. private sector)? To what regulations is your organization subject?

As is often the case with risk management, it is imperative that you know your organization, and know it well. Risk assessments are as much art as science, and thus require a good sense of how people think and behave. For example, consider the differences in culture, operations, and personnel among a stringent military organization, a manufacturer, a white-collar financial services company, a Wall Street trading house with sensitive real-time processing requirements, and the typically laid-back, laissez-faire environment of most higher education institutions.

Proper fit is key to success. If you want people to listen to you, hear you, and take you seriously, then you cannot present an approach that is radically different from how business is conducted, or worse, that interferes with their ability to complete their assigned duties.

Common Standards

  • Information Security Forum Information Risk Analysis Methodology (ISF IRAM): ISF IRAM is an interesting reference because it breaks down the process in a meaningful and useful manner. ISF’s overall approach to risk assessment starts with a business impact assessment (BIA), then performs a threat and vulnerability analysis, and then moves into control selection. This roughly approximates the ISO 31000 process (context-setting, assessment, and remediation), and the tooling support can be interesting. The biggest downsides to ISF IRAM are that organization membership can be expensive and the tools themselves may not easily integrate with a risk management or GRC platform. Nonetheless, studying ISF’s approach, and any open materials you can find on how they conduct the BIA, may help you refine your own.
  • ISACA COBIT 5 for Risk: A very commonly referenced standard, COBIT 5 itself can be incredibly overwhelming because it is intended to be a full-scale IT governance program, not just a risk management standard. ISACA has produced subsidiary documents specific to defining and conducting a risk management program, and that documentation can be a useful reference. One of the largest challenges with COBIT 5 is learning enough about it to get through scoping, design, and implementation; specialized resources are often required. However, the average organization is not in financial services (the primary audience), and thus we recommend reading the COBIT 5 for Risk documentation as a comprehensive reference, but not as a standard that any sane person might try to implement as-is.
  • ISO 27000 series: As noted before, ISO 31000 provides a generic guideline for the risk management process and its subsidiary components. ISO 27005 refines this approach in greater detail and is designed to align with the Information Security Management System described in ISO 27001 and ISO 27002. For organizations with an international presence, those subject to frequent external audits, or those seeking an ISO certification as a liability shield, it is (obviously) useful to become acquainted with ISO 27005. Beyond that, the standard does not provide much value beyond ISO 31000 itself, and thus may have limited reference value outside of pursuing a certification.
  • OCTAVE Allegro: A product of CERT’s Risk Resiliency Center, OCTAVE Allegro is the most recent risk management publication in the OCTAVE methodology series. Overall, OCTAVE can be a good fit for organizations that tend toward an analytical or engineering mindset. It includes supplemental worksheets that can be fairly easily integrated into risk analytics tools, and it offers a reasonable amount of reference material that can help in identifying gaps and opportunities for improvement. CERT offers training for OCTAVE, which could provide additional value, especially for organizations just getting started with a formalized risk management program.
  • Open FAIR: In contrast to the other standards listed here, Open FAIR is not generally focused on the overall risk management process (not completely true, but bear with us), but rather provides a discrete approach to conducting risk assessment and risk analysis. That said, following the entire Open FAIR approach from start to finish does take you through context-setting and risk assessment, and in some cases it may even be used for analyzing risk remediation options. One of the most important and valuable components of Open FAIR is the Risk Taxonomy, which takes the concept of “risk” (defined therein as “probable frequency and probable magnitude of future loss”) and factors it into easily understood components. Open FAIR is intended as a quantitative risk assessment methodology, which also makes it unique in this list (a minimal quantitative sketch follows this list). However, the Risk Taxonomy itself can absolutely be used in a qualitative manner to quickly “back-of-the-napkin” assess a situation. Such a snap assessment can often be useful as an initial triage step before deciding whether an in-depth risk assessment is warranted. The Open FAIR standard is written in accessible language and can be a worthwhile resource for shaping your thinking and approach to risk assessment and risk management.
  • USG Standards: The United States Government’s National Institute of Standards and Technology (NIST) publishes a large number of freely available standards on a range of useful topics, including an entire series on risk management and risk assessment that generally conforms to the ISO 31000 guideline and provides worthwhile information on structuring an approach. Unsurprisingly, the NIST methodologies tend to be a bit more bureaucratic in nature, but that attribute may fit well with some organizations. We recommend reviewing NIST Special Publications 800-39, 800-37r1, and 800-30r1.
  • Regulatory guidance or requirements: When designing and refining your approach to risk assessment, be mindful that most regulations and regulatory guidance from the past decade include, to some degree, requirements for risk management or risk assessment. Be sure to account for any such requirements when designing your approach; you may find that certain standards align better with these stipulations than others.
  • Data collection tools: Most risk management platforms include a reference library of questionnaires to aid in data collection. One common standard is the Shared Assessments SIG and SIG Lite set of questionnaires. If your organization works in or with financial services, you may already be familiar with these tools. Even if your organization is not in financial services, or you do not foresee using them directly, they can be worthwhile references when developing your own data collection tools. That said, bear in mind the point made in our first post: data collection is not the same as risk assessment or risk analysis. Data is just the input, not the actual evaluation (a brief sketch of this separation also follows the list).
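
To make the Open FAIR factoring concrete, here is a minimal, back-of-the-napkin Monte Carlo sketch in Python. It treats risk as loss event frequency (LEF) times loss magnitude (LM), drawing both factors from triangular distributions built from min / most likely / max estimates. The distribution choice, parameter values, and function names are illustrative assumptions for this post, not anything prescribed by the Open FAIR standard.

```python
import random

def annual_loss_exposure(lef, lm, trials=10_000):
    """FAIR-style Monte Carlo sketch of annualized loss exposure.

    lef: (min, most likely, max) loss event frequency, in events per year
    lm:  (min, most likely, max) loss magnitude, in dollars per event
    """
    totals = []
    for _ in range(trials):
        # Draw how many loss events occur in this simulated year.
        events = round(random.triangular(lef[0], lef[2], lef[1]))
        # Sum an independently drawn magnitude for each event.
        totals.append(sum(random.triangular(lm[0], lm[2], lm[1])
                          for _ in range(events)))
    totals.sort()
    return {"median": totals[trials // 2],
            "p90": totals[int(trials * 0.9)]}

# Hypothetical scenario: 0.5 to 4 loss events per year (most likely 1),
# costing $10k to $250k per event (most likely $50k).
print(annual_loss_exposure((0.5, 1, 4), (10_000, 50_000, 250_000)))
```

Even a model this crude forces explicit estimates of frequency and magnitude, which is much of the taxonomy’s value; the same factoring works qualitatively by substituting high/medium/low ratings for the numbers.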
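
And to underline the closing point about data collection, here is a minimal sketch of keeping collected questionnaire responses separate from any analyst evaluation. The field names and question identifiers are hypothetical and are not drawn from the actual Shared Assessments SIG.

```python
from dataclasses import dataclass, field

@dataclass
class QuestionnaireResponse:
    question_id: str   # hypothetical SIG-style control question identifier
    question: str
    answer: str        # the raw answer exactly as collected
    evidence: list = field(default_factory=list)  # supporting artifacts

@dataclass
class VendorAssessment:
    vendor: str
    responses: list                  # the input: collected, not yet evaluated
    analyst_rating: str = "unrated"  # the evaluation, produced separately by
                                     # risk analysis after data collection

assessment = VendorAssessment(
    vendor="Example Corp",
    responses=[QuestionnaireResponse(
        "A.1", "Is there a documented information security policy?", "Yes")],
)
print(assessment.analyst_rating)  # still "unrated" until analysis happens
```

Keeping the rating out of the response records makes it hard to confuse what was collected with what was concluded.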


As we have discussed throughout this post, standards can provide value for defining and refining your risk management and risk assessment approach. Moreover, standards for data collection (such as from Shared Assessments) can provide additional value in improving overall performance. However, finding a standard can at times be daunting, and implementation can be soul-crushingly overwhelming.

It is thus important to approach standards with a learning mindset: investigate different theories of risk assessment, then assimilate the pieces that best match your organization’s culture, rather than making wholesale changes that are at odds with how business is performed. As always, risk management must be nuanced and integrate seamlessly with existing practices and processes in order to succeed. Done poorly, the risk management process will be bypassed in the name of “getting work done” and, as a result, will falter (if not fail completely).

In our next post in this series we will be exploring how to “right-size” risk assessments, as well as discussing the pros and cons of qualitative versus quantitative risk assessments (including defining just what those terms mean). Our decisions can only be as good as the data we collect and analyze, which means it’s important to understand what both good and bad data look like. You may be surprised by what we have to share.
