IT Risk Assessment Primer – Don’t Overlook the Fundamentals
September 8, 2015 by Syed Abdur

Welcome to the first in a series of posts covering foundational topics in IT risk assessment and management. As a risk analytics company, we are often asked by clients where to start, how to optimize their risk management approach, and what types of practices and considerations should go into risk management program planning and execution. This series will provide some common answers that will be helpful for launching and tuning your programs.

The phrase “Risk Assessment” has become a common part of our technical vernacular, yet it’s quite surprising how variable our understanding of the phrase is and what it actually means. In this post we will explore the concept of the risk assessment, where it fits within an overall risk management program, and how we typically enter into the risk assessment process.

For starters, let’s address what is not a risk assessment. Quite simply, data collection, such as via a questionnaire, is not itself “risk assessment.” Rather, it’s the analysis and evaluation of all collected data, in context and aimed toward producing a statement on risk, that is the core objective of a risk assessment. Simply posing questions to constituents, while potentially worthwhile, is not an act of analysis, and thus we must be very careful not to misconstrue the data collection as the “assessment” itself. This, alas, is a common mistake within organizations involved in IT risk management.

The Risk Management Process

A great starting point for a discussion of risk assessment is to first talk about the overall risk management program and process (Figure 1). Aligning with the three core sub-processes within ISO 31000’s risk management process (establishing context, risk assessment, and risk treatment), we see that risk assessment is in fact the second step, not the first. Step 1 is to establish the context within which we will conduct the risk analysis; that context is crucial to the determinations that will ultimately inform risk treatment decisions. Let’s drill down into these topics.

Context-setting is a critically important step, and one often overlooked. In setting context, we clearly define the target and environment of the assessment: the technologies, the data, the stakeholders, the environment, and the business context (such as a business impact analysis and a general rubric for risk tolerance, capacity, and/or appetite). It’s only after framing the context that we can effectively drill down to the next level and conduct the assessment itself.
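To make the output of context-setting a bit more concrete, here is a minimal sketch of how the captured context might be recorded; the class and field names are illustrative assumptions on our part, not terminology prescribed by ISO 31000 or any particular tool:

from dataclasses import dataclass

# Minimal sketch only -- class and field names are illustrative assumptions,
# not prescribed by ISO 31000 or any specific platform.
@dataclass
class AssessmentContext:
    """Record produced by the context-setting step."""
    target: str                      # system, application, or portfolio being assessed
    technologies: list[str]          # platforms and components in scope
    data_classifications: list[str]  # e.g., "PII", "cardholder data"
    stakeholders: list[str]          # owners, SMEs, and decision-makers
    business_impact: str             # summary of the business impact analysis
    risk_tolerance: str              # agreed tolerance/appetite rubric
    decision_supported: str          # the decision this assessment will inform

# Example: context for a pre-deployment hardening review.
context = AssessmentContext(
    target="customer portal (production candidate)",
    technologies=["Linux", "nginx", "PostgreSQL"],
    data_classifications=["PII"],
    stakeholders=["product owner", "platform engineering", "security"],
    business_impact="a portal outage halts new customer onboarding",
    risk_tolerance="no high-severity exposure in internet-facing components",
    decision_supported="approve or delay the production deployment",
)

The value is simply that the context is written down once and can be referenced directly by the assessment and treatment steps that follow.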

Risk Assessment is the step where we carry out data collection and analysis in light of the information gathered during context-setting (including the definition of the assessment target and the risk and business context). We will have already determined the purpose of the assessment in terms of the type of decision(s) to be supported, and we should have a solid understanding of how the output should be structured in order to be useful for decision-makers. For example, simply producing a magically calculated aggregate number may not provide any value whatsoever if the purpose of the exercise is to determine whether or not a given environment is adequately secured against a defined threat actor.

Risk Treatment (or remediation) is the final core sub-process within the risk management process. In this step we take the output of the risk assessment and discuss options with the business owners/stakeholders, such as whether the assessed risks are palatable to the business or whether additional controls should be adopted to reduce the identified exposure levels. Again, the output of the risk assessment should be in a format that is natively understood by your target audience, and it should align well with the desired use case for the report (that is, it should be tailored to how the report is to be used).
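As a rough illustration of the treatment conversation, the short sketch below compares each assessed risk against the tolerance agreed during context-setting and suggests an option for the risk owner to confirm; the severity scale and the threshold rule are assumptions made purely for illustration, not a prescribed methodology:

# Illustrative sketch: map assessed risks to candidate treatment options.
# The severity scale and threshold rule are assumptions, not a prescribed
# methodology; the final decision always rests with the risk owner.
SEVERITY = {"low": 1, "moderate": 2, "high": 3, "critical": 4}

def propose_treatment(assessed_risks: dict[str, str], tolerance: str) -> dict[str, str]:
    """Suggest 'accept' or 'treat (add controls)' for each assessed risk."""
    limit = SEVERITY[tolerance]
    return {
        risk: "accept" if SEVERITY[level] <= limit else "treat (add controls)"
        for risk, level in assessed_risks.items()
    }

# Example: two findings measured against a 'moderate' tolerance.
print(propose_treatment(
    {"unpatched internet-facing server": "high",
     "weak password policy on an internal tool": "moderate"},
    tolerance="moderate",
))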

Putting everything together, we find that it is indeed critical that information be gathered first, in the context-setting stage, before any sort of assessment is actually performed. If you were to rely only on the wording of many regulations and standards, you might not realize how important it is to understand where risk assessments fall within the overall risk management process. Getting these steps right will make life easier, and it will lead to much happier customers (the recipients of the assessment output).

What and When Should You Assess?

Now that we have a general understanding of the risk management process and where risk assessments fit overall, the next logical question is “well, when do we need to do these things anyway?” At first blush this may seem like an easy question to answer, but it turns out there isn’t typically just a single answer; there are two or three possible approaches.

You can perform an assessment, of varying depth and rigor, as a contributing data point for just about any decision. In fact, we do this all the time in our personal lives, implicitly and without any sort of formal framework. Why we in IT think this has to be overcomplicated is something of a head-scratcher, but in reality there are ways in which we can rapidly collect data and perform analysis toward improving decision quality. The mere step of collecting or maintaining contextual information can make a world of difference in overall decision quality.

When thinking about what to assess and when to assess it, there are a few scoping questions to consider:

  • Is this a periodic and/or recurring assessment or is it a one-time thing?
  • Are you assessing a large portfolio or a specific target environment?
  • What level of decision is being supported by the assessment (tactical/operational or strategic)?

These questions are important to understand because the type of assessment you conduct will need to vary in order to meet the needs defined in scoping. Consider the following risk assessment scenarios, along with the short sketch that follows them:

  1. If conducting a risk assessment that is tactical in nature to determine whether or not an environment, as built/designed, is suitably secured (hardened) for production deployment as part of a defined and limited production timeline, you typically cannot afford to halt all work on the project while you spend a couple weeks collecting information, analyzing it, and considering a variety of threat scenarios. In this scenario you will be providing input for real-time decisions, primarily for a technical audience, as to whether or not suitable controls are in place.
  2. If conducting a risk assessment that is strategic in nature, looking at an entire portfolio of products or services (for example, a cardholder environment), then you almost certainly can afford (and will need) at least a couple weeks to collect and analyze data. In this scenario, your output should be quite different, looking for broader themes and patterns, and addressed to a higher-level business audience.
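To tie the scoping questions above to these two scenarios, here is a small sketch that maps the scoping answers to a rough assessment approach; the names, enumerations, and mapping rule are illustrative assumptions rather than a formal decision model:

from dataclasses import dataclass
from enum import Enum

# Illustrative sketch: scoping answers drive the type of assessment performed.
# All names and the mapping rule below are assumptions for illustration only.
class Cadence(Enum):
    ONE_TIME = "one-time"
    RECURRING = "recurring"

class Scope(Enum):
    SPECIFIC_TARGET = "specific target environment"
    PORTFOLIO = "large portfolio"

class DecisionLevel(Enum):
    TACTICAL = "tactical/operational"
    STRATEGIC = "strategic"

@dataclass
class ScopingAnswers:
    cadence: Cadence
    scope: Scope
    decision_level: DecisionLevel

def assessment_approach(answers: ScopingAnswers) -> str:
    """Return a rough approach suggested by the scoping answers."""
    if answers.decision_level is DecisionLevel.TACTICAL and answers.scope is Scope.SPECIFIC_TARGET:
        return "rapid, focused review feeding real-time decisions for a technical audience"
    if answers.decision_level is DecisionLevel.STRATEGIC or answers.scope is Scope.PORTFOLIO:
        return "multi-week portfolio analysis surfacing broader themes for a business audience"
    return "clarify scope further before choosing an approach"

# Scenario 1: a one-time, pre-deployment hardening check on a single environment.
print(assessment_approach(
    ScopingAnswers(Cadence.ONE_TIME, Scope.SPECIFIC_TARGET, DecisionLevel.TACTICAL)
))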

Engagement to perform a risk assessment can come about through a variety of means and methods. At the most basic level, engagement may come through a simple conversation, email, or form-based request. Alternatively, certain types of assessments may be integrated into key processes – such as procurement, project management, or M&A activities – to ensure that a proper risk assessment (one that includes IT/information risk considerations) is performed, and performed early enough in the project to adequately account for IT or information risk. Lastly, it may also be appropriate to perform portfolio assessments on a regularly scheduled, recurring basis (such as quarterly or annually).

Ownership and Common Actors

The last topic we wish to highlight in this post is the role of people in the risk management and risk assessment processes. As has already been noted, the context-setting stage should capture important information prior to commencing the risk assessment itself. This information must include identifying the owner of the resultant assessment report (the person performing the work is rarely the owner, since risk analysts don’t generally own the identified risks). Typically, the owner will be someone in management (either business or technical, depending on the scope of the assessment) who is charged with making a decision or recommendation. It is essential that the risk assessment output be tailored to their specific needs, including ensuring that the report is written in a manner they can understand and use.

Beyond the risk owner, there will then often be the person performing the risk assessment (we’ll just refer to them as the “risk analyst” here) and the other stakeholders and subject-matter experts who will help provide valuable inputs for context-setting.

The role of stakeholders and subject-matter experts should not be underestimated. The worst thing the risk analyst can do is fail to seek out those people with a vested interest in the project or with specialized knowledge that is important for improving data quality and, by extension, decision quality. For example, IT professionals are notoriously bad at estimating business impact without the assistance of someone from the business. Think of all the IT and cybersecurity “sky is falling” moments trumpeted over the years, only to fall flat in the face of reality. The simple fact is that, despite all the problems we see with IT and cybersecurity, businesses still manage to continue to exist and function. That alone is a testament to resiliency.

As such, it is very important not to rely on a single source for all or most of the data. Instead, find the people who truly know the answers (verifiably!) and get them involved in the process (early!).

At the end of the day, all of the considerations discussed in this post are critical to achieving success and demonstrating value. Knowing that risk assessment is not itself a starting point is key. Context is all-important: good decisions cannot derive from poor assumptions or bad (or non-existent) data. Further, it’s imperative that the right people be involved and that the output of the risk assessment be properly tuned to the target audience. Tuning the output is also a key facet of context-setting, again highlighting the importance of not jumping into later phases too quickly.

And, lastly, remember this motto as it pertains to being engaged to perform a risk assessment: Semper Gumby! (always flexible). Work diligently to establish hooks into key processes, but resist the urge to make the risk assessment process so rigid and inflexible that people can only engage the risk analyst through a very narrowly defined set of circumstances. Every opportunity to have a risk management conversation should be welcomed. Creating friendly conditions for engagement and conversation is key to success, both today and in the future.

In future posts we will talk about how to leverage risk assessment standards, some of the key differences between – and considerations for – qualitative vs. quantitative risk assessment, and how to leverage platforms to improve the overall risk management program and process. The next post in this series will look at common risk assessment standards and how to best leverage them within risk management programs.
