Cyber risks are prevalent in all healthcare institutions, and understanding how to mitigate those risks is especially important in today’s cyber landscape. Cyber risk is the sum of all IT risks that can potentially lead to the loss or exposure of critical data, financial damages, reputational damages, and operational stoppages due to a data breach or data leak. Measuring cyber risks involves determining the likelihood and impact of each cyber threat.

The healthcare industry is one of the most heavily targeted industries in the world, both because its networks, systems, and computers are often poorly protected and because of the vast amounts of protected health information (PHI), medical records, and patient data its organizations handle. Hackers and cybercriminals frequently target healthcare operations because many providers have weak cybersecurity practices.

Without proper cyber risk management processes, institutions put themselves at extreme risk of severe cyber attacks. Before these organizations begin the risk remediation process, it’s important for them to understand how they can measure the entirety of their cyber risks and which ones they need to prioritize.

Most Common Healthcare Cybersecurity Risks

Although cybersecurity threats are consistent across most industries, the most common risks that affect the healthcare industry are:

  1. Phishing and social engineering attacks
  2. Malware or ransomware attacks
  3. Distributed denial-of-service (DDoS) attacks
  4. Third-party data breaches
  5. Open or vulnerable ports
  6. Outdated, unpatched software and applications
  7. Insider threats (malicious and unaware)
  8. Third-party or supply-chain attacks

Learn how to reduce cyber risks for your organization >

How Are Cyber Risks Measured in Healthcare?

The most common ways for healthcare organizations to measure their cyber risks are:

  1. Perform comprehensive risk assessments and vulnerability assessments
  2. Measure IT capabilities against various cybersecurity frameworks and regulations
  3. Use qualitative and quantitative risk models
  4. Implement threat intelligence

Ideally, all healthcare providers should use some combination of these four methods to measure the totality of their cyber risks. Once all the cyber risks have been identified and documented, institutions can begin the mitigation and remediation processes to reduce their attack surfaces.

If you haven't yet established a cybersecurity program for your healthcare organization, this ultimate guide will help.

Learn how to choose the best healthcare attack surface management product >

1. Performing Comprehensive Risk Assessments

Risk assessments are essential to identifying and measuring potential cyber risks. The process for conducting risk and vulnerability assessments can be broken down into the following steps:

  1. Asset classification - By identifying all the assets that hold risk within the IT ecosystem, organizations can begin to assign value, criticality, and operational importance to these assets. The assets with the most value and importance hold the highest risk impact and should be addressed first.
  2. Threat analysis - A threat analysis identifies and documents all possible threats that could exploit vulnerabilities and compromise sensitive data, such as health records or patient information. This step also scores each risk by severity and flags the most critical for immediate remediation.
  3. Risk analysis - This step can involve qualitative and quantitative risk models to analyze the impact and likelihood of cyber threats exploiting an organization’s cyber risks. A healthcare institution’s total risk exposure can generally be calculated using the formula: Risk Exposure = Threat Probability × Asset Value × Potential Loss
  4. Risk prioritization - Before remediation processes begin, the final step of an assessment determines which risks should be remediated first based on severity level, impact, and likelihood. Items labeled as “high-risk” with a high likelihood of occurring and a high cost to the business should be at the top of the prioritization list.
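The risk-analysis and prioritization steps above can be sketched in a few lines of code. This is a minimal illustration of the Risk Exposure formula, not an implementation of any specific product; the asset names, probabilities, and dollar figures are hypothetical examples.

```python
# Minimal sketch of risk analysis and prioritization.
# All asset names, probabilities, and loss figures below are hypothetical.

def risk_exposure(threat_probability, criticality, potential_loss):
    # Risk Exposure = Threat Probability * Asset Value * Potential Loss,
    # with asset value expressed here as a 0-1 criticality weight.
    return threat_probability * criticality * potential_loss

assets = [
    # (asset, annual threat probability, criticality weight, potential loss in $)
    ("EHR database", 0.30, 1.0, 500_000),
    ("Patient portal", 0.20, 0.8, 250_000),
    ("Billing system", 0.10, 0.6, 100_000),
]

# Score each asset and sort so the highest-exposure risks are remediated first.
prioritized = sorted(
    ((name, risk_exposure(p, crit, loss)) for name, p, crit, loss in assets),
    key=lambda item: item[1],
    reverse=True,
)

for name, exposure in prioritized:
    print(f"{name}: ${exposure:,.0f}")
```

Sorting by exposure gives the prioritization list directly: the asset at the top of the output is the one to remediate first.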

2. Measuring IT Capabilities Against Cybersecurity Frameworks

The main regulation that all organizations in the healthcare sector should follow is the Health Insurance Portability and Accountability Act (HIPAA), which provides a mandatory set of controls, rules, and requirements that all healthcare entities and third-party service providers must follow. Although HIPAA is currently only enforced at the federal level in the US, its framework has been adopted worldwide by many different healthcare service providers.

Following defined controls and requirements gives these healthcare entities a roadmap and checklist of actions required to mitigate cyber risks and improve their security posture. For larger organizations in particular, such as nationwide hospital networks and health insurance providers, which have sprawling networks, thousands of third-party vendors, and easily overlooked corners of their security programs, maintaining compliance with these frameworks can save significant time, effort, and resources.

Healthcare organizations can also choose to implement other global cybersecurity frameworks, such as NIST (National Institute of Standards and Technology) CSF or ISO 27001, to further improve their information security and compliance management practices. Institutions that fail to comply with mandatory requirements can face significant penalties and risk losing patient trust.

3. Using Qualitative and Quantitative Risk Models

Using qualitative and quantitative risk models can help organizations determine the likelihood of key risks occurring while examining the financial and reputational impact of those risks. Both risk models should be used to measure and communicate risks to stakeholders and senior management.

Qualitative Risk Analysis

Qualitative risk assessments are typically scenario-based and use hypothetical situations to determine perceived risk. Qualitative approaches are more subjective and require an understanding of the reputational impact and public-facing risks.

A common method of conducting qualitative analysis is mapping specific risks on a chart based on their likelihood of occurring (Very Low to Very High) against their impact (Very Low to Very High). Risks considered “high impact, high likelihood” represent a severe risk to the organization and must be remediated as soon as possible.

The main drawback of using only a qualitative risk model is that many risks are judged ambiguously and lack consistency. Many risks are also over-inflated to account for the worst-case scenario without quantifiable data to support the classifications. Additionally, if multiple risks are considered “high impact, high likelihood,” it becomes tough for organizations to determine prioritization for mitigation and remediation.
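The likelihood-vs-impact mapping described above can be sketched as a simple scoring function. The severity bands and score thresholds below are one of many possible schemes, chosen purely for illustration:

```python
# A minimal sketch of a qualitative likelihood-vs-impact matrix using the
# Very Low..Very High scale described above. Thresholds are illustrative.

LEVELS = ["Very Low", "Low", "Medium", "High", "Very High"]

def severity(likelihood: str, impact: str) -> str:
    # Sum the ordinal positions of likelihood and impact, then map the
    # score onto a severity band -- one possible scheme, not a standard.
    score = LEVELS.index(likelihood) + LEVELS.index(impact)
    if score >= 7:
        return "Severe"    # "high impact, high likelihood" risks land here
    if score >= 4:
        return "Moderate"
    return "Low"

print(severity("Very High", "High"))  # a remediate-first risk
print(severity("Low", "Medium"))
```

Note that this sketch also exhibits the drawback discussed below: the band boundaries are subjective choices, so two assessors using different thresholds would classify the same risk differently.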

Quantitative Risk Analysis

Quantitative risk assessments approach measuring cyber risk from a statistical point of view to quantify the exact cost and likelihood of occurrence. Rather than rely on subjective judgments or relative determinations, all quantitative risks have an exact cost/benefit analysis behind them.

All relevant risk metrics are considered, including:

  • Asset value
  • The number of identified risks vs. risks monitored vs. risks remediated
  • The annualized rate of occurrence (ARO)
  • Single loss expectancy (SLE) and annualized loss expectancy (ALE)
  • Cost of risk management programs

Once the risks have been quantified, healthcare organizations can begin to determine their risk tolerance (risk appetite) and prepare for a range of loss exposure outcomes for any given risk or risks. A quantitative risk analysis can also determine the value of IT investments and if those investments have reduced potential costs of cyber risks.
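The standard relationship between the metrics listed above is SLE = asset value × exposure factor and ALE = SLE × ARO. The sketch below works through one hypothetical example; the asset value, exposure factor, and ARO are illustrative figures, not real data.

```python
# Minimal sketch of the standard quantitative risk formulas (SLE, ARO, ALE).
# All figures are hypothetical; real values come from loss data and estimates.

def single_loss_expectancy(asset_value, exposure_factor):
    # SLE = asset value * fraction of the asset lost in a single incident
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, aro):
    # ALE = SLE * annualized rate of occurrence (expected incidents per year)
    return sle * aro

# A $1M asset that loses 40% of its value per incident,
# with an incident expected once every two years (ARO = 0.5):
sle = single_loss_expectancy(asset_value=1_000_000, exposure_factor=0.4)
ale = annualized_loss_expectancy(sle, aro=0.5)

print(f"SLE: ${sle:,.0f}  ALE: ${ale:,.0f}/year")
```

Comparing ALE before and after a control is one way to value an IT investment: if a control costing $50,000/year cuts this ALE from $200,000 to $80,000, the net annual benefit is $70,000.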

Learn how to choose a healthcare cyber risk remediation product >

4. Implementing Threat Intelligence

Threat intelligence is the process of identifying all current and future cyber threats that have the biggest potential to affect an organization. It’s a proactive approach to safeguarding against attacks, rather than waiting for them to happen. Threat intelligence analysts or specialists typically have a complete understanding of the cyber threat landscape and provide guidance on how to mitigate the threats it contains.

Although threat intelligence doesn’t measure risk directly or in a quantifiable manner, it is used throughout the risk management process to help transform raw data into actionable items during the decision-making process. It is an extremely important part of any risk management program and can significantly reduce cyber risks and the chances of data loss.

For example, if information about a particular cyber threat is identified as a potential risk, that information is collected and stored for the organization to evaluate. Threat intelligence analysts then add value, or “intelligence,” to the information, using qualitative and quantitative data to support their findings.

Threat intelligence is conducted in what is called the “intelligence cycle,” a circular process in which the final step feeds back into the first:

  1. Planning and direction
  2. Data collection
  3. Data analysis
  4. Dissemination
  5. Re-evaluation and feedback
