Publish date
December 18, 2025

Cybersecurity Predictions for 2026: Human Risk, AI Data Leaks, and the Next Big Breach


Looking back at 2025, two mega-trends from the past have continued: First, data breaches remained a constant and continued to trend upward; and second, there was once again a headline disaster no one anticipated. 

The first point needs no elaboration; data breaches are like air pollution—an accepted nuisance that only occasionally becomes so severe that we wonder why we live like this. For the second point, I gesture toward the major incidents of recent years. MOVEit. CrowdStrike. Snowflake. Now, Salesforce. By definition, we were not prepared for these problems. On their own, these are each singular disasters, but over the course of years, a pattern has emerged. Something big is going to happen, again.

Looking ahead to 2026, we aim to identify the sweet spot of events that are likely yet unexpected. Anything that is obviously correct is not interesting; that’s the domain of statistical regression, not strategic extrapolation. These four predictions may not be right, but hopefully they at least make you see our present a little differently.  

1. We are on the cusp of a renaissance in human risk management

There are two trends in the threat environment that will drive innovation in human risk management. First, we have the rise of advanced persistent insider threats. We aren't talking about a rogue employee with a gambling debt; we are talking about state-level adversaries with long-running campaigns who have already been successful in infiltrating scores of Fortune 500 companies. 

Second, there is good evidence that security awareness training needs to be rebuilt from the ground up. Employees—heck, everyone, including your kids and parents—need education about information technology risks. But mandatory checkbox training does not improve security outcomes. How could it? If that's actually how human behavioral modification worked, everyone who saw an ad for Jenny Craig would have lost ten pounds.

Instead, decades of scientific research eventually gave us something that does work: Ozempic and next-generation GLP-1s. We need an Ozempic for cybersecurity. And like the long arc of research that delivered GLP-1s, the size of this market means that we'll eventually get it.

2. The total compromise of US infrastructure can’t be swept under the rug 

Speaking of data breaches at big organizations, the last year has seen the discovery of several cases where APTs had access to systemic US infrastructure for months or years. "There's a good chance this espionage campaign has stolen information from nearly every American," said the deputy assistant director for the FBI about the Salt Typhoon campaign that compromised around 200 organizations, including the U.S.’s principal telcos, AT&T and Verizon. 

While it's certainly better to have the attackers out, it's hard to believe the fallout has been fully remediated. The attackers didn’t just steal some collection of data; they were able to intercept virtually any communication for years. Just this year, attackers were able to use credentials stolen from Gainsight in one attack to launch another campaign on Gainsight’s customers. When we play that pattern out to everyone in America, it feels like we have not heard the last of this breach. 

Then again, that may be overstating the real impact. Year over year, data breaches have been increasing. The future may be much like the past: an ever-increasing number of data breaches and identity thefts. Perhaps the biggest change will be that we can no longer act surprised.

3. Platform power success stories are tomorrow’s targets

An influential business strategy to emerge from Web 2.0 was the idea of platform power—that by aggregating some otherwise disparate resource, a business could offer sufficient value to everyone who wanted that resource to also extract some portion of the value for themselves. Google’s search engine is the epitome of this pattern: everyone wants to find things on the internet, Google is the product through which users do that, and thus Google is able to take a tiny cut of the total value of internet traffic (which happens to total billions of dollars). 

The businesses that have succeeded as platforms, through which large amounts of value pass, have also created enticing targets for attackers. In years past, we saw the beginnings of this pattern in Cl0p attacks on file transfer appliances. These are not platforms with the prestige of Google, but they are gateways through which large amounts of value, in the form of sensitive data, pass.

In 2025, we saw the attack pattern expand to one of the great tech platform companies: Salesforce, the largest provider of software for managing customer sales information. Through various means (social engineering, stolen credentials, and third-party integrations), hackers pilfered Salesforce instances throughout the year.

The throughline from MOVEit to Salesforce is platform power. Despite very different tactics (zero-day vulns versus social engineering and stolen creds), these attacks share a common form. By aggregating value, companies have benefited themselves, but they have also created a treasure map for attackers.

4. AI will keep leaking data, but not the way you think

2025 has been filled with regular announcements of new methods to compromise the integrity of AI tools. That will continue because there is no way to stop it. The premise of AI tools is that they accept arbitrary inputs, and the premise of productivity tools is that they can read and/or write to important data sources. If you allow arbitrary inputs into a system that can execute code… well, I can't help you there. But that's not even the real problem.

While AI vulnerabilities will surely be weaponized at scale in the near future, we will also continue to see the problem we already have: AI systems leaking data the old-fashioned way. We think of AI models as the transmission method from inappropriate sensitive inputs to leaked data in outputs, but the actual vector for AI leaks is system misconfiguration. 

Multiple AI chat apps have leaked user conversations from Elasticsearch databases or Kafka brokers. There was also the DeepSeek database left without a password. Or, in 2023, the Microsoft exposure from a misconfigured cloud storage bucket. All of those are classic, pre-AI problems that continue to be the most common method for AI data leaks.

Plus, you have all the new AI-supportive technologies, which also allow misconfigurations leading to the exposure of sensitive data: Langflow, Flowise, Chroma, LlamaIndex, Streamlit, and llama.cpp to name a few. 
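The misconfigurations behind these leaks are depressingly screenable. As a minimal sketch, here is what a pre-deployment check for the classic failure modes above might look like; the setting names (`auth_enabled`, `bind_address`, `public_access`) are hypothetical placeholders, not keys from any specific product:

```python
# Illustrative sketch: flag the classic misconfigurations behind most
# "AI data leaks" (no auth, service exposed to the internet, public bucket).
# Setting names here are hypothetical, not tied to any real product.

def audit_config(settings: dict) -> list[str]:
    """Return a list of findings for obviously risky settings."""
    findings = []
    if not settings.get("auth_enabled", False):
        findings.append("authentication is disabled")
    if settings.get("bind_address") == "0.0.0.0":
        findings.append("service is bound to all interfaces")
    if settings.get("public_access", False):
        findings.append("storage bucket allows public access")
    return findings

# A database exposed the way the incidents above were would trip two checks.
risky = {"auth_enabled": False, "bind_address": "0.0.0.0"}
print(audit_config(risky))
# → ['authentication is disabled', 'service is bound to all interfaces']
```

The point is not that this particular check is hard to write; it's that the leaks keep happening anyway, because nobody runs it before shipping.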

AI data leaks will continue to be in the headlines, but the articles will be about misconfigured databases.