When we think about cyber attacks, we usually think about the malicious actors behind the attacks, the people who profit or gain from exploiting digital vulnerabilities and trafficking sensitive data. In doing so, we can make the mistake of ascribing the same humanity to their methods, thinking of people sitting in front of laptops, typing code into a terminal window. But the reality is both more banal and more dangerous: just like businesses, governments, and other organizations have begun to index data and automate processes, the means of finding and exploiting internet-connected systems are largely performed by computers. There’s no security in obscurity if there’s no obscurity.
If the first stage of the internet focused on building up an information superhighway, the next phase is about finding ways to effectively parse that information at a human scale. Big data has only begun to be explored in its predictive and revelatory capacities, because only now do we have the processing power and applications to attempt it. Cybercrime, like ransomware, is no less affected by this than any other aspect of digitized society; understanding how automation contributes to the discovery and exploitation of vulnerabilities across the internet will help make legitimate systems and data more resilient.
Automated “hacking tools” have been around for decades, but like everything else, they have evolved with the times in both sophistication and scope. Today, not even a hacker in a movie could get away with simply guessing where a vulnerability might be. Instead, blackhat competency should be understood as knowing which tools deliver results and how to operate them effectively, the same as any modern knowledge worker.
If you know how a port scan works, an explanation of it probably seems a bit rudimentary. That’s because it is one of the oldest and simplest ways of discovering vulnerable systems on a network. The internet is a swarm of systems, each listening on open ports and each sending requests to ports on other systems. For legitimate traffic, this model makes sense: an application uses its assigned port when it needs it. A port scanner’s only job is to probe systems for open ports, using the same channels as legitimate traffic, and return the results to a central datastore. Where a person might take hours to probe a single server, a port scanner can hit an entire subnet in minutes and record the results in a searchable, structured format. Something like an unsecured rsync server can be easily detected by this method.
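The mechanics described above can be sketched in a few lines of Python. This is a minimal, illustrative TCP connect scanner, not a real tool like Nmap or ZMap; the function name and port list are my own, and production scanners use far faster techniques (raw SYN packets, massive parallelism) than this sequential loop.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Probe each port on `host` with a TCP connect; return the open ones."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds,
            # i.e. something is listening on that port.
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Scan a handful of common service ports on the local machine.
    # Port 873 is rsync's default, the kind of exposure mentioned above.
    print(scan_ports("127.0.0.1", [22, 80, 443, 873]))
```

The structured return value is the point: once open ports are data rather than terminal output, they can be stored, indexed, and searched at scale.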
Traditional port scanning was limited to individual IP subnets or local address ranges, but a newer tool called ZMap allows the entire public IPv4 address space to be scanned in under an hour. There are many interesting and beneficial uses for this type of internet-wide scan data, but as with all technology, the same vector that produces this value entails certain risks. Not only can vulnerable servers and network devices be detected, but IoT devices, most without even basic security protocols, and anything else connected to the internet can be found, examined, and vetted more quickly than ever before possible. Just recently, a Linux worm took control of Raspberry Pi devices detected with ZMap and used their processing power en masse to mine cryptocurrency.
Another long-lived automation tool is the password cracker. Without a better vector, such as social engineering, guessing a password by hand is nearly impossible, except in cases where common or default passwords are used. Most people are familiar with password complexity rules. A bank, for example, might require uppercase, lowercase, numbers, and symbols in a password. This isn’t to stop a person from guessing it, as any random word could do that; it’s to stop a computer from guessing it.
Password-cracking utilities try millions of combinations per second, brute-forcing until the right string is found. Dictionary attacks run through actual words first, though these are far less effective now that complexity rules have become standard. Those rules also dramatically increase the number of possible permutations, lengthening the time a cracker needs to succeed.
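The arithmetic behind those complexity rules, and the dictionary attack they defend against, can both be sketched briefly. This is an illustrative example with names of my own choosing; the unsalted SHA-256 comparison stands in for whatever hashing scheme a real system uses, and real crackers like John the Ripper or hashcat are vastly more optimized.

```python
import hashlib

def keyspace(charset_size, length):
    """Number of candidate passwords of exactly `length` characters."""
    return charset_size ** length

# Eight lowercase letters vs. eight characters drawn from ~94 printable
# symbols: complexity rules multiply the search space by more than four
# orders of magnitude.
lower_only = keyspace(26, 8)    # 208_827_064_576
full_charset = keyspace(94, 8)  # 6_095_689_385_410_816

def dictionary_attack(target_hash, wordlist):
    """Hash each candidate word and compare; return the match, or None."""
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None
```

At millions of guesses per second, the lowercase-only space falls in hours, while the full character set pushes the same exhaustive search out by a factor of roughly thirty thousand, which is exactly why complexity rules target machines rather than people.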
Port scanners and password crackers are examples of tools that automate simple, repetitive processes orders of magnitude faster than humans could. But these are old techniques, and the innovations and elaborations of technology have opened new vectors for malicious automation. In 2016, DARPA, the agency whose ARPANET laid the groundwork for the internet, held the Cyber Grand Challenge, a hacking contest. What made this contest interesting was that it was “entirely automated, with experimental software programs hacking, patching, and defending networks with no human intervention.”
If that sounds futuristic, it is; this was cutting-edge experimental technology that functioned as little more than a basic proof of concept. But as futuristic as it is, it’s also inevitable. The path for cybersecurity, and cyber attacks, will lead to more and more intelligent automation. The burden rests on legitimate organizations to find and implement strategies to resist new automated attacks, because automatically finding vulnerabilities is much simpler than automatically patching them.
Misconfigured, outdated, and unpatched software accounts for most successful external cyber attacks. Technology is running in production that hasn’t been deployed or maintained correctly, even as it has grown exponentially in scope and criticality. Cyber resilience is an approach to solving this problem. By introducing controls for cyber risk directly into operations themselves, the largest vector for cyber risk can be closed before an attacker even tries to exploit it.
This is accomplished by creating visibility into that risk, taking steps to remediate it during normal processes, and continually reassessing it to track improvement over time. This feedback loop protects organizations as changes happen, so that new or modified systems don’t accidentally open an easy inroad, and it helps measure the effect of remediation efforts. Here too, automation is key: a computer can crack a password, but it can also check millions of configurations against security benchmarks and other standards. Automating operations not only reduces human error, but also allows operational data to be recorded and analyzed, which can in turn be used to improve those operations.
Primary operations and security are just one piece of the puzzle, however. The digital business ecosystem is made up of many interrelated parties, with technological and data handling functions outsourced to specialists, who in turn outsource to cloud providers with their own permissions problems and other third and fourth parties. This creates a chain of dependency across which risk is distributed, especially in the case of sensitive data handling. Improving primary operations protects the organization itself, while holding vendors to the same security standards protects the data entrusted to them and makes resilience a competitive factor, raising the bar for every company that wants to deal in data.
Businesses and cybercriminals alike are automating their procedures, gathering metrics on them, and trying to gain an edge over the other in speed, accuracy, and scope. Organizations looking to defend against external threats should consider how the daily work of the IT department impacts the overall security posture. Legacy processes, undocumented and ad hoc, will not possess the cyber resilience needed to fend off increasingly sophisticated and automated intrusions. Cyber risk will only be effectively mitigated by making business processes more sophisticated and automated themselves. See how UpGuard helps to prevent data breaches.