Organizations often regard cybersecurity as a series of barricades protecting the inner workings of the data center from attacks. These barricades can be hardware or software and take actions such as blocking ports, watching traffic patterns for possible intrusions, encrypting communications and so forth. In practice, these measures are only part of a comprehensive cybersecurity strategy, and by themselves will do little to bolster the overall resilience of an organization. But thoroughly tested and streamlined procedures within IT operations can prevent the most common attack point on the internet: misconfigurations.
Almost all successful cyber attacks on enterprise businesses exploit one of two issues caused by sloppy operations:
Misconfigurations - Gartner found that 99% of perimeter flaws were not vulnerabilities, but misconfigurations. A device or application was not set up to do its job properly, or the default settings were not changed to build a more secure profile.
Unpatched Vulnerabilities - According to the 2015 Verizon Data Breach Investigations Report, 99% of actually exploited vulnerabilities had been known for over a year, had fixes released, and yet remained unpatched on enterprise production systems.
Black-coated hackers with cutting edge tools working a business over like The Matrix make a better story than an overworked, underpaid sysadmin accidentally leaving a default password on a database server, but you can probably guess which one has happened, repeatedly, with disastrous consequences for the businesses involved. Traditional cybersecurity technologies would not detect this problem, nor someone exploiting it, because it would all appear as legitimate traffic, using a legitimate username and password. This is not hacking. It doesn't require breaking through anything to gain entry. It’s walking through an open door.
Additionally, it’s easy to refer to an organization’s IT department as a single unit, but in reality there are often divisions within the department that at best have difficulty communicating, and at worst are complete silos, hostile to the sharing of information, even within IT. This creates gaps that are reflected in the technology: a no man’s land where responsibilities connect or overlap, which receives a lower standard of care than other areas because of interdivisional or interpersonal friction.
Finally, good operations often encounter another type of resistance in the relationship between IT and upper management, leading to a lack or misallocation of resources, which in turn leads to poor operations across the board. Executives often feel like IT doesn’t provide them with enough justification for funding, while IT argues that executives don’t have the technical understanding necessary to make decisions. Anyone who has worked in a modern business has probably encountered this scenario to some degree, as it is a manifestation of the gradual and organic assimilation of technology into business.
So what can be done about these types of problems?
Gain visibility into the IT environment - Misconfigurations and known, unpatched vulnerabilities occur because people don’t know they are there until it’s too late. The first step to getting ahead of these problems before they are exploited is to get visibility into the environment so there’s no question how systems are configured.
Test continuously - Visibility is just the first step. Continuous configuration testing not only refreshes the picture of your environment, but provides a historical map of how that environment has changed over time. If an environment is regularly tested against a policy that enforces the expected state, a misconfiguration is far more likely to fail that test and be remediated than to be exploited by a third party.
Change Detection and Notification - Unplanned and undocumented change, especially within a heavily siloed IT environment, can cause the downfall of the entire system. Most organizations attempt to manage changes through a change management process of some kind, but those efforts often rely on changing human behavior, and do little to address the actual configurations themselves. A true change detection and notification process should monitor the actual environment for changes and report them, whether through a service desk application, email, text or Morse code. The important thing is that people know when changes happen. This keeps people accountable, but more importantly maintains the integrity of the environment against attempts to circumvent the change management protocol.
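Monitoring the actual environment rather than the paperwork can be as simple as fingerprinting configurations and diffing against a baseline. The sketch below is an assumption-laden toy, not a product feature: the file paths are invented, and `notify()` stands in for whatever real channel (service desk, email, chat) an organization would use.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """SHA-256 hash of a file's contents, used as its fingerprint."""
    return hashlib.sha256(content).hexdigest()

def detect_changes(baseline: dict, current: dict) -> list:
    """Diff current {path: hash} state against a stored baseline."""
    events = []
    for path, old_hash in baseline.items():
        new_hash = current.get(path)
        if new_hash is None:
            events.append(("deleted", path))
        elif new_hash != old_hash:
            events.append(("modified", path))
    for path in current.keys() - baseline.keys():
        events.append(("added", path))
    return events

def notify(events):
    """Stand-in for a real notification channel."""
    for kind, path in events:
        print(f"CHANGE [{kind}] {path}")

# Illustrative baseline vs. what a scan finds today.
baseline = {"/etc/ssh/sshd_config": fingerprint(b"PermitRootLogin no\n")}
current = {
    "/etc/ssh/sshd_config": fingerprint(b"PermitRootLogin yes\n"),
    "/etc/cron.d/cleanup": fingerprint(b"@daily rm -rf /tmp/scratch\n"),
}

events = detect_changes(baseline, current)
notify(events)
```

Because the comparison runs against the live system, a change that skipped the ticketing process still surfaces, which is exactly the circumvention problem described above.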
Provisioning New Resources - Sysadmins today can build and tear down hundreds of servers a day in the cloud, or they can still be nursing a legacy data center with physical servers and local storage. Either way, the procedure by which a new server is built determines the baseline security of that system. A good procedure includes testing every new system before it enters service to determine whether it complies with organizational policy.
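A pre-production gate of this kind might look like the following sketch. The specific checks, setting names, and allowed ports are illustrative assumptions, not a complete hardening standard; the point is that a newly built server must pass the gate before it is put into service.

```python
def provisioning_checks(server: dict) -> list:
    """Return human-readable reasons a new server fails the policy gate."""
    problems = []
    # The most common open door: default credentials left in place.
    if server.get("default_password_changed") is not True:
        problems.append("default credentials still in place")
    if server.get("firewall_enabled") is not True:
        problems.append("host firewall disabled")
    # Hypothetical allow-list of ports for this class of server.
    open_ports = set(server.get("open_ports", []))
    allowed = {22, 443}
    extra = open_ports - allowed
    if extra:
        problems.append(f"unexpected open ports: {sorted(extra)}")
    return problems

# Facts gathered from a freshly provisioned (and flawed) server.
new_server = {
    "default_password_changed": False,
    "firewall_enabled": True,
    "open_ports": [22, 443, 3306],
}

problems = provisioning_checks(new_server)
if problems:
    print("Server failed provisioning gate:")
    for p in problems:
        print(" -", p)
else:
    print("Server approved for production")
```

The same check works whether the server is one of hundreds spun up in the cloud or a single physical box in a legacy data center.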
Meaningful Reporting - Even if an IT department takes all of these steps, there’s still the issue of management and IT butting heads over resources and direction. But what if there were a way to take all of the information gathered in these visibility and testing efforts and aggregate it into a visual, data-driven executive summary that addressed business risk and needs, so non-technical managers could get the information they need to make better decisions on IT spending and strategy?
If you haven’t already guessed, UpGuard provides all of this with our cyber resilience platform. We take organizations through what we call the three waves of resilience: discover, control and predict. Each step creates more trust within the environment so operations can happen faster and with better results.
In the discover phase, organizations gain total visibility into every asset. Not just servers and network devices, though we cover nearly every flavor of those, but applications, databases, cloud hosts, VMs, containers and external assets like webpages, email domains and even the company's public profile. Only through the combination of internal and external factors can a business gain a complete understanding of its cyber risk.
In the control phase, we combine our robust policy engine with continuous testing to ensure that all of the assets you discovered are configured the way you expect, all the time. If something falls out of line, you’ll be notified via your preferred method, including integration into most service desk and ticketing applications. Furthermore, you can verify that a planned change happened successfully across the board and ensure a server or entire environment wasn’t missed or misconfigured.
In the predict phase, we take the data gathered during the first two phases and establish context for your business risk. By aggregating this data into a single number we call CSTAR, organizations can see at a glance not only what their score is, but how they compare to similar companies and where they fall within their industry. Likewise, CSTAR can help you measure vendors and other business partners’ external security profiles before trusting them with your operations and/or data.
Digital business relies on trust. IT teams trusting their systems and each other, managers trusting their IT teams, and companies trusting other companies. Our goal at UpGuard is to provide this trust by creating a single-pane system of record usable by both the most technical developers and sysadmins and the non-technical executives who rely on technology to keep the business going. Our enterprise cyber resilience platform addresses security in a new way, with the comprehensive business risk of the organization in mind. Firewalls, intrusion detection/prevention systems and the like are necessary for the types of protection they provide. But cybersecurity spans the entire IT work cycle and environment, and UpGuard helps enterprises achieve the resilience necessary for today’s digital market.