Updated on December 21, 2017 by UpGuard
Nearly all large enterprises use the cloud to host servers, services, or data. Cloud-hosted storage like Amazon's S3 offers operational advantages over traditional computing, allowing resources to be distributed automatically across robust and geographically varied servers. However, the cloud is part of the internet, and without proper care the line separating the two disappears completely. The result is a cloud leak, a major problem when it comes to sensitive information.
Despite being private by default, Amazon’s Simple Storage Service (S3) buckets are notorious for being left open to the public, even by some of the world’s largest companies. If a bucket holds a corporate database, customer list, or other large collection of sensitive information, the result can be a massive data breach. And it has been: UpGuard security researcher Chris Vickery has found a slew of massive exposures among publicly accessible Amazon S3 buckets. Although the misconfiguration itself is a single, simple permission, its implications can be disastrous.
Automatically Validating S3 Configurations
Why does this keep happening? Because organizations create S3 buckets, modify the default permissions, and later dump data into them without first validating their configurations. This happens for several reasons: the S3 instance was supposed to be temporary, the admin forgot to close out public access, or the bucket was created programmatically and the script didn’t set the correct permissions. Many processes leave room for this kind of oversight. The key is that S3 buckets, just like servers and network devices, need to be validated to ensure they are hardened. This crucial step establishes the trust necessary to store sensitive corporate data in the cloud.
UpGuard fully supports Amazon S3 nodes and automatically checks public permissions to ensure they are closed. This simple but critical check could be the difference between performing casual routine maintenance and handling a severe data breach on the front page of the news. UpGuard validates the public permissions for every S3 bucket added as a node, not just once at deployment, but continuously and automatically, notifying you only if something is left open. We hear a lot about 0-day exploit prevention, advanced intrusion defenses, and other cutting edge cybersecurity technologies, but simply making sure S3 instances are secure to the public would likely prevent more breaches than all of those put together.
Using UpGuard procedures, a thorough validation of AWS servers can be easily defined and automated, ensuring hardened configurations like closed S3 permissions and, more importantly, surfacing misconfigurations immediately so that teams can correct them before a security researcher, or someone worse, stumbles across them.
UpGuard procedures can validate both S3 buckets and EC2 configurations, so we’ll put together a few steps that cover both and validate the whole surface area of our AWS presence.
1. Test S3 Public Access
Our first step will validate all of the Amazon S3 buckets associated with our organization. We want to make sure first and foremost that public access is disabled, so we’ll set up checks for the AllUsers and AuthenticatedUsers groups, which grant that access. If an S3 instance grants permissions to either group, it fails the test and we receive a notification.
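Outside the UpGuard platform, the core of this check can be sketched in a few lines. The snippet below is a minimal illustration, not UpGuard's implementation: it inspects an ACL document in the shape returned by boto3's `get_bucket_acl`, and the two sample ACLs are hypothetical test data.

```python
# Canonical URIs Amazon uses for the two public grantee groups.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl):
    """Return the grants in an S3 bucket ACL that expose the bucket
    to AllUsers or AuthenticatedUsers."""
    return [
        g for g in acl.get("Grants", [])
        if g.get("Grantee", {}).get("Type") == "Group"
        and g["Grantee"].get("URI") in PUBLIC_GROUPS
    ]

def bucket_passes(acl):
    """A bucket passes the check only if neither public group holds
    any permission on it."""
    return not public_grants(acl)

# Hypothetical sample data: an ACL granting READ to AllUsers (fails)
# and an ACL granting access only to the owning account (passes).
open_acl = {
    "Grants": [
        {"Grantee": {"Type": "Group",
                     "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
         "Permission": "READ"},
        {"Grantee": {"Type": "CanonicalUser", "ID": "abc123"},
         "Permission": "FULL_CONTROL"},
    ]
}
closed_acl = {
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "abc123"},
         "Permission": "FULL_CONTROL"},
    ]
}
```

In a real audit the ACLs would come from iterating over every bucket the account owns, and a failing bucket would trigger a notification rather than just a boolean.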
2. Test EC2 Groups
Next we want to check our EC2 servers' security groups and verify that they meet company policy. Following the principle of least privilege, administrative rights should be minimally dispersed, with continuous validation that other accounts have not been granted admin access. Likewise, public access should be limited to necessary ports, for example ports 80 and 443 for web traffic.
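The port side of this policy can be illustrated with a short sketch. This is a hypothetical example, not UpGuard's procedure: it examines ingress rules in the shape boto3's `describe_security_groups` returns, and the allowed-port set and sample security group are invented for illustration.

```python
# Example policy: only web ports may be open to the world.
ALLOWED_PUBLIC_PORTS = {80, 443}

def world_open(rule):
    """True if an ingress rule is reachable from anywhere
    (0.0.0.0/0 for IPv4 or ::/0 for IPv6)."""
    v4 = any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", []))
    v6 = any(r.get("CidrIpv6") == "::/0" for r in rule.get("Ipv6Ranges", []))
    return v4 or v6

def violations(security_group):
    """Return world-open ingress rules whose port range strays
    outside the allowed public ports."""
    bad = []
    for rule in security_group.get("IpPermissions", []):
        if not world_open(rule):
            continue
        ports = range(rule.get("FromPort", 0), rule.get("ToPort", 0) + 1)
        if any(p not in ALLOWED_PUBLIC_PORTS for p in ports):
            bad.append(rule)
    return bad

# Hypothetical security group: HTTPS open to the world is fine,
# but SSH open to the world violates the policy.
web_sg = {"GroupName": "web", "IpPermissions": [
    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
]}
```

Checking the admin-access half of the policy would work the same way: pull the IAM users and attached policies, then fail the test if any account outside an approved list holds administrative privileges.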
3. Test Asset Configurations
As the final step of the procedure, UpGuard examines the assets themselves, looking for open ports, unsafe default configurations, unnecessary services and programs, patch levels and software versions, known vulnerabilities, and other critical information. UpGuard also captures AWS metadata, allowing you to verify AWS-specific settings as well, including AWS permissions. By measuring the servers against industry benchmarks like the Center for Internet Security’s Critical Security Controls, we can shore up anything that could be used as a foothold later.
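At its core, this kind of asset validation is a comparison between what a scan observes and what a baseline allows. The toy sketch below illustrates the idea only; the baseline and the scanned asset are invented sample data, and a real benchmark such as the CIS controls covers far more ground.

```python
# Hypothetical hardening baseline: permitted open ports, services
# that must never run, and minimum acceptable package versions.
BASELINE = {
    "open_ports": {22, 80, 443},
    "banned_services": {"telnet", "ftp"},
    "min_versions": {"openssl": (1, 0, 2)},
}

def check_asset(asset):
    """Compare one asset's scanned state against the baseline and
    return a list of human-readable findings."""
    findings = []
    for port in sorted(asset["open_ports"] - BASELINE["open_ports"]):
        findings.append(f"unexpected open port {port}")
    for svc in sorted(asset["services"] & BASELINE["banned_services"]):
        findings.append(f"banned service running: {svc}")
    for pkg, floor in BASELINE["min_versions"].items():
        # Version tuples compare element-wise, so (1, 0, 1) < (1, 0, 2).
        if asset["versions"].get(pkg, (0,)) < floor:
            findings.append(f"{pkg} below minimum version")
    return findings

# Invented scan results for one server: an extra port, a banned
# service, and an outdated OpenSSL all produce findings.
asset = {
    "open_ports": {22, 443, 8080},
    "services": {"sshd", "telnet"},
    "versions": {"openssl": (1, 0, 1)},
}
```

An empty findings list means the asset matches the baseline; anything else is a misconfiguration to surface and fix before it becomes a foothold.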
Amazon’s cloud storage offers a lot of value for digital enterprises, but it also presents a set of particular risks, which can lead to serious business problems if left unmitigated. The issue of unsecured S3 buckets gains visibility every day, as news stories relate the types of sensitive enterprise information left unattended in them. UpGuard’s cyber resilience platform validates S3 configurations specifically for public permissions, but more importantly, can do so as part of a larger procedure that validates the entire AWS infrastructure.
Cyber resilience means building security into the everyday work of IT operations. Automated processes, like our example of AWS maintenance with UpGuard, mitigate cyber risk through continuous validation. A few extremely sophisticated cyber attacks may occasionally succeed, but the vast majority of attacks can be repelled through resilient operations. In the enterprise, it’s usually customer data at risk, and it’s customers who pay the price when that data is compromised. To maintain customer trust, companies must take responsibility as stewards of their information and do what they can to protect it.
Misconfigurations are an internal problem that emanates from within the IT infrastructure of any enterprise; no hacker is necessary for massive damage to occur to digital systems and stored data. And the problem is pervasive, with Gartner estimating that anywhere from 70% to 99% of data breaches result not from external, concerted attacks but from internal misconfiguration of the affected IT systems.