During the course of UpGuard’s cyber risk research, we uncover many assets that are publicly readable: cloud storage, file synchronization services, code repositories, and more. Most data exposures occur because of publicly readable assets, where sensitive and confidential data is leaked to the internet at large by way of a permissions misconfiguration. This type of exposure is damaging enough for most organizations— it disrupts the trust between customer and company by leaking personally identifiable information (PII), and often reveals trade secrets and other proprietary data that businesses depend on to maintain a competitive edge. But there’s another type of resource we encounter as well: assets that are publicly writable.
Publicly writable assets are far fewer in number than those that are merely readable, but their criticality is potentially much higher, carrying many risks beyond the exposure of sensitive data. It should be self-evident that publicly writable assets are dangerous and should be avoided, but the nature of misconfigurations is to slip under the radar as changes are made without the proper controls to ensure security. When an asset is configured as publicly writable, it becomes a disaster waiting to happen.
The Dangers of Writable Assets
Most people are familiar with the consequences of publicly readable data: secrets are exposed and confidentialities are broken. But publicly writable assets entail another set of threats, some of which could pose even greater risks to an organization than a data exposure incident. We’ll look at a few examples and illustrate how malicious actors could take advantage of these resources.
Start simple. Public write access means an anonymous user can delete or modify any of the files contained in the misconfigured asset. The havoc potential should be self-evident; in the case of a mass deletion, organizations could be looking at data loss on top of the exposure itself if backups are insufficient. Write access, like all access, should be granted according to the principle of least privilege: people should have only the access necessary to perform the functions for which they are responsible, and that applies inside the organization. There is almost no use case for a publicly writable asset. But deletion isn’t the only change that can be made to publicly writable files; they can also be modified, and that can be much more dangerous.
An obvious example of how data could be maliciously changed is for an attacker to embed malware within the writable asset, or even into the files themselves, by downloading them, altering them, and uploading them back over the originals. This malware gateway could give someone access to a wide audience of potential victims, depending on how often the data in the resource is accessed. For example, if a commonly used office document template is infected with malware, it could spread across an entire organization very quickly. If the assets are accessed by customers, it could infect their systems as well. Leaving a writable cloud resource available creates a malware pipeline directly into the distribution network of that resource.
Malware is usually a stepping stone to another type of threat, typically fraud of some kind. But direct fraud is also possible when an organization leaves their data publicly writable. As an example, one of the types of data we regularly find exposed is invoices: payment notices generated by companies to bill their clients. Many of these invoices contain the EFT information necessary to wire money directly to the company. If these invoices, or worse, the invoice template, were stored in a writable location, someone could potentially change the account and routing numbers on the document, and have money flowing into a fraudulent account long before the affected company figured out what was happening. This is just one example, but many documents hold the same potential for malicious revision.
Social Engineering and Impersonation
An attacker could also use the writable resource to assist in a social engineering effort, either by planting false credentials or documentation, or by manipulating data to match whatever scam they are running to trick employees, or even customers. Someone could set up an entire webpage inside a writable space, allowing them to gather credentials and other information, or otherwise impersonate the organization, all from a valid and official URL.
Discovering Writable Assets During Research
For security researchers, discovering publicly writable assets can get tricky. Accessing a publicly readable data set is pretty straightforward— it’s configured for public access. Attempting to write something to a data set you don’t own, however, is a legal gray area at best, and can land even well-meaning researchers in hotter water than they expected. Despite this, efforts have been and are being made by some parties to inform owners of publicly writable assets by dropping a text file, for example, with details about the problem. In Amazon S3, these are sometimes called “poc.text” files and show up in writable buckets as a warning to their owners.
Example of a poc.text file in an Amazon S3 bucket.
Example of a text file placed in a publicly writable asset.
Sometimes writability can be determined by examining the permissions themselves, when those permissions are publicly readable; this is how the UpGuard research team discovers such assets. For example, an Amazon S3 bucket has an access control list (ACL) that determines who is allowed access and at what level. Best practice would obscure this ACL from anonymous browsing, but on buckets that are already misconfigured to be publicly accessible, the ACL is often readable as well, helping us understand the level of access permitted.
How Do Enterprise Assets Become Writable?
So how does this happen? It doesn’t take much technical expertise to understand that only authorized users should be allowed to make changes to company data— that’s common sense. The problem is that while that sentiment can be conveyed and understood in a single sentence, the implementation of the idea across complex enterprise environments is far more challenging. In fact, there are many ways an asset could become publicly writable.
First, it may just be a simple mistake. The wrong box was checked; the wrong group was added. Or maybe it was a shortcut. Instead of troubleshooting access issues, an admin “temporarily” set the asset to be publicly writable to defuse an immediate access problem (the CEO needs this NOW!) and then forgot to change it back. Or maybe an admin was relying on “security by obscurity,” knowingly using an insecure configuration for ease, with the idea that if nobody knows about the resource, they won’t be able to find and exploit it. It’s easy to speculate on the myriad situations in which an enterprise asset is unwittingly exposed, but however it happens, it leads to the same conclusion: data breach, or worse.
Human error is inevitable. If there is an opportunity to make a mistake, a mistake will eventually get made. But blaming these or any misconfigurations on a single admin, or a single manager, misses the bigger picture: human error isn’t the problem; again, there’s no living without it. The problem is process error. All of the data exposures we discover, readable and writable, occur because nobody responsible knows they are there. And how would they know if they aren’t looking? If human error can’t be helped, then process controls must be put in place to audit operations, catch those errors when they occur, and correct them before the process is completed. Vendors must be held to the same standard of security as primary operations, or data will simply be leaked further down the supply chain.
How to Avoid Publicly Exposed Assets
Fortunately, there are proactive measures that can be taken internally to drastically reduce the risk of data exposure, readable and writable. There are general rules that can be followed as policy to help control access permissions. Following the principle of least privilege is one way. But policies are ideas, and ideas don’t stop human error from occurring. In addition to policies that govern how things should work, controls need to be in place to show how they are actually working. No matter how well a policy is drafted, if processes aren’t designed to look for these misconfigurations, they will be missed. Let’s look at a couple of examples.
Amazon S3 Cloud Storage
Many companies rely on Amazon’s cloud storage for file access, backups, and sharing. Though private by default, these S3 buckets can be made publicly accessible, and some not insignificant percentage of them are. There are two key groups that make a bucket public, and knowing what they are and how they work can be the difference between security and breach.
Example of an S3 bucket that is publicly writable from the AWS console.
- All Users (Everyone) - This means everyone. Anyone who hits the S3 URL can anonymously access the bucket and whatever contents are not explicitly locked down themselves. No usernames or passwords necessary.
- Authenticated Users (Any authenticated AWS user) - This means anyone with a free Amazon AWS account— a smaller group than Literally Everyone, but hardly more secure. This group is often mistaken to mean users within a specific organization, but it refers to the entire AWS user base.
Example of the Authenticated Users group having write permissions on a publicly exposed bucket as discovered in research.
Neither of these groups should have any access (especially write access) unless there is a specific project that requires it, which there almost never is.
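The check for these two groups can be automated. Below is a minimal sketch: the function inspects an ACL shaped like the one the S3 API returns (e.g. from a `get-bucket-acl` call) and flags grants to either public group that carry write-level permissions. The sample bucket ACL at the bottom is hypothetical, but the group URIs and permission names are the ones S3 actually uses.

```python
# Sketch: flag public write grants in an S3 bucket ACL.
# The ACL dict mirrors the shape returned by the S3 API; the sample
# data below is hypothetical.

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers": "All Users",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers": "Authenticated Users",
}

# WRITE lets anyone add, overwrite, or delete objects; WRITE_ACP lets
# them rewrite the ACL itself; FULL_CONTROL implies both.
WRITE_PERMISSIONS = {"WRITE", "WRITE_ACP", "FULL_CONTROL"}

def public_write_grants(acl):
    """Return (group name, permission) pairs that grant the public write access."""
    findings = []
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        uri = grantee.get("URI")
        if grantee.get("Type") == "Group" and uri in PUBLIC_GROUPS:
            if grant.get("Permission") in WRITE_PERMISSIONS:
                findings.append((PUBLIC_GROUPS[uri], grant["Permission"]))
    return findings

# Hypothetical ACL for a misconfigured bucket:
acl = {
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "abc123"},
         "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group",
                     "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
         "Permission": "WRITE"},
    ]
}
print(public_write_grants(acl))  # → [('All Users', 'WRITE')]
```

Run across every bucket an organization owns, a check like this turns the “know what the groups are” advice into a repeatable audit rather than a one-time review.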
All of an organization’s S3 buckets should be audited at deployment to see if they are public before data is stored there. More importantly, buckets should be scanned regularly over their lives to ensure that changes to the permissions are quickly caught and surfaced. The faster the audit cadence, the smaller the window of potential exposure. Automating these checks is the best and fastest way to ensure integrity across the entire cloud surface. Learn more in our S3 blog post here.
But it isn’t just cloud platforms that open organizations up to public exposure. More traditional file transfer technologies like the rsync utility can also be easily misconfigured to allow public access. Rsync is a common tool employed by enterprise data centers to mirror, transfer, or backup files across the network. It’s a relatively simple utility that provides powerful functionality in a few easy commands. However, like many services, rsync requires explicit configuration to be secure. Rsync can easily be made publicly writable, a situation we have encountered, and one that holds the potential for great damage to the responsible entity. There are two key methods of restricting rsync access.
- IP restriction - First and foremost, rsync servers and other file transfer endpoints should only allow the IP addresses they intend to communicate with. All others should be blacklisted, preventing unauthorized connections.
- User restriction - Do not allow anonymous access to rsync. This requires setting up the authentication mechanism in the rsync config, which is often overlooked or skipped for convenience. Users and services should always securely authenticate before being able to access data.
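Both restrictions map directly onto the rsync daemon’s configuration file. Below is a minimal sketch of an rsyncd.conf module locked down along those lines; the module name, path, addresses, and user name are all placeholders, not a definitive configuration.

```
# /etc/rsyncd.conf - hypothetical module restricted per the rules above
[backups]
    path = /srv/backups
    read only = false
    # IP restriction: only the listed addresses may connect
    hosts allow = 10.0.5.10 10.0.5.11
    hosts deny = *
    # User restriction: no anonymous access
    auth users = backupsvc
    secrets file = /etc/rsyncd.secrets
```

Without `hosts allow`/`hosts deny` and `auth users`, a daemon module with `read only = false` is exactly the publicly writable scenario described above.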
Our full rsync guide can be found here. Keep in mind that FTP and other legacy file transfer mechanisms, including open database ports, can lead to exposure if improperly configured. Sensitive data should be accessible only via a VPN, and not potentially to the whole internet. By building layers of protection around these resources and auditing them to ensure they are not misconfigured, powerful services can still deliver their functionality without subjecting an organization to undue risk.
Unfortunately, many exposures don’t even occur on primary systems, so no matter what kind of measures are taken on company infrastructure, vendors and other third parties who handle sensitive data can still lead to the same kind of exposure if not carefully vetted and monitored. Technology vendors, data analytics, and other supply chains that involve the transfer, storage, and manipulation of sensitive information must be treated with the same care as a data center.
A combination of independent external auditing and vendor participation in the form of questionnaires can give a company insight into how much risk a third-party poses by evaluating the measures they take to protect themselves and their customers. The independent audit can assess external security and operations to objectively see if the vendor is following best practices, while the vendor questionnaire can shed light on internal processes, cloud policies, and other initiatives taken to keep data secure. Our buyer's guide to third-party risk can be found here.
Publicly writable assets, like all misconfigurations, are the result of complex data handling processes involving multiple companies, their vendors, and their vendors’ vendors. The possibility for assets to become publicly writable entails the inevitability that some of them will. The trick isn’t to prevent it from happening (again, people make mistakes, and people don’t always follow policy) but to catch it as quickly as possible when it does happen, and to have an effective remediation strategy to minimize the window of exposure. Our research has shown that misconfigurations are responsible for a great deal of data exposure, incidents that can have a major impact on the affected businesses. Addressing these procedural problems proactively, and building an operational awareness that includes third-party risk and doesn’t allow critical errors to go unnoticed, can greatly reduce the chance of discovery and exploitation.