Arguably--in that people literally argue about it--there are two types of web servers: traditional servers like Apache and IIS, often backhandedly described as “full-featured,” and “lightweight” servers like lighttpd and nginx, stripped down for optimum memory footprint and performance. Lightweight web servers tend to integrate better into the modern, containerized environments designed for scale and automation. Of these, nginx is a frontrunner, serving major websites like Netflix, Hulu and Pinterest. But just because nginx slams Apache in performance doesn’t mean it’s immune to the security problems the old heavyweight endures. By following our 15-step checklist, you can take advantage of nginx’s speed and extensibility while still serving websites secured against the most common attacks.

1. Containerization

There’s more to containers (Docker, rkt, LXC) than (re)deployability. They also isolate applications in discrete Linux namespaces, protecting the underlying server (and any other containers) from the nginx process, which is what you want if someone else happens to take it over. However, deployability itself adds a type of security as well. By standardizing configurations for automation, you’re more likely to find, and consequently fix, problems that would have gone unrecognized in manually managed environments. Passing configuration options to a Docker container as environment variables, for example, requires that the configuration be closely examined and streamlined so it can be infinitely redeployed in ephemeral containers, sort of like the ending of The Prestige.
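As a minimal sketch of that pattern--assuming a recent version of the official nginx image (which renders *.template files from /etc/nginx/templates with envsubst at startup) and a hypothetical SERVER_NAME variable that your template consumes--a deployment might look like:

docker run -d --name web \
  -e SERVER_NAME=example.com \
  -v "$PWD/templates":/etc/nginx/templates:ro \
  -p 443:443 \
  nginx:stable

Because the image is immutable and everything environment-specific rides in as variables, the exact same artifact can be destroyed and redeployed anywhere.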

2. Remove Unwanted Modules

Despite nginx’s already lean profile, you can shrink its attack surface even further by removing unused modules from the installation. Best practice dictates that only the modules actually used in serving legitimate content should be enabled, keeping the web server’s functionality, and thus its exposure to compromise, to a minimum. Obviously this could cause a major problem if you remove a module you actually need, so be sure your test and/or QA environments reflect the lack of these modules in production.
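A reasonable first step is to list the modules compiled into your current binary and then, if building from source, configure a leaner build; the two --without flags below are just examples, and the right set depends entirely on what your sites use:

nginx -V 2>&1 | tr ' ' '\n' | grep module    # show module-related compile options

##example source build dropping autoindex and SSI support
./configure --without-http_autoindex_module --without-http_ssi_module
make && make install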

3. Keep Nginx Up to Date

Simple, right? But simple is good. When there’s an easy way to increase resiliency, don’t overlook it. Your distro’s package manager, probably apt or yum, should offer the latest supported version, but you can check nginx.org for more information. You may need to enable an additional repository such as EPEL for Red Hat and CentOS if your distro’s repository doesn’t have the version you want.

yum update nginx
or
apt-get update && apt-get install nginx

As with any software, if you don’t keep nginx up to date, known vulnerabilities will stack up on your system until someone finally exploits one.

4. Consider SELinux 

Enabling SELinux should always be done with care, as it’s very good at what it does and can kill functionality in a second. But when properly configured, especially when an application-specific policy exists, as with nginx, SELinux can truly harden a system against almost anything. A community-maintained nginx policy and accompanying documentation are available online. Getting to the right SELinux configuration is a process of trial and error, so expect a time investment when going this route and weigh it against the relative increase in security to determine whether it’s right for you.
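As a starting point on a Red Hat-family system, you can check the current mode and, since the stock targeted policy runs nginx in the same httpd_t domain as Apache, adjust the relevant httpd_* booleans; for example, an nginx acting as a reverse proxy typically needs outbound network permission:

getenforce                                   # Enforcing, Permissive, or Disabled
setsebool -P httpd_can_network_connect on    # let nginx open outbound proxy connections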

5. Use Sitewide SSL 

Thankfully the mentality on SSL is shifting away from “it should be on pages that need it” to “it should be everywhere and the only choice.” SSL is the most basic form of web security, in that it encrypts communication between the server and its clients. Given the low cost and low difficulty of implementing sitewide SSL with a trusted certificate, the question isn’t whether the content being encrypted warrants privacy, but why you would choose to pass any communication in plain text across the internet. Just as important as using SSL is using it correctly; that means modern standards such as SHA-256 certificate signatures and 2048-bit RSA keys. As stronger options become available and current standards get cracked, servers too will have to adapt. Security is a habit, not an on-off switch.
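A minimal sketch of the sitewide pattern, using a hypothetical example.com and certificate paths that will differ in your environment:

server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;    # redirect all plain HTTP to HTTPS
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
}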

6. Disable Weak Cipher Suites

Another part of correct SSL implementation is disabling legacy cipher suites such as RC4 that are still included with web servers for backwards compatibility. There really should be no reason to leave these enabled, especially since the consequences of exploitation are high, so unless you know that a client must use a specific legacy cipher suite, turn them off. A secure cipher suite in ssl.conf would resemble the following; note that the order is important:

ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

A simple edit to the config file will block insecure suites and render their vulnerabilities harmless. Consider too explicitly disabling SSLv2 and v3 and using only the more secure TLS mechanisms.
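A single line in the same config takes care of that; note that the TLSv1.3 parameter requires nginx 1.13.0 or later built against OpenSSL 1.1.1, so drop it if your build is older:

ssl_protocols TLSv1.2 TLSv1.3;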

7. Secure Diffie-Hellman

Diffie-Hellman (DH) is a secure key exchange mechanism that can be secured even further by generating a custom DH group with OpenSSL. To do so, run openssl dhparam -out dhparams.pem 4096. This generates the dhparams.pem file you can place alongside your SSL certificates. Once it’s in a good place, edit the ssl.conf file and add the following: ssl_dhparam /your/path/dhparams.pem;. Unless you’re a cryptography expert, don’t worry too much about the details; just know that the default DH group is considered less secure and these few steps tighten it up.

8. Enable HTTP Strict Transport Security

This is a fancy way of saying “only allow HTTPS.” By using HTTP Strict Transport Security (HSTS), you ensure that all of your pages load only on encrypted connections. All modern browsers support HSTS, so there’s no reason not to set this and make full use of the encrypted channel you’ve already set up. Add the following line to the server section of your ssl config file:

add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";

Not setting this means someone can potentially still connect to your site on an unencrypted connection and pass information in plain text, for example with a manually created bookmark pointing to http://.

9. Disable the server_tokens Directive

In nginx, the server_tokens directive controls what information is advertised about the server. Be sure to disable this, as there’s no need to inform anonymous people what type and version of software is running. Every web server has an attack surface, so advertising the specifics of what’s running quickly narrows the focus for anyone trying to attack it. Add the following line to your nginx.conf file to disable broadcasting this info:

server_tokens off;
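You can verify the change from any shell; assuming the server answers locally, the Server header should now name nginx without a version number:

curl -sI http://localhost | grep -i ^server
# expected: Server: nginx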

10. Deny Automated User-Agents

To prevent scripts, bots and other automated methods of web page retrieval from running against your server, explicitly deny certain user-agents. Applications like wget can grab entire directory structures at once, making them useful for denial of service attacks or for accessing any improperly secured files on the website. You want real people using browsers to see your website, not to let anyone with specialized tools pick through your web server at will. In the nginx.conf file, add the following:

if ($http_user_agent ~* (LWP::Simple|BBBike|wget)) {
    return 403;
}

Factor in the legitimate ways the site is used. If you have some integration or client need for any of these tools, they will obviously need to remain open. But start with least privilege and open things up as you need them; it’s the golden rule of security.
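A quick sanity check is to impersonate a blocked agent; assuming the server answers locally, the case-insensitive match above should catch a real wget signature and return the 403:

curl -s -o /dev/null -w "%{http_code}\n" -A "Wget/1.21" http://localhost
# expected: 403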

11. Limit Available Methods

Now that you’ve blocked the user-agents you don’t want, you need to regulate the ones you do. Limiting the allowed methods on the server prevents someone from utilizing HTTP calls to manipulate the server in an unforeseen way. By specifying allowed verbs such as GET and POST where applicable, you can rule out the majority of possible methods and restrict clients to a minimal set of commands necessary for normal web browsing. The following is an example of what to add to the nginx.conf file to secure available methods:

add_header Allow "GET, POST, HEAD" always;
if ($request_method !~ ^(GET|POST|HEAD)$) {
    return 405;
}

It bears mentioning yet again: although this does tighten security, it also limits usability. Be sure not to throw the baby out with the bathwater and interrupt regular operations or legitimate traffic with new security measures--it’s important to know in advance and in detail what your applications require to run properly.
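A disallowed verb makes for an easy spot check here too; assuming a local server, a DELETE request should come back with the 405:

curl -s -o /dev/null -w "%{http_code}\n" -X DELETE http://localhost
# expected: 405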

12. Control Simultaneous Connections

A web server best practice since the beginning of time, limiting the number of simultaneous connections to the server, both in total and from individual sources, prevents the server from being overwhelmed by connection swarms, whether intentional (DoS) or not. The defaults may not serve you here, and there are some recommended settings, but ultimately every environment is unique and optimum settings can only be found by studying the actual activity of the server in production. If you have established baselines of your traffic patterns, you can tweak the connection settings to the data you’ve collected, perhaps placing a ceiling slightly above your maximum expected connections for legitimate traffic.
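As a sketch using nginx’s stock limit_conn module, with illustrative numbers you should replace with values derived from your own baselines, the http section of nginx.conf might contain:

##connection policy: track connections per client IP in a 10 MB shared zone
limit_conn_zone $binary_remote_addr zone=peraddr:10m;

server {
    limit_conn peraddr 10;    # at most 10 simultaneous connections per client IP
}
##end connection policy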

13. Prevent Buffer Overflow 

Like connections, limiting buffer sizes prevents clients from overwhelming the resources of the server. Here is an example of what to add to your nginx.conf file:

##buffer policy
client_body_buffer_size 1k;        # memory buffer for request bodies before spilling to disk
client_header_buffer_size 1k;      # buffer for a typical request header
client_max_body_size 1k;           # reject request bodies larger than this (HTTP 413)
large_client_header_buffers 2 1k;  # count and size of buffers for oversized headers
##end buffer policy

This too is nothing new, and should be standard practice with any web server, but it goes to the point that despite its innovations, nginx IS a web server, and these basic settings still need to be scoped to your environment to prevent accidental or malicious exploitation.

14. XSS Protection

Cross-site scripting (XSS) is a type of web attack in which someone executes a script through unsanitized data entry whose content is then displayed on the site. For example, if someone builds a profile on an unprotected social media site and injects a background script into their details, they could gather information on any other user who visits their page. One way to help protect against this in nginx is to send the X-XSS-Protection header, which tells the browser to block a page when it detects a reflected attack. Do so by adding the following to your ssl.conf file:

add_header X-XSS-Protection "1; mode=block";
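To confirm the header (along with the HSTS header from step 8) is actually reaching clients, inspect a live response; substitute your own domain for example.com:

curl -sI https://example.com | grep -iE "x-xss-protection|strict-transport-security"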

15. Configuration Testing

The only way to make sure all this stuff is set up correctly across all of your servers is to have continuous visibility into your environment that tests these configurations against a policy. This allows you to track your configs and their changes over time. Do you have the same version of nginx in dev, test and production? Do all your ssl.conf files have that one crucial line protecting your customer data? If you can’t answer those kinds of questions without looking on every server, you might want to try UpGuard--that’s what we’re built for, and the first 10 nodes are free.
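Even on a single box, nginx ships two quick checks worth wiring into any deployment pipeline: one validates the active configuration’s syntax before a reload, the other prints the version and compile-time options you’d want to match across dev, test and production:

nginx -t    # test configuration syntax and referenced files
nginx -V    # show version, TLS library, and configure arguments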
