Securing Nginx and PHP

Nginx Configuration

Depending on how Nginx was installed (check the Wiki's install page, as many distribution-provided copies of Nginx are quite old), you may find the nginx.conf file in /etc/nginx/ or even /usr/local/etc/nginx/.

If you do not have a user line near the top, use this command to see if a default user/group was provided during install time:

cat /etc/passwd

For example, the following entry may appear after the FreeBSD port install:

www:*:80:80:World Wide Web Owner:/nonexistent:/usr/sbin/nologin

The above would be fine. If you do not have a web user/group for nginx, you can define one in nginx.conf with the user directive. Passing a single parameter such as user www-data; will use that name for both the user and group; passing a second parameter sets the group name if it differs from the user.
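As a sketch (the www-data name here is an assumption; substitute whatever unprivileged account your system actually provides):

```nginx
# at the top of nginx.conf, outside any block
# www-data is an assumed account name; use your system's web user/group
user  www-data www-data;
```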

keepalive_timeout  65;

Above is the default keepalive_timeout that ships with the nginx source: each keep-alive connection is allowed up to 65 seconds before it is terminated. That is quite long for a site in production and can cause problems if you are flooded with traffic. Lowering the value to 5 to 10 seconds is more sensible when you consider how long it takes a visitor to download all the content on your page, since subsequent visits will usually be served from the browser cache. A shorter timeout also helps keep nginx up during an attack against the keep-alive feature, since fewer connections can be held open at once.
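A hedged example of a tighter setting (the 10-second figure is an assumption; tune it to your traffic):

```nginx
# in the http { } block: terminate idle keep-alive connections sooner
keepalive_timeout  10;
```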

client_max_body_size 1m;

The default client_max_body_size value is 1MB, which is high enough to allow posting images or form data, but low enough to deter abuse. However, in #nginx we’ve seen quite a few cases where a user has set this number very high in the http { } scope in order to fix upload problems. Realistically this value should only be raised in the exact area where it will be used (in location /uploads { } for example, or in a specific server block that expects uploads of that size).

This value should never be higher than what your backend will allow. For example, if your php.ini contains a max upload size of 8MB and you are not using any other backend server for dynamic processing, it makes very little sense to have your client_max_body_size higher than 8MB.

Something to keep in mind when setting this value globally at the http { } level: Nginx may need workers times connections-per-worker times client_max_body_size of temporary space on disk. If your server were flooded with large upload requests and you had 4 workers with 1024 connections each and a client_max_body_size globally set to 8MB, you would need roughly 32GB of temporary space in RAM or on disk to handle those requests. It is more sensible to increase client_max_body_size only where you will actually use it.
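A sketch of the scoped approach (the /uploads path and the 8m figure are assumptions for illustration):

```nginx
server {
    # conservative default for the whole site
    client_max_body_size 1m;

    location /uploads {
        # raise the limit only where uploads actually happen,
        # and keep it in step with your php.ini upload limits
        client_max_body_size 8m;
    }
}
```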

location ~ /\.          { access_log off; log_not_found off; deny all; }

Having the above inside of your server { } block is always a good idea. You might have seen a similar rule written as /\.ht to block access to the usual .htaccess/.htpasswd files that may be left behind from an apache/httpd configuration. But why stop at .ht*? Why not forbid web access to all hidden files and folders.

location ~ ~$           { access_log off; log_not_found off; deny all; }

For most users, the above is not really needed. On some systems vi/vim is set up to save a backup of the file you are working on, often with a ~ appended to the file name. Left accessible, these backups can reveal your uninterpreted PHP code to anyone who requests them.

location ~ \.php$ {
        try_files $uri =404;
        #...
}

Even with “pretty urls” set up with try_files or some other means, the php block should only send requests to the backend server for php files that actually exist. The above try_files directive checks whether the requested php file exists before passing it to the backend; if it does not, the user sees the usual 404 response. This prevents exploits such as yourdomain.com/uploads/avatar.png/index.php, where an uploader has placed php code into a file named like an image in an attempt to make PHP process it. Since index.php does not exist under such a folder, the attempt is met with a 404 Not Found response.

PHP Configuration

As with nginx, you should run PHP-FPM as a web-service user so that access to files and folders can be controlled at the group level. Running PHP as the owner of the files/folders can give a malicious script more control over your files than you may want (such as appending iframe injections to the top of every php file on your server).

Also, if this is your first time using PHP-FPM it would be a good idea to check your www.conf or php-fpm.conf file to change the number of max workers (as well as max requests per worker, which helps contain slow memory leaks). The default setup behaves much like apache and allows up to 50 child processes, which can be quite excessive on a small VPS. I usually set pm = static and set the max workers to around 4 to 8 depending on how much memory I can spare for PHP. Just remember that, unlike Nginx, each PHP process can only handle one request at a time, so limiting yourself to a single process is not advised.
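A minimal sketch of such a pool configuration (the specific numbers are assumptions; size them to the memory you can spare for PHP):

```ini
; in www.conf (or php-fpm.conf, depending on your build)
pm = static
pm.max_children = 6
; recycle each worker after a number of requests to contain slow memory leaks
pm.max_requests = 500
```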

It is also a good idea for PHP to be compiled with the suhosin patch for additional hardening. Most distributions already enable this option by default (the dotdeb distribution for Debian and the FreeBSD port both include it).

cgi.fix_pathinfo=0

The above is set to 1 by default. On apache/httpd with mod_php this setting is ignored, but for cgi/fastcgi it should be set to zero. Turning it off prevents a user from referencing a URI that does not exist and having PHP execute whatever script it finds up the chain. Use it in conjunction with the try_files $uri =404; work-around mentioned earlier.

memory_limit = 32M

Not really security related, but it can help to set this low by default. The default value shipped with PHP 5.3.* is usually 128MB. Setting it low prevents random scripts from using too much memory. Many scripts, such as WordPress, can raise it on a case-by-case basis if you allow them to change the default values. Ideally you want to start low and work up as needed.

display_errors = Off

In a development environment, seeing errors on the screen can be helpful. In a production environment, however, errors should be logged. If an error does occur in your script you should handle it gracefully and show visitors only what they need to know. Displaying raw errors on screen can help hackers learn how to exploit your script (or worse, cause the script to reveal information you would rather keep hidden).
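A sketch of the production-side settings that pair with this (the log path is an assumption; point it wherever you keep logs):

```ini
; in php.ini: log errors instead of displaying them to visitors
display_errors = Off
log_errors = On
error_log = /var/log/php_errors.log
```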

register_globals = Off

This is the default value in a production configuration and should usually be left as such. Input should be handled via $_REQUEST/$_GET/$_POST; you should not rely on register_globals for your day-to-day scripting needs. For those of you migrating from old PHP 5.2 scripts the stricter defaults can take some getting used to, such as short tags no longer being enabled by default.

post_max_size = 8M
upload_max_filesize = 2M

Above are the default values for the maximum size of a POST request and of file uploads. They can be lowered if none of the sites on your server will be utilizing that much, but in some cases they may actually need to be increased, such as in the event of using phpMyAdmin or other web-based administration scripts. Just remember to set client_max_body_size to match where appropriate, and make sure you do not set such a high value in the global http { } scope.

# In nginx.conf
limit_conn_zone $binary_remote_addr zone=phplimit:1m;
 
# in your php block
location ~ \.php$ {
        try_files $uri =404;
        limit_conn phplimit 5;
        #...
}

You may recall I mentioned earlier that a PHP-FPM worker can only handle one request at a time, and that PHP-FPM cannot handle as many concurrent requests as Nginx can. While Nginx could absorb a rush of users, those same users could be met with a 502/504 error if someone in particular has hit the PHP backend too hard.

The above limits a specific IP address to a maximum of 5 concurrent requests for php scripts. In essence, someone attempting to crash your PHP backend by flooding it with requests will likely fail due to this limit; they may see 403/502/504 responses while the rest of your visitors browse the site without much interruption. Now technically 5 concurrent php requests per IP may seem a tad high, but it is quite possible with visitors coming from educational institutions that share one IP behind a NAT-based firewall. Note that this does not affect how many requests per second a user can make, only how many they can make concurrently.

If you use a separate PHP block in each of your server { } blocks you can adjust this limit accordingly; using the same zone in each block keeps one shared record of IPs and their concurrent connection counts.
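As an illustration, two server blocks can share the phplimit zone from the earlier snippet while enforcing different ceilings (the server names and the limits of 5 and 10 are assumptions):

```nginx
server {
    server_name example.com;          # assumed name
    location ~ \.php$ {
        try_files $uri =404;
        limit_conn phplimit 5;
        #...
    }
}

server {
    server_name uploads.example.com;  # assumed name
    location ~ \.php$ {
        try_files $uri =404;
        limit_conn phplimit 10;       # same shared zone, looser limit
        #...
    }
}
```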

More information on this module can be found here: Limit Zone Module. Also note that the above syntax is specific to the development (1.1.*) branch of Nginx.

WordPress Specific

	location ~* ^/wp-content/uploads/.*\.php$ {
		return 403;
	}

Not much to see here. Simply put, if your wp-content folder resides at the root of the site, this configuration prohibits access to any file with a .php extension in the upload location. This block should come before your php block, since regular-expression locations are matched on a first-match basis. Using the above as a template you can create other locations for areas where you know no PHP scripts should be executed.

# in your nginx.conf
limit_req_zone $binary_remote_addr zone=wpsearch:1m rate=1r/s;
 
# in your server { } config
	location /search { 
		limit_req zone=wpsearch burst=3 nodelay;
		try_files $uri /index.php; 
	}

A little something from the Search Overload blog entry. The above prevents your WordPress search page from being hammered, since search queries are often executed by PHP without much caching; search is frequently the unprotected area of a WordPress installation. The configuration prevents a single IP from hitting the search page at more than a rate of one request per second. View the link above for more details on this feature.

More information on this module can be found here: Limit Request Module. Also note that the above syntax is specific to the development (1.1.*) branch of Nginx.

Noteworthy Blogs Discussing Nginx Security

Nginx & PHP via FastCGI important security issue by Clement Nedelcu

Why Path Info is the Worst PHP Feature Since Register Globals by Martin Fjordvald

Dummies Guide to Setting Up Nginx by Michael Lustfield

Remember: when in doubt, check the Wiki.
