Posts Tagged ‘wordpress’

Securing Nginx and PHP

December 16th, 2011

Disclaimer
This write-up is intended for single-user systems where you are the only user expected to log in via shell/terminal/SFTP (or at least people you actually trust). This collection of tips does not cover situations where you have multiple users' home folders or a shared hosting setup utilizing nginx and php-fpm. Generally speaking, if you have to read this post for security tips you probably should not be allowing access to any user but yourself in the first place.

If you do not care to read this whole write-up, just remember one thing: `777` is not a magical quick-fix; it’s an open invitation to having your system compromised. (Script kiddies… can you say jackpot?)

User/Groups

Generally speaking your server, which will most likely be a VPS running some flavor of Linux, will already have a web service user and group. This will sometimes be www, www-data, http, or apache (or even nginx if a package manager installed it for you). You can run the following command to get a list of the users on your system.

cat /etc/passwd

Both Nginx and PHP-FPM should run as the web service user; on Debian Squeeze this would be www-data:www-data, and on FreeBSD www:www.
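
If you are not sure which user the services are currently running as, a quick check along these lines should confirm it (a rough sketch; the php-fpm pool config path varies by distribution):

ps aux | grep -E '[n]ginx|[p]hp-fpm' | awk '{print $1, $11}'   # user and command of each running process
grep -i '^user' /etc/nginx/nginx.conf                          # user directive in the nginx config
grep -i '^user' /etc/php5/fpm/pool.d/www.conf                  # pool user (path differs per distro)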

If your server was set up with root as the main user, you should create an unprivileged user for handling your web files. This will also make it easier to handle permissions when uploading your web files via SFTP. For example, the following command on a Debian system would create a user named kbeezie with www-data as its primary group.

useradd -g 33 -m kbeezie

Group ID #33 is the id for www-data on Debian Squeeze (you can verify with id www-data). You will still need to set a password for the new user (with passwd as root, or usermod). This also creates a home folder at /home/kbeezie/ by default. You can log in to this account via SFTP and create a www folder if you wish. You’ll notice that the files will be owned by kbeezie:www-data, which allows Nginx and PHP to read them while giving you group-level control over how the web services may treat those files.
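
For example (a brief sketch using the username and paths from above; run these as root):

passwd kbeezie                            # set a password for the new user
id kbeezie                                # confirm www-data is the primary group
mkdir /home/kbeezie/www                   # web root for the new user
chown kbeezie:www-data /home/kbeezie/www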

This is ideal since you’re not giving nginx or php-fpm too much control over the user’s files, and access can be adjusted with the group permission bits. You could also create the user with its own group, such as kbeezie:kbeezie, and just change the group of the web files to www-data where appropriate.
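
If you go the kbeezie:kbeezie route, handing just the web files to the www-data group might look like this (a sketch, assuming the /home/kbeezie/www web root from above):

chgrp -R www-data /home/kbeezie/www   # web service group owns the web files only
chmod -R g+rX /home/kbeezie/www       # group read; capital X adds execute to directories only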

SSH Options

It is usually advisable to disable root access via /etc/ssh/sshd_config with the following line:

PermitRootLogin no

However, make sure you can log in with your new unprivileged user via SSH, and also make sure that you can `su` to root. On a FreeBSD system only a user belonging to the wheel group can su to root, and only a user listed in the sudoers file can use the sudo command. On Debian and the like, however, the user could have www-data as its primary group and still be able to su/sudo as long as the root password is valid. Your password should be at least 12 characters long and contain digits, symbols and at least one capital letter. Do not use the same password for root and your web user.
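
On FreeBSD, granting that access might look like the following (a sketch; substitute your own username, and always edit sudoers through visudo):

pw groupmod wheel -m kbeezie    # add the user to wheel so it can su to root
visudo                          # then add a line such as:  kbeezie ALL=(ALL) ALL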

Once you’ve verified that you’re able to obtain root status with the new user, you can safely disable root login via sshd_config and restart the ssh daemon.

You should also change the default SSH port, which is 22. While a port scanner could probably find the new SSH port, it is usually best practice not to use the default port for any type of secure access. As before, make sure you can log in on the new port (you can configure sshd_config to listen on both 22 and another port to test this out).
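
In sshd_config that temporary dual-port setup is just two Port lines (2222 below is only an example; pick any unused port, and drop the first line once the new port works):

Port 22
Port 2222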

SSH – PubKey Authentication

If, like me, you are on OS X or another Unix/Linux operating system, setting up pub key authentication is fairly painless. Logged in as your web user on the server, run the following command:

ssh-keygen

The above will by default ask for a passphrase for your private key as well as a destination to save both the id_rsa and id_rsa.pub files (normally ~/.ssh/). You can then copy your own user’s public key to an authorized_keys file with the following command.

cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

Then on your own computer you can run the ssh-keygen command as well, copy your computer’s public key from its id_rsa.pub file, and add it as another line to your server’s authorized_keys file.
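
From your local machine that can be done in one pass (a sketch; kbeezie@example.com stands in for your own user and host, and ssh-copy-id accomplishes the same thing where it is available):

cat ~/.ssh/id_rsa.pub | ssh kbeezie@example.com 'cat >> ~/.ssh/authorized_keys'
ssh kbeezie@example.com 'chmod 700 ~/.ssh; chmod 600 ~/.ssh/authorized_keys'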

If you have `PubkeyAuthentication yes` in your sshd_config file, with the authorized key path pointing to .ssh in your home folder, the server should allow you to log in without prompting for a password. Just remember that if you chose not to use a passphrase for your private key, anyone who obtains your id_rsa* files can log into your server without being prompted for a password.

You can even turn off password authentication completely and rely solely on public key authentication by setting `PasswordAuthentication no` in your sshd_config file. Keep in mind, though, that unless you have another means of getting into your server you might get locked out if you lose your public/private key or access to the machine you use to log in (also, not every SFTP or terminal application supports pub key authentication).
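
Put together, the sshd_config directives discussed so far might look like this (a sketch; only switch PasswordAuthentication to no once key-based login is confirmed working):

PermitRootLogin no
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
PasswordAuthentication no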

I actually have public key authentication set up with my iPad using iSSH for quick server access on the go (and you do not need a jailbroken iPad for this).

On the next page are some Nginx and PHP specific configurations for hardening your installation.

Search Page Getting Hammered?

March 27th, 2011

On my WordPress caching write-up someone asked a very good question in the comments: what good is the caching if your site gets brought down by excessive search queries?

Great article.

However I have following problem.

My WP setup is like that.

Currently I am on shared hosting with WP + W3 Total cache and during peak hours, my site is very slow. That is mainly because I have a huge traffic from Google.

My website caches plenty of keywords with AskApache Google 404 and Redirection.

What happens is that traffic from Google goes to /search/what-ever-keywords, dynamically created every time. And that is killing my system.
The problem is I have no idea how to help the poor server and cache that kind of traffic.

Would you have any advice for that ?
Regards,
Peter

Fortunately the Nginx web server has a way to soften the impact with a module called Limit Request. For WordPress this is how you would implement it:

http {
	limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
	#... the rest of your content
 
	server {
		#... your server content
 
		location /search { 
			limit_req zone=one burst=3 nodelay;
			try_files $uri /index.php; 
		}
	}
}

What we have here is almost identical to the example provided in the Nginx wiki (HttpLimitReqModule). Essentially we’ve created a zone that allows a user no more than 1 request per second, keyed by $binary_remote_addr (which is smaller than $remote_addr while still accomplishing the same goal), stored within 10MB of shared memory.

Then we’ve placed a limit_req directive inside the /search location. (Remember to include the fallback to /index.php so that WordPress receives the search request.) This directive uses the zone we created earlier, allowing a burst of up to 3 requests from a single user. If the user exceeds the burst limit they’re presented with a 503 error, which Google and others treat as a sort of ‘back off’ response.

By default the burst value is zero.

We use try_files here because a rewrite would occur before the limiting has a chance to act, since the rewrite phase runs before most of the other request-processing phases.
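
A quick way to confirm the limit is actually kicking in is to hammer the search URL and watch the status codes (a sketch; example.com stands in for your own domain):

# Fire off ten rapid requests; past the burst allowance they should return 503
for i in $(seq 1 10); do
	curl -s -o /dev/null -w "%{http_code}\n" "http://example.com/search/test"
done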

Here is an example WordPress configuration utilizing the above, configured for W3TC with Memcache (or opcode caching). Please see the article [ The Importance of Caching WordPress] for details on caching WordPress in other ways, and why you should be using caching with WordPress.

This config is somewhat based on the same configuration I currently use for kbeezie.com on a FreeBSD server.

 
user kb244 www;
worker_processes  2;
 
events {
	worker_connections	2048;
}
 
 
http {
	sendfile on;
	tcp_nopush on; # Generally on for linux, off for FreeBSD/Unix
	tcp_nodelay on;
	server_tokens off;
	include mime.types;
	default_type  application/octet-stream;
	index index.php index.htm index.html redirect.php;
 
	#Gzip
	gzip  on;
	gzip_vary on;
	gzip_proxied any;	
	gzip_comp_level 6;
	gzip_buffers 16 8k;
	gzip_http_version 1.1;
	gzip_disable "MSIE [1-6]\.(?!.*SV1)";	
	gzip_types text/plain text/css application/json application/x-javascript text/xml 
                        application/xml application/xml+rss text/javascript;
 
	#FastCGI
	fastcgi_intercept_errors on;
	fastcgi_ignore_client_abort on;
	fastcgi_buffers 8 16k;
	fastcgi_buffer_size 32k;
	fastcgi_read_timeout 120;
	fastcgi_index  index.php;
 
	limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
 
	server {
		listen 80;
 
		# Replace with your domain(s)
		server_name kbeezie.com www.kbeezie.com;
 
		# Setting your root at the server level helps a lot
		# especially if you don't like having to customize your
		# PHP configurations every single time you create a vhost
		root /usr/local/www/kbeezie.com;
 
		# Always a good idea to log your vhosts separately
		access_log /var/log/nginx/kbeezie.access.log;
		error_log /var/log/nginx/kbeezie.error.log;
 
		# I'm using W3TC with memcache, otherwise this block
		# would look a lot more complicated for handling file
		# based caching.
		location / { 
			try_files $uri $uri/ /index.php; 
		}
 
		# And here we have the search block limiting the requests
		location /search {
			limit_req zone=one burst=3 nodelay;
			try_files $uri /index.php;
		}
 
		# We want WordPress to handle the error page
		fastcgi_intercept_errors off;
 
		# Handle static file requests here, setting appropriate headers
		location ~* \.(ico|css|js|gif|jpe?g|png)$ {
			expires max;
			add_header Pragma public;
			add_header Cache-Control "public, must-revalidate, proxy-revalidate";
		}
 
		# I prefer to save this location block in its own php.conf
		# in the same folder as nginx.conf so I can just use:
		# include php.conf;
		# at the bottom of each server block I want PHP enabled on
 
		location ~ \.php$ {
			fastcgi_param  QUERY_STRING       $query_string;
			fastcgi_param  REQUEST_METHOD     $request_method;
			fastcgi_param  CONTENT_TYPE       $content_type;
			fastcgi_param  CONTENT_LENGTH     $content_length;
 
			fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
			fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;
			fastcgi_param  REQUEST_URI        $request_uri;
			fastcgi_param  DOCUMENT_URI       $document_uri;
			fastcgi_param  DOCUMENT_ROOT      $document_root;
			fastcgi_param  SERVER_PROTOCOL    $server_protocol;
 
			fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
			fastcgi_param  SERVER_SOFTWARE    nginx;
 
			fastcgi_param  REMOTE_ADDR        $remote_addr;
			fastcgi_param  REMOTE_PORT        $remote_port;
			fastcgi_param  SERVER_ADDR        $server_addr;
			fastcgi_param  SERVER_PORT        $server_port;
			fastcgi_param  SERVER_NAME        $server_name;
 
			# I use a Unix socket for PHP-FPM, yours instead may be:
			# fastcgi_pass 127.0.0.1:9000;
			fastcgi_pass unix:/tmp/php5-fpm.sock;
		}
 
		# Browsers always look for a favicon, I rather not flood my logs
		location = /favicon.ico { access_log off; log_not_found off; }	
 
		# Make sure you hide your .hidden linux/unix files
		location ~ /\. { deny  all; access_log off; log_not_found off; }
	}
}
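
After editing the configuration it’s worth validating it before reloading (a sketch; the exact reload command depends on how nginx was installed, and the FreeBSD rc script path below is just one possibility):

nginx -t          # check the configuration for syntax errors
nginx -s reload   # reload workers without dropping connections
# or via the rc script on FreeBSD:
# /usr/local/etc/rc.d/nginx reload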

As always, a little research and tweaking to your own requirements is advised.