Archive for the ‘Wordpress’ category

Allowing secure WordPress Updates with SSH2

April 2nd, 2013

The Set Up

For the purpose of this guide:

  • PHP-FPM runs as an unprivileged user such as www-data or www (FreeBSD).
  • The owner of the web files is non-root such as “WebUser” belonging to the www/www-data group.
  • PECL-ssh2 has been installed for PHP
  • You currently use paired key authentication (How-To)

The Guide

If you are like me, you may be running your WordPress-driven site on a VPS without a control panel, or even without a typical FTP server (i.e. SSH/SCP only). I’ll show you how to set up WordPress to update itself via SSH, while only doing so when you allow it.

For those of you using Apache’s SuExec, this guide will not be of much use, as SuExec executes PHP and other processes as the same user that owns the files. In that case the permission settings at the very bottom of this page may be of more help to you, or you can use ‘direct’ mode instead of ssh2.

PECL-ssh2

First thing we need to do is make sure PHP has the SSH2 extension installed. This can be installed via PECL, or in the case of FreeBSD ports:

cd /usr/ports/security/pecl-ssh2
make install clean
service php-fpm restart

Once installed, SSH2 will be visible in the phpinfo() output.
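
You can also check from the shell; this is a quick sketch, assuming the PHP CLI loads the same extensions as PHP-FPM:

php -m | grep ssh2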

Paired Key Authentication for PHP

Now we need to provide PHP with a public/private key pair, so let us create a folder to store these files in. Because PHP runs as www, and the files are owned by WebUser (or whichever username you’ve given to manage your web files), PHP will not have free rein over any existing key pairs that user may already have. Besides, it is not advisable to reuse the keys belonging to your main web user.

For my purposes, websites are stored in a path under the Web user’s home folder.
Example: /home/WebUser/www/domain_name.com/public_html

We can create a folder in “www” (outside of any of the website’s public documents), named .ssh:

mkdir /home/WebUser/www/.ssh
cd /home/WebUser/www/.ssh
ssh-keygen -b 4096 -C php-key -t rsa
** When asked use the path /home/WebUser/www/.ssh/id_rsa instead of the default
** Provide a passphrase, DO NOT leave it blank.

You do not need to create such a strong key using 4096 bits for local communication, nor do you need to store it in a folder named “.ssh”. The keys do not even need to be named id_rsa, so feel free to mix it up a bit; just remember your public key will have a .pub extension. You can even create separate keys for each website, so long as you do not place them inside the publicly accessible root.

The “php-key” portion of the command is the comment; if you are using multiple keys, you can edit this comment to help keep them organized.
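
For example, a per-site key could be generated like this (site2_rsa is just a hypothetical file name; the -f flag sets the output path so you are not prompted for it):

ssh-keygen -b 4096 -C php-key-site2 -t rsa -f /home/WebUser/www/.ssh/site2_rsa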

Authorizing the new keys

As mentioned before, this guide assumes you are already using paired key authentication. As a result there should be an authorized_keys file in your user’s .ssh folder (typically /home/UserName/.ssh/authorized_keys).

In order for the new keys to be granted the web user’s credentials, we need to append the content of id_rsa.pub to authorized_keys. You may do this either in an editor such as WinSCP, or via the command line, for example with the ‘cat’ command:

cat /home/WebUser/www/.ssh/id_rsa.pub >> /home/WebUser/.ssh/authorized_keys

Make sure the command above uses the double redirection operator (>>) and not a single one (>), or you risk replacing the entire content of authorized_keys with just that one public key.
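
Before handing the key to PHP, you can verify that it is accepted; a sketch, assuming sshd accepts connections on localhost. You should be prompted for the key’s passphrase rather than the account password:

ssh -i /home/WebUser/www/.ssh/id_rsa WebUser@localhost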

The purpose of the passphrase, which is not required but STRONGLY encouraged, is to ensure that PHP cannot use this key unless you explicitly provide the passphrase. This prevents a malicious script from acting as the main web user when you are not in the process of performing an update or installation (since your passphrase is only provided at those times).

While you can store the passphrase in wp-config.php as FTP_PASS, doing so is strongly discouraged.

Setting Key Ownership and Permission

Because PHP in this configuration runs as www-data or www, it will not be able to access the newly created keys unless we change their ownership and permissions. With the commands below we set the ownership of the .ssh folder to www:www (or www-data:www-data), then restrict the private key so that only its owner can read it, and the public key so that owner and group can read it. Strictly speaking only the owner ever needs the public key, since its content already lives in authorized_keys, but you will not normally be logged in as www and may still want to be able to read the file.

chown -R www:www /home/WebUser/www/.ssh
chmod 0400 /home/WebUser/www/.ssh/id_rsa
chmod 0440 /home/WebUser/www/.ssh/id_rsa.pub

Modifying wp-config.php

Below is the code segment that needs to be added to the wp-config.php file. First the method is defined as ssh2, then the absolute folder paths, then the paths to the public and private key, and finally the username and host SSH will attempt to connect to.

define('FS_METHOD', 'ssh2');
define('FTP_BASE', '/home/WebUser/www/YourDomain.com/public_html/WordPressRoot/');
define('FTP_CONTENT_DIR', '/home/WebUser/www/YourDomain.com/public_html/WordPressRoot/wp-content/');
define('FTP_PLUGIN_DIR', '/home/WebUser/www/YourDomain.com/public_html/WordPressRoot/wp-content/plugins/');
define('FTP_PUBKEY', '/home/WebUser/www/.ssh/id_rsa.pub');
define('FTP_PRIKEY', '/home/WebUser/www/.ssh/id_rsa');
define('FTP_USER', 'WebUser');
define('FTP_HOST', 'YourDomain.com');
// If you are not using port 22, append :Port# to the domain, such as YourDomain:1212

With the above in place, you’ll be able to:

  • Install a Theme or Plugin from WordPress itself
  • Update a Theme or Plugin automatically in WordPress
  • Delete a Theme or Plugin from within WordPress
  • Upgrade WordPress Itself from the automatic update utility

Each time you perform any of the above tasks, you’ll be told that the public and private keys are invalid; this error is shown only because WordPress cannot continue without the passphrase. So provide the passphrase via the password field each time you perform these tasks, and make sure the “ssh2” radio button is selected when you do.

If you are uploading a zip file to install a theme/plugin

While the settings above will work for the most part with the automatic fetch-and-install (such as searching for a plugin and then clicking Install), they may not work when providing a zip file from your local storage.

If this is the case, we just need to adjust the permissions of the upload directory. Assuming your files are owned by WebUser:www and PHP runs as www:www, we need to set the permissions of the /wp-content/uploads folder to 775 (read/write/execute for both the WebUser owner and the www group, while keeping read/execute for ‘other’).

chmod 0775 /home/WebUser/www/YourDomain.com/public_html/wp-content/uploads/

If you already have content in there, you may need to add the recursive flag -R before 0775.

chmod -R 0775 /home/WebUser/www/YourDomain.com/public_html/wp-content/uploads/

For the purpose of installations, this is only required so that PHP can move the uploaded zip file into the uploads folder, where it will be unpacked. From there the familiar SSH dialog will appear to continue the rest of the installation, after which the uploaded zip file is removed from the uploads folder.

Securing your Uploads folder on Nginx

Because PHP is now capable of writing to the uploads folder, there is a chance that someone may attempt to upload a script into it and then execute it from there. The uploads folder should not host any executable scripts, so to prevent this we need to add a rule to the Nginx configuration.

This location block should go before the PHP location block.

location ~* ^/wp-content/uploads/.*\.php$ {
    return 403;
}

Any attempts to call a PHP script from the uploads folder will now be met with a forbidden response code.
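You can verify the rule with a quick request for any .php path under uploads; test.php here is just a hypothetical name, and it does not even need to exist, since the block fires on the URI alone:

curl -I http://YourDomain.com/wp-content/uploads/test.php
# HTTP/1.1 403 Forbidden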

For further information on securing PHP and Nginx please refer to Securing Nginx & PHP

Updated Nginx rules for New W3TC

April 2nd, 2013

If you have been using the W3TC rules for “Disk Enhanced” mode from this blog in the past, you may find that they no longer work after upgrading W3TC.

Previously the following would work:

server {
	listen 80;
 
	server_name example.com www.example.com;
 
	root /usr/local/www/example.com;
 
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	# the next two location blocks are to ensure gzip encoding is turned off
	# for the serving of already gzipped w3tc page cache
	# normally gzip static would handle this but W3TC names them with _gzip
 
	location ~ /wp-content/cache/page_enhanced.*html$ {
		expires max;
		charset utf-8;
		add_header Vary "Accept-Encoding, Cookie";
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	location ~ /wp-content/cache/page_enhanced.*gzip$ {
		expires max;
		gzip off;
		types {}
		charset utf-8;
		default_type text/html;
		add_header Vary "Accept-Encoding, Cookie";
		add_header Content-Encoding gzip;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	location / { 
		if (-f $request_filename) {
		        break;
		}
 
		set $pgcache "";
 
		if ($request_method = GET) { set $pgcache "${pgcache}D"; }
 
		if ($args = "") { set $pgcache "${pgcache}I"; }
 
		if ($http_cookie !~ (comment_author_|wordpress|wordpress_logged_in|wp-postpass_)) {
			set $pgcache "${pgcache}S";
		}
		if (-f $document_root/wp-content/w3tc/pgcache/$request_uri/_index.html) {
			set $pgcache "${pgcache}K";
		}
		if ($pgcache = "DISK") {
			rewrite ^ /wp-content/w3tc/pgcache/$request_uri/_index.html break;
		}
 
		if (!-e $request_filename) { rewrite ^ /index.php last; }
	}
 
	location /search { limit_req zone=kbeezieone burst=3 nodelay; rewrite ^ /index.php; }
 
	fastcgi_intercept_errors off;
 
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	include php.conf;
 
	# You may want to remove the robots line from drop to use a virtual robots.txt
	# or create a drop_wp.conf tailored to the needs of the wordpress configuration
	include drop.conf;
}

However, W3 Total Cache no longer saves page caches in the w3tc folder; they are now placed in the cache/page_enhanced/<domain>/ folder, so the following must be used instead. (The following has been tested with W3TC 0.9.2.8.)

server {
	listen 80;
 
	server_name example.com www.example.com;
 
	root /usr/local/www/example.com;
 
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	location / { 
		if (-f $request_filename) {
		        break;
		}
 
		set $w3tc_rewrite 1;
		if ($request_method = POST) { set $w3tc_rewrite 0; }
		if ($query_string != "") { set $w3tc_rewrite 0; }
 
		set $w3tc_rewrite2 1;
		if ($request_uri !~ \/$) { set $w3tc_rewrite2 0; }
		if ($request_uri ~* "(sitemap(_index)?\.xml(\.gz)?|[a-z0-9_\-]+-sitemap([0-9]+)?\.xml(\.gz)?)") { set $w3tc_rewrite2 1; }
		if ($w3tc_rewrite2 != 1) { set $w3tc_rewrite 0; }
 
		if ($http_cookie ~* "(comment_author|wp\-postpass|wordpress_\[a\-f0\-9\]\+|wordpress_logged_in)") { set $w3tc_rewrite 0; }
		if ($http_user_agent ~* "(W3\ Total\ Cache/0\.9\.2\.4)") { set $w3tc_rewrite 0; }
 
		set $w3tc_ua "";
		set $w3tc_ref "";
		set $w3tc_ssl "";
		set $w3tc_enc "";
 
		if ($http_accept_encoding ~ gzip) { set $w3tc_enc _gzip; }
 
		set $w3tc_ext "";
		if (-f "$document_root/wp-content/cache/page_enhanced/$host/$request_uri/_index$w3tc_ua$w3tc_ref$w3tc_ssl.html$w3tc_enc") {
		    set $w3tc_ext .html;
		}
		if ($w3tc_ext = "") { set $w3tc_rewrite 0; }
 
		if ($w3tc_rewrite = 1) {
		    rewrite ^ "/wp-content/cache/page_enhanced/$host/$request_uri/_index$w3tc_ua$w3tc_ref$w3tc_ssl$w3tc_ext$w3tc_enc" last;
		}
 
		if (!-e $request_filename) { 
			rewrite ^ /index.php last; 
		}
	}
 
	location /search { limit_req zone=kbeezieone burst=3 nodelay; rewrite ^ /index.php; }
 
	fastcgi_intercept_errors off;
 
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	include php.conf;
 
	# You may want to remove the robots line from drop to use a virtual robots.txt
	# or create a drop_wp.conf tailored to the needs of the wordpress configuration
	include drop.conf;
}

If $host fails to work, replace with your actual domain name.

Securing Nginx and PHP

December 16th, 2011

Disclaimer
This write-up is intended for single-user systems where you are the only user expected to log in via shell/terminal/SFTP (or at least only people you actually trust). This collection of tips does not cover situations with multiple users’ home folders, or a shared hosting situation utilizing nginx and php-fpm. Generally speaking, if you have to read this post for security tips you probably should not be allowing access to any user but yourself in the first place.

If you do not care to read this whole write-up, just remember one thing: `777` is not a magical quick-fix; it’s an open invitation to having your system compromised. (Script kiddies… can you say jackpot?)

User/Groups

Generally speaking your server, which will most likely be a VPS running some flavor of Linux, will already have a web service user and group. This will sometimes be www, www-data, http, apache (or even nginx if a package manager installed it for you). You can run the following command to get a list of users on your system.

cat /etc/passwd

Both Nginx and PHP-FPM should run as a web service user; on Debian Squeeze this would be www-data:www-data, on FreeBSD www:www.

If your server was set up with root as the main user, you should create an unprivileged user for handling your web files. This will also make it easier to handle permissions when uploading your web files via SFTP. For example, the following command on a Debian system would create a user named kbeezie with www-data as the primary group.

useradd -g 33 -m kbeezie

Group ID 33 is the ID of www-data on Debian Squeeze (you can verify with id www-data). You may have to su into the new user and change the password (or set it as root with passwd). This will also create a home folder in /home/kbeezie/ by default. You can log in via SFTP as this user and create a www folder if you wish. You’ll notice that the files will be owned by kbeezie:www-data, which allows Nginx and PHP to read them, but also gives you group-level control over how the web services may treat those files.
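
For example, a minimal sketch of setting the password as root and confirming you can switch into the new account:

passwd kbeezie
su - kbeezie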

This is ideal since you’re not giving nginx or php-fpm too much control over the user’s files, and they can be controlled with the group flag. You could also create the user with its own group, such as kbeezie:kbeezie, and just change the group of the web files to www-data where appropriate.
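
In that case the group change is a one-liner; the www path here just follows the example layout from above:

chown -R kbeezie:www-data /home/kbeezie/www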

SSH Options

It is usually advisable to disable root login via /etc/ssh/sshd_config with the following line:

PermitRootLogin no

However, make sure you can log in with your new unprivileged user via SSH, and also make sure that you can `su` to root. On a FreeBSD system only a user belonging to the wheel group can su to root, and only a user listed in the sudoers file can use the sudo command. On Debian and similar systems, however, the user could have www-data as its own group and still be able to su/sudo as long as the root password is valid. Your password should be at least 12 characters long and contain digits, symbols, and at least one capital letter. Do not use the same password for root and your web user.

Once you’ve verified that you’re able to obtain root status with the new user, you can safely disable root login via sshd_config and restart the SSH daemon.

You should also change the default SSH port, which is 22. While a port scanner could probably find the new SSH port, it is usually best practice not to use the default port for any type of secure access. As before, make sure you can log in on the new port (you can configure sshd_config to listen on both 22 and another port to test this out).
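
A sketch of the relevant sshd_config lines while testing; 2222 is just an example port, and you would drop the Port 22 line once the new port is confirmed working:

Port 22
Port 2222
PermitRootLogin no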

SSH – PubKey Authentication

If you are on OS X or another Unix/Linux operating system, like I am, setting up pubkey authentication is fairly painless. Logged in as your web user on the server, you can run the following command:

ssh-keygen

The above by default will ask for a passphrase for your private key as well as a destination to save both the id_rsa and id_rsa.pub files (normally ~/.ssh/). You can then copy your own user’s public key to an authorized_keys file with the following command.

cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

Then on your own computer you can run the ssh-keygen command as well, copy your own computer’s public key from the id_rsa.pub file, and add it as another line to your server’s authorized_keys file.
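
From your own machine this can be done in one step; a sketch, assuming OpenSSH’s ssh-copy-id is available (the user@yourserver.com address is a placeholder), with a manual alternative on the second line:

ssh-copy-id user@yourserver.com
# or, manually:
cat ~/.ssh/id_rsa.pub | ssh user@yourserver.com "cat >> ~/.ssh/authorized_keys"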

If you have `PubkeyAuthentication yes` in your sshd_config file, with the authorized keys path set to .ssh in your home folder, the server should let you log in without prompting for a password. Just remember that if you chose not to use a passphrase for your private key, then anyone who obtains your id_rsa* files can log into your server without being prompted for a password.

You can even turn off password authentication completely and rely solely on public key authentication by setting `PasswordAuthentication no` in your sshd_config file. Keep in mind, however, that unless you have another means of getting into your server, you may be locked out if you lose your public/private key or access to the machine you use to log in (also, not every SFTP or terminal application supports pubkey authentication).

I actually have public key authentication set up with my iPad using iSSH for quick server access on the go (and you do not need a jailbroken iPad for this).

On the next page are some Nginx and PHP specific configurations for hardening your installation.

Nginx Flood Protection with Limit_req

April 9th, 2011

The Test

I’ll show you a very simple demonstration of Nginx’s Limit Request module and how it may be helpful to you in keeping your website up if you are hit by excessive connections or HTTP based denial-of-service attacks.

For this test I’m using a copy of my site’s about page saved as about.html, and the Blitz.io service (which is free at the moment) to test the limit_req directive.

First I test the page with the following command in Blitz, which will essentially ramp the number of concurrent connections from 1 to 1,075 over a period of 1 minute. The timeout has been set to 2 minutes, and the region set to California. I also set it to consider any response code other than 200 an error; otherwise even a 503 response would be counted as a hit.

-p 1-1075:60 --status 200 -T 2000 -r california http://kbeezie.com/about.html

Not bad, right? But what if that were a PHP document? That many users hitting it frequently might cause your server to show 502/504 errors as the PHP processes behind it keep crashing or timing out. This is especially true if you’re using a VPS or an inexpensive dedicated server without any additional protection. (Though if you use either, iptables or pf might be a good resource to look into.)

You can of course use caching and other tools to help improve your website, such as a WordPress caching plugin, which you should be using anyway. But sometimes that one lone person might decide to hit one of your PHP scripts directly. For those types of people we can use the limit request module.

Let’s create a zone in our http { } block in Nginx; we’ll call it blitz, set a rate of 5 requests per second, and allow the zone to hold up to 10MB of data. I’ve used $binary_remote_addr as the session variable since you can pack a lot more of those into 10MB of space than the human-readable $remote_addr.

limit_req_zone $binary_remote_addr zone=blitz:10m rate=5r/s;

Then inside my server block I set a limit_req for the file I was testing above:

location = /about.html {
	limit_req zone=blitz nodelay;
}

So I reload Nginx’s configuration and I give Blitz another try:

You’ll notice that now only about 285 hits made it to the server; that’s roughly 4.75 requests per second, just shy of the 5r/s we set for the limit. The rest of the requests were met with a 503 error. If you check the access log you’ll see that the majority of requests received the 503 response, with spurts of 200 responses mixed in.

Using this can be quite helpful if you want to limit requests to certain weak locations on your website. It can also be applied to all PHP requests.

Applying Limit Request to PHP

If you would like to rate-limit all PHP requests, you can do so by inserting the limit_req directive into your PHP block like so:

location ~ \.php {
	limit_req   zone=blitz;
	include php_params.conf;
	fastcgi_pass unix:/tmp/php5-fpm.sock;
}

It may help to play with some settings like increasing or decreasing the rate, as well as using the burst or nodelay options. Those configuration options are detailed here: HttpLimitReqModule.
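
For example, a sketch of a gentler variant using the same blitz zone: burst queues up to 10 excess requests, and without nodelay they are trickled through at the zone’s rate instead of being rejected outright.

location = /about.html {
	# queue up to 10 excess requests and serve them at 5r/s
	limit_req zone=blitz burst=10;
}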

Note about Blitz.io

You may have noticed from the graphs above that the tests were performed with up to 1,075 users. That is a bit misleading: all of the hits from the California region came from a single IP (50.18.0.223).

This is hardly realistic compared to an actual rush of traffic, or a DDoS (Distributed Denial of Service) attack. It also explains why the hits are consistent with the limiting of a single user, as opposed to the higher numbers you would see from many actual users or IP sources. Load Impact actually uses separate IPs for their testing regions; however, with the free version you are limited to a maximum of 50 users and in the number of times you may perform the test. You would have to spend about $49/day to perform a test of up to 1,000 users against your site.

Testing from a single IP can easily be done from your own server or personal computer if you have enough RAM and bandwidth. Tools for doing this from your own machine include siege, ab, openload, and so forth. You just don’t get all the pretty graphs or the various locations to test from.

Also, if you test this yourself, remember to use the --status flag, as Blitz will otherwise consider 5xx responses successful hits.

Better Alternatives

I won’t go into too much detail, but if you are serious about protecting your site from an actual DDoS or multi-service attack, it would be best to look into other tools such as iptables (Linux) or pf (packet filter, for BSD) on the software side, or a hardware firewall if your host provides one. The limit request module above will only work for floods against your site over the HTTP protocol; it will not protect you from ping floods or various other exploits. For those it helps to turn off any unnecessary services, and to avoid having services listening on a public port that you do not need.

For example, on my server the only public ports being listened on are HTTP/HTTPS and SSH. Services such as MySQL should be bound to a local connection. It may also help to use alternate ports for common services such as SSH, but keep in mind it doesn’t take much for a port scanner to find them (that’s where iptables/pf come in handy).

Nginx Configuration Examples

March 31st, 2011

Here are a number of Nginx configurations for common scenarios that should help make things easier to get started with. This page will grow as needed.

A quick and easy starter example

The first one is for the fly-by, cut-and-paste types among you. If you’re using a typical /sites-enabled/default configuration, replace the content of that file with this. Make sure to change the root path, and your PHP port/path if it differs.

server {
	# .domain.com will match both domain.com and anything.domain.com
	server_name .example.com;
 
	# It is best to place the root of the server block at the server level, and not the location level
	# any location block path will be relative to this root. 
	root /usr/local/www/example.com;
 
	# It's always good to set logs, note however you cannot turn off the error log
	# setting error_log off; will simply create a file called 'off'.
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	# This can also go in the http { } level
	index index.html index.htm index.php;
 
	location / { 
		# if you're just using wordpress and don't want extra rewrites
		# then replace the word @rewrites with /index.php
		try_files $uri $uri/ @rewrites;
	}
 
	location @rewrites {
		# Can put some of your own rewrite rules in here
		# for example rewrite ^/~(.*)/(.*)/? /users/$1/$2 last;
		# If nothing matches we'll just send it to /index.php
		rewrite ^ /index.php last;
	}
 
	# This block will catch static file requests, such as images, css, js
	# The ?: prefix is a 'non-capturing' mark, meaning we do not require
	# the pattern to be captured into $1 which should help improve performance
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		# Some basic cache-control for static files to be sent to the browser
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	# remove the robots line if you want to use wordpress' virtual robots.txt
	location = /robots.txt  { access_log off; log_not_found off; }
	location = /favicon.ico { access_log off; log_not_found off; }	
 
	# this prevents hidden files (beginning with a period) from being served
	location ~ /\.          { access_log off; log_not_found off; deny all; }
 
	location ~ \.php {
        	fastcgi_param  QUERY_STRING       $query_string;
        	fastcgi_param  REQUEST_METHOD     $request_method;
	        fastcgi_param  CONTENT_TYPE       $content_type;
	        fastcgi_param  CONTENT_LENGTH     $content_length;
 
	        fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
	        fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;
	        fastcgi_param  REQUEST_URI        $request_uri;
	        fastcgi_param  DOCUMENT_URI       $document_uri;
	        fastcgi_param  DOCUMENT_ROOT      $document_root;
	        fastcgi_param  SERVER_PROTOCOL    $server_protocol;
 
        	fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
	        fastcgi_param  SERVER_SOFTWARE    nginx;
 
        	fastcgi_param  REMOTE_ADDR        $remote_addr;
	        fastcgi_param  REMOTE_PORT        $remote_port;
	        fastcgi_param  SERVER_ADDR        $server_addr;
	        fastcgi_param  SERVER_PORT        $server_port;
	        fastcgi_param  SERVER_NAME        $server_name;
 
	        fastcgi_pass 127.0.0.1:9000;
	}
}

As for the rest of you, read on for some more goodies and other configuration examples.

Making the PHP inclusion easier

For the purposes of PHP I’ve created a php.conf in the same folder as nginx.conf. This file DOES NOT go into the same folder as your virtual host configurations: for example, if nginx.conf is in /etc/nginx/, then php.conf goes into /etc/nginx/, not /etc/nginx/sites-enabled/.

If you are not using PHP-FPM, you really should be, as opposed to the old spawn-fcgi or php-cgi methods. For Debian Lenny/Squeeze, dotdeb makes php5-fpm available; FreeBSD already includes it in the PHP 5.2 and 5.3 ports.

location ~ \.php {
        # for security reasons the next line is highly encouraged
        try_files $uri =404;
 
        fastcgi_param  QUERY_STRING       $query_string;
        fastcgi_param  REQUEST_METHOD     $request_method;
        fastcgi_param  CONTENT_TYPE       $content_type;
        fastcgi_param  CONTENT_LENGTH     $content_length;
 
        fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
 
        # if the next line in yours still contains $document_root,
        # consider switching to $request_filename, which provides
        # better support for directives such as alias
        fastcgi_param  SCRIPT_FILENAME    $request_filename;
 
        fastcgi_param  REQUEST_URI        $request_uri;
        fastcgi_param  DOCUMENT_URI       $document_uri;
        fastcgi_param  DOCUMENT_ROOT      $document_root;
        fastcgi_param  SERVER_PROTOCOL    $server_protocol;
 
        fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
        fastcgi_param  SERVER_SOFTWARE    nginx;
 
        fastcgi_param  REMOTE_ADDR        $remote_addr;
        fastcgi_param  REMOTE_PORT        $remote_port;
        fastcgi_param  SERVER_ADDR        $server_addr;
        fastcgi_param  SERVER_PORT        $server_port;
        fastcgi_param  SERVER_NAME        $server_name;
 
        # If using a unix socket...
        # fastcgi_pass unix:/tmp/php5-fpm.sock;
 
        # If using a TCP connection...
        fastcgi_pass 127.0.0.1:9000;
}

I prefer to use a Unix socket: it cuts out the TCP overhead and increases security, since file-based permissions are stronger. In php-fpm.conf you can change the listen line to /tmp/php5-fpm.sock, and you should uncomment the listen.owner, listen.group, and listen.mode lines.
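
For reference, the relevant lines in php-fpm.conf might look like this (a sketch; the www-data owner/group assumes a Debian-style setup):

listen = /tmp/php5-fpm.sock
; uncomment these so the socket is owned and readable by the web user
listen.owner = www-data
listen.group = www-data
listen.mode = 0660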

REMEMBER: Set cgi.fix_pathinfo to 0 in the php.ini when using Nginx with PHP-FPM/FastCGI, otherwise something as simple as /forum/avatars/user2.jpg/index.php could be used to execute an uploaded php script hidden as an image.
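
That is a single line in php.ini (remember to restart PHP-FPM afterwards):

cgi.fix_pathinfo = 0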

Path_Info
If you must use PATH_INFO and PATH_TRANSLATED then add the following within your location block above (make sure $ does not exist after \.php or /index.php/some/path/ will not match):

	fastcgi_split_path_info ^(.+\.php)(/.+)$;
	fastcgi_param  PATH_INFO          $fastcgi_path_info;
	fastcgi_param  PATH_TRANSLATED    $document_root$fastcgi_path_info;

It is advised however to update your script to use REQUEST_URI instead.

Optional Config – Drop

You can also have a file called drop.conf in the same folder with nginx.conf with the following:

	location = /robots.txt  { access_log off; log_not_found off; }
	location = /favicon.ico { access_log off; log_not_found off; }	
	location ~ /\.          { access_log off; log_not_found off; deny all; }
	location ~ ~$           { access_log off; log_not_found off; deny all; }

Using include drop.conf; at the end of your server block will include these. The first two lines simply turn off access logging and prevent logging an error if robots.txt or favicon.ico is not found, both of which browsers frequently request. WordPress typically uses a virtual robots.txt, so you may omit the first line if needed so that the request is passed back to WordPress.

The third line prevents nginx from serving any hidden unix/linux files, basically any request beginning with a period.

And the fourth line is mainly for people who use vim, or any other command-line editor that creates a backup copy of the file being worked on, with a file name ending in ~. Hiding these prevents someone from accessing a backup copy of a file you have been working on.

Sample Nginx Configurations

Simple Configuration with PHP enabled

Here we have a very simple starting configuration with PHP support. I will provide most of the comments here for the basic lines. The other sample configurations won’t be as heavily commented.

server {
	# This will listen on all interfaces, you can instead choose a specific IP
	# such as listen x.x.x.x:80;  Setting listen 80 default_server; will make
	# this server block the default one if no other blocks match the request
	listen 80;
 
	# Here you can set a server name, you can use wildcards such as *.example.com
	# however remember if you use server_name *.example.com; You'll only match subdomains
	# to match both subdomains and the main domain use both example.com and *.example.com
	server_name example.com www.example.com;
 
	# It is best to place the root of the server block at the server level, and not the location level
	# any location block path will be relative to this root. 
	root /usr/local/www/example.com;
 
	# It's always good to set logs, note however you cannot turn off the error log
	# setting error_log off; will simply create a file called 'off'.
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	location / { 
		# Rewrite rules and other criterias can go here
		# Remember to avoid using if() where possible (http://wiki.nginx.org/IfIsEvil)
	}
 
	# This block will catch static file requests, such as images, css, js
	# The ?: prefix is a 'non-capturing' mark, meaning we do not require
	# the pattern to be captured into $1 which should help improve performance
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		# Some basic cache-control for static files to be sent to the browser
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	# We can include our basic configs here, as you can see its much easier
	# than pasting out the entire sets of location block each time we add a vhost
 
	include drop.conf;
	include php.conf;
}

WordPress Simple (Not using file-based caching or special rewrites)
This can also be used with W3 Total Cache if you’re using Memcache or Opcode Caching.

server {
	listen 80;
 
	server_name example.com www.example.com;
 
	root /usr/local/www/example.com;
 
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	location / { 
		try_files $uri $uri/ /index.php; 
	}
 
	location /search { limit_req zone=kbeezieone burst=3 nodelay; rewrite ^ /index.php; }
 
	fastcgi_intercept_errors off;
 
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	include php.conf;
 
	# You may want to remove the robots line from drop to use a virtual robots.txt
	# or create a drop_wp.conf tailored to the needs of the wordpress configuration
	include drop.conf;
}

WordPress w/ W3 Total Cache using Disk (Enhanced)
The following rules have been updated to work with W3TC 0.9.2.8; in some cases $host may need to be replaced with your domain name.

server {
	listen 80;
 
	server_name example.com www.example.com;
 
	root /usr/local/www/example.com;
 
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	# the next two location blocks are to ensure gzip encoding is turned off
	# for the serving of already gzipped w3tc page cache
	# normally gzip static would handle this but W3TC names them with _gzip
 
	location ~ /wp-content/cache/page_enhanced.*html$ {
		expires max;
		charset utf-8;
		add_header Vary "Accept-Encoding, Cookie";
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	location ~ /wp-content/cache/page_enhanced.*gzip$ {
		expires max;
		gzip off;
		types {}
		charset utf-8;
		default_type text/html;
		add_header Vary "Accept-Encoding, Cookie";
		add_header Content-Encoding gzip;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	location / { 
		if (-f $request_filename) {
		        break;
		}
 
		set $w3tc_rewrite 1;
		if ($request_method = POST) { set $w3tc_rewrite 0; }
		if ($query_string != "") { set $w3tc_rewrite 0; }
 
		set $w3tc_rewrite2 1;
		if ($request_uri !~ \/$) { set $w3tc_rewrite2 0; }
		if ($request_uri ~* "(sitemap(_index)?\.xml(\.gz)?|[a-z0-9_\-]+-sitemap([0-9]+)?\.xml(\.gz)?)") { set $w3tc_rewrite2 1; }
		if ($w3tc_rewrite2 != 1) { set $w3tc_rewrite 0; }
 
		if ($http_cookie ~* "(comment_author|wp\-postpass|wordpress_\[a\-f0\-9\]\+|wordpress_logged_in)") { set $w3tc_rewrite 0; }
		if ($http_user_agent ~* "(W3\ Total\ Cache/0\.9\.2\.4)") { set $w3tc_rewrite 0; }
 
		set $w3tc_ua "";
		set $w3tc_ref "";
		set $w3tc_ssl "";
		set $w3tc_enc "";
 
		if ($http_accept_encoding ~ gzip) { set $w3tc_enc _gzip; }
 
		set $w3tc_ext "";
		if (-f "$document_root/wp-content/cache/page_enhanced/$host/$request_uri/_index$w3tc_ua$w3tc_ref$w3tc_ssl.html$w3tc_enc") {
		    set $w3tc_ext .html;
		}
		if ($w3tc_ext = "") { set $w3tc_rewrite 0; }
 
		if ($w3tc_rewrite = 1) {
		    rewrite ^ "/wp-content/cache/page_enhanced/$host/$request_uri/_index$w3tc_ua$w3tc_ref$w3tc_ssl$w3tc_ext$w3tc_enc" last;
		}
 
		if (!-e $request_filename) { 
			rewrite ^ /index.php last; 
		}
	}
 
	location /search { limit_req zone=kbeezieone burst=3 nodelay; rewrite ^ /index.php; }
 
	fastcgi_intercept_errors off;
 
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	include php.conf;
 
	# You may want to remove the robots line from drop to use a virtual robots.txt
	# or create a drop_wp.conf tailored to the needs of the wordpress configuration
	include drop.conf;
}

WordPress w/ WP Supercache (Full-On mode)

server {
	listen 80;
 
	server_name example.com www.example.com;
 
	root /usr/local/www/example.com;
 
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	location / { 
		# This line when enabled will use Nginx's gzip static module
		gzip_static on;
 
		# Disables serving gzip content to IE 6 or below
		gzip_disable        "MSIE [1-6]\.";
 
		# Sets the default type to text/html so that gzipped content is served
		# as html, instead of raw uninterpreted data.
		default_type text/html;
 
		# does the requested file exist exactly as it is? if yes, serve it and stop here
		if (-f $request_filename) { break; }
 
		# sets some variables to help test for the existence of a cached copy of the request
		set $supercache_file '';
		set $supercache_uri $request_uri;
 
		# IF the request is a post, has a query attached, or a cookie
		# then don't serve the cache (ie: users logged in, or posting comments)
		if ($request_method = POST) { set $supercache_uri ''; }
		if ($query_string) { set $supercache_uri ''; }
		if ($http_cookie ~* "comment_author_|wordpress|wp-postpass_" ) { 
			set $supercache_uri ''; 
		}
 
		# if the supercache_uri variable hasn't been blanked by this point, attempt
		# to set the name of the destination to the possible cache file
		if ($supercache_uri ~ ^(.+)$) { 
			set $supercache_file /wp-content/cache/supercache/$http_host/$1index.html; 
		}
 
		# If a cache file of that name exists, serve it directly
		if (-f $document_root$supercache_file) { rewrite ^ $supercache_file break; }
 
		# Otherwise send the request back to index.php for further processing
		if (!-e $request_filename) { rewrite ^ /index.php last; }
	}
 
	location /search { limit_req zone=kbeezieone burst=3 nodelay; rewrite ^ /index.php; }
 
	fastcgi_intercept_errors off;
 
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	include php.conf;
 
	# You may want to remove the robots line from drop to use a virtual robots.txt
	# or create a drop_wp.conf tailored to the needs of the wordpress configuration
	include drop.conf;
}

Nibbleblog 3.4.1+ with Pretty URLs enabled

If you’re using NibbleBlog ( http://nibbleblog.com ), the following basic configuration works with versions 3.4.1 and 3.4.1 Markdown.

server {
    listen 80;
    root /usr/local/www/example.com;
    server_name example.com www.example.com;
 
    # access log turned off for speed
    access_log off;
    error_log /var/log/nginx/domain.error.log;
 
    # main location block
    location / {
        expires 7d;
        try_files $uri $uri/ @rewrites;
    }
 
    # rewrite rules if file/folder did not exist
    location @rewrites {
        rewrite ^/dashboard$ /admin.php?controller=user&action=login last;
        rewrite ^/feed /feed.php last;
        rewrite ^/category/([^/]+)/page-([0-9]+)$ /index.php?controller=blog&action=view&category=$1&page=$2 last;
        rewrite ^/category/([^/]+)/$ /index.php?controller=blog&action=view&category=$1&page=0 last;
        rewrite ^/page-([0-9]+)$ /index.php?controller=blog&action=view&page=$1 last;
    }
 
    # location catch for /post-#/Post_Title
    # will also catch odd instances of /post-#/something.php
    location ~ ^/post-([0-9]+)/ {
        rewrite ^ /index.php?controller=post&action=view&id_post=$1 last;
    }
 
    # cache control for static files
    location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
        expires max;
        add_header Pragma public;
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
    }
 
	include drop.conf;
	include php.conf;
}

Drupal 6+
From http://wiki.nginx.org/Drupal with some changes

server {
	listen 80;
 
	server_name example.com www.example.com;
 
	root /usr/local/www/example.com;
 
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	# This matters if you use drush
	location = /backup {
		deny all;
	}
 
	# Very rarely should these ever be accessed outside of your lan
	location ~* \.(txt|log)$ {
		allow 192.168.0.0/16;
		deny all;
	}
 
	location ~ \..*/.*\.php$ { return 403; }
 
	location / {
		# This is cool because no php is touched for static content
		try_files $uri @rewrite;
	}
 
	location @rewrite {
		# Some modules enforce no slash (/) at the end of the URL
		# Else this rewrite block wouldn't be needed (GlobalRedirect)
		rewrite ^/(.*)$ /index.php?q=$1;
	}
 
	# Fighting with ImageCache? This little gem is amazing.
	location ~ ^/sites/.*/files/imagecache/ {
		try_files $uri @rewrite;
	}
 
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
		log_not_found off;
	}
 
	include php.conf;
 
	# You may want to remove the robots line from drop to use a virtual robots.txt
	# or create a drop_wp.conf tailored to the needs of the wordpress configuration
	include drop.conf;
}

Static with an Apache (for dynamic) Backend

server {
	listen 80;
 
	server_name example.com www.example.com;
 
	root /usr/local/www/example.com;
 
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	location / { 
		# Rewrite rules can go here
	}
 
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	location ~ \.php {
		proxy_set_header X-Real-IP  $remote_addr;
		proxy_set_header X-Forwarded-For $remote_addr;
 
		proxy_set_header Host $host;
 
		proxy_pass http://127.0.0.1:8080;
	}
 
	include drop.conf;
}

Search Page Getting Hammered?

March 27th, 2011

On my WordPress caching write-up, someone asked a very good question in the comments: what good is caching if your site gets brought down by excessive search queries?

Great article.

However I have following problem.

My WP setup is like that.

Currently I am on shared hosting with WP + W3 Total cache and during peak hours, my site is very slow. That is mainly because I have a huge traffic from Google.

My website caches plenty of keywords with AskApache Google 404 and Redirection.

What happens is that traffic from Google goes to /search/what-ever-keywords, dynamically created every time. And that is killing my system.
The problem is I have no idea how to help the poor server and cache that kind of traffic.

Would you have any advice for that?
Regards,
Peter

Fortunately the Nginx web server has a way to soften the impact, with a module called Limit Request. For WordPress this is how you would implement it:

http {
	limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
	#... the rest of your content
 
	server {
		#... your server content
 
		location /search { 
			limit_req zone=one burst=3 nodelay;
			try_files $uri /index.php; 
		}
	}
}

What we have here is almost identical to the example provided in the Nginx wiki (HttpLimitReqModule). Essentially we’ve created a zone that allows a user no more than 1 request per second, identifying users via $binary_remote_addr (which is smaller than $remote_addr while still accomplishing the same goal), within a space of 10MB.

Then we’ve placed a limit_req directive into the /search location. (Remember to include the rewrite so that index.php receives the search request in WordPress.) This directive uses the zone we created earlier, allowing a burst of up to 3 requests from a single user. If the user exceeds the limit too many times (the burst limit), they’re presented with a 503 error, which Google and others treat as a sort of ‘back off’ response.

By default the burst value is zero.

We use try_files here because a rewrite would occur before the limiting has a chance to act, since the rewrite phase takes precedence over most other processing.

Here is an example WordPress configuration utilizing the above, configured for W3TC with Memcache (or opcode caching). Please see the article The Importance of Caching WordPress for details on caching WordPress in other ways, and why you should be using caching with WordPress.

This config is somewhat based on the same configuration I currently use for kbeezie.com on a FreeBSD server.

 
user kb244 www;
worker_processes  2;
 
events {
	worker_connections	2048;
}
 
 
http {
	sendfile on;
	tcp_nopush on; # Generally on for linux, off for FreeBSD/Unix
	tcp_nodelay on;
	server_tokens off;
	include mime.types;
	default_type  application/octet-stream;
	index index.php index.htm index.html redirect.php;
 
	#Gzip
	gzip  on;
	gzip_vary on;
	gzip_proxied any;	
	gzip_comp_level 6;
	gzip_buffers 16 8k;
	gzip_http_version 1.1;
	gzip_disable "MSIE [1-6]\.(?!.*SV1)";	
	gzip_types text/plain text/css application/json application/x-javascript text/xml 
                        application/xml application/xml+rss text/javascript;
 
	#FastCGI
	fastcgi_intercept_errors on;
	fastcgi_ignore_client_abort on;
	fastcgi_buffers 8 16k;
	fastcgi_buffer_size 32k;
	fastcgi_read_timeout 120;
	fastcgi_index  index.php;
 
	limit_req_zone $binary_remote_addr zone=kbeezieone:10m rate=1r/s;
 
	server {
		listen 80;
 
		# Replace with your domain(s)
		server_name kbeezie.com www.kbeezie.com;
 
		# Setting your root at the server level helps a lot
		# especially if you don't like having to customize your
		# PHP configurations every single time you create a vhost
		root /usr/local/www/kbeezie.com;
 
		# Always a good idea to log your vhosts separately
		access_log /var/log/nginx/kbeezie.access.log;
		error_log /var/log/nginx/kbeezie.error.log;
 
		# I'm using W3TC with memcache, otherwise this block
		# would look a lot more complicated for handling file
		# based caching.
		location / { 
			try_files $uri $uri/ /index.php; 
		}
 
		# And here we have the search block limiting the requests
		location /search {
			limit_req zone=kbeezieone burst=3 nodelay;
			try_files $uri /index.php;
		}
 
		# We want Wordpress to handle the error page
		fastcgi_intercept_errors off;
 
		# Handle static file requests here, setting appropriate headers
		location ~* \.(ico|css|js|gif|jpe?g|png)$ {
			expires max;
			add_header Pragma public;
			add_header Cache-Control "public, must-revalidate, proxy-revalidate";
		}
 
		# I prefer to save this location block in its own php.conf
		# in the same folder as nginx.conf so I can just use:
		# include php.conf;
		# at the bottom of each server block I want PHP enabled on
 
		location ~ \.php$ {
			fastcgi_param  QUERY_STRING       $query_string;
			fastcgi_param  REQUEST_METHOD     $request_method;
			fastcgi_param  CONTENT_TYPE       $content_type;
			fastcgi_param  CONTENT_LENGTH     $content_length;
 
			fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
			fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;
			fastcgi_param  REQUEST_URI        $request_uri;
			fastcgi_param  DOCUMENT_URI       $document_uri;
			fastcgi_param  DOCUMENT_ROOT      $document_root;
			fastcgi_param  SERVER_PROTOCOL    $server_protocol;
 
			fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
			fastcgi_param  SERVER_SOFTWARE    nginx;
 
			fastcgi_param  REMOTE_ADDR        $remote_addr;
			fastcgi_param  REMOTE_PORT        $remote_port;
			fastcgi_param  SERVER_ADDR        $server_addr;
			fastcgi_param  SERVER_PORT        $server_port;
			fastcgi_param  SERVER_NAME        $server_name;
 
			fastcgi_pass unix:/tmp/php5-fpm.sock;
 
			# I use a Unix socket for PHP-Fpm, yours instead may be:
			# fastcgi_pass 127.0.0.1:9000;
		}
 
		# Browsers always look for a favicon, I rather not flood my logs
		location = /favicon.ico { access_log off; log_not_found off; }	
 
		# Make sure you hide your .hidden linux/unix files
		location ~ /\. { deny  all; access_log off; log_not_found off; }
	}
}

As always a little research and tweaking to your own requirements is advised.

Removing l10n.js

March 1st, 2011

Anyone who has upgraded to WordPress 3.1 has probably noticed a new JavaScript file being auto-loaded on their sites. It’s a very simple script that replaces HTML entities with the actual characters they represent.

Since I don’t need this functionality, and I’d rather not have one additional resource to load on my page, I wanted to remove it. It’s actually pretty easy to do.

Simply add the following to your theme’s functions.php file.

wp_deregister_script('l10n');
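
If you prefer to defer the removal until scripts are enqueued, you can hook it instead; a sketch, where kb_remove_l10n is just a hypothetical function name:

function kb_remove_l10n() {
	// deregister the l10n script once WordPress starts enqueuing scripts
	wp_deregister_script('l10n');
}
add_action('wp_enqueue_scripts', 'kb_remove_l10n');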

If you are using any form of caching, you may need to clear it in order for the change to take effect.

The Importance of Caching WordPress

February 16th, 2011

Update: The last page has been updated with some W3 Total Cache rules if you would like to use W3 Total Cache with file-based caching (disk enhanced). Performance-wise it should be about the same as the WP Super Cache results you see here, but maybe a bit better, since you’ll also get the benefits of database/object caching behind it.

IMPORTANT: This article is NOT a WP Super Cache vs. W3 Total Cache comparison or standoff. This article simply illustrates the benefits of using a caching plugin with your WordPress installation, and how it can save you not only hits but resources on your server. Besides, it has some pretty-looking graphs that don’t really mean much for you to look at…

WordPress, in all its glory… is a damn sloppy hog.

I use WordPress, many of my clients use WordPress, and a lot of you reading this probably use WordPress as well. But for all its simplicity of installation, and its vast library of plugins, themes, and other add-ons, it is quite simply a resource hog. A simple one-post blog can easily rack up PHP memory usage and still be quite slow in handling multiple visitors.

If you are hosting clients using WordPress, you cannot tell me that none of them have ever approached you about increasing their memory limit, or about their site being incredibly slow after being hit with a moderate amount of traffic.

But despite all that, I like WordPress. It’s easy to use, update and modify; even I want to be lazy at times.

The Setups

For the purpose of this article, I cloned my blog to three different subdomains on three separate VPSes with the same base configuration: a Debian Squeeze 6.0 installation with 512MB assigned, running Nginx 0.9.4, PHP-FPM 5.3.5, and MySQL 5.1.

All three had the following plugins installed:

  • Google XML Sitemaps
    Automatically generates sitemap xmls and reports new postings to search engines like Google.
  • Mollom
    Alternative to the Akismet plugin, which catches spam comments.
  • WP-Syntax
    Plugin that uses GeSHi to provide syntax highlighting. Something I use quite a bit on this site for code examples.
  • WP-Mail-SMTP
    Plugin to use an SMTP server instead of php’s built in mail() function.

As you can see, nothing too fancy, pretty basic.

No Caching

This setup is basically just the plugins above, with no additional caching turned on other than whatever comes with WordPress (which seems to be none…).

W3 Total Cache w/ memcached

This setup has W3 Total Cache installed, with all of the caching options (Page, Minify, Database, Object) set to use a local memcache daemon with 32MB of memory assigned (all of the objects seemed to need no more than 25MB). Some of the settings suggested by the Browser Cache module are already handled by nginx, such as MIME types and a location block setting an expiration header on static content.

Memcached was installed from the normal Debian repository, and the memcache extension for PHP installed via pecl.

This type of setup is ideal for those who really don’t want to hassle with the nginx configuration aside from the usual optimization you apply to static content, and the one-line-try-files solution to WordPress’ permalink feature. It also makes it very easy to clear the cache when you make a change.

WP Super Cache

WP Super Cache doesn’t cache things like the database, nor does it perform minifying operations. I simply set this one up to disk-cache the files to a static folder. Nginx was configured to look in these folders and serve any static cache it found. I’ve tested this both with gzip precompression enabled in Nginx (serving a .gz file if it exists) and with Nginx handling the gzip compression on the fly.

Essentially with this setup, most visitors never actually make it to the PHP backend (as the PHP pages themselves are cached to disk as HTML output).

This setup is ideal if you need to spare PHP from running as much as possible. Logging into the back end will of course still produce a little spike of usage. And with PHP rarely being accessed, you’ll likely need to set up a cron job to hit wp-cron.php for scheduled postings and other maintenance.
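
A crontab entry along these lines would do it (a sketch; substitute your own domain and preferred interval):

*/15 * * * * curl -s http://example.com/wp-cron.php > /dev/null 2>&1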

PS: W3 Total Cache can also use disk caching on top of the additional features it offers, but I figured it’d be nice to show how much memcache can help even when every visit is sent to the PHP backend.

The Method of Testing

Testing was done very simply with an application called Siege. Each scenario was tested separately, so that the environmental conditions were as close as possible (it would be kind of silly to run a test on all three at the same time when they were on the same physical server, despite being in separate VPS containers).

Each scenario was hit with a surge of connections, ranging from 5 concurrent browsers to 150 concurrent browsers. Each Siege test ran for 5 minutes, with a pause of 4 minutes in between. Starting from 5 concurrent browsers, it would run for 5 minutes hitting the destination as much as possible, then pause for 4 minutes before trying again with 10, then 20, then 30, then 40 concurrent browsers, and so forth.

An example of the command would be:

siege -c# -t5M -b http://destination

Where # is the number of concurrent browsers to use, and -b sets benchmark mode, allowing no delays between connection attempts. This is not very realistic in terms of actual internet traffic, which tends to have some delay, with connections occurring at random intervals rather than all-out slashdotting/digging (which is still technically random, just more intense). The idea here is to illustrate just how much load WordPress could theoretically take under a large amount of traffic.

The “attacking” server was another VPS, so some of these numbers would rarely be possible in the real world. Most users on shared or VPS hosting don’t have access to more than a 10mbit uplink, let alone the next-to-nothing latency you get between two VPSes on the same box, so essentially we’re breaking it down to where the web server, PHP, and the database server become the main bottleneck.

Also keep in mind the test does not attempt to download anything other than the front page HTML, meaning additional requests for images, CSS, and JS were not made. It is simply a download of the front page, which is far less than what a normal browser would fetch from a typical blog.

On the following pages are all the pretty graphs and numbers for each scenario.

Nginx as a Proxy to your Blog

July 23rd, 2010

Some time ago, back when Google Biz Ops were really popular, I had quite a number of clients use this method of changing the IP address of their auto-blogs. I once had a server hosted by ServerPronto*. The server was physically located in Panama (despite ServerPronto being advertised as a US hosting provider), but the IPs were cheap.

The Idea

The client had a good number of auto-blogs, landing pages, and so forth with the same hosting provider (in this case his own dedicated server with multiple non-class-C IPs). The problem was making one or more of those domains appear not to be on the same server, or in this case the same datacenter. The solution was to use various class-C and class-B IPs on my own server, and use Nginx to proxy any web request to his existing websites, thus making those sites appear to be on a server in Panama as opposed to their actual location. The same could work if you wanted to use US IPs on a cheap provider and proxy to an offshore provider with cheaper hardware.

The Solution

For this we’ll need a dedicated server or VPS to host Nginx on. (I’ll have an article later on how to install Nginx from scratch.) You’ll probably want a couple of extra IPs on hand, in case you want to proxy multiple sites of the same niche.

For every hostname you wish to proxy you’ll need a server block similar to this in your nginx configuration.

server { 
	#Turns access log off
	access_log off;
 
	#Set this to the IP on the nginx server you want to represent this proxy
	listen nginx-ip-here:80;
 
	#Matches the requested hostname
	server_name your-hostname.com www.your-hostname.com;
 
	location / {
		# Tells the destination server the IP of the visitor (some take X-Real-IP,
		# some take X-Forwarded-For as their header of choice)
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Forwarded-For $remote_addr;
		# Tells the destination server which hostname you want
		proxy_set_header Host $host;
		# Performs a Proxy Pass to that server
		proxy_pass http://ip-address-of-your-server;
	}
}

From here you would point your domain name to the Nginx server. If your domain currently uses your existing host’s nameservers, the easiest way to change this is to use your registrar’s DNS, such as Namecheap or GoDaddy (via total domain control), or a free DNS provider such as everydns.com. In any of those you would simply change the A record of your domain name to the IP address of the Nginx server.
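
In zone-file terms the change is just the A records; a sketch using a documentation IP (203.0.113.10) in place of the real Nginx address:

your-hostname.com.     IN A 203.0.113.10
www.your-hostname.com. IN A 203.0.113.10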

Once that is set up, any request on port 80 (the standard HTTP port) will be proxied to the server actually hosting the website or blog.

The Problems

A couple of downsides may exist with this method.

Control panel access via your domain name
When you point your domain to the Nginx IP, you direct not only web traffic there, but all other forms of traffic such as SSH, cPanel/WHM, FTP, and so forth. While you could in theory forward those additional ports via iptables and other methods, it would be difficult and not without problems. To get around this you’ll want to access SSH, cPanel/DirectAdmin, FTP, and so forth via your actual server’s IP address, or via a non-proxied domain. You can use any of your other domains on the same server, especially when using a control panel, since the login determines which account you access, not the domain itself.

Emails
Email access will of course cease unless you set up Nginx as a mail proxy for that specific domain name. The workaround is to add an A record for mail.yourdomain.com in the DNS settings, and then change your MX records to point to mail.yourdomain.com. Of course, the better alternative, if you must have email at that domain, is to use Google Apps; that way a simple whois or DNS lookup won’t reveal your actual server’s IP address. The best alternative is simply not to use email at proxied domains.
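
The DNS workaround sketched above would look something like this (203.0.113.20 standing in for the actual hosting server’s IP):

mail.yourdomain.com. IN A  203.0.113.20
yourdomain.com.      IN MX 10 mail.yourdomain.com.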

Speed
Normally a visitor takes the route provided from their location directly to your server over whatever backbone connects them. But with a proxied domain the visitor is first sent to the Nginx server, wherever it may be, and the response time is increased by however long it takes the Nginx server to communicate with your own web server and back. If the latency between the Nginx server and your destination is short, the speed difference may not even be noticeable to visitors.

Security Concerns
Some of you are no doubt looking at this configuration and wondering if it could be changed so you don’t have to create a server block for each hostname. While it is possible to make server_name and the proxy_pass destination dynamic based on the requested hostname, it is a very bad idea. Doing so would allow not only your own domains to be proxied, but any domain pointed at your Nginx server. This can result in abuse or even bandwidth overload.

For example, don’t do this:

server { 
	access_log off;
	listen nginx-ip-here:80;
 
	location / {
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Forwarded-For $remote_addr;
		proxy_set_header Host $host;
		proxy_pass http://$host;
	}
}

While it may be easy for you, it opens your server up to a floodgate of abuse as a kind of public proxy point. Instead, to tighten security on the nginx box, we should deny (or redirect) any traffic that either does not supply a hostname, or supplies one that has not been defined. You can do so like this:

    server {
	#visits not supplying hostname
	server_name _;
	location / { return 444; }
    }

Returning 444 is a special Nginx response that simply closes the connection without sending anything back to the browser.

* ServerPronto: While I did use them at one point for the proxy purpose, I would not recommend them as a hosting provider or dedi reseller.

Migrating Cpanel to DirectAdmin

March 28th, 2010

One of the most frustrating things someone can do with their websites is move them from one hosting provider to another. It’s even more difficult if your hosting was based on a control panel such as cPanel, and you try to migrate to a different kind of control panel, or none at all.

Step One – Package your account

If you have root access to the server via SSH (such as if you have a VPS) you can do this quite simply by performing the following command.

/scripts/pkgacct username

The above command will place a cpmove-username.tar.gz file into the /home directory, where you can download it via FTP/SCP.
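
From your own machine the download is a single scp command (a sketch; substitute your server’s IP or hostname):

scp root@your-server-ip:/home/cpmove-username.tar.gz .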

Those who only have user-level cPanel access will need to log into their cPanel account. There, under the Files section, you will see a Backup Wizard icon.

Cpanel Files Section

When you click on that, you’ll be prompted to 1) choose Backup, 2) choose Full Backup, and 3) provide an email address.

When the backup packaging is complete you will receive an email notification, and the generated file will be placed in your home directory (usually the default root of your FTP client when you log in). You will want to download this .tar.gz file to your desktop.

Step Two – Preparing to upload

Windows users will need to download something such as 7-Zip in order to unpack the tar.gz file. Create a folder to place the file in and unpack it. The resulting files will be all the data from your cPanel account (including your mail in Maildir format). Find homedir.tar, pull it out of the folder, and extract it.

Once extracted you’ll see the various folders that were in your home directory. In /tmp you may even find some of your statistics, such as Webalizer reports, which can be opened right from your desktop. The files you want to concentrate on are in /public_html.

If you use any add-on domains, move them out of the public_html folder. Once you’ve done that you can compress the public_html folder content as a .zip file. Do the same for each of the add-on domains. Don’t delete the uncompressed copies just yet.
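
If you’d rather do the compression from a shell instead of 7-Zip, something like this works (a sketch; the archive name is arbitrary):

cd public_html
zip -r ../public_html.zip .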