Archive for the ‘Nginx’ category

Configuring Nginx for Nibbleblog 4.0.3

July 25th, 2014

This is a simple configuration example to replicate the .htaccess rules for NibbleBlog 4.0.3.

For those unfamiliar with NibbleBlog:

Nibbleblog is a powerful engine for creating blogs; all you need is PHP for it to work. Very simple to install and configure (only 1 step).

Nginx Configuration Example
Tested on Nginx 1.7.3

server {
    listen 0.0.0.0:80;
    root /path/to/public_html;
    server_name domain.com www.domain.com;
 
    # Directory indexing is disabled by default
    # So no need to disable it, unless you enabled it yourself
 
    # access log turned off for speed
    access_log off; #enable if you need it
    error_log /var/log/nginx/nib.error.log; #or path to your error log
 
    error_page 404 = /index.php?controller=page&action=404;
 
    # main location block
    location / {
        expires 7d;
        try_files $uri $uri/ @rewrites;
    }
 
    # rewrite rules for when the file/folder does not exist
    # based on the rules in version 4.0.3's .htaccess
    location @rewrites {
        rewrite ^/category/([^/]+)/page-([0-9]+)$ /index.php?controller=blog&action=view&category=$1&number=$2 last;
        rewrite ^/category/([^/]+)/$ /index.php?controller=blog&action=view&category=$1&number=0 last;
        rewrite ^/tag/([^/]+)/page-([0-9]+)$ /index.php?controller=blog&action=view&tag=$1&number=$2 last;
        rewrite ^/tag/([^/]+)/$ /index.php?controller=blog&action=view&tag=$1&number=0 last;
        rewrite ^/page-([0-9]+)$ /index.php?controller=blog&action=view&number=$1 last;
        rewrite ^/post/([^/]+)/$ /index.php?controller=post&action=view&post=$1 last;
        rewrite ^/post-([0-9]+)/(.*)$ /index.php?controller=post&action=view&id_post=$1 last;
        rewrite ^/page/([^/]+)/$ /index.php?controller=page&action=view&page=$1 last;
        rewrite ^/feed/$ /feed.php last;
        rewrite ^/([^/]+)/$ /index.php?controller=page&action=$1 last;
 
        # if any of the rewritten URLs above contain a *.php file in the URL
        # you may have to create a separate location ~ ^/post-([0-9]+)/, etc. block,
        # otherwise the php location block will be matched before this @rewrites block is reached
 
    }
 
    # cache control for static files
    location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
        expires max;
        add_header Pragma public;
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
    }
 
    # files to block or handle differently
 
    # Don't log commonly requested files that may not always exist
    location = /robots.txt  { access_log off; log_not_found off; }
    location = /favicon.ico { access_log off; log_not_found off; }
 
    # block access to these files (shadow.php, keys.php, hidden/tmp files, xml files)
    location ~ /shadow\.php { access_log off; log_not_found off; deny all; } # block access to shadow.php
    location ~ /keys\.php   { access_log off; log_not_found off; deny all; } # block access to keys.php
    location ~ /\.          { access_log off; log_not_found off; deny all; } # block access to hidden files (starting with .)
    location ~ ~$           { access_log off; log_not_found off; deny all; } # block access to vim temp files (starting with ~)
    location ~ \.xml        { access_log off; log_not_found off; deny all; } # block access to xml files
 
    # typical PHP handling block
    # with a safeguard against passing a non-existent
    # php file to the PHP interpreter
 
    location ~ \.php$ {
        try_files $uri =404;
 
        fastcgi_param  QUERY_STRING       $query_string;
        fastcgi_param  REQUEST_METHOD     $request_method;
        fastcgi_param  CONTENT_TYPE       $content_type;
        fastcgi_param  CONTENT_LENGTH     $content_length;
 
        fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
        fastcgi_param  SCRIPT_FILENAME    $request_filename;
        fastcgi_param  REQUEST_URI        $request_uri;
        fastcgi_param  DOCUMENT_URI       $document_uri;
        fastcgi_param  DOCUMENT_ROOT      $document_root;
        fastcgi_param  SERVER_PROTOCOL    $server_protocol;
 
        fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
        fastcgi_param  SERVER_SOFTWARE    nginx;
 
        fastcgi_param  REMOTE_ADDR        $remote_addr;
        fastcgi_param  REMOTE_PORT        $remote_port;
        fastcgi_param  SERVER_ADDR        $server_addr;
        fastcgi_param  SERVER_PORT        $server_port;
        fastcgi_param  SERVER_NAME        $server_name;
 
        fastcgi_pass 127.0.0.1:9000;
 
        #fastcgi_pass unix:/tmp/php.sock; 
        #use the above line instead if running php-fpm on a socket
    }
 
}

This should work pretty well as long as the newest NibbleBlog does not use PHP file names in its friendly URLs (from my testing, it no longer does).
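
If you modify the rewrite patterns, you can sanity-check them offline before reloading Nginx, since `grep -E` understands the same extended-regex syntax used here. A quick sketch against a couple of made-up request URIs:

```shell
# Test the (corrected) category rewrite pattern against sample URIs
pattern='^/category/([^/]+)/page-([0-9]+)$'
echo '/category/linux/page-2' | grep -qE "$pattern" && echo 'match'
echo '/category/linux/'       | grep -qE "$pattern" || echo 'no match'
```

The first URI should print `match`, the second `no match`; any pattern edits that break this quickly show up.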

Also, friendly URLs are not enabled by default in NibbleBlog; you need to enable them under the SEO tab in Settings. (Ignore the instructions regarding .htaccess, since you're not using Apache.)

Allowing secure WordPress Updates with SSH2

April 2nd, 2013

The Set Up

For the purpose of this guide:

  • PHP-FPM runs as an unprivileged user such as www-data or www (FreeBSD).
  • The owner of the web files is non-root such as “WebUser” belonging to the www/www-data group.
  • PECL-ssh2 has been installed for PHP
  • You currently use paired key authentication (How-To)

The Guide

If you are like me, you may be running your WordPress-driven site on a VPS without a control panel, or even without a typical FTP server (i.e. SSH/SCP only). I'll show you how to set up WordPress to update itself via SSH, and only when you allow it.

For those of you using Apache's suEXEC, this guide will not be of much use, as suEXEC executes PHP and other processes as the same user that owns the files. In that case the permission settings at the very bottom of this page may be of more help to you, or you can use 'direct' mode instead of ssh2.

PECL-ssh2

First thing we need to do is make sure PHP has the SSH2 extension installed. This can be installed via PECL, or in the case of FreeBSD ports:

cd /usr/ports/security/pecl-ssh2
make install clean
service php-fpm restart

Once installed, SSH2 will be visible in phpinfo() output.
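
You can also confirm it from the command line (a quick check, assuming the `php` CLI is installed and on your PATH):

```shell
# Look for ssh2 in the list of loaded modules; prints a status line either way
php -m 2>/dev/null | grep -qi '^ssh2$' && echo 'ssh2 loaded' || echo 'ssh2 missing'
```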

Paired Key Authentication for PHP

Now we need to provide PHP with a public/private key pair, so let us create a folder to store these files in. Because PHP runs as www, and the files are owned by WebUser (or whichever username you've given to manage your web files), PHP will not have free rein over any existing key pairs that user may already have. Besides, it is not advisable to reuse the keys of your main web user.

For my purposes, websites are stored in a path under the Web user’s home folder.
Example: /home/WebUser/www/domain_name.com/public_html

We can create a folder in “www” (outside of any of the website’s public documents), named .ssh:

mkdir /home/WebUser/www/.ssh
cd /home/WebUser/www/.ssh
ssh-keygen -b 4096 -C php-key -t rsa
** When asked, use the path /home/WebUser/www/.ssh/id_rsa instead of the default
** Provide a passphrase; DO NOT leave it blank.

You do not need such a strong 4096-bit key for local communication, nor do you need to store it in a folder named ".ssh". The keys do not even need to be named id_rsa, so feel free to mix it up a bit; just remember your public key will have a .pub extension. You can even create separate keys for each website, as long as you do not place them inside the publicly accessible root.

The "php-key" portion of the command is the comment; if you are using multiple keys, you can edit this comment to help keep them organized.

Authorizing the new keys

As mentioned before, this guide assumes you are already using paired key authentication. As a result there should be an authorized_keys file in your user's .ssh folder (typically /home/UserName/.ssh/authorized_keys).

In order to give the new keys permission to use the web user's credentials, we need to append the content of id_rsa.pub to authorized_keys. You may do this either in an editor such as WinSCP, or via the command line, for example with the 'cat' command:

cat /home/WebUser/www/.ssh/id_rsa.pub >> /home/WebUser/.ssh/authorized_keys

Make sure there is a double arrow (>>) in the command above and not a single one (>), or you risk replacing the entire content of authorized_keys with that one public key.
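
To see why the distinction matters, here is a throwaway demonstration on a scratch file (the file name is just an example):

```shell
# '>' truncates the file, '>>' appends to it
printf 'existing key\n' >  /tmp/authorized_keys_demo
printf 'new key\n'      >> /tmp/authorized_keys_demo
cat /tmp/authorized_keys_demo
# both lines survive:
# existing key
# new key
```

Had the second command used a single `>`, only "new key" would remain, which for the real authorized_keys file would lock out every other key.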

The purpose of the passphrase, which is not required but STRONGLY encouraged, is to ensure PHP cannot use this key unless you explicitly provide the passphrase. This prevents a malicious script from acting as the main web user when you are not in the process of performing an update or installation (since your passphrase is only provided at those times).

You can also store the passphrase in wp-config.php as FTP_PASS, but doing so is strongly discouraged.

Setting Key Ownership and Permission

Because PHP in this configuration runs as www-data or www, it will not be able to access the newly created keys unless we change their ownership and permissions. With the commands below we set the ownership of the .ssh folder to www:www (or www-data:www-data) and change the permissions so that only the owner can read the private key, while owner and group can read the public key. Strictly speaking only the owner ever needs to read the public key, since it has already been added to authorized_keys, but normally you will not be logged in as www and may still need to read the file's content.

chown -R www:www /home/WebUser/www/.ssh
chmod 0400 /home/WebUser/www/.ssh/id_rsa
chmod 0440 /home/WebUser/www/.ssh/id_rsa.pub
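
You can verify the result with `stat`; a quick sketch on a scratch file (the `-c` format flag is GNU stat, as on Debian; on FreeBSD something like `stat -f '%Lp'` serves the same purpose):

```shell
# Create a dummy key file and confirm it ends up owner-read-only (mode 400)
touch /tmp/id_rsa_demo
chmod 0400 /tmp/id_rsa_demo
stat -c '%a' /tmp/id_rsa_demo
# prints: 400
```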

Modifying wp-config.php

Below is the code segment that needs to be added to the wp-config.php file. First the method is defined as ssh2, then the absolute folder paths, then the paths to the public and private key, and finally the username and host SSH will attempt to connect to.

define('FS_METHOD', 'ssh2');
define('FTP_BASE', '/home/WebUser/www/YourDomain.com/public_html/WordPressRoot/');
define('FTP_CONTENT_DIR', '/home/WebUser/www/YourDomain.com/public_html/WordPressRoot/wp-content/');
define('FTP_PLUGIN_DIR', '/home/WebUser/www/YourDomain.com/public_html/WordPressRoot/wp-content/plugins/');
define('FTP_PUBKEY', '/home/WebUser/www/.ssh/id_rsa.pub');
define('FTP_PRIKEY', '/home/WebUser/www/.ssh/id_rsa');
define('FTP_USER', 'WebUser');
define('FTP_HOST', 'YourDomain.com');
// If you are not using port 22, append :Port# to the domain, such as YourDomain:1212

With the above in place, you'll be able to:

  • Install a Theme or Plugin from WordPress itself
  • Update a Theme or Plugin automatically in WordPress
  • Delete a Theme or Plugin from within WordPress
  • Upgrade WordPress Itself from the automatic update utility

Each time you perform any of the above tasks, you'll be informed that the public and private keys are invalid. This error is shown only because WordPress cannot continue without the passphrase, so provide the passphrase in the password field each time. Make sure the "ssh2" radio button is selected when you do this.

If you are uploading a zip file to install a theme/plugin

While the settings above will work for the most part with the automatic fetch-and-install (such as searching for a plugin and then clicking install), they may not work when providing a zip file from your local storage.

If this is the case, we just need to adjust the permissions of the upload directory. Assuming your files are owned by WebUser:www and PHP runs as www:www, we need to set the permissions of the /wp-content/uploads folder to 0775 (read/write/execute for both the WebUser owner and the www group, while keeping read/execute for 'other').

chmod 0775 /home/WebUser/www/YourDomain.com/public_html/wp-content/uploads/

If you already have content in there, you may need to add the recursive flag -R before 0775.

chmod -R 0775 /home/WebUser/www/YourDomain.com/public_html/wp-content/uploads/

For installations, this is only required so that PHP can move the uploaded zip file into the uploads folder, where it is unpacked. From there the familiar SSH dialog will appear to continue the rest of the installation, after which the uploaded zip file is removed from the uploads folder.

Securing your Uploads folder on Nginx

Because PHP is now capable of writing to the uploads folder, there is a chance that someone may attempt to upload a script into it and then execute it from there. The uploads folder should not host any executable scripts, so to prevent this we need to add a rule to the Nginx configuration.

This location block should go before the PHP location block.

location ~* ^/wp-content/uploads/.*\.php$ {
    return 403;
}

Any attempts to call a PHP script from the uploads folder will now be met with a forbidden response code.
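
The location pattern itself can be sanity-checked offline with `grep -E` against a few sample URIs (the paths below are made up for illustration):

```shell
# The pattern should match PHP files under uploads, and nothing else
pattern='^/wp-content/uploads/.*\.php$'
echo '/wp-content/uploads/evil.php'      | grep -qE "$pattern" && echo 'blocked'
echo '/wp-content/themes/functions.php'  | grep -qE "$pattern" || echo 'not matched'
```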

For further information on securing PHP and Nginx please refer to Securing Nginx & PHP

Updated Nginx rules for New W3TC

April 2nd, 2013

If you have been using the W3TC rules for "Disk Enhanced" mode from this blog in the past, you may find that after upgrading W3TC they no longer work.

Previously the following would work:

server {
	listen 80;
 
	server_name example.com www.example.com;
 
	root /usr/local/www/example.com;
 
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	# the next two location blocks are to ensure gzip encoding is turned off
	# for the serving of already gzipped w3tc page cache
	# normally gzip static would handle this but W3TC names them with _gzip
 
	location ~ /wp-content/cache/page_enhanced.*html$ {
		expires max;
		charset utf-8;
		add_header Vary "Accept-Encoding, Cookie";
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	location ~ /wp-content/cache/page_enhanced.*gzip$ {
		expires max;
		gzip off;
		types {}
		charset utf-8;
		default_type text/html;
		add_header Vary "Accept-Encoding, Cookie";
		add_header Content-Encoding gzip;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	location / { 
		if (-f $request_filename) {
		        break;
		}
 
		set $pgcache "";
 
		if ($request_method = GET) { set $pgcache "${pgcache}D"; }
 
		if ($args = "") { set $pgcache "${pgcache}I"; }
 
		if ($http_cookie !~ (comment_author_|wordpress|wordpress_logged_in|wp-postpass_)) {
			set $pgcache "${pgcache}S";
		}
		if (-f $document_root/wp-content/w3tc/pgcache/$request_uri/_index.html) {
			set $pgcache "${pgcache}K";
		}
		if ($pgcache = "DISK") {
			rewrite ^ /wp-content/w3tc/pgcache/$request_uri/_index.html break;
		}
 
		if (!-e $request_filename) { rewrite ^ /index.php last; }
	}
 
	location /search { limit_req zone=kbeezieone burst=3 nodelay; rewrite ^ /index.php; }
 
	fastcgi_intercept_errors off;
 
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	include php.conf;
 
	# You may want to remove the robots line from drop to use a virtual robots.txt
	# or create a drop_wp.conf tailored to the needs of the wordpress configuration
	include drop.conf;
}

However, W3 Total Cache no longer saves page caches in the w3tc folder; instead they are placed in the cache/page_enhanced/domain/ folder, so the following must be used. (The following has been tested with W3TC 0.9.2.8.)

server {
	listen 80;
 
	server_name example.com www.example.com;
 
	root /usr/local/www/example.com;
 
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	location / { 
		if (-f $request_filename) {
		        break;
		}
 
		set $w3tc_rewrite 1;
		if ($request_method = POST) { set $w3tc_rewrite 0; }
		if ($query_string != "") { set $w3tc_rewrite 0; }
 
		set $w3tc_rewrite2 1;
		if ($request_uri !~ \/$) { set $w3tc_rewrite2 0; }
		if ($request_uri ~* "(sitemap(_index)?\.xml(\.gz)?|[a-z0-9_\-]+-sitemap([0-9]+)?\.xml(\.gz)?)") { set $w3tc_rewrite2 1; }
		if ($w3tc_rewrite2 != 1) { set $w3tc_rewrite 0; }
 
		if ($http_cookie ~* "(comment_author|wp\-postpass|wordpress_\[a\-f0\-9\]\+|wordpress_logged_in)") { set $w3tc_rewrite 0; }
		if ($http_user_agent ~* "(W3\ Total\ Cache/0\.9\.2\.4)") { set $w3tc_rewrite 0; }
 
		set $w3tc_ua "";
		set $w3tc_ref "";
		set $w3tc_ssl "";
		set $w3tc_enc "";
 
		if ($http_accept_encoding ~ gzip) { set $w3tc_enc _gzip; }
 
		set $w3tc_ext "";
		if (-f "$document_root/wp-content/cache/page_enhanced/$host/$request_uri/_index$w3tc_ua$w3tc_ref$w3tc_ssl.html$w3tc_enc") {
		    set $w3tc_ext .html;
		}
		if ($w3tc_ext = "") { set $w3tc_rewrite 0; }
 
		if ($w3tc_rewrite = 1) {
		    rewrite ^ "/wp-content/cache/page_enhanced/$host/$request_uri/_index$w3tc_ua$w3tc_ref$w3tc_ssl$w3tc_ext$w3tc_enc" last;
		}
 
		if (!-e $request_filename) { 
			rewrite ^ /index.php last; 
		}
	}
 
	location /search { limit_req zone=kbeezieone burst=3 nodelay; rewrite ^ /index.php; }
 
	fastcgi_intercept_errors off;
 
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	include php.conf;
 
	# You may want to remove the robots line from drop to use a virtual robots.txt
	# or create a drop_wp.conf tailored to the needs of the wordpress configuration
	include drop.conf;
}

If $host fails to work, replace with your actual domain name.

Securing Nginx and PHP

December 16th, 2011

Disclaimer
This write-up is intended for single-user systems where you are the only user expected to log in via shell/terminal/SFTP (or at least only people you actually trust). This collection of tips does not cover situations where you have multiple users' home folders or a shared-hosting setup utilizing nginx and php-fpm. Generally speaking, if you have to read this post for security tips, you probably should not be allowing access to any user but yourself in the first place.

If you do not care to read this whole write up, just remember one thing: `777` is not a magical quick-fix; it’s an open invitation to having your system compromised. (Script-Kiddies… can you say jackpot?)

User/Groups

Generally speaking, your server, which will most likely be a VPS running some flavor of Linux, will already have a web service user and group. This will often be www, www-data, http, apache (or even nginx if a package manager installed it for you). You can run the following command to get a list of users on your system.

cat /etc/passwd
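
/etc/passwd is colon-delimited (name:passwd:uid:gid:gecos:home:shell), so you can pull out just the fields of interest, for example:

```shell
# Print username, uid, and gid for each account
awk -F: '{ print $1, $3, $4 }' /etc/passwd
```

On a Debian system you would expect to see a `www-data` line with uid and gid 33 in this output.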

Both Nginx and PHP-FPM should run as a web service user; on Debian Squeeze this would be www-data:www-data, on FreeBSD www:www.

If your server was set up with root as the main user, you should create an unprivileged user for handling your web files. This will also make it easier to handle permissions when uploading your web files via SFTP. For example, the following command on a Debian system creates a user named kbeezie with www-data as the primary group.

useradd -g 33 -m kbeezie

Group ID 33 is the id for www-data on Debian Squeeze (you can verify with `id www-data`). You may have to su into the new user and change the password (or use usermod). This will also create a home folder in /home/kbeezie/ by default. You can log in to this user via SFTP and create a www folder if you wish. You'll notice that the files will be owned by kbeezie:www-data, which allows Nginx and PHP to read them, but also gives you group-level control over how the web services may treat those files.

This is ideal since you're not giving nginx or php-fpm too much control over the user's files, and they can be controlled with the group flag. You could also create the user with its own group, such as kbeezie:kbeezie, and just change the group of the web files to www-data where appropriate.

SSH Options

It is usually advisable to disable Root access via /etc/ssh/sshd_config with the following line:

PermitRootLogin no

However, make sure you can log in with your new unprivileged user via SSH, and also make sure that you can `su` to root. On a FreeBSD system only a user belonging to the wheel group can su to root, and only a user listed in the sudoers file can use the sudo command. On Debian and similar systems, however, the user could have www-data as its own group and still be able to su/sudo as long as the root password is valid. Your password should be at least 12 characters long and contain digits, symbols, and at least one capital letter. Do not use the same password for root and your web user.
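
A quick way to generate a candidate password of that shape (a sketch using /dev/urandom; any similar generator works, and you should re-run it if the result happens to lack a digit, symbol, or capital):

```shell
# Emit a random 16-character password drawn from a mixed character set
tr -dc 'A-Za-z0-9!@#%+=' < /dev/urandom | head -c 16; echo
```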

Once you've verified that you can obtain root status with the new user, you can safely disable root login via sshd_config and restart the SSH daemon.

You should also change the default SSH port, which is 22. While a port scanner could probably find the new SSH port, it is usually best practice not to use the default port for any type of secure access. As before, make sure you can log in on the new port (you can configure sshd_config to listen on both 22 and another port to test this out).

SSH – PubKey Authentication

If you are on OS X or another unix/linux operating system, like I am, setting up pub key authentication is fairly painless. Logged in as your web user on the server, run the following command:

ssh-keygen

The above will, by default, ask for a passphrase for your private key as well as a destination to save both the id_rsa and id_rsa.pub files (normally ~/.ssh/). You can then copy your user's public key into an authorized_keys file with the following command.

cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

Then on your own computer you can run the ssh-keygen command as well, copy your own computer’s public key from the id_rsa.pub file and add it as another line to your server’s authorized_keys file.

If you have `PubkeyAuthentication yes` in your sshd_config file, with the authorized keys path being that of .ssh in your home folder, the server should allow you to log in without prompting for a password. Just remember that if you chose not to use a passphrase for your private key, anyone who obtains your id_rsa* files can log into your server without being prompted for a password.

You can even turn off password authentication completely and rely solely on public key authentication by setting `PasswordAuthentication no` in your sshd_config file. Keep in mind, however, that unless you have another means of getting into your server, you might get locked out if you lose your public/private key or access to the machine you use to log in (also, not every SFTP or terminal application supports pub key authentication).
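
Pulled together, the relevant sshd_config lines might look like the sketch below. Keep an existing session open while testing, so a typo cannot lock you out.

```
# /etc/ssh/sshd_config (excerpt)
# Keep 22 open alongside the new port until the new one is confirmed working
Port 22
Port 2222
PermitRootLogin no
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
# Only disable password logins once key-based login is confirmed working
PasswordAuthentication no
```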

I actually have public key authentication set up with my iPad using iSSH for quick server access on the go (and you do not need a jailbroken iPad for this).

On the next page are some Nginx and PHP specific configurations for hardening your installation.

Nginx Flood Protection with Limit_req

April 9th, 2011

The Test

I’ll show you a very simple demonstration of Nginx’s Limit Request module and how it may be helpful to you in keeping your website up if you are hit by excessive connections or HTTP based denial-of-service attacks.

For this test I’m using a copy of my site’s about page saved as about.html, and the Blitz.io service (which is free at the moment) to test the limit_req directive.

First I test the page with the following command in Blitz, which will ramp the number of concurrent connections from 1 to 1,075 over a period of 1 minute. The timeout is set to 2 minutes and the region to California. I also set it to consider any response code other than 200 an error; otherwise even a 503 response would be counted as a hit.

-p 1-1075:60 --status 200 -T 2000 -r california http://kbeezie.com/about.html

Not bad, right? But what if that were a PHP document? That many users hitting it frequently might cause your server to show 502/504 errors as the PHP processes behind it keep crashing or timing out. This is especially true if you're using a VPS or an inexpensive dedicated server without any additional protection. (Though if you do, iptables or pf might be good resources to look into.)

You can of course use caching and other tools to help your website, such as a WordPress caching plugin, which you should be using anyway. But sometimes one lone person might decide to hit one of your PHP scripts directly. For those types of people we can use the limit request module.

Let's create a zone in our http { } block in Nginx; we'll call it blitz, set a rate of 5 requests per second, and allow the zone to hold up to 10MB of data. I've used $binary_remote_addr as the session variable since you can pack far more of those into 10MB of space than the human-readable $remote_addr.

limit_req_zone $binary_remote_addr zone=blitz:10m rate=5r/s;

Then inside my server block I set a limit_req for the file I was testing above:

location = /about.html {
	limit_req zone=blitz nodelay;
}

So I reload Nginx’s configuration and I give Blitz another try:

You'll notice that now only about 285 hits made it to the server; that's roughly 4.75 requests per second, just shy of the 5 r/s we set for the limit. The rest of the requests were met with a 503 error. If you check the access log for this, you'll see that the majority of the requests received the 503 response, with spurts of 200 responses mixed in.
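
A quick awk tally makes that mix of 200s and 503s visible in the access log. The sketch below synthesizes a tiny sample log in nginx's default "combined" format (where the status code is field 9); point the same awk at your real log instead.

```shell
# Synthesize a three-line sample access log, then tally status codes (field 9)
cat > /tmp/demo_access.log <<'EOF'
1.2.3.4 - - [09/Apr/2011:12:00:00 +0000] "GET /about.html HTTP/1.1" 200 512 "-" "blitz"
1.2.3.4 - - [09/Apr/2011:12:00:00 +0000] "GET /about.html HTTP/1.1" 503 213 "-" "blitz"
1.2.3.4 - - [09/Apr/2011:12:00:01 +0000] "GET /about.html HTTP/1.1" 503 213 "-" "blitz"
EOF
awk '{ counts[$9]++ } END { for (s in counts) print s, counts[s] }' /tmp/demo_access.log
```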

Using this can be quite helpful if you want to limit requests to certain weak locations on your website. It can also be applied to all PHP requests.

Applying Limit Request to PHP

If you would like to limit all PHP requests, you can insert the limit_req directive into your PHP block like so (this assumes a zone named 'flood' has been defined in the http { } block, the same way the 'blitz' zone was above):

location ~ \.php {
	limit_req   zone=flood;
	include php_params.conf;
	fastcgi_pass unix:/tmp/php5-fpm.sock;
}

It may help to play with some settings like increasing or decreasing the rate, as well as using the burst or nodelay options. Those configuration options are detailed here: HttpLimitReqModule.
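
For reference, the variations look like this (a sketch; the 'flood' zone name matches the PHP block above):

```
# default: excess requests beyond the rate are rejected with 503
limit_req zone=flood;

# burst: queue up to 10 excess requests, delaying them to keep the average rate
limit_req zone=flood burst=10;

# burst + nodelay: serve up to 10 excess requests immediately, reject beyond that
limit_req zone=flood burst=10 nodelay;
```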

Note about Blitz.io

You may have noticed from the graphs above that the tests were performed with up to 1,075 users. That is a bit misleading: all of the hits from the California region came from a single IP (50.18.0.223).

This is hardly realistic compared to an actual rush of traffic or a DDoS (Distributed Denial of Service) attack. It also explains why the hits are consistent with the limiting of a single user, as opposed to a higher number reflecting more actual users or IP sources. Load Impact actually uses separate IPs for its testing regions; however, with the free version you are limited to a maximum of 50 users and in the number of times you may perform the test. You would have to spend about $49/day to perform a test of up to 1,000 users against your site.

Testing from a single IP can easily be done from your own server or personal computer if you have enough RAM and bandwidth. Tools for this include siege, ab, openload and so forth. You just don't get all the pretty graphs or various locations to test from.

Also, if you are testing this yourself, remember to use the --status flag, as Blitz will otherwise consider 5xx responses successful hits.

Better Alternatives

I won't go into too much detail, but if you are serious about protecting your site from an actual DDoS or multi-service attack, it is best to look into other tools such as iptables (Linux) or pf (packet filter, BSD) on the software side, or a hardware firewall if your host provides one. The limit request module above will only work for floods against your site over the HTTP protocol; it will not protect you from ping floods or various other exploits. For those it helps to turn off any unnecessary services and to avoid services listening on a public port that you do not need.

For example, on my server the only public ports being listened on are HTTP/HTTPS and SSH. Services such as MySQL should be bound to a local connection. It may also help to use alternate ports for common services such as SSH, but keep in mind it doesn't take much for a port scanner to find them (that's where iptables/pf come in handy).

Nginx Configuration Examples

March 31st, 2011

Here are a number of Nginx configurations for common scenarios that should help make things easier to get started with. This page will grow as needed.

A quick and easy starter example

The first one is for the fly-by, cut-and-paste types among you. If you're using a typical /sites-enabled/default configuration, replace the content of that file with this. Make sure to change the root path, and your PHP port/path if it differs.

server {
	# .domain.com will match both domain.com and anything.domain.com
	server_name .example.com;
 
	# It is best to place the root of the server block at the server level, and not the location level
	# any location block path will be relative to this root. 
	root /usr/local/www/example.com;
 
	# It's always good to set logs, note however you cannot turn off the error log
	# setting error_log off; will simply create a file called 'off'.
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	# This can also go in the http { } level
	index index.html index.htm index.php;
 
	location / { 
		# if you're just using wordpress and don't want extra rewrites
		# then replace the word @rewrites with /index.php
		try_files $uri $uri/ @rewrites;
	}
 
	location @rewrites {
		# Can put some of your own rewrite rules in here
		# for example rewrite ^/~(.*)/(.*)/? /users/$1/$2 last;
		# If nothing matches we'll just send it to /index.php
		rewrite ^ /index.php last;
	}
 
	# This block will catch static file requests, such as images, css, js
	# The ?: prefix is a 'non-capturing' mark, meaning we do not require
	# the pattern to be captured into $1 which should help improve performance
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		# Some basic cache-control for static files to be sent to the browser
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	# remove the robots line if you want to use wordpress' virtual robots.txt
	location = /robots.txt  { access_log off; log_not_found off; }
	location = /favicon.ico { access_log off; log_not_found off; }	
 
	# this prevents hidden files (beginning with a period) from being served
	location ~ /\.          { access_log off; log_not_found off; deny all; }
 
	location ~ \.php {
		fastcgi_param  QUERY_STRING       $query_string;
		fastcgi_param  REQUEST_METHOD     $request_method;
		fastcgi_param  CONTENT_TYPE       $content_type;
		fastcgi_param  CONTENT_LENGTH     $content_length;
 
		fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
		fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;
		fastcgi_param  REQUEST_URI        $request_uri;
		fastcgi_param  DOCUMENT_URI       $document_uri;
		fastcgi_param  DOCUMENT_ROOT      $document_root;
		fastcgi_param  SERVER_PROTOCOL    $server_protocol;
 
		fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
		fastcgi_param  SERVER_SOFTWARE    nginx;
 
		fastcgi_param  REMOTE_ADDR        $remote_addr;
		fastcgi_param  REMOTE_PORT        $remote_port;
		fastcgi_param  SERVER_ADDR        $server_addr;
		fastcgi_param  SERVER_PORT        $server_port;
		fastcgi_param  SERVER_NAME        $server_name;
 
		fastcgi_pass 127.0.0.1:9000;
	}
}

As for the rest of you, read on for some more goodies and other configuration examples.

Making the PHP inclusion easier

For PHP I've created a php.conf in the same folder as nginx.conf. This file DOES NOT go into the same folder as your virtual host configurations: for example, if nginx.conf is in /etc/nginx/, then php.conf goes into /etc/nginx/, not /etc/nginx/sites-enabled/.

If you are not using PHP-FPM you really should be, as opposed to the old spawn-fcgi or php-cgi methods. For Debian Lenny/Squeeze, dotdeb makes php5-fpm available; FreeBSD already includes it in the PHP 5.2 and 5.3 ports.

location ~ \.php {
        # for security reasons the next line is highly encouraged
        try_files $uri =404;
 
        fastcgi_param  QUERY_STRING       $query_string;
        fastcgi_param  REQUEST_METHOD     $request_method;
        fastcgi_param  CONTENT_TYPE       $content_type;
        fastcgi_param  CONTENT_LENGTH     $content_length;
 
        fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
 
        # if the next line in yours still contains $document_root,
        # consider switching to $request_filename, which provides
        # better support for directives such as alias
        fastcgi_param  SCRIPT_FILENAME    $request_filename;
 
        fastcgi_param  REQUEST_URI        $request_uri;
        fastcgi_param  DOCUMENT_URI       $document_uri;
        fastcgi_param  DOCUMENT_ROOT      $document_root;
        fastcgi_param  SERVER_PROTOCOL    $server_protocol;
 
        fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
        fastcgi_param  SERVER_SOFTWARE    nginx;
 
        fastcgi_param  REMOTE_ADDR        $remote_addr;
        fastcgi_param  REMOTE_PORT        $remote_port;
        fastcgi_param  SERVER_ADDR        $server_addr;
        fastcgi_param  SERVER_PORT        $server_port;
        fastcgi_param  SERVER_NAME        $server_name;
 
        # If using a unix socket...
        # fastcgi_pass unix:/tmp/php5-fpm.sock;
 
        # If using a TCP connection...
        fastcgi_pass 127.0.0.1:9000;
}

I prefer to use a unix socket: it cuts out the TCP overhead and increases security, since file-based permissions are stronger. In php-fpm.conf you can change the listen line to /tmp/php5-fpm.sock, and you should uncomment the listen.owner, listen.group and listen.mode lines.
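For reference, a minimal sketch of the pool settings described above; the www-data owner/group here is an assumption and should match the user your nginx workers run as:

```ini
; php-fpm.conf (or your pool configuration)
listen = /tmp/php5-fpm.sock
; uncomment and set these so the nginx worker can read/write the socket
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
```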

REMEMBER: Set cgi.fix_pathinfo to 0 in php.ini when using Nginx with PHP-FPM/FastCGI; otherwise something as simple as /forum/avatars/user2.jpg/index.php could be used to execute an uploaded PHP script disguised as an image.
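The fix itself is a one-line php.ini change; combined with the try_files $uri =404; line shown earlier, it gives you two layers of defense:

```ini
; php.ini
cgi.fix_pathinfo = 0
```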

PATH_INFO
If you must use PATH_INFO and PATH_TRANSLATED, add the following within the location block above (make sure no $ follows \.php, or /index.php/some/path/ will not match):

	fastcgi_split_path_info ^(.+\.php)(/.+)$;
	fastcgi_param  PATH_INFO          $fastcgi_path_info;
	fastcgi_param  PATH_TRANSLATED    $document_root$fastcgi_path_info;

It is advised however to update your script to use REQUEST_URI instead.

Optional Config – Drop

You can also have a file called drop.conf in the same folder with nginx.conf with the following:

	location = /robots.txt  { access_log off; log_not_found off; }
	location = /favicon.ico { access_log off; log_not_found off; }	
	location ~ /\.          { access_log off; log_not_found off; deny all; }
	location ~ ~$           { access_log off; log_not_found off; deny all; }

Using include drop.conf; at the end of your server block will include these. The first two simply turn off access logging and prevent logging an error if robots.txt and favicon.ico are not found, two files that a lot of browsers ask for. WordPress typically uses a virtual robots.txt, so you may omit the first line if needed so that the request is passed back to WordPress.

The third line prevents nginx from serving hidden unix/linux files, that is, any request path beginning with a period.

And the fourth line is mainly for people who use vim, or any other command-line editor that creates a backup copy of the file being worked on with a file name ending in ~. Hiding these prevents someone from accessing a backup copy of a file you have been working on.
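Following the note above, a WordPress-friendly variant (a hypothetical drop_wp.conf) would simply omit the robots.txt line so the request falls through to WordPress:

```nginx
# drop_wp.conf - identical to drop.conf minus the robots.txt line
location = /favicon.ico { access_log off; log_not_found off; }
location ~ /\.          { access_log off; log_not_found off; deny all; }
location ~ ~$           { access_log off; log_not_found off; deny all; }
```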

Sample Nginx Configurations

Simple Configuration with PHP enabled

Here we have a very simple starting configuration with PHP support. I will provide most of the comments here for the basic lines. The other sample configurations won’t be as heavily commented.

server {
	# This will listen on all interfaces, you can instead choose a specific IP
	# such as listen x.x.x.x:80;  Setting listen 80 default_server; will make
	# this server block the default one if no other blocks match the request
	listen 80;
 
	# Here you can set a server name; you can use wildcards such as *.example.com,
	# however remember that server_name *.example.com; will only match subdomains.
	# To match both subdomains and the main domain use both example.com and *.example.com
	server_name example.com www.example.com;
 
	# It is best to set the root at the server level rather than in each location block;
	# any location block path will be relative to this root.
	root /usr/local/www/example.com;
 
	# It's always good to set logs, note however you cannot turn off the error log
	# setting error_log off; will simply create a file called 'off'.
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	location / { 
		# Rewrite rules and other criteria can go here
		# Remember to avoid using if() where possible (http://wiki.nginx.org/IfIsEvil)
	}
 
	# This block will catch static file requests, such as images, css, js
	# The ?: prefix is a 'non-capturing' mark, meaning we do not require
	# the pattern to be captured into $1 which should help improve performance
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		# Some basic cache-control for static files to be sent to the browser
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	# We can include our basic configs here; as you can see it's much easier
	# than pasting the entire set of location blocks each time we add a vhost
 
	include drop.conf;
	include php.conf;
}

WordPress Simple (Not using file-based caching or special rewrites)
This can also be used with W3 Total Cache if you’re using Memcache or Opcode Caching.

server {
	listen 80;
 
	server_name example.com www.example.com;
 
	root /usr/local/www/example.com;
 
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	location / { 
		try_files $uri $uri/ /index.php; 
	}
 
	# Note: the "kbeezieone" zone must be defined with limit_req_zone in the http block
	location /search { limit_req zone=kbeezieone burst=3 nodelay; rewrite ^ /index.php; }
 
	fastcgi_intercept_errors off;
 
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	include php.conf;
 
	# You may want to remove the robots line from drop to use a virtual robots.txt
	# or create a drop_wp.conf tailored to the needs of the wordpress configuration
	include drop.conf;
}

WordPress w/ W3 Total Cache using Disk (Enhanced)
The following rules have been updated to work with W3TC 0.9.2.8, in some cases $host may need to be replaced with your domain name.

server {
	listen 80;
 
	server_name example.com www.example.com;
 
	root /usr/local/www/example.com;
 
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	# the next two location blocks are to ensure gzip encoding is turned off
	# for the serving of already gzipped w3tc page cache
	# normally gzip static would handle this but W3TC names them with _gzip
 
	location ~ /wp-content/cache/page_enhanced.*html$ {
		expires max;
		charset utf-8;
		add_header Vary "Accept-Encoding, Cookie";
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	location ~ /wp-content/cache/page_enhanced.*gzip$ {
		expires max;
		gzip off;
		types {}
		charset utf-8;
		default_type text/html;
		add_header Vary "Accept-Encoding, Cookie";
		add_header Content-Encoding gzip;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	location / { 
		if (-f $request_filename) {
		        break;
		}
 
		set $w3tc_rewrite 1;
		if ($request_method = POST) { set $w3tc_rewrite 0; }
		if ($query_string != "") { set $w3tc_rewrite 0; }
 
		set $w3tc_rewrite2 1;
		if ($request_uri !~ \/$) { set $w3tc_rewrite2 0; }
		if ($request_uri ~* "(sitemap(_index)?\.xml(\.gz)?|[a-z0-9_\-]+-sitemap([0-9]+)?\.xml(\.gz)?)") { set $w3tc_rewrite2 1; }
		if ($w3tc_rewrite2 != 1) { set $w3tc_rewrite 0; }
 
		if ($http_cookie ~* "(comment_author|wp\-postpass|wordpress_\[a\-f0\-9\]\+|wordpress_logged_in)") { set $w3tc_rewrite 0; }
		if ($http_user_agent ~* "(W3\ Total\ Cache/0\.9\.2\.4)") { set $w3tc_rewrite 0; }
 
		set $w3tc_ua "";
		set $w3tc_ref "";
		set $w3tc_ssl "";
		set $w3tc_enc "";
 
		if ($http_accept_encoding ~ gzip) { set $w3tc_enc _gzip; }
 
		set $w3tc_ext "";
		if (-f "$document_root/wp-content/cache/page_enhanced/$host/$request_uri/_index$w3tc_ua$w3tc_ref$w3tc_ssl.html$w3tc_enc") {
		    set $w3tc_ext .html;
		}
		if ($w3tc_ext = "") { set $w3tc_rewrite 0; }
 
		if ($w3tc_rewrite = 1) {
		    rewrite ^ "/wp-content/cache/page_enhanced/$host/$request_uri/_index$w3tc_ua$w3tc_ref$w3tc_ssl$w3tc_ext$w3tc_enc" last;
		}
 
		if (!-e $request_filename) { 
			rewrite ^ /index.php last; 
		}
	}
 
	# Note: the "kbeezieone" zone must be defined with limit_req_zone in the http block
	location /search { limit_req zone=kbeezieone burst=3 nodelay; rewrite ^ /index.php; }
 
	fastcgi_intercept_errors off;
 
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	include php.conf;
 
	# You may want to remove the robots line from drop to use a virtual robots.txt
	# or create a drop_wp.conf tailored to the needs of the wordpress configuration
	include drop.conf;
}

WordPress w/ WP Supercache (Full-On mode)

server {
	listen 80;
 
	server_name example.com www.example.com;
 
	root /usr/local/www/example.com;
 
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	location / { 
		# This line when enabled will use Nginx's gzip static module
		gzip_static on;
 
		# Disables serving gzip content to IE 6 or below
		gzip_disable        "MSIE [1-6]\.";
 
		# Sets the default type to text/html so that gzipped content is served
		# as html, instead of raw uninterpreted data.
		default_type text/html;
 
		# does the requested file exist exactly as it is? if yes, serve it and stop here
		if (-f $request_filename) { break; }
 
		# sets some variables to help test for the existence of a cached copy of the request
		set $supercache_file '';
		set $supercache_uri $request_uri;
 
		# IF the request is a post, has a query attached, or a cookie
		# then don't serve the cache (ie: users logged in, or posting comments)
		if ($request_method = POST) { set $supercache_uri ''; }
		if ($query_string) { set $supercache_uri ''; }
		if ($http_cookie ~* "comment_author_|wordpress|wp-postpass_" ) { 
			set $supercache_uri ''; 
		}
 
		# if the supercache_uri variable hasn't been blanked by this point, attempt
		# to set the name of the destination to the possible cache file
		if ($supercache_uri ~ ^(.+)$) { 
			set $supercache_file /wp-content/cache/supercache/$http_host/$1index.html; 
		}
 
		# If a cache file of that name exists, serve it directly
		if (-f $document_root$supercache_file) { rewrite ^ $supercache_file break; }
 
		# Otherwise send the request back to index.php for further processing
		if (!-e $request_filename) { rewrite ^ /index.php last; }
	}
 
	# Note: the "kbeezieone" zone must be defined with limit_req_zone in the http block
	location /search { limit_req zone=kbeezieone burst=3 nodelay; rewrite ^ /index.php; }
 
	fastcgi_intercept_errors off;
 
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	include php.conf;
 
	# You may want to remove the robots line from drop to use a virtual robots.txt
	# or create a drop_wp.conf tailored to the needs of the wordpress configuration
	include drop.conf;
}

Nibbleblog 3.4.1+ with Pretty URLs enabled

If you’re using NibbleBlog ( http://nibbleblog.com ), the following basic configuration works with version 3.4.1, and 3.4.1 Markdown.

server {
    listen 80;
    root /usr/local/www/example.com;
    server_name example.com www.example.com;
 
    # access log turned off for speed
    access_log off;
    error_log /var/log/nginx/domain.error.log;
 
    # main location block
    location / {
        expires 7d;
        try_files $uri $uri/ @rewrites;
    }
 
    # rewrite rules if file/folder did not exist
    location @rewrites {
        rewrite ^/dashboard$ /admin.php?controller=user&action=login last;
        rewrite ^/feed /feed.php last;
        rewrite ^/category/([^/]+)/page-([0-9]+)$ /index.php?controller=blog&action=view&category=$1&page=$2 last;
        rewrite ^/category/([^/]+)/$ /index.php?controller=blog&action=view&category=$1&page=0 last;
        rewrite ^/page-([0-9]+)$ /index.php?controller=blog&action=view&page=$1 last;
    }
 
    # location catch for /post-#/Post_Title
    # will also catch odd instances of /post-#/something.php
    location ~ ^/post-([0-9]+)/ {
        rewrite ^ /index.php?controller=post&action=view&id_post=$1 last;
    }
 
    # cache control for static files
    location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
        expires max;
        add_header Pragma public;
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
    }
 
	include drop.conf;
	include php.conf;
}

Drupal 6+
From http://wiki.nginx.org/Drupal with some changes

server {
	listen 80;
 
	server_name example.com www.example.com;
 
	root /usr/local/www/example.com;
 
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	# This matters if you use drush
	location = /backup {
		deny all;
	}
 
	# Very rarely should these ever be accessed outside of your LAN
	location ~* \.(txt|log)$ {
		allow 192.168.0.0/16;
		deny all;
	}
 
	location ~ \..*/.*\.php$ { return 403; }
 
	location / {
		# This is cool because no php is touched for static content
		try_files $uri @rewrite;
	}
 
	location @rewrite {
		# Some modules enforce no slash (/) at the end of the URL
		# Else this rewrite block wouldn't be needed (GlobalRedirect)
		rewrite ^/(.*)$ /index.php?q=$1;
	}
 
	# Fighting with ImageCache? This little gem is amazing.
	location ~ ^/sites/.*/files/imagecache/ {
		try_files $uri @rewrite;
	}
 
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
		log_not_found off;
	}
 
	include php.conf;
 
	# You may want to remove the robots line from drop to serve a dynamic robots.txt,
	# or create a drop conf tailored to the needs of this drupal configuration
	include drop.conf;
}

Static with an Apache (for dynamic) Backend

server {
	listen 80;
 
	server_name example.com www.example.com;
 
	root /usr/local/www/example.com;
 
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	location / { 
		# Rewrite rules can go here
	}
 
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	location ~ \.php {
		proxy_set_header X-Real-IP  $remote_addr;
		proxy_set_header X-Forwarded-For $remote_addr;
 
		proxy_set_header Host $host;
 
		proxy_pass http://127.0.0.1:8080;
	}
 
	include drop.conf;
}

Search Page Getting Hammered?

March 27th, 2011

On my WordPress Caching write-up someone asked a very good question in the comments: what good is the caching if your site gets brought down by excessive search queries?

Great article.

However I have following problem.

My WP setup is like that.

Currently I am on shared hosting with WP + W3 Total cache and during peak hours, my site is very slow. That is mainly because I have a huge traffic from Google.

My website caches plenty of keywords with AskApache Google 404 and Redirection.

What happens is that traffic from Google goes to /search/what-ever-keywords, dynamically created every time. And that is killing my system.
The problem is I have no idea how to help poor server and cache that kind of traffic.

Would you have any advice for that ?
Regards,
Peter

Fortunately the Nginx webserver has a way to soften the impact, with a module called Limit Request (limit_req). For WordPress this is how you would implement it:

http {
	limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
	#... the rest of your content
 
	server {
		#... your server content
 
		location /search { 
			limit_req zone=one burst=3 nodelay;
			try_files $uri /index.php; 
		}
	}
}

What we have here is almost identical to the example provided in the Nginx wiki (HttpLimitReqModule). Essentially we’ve created a zone that allows a user no more than 1 request per second, keyed by $binary_remote_addr (which is smaller than $remote_addr while still accomplishing the same goal), within a space of 10MB.

Then we’ve placed a limit_req directive into the /search location. (Remember to include the rewrite so that index.php receives the search request in WordPress.) This directive uses the zone we created earlier, allowing a burst of up to 3 requests from a single user. If the user exceeds the limit too many times (the burst limit) they’re presented with a 503 error, which Google and others treat as a sort of ‘back off’ response.

By default the burst value is zero.

We use try_files here because a rewrite would fire before the limiting has a chance to act, since the rewrite phase takes precedence over most other processing.
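To illustrate the difference, here is a sketch of the two approaches, using the zone=one zone defined above. With a bare rewrite the request is rewritten and re-routed during the rewrite phase, so the limit in this location may never be consulted:

```nginx
# Prone to being bypassed: the rewrite fires before limit_req can act
# location /search {
#     limit_req zone=one burst=3 nodelay;
#     rewrite ^ /index.php;
# }

# Preferred: try_files resolves after the request limit has been applied
location /search {
    limit_req zone=one burst=3 nodelay;
    try_files $uri /index.php;
}
```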

Here is an example WordPress configuration utilizing the above, configured for W3TC with Memcache (or opcode caching). Please see the article The Importance of Caching WordPress for details on caching WordPress in other ways, and why you should be using caching with WordPress.

This config is somewhat based on the same configuration I currently use for kbeezie.com on a FreeBSD server.

 
user kb244 www;
worker_processes  2;
 
events {
	worker_connections	2048;
}
 
 
http {
	sendfile on;
	tcp_nopush on; # Generally on for linux, off for FreeBSD/Unix
	tcp_nodelay on;
	server_tokens off;
	include mime.types;
	default_type  application/octet-stream;
	index index.php index.htm index.html redirect.php;
 
	#Gzip
	gzip  on;
	gzip_vary on;
	gzip_proxied any;	
	gzip_comp_level 6;
	gzip_buffers 16 8k;
	gzip_http_version 1.1;
	gzip_disable "MSIE [1-6]\.(?!.*SV1)";	
	gzip_types text/plain text/css application/json application/x-javascript text/xml 
                        application/xml application/xml+rss text/javascript;
 
	#FastCGI
	fastcgi_intercept_errors on;
	fastcgi_ignore_client_abort on;
	fastcgi_buffers 8 16k;
	fastcgi_buffer_size 32k;
	fastcgi_read_timeout 120;
	fastcgi_index  index.php;
 
	limit_req_zone $binary_remote_addr zone=kbeezieone:10m rate=1r/s;
 
	server {
		listen 80;
 
		# Replace with your domain(s)
		server_name kbeezie.com www.kbeezie.com;
 
		# Setting your root at the server level helps a lot
		# especially if you don't like having to customize your
		# PHP configurations every single time you create a vhost
		root /usr/local/www/kbeezie.com;
 
		# Always a good idea to log your vhosts separately
		access_log /var/log/nginx/kbeezie.access.log;
		error_log /var/log/nginx/kbeezie.error.log;
 
		# I'm using W3TC with memcache, otherwise this block
		# would look a lot more complicated for handling file
		# based caching.
		location / { 
			try_files $uri $uri/ /index.php; 
		}
 
		# And here we have the search block limiting the requests
		location /search {
			limit_req zone=kbeezieone burst=3 nodelay;
			try_files $uri /index.php;
		}
 
		# We want WordPress to handle the error page
		fastcgi_intercept_errors off;
 
		# Handle static file requests here, setting appropriate headers
		location ~* \.(ico|css|js|gif|jpe?g|png)$ {
			expires max;
			add_header Pragma public;
			add_header Cache-Control "public, must-revalidate, proxy-revalidate";
		}
 
		# I prefer to save this location block in its own php.conf
		# in the same folder as nginx.conf so I can just use:
		# include php.conf;
		# at the bottom of each server block I want PHP enabled on
 
		location ~ \.php$ {
			fastcgi_param  QUERY_STRING       $query_string;
			fastcgi_param  REQUEST_METHOD     $request_method;
			fastcgi_param  CONTENT_TYPE       $content_type;
			fastcgi_param  CONTENT_LENGTH     $content_length;
 
			fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
			fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;
			fastcgi_param  REQUEST_URI        $request_uri;
			fastcgi_param  DOCUMENT_URI       $document_uri;
			fastcgi_param  DOCUMENT_ROOT      $document_root;
			fastcgi_param  SERVER_PROTOCOL    $server_protocol;
 
			fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
			fastcgi_param  SERVER_SOFTWARE    nginx;
 
			fastcgi_param  REMOTE_ADDR        $remote_addr;
			fastcgi_param  REMOTE_PORT        $remote_port;
			fastcgi_param  SERVER_ADDR        $server_addr;
			fastcgi_param  SERVER_PORT        $server_port;
			fastcgi_param  SERVER_NAME        $server_name;
 
			fastcgi_pass unix:/tmp/php5-fpm.sock;
 
			# I use a Unix socket for PHP-Fpm, yours instead may be:
			# fastcgi_pass 127.0.0.1:9000;
		}
 
		# Browsers always look for a favicon; I'd rather not flood my logs
		location = /favicon.ico { access_log off; log_not_found off; }	
 
		# Make sure you hide your .hidden linux/unix files
		location ~ /\. { deny  all; access_log off; log_not_found off; }
	}
}

As always a little research and tweaking to your own requirements is advised.

Kwolla: shows promise but lacks professionalism.

March 16th, 2011

Over the years I’ve seen several social network scripts and applications pop up: Dolphin, SocialEngine, BuddyPress and so forth. I’ve worked with SocialEngine on two major occasions (2.8 and 4.1.2), and still consider it a very temperamental piece of software that suffers far worse under the hood than an uncached copy of WordPress. I’ve already gone through the painstaking process of making a script like SocialEngine work rather decently with a fast Nginx server configuration. So naturally, any time I see a social networking script pop up that wasn’t purpose-built for a specific website, I consider it bloatware.

I originally learned of Kwolla earlier this month from a friend who noticed it on reddit. According to the developer’s blog it was made free on March 4th, 2011 as part of a promotional effort. Kwolla originally came out in January 2011 at a price of 149 USD, was dropped to 49 USD, and was eventually made free less than a month later.

Product Website

I first visited Kwolla.com roughly around the 5th, shortly after the free promotional announcement. The website was clean, somewhat easy to understand but lacked specific information I was seeking, namely server requirements.

The main site has plenty of features listed: themes, modules, video capability, translations and so forth. However there is no wiki or forum, and the knowledge base is empty. There is no mention on the website of which server, PHP, or MySQL versions are required, though there is a “free” compatibility download to test your hosting.

The customer support area at the time was a contact form. It is now something of a discussion panel, with no current discussions and an empty knowledge base hosted on TenderApp.

Because there’s no technical specification listed on the main website, you either have to click through to a host that Kwolla works with (half of which are affiliate links… can you blame him?) or download the Kwolla compatibility check.

Checking Compatibility

When you download the kwolla-compatibility package, which is available as a “Free Download”, and unzip it, you’ll notice approximately 21 files, only one of which really matters (the rest are graphics such as suggested host logos).

Now if you don’t have the MySQLi extension for PHP installed… you’ll likely never know, as the compatibility check crashes to a blank page mid-check. While it checks for the existence of various functions, it blindly uses MySQLi functionality, which of course fails if the extension isn’t installed.

I noticed that it said my Apache version was the ‘right version’. Problem is, I don’t use the Apache webserver at all; it checks by means of the apache_get_version() function, and if the function doesn’t exist the compatibility check considers it valid. Likewise every other Apache module check passed as well, since it uses the apache_get_modules() function to test for the availability of each module. As before, if the functions don’t exist, it considers the tests passed. Most potential customers, however, will be using Apache and will likely see a more realistic compatibility report.

Installation

When I originally tried Kwolla it was version 1.1.0, whose installation guide (another “free download”) was probably written with the idea that another developer would be installing it. There were command-line instructions for setting up a MySQL database and importing the provided kwolla.sql file. This was later changed to screenshots of cPanel with phpMyAdmin for Kwolla 1.1.1. (I should note that, because I use MySQL 5.5, I had to change every instance of TYPE= to ENGINE= in the SQL file to get it to import correctly.)

Originally you had to upload the 1.1.0 framework and point the virtual host at the httpdocs folder, while the rest of the application logic sat above the web-accessible location. In 1.1.1 this was changed so that the uploaded folder is the root of the application, with a sole index.php in the root. This of course is friendlier to your typical end user. I noticed the user guide for 1.1.1 was changed to state that other webservers are untested after my initial query with the developer regarding Nginx, and that some requirements were updated in the install guide, though they are still missing from the main website.

The installation guide also instructs you to assign all files a permission of 755, including the shell scripts provided for cron jobs. Doing so is incredibly insecure, especially when you consider that social networking allows uploads of files that could potentially be processed by the server.

Needless to say installation wasn’t too bad for me, since I’m used to working with the command-line interface, handling SQL dumps, and configuring virtual hosts from a text file. My only real obstacle was converting the .htaccess to something that would work with Nginx; this is the converted configuration I came up with:

server {
       access_log off;
       listen 80;
       error_log /var/log/nginx/kwolla.error.log;
 
       server_name community.domain.com;
       root /usr/local/www/community.domain.com/;
 
       location ~ ^.*.svn/* {
               rewrite ^ / last;
       }
 
       location / {
               try_files $uri $uri/ @rewrites;
       }
 
       location @rewrites {
               rewrite ^/profile/(.*)/(status|friends|comments) /index.php?u=profile/$2/$1&$query_string last;
               rewrite ^/profile/(.*) /index.php?u=profile/view/$1&$query_string last;
               rewrite ^ /index.php?u=$request_uri? last;
       }
 
       include drop.conf;
       include php.conf;
}

It is far from perfect, but it has been working thus far, except in a few instances where links were hard-coded as relative paths or direct to index.php, such as placing comments on other members’ profiles.

Installed

Initially I had just a blank screen, since I had left out the mysqli extension on this particular jailed installation, and support didn’t help much beyond stating that there was a secondary log file for debugging, while also emphasizing that only Apache is supported at this time (though compatibility with nginx, lighttpd and others is promised soon). Once I identified the error as a missing mysqli extension I was able to move on with the installation.

Once the script was installed (correctly), I was presented with an empty community. The installation guide suggests you go ahead and create your profile first. So my thinking was that either the site is controlled by the primary participant, or by whatever member shares the same email address as the owner in the configuration file.

Needless to say, there really is no control other than over your own profile. In version 1.1.1 every member that signs up appears to have the same abilities as yourself. While functions like comments, private messages, and uploading photographs or videos appeared to work fine, you essentially had no ‘command central’, so to speak.

So I shot off an email with a few questions and suggestions, namely:

  • MySQL 5.5 note
  • Nginx configuration (0.8.x-0.9.x)
  • The Curious Missing Admin panel

On March 6th I received the following response:

Hi Karl,
 
The install script will work with MySQL 5.1, it's what I'm running.
You're right, TYPE has been replaced with ENGINE, but 99% of all users
will be using MySQL 5.0 or 5.1 because it's what comes with a shared
hosting account or whats by default in most major Linux distributions.
 
Thank you for the nginx information and configuration.
 
>> Though I'm a tad confused... where's the administrator panel, 
>> or area to set member's permissions on the site?
 
There isn't one. It will be available in version 2.0.

I wasn’t exactly pleased about the lack of an administrative control panel, since it’s one feature I would consider especially important for any kind of ‘social networking’, particularly if you have spam accounts or disruptive members. What is an end user to do but figure out how to remove or change such a profile via the MySQL database and files? For something that once sold for $149, the lack of administrative control seemed like a deal breaker to me.

Kwolla 2.0 Pre-Sale

Earlier yesterday I received a promotional email announcing that the next version, 1.2.0, was going to become 2.0, and that it would be available at a $359.95 savings. Essentially the savings come from:

  • $110 discount off the normal $249 Kwolla 2.0 Pricing
  • Free 2 hour premium support, which is valued at $200
  • Free installation of Kwolla 1.1.x valued at $49 (yes same one I mentioned above)

The fact that Kwolla 2.0 will have an admin panel and web-based installer is particularly nice; however, the admin panel is something I feel should have been included since 1.0.

Now, being the facetious type of person I am, I shot back a remark regarding the lack of an administrative panel, particularly at the $249 pricing, which happens to be the same price as SocialEngine.

When you can get back to me about some of the most basic features like an
admin panel, and so forth, then I'll consider Kwolla even worth close to
that of socialengine.net pricing.
 
-Karl

Mind you, while SocialEngine is more complicated, and I do not really care for it, most of the features like photo albums, music, and videos are ‘extras’ on top of SE’s $249 pricing. So in that sense Kwolla did have a competitive advantage. However, as much as I loathe SocialEngine (clients use them, I do not), I would not have expected the following reply from them in the same circumstances.

Karl,
 
Kwolla 2.0 will have an admin panel. I prefer you never use my
software or contact me about it again. Please use Social Engine, Elgg,
Dolphin, or any of the other pieces of social network software out
there.
 
-Vic

The software itself shows plenty of promise, and the free version 1.1.1 can be a great starting point for any developer wishing to build a custom social networking site, assuming they downloaded it during the promotional period. However, such a response reflects heavily on what you might expect from customer support down the road if you are dissatisfied with the product. Granted, I received the product for free and was better equipped to troubleshoot issues myself, but such responses are all too often signs of things to come.

Can you imagine what a typical end user would do if faced with a blank screen due to older software? There goes the 2-hour premium support over such a simple issue. I also hope, for his customers’ sake, that when 2.0 is available the migration from 1.1.1 will be quite painless, and that he doesn’t receive plenty of emails asking where the admin panel is prior to 2.0’s release.

It’s a shame too, because I know plenty of clients who would like to escape from scripts like SocialEngine and others.

Nginx and Codeigniter The Easy Way

March 5th, 2011

I personally don’t use Codeigniter (was never much of a fan of PHP frameworks), however I had a client approach me with this issue so I decided to take a stab at it.

Now normally you would look for any existing .htaccess that comes with a script package and attempt to convert the rewrite rules. But seriously, why go through all that fuss when we can simply use try_files:

server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;
    root /path/to/your/website.com/;
 
    location / {
        try_files $uri $uri/ /index.php;
    }
 
    # For more information on the next two lines see http://kbeezie.com/view/nginx-configuration-examples/
    include php.conf;
    include drop.conf;
}

In a nutshell this should normally work… but why does it not?

Quite simple really: CodeIgniter by default uses PATH_INFO, which is a rather antiquated method of parsing the URI. As a result we must tell CI to use REQUEST_URI instead. To do this, open up config.php under /system/application/config/, find $config['uri_protocol'], and set it to this:

$config['uri_protocol'] = "REQUEST_URI";

You could also choose to use AUTO, but since we know we’ll be dealing with request_uri, it is best to set it as such (though if you do run into problems, give AUTO a try).

If you have not already set the index page, you will want to blank it out in order for it to work with a rewrite method (request_uri, etc).

$config['index_page'] = "";

For known static files, take it a step further and capture common static requests so that you can cache accordingly:

	location ~* \.(ico|css|js|gif|jpe?g|png)$ {
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}

The Importance of Caching WordPress

February 16th, 2011

Update: The last page has been updated with some W3 Total Cache Rules if you would like to use W3 Total Cache with File-based caching (disk enhanced). Performance wise, it should be about the same as WP-Super-Cache that you see here, but maybe a bit better since you’ll also get the benefits of database/object caching behind it

IMPORTANT: This article is NOT a WP-Super-Cache vs W3-Total-Cache comparison or standoff. This article simply illustrates the benefits of using a caching plugin with your WordPress installation and how it can save you not only hits, but resources on your server. Besides, it has some pretty-looking graphs that don’t really mean much for you to look at…

WordPress, in all its glory… is a damn sloppy hog.

I use WordPress, many of my clients use WordPress, and a lot of you reading this probably use WordPress as well. But for all its simplicity of installation, and its vast library of plugins, themes, and other addons, it is quite simply a resource hog. Even a simple one-post blog can easily rack up PHP memory usage and still be quite slow in handling multiple visitors.

If you are hosting clients using WordPress, you cannot tell me that not one of them has approached you about increasing their memory limit, or about their site being incredibly slow after being hit with a moderate amount of traffic.

But despite all that, I like WordPress. It’s easy to use, update and modify; even I want to be lazy at times.

The Setups

For the purpose of this article, I cloned my blog to three different subdomains onto three separate VPSes with the same base configuration: a Debian Squeeze 6.0 based installation, with 512MB assigned, running Nginx 0.9.4, PHP-FPM 5.3.5, and MySQL 5.1.

All three had the following plugins installed:

  • Google XML Sitemaps
    Automatically generates sitemap xmls and reports new postings to search engines like Google.
  • Mollom
    Alternative to the Akismet plugin, which catches spam comments.
  • WP-Syntax
    Plugin that uses GeSHi to provide syntax highlighting. Something I use quite a bit on this site for code examples.
  • WP-Mail-SMTP
    Plugin to use an SMTP server instead of php’s built in mail() function.

As you can see, nothing too fancy, pretty basic.

No Caching

This setup has only the above plugins, with no additional caching option turned on other than whatever comes with WordPress (which seems to be none…).

W3 Total Cache w/ memcached

This setup has the W3 Total Cache installed, with all of the caching options (Page, Minify, Database, Object) set to use a local memcache daemon with 32MB of memory set (all of the objects seemed to need no more than 25MB). Some of the stuff suggested by Browser Cache is already handled by nginx such as mime types and a location block to set an expiration header to static content.

Memcached was installed from the normal Debian repository, and the memcache extension for PHP installed via pecl.

This type of setup is ideal for those who really don’t want to hassle with the nginx configuration aside from the usual optimization you apply to static content, and the one-line-try-files solution to WordPress’ permalink feature. It also makes it very easy to clear the cache when you make a change.

WP Super Cache

WP Super Cache doesn’t cache the database or perform minifying operations. I simply set this one up to cache pages as files in a static folder. Nginx was configured to look in these folders and serve any static cache that was found. I’ve tested this both with gzip precompression enabled in nginx (serving a .gz file if it exists) and with nginx handling the gzip compression on the fly.

Essentially with this setup, most visitors rarely actually make it to the PHP backend (as the php pages themselves are cached as html output onto the disk).
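The lookup described above can be sketched roughly like this. This is only a sketch: it assumes WP Super Cache’s default on-disk layout under wp-content/cache/supercache, and the cookie names are the usual WordPress ones, so adjust to your own setup.

```nginx
# Assume pages are cached at wp-content/cache/supercache/<host>/<uri>/index.html
set $cache_uri $request_uri;

# Skip the static cache for logged-in users and recent commenters
if ($http_cookie ~* "comment_author_|wordpress_logged_in|wp-postpass_") {
    set $cache_uri 'null cache';
}

location / {
    # Serve the cached HTML if it exists; otherwise fall through to WordPress
    try_files /wp-content/cache/supercache/$http_host/$cache_uri/index.html
              $uri $uri/ /index.php;
}
```

With a hit on the cached file, the request never touches PHP at all, which is the whole point of this setup.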

This setup is ideal if you need to spare PHP from running as much as possible. Logging into the backend will of course still cause that little spike of usage. And with PHP rarely being accessed, you’ll likely need to set up a cron job for scheduled postings and other maintenance handled by wp-cron.php.
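For the wp-cron.php point above, a hypothetical crontab entry might look like the following (the domain is a placeholder, and the 15-minute interval is just a reasonable choice):

```
*/15 * * * * wget -q -O /dev/null "http://yourdomain.com/wp-cron.php?doing_wp_cron"
```

This hits wp-cron.php directly on a schedule, so scheduled posts still publish even when every visitor is being served straight from the static cache.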

PS: W3 Total Cache can also use disk caching on top of the additional features it offers, but I figured it’d be nice to show how much memcache can help even with every visit being sent to the PHP backend.

The Method of Testing

Testing was done very simply with an application called Siege. Each scenario was tested separately, so that the environment conditions they were in were as close as possible (it would be kind of silly to try to run a test on all three at the same time if they were on the same physical server despite being in separate VPS containers).

Each scenario was hit with a surge of connections, ranging from 5 concurrent browsers to 150 concurrent browsers. Each Siege test was run for 5 minutes, with a pause of 4 minutes in between. Meaning, starting from 5 concurrent browsers, it would run for 5 minutes hitting the destination as much as possible, then pause for 4 minutes before trying again with 10, then 20, then 30, then 40 concurrent browsers, and so forth.

An example of the command would be:

siege -c# -t5M -b http://destination

Where # is the number of concurrent browsers to use, and -b sets it to benchmark mode, allowing no delays between connection attempts. This is not very realistic in terms of actual internet traffic, which tends to have some delay, with connections occurring at random intervals rather than an all-out slashdotting/Digg effect (which is still technically random, just more intense). The idea here is to illustrate just how much load WordPress could theoretically take under a large amount of traffic.
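The stepped runs described above can be wrapped in a small shell function. This is a sketch only: the SIEGE and PAUSE variables are knobs I’ve added so it can be dry-run without siege installed, the concurrency steps beyond 40 are my own assumption, and the target URL is a placeholder.

```shell
#!/bin/sh
# Step through the concurrency levels used in the test, pausing between runs.
# Override SIEGE=echo for a dry run, and PAUSE=0 to skip the waits.
run_siege_steps() {
    target="${1:-http://destination}"
    for c in 5 10 20 30 40 50 75 100 150; do
        # -t5M: run for 5 minutes; -b: benchmark mode (no delay between hits)
        ${SIEGE:-siege} -c"$c" -t5M -b "$target"
        sleep "${PAUSE:-240}"   # 4-minute pause between runs
    done
}
```

For example, `SIEGE=echo PAUSE=0 run_siege_steps http://example.com` prints each command that would be run instead of actually hammering the target.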

The “attacking” server was another VPS, so some of these numbers would rarely be possible in the real world. Most users on shared or VPS hosting don’t have access to anything higher than a 10mbit uplink, let alone the next-to-nothing latency you get between two VPSes on the same box, so essentially we’re breaking it down to where the webserver, PHP, and database server become the main bottleneck.

Also keep in mind that the test does not attempt to download anything other than the front page HTML, meaning additional requests for images, CSS, and JS were not made. It is simply the download of the front page, which is far less than what a normal browser would fetch from a typical blog.

On the following pages are all the pretty graphs and numbers between each scenario.