Posts Tagged ‘Nginx’

Deploying circuits.web with Nginx/uwsgi

May 7th, 2019

Edit (May 2019): Modified some of the code so that it would be compatible with Python 3.x

I’m a very minimal person when it comes to frameworks; I don’t generally like something that needs to generate an entire application file structure like you’d see with Django. When I was searching around for various frameworks to get me started with Python and web development, I investigated the usual: Django, CherryPy, web.py. I fell in love with circuits due to its ease and simplicity, yet it can be quite powerful. This article will show you how to get Nginx set up with uWSGI, along with a sample circuits.web application.

Nginx, as of version 0.8.40, comes with the uwsgi module built in unless otherwise excluded (i.e. --without-http_uwsgi_module); the older legacy versions from 0.7.63 onward supported the module, but it had to be compiled in. The syntax I use in this article conforms to 0.8.40 and newer and might not work with an older version of Nginx. (Currently I’m using this on Nginx 1.10.3, with Python 2.7.13 & 3.5.3.)

This article assumes you already have Nginx 0.8.40+ and Python installed.

Installing the uWSGI server

As stated on their wiki, “uWSGI is a fast (pure C), self-healing, developer/sysadmin-friendly application container server.” It utilizes the uwsgi protocol (notice the all-lowercase spelling) and serves WSGI applications.

Once you’ve downloaded the source from the wiki, you can install it (there are a number of methods). For my purposes I’ve moved the compiled uwsgi binary to my /usr/local/bin folder so that I can call it from anywhere on my system.

The Python module for uwsgi should be installed as well:

# pip install uwsgi

And for Python 3

# pip3 install uwsgi

Other than on FreeBSD, I have not seen uWSGI readily available in most distributions’ package systems.

You can test your installation by moving out of the source folder, and calling the binary:

# uwsgi --version
uWSGI 2.0.18

Simple Hello world app without daemonizing uWSGI

Now we’re going to set up a very simple WSGI hello world application, host it behind uWSGI, and use Nginx to serve it. We’re not going to daemonize uWSGI, so you’ll see its output in your terminal as connections are made. Also, in Nginx we’ll simply send everything to the backend application for this demonstration.

Configure Nginx

server {
	server_name myapp.example.com;
 
	location / { 
		uwsgi_pass 127.0.0.1:3031;
		#You can also use sockets such as : uwsgi_pass unix:/tmp/uwsgi.sock;
		include uwsgi_params;
	}
}

If Nginx is currently running, either restart it or reload the configuration with “nginx -s reload”.

Create a WSGI application
In your application folder create a python file, for example myapp.py:

def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return ["Hello World!"]

For Python 3 a major change is that uWSGI requires responses to be bytes instances rather than strings. The quickest way is to write the response as a bytes literal, which is handy for constructed responses; otherwise you can encode the string as UTF-8.

def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello World!"]
    # or ["Hello World!".encode('utf-8')]

Deploy with uWSGI
Now we’ll want to deploy a simple single-worker uWSGI instance to serve up your application from your application folder:

# uwsgi -s 127.0.0.1:3031 -w myapp
*** Starting uWSGI 0.9.6.7 (64bit) on [Mon Jan 31 00:10:36 2011] ***
compiled with version: 4.4.5
Python version: 2.6.6 (r266:84292, Dec 26 2010, 22:48:11)  [GCC 4.4.5]
uWSGI running as root, you can use --uid/--gid/--chroot options
 *** WARNING: you are running uWSGI as root !!! (use the --uid flag) *** 
 *** WARNING: you are running uWSGI without its master process manager ***
your memory page size is 4096 bytes
allocated 640 bytes (0 KB) for 1 request's buffer.
binding on TCP port: 3031
your server socket listen backlog is limited to 100 connections
initializing hooks...done.
...getting the applications list from the 'myapp' module...
uwsgi.applications dictionary is not defined, trying with the "applications" one...
applications dictionary is not defined, trying with the "application" callable.
application 0 () ready
setting default application to 0
spawned uWSGI worker 1 (and the only) (pid: 20317)

Newer versions of uWSGI and Python 3 may require some additional flags:

# uwsgi -s 127.0.0.1:3031 --plugin python3 -w /home/youruser/www/app.domain.com/myapp.py
    ...

If you attempt to connect to the site and all goes well, you’ll see Hello World on the screen as well as some log output in your terminal. To shut down the uWSGI server use Ctrl+C in that terminal.
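For anything beyond a quick test you’ll likely want uWSGI detached from the terminal, with a master process supervising its workers. A minimal sketch (the worker count and log path are just examples):

# uwsgi -s 127.0.0.1:3031 -w myapp --master --processes 2 --daemonize /var/log/uwsgi-myapp.log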

Now for some fun with Circuits.web

Circuits is a lightweight, event-driven framework for the Python programming language, with a strong component architecture. It’s also my favorite framework for deploying very simple web applications (though it can be used for far more complicated needs, for example SahrisWiki).

For this article I’ll show you a few ways circuits.web can really simplify handling requests and URI parsing.

First thing we’ll need to do is get the circuits module installed into Python.

If you do not already have Python setuptools, install it:

apt-get install python-setuptools

And then simply install via easy_install:

easy_install circuits

You can get the egg (Python 2.6 only) or the source of circuits 1.3.1 from PyPI (pypi.python.org).

For Python 3 you can use pip3 (or easy_install)

pip3 install circuits
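To give a quick taste of how little code a circuits.web application needs, here’s a minimal hello-world sketch following the standalone-server pattern from the circuits documentation (the Root class name and port are just examples; circuits can also expose the same Controller through its WSGI support for running under uWSGI):

#!/usr/bin/env python
from circuits.web import Server, Controller

class Root(Controller):

    def index(self):
        # requests to / are routed to the Controller's index() method
        return "Hello World!"

app = Server(("0.0.0.0", 8000))
Root().register(app)
app.run()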


Configuring Nginx for Nibbleblog 4.0.3

July 25th, 2014

This is a simple configuration example to replicate the .htaccess rules for NibbleBlog 4.0.3.

For those unfamiliar with NibbleBlog:

Nibbleblog is a powerful engine for creating blogs, all you need is PHP to work. Very simple to install and configure (only 1 step).

Nginx Configuration Example
Tested on Nginx 1.7.3

server {
    listen 0.0.0.0:80;
    root /path/to/public_html;
    server_name domain.com www.domain.com;
 
    # Directory indexing is disabled by default
    # So no need to disable it, unless you enabled it yourself
 
    # access log turned off for speed
    access_log off; #enable if you need it
    error_log /var/log/nginx/nib.error.log; #or path to your error log
 
    error_page 404 = /index.php?controller=page&action=404;
 
    # main location block
    location / {
        expires 7d;
        try_files $uri $uri/ @rewrites;
    }
 
    # rewrite rules if file/folder did not exist
    # based off rules for version 4.0.3's .htaccess
    location @rewrites {
        rewrite ^/category/([^/]+)/page-([0-9]+)$ /index.php?controller=blog&action=view&category=$1&number=$2 last;
        rewrite ^/category/([^/]+)/$ /index.php?controller=blog&action=view&category=$1&number=0 last;
        rewrite ^/tag/([^/]+)/page-([0-9]+)$ /index.php?controller=blog&action=view&tag=$1&number=$2 last;
        rewrite ^/tag/([^/]+)/$ /index.php?controller=blog&action=view&tag=$1&number=0 last;
        rewrite ^/page-([0-9]+)$ /index.php?controller=blog&action=view&number=$1 last;
        rewrite ^/post/([^/]+)/$ /index.php?controller=post&action=view&post=$1 last;
        rewrite ^/post-([0-9]+)/(.*)$ /index.php?controller=post&action=view&id_post=$1 last;
        rewrite ^/page/([^/]+)/$ /index.php?controller=page&action=view&page=$1 last;
        rewrite ^/feed/$ /feed.php last;
        rewrite ^/([^/]+)/$ /index.php?controller=page&action=$1 last;
 
        # if any of the redirects/urls above contain a *.php file in the url
        # then you may have to create a separate location ~ ^/post-([0-9]+)/, etc block
        # otherwise the php block will be invoked before this @rewrites block does
 
    }
 
    # cache control for static files
    location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
        expires max;
        add_header Pragma public;
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
    }
 
    # files to block or handle differently
 
    # Don't log commonly requested files that may not always exist
    location = /robots.txt  { access_log off; log_not_found off; }
    location = /favicon.ico { access_log off; log_not_found off; }
 
    # block access to these files (shadow.php, keys.php, hidden/tmp files, xml files)
    location ~ /shadow\.php { access_log off; log_not_found off; deny all; } # block access to shadow.php
    location ~ /keys\.php   { access_log off; log_not_found off; deny all; } # block access to keys.php
    location ~ /\.          { access_log off; log_not_found off; deny all; } # block access to hidden files (starting with .)
    location ~ ~$           { access_log off; log_not_found off; deny all; } # block access to vim temp files (starting with ~)
    location ~ \.xml        { access_log off; log_not_found off; deny all; } # block access to xml files
 
    # typical PHP handling block
    # with a safeguard against passing a non-existent
    # php file to the PHP interpreter
 
    location ~ \.php$ {
        try_files $uri =404;
 
        fastcgi_param  QUERY_STRING       $query_string;
        fastcgi_param  REQUEST_METHOD     $request_method;
        fastcgi_param  CONTENT_TYPE       $content_type;
        fastcgi_param  CONTENT_LENGTH     $content_length;
 
        fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
        fastcgi_param  SCRIPT_FILENAME    $request_filename;
        fastcgi_param  REQUEST_URI        $request_uri;
        fastcgi_param  DOCUMENT_URI       $document_uri;
        fastcgi_param  DOCUMENT_ROOT      $document_root;
        fastcgi_param  SERVER_PROTOCOL    $server_protocol;
 
        fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
        fastcgi_param  SERVER_SOFTWARE    nginx;
 
        fastcgi_param  REMOTE_ADDR        $remote_addr;
        fastcgi_param  REMOTE_PORT        $remote_port;
        fastcgi_param  SERVER_ADDR        $server_addr;
        fastcgi_param  SERVER_PORT        $server_port;
        fastcgi_param  SERVER_NAME        $server_name;
 
        fastcgi_pass 127.0.0.2:9000;
 
        #fastcgi_pass unix:/tmp/php.sock; 
        #use the above line instead if running php-fpm on a socket
    }
 
}

This should work pretty well as long as the newest NibbleBlog does not utilize PHP file names in the friendly URLs (from my testing, it no longer does that).
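If a future version does put *.php names back into the friendly URLs, one option (a sketch of the approach hinted at in the comment inside the config above) is a dedicated regex location placed before the PHP block, since regex locations are matched in the order they appear:

    # hypothetical: catch /post-123/whatever.php before "location ~ \.php$" can
    location ~ ^/post-([0-9]+)/ {
        rewrite ^ /index.php?controller=post&action=view&id_post=$1 last;
    }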

Also, friendly URLs are not enabled by default in NibbleBlog; you need to enable them under the SEO tab in Settings. (Ignore the instructions regarding .htaccess since you’re not using Apache.)

Securing Nginx and PHP

December 16th, 2011

Disclaimer
This write-up is intended for single-user systems where you are the only user expected to log in via shell/terminal/SFTP (or at least people you actually trust). This collection of tips does not cover situations where you have multiple users’ home folders or a shared-hosting setup utilizing Nginx and php-fpm. Generally speaking, if you have to read this post for security tips you probably should not be allowing access to any other user but yourself in the first place.

If you do not care to read this whole write up, just remember one thing: `777` is not a magical quick-fix; it’s an open invitation to having your system compromised. (Script-Kiddies… can you say jackpot?)

User/Groups

Generally speaking your server, which will most likely be a VPS running some flavor of Linux, will already have a web service user and group. This will sometimes be www, www-data, http, apache (or even nginx if a package manager installed it for you). You can run the following command to get a list of users on your system:

cat /etc/passwd

Both Nginx and PHP-FPM should run as the web service user; on Debian Squeeze this would be www-data:www-data, or on FreeBSD www:www.

If your server was set up with root as the main user, you should create an unprivileged user for handling your web files. This will also make it easier to handle permissions when uploading your web files via SFTP. For example, the following command on a Debian system would create a user named kbeezie with www-data as the primary group.

useradd -g 33 -m kbeezie

Group ID 33 is the id for www-data on Debian Squeeze (you can verify with `id www-data`). You may have to su into the new user and change the password (or use usermod to change it). This will also create a home folder in /home/kbeezie/ by default. You can log in via SFTP as this user and create a www folder if you wish. You’ll notice that the files will be owned by kbeezie:www-data, which allows Nginx and PHP to read them, but also gives you group-level control over how the web services may treat those files.

This is ideal since you’re not giving nginx or php-fpm too much control over the user’s files, and access can be controlled with the group permissions. You could also create the user with its own group, such as kbeezie:kbeezie, and just change the group of the web files to www-data where appropriate.
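For example (paths are illustrative), after uploading you could bring ownership and permissions in line so that your user owns everything and the web service group only gets read access:

chown -R kbeezie:www-data /home/kbeezie/www
chmod -R u=rwX,g=rX,o= /home/kbeezie/www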

SSH Options

It is usually advisable to disable root login via /etc/ssh/sshd_config with the following line:

PermitRootLogin no

However, make sure you can log in with your new unprivileged user via SSH, and also make sure that you can `su` to root. On a FreeBSD system only a user belonging to the wheel group can su to root, and only a user listed in the sudoers file can use the sudo command. On Debian and the like, however, the user could have www-data as its own group and still be able to su/sudo as long as the root password is valid. Your password should be at least 12 characters long and contain digits, symbols, and at least one capital letter. Do not use the same password for root and your web user.

Once you’ve verified that you’re able to obtain root status with the new user, you can safely disable root login via sshd_config and restart the ssh daemon.

You should also change your default SSH port, which is 22. While a port scanner could probably find the new SSH port, it is usually best practice not to use the default port for any type of secure access. As before, make sure you can log in on the new port (you can configure sshd_config to listen on both 22 and another port to test this out).
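A sketch of the relevant sshd_config lines while testing (the alternate port is only an example; sshd accepts multiple Port directives, so you can keep 22 open until the new port is confirmed working):

Port 22
Port 2222
PermitRootLogin no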

SSH – PubKey Authentication

If you are on OS X or another Unix/Linux operating system, like I am, setting up public key authentication is fairly painless. Logged in as your web user on the server, you can run the following command:

ssh-keygen

The above will, by default, ask for a passphrase for your private key as well as a destination to save both the id_rsa and id_rsa.pub files (normally ~/.ssh/). You can then copy your server user’s public key into an authorized_keys file with the following command.

cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

Then on your own computer you can run the ssh-keygen command as well, copy your own computer’s public key from the id_rsa.pub file and add it as another line to your server’s authorized_keys file.
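If your local machine has ssh-copy-id available, it can do that appending for you; a sketch, assuming an example alternate port of 2222:

ssh-copy-id -i ~/.ssh/id_rsa.pub -p 2222 kbeezie@your-server.example.com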

If you have `PubkeyAuthentication yes` listed in your sshd_config file, with the authorized keys path pointing at .ssh in your home folder, the server should allow you to log in without prompting you for a password. Just remember that if you chose not to use a passphrase for your private key, then anyone who obtains your id_rsa* files can log into your server without being prompted for a password.

You can even turn off password authentication completely and rely solely on public key authentication by setting `PasswordAuthentication no` in your sshd_config file. Keep in mind, though, that unless you have another means of getting into your server you might get locked out if you lose your public/private key or access to the machine you use to log in (also, not every SFTP or terminal application supports public key authentication).

I actually have public key authentication set up with my iPad using iSSH for quick server access on the go (and you do not need a jailbroken iPad for this).

On the next page are some Nginx- and PHP-specific configurations for hardening your installation.

Nginx Flood Protection with Limit_req

April 9th, 2011

The Test

I’ll show you a very simple demonstration of Nginx’s Limit Request module and how it may be helpful to you in keeping your website up if you are hit by excessive connections or HTTP based denial-of-service attacks.

For this test I’m using a copy of my site’s about page saved as about.html, and the Blitz.io service (which is free at the moment) to test the limit_req directive.

First I test the page with the following command in Blitz, which will essentially ramp the number of concurrent connections from 1 to 1,075 over a period of 1 minute. The timeout has been set to 2 minutes, and the region set to California. I also set it to consider any response code other than 200 to be an error, otherwise even a 503 response will be considered a hit.

-p 1-1075:60 --status 200 -T 2000 -r california http://kbeezie.com/about.html

Not bad, right? But what if that were a PHP document? That many users hitting it frequently might cause your server to show 502/504 errors as the PHP processes behind it keep crashing or timing out. This is especially true if you’re using a VPS or an inexpensive dedicated server without any additional protection. (Though if you use either, iptables or pf might be good resources to look into.)

You can of course use caching and other tools to help improve your website, such as a WordPress caching plugin, which you should be using anyway. But sometimes that one lone person might decide to hit one of your PHP scripts directly. For those types of people we can use the limit request module.

Let’s create a zone in our http { } block in Nginx; we’ll call it blitz, set a rate of 5 requests per second, and allow the zone to hold up to 10MB of data. I’ve used $binary_remote_addr as the session variable since you can pack a lot more of those into 10MB of space than the human-readable $remote_addr.

limit_req_zone $binary_remote_addr zone=blitz:10m rate=5r/s;

Then inside my server block I set a limit_req for the file I was testing above:

location = /about.html {
	limit_req zone=blitz nodelay;
}

So I reload Nginx’s configuration and I give Blitz another try:

You’ll notice that now only about 285 hits made it to the server; that’s roughly 4.75 requests per second, just shy of the 5 r/s we set for the limit. The rest of the requests were hit with a 503 error. If you check the access log you’ll see that the majority of the requests got the 503 response, with spurts of 200 responses mixed in.

Using this can be quite helpful if you want to limit requests to certain weak locations on your website. It can also be applied to all PHP requests.

Applying Limit Request to PHP

If you would like to rate-limit all PHP requests, you can do so by inserting the limit_req directive into your PHP block like so:

location ~ \.php {
	# "flood" must match a zone defined with limit_req_zone in the http block
	limit_req   zone=flood;
	include php_params.conf;
	fastcgi_pass unix:/tmp/php5-fpm.sock;
}

It may help to play with some settings like increasing or decreasing the rate, as well as using the burst or nodelay options. Those configuration options are detailed here: HttpLimitReqModule.
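As a rough guide to how those options change the behavior of the zone defined earlier:

limit_req zone=blitz;                   # no burst: excess requests get a 503 immediately
limit_req zone=blitz burst=10;          # queue (delay) up to 10 excess requests, then 503
limit_req zone=blitz burst=10 nodelay;  # allow the 10 excess immediately, then 503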

Note about Blitz.io

You may have noticed from the graphs above that the tests were performed with up to 1,075 users. That is a bit misleading: all of the hits from the California region came from a single IP (50.18.0.223).

This is hardly realistic compared to an actual rush of traffic, or a DDoS (Distributed Denial of Service) attack. It also explains why the hits are consistent with the limiting of a single user, as opposed to a higher number reflecting many actual users or IP sources. Load Impact actually uses separate IPs for their testing regions; however, with the free version you are limited to a maximum of 50 users and in the number of times you may perform the test. You would have to spend about $49/day to perform a test with up to 1,000 users against your site.

Testing from a single IP can easily be done from your own server or personal computer if you have enough RAM and bandwidth. Tools to do this from your own machine include siege, ab, openload and so forth. You just don’t get all the pretty graphs or various locations to test from.
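For instance, a roughly comparable run with ApacheBench (ab) might look like this, where the request count and concurrency are just examples:

ab -n 1000 -c 50 http://kbeezie.com/about.html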

Also, if you are testing this yourself, remember to use the --status flag, as Blitz will otherwise consider 5xx responses a successful hit.

Better Alternatives

I won’t go into too much detail, but if you are serious about protecting your site from the likes of an actual DDoS or multi-service attack, it would be best to look into other tools such as iptables (Linux) or pf (packet filter, for BSD) on the software side, or a hardware firewall if your host provides one. The limit request module above will only work for floods against your site over the HTTP protocol; it will not protect you from ping floods or various other exploits. For those it helps to turn off any unnecessary services and to avoid having services listen on a public port that you do not need.

For example, on my server the only public ports being listened on are HTTP/HTTPS and SSH. Services such as MySQL should be bound to a local connection. It may also help to use alternate ports for common services such as SSH, but keep in mind it doesn’t take much for a port scanner to find them (that’s where iptables/pf come in handy).

Nginx Configuration Examples

March 31st, 2011

Here are a number of Nginx configurations for common scenarios that should help make things easier to get started with. This page will grow as needed.

A quick and easy starter example

The first one is for the fly-by, cut-and-paste type of people. If you’re using a typical /sites-enabled/default type of configuration, replace the content of that file with this. Make sure to change the root path, and your PHP port/path if it differs.

server {
	# .domain.com will match both domain.com and anything.domain.com
	server_name .example.com;
 
	# It is best to place the root of the server block at the server level, and not the location level
	# any location block path will be relative to this root. 
	root /usr/local/www/example.com;
 
	# It's always good to set logs, note however you cannot turn off the error log
	# setting error_log off; will simply create a file called 'off'.
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	# This can also go in the http { } level
	index index.html index.htm index.php;
 
	location / { 
		# if you're just using wordpress and don't want extra rewrites
		# then replace the word @rewrites with /index.php
		try_files $uri $uri/ @rewrites;
	}
 
	location @rewrites {
		# Can put some of your own rewrite rules in here
		# for example rewrite ^/~(.*)/(.*)/? /users/$1/$2 last;
		# If nothing matches we'll just send it to /index.php
		rewrite ^ /index.php last;
	}
 
	# This block will catch static file requests, such as images, css, js
	# The ?: prefix is a 'non-capturing' mark, meaning we do not require
	# the pattern to be captured into $1 which should help improve performance
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		# Some basic cache-control for static files to be sent to the browser
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	# remove the robots line if you want to use wordpress' virtual robots.txt
	location = /robots.txt  { access_log off; log_not_found off; }
	location = /favicon.ico { access_log off; log_not_found off; }	
 
	# this prevents hidden files (beginning with a period) from being served
	location ~ /\.          { access_log off; log_not_found off; deny all; }
 
	location ~ \.php {
        	fastcgi_param  QUERY_STRING       $query_string;
        	fastcgi_param  REQUEST_METHOD     $request_method;
	        fastcgi_param  CONTENT_TYPE       $content_type;
	        fastcgi_param  CONTENT_LENGTH     $content_length;
 
	        fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
	        fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;
	        fastcgi_param  REQUEST_URI        $request_uri;
	        fastcgi_param  DOCUMENT_URI       $document_uri;
	        fastcgi_param  DOCUMENT_ROOT      $document_root;
	        fastcgi_param  SERVER_PROTOCOL    $server_protocol;
 
        	fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
	        fastcgi_param  SERVER_SOFTWARE    nginx;
 
        	fastcgi_param  REMOTE_ADDR        $remote_addr;
	        fastcgi_param  REMOTE_PORT        $remote_port;
	        fastcgi_param  SERVER_ADDR        $server_addr;
	        fastcgi_param  SERVER_PORT        $server_port;
	        fastcgi_param  SERVER_NAME        $server_name;
 
	        fastcgi_pass 127.0.0.1:9000;
	}
}

As for the rest of you, read on for some more goodies and other configuration examples.

Making the PHP inclusion easier

For the purposes of PHP I’ve created a php.conf in the same folder as nginx.conf. This file does NOT go into the folder with your virtual host configurations: for example, if nginx.conf is in /etc/nginx/, then php.conf goes into /etc/nginx/, not /etc/nginx/sites-enabled/.

If you are not using PHP-FPM, you really should be, as opposed to the old spawn-fcgi or php-cgi methods. For Debian Lenny/Squeeze, dotdeb makes php5-fpm available; FreeBSD already includes it in the PHP 5.2 and 5.3 ports.

location ~ \.php {
        # for security reasons the next line is highly encouraged
        try_files $uri =404;
 
        fastcgi_param  QUERY_STRING       $query_string;
        fastcgi_param  REQUEST_METHOD     $request_method;
        fastcgi_param  CONTENT_TYPE       $content_type;
        fastcgi_param  CONTENT_LENGTH     $content_length;
 
        fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
 
        # if the next line in yours still contains $document_root,
        # consider switching to $request_filename, which provides
        # better support for directives such as alias
        fastcgi_param  SCRIPT_FILENAME    $request_filename;
 
        fastcgi_param  REQUEST_URI        $request_uri;
        fastcgi_param  DOCUMENT_URI       $document_uri;
        fastcgi_param  DOCUMENT_ROOT      $document_root;
        fastcgi_param  SERVER_PROTOCOL    $server_protocol;
 
        fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
        fastcgi_param  SERVER_SOFTWARE    nginx;
 
        fastcgi_param  REMOTE_ADDR        $remote_addr;
        fastcgi_param  REMOTE_PORT        $remote_port;
        fastcgi_param  SERVER_ADDR        $server_addr;
        fastcgi_param  SERVER_PORT        $server_port;
        fastcgi_param  SERVER_NAME        $server_name;
 
        # If using a unix socket...
        # fastcgi_pass unix:/tmp/php5-fpm.sock;
 
        # If using a TCP connection...
        fastcgi_pass 127.0.0.1:9000;
}

I prefer to use a unix socket; it cuts out the TCP overhead and increases security, since file-based permissions are stronger. In php-fpm.conf you can change the listen line to /tmp/php5-fpm.sock, and you should uncomment the listen.owner, listen.group and listen.mode lines.
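A sketch of the relevant pool settings when switching to a socket (the path and the www-data user/group are examples, adjust to your own setup):

listen = /tmp/php5-fpm.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0660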

REMEMBER: Set cgi.fix_pathinfo to 0 in the php.ini when using Nginx with PHP-FPM/FastCGI, otherwise something as simple as /forum/avatars/user2.jpg/index.php could be used to execute an uploaded php script hidden as an image.

Path_Info
If you must use PATH_INFO and PATH_TRANSLATED, then add the following within your location block above (make sure $ does not appear after \.php, or /index.php/some/path/ will not match):

	fastcgi_split_path_info ^(.+\.php)(/.+)$;
	fastcgi_param  PATH_INFO          $fastcgi_path_info;
	fastcgi_param  PATH_TRANSLATED    $document_root$fastcgi_path_info;

It is advised however to update your script to use REQUEST_URI instead.

Optional Config – Drop

You can also have a file called drop.conf in the same folder as nginx.conf, containing the following:

	location = /robots.txt  { access_log off; log_not_found off; }
	location = /favicon.ico { access_log off; log_not_found off; }	
	location ~ /\.          { access_log off; log_not_found off; deny all; }
	location ~ ~$           { access_log off; log_not_found off; deny all; }

Using include drop.conf; at the end of your server block will include these. The first two simply turn off access logging and prevent logging an error if robots.txt and favicon.ico are not found, which are files a lot of browsers ask for. WordPress typically uses a virtual robots.txt, so you may omit the first line if needed so that the request is passed back to WordPress.

The third line prevents nginx from serving any hidden unix/linux files, basically any request beginning with a period.

And the fourth line is mainly for people who use vim, or any other command line editor that creates a backup copy of the file being worked on, with a file name ending in ~. Hiding these prevents someone from accessing a backup copy of a file you have been working on.

Sample Nginx Configurations

Simple Configuration with PHP enabled

Here we have a very simple starting configuration with PHP support. I will provide most of the comments here for the basic lines. The other sample configurations won’t be as heavily commented.

server {
	# This will listen on all interfaces, you can instead choose a specific IP
	# such as listen x.x.x.x:80;  Setting listen 80 default_server; will make
	# this server block the default one if no other blocks match the request
	listen 80;
 
	# Here you can set a server name, you can use wildcards such as *.example.com
	# however remember if you use server_name *.example.com; You'll only match subdomains
	# to match both subdomains and the main domain use both example.com and *.example.com
	server_name example.com www.example.com;
 
	# It is best to place the root of the server block at the server level, and not the location level
	# any location block path will be relative to this root. 
	root /usr/local/www/example.com;
 
	# It's always good to set logs, note however you cannot turn off the error log
	# setting error_log off; will simply create a file called 'off'.
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	location / { 
		# Rewrite rules and other criterias can go here
		# Remember to avoid using if() where possible (http://wiki.nginx.org/IfIsEvil)
	}
 
	# This block will catch static file requests, such as images, css, js
	# The ?: prefix is a 'non-capturing' mark, meaning we do not require
	# the pattern to be captured into $1 which should help improve performance
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		# Some basic cache-control for static files to be sent to the browser
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	# We can include our basic configs here, as you can see its much easier
	# than pasting out the entire sets of location block each time we add a vhost
 
	include drop.conf;
	include php.conf;
}

WordPress Simple (Not using file-based caching or special rewrites)
This can also be used with W3 Total Cache if you’re using Memcache or Opcode Caching.

server {
	listen 80;
 
	server_name example.com www.example.com;
 
	root /usr/local/www/example.com;
 
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	location / { 
		try_files $uri $uri/ /index.php; 
	}
 
	# note: zone=kbeezieone must be defined with limit_req_zone in your http block
	location /search { limit_req zone=kbeezieone burst=3 nodelay; rewrite ^ /index.php; }
 
	fastcgi_intercept_errors off;
 
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	include php.conf;
 
	# You may want to remove the robots line from drop to use a virtual robots.txt
	# or create a drop_wp.conf tailored to the needs of the wordpress configuration
	include drop.conf;
}

WordPress w/ W3 Total Cache using Disk (Enhanced)
The following rules have been updated to work with W3TC 0.9.2.8; in some cases $host may need to be replaced with your domain name.

server {
	listen 80;
 
	server_name example.com www.example.com;
 
	root /usr/local/www/example.com;
 
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	# the next two location blocks are to ensure gzip encoding is turned off
	# for the serving of already gzipped w3tc page cache
	# normally gzip static would handle this but W3TC names them with _gzip
 
	location ~ /wp-content/cache/page_enhanced.*html$ {
		expires max;
		charset utf-8;
		add_header Vary "Accept-Encoding, Cookie";
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	location ~ /wp-content/cache/page_enhanced.*gzip$ {
		expires max;
		gzip off;
		types {}
		charset utf-8;
		default_type text/html;
		add_header Vary "Accept-Encoding, Cookie";
		add_header Content-Encoding gzip;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	location / { 
		if (-f $request_filename) {
		        break;
		}
 
		set $w3tc_rewrite 1;
		if ($request_method = POST) { set $w3tc_rewrite 0; }
		if ($query_string != "") { set $w3tc_rewrite 0; }
 
		set $w3tc_rewrite2 1;
		if ($request_uri !~ \/$) { set $w3tc_rewrite2 0; }
		if ($request_uri ~* "(sitemap(_index)?\.xml(\.gz)?|[a-z0-9_\-]+-sitemap([0-9]+)?\.xml(\.gz)?)") { set $w3tc_rewrite2 1; }
		if ($w3tc_rewrite2 != 1) { set $w3tc_rewrite 0; }
 
		if ($http_cookie ~* "(comment_author|wp\-postpass|wordpress_\[a\-f0\-9\]\+|wordpress_logged_in)") { set $w3tc_rewrite 0; }
		if ($http_user_agent ~* "(W3\ Total\ Cache/0\.9\.2\.4)") { set $w3tc_rewrite 0; }
 
		set $w3tc_ua "";
		set $w3tc_ref "";
		set $w3tc_ssl "";
		set $w3tc_enc "";
 
		if ($http_accept_encoding ~ gzip) { set $w3tc_enc _gzip; }
 
		set $w3tc_ext "";
		if (-f "$document_root/wp-content/cache/page_enhanced/$host/$request_uri/_index$w3tc_ua$w3tc_ref$w3tc_ssl.html$w3tc_enc") {
		    set $w3tc_ext .html;
		}
		if ($w3tc_ext = "") { set $w3tc_rewrite 0; }
 
		if ($w3tc_rewrite = 1) {
		    rewrite ^ "/wp-content/cache/page_enhanced/$host/$request_uri/_index$w3tc_ua$w3tc_ref$w3tc_ssl$w3tc_ext$w3tc_enc" last;
		}
 
		if (!-e $request_filename) { 
			rewrite ^ /index.php last; 
		}
	}
 
	location /search { limit_req zone=kbeezieone burst=3 nodelay; rewrite ^ /index.php; }
 
	fastcgi_intercept_errors off;
 
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	include php.conf;
 
	# You may want to remove the robots line from drop to use a virtual robots.txt
	# or create a drop_wp.conf tailored to the needs of the wordpress configuration
	include drop.conf;
}

WordPress w/ WP Supercache (Full-On mode)

server {
	listen 80;
 
	server_name example.com www.example.com;
 
	root /usr/local/www/example.com;
 
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	location / { 
		# This line when enabled will use Nginx's gzip static module
		gzip_static on;
 
		# Disables serving gzip content to IE 6 or below
		gzip_disable        "MSIE [1-6]\.";
 
		# Sets the default type to text/html so that gzipped content is served
		# as html, instead of raw uninterpreted data.
		default_type text/html;
 
		# does the requested file exist exactly as it is? if yes, serve it and stop here
		if (-f $request_filename) { break; }
 
		# sets some variables to help test for the existence of a cached copy of the request
		set $supercache_file '';
		set $supercache_uri $request_uri;
 
		# IF the request is a post, has a query attached, or a cookie
		# then don't serve the cache (ie: users logged in, or posting comments)
		if ($request_method = POST) { set $supercache_uri ''; }
		if ($query_string) { set $supercache_uri ''; }
		if ($http_cookie ~* "comment_author_|wordpress|wp-postpass_" ) { 
			set $supercache_uri ''; 
		}
 
		# if the supercache_uri variable hasn't been blanked by this point, attempt
		# to set the name of the destination to the possible cache file
		if ($supercache_uri ~ ^(.+)$) { 
			set $supercache_file /wp-content/cache/supercache/$http_host/$1index.html; 
		}
 
		# If a cache file of that name exists, serve it directly
		if (-f $document_root$supercache_file) { rewrite ^ $supercache_file break; }
 
		# Otherwise send the request back to index.php for further processing
		if (!-e $request_filename) { rewrite ^ /index.php last; }
	}
 
	location /search { limit_req zone=kbeezieone burst=3 nodelay; rewrite ^ /index.php; }
 
	fastcgi_intercept_errors off;
 
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	include php.conf;
 
	# You may want to remove the robots line from drop to use a virtual robots.txt
	# or create a drop_wp.conf tailored to the needs of the wordpress configuration
	include drop.conf;
}

Nibbleblog 3.4.1+ with Pretty URLs enabled

If you’re using NibbleBlog ( http://nibbleblog.com ), the following basic configuration works with version 3.4.1, and 3.4.1 Markdown.

server {
    listen 80;
    root /usr/local/www/example.com;
    server_name example.com www.example.com;
 
    # access log turned off for speed
    access_log off;
    error_log /var/log/nginx/domain.error.log;
 
    # main location block
    location / {
        expires 7d;
        try_files $uri $uri/ @rewrites;
    }
 
    # rewrite rules if file/folder did not exist
    location @rewrites {
        rewrite ^/dashboard$ /admin.php?controller=user&action=login last;
        rewrite ^/feed /feed.php last;
        rewrite ^/category/([^/]+)/page-([0-9]+)$ /index.php?controller=blog&action=view&category=$1&page=$2 last;
        rewrite ^/category/([^/]+)/$ /index.php?controller=blog&action=view&category=$1&page=0 last;
        rewrite ^/page-([0-9]+)$ /index.php?controller=blog&action=view&page=$1 last;
    }
 
    # location catch for /post-#/Post_Title
    # will also catch odd instances of /post-#/something.php
    location ~ ^/post-([0-9]+)/ {
        rewrite ^ /index.php?controller=post&action=view&id_post=$1 last;
    }
 
    # cache control for static files
    location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
        expires max;
        add_header Pragma public;
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
    }
 
	include drop.conf;
	include php.conf;
}

Drupal 6+
From http://wiki.nginx.org/Drupal with some changes

server {
	listen 80;
 
	server_name example.com www.example.com;
 
	root /usr/local/www/example.com;
 
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	# This matters if you use drush
	location = /backup {
		deny all;
	}
 
	# Very rarely should these ever be accessed outside of your lan
	location ~* \.(txt|log)$ {
		allow 192.168.0.0/16;
		deny all;
	}
 
	location ~ \..*/.*\.php$ { return 403; }
 
	location / {
		# This is cool because no php is touched for static content
		try_files $uri @rewrite;
	}
 
	location @rewrite {
		# Some modules enforce no slash (/) at the end of the URL
		# Else this rewrite block wouldn't be needed (GlobalRedirect)
		rewrite ^/(.*)$ /index.php?q=$1;
	}
 
	# Fighting with ImageCache? This little gem is amazing.
	location ~ ^/sites/.*/files/imagecache/ {
		try_files $uri @rewrite;
	}
 
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
		log_not_found off;
	}
 
	include php.conf;
 
	# You may want to remove the robots line from drop to use a virtual robots.txt
	# or create a drop_wp.conf tailored to the needs of the wordpress configuration
	include drop.conf;
}

Static with an Apache (for dynamic) Backend

server {
	listen 80;
 
	server_name example.com www.example.com;
 
	root /usr/local/www/example.com;
 
	access_log /var/log/nginx/example.access.log;
	error_log /var/log/nginx/example.error.log;
 
	location / { 
		# Rewrite rules can go here
	}
 
	location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}
 
	location ~ \.php {
		proxy_set_header X-Real-IP  $remote_addr;
		proxy_set_header X-Forwarded-For $remote_addr;
 
		proxy_set_header Host $host;
 
		proxy_pass http://127.0.0.1:8080;
	}
 
	include drop.conf;
}

Search Page Getting Hammered?

March 27th, 2011

On my WordPress caching write-up someone asked a very good question in the comments: what good is the caching if your site gets brought down by excessive search queries?

Great article.

However I have following problem.

My WP setup is like that.

Currently I am on shared hosting with WP + W3 Total cache and during peak hours, my site is very slow. That is mainly because I have a huge traffic from Google.

My website caches plenty of keywords with AskApache Google 404 and Redirection.

What happens is that traffic from Google goes to /search/what-ever-keywords, dynamically created every time. And that is killing my system.
The problem is I have no idea how to help the poor server and cache that kind of traffic.

Would you have any advice for that ?
Regards,
Peter

Fortunately the Nginx webserver has a way to soften the impact, with a module called Limit Request. For WordPress this is how you would implement it:

http {
	limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
	#... the rest of your content
 
	server {
		#... your server content
 
		location /search { 
			limit_req zone=one burst=3 nodelay;
			try_files $uri /index.php; 
		}
	}
}

What we have here is almost identical to the example provided in the Nginx wiki (HttpLimitReqModule). Essentially we’ve created a zone that allows a user no more than 1 request per second, tracking them via $binary_remote_addr (which is smaller than $remote_addr while still accomplishing the same goal), within a space of 10MB.

Then we’ve placed a limit_req directive into the /search location. (Remember to include the fallback so that index.php receives the search request in WordPress.) This directive uses the zone we created earlier, allowing a burst of up to 3 requests from a single user. If the user exceeds the limit too many times (the burst limit) they’re presented with a 503 error, which Google and others consider a sort of ‘back off’ response.

By default the burst value is zero.

We use try_files here because a rewrite would occur before the limiting has a chance to act, since the rewrite phase takes precedence over most other processing.

Here is an example WordPress configuration utilizing the above, configured for W3TC with memcache (or opcode caching); please see the article The Importance of Caching WordPress for details on caching WordPress in other ways, and why you should be using caching with WordPress.

This config is somewhat based on the same configuration I currently use for kbeezie.com on a FreeBSD server.

 
user kb244 www;
worker_processes  2;
 
events {
	worker_connections	2048;
}
 
 
http {
	sendfile on;
	tcp_nopush on; # Generally on for linux, off for FreeBSD/Unix
	tcp_nodelay on;
	server_tokens off;
	include mime.types;
	default_type  application/octet-stream;
	index index.php index.htm index.html redirect.php;
 
	#Gzip
	gzip  on;
	gzip_vary on;
	gzip_proxied any;	
	gzip_comp_level 6;
	gzip_buffers 16 8k;
	gzip_http_version 1.1;
	gzip_disable "MSIE [1-6]\.(?!.*SV1)";	
	gzip_types text/plain text/css application/json application/x-javascript text/xml 
                        application/xml application/xml+rss text/javascript;
 
	#FastCGI
	fastcgi_intercept_errors on;
	fastcgi_ignore_client_abort on;
	fastcgi_buffers 8 16k;
	fastcgi_buffer_size 32k;
	fastcgi_read_timeout 120;
	fastcgi_index  index.php;
 
	limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
 
	server {
		listen 80;
 
		# Replace with your domain(s)
		server_name kbeezie.com www.kbeezie.com;
 
		# Setting your root at the server level helps a lot
		# especially if you don't like having to customize your
		# PHP configurations every single time you create a vhost
		root /usr/local/www/kbeezie.com;
 
		# Always a good idea to log your vhosts separately
		access_log /var/log/nginx/kbeezie.access.log;
		error_log /var/log/nginx/kbeezie.error.log;
 
		# I'm using W3TC with memcache, otherwise this block
		# would look a lot more complicated for handling file
		# based caching.
		location / { 
			try_files $uri $uri/ /index.php; 
		}
 
		# And here we have the search block limiting the requests
		location /search {
			limit_req zone=one burst=3 nodelay;
			try_files $uri /index.php;
		}
 
		# We want Wordpress to handle the error page
		fastcgi_intercept_errors off;
 
		# Handle static file requests here, setting appropriate headers
		location ~* \.(ico|css|js|gif|jpe?g|png)$ {
			expires max;
			add_header Pragma public;
			add_header Cache-Control "public, must-revalidate, proxy-revalidate";
		}
 
		# I prefer to save this location block in its own php.conf
		# in the same folder as nginx.conf so I can just use:
		# include php.conf;
		# at the bottom of each server block I want PHP enabled on
 
		location ~ \.php$ {
			fastcgi_param  QUERY_STRING       $query_string;
			fastcgi_param  REQUEST_METHOD     $request_method;
			fastcgi_param  CONTENT_TYPE       $content_type;
			fastcgi_param  CONTENT_LENGTH     $content_length;
 
			fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
			fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;
			fastcgi_param  REQUEST_URI        $request_uri;
			fastcgi_param  DOCUMENT_URI       $document_uri;
			fastcgi_param  DOCUMENT_ROOT      $document_root;
			fastcgi_param  SERVER_PROTOCOL    $server_protocol;
 
			fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
			fastcgi_param  SERVER_SOFTWARE    nginx;
 
			fastcgi_param  REMOTE_ADDR        $remote_addr;
			fastcgi_param  REMOTE_PORT        $remote_port;
			fastcgi_param  SERVER_ADDR        $server_addr;
			fastcgi_param  SERVER_PORT        $server_port;
			fastcgi_param  SERVER_NAME        $server_name;
 
			fastcgi_pass unix:/tmp/php5-fpm.sock;
 
			# I use a Unix socket for PHP-Fpm, yours instead may be:
			# fastcgi_pass 127.0.0.1:9000;
		}
 
		# Browsers always look for a favicon, I rather not flood my logs
		location = /favicon.ico { access_log off; log_not_found off; }	
 
		# Make sure you hide your .hidden linux/unix files
		location ~ /\. { deny  all; access_log off; log_not_found off; }
	}
}

As always a little research and tweaking to your own requirements is advised.

Nginx and Codeigniter The Easy Way

March 5th, 2011

I personally don’t use CodeIgniter (I was never much of a fan of PHP frameworks); however, a client approached me with this issue so I decided to take a stab at it.

Now normally you would look for any existing .htaccess that comes with a script package and attempt to convert the rewrite rules. But seriously, why go through all that fuss when we can simply use try_files:

server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;
    root /path/to/your/website.com/;
 
    location / {
        try_files $uri $uri/ /index.php;
    }
 
    # For more information on the next two lines see http://kbeezie.com/view/nginx-configuration-examples/
    include php.conf;
    include drop.conf;
}

In a nutshell this should normally work… but why does it not?

Quite simple really: CodeIgniter by default uses PATH_INFO, which is a really antiquated method of parsing the URI. As a result we must tell CI to use REQUEST_URI instead. To do this, open up config.php under /system/application/config/, find $config[‘uri_protocol’], and set it to this:

$config['uri_protocol'] = "REQUEST_URI";

You could also choose to use AUTO, but since we know we’ll be dealing with request_uri, it is best to set it as such (though if you do run into problems, give AUTO a try).

If you have not already set the index page, you will want to blank it out in order for it to work with a rewrite method (request_uri, etc).

$config['index_page'] = "";

For known static files, take it a step further and capture common static requests so that you can cache accordingly:

	location ~* \.(ico|css|js|gif|jpe?g|png)$ {
		expires max;
		add_header Pragma public;
		add_header Cache-Control "public, must-revalidate, proxy-revalidate";
	}

Debian/Ubuntu Nginx init Script (opt)

February 28th, 2011

Normally if you install Nginx from a repository this init script would already be included. However if you installed from source, or did not use the standard paths, you may need this.

If you find that stop/restart and such do not work, your pid file location may be incorrect. You can either set it in nginx.conf or change the init script here to point to the correct pid location. If you’re ever in a pinch stopping or restarting an nginx process that’s already running, you can use the following:

nginx -s quit (or reload)

Quit will terminate the existing processes; reload will keep nginx running but simply reload the configuration. But if your init script is set up properly you should be able to use that instead.

The below init script was for a configuration where /opt was the provided configuration prefix.

#! /bin/sh
 
### BEGIN INIT INFO
# Provides:          nginx
# Required-Start:    $all
# Required-Stop:     $all
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: starts the nginx web server
# Description:       starts nginx using start-stop-daemon
### END INIT INFO
 
PATH=/opt/bin:/opt/sbin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/opt/sbin/nginx
NAME=nginx
DESC=nginx
 
test -x $DAEMON || exit 0
 
# Include nginx defaults if available
if [ -f /etc/default/nginx ] ; then
        . /etc/default/nginx
fi
 
set -e
 
case "$1" in
  start)
        echo -n "Starting $DESC: "
        start-stop-daemon --start --quiet --pidfile /var/run/nginx.pid \
                --exec $DAEMON -- $DAEMON_OPTS
        echo "$NAME."
        ;;
  stop)
        echo -n "Stopping $DESC: "
        start-stop-daemon --stop --quiet --pidfile /var/run/nginx.pid \
                --exec $DAEMON
        echo "$NAME."
        ;;
  restart|force-reload)
        echo -n "Restarting $DESC: "
        start-stop-daemon --stop --quiet --pidfile \
                /var/run/nginx.pid --exec $DAEMON
        sleep 1
        start-stop-daemon --start --quiet --pidfile \
                /var/run/nginx.pid --exec $DAEMON -- $DAEMON_OPTS
        echo "$NAME."
        ;;
  reload)
      echo -n "Reloading $DESC configuration: "
      start-stop-daemon --stop --signal HUP --quiet --pidfile /var/run/nginx.pid \
          --exec $DAEMON
      echo "$NAME."
      ;;
  *)
        N=/etc/init.d/$NAME
        echo "Usage: $N {start|stop|restart|force-reload}" >&2
        exit 1
        ;;
esac
 
exit 0
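Once saved as /etc/init.d/nginx (path assumed here) and made executable, the script can be registered with the default runlevels on Debian/Ubuntu:

chmod +x /etc/init.d/nginx
update-rc.d nginx defaults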

Protecting Folders with Nginx

February 5th, 2011

This question comes up every so often, and it’s actually fairly easy, aside from the fact that you do not use an .htaccess file. To do this we’ll need the auth_basic module, which comes included with Nginx.

First we’ll need to create a password file. You can create this in the folder you wish to protect (though the file can reside anywhere Nginx has access to).

Creating .htpasswd

If you previously had Apache/httpd installed on your machine you may still have some of the utilities that came with it. In this case we’ll want to use the htpasswd command:

htpasswd -c /var/www/domain.com/admin/.htpasswd username

When you run the above command, it will prompt you for a password for the provided username, and then create the file .htpasswd in the folder you specified. If you already have a pre-existing password file, you can omit the -c flag. You can use the -D flag to remove the specified user from a password file.

If you do not have htpasswd on your system, you can create the file yourself, with each line in the format:

username:encrypted-password:comment

To generate the encrypted password you can use Perl, which is installed on most systems.

perl -le 'print crypt("your-password", "salt-hash")'

Or you can use Ruby via the interactive console command ‘irb’:

"your-password".crypt("salt-hash")

Setting up Nginx

Add protection for the /admin folder.

server {
    listen 80;
    server_name domain.com www.domain.com;
 
    location / { 
        # your normal content
    }
 
    location /admin {
        auth_basic "Administrator Login";
        auth_basic_user_file /var/www/domain.com/admin/.htpasswd;
    }
 
    #!!! IMPORTANT !!! We need to hide the password file from prying eyes
    # This will deny access to any hidden file (beginning with a .period)
    location ~ /\. { deny  all; }
}

With the above access to the admin directory will prompt the user with a basic authentication dialog, and will be challenged against the password file provided.

Additional Security Considerations

The password file itself does not have to be named .htpasswd, but if you do store it in a web-accessible location make sure to protect it. With the last location block above, any file name with a period in front will be protected from web access.

The safest place to store a password file is outside of the web-accessible location even if you take measures to deny access. If you wish to do so, you can create a protected folder in your nginx/conf directory to store your password files (such as conf/domain.com/folder-name/password) and load the user file from that location in your configuration.
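For instance, loading the user file from a path outside the web root (the path here is purely illustrative):

    location /admin {
        auth_basic "Administrator Login";
        auth_basic_user_file /etc/nginx/domain.com/admin/password;
    }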

Apache and Nginx Together

January 30th, 2011

I’m not going to get into a lot of details about how to install and configure either HTTP server from scratch. This article is primarily food for thought for those who may want or need to configure Nginx alongside an existing Apache (httpd) setup. If you would rather ditch Apache altogether, consider reading Martin Fjordvald’s Nginx Primer 2: From Apache to Nginx.

Side by Side

In some cases you may need to run both Apache (httpd) and Nginx on port 80. One such situation is a server running cPanel/WHM, which doesn’t support Nginx, so you wouldn’t want to mess with the Apache configuration at all.

To do this you have to make sure Apache and Nginx are each bound to their own IP address. In the case of a WHM/cPanel-based webserver, you can release an IP to be used for Nginx in WHM. At this time I am not aware of a method of reserving an IP and automatically forcing Apache to listen on a specific set of IPs in a control panel such as DirectAdmin or Plesk, but the link above will show you how with WHM/cPanel.

Normally Apache will be listening on all interfaces, and as such you may see a line like this in your httpd.conf file:

Port 80
#or
Listen 80

Instead you’ll need to tell Apache to listen on a specific IP (you can have multiple Listen lines for multiple IPs):

Listen 192.170.2.1:80

When you change Apache to listen on specific interfaces, some VirtualHosts may fail to load if they are listening on specific IPs themselves. The address option in <VirtualHost addr[:port] [addr[:port]] …> only applies when the main server is listening on all interfaces (or an alias of such). If you do not have an IP listed in your <VirtualHost …> tag, then you do not need to worry about this. Otherwise make sure to remove the addr:port from the VirtualHost tag.

Make sure to update the NameVirtualHost directive in Apache for name-based virtual hosts.
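For example, on the Apache versions of that era the directive would simply be updated to the bound IP (the address is illustrative, matching the Listen example above):

NameVirtualHost 192.170.2.1:80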

Once you have apache configured to listen on a specific set of IPs you can do the same with nginx.

server {
    listen 192.170.2.2:80;
    ...
}

Now that both servers are bound to specific IPs, both can be started up on port 80. From there you simply point the IP of the domain to the server you wish to use. In the case of WHM/cPanel you can either manually configure the domain’s DNS entry in WHM to point at the Nginx IP, or you can use your own DNS, such as your registrar’s, to point the domain to the specific IP. Using the latter may break your mail/FTP etc. configurations if the DNS entries are not duplicated correctly.

Though normally, if you are setting up Nginx side by side with Apache, you’ll have your domain configurations separate from the rest of the system. The above paragraph only applies if the domain you are using has already been added by cPanel/WHM and you wish to have it served by Nginx exclusively. [Scroll down for PHP considerations]

Apache behind Nginx

If you are using control-panel-based hosting such as cPanel/WHM, this method is not advised. Most of the server’s configuration is handled automatically in those cases, and making manual changes will likely lead to problems (plus you won’t get support from the control panel makers or your hosting provider in most cases).

Using Nginx as the primary frontend webserver can increase performance regardless of whether you keep Apache running on the system. One of Nginx’s greatest advantages is how well it serves static content: it does so much more efficiently than Apache, with very little cost in memory or processing. Placing Nginx in front removes that burden from Apache, leaving it to concentrate on dynamic requests or special scenarios.

This method is also popular for those who don’t want to use PHP via fastcgi, or install a separate php-fpm process.

The first thing that needs to be done is to change the interface Apache listens on:

Listen 127.0.0.1:8080

Above we bind Apache to the localhost on an alternate port, since only Nginx on the same machine will be communicating with Apache.

Note: Since Nginx needs to access the same files that Apache serves, you need to make sure that Nginx runs as the same user as Apache, or that Nginx’s selected user:group has permission to read the web documents. Oftentimes the webroot is owned by nobody or www-data, and Nginx can likewise be set up to run as one of those users.
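For example, in nginx.conf the worker user can be set to match whatever owns the web documents (www-data below is only a placeholder):

# run the nginx worker processes as the same user/group that owns the web documents
user www-data www-data;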

Then, with Nginx listening on port 80 (either on all interfaces such as 0.0.0.0 or on specific IPs, your choice), we need to configure Nginx to serve the same content. Take for example this virtual host in an Apache configuration:

<VirtualHost *:8080>
      DocumentRoot "/usr/local/www/mydomain.com"
      ServerName mydomain.com
      ServerAlias www.mydomain.com
      CustomLog /var/log/httpd/mydomain_access.log common
      ErrorLog /var/log/httpd/mydomain_error.log
      ...
</VirtualHost>

In Nginx the equivalent server configuration would be:

server {
      root /usr/local/www/mydomain.com;
      server_name mydomain.com www.mydomain.com;
 
      # by default logs are stored in nginx's log folder
      # it can be changed to a full path such as /var/log/...
      access_log logs/mydomain_access.log;
      error_log logs/mydomain_error.log;
      ...
}

The above would serve all static content from the same location; however, any PHP files would simply come back as raw PHP source. For this we need to send any PHP requests back to Apache.

server {
      root /usr/local/www/mydomain.com;
      server_name mydomain.com www.mydomain.com;
 
      access_log logs/mydomain_access.log;
      error_log logs/mydomain_error.log;
 
      location / { 
            # this is the location block for the document root of this vhost
      } 
 
      location ~ \.php {
            # this block will catch any requests with a .php extension
            # normally in this block data would be passed to a FastCGI process
 
            # these two lines forward the actual IP of the client to Apache
            # for Apache to log or use that IP it needs a module such as
            # mod_remoteip (Apache 2.4+) or the third-party mod_rpaf
            # (see the Apache-side sketch after this server block)
 
            proxy_set_header X-Real-IP  $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
 
            # this next line adds the Host header so that apache knows which vHost to serve
            # the $host variable is automatically set to the hostname Nginx is responding to
 
            proxy_set_header Host $host;
 
            # And now we pass back to apache
            # if you're using a side-by-side configuration the IP can be changed to
            # apache's bound IP at port 80 such as http://192.170.2.1:80
 
            proxy_pass http://127.0.0.1:8080;
      }
 
       # if you don't like seeing all the errors for missing favicon.ico in root
       location = /favicon.ico { access_log off; log_not_found off; }
 
       # if you don't like seeing errors for a missing robots.txt in root
       location = /robots.txt { access_log off; log_not_found off; }
 
        # this will prevent files like .htaccess .htpasswd .secret etc from being served
       # You can remove the log directives if you wish to
       # log any attempts at a client trying to access a hidden file
       location ~ /\. { deny all; access_log off; log_not_found off; }
}
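On the Apache side, if you want the backend’s own access logs (and typically PHP’s REMOTE_ADDR) to show the original client address rather than 127.0.0.1, a module such as mod_remoteip (Apache 2.4+) can read the header Nginx sets. A minimal sketch, assuming the module is loaded:

# trust the X-Real-IP header, but only when the request comes from Nginx on localhost
RemoteIPHeader X-Real-IP
RemoteIPInternalProxy 127.0.0.1

Older Apache versions typically rely on the third-party mod_rpaf module for the same purpose.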

For rewriting (mod_rewrite) with Apache behind Nginx, you can use an incredibly simple directive called try_files. Here’s the same code as above, but with changes made so that any file or folder not found by Nginx gets passed to Apache for processing (useful for situations like WordPress, or .htaccess-based rewrites, when the file or folder is not caught by Nginx).

server {
      root /usr/local/www/mydomain.com;
      server_name mydomain.com www.mydomain.com;
 
      access_log logs/mydomain_access.log;
      error_log logs/mydomain_error.log;
 
      location / { 
            # try_files attempts to serve a file or folder, until it reaches the fallback at the end
            try_files $uri $uri/ @backend;
      } 
 
      location @backend {
            # essentially the same as passing php requests back to apache
 
            proxy_set_header X-Real-IP  $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Host $host;
            proxy_pass http://127.0.0.1:8080;
      }
      # ... The rest is the same as the configuration above ... 
}

With the above configuration, any file or folder that exists will be served by Nginx (the php block will catch files ending in .php); otherwise the request is passed back to Apache on the backend.

In some cases you may wish for everything to be passed to Apache unless otherwise specified, in which case you would have a configuration such as this:

server {
      root /usr/local/www/mydomain.com;
      server_name mydomain.com www.mydomain.com;
 
      access_log logs/mydomain_access.log;
      error_log logs/mydomain_error.log;
 
      location / {
            proxy_set_header X-Real-IP  $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
 
            proxy_set_header Host $host;
 
            proxy_pass http://127.0.0.1:8080;
      }
 
      # Example 1 - Specify a folder and its contents
      # To serve everything under /css from Nginx-only
      location /css { }
 
      # Example 2 - Specify a RegEx pattern such as file extensions
      # Serve certain files directly from Nginx
      location ~* ^.+\.(jpg|jpeg|gif|png|css|zip|pdf|txt|js|flv|swf|html|htm)$
      {
            # this will basically match any file with one of the above extensions
            # of course if the file does not exist you'll see an Nginx error page
            # as opposed to Apache's own error page
      }
}

Personally I prefer to let Nginx handle everything and specify exactly what needs to be passed back to Apache. But the last configuration above would more reliably allow .htaccess files to function, since almost every request gets passed back to Apache. The css folder, or any file matched by the other location block, would be served directly by Nginx regardless of whether there’s an .htaccess file there, since only Apache processes .ht* files.

PHP Considerations

If you are not familiar with Nginx, it should be noted that Nginx does not have a PHP module like Apache’s mod_php; instead you either need to build PHP with FPM (i.e. php-fpm/FastCGI), or you need to pass the request to something that can handle PHP.

In the case of using Nginx with Apache you essentially have two choices:

1) Compile and install PHP-FPM, thus running a separate PHP process to be used by Nginx (see the sketch after this list). This option is usually good if you want to keep the two webserver configurations separated from each other, so that any changes to one won’t affect the other. But of course it adds additional memory usage to the system.

2) Utilize Apache’s mod_php. This option is ideal when you will be passing data to Apache as a backend. Even in a side-by-side scenario, you can utilize mod_php by proxying any PHP request to Apache’s IP, but in that case you have to make sure that Apache is also configured to serve the same virtualhost that Nginx is requesting.
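For option 1, a minimal sketch of the Nginx side would look something like this (the socket path is a placeholder and depends on your PHP-FPM pool configuration):

location ~ \.php$ {
      # hand PHP requests to a local PHP-FPM pool instead of proxying to Apache
      include fastcgi_params;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      fastcgi_pass unix:/var/run/php-fpm.sock;
}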

Speed-wise both methods are about the same when PHP is configured with the same set of options on both. Which option you choose depends solely on your needs. Another article that may be of interest in relation to this one is Nginx as a Proxy to your Blog, which can just as easily be utilized on a single server (especially if you get multiple IP ranges like I do).