Archive for the ‘Nginx’ category

IPv6 with OpenVz (venet)

February 16th, 2011

This article assumes that the physical node set up with OpenVz already has native IPv6 up and running, and that you are using OpenVz containers connected with a venet device. For those who need help getting their physical node set up with IPv6 on CentOS/RHEL, click here.

Ideally IPv6 works best over a veth adapter, as they have a MAC address and play much more nicely with auto configuration within the container. However not all of us have the luxury of getting an OpenVz container with a bridged veth adapter AND IPv6.

In a nutshell, using a venet adapter suffers the following:

venet devices are not fully IPv6 compliant, but they still work if you statically assign IPv6 addresses. They do not properly support MAC addresses, and consequently link-local addresses, and cannot participate in neighbor discovery, router advertisements/discovery, or auto-configuration. They also require additional modifications to the layer 3 forwarding behaviour of the host via sysctl.

The wiki of course tells you how to add an IPv6 address to a container, but doesn't go into detail on actually getting it to work. I only know these steps to work on CentOS and Debian containers.

It's important that the following has been set on the physical server for this guide to work:

sysctl -w net.ipv6.bindv6only=1
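If you want that setting to survive a reboot, a quick sketch (assuming the standard /etc/sysctl.conf is used on the node):

# persist the setting and reload sysctl values
echo "net.ipv6.bindv6only = 1" >> /etc/sysctl.conf
sysctl -p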

Adding an IP

In the case of the server this site is on, I have a /64 block assigned to me by my datacenter. For those not familiar, a /64 block equates to about 18 quintillion addresses. In SolusVM you normally add a range of IPv6 addresses by randomly generating about 100 addresses at a time, but you can also manually define an address like I did with kbeezie, choosing 2001:4858:aaaa:6::6 for this site. The :: represents a span that is filled with zeros, so the fully expanded address would be 2001:4858:aaaa:0006:0000:0000:0000:0006.
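If you ever want to double-check an expansion like that, here's a one-liner I find handy (assuming Python 3 is available on the machine; the ipaddress module is part of its standard library):

python3 -c 'import ipaddress; print(ipaddress.ip_address("2001:4858:aaaa:6::6").exploded)'
# prints 2001:4858:aaaa:0006:0000:0000:0000:0006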

Now to add this address to a container we would simply use vzctl:

vzctl set <id> --ipadd <ipv6_addr> --save

For example, if your container's CTID is 101 and you wanted to add 2001:4858:aaaa:6::1, you would perform the following:

vzctl set 101 --ipadd 2001:4858:aaaa:6::1 --save
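To confirm the address actually landed in the container, a quick check (using the CTID 101 from the example above):

# enter the container from the host node
vzctl enter 101
# the new address should be listed with scope global
ip -6 addr show dev venet0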

Verifying IPv6 Connectivity

Now when you log into your container, you should be able to ping an outside IPv6 host, for example:

ping6 ipv6.google.com
If you get an error that there’s no connection to host, or you simply time out, then first double check that the IPv6 address shows in your ifconfig output. If it’s there then try the following command to add a default route from the container to the host node:

On CentOS/RHEL container:

ip route add 2000::/3 dev venet0

On Debian/Ubuntu container:

ip route add ::/0 dev venet0

You can verify the default route has been added with:

ip -6 route show

Once that has been added, try ping6 once again. If you see ping responses then you're done. If not, move on to the next step.

Adding route to container at physical node

Sometimes when you add an IP address to a container, the traffic simply doesn't make it to the container. I've run into this annoyance a number of times, especially when adding an IP address to a container in SolusVM.

Let's say for example you added 2001:4858:aaaa:6::1 to a container, but traffic simply is not making it through. You may need to re-add the route between the container and the eth0 adapter. To do this use the following command:

ip -6 neigh add proxy 2001:4858:aaaa:6::1 dev eth0

Once that's done, log back into the container and try to ping6 again. At this point you should be getting a response.
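If it still doesn't respond, it can help to double-check what the host node currently has set up. A quick sketch (run on the physical node; eth0 is the host's public interface as above):

# list the proxy NDP entries you've added
ip -6 neigh show proxy
# make sure IPv6 forwarding is enabled on the host
sysctl net.ipv6.conf.all.forwarding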

Configuring Web Servers

As mentioned in the OpenVz wiki, link-local addressing and other features will not be available with a venet adapter. A prime example: listening on all available IPv6 interfaces, such as [::]:80, will not work in most cases, as the software will often complain that the address is already bound. Likewise, any service that can listen on both IPv4 and IPv6 will need to be set to handle only IPv6 traffic on its IPv6 listener (since an IPv6 socket can otherwise handle both IPv6 and IPv4 traffic).

It helps to have Google's nameservers (8.8.8.8 and 8.8.4.4) in your /etc/resolv.conf if you do not already have an IPv6-friendly nameserver configured.

The following are tweaks you can use to make IPv6 work in various webservers.


Nginx

Before you can use IPv6 with Nginx, the webserver needs to be compiled with the --with-ipv6 option. You may already have this option enabled; you can check by calling nginx -V. To listen on IPv6 you can add the following line to your server block:

listen [2001:4858:aaaa:6::1]:80 ipv6only=on;

Remember that each server block will need its own unique IPv6 address and that you cannot bind to all interfaces with [::]. The ipv6only=on option needs to be enabled in order to play nicely with venet. If the physical node already has IPv4 binding turned off then ipv6only may not be necessary, but it's a good idea to use it anyway with the venet adapter. To ensure that a server block is listening on both IPv6 and IPv4 you may need both listen lines:

listen 80;
listen [2001:4858:aaaa:6::1]:80 ipv6only=on;

You can also choose to listen on a specific IPv4 address. The listen line for IPv4 is needed, since otherwise the server block will only be listening on IPv6.

More information on Nginx’s listen directive can be found in the Wiki.
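If you're not sure whether your build already includes the --with-ipv6 option mentioned above, a quick check (assuming nginx is in your PATH):

nginx -V 2>&1 | grep -o -- --with-ipv6
# no output means the binary was built without IPv6 support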


Lighttpd

I'm not entirely familiar with lighttpd myself, but it does share a directive similar to nginx's for forcing IPv6 listeners to handle only IPv6 traffic. IPv6 setup is outlined in the lighttpd wiki overview, which shows that V6_ONLY is already enabled in most common cases where you have to statically define the IP address.

Again you cannot bind to [::] when using venet on OpenVz.


Apache

Apache does not have a v6-only type of directive. Instead you have to recompile Apache with --disable-v4-mapped. Once Apache has been compiled you can use the [IPv6]:80 format; you'll need a Listen directive set to your new IPv6 address, as well as on any virtual hosts. As with both nginx and lighttpd above, you cannot bind to all interfaces ([::]), only to specifically assigned addresses.

Control panels, Firewalls, etc

Firewall software such as UFW (Uncomplicated Firewall, from Ubuntu) does not play nicely with IPv6 and OpenVz (over venet). That's something to keep in mind when setting up firewall software that relies on kernel modules which are often not available in a virtualized environment (Xen is typically better suited for that).

Not all control panels are IPv6 ready. DirectAdmin, for example, does have the option to enable IPv6, and will let you manage IPv6 addresses in the IP configuration and DNS area. However, it does not configure services like Apache, BIND9, Dovecot and so forth for you; those you'll have to configure manually.


For the purpose of hosting websites and logging into my server via SSH, the above guidelines work just fine. I currently host this site, a static CDN, and a few other websites behind IPv6 with Nginx on such a configuration.

If you do not have IPv6 connectivity from your home or office machine, you can use a free tunnel service such as Hurricane Electric's IPv6 Tunnel Broker service, which I'm using here at home. I currently use Hurricane Electric for my servers' backbone (native IPv6 for my servers), home IPv6 tunneling, and free DNS service, which seems to work much better than using my domain registrar's DNS.

Protecting Folders with Nginx

February 5th, 2011

This question comes up every so often, and it's actually fairly easy, aside from the fact that you do not use an .htaccess file. To do this we'll need the auth_basic module, which comes included with Nginx.

First we'll need to create a password file. You can create this in the folder you wish to protect (though the file can reside anywhere Nginx has access to).

Creating .htpasswd

If you previously had apache/httpd installed on your machine you may still have some of the utilities that came with it. In this case we’ll want to use the htpasswd command:

htpasswd -c /var/www/ username

When you run the above command, it will prompt you for a password for the provided username, and then create the file .htpasswd in the folder you specified. If you already have a pre-existing password file, you can omit the -c flag. You can use the -D flag to remove the specified user from a password file.
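For reference, a couple of common follow-up operations (the path and usernames below are just placeholders for whatever you used above):

# add another user to an existing password file (note: no -c, which would overwrite the file)
htpasswd /var/www/.htpasswd seconduser
# remove that user again
htpasswd -D /var/www/.htpasswd seconduser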

If you do not have htpasswd on your system, you can create the file yourself, with each line in the format of:

username:encrypted_password
To generate the encrypted password you can use Perl which is installed on most systems.

perl -le 'print crypt("your-password", "salt-hash")'

Or you can use Ruby via the interactive console command 'irb':

"your-password".crypt("salt-hash")
Setting up Nginx

Add protection for the /admin folder.

server {
    listen 80;
    location / { 
        # your normal content
    }
    location /admin {
        auth_basic "Administrator Login";
        auth_basic_user_file /var/www/.htpasswd;
    }
    #!!! IMPORTANT !!! We need to hide the password file from prying eyes
    # This will deny access to any hidden file (beginning with a period)
    location ~ /\. { deny  all; }
}

With the above, access to the admin directory will prompt the user with a basic authentication dialog, and the credentials will be checked against the password file provided.
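A quick way to confirm the protection works from the command line (the hostname and credentials here are placeholders):

# without credentials you should get a 401
curl -I http://yourdomain.com/admin/
# with valid credentials you should get the normal response
curl -I -u username:your-password http://yourdomain.com/admin/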

Additional Security Considerations

The password file itself does not have to be named .htpasswd, but if you do store it in a web-accessible location, make sure to protect it. With the last location block above in place, any file name beginning with a period will be protected from web access.

The safest place to store a password file is outside of the web-accessible location, even if you take measures to deny access. If you wish to do so, you can create a protected folder in your nginx/conf directory to store your password files (such as conf/) and load the user file from that location in your configuration.

Deploying circuits.web with Nginx/uwsgi

January 31st, 2011

I'm a very minimal person when it comes to frameworks; I don't generally like something that needs to generate an entire application file structure like you'd see with Django. When I was searching around for various frameworks to get me started with Python and web development, I investigated the usual: Django, CherryPy, web.py. I fell in love with circuits due to its ease and simplicity, yet it can be quite powerful. This article will show you how to get Nginx set up with uWSGI along with a sample circuits.web application.

Nginx, as of version 0.8.40, ships with the uwsgi module unless otherwise excluded (ie: --without-http_uwsgi_module); older legacy versions (0.7.63 and above) also supported the module, but it had to be explicitly compiled in. The syntax I use in this article conforms to 0.8.40 and newer and might not work with an older version of Nginx. (Currently I'm using this on Nginx 0.9.4, with Python 2.6.6.)

This article assumes you already have Nginx 0.8.40+ and Python installed.

Installing the uWSGI server

As stated on their wiki, "uWSGI is a fast (pure C), self-healing, developer/sysadmin-friendly application container server." It utilizes the uwsgi protocol (notice the all-lowercase spelling) and serves WSGI applications.

Once you've downloaded the source from the wiki, you can install it (a number of methods are described there). For my purposes I've moved the compiled uwsgi binary to my /usr/local/bin folder so that I can call it from anywhere on my system.

Other than on FreeBSD, I have not seen uWSGI readily available in most distributions' package systems.

You can test your installation by moving out of the source folder, and calling the binary:

# uwsgi --version

Simple Hello world app without daemonizing uWSGI

Now we're going to set up a very simple WSGI hello world application, host it behind uWSGI, and use Nginx to serve it. We're not going to daemonize uWSGI, so you'll see its output in your terminal as connections are made. Also, in Nginx we'll simply send everything to the backend application for this demonstration.

Configure Nginx

server {
	location / { 
		uwsgi_pass 127.0.0.1:3031;
		#You can also use sockets such as: uwsgi_pass unix:/tmp/uwsgi.sock;
		include uwsgi_params;
	}
}

If Nginx is currently running, either restart it, or you can reload the configuration with “nginx -s reload”.

Create a WSGI application
In your application folder create a Python file, for example myapp.py:

def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return ["Hello World!"]

Deploy with uWSGI
Now we’ll want to deploy a simple single-worker uWSGI instance to serve up your application from your application folder:

# uwsgi -s 127.0.0.1:3031 -w myapp
*** Starting uWSGI (64bit) on [Mon Jan 31 00:10:36 2011] ***
compiled with version: 4.4.5
Python version: 2.6.6 (r266:84292, Dec 26 2010, 22:48:11)  [GCC 4.4.5]
uWSGI running as root, you can use --uid/--gid/--chroot options
 *** WARNING: you are running uWSGI as root !!! (use the --uid flag) *** 
 *** WARNING: you are running uWSGI without its master process manager ***
your memory page size is 4096 bytes
allocated 640 bytes (0 KB) for 1 request's buffer.
binding on TCP port: 3031
your server socket listen backlog is limited to 100 connections
initializing hooks...done.
...getting the applications list from the 'myapp' module...
uwsgi.applications dictionary is not defined, trying with the "applications" one...
applications dictionary is not defined, trying with the "application" callable.
application 0 () ready
setting default application to 0
spawned uWSGI worker 1 (and the only) (pid: 20317)

If you attempt to connect to the site and all goes well, you'll see Hello World on the screen, as well as some log output in your terminal. To shut down the uWSGI server, use Ctrl+C in that terminal.
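If you'd rather test from another shell instead of a browser, a simple check (replace the hostname with whatever your server block answers to):

curl -i http://yourdomain.com/
# the response body should read: Hello World!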

Now for some fun with Circuits.web

Circuits is a Lightweight Event driven Framework for the Python Programming Language, with a strong Component Architecture. It’s also my favorite framework for deploying very simple web applications (but can be used for far more complicated needs, for example SahrisWiki).

For this article I’ll show you a few ways circuits.web can really simplify handling requests and URI parsing.

First thing we’ll need to do is get the circuits module installed into Python.

If you do not already have Python SetupTools, install them:

apt-get install python-setuptools

And then simply install via easy_install:

easy_install circuits
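A quick sanity check that the module is importable by the same Python that uWSGI will use:

python -c "import circuits"
# no output (and no ImportError) means the install worked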

You can get the egg (Python 2.6 only) or the source of Circuits 1.3.1 from the project's download page.

On to page 2…

Apache and Nginx Together

January 30th, 2011

I'm not going to get into a lot of detail about how to install and configure either HTTP server from scratch. This article is primarily food for thought for those who may want or need to run Nginx alongside an existing Apache (httpd) configuration. If you would rather ditch Apache altogether, consider reading Martin Fjordvald's Nginx Primer 2: From Apache to Nginx.

Side by Side

In some cases you may need to run both Apache (httpd) and Nginx on port 80. One such situation is a server running cPanel/WHM, which doesn't support Nginx, so you wouldn't want to mess with the Apache configuration at all.

To do this you have to make sure Apache and Nginx are bound to their own IP addresses. In the case of a WHM/cPanel based webserver, you can release an IP to be used for Nginx in WHM. At this time I am not aware of a method of reserving an IP and automatically forcing Apache to listen on a specific set of IPs in a control panel such as DirectAdmin or Plesk, but the link above shows how with WHM/cPanel.

Normally Apache will be listening on all interfaces, and as such you may see a line like this in your httpd.conf file:

Port 80
Listen 80

Instead you'll need to tell Apache to listen on a specific IP (you can have multiple Listen lines for multiple IPs):

Listen 1.2.3.4:80
When you change Apache to listen on specific interfaces, some VirtualHosts may fail to load if they are listening on specific IPs themselves. The address option in <VirtualHost addr[:port] [addr[:port]] …> only applies when the main server is listening on all interfaces (or an alias of such). If you do not have an IP listed in your <VirtualHost …> tag, then you do not need to worry about this. Otherwise make sure to remove the addr:port from the VirtualHost tag.

Make sure to update the NameVirtualHost directive in Apache for name-based virtual hosts.

Once you have apache configured to listen on a specific set of IPs you can do the same with nginx.

server {
	listen nginx-ip-here:80;
	# ... the rest of your configuration
}
Now that both servers are bound to specific IPs, both can be started on port 80. From there you would simply point the IP of the domain to the server you wish to use. In the case of WHM/cPanel you can either manually configure the domain's DNS entry in WHM to point at the Nginx IP, or you can use your own DNS (such as your registrar's) to point the domain to the specific IP. Using the latter may break your mail/ftp etc. configurations if the DNS entries are not duplicated correctly.

Though normally if you are setting up Nginx side-by-side with apache, you’ll have your domain configurations separate from the rest of the system. The above paragraph only applies if the domain you are using has already been added by cpanel/whm and you wish to have it served by nginx exclusively. [Scroll down for PHP considerations]

Apache behind Nginx

If you are using control panel based hosting such as cPanel/WHM, this method is not advised. Most of the server's configuration is handled automatically in those cases, and making manual changes will likely lead to problems (plus you won't get support from the control panel makers or your hosting provider in most cases).

Using Nginx as the primary frontend webserver can increase performance regardless of whether you choose to keep Apache running on the system. One of Nginx's greatest advantages is how well it serves static content: it does so much more efficiently than Apache, with very little cost in memory or processing. Placing Nginx in front takes that burden off Apache, leaving it to concentrate on dynamic requests or special scenarios.

This method is also popular for those who don’t want to use PHP via fastcgi, or install a separate php-fpm process.

First thing that needs to be done is to change the interface Apache listens on:

Listen 127.0.0.1:8080
Above we bind Apache to localhost on an alternate port (8080 in this example), since only Nginx on the same machine will be communicating with Apache.

Note: Since Nginx needs to access the same files that Apache serves, you need to make sure that Nginx is set up to run as the same user as Apache, or that Nginx's selected user:group has permission to read the web documents. Often the web root is owned by nobody or www-data, and Nginx can likewise be set up to run as one of those users.
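A quick way to check and, if needed, fix ownership of the shared web root (www-data is an assumption here; substitute whatever user your Nginx and Apache actually run as):

# see who currently owns the document root
ls -ld /usr/local/www/
# hand it to the web user/group if necessary
chown -R www-data:www-data /usr/local/www/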

Then in Nginx, listening on port 80 (either on all interfaces or on specific IPs, your choice), we need to configure a server block to serve the same content. Take for example this virtual host in an Apache configuration:

<VirtualHost 127.0.0.1:8080>
      ServerName mydomain.com
      DocumentRoot "/usr/local/www/"
      CustomLog /var/log/httpd/mydomain_access.log common
      ErrorLog /var/log/httpd/mydomain_error.log
</VirtualHost>

In Nginx the equivalent server configuration would be:

server {
      server_name mydomain.com;
      root /usr/local/www/;
      # by default logs are stored in nginx's log folder
      # it can be changed to a full path such as /var/log/...
      access_log logs/mydomain_access.log;
      error_log logs/mydomain_error.log;
}

The above would serve all static content from the same location, however PHP will simply come back as PHP source. For this we need to send any PHP requests back to Apache.

server {
      server_name mydomain.com;
      root /usr/local/www/;
      access_log logs/mydomain_access.log;
      error_log logs/mydomain_error.log;
      location / { 
            # this is the location block for the document root of this vhost
      }
      location ~ \.php {
            # this block will catch any requests with a .php extension
            # normally in this block data would be passed to a FastCGI process
            # these two lines tell Apache the actual IP of the client being forwarded
            # Apache requires mod_proxy for these to work
            # Most Apache 2.0+ servers have mod_proxy configured already
            proxy_set_header X-Real-IP  $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            # this next line adds the Host header so that apache knows which vHost to serve
            # the $host variable is automatically set to the hostname Nginx is responding to
            proxy_set_header Host $host;
            # And now we pass back to apache
            # if you're using a side-by-side configuration the IP can be changed to
            # apache's bound IP at port 80 such as 1.2.3.4:80
            proxy_pass http://127.0.0.1:8080;
      }
       # if you don't like seeing all the errors for missing favicon.ico in root
       location = /favicon.ico { access_log off; log_not_found off; }
       # if you don't like seeing errors for a missing robots.txt in root
       location = /robots.txt { access_log off; log_not_found off; }
       # this will prevent files like .htaccess .htpassword .secret etc from being served
       # You can remove the log directives if you wish to
       # log any attempts at a client trying to access a hidden file
       location ~ /\. { deny all; access_log off; log_not_found off; }
}

In the case of rewriting (mod_rewrite) with Apache behind Nginx, you can use an incredibly simple directive called try_files. Here's the same code as above, but with changes made so that any file or folder not found by Nginx gets passed to Apache for processing (useful for situations like WordPress, or .htaccess based rewrites, where the file or folder is not caught by Nginx).

server {
      server_name mydomain.com;
      root /usr/local/www/;
      access_log logs/mydomain_access.log;
      error_log logs/mydomain_error.log;
      location / { 
            # try_files attempts to serve a file or folder, until it reaches the fallback at the end
            try_files $uri $uri/ @backend;
      }
      location @backend {
            # essentially the same as passing php requests back to apache
            proxy_set_header X-Real-IP  $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Host $host;
            proxy_pass http://127.0.0.1:8080;
      }
      # ... The rest is the same as the configuration above ... 
}

With the above configuration, any file or folder that exists will be served by Nginx (the PHP block will catch files ending in .php); otherwise the request will be passed back to Apache on the backend.

In some cases you may wish for everything to be passed to Apache unless otherwise specified, in which case you would have a configuration such as this:

server {
      server_name mydomain.com;
      root /usr/local/www/;
      access_log logs/mydomain_access.log;
      error_log logs/mydomain_error.log;
      location / {
            proxy_set_header X-Real-IP  $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Host $host;
            proxy_pass http://127.0.0.1:8080;
      }
      # Example 1 - Specify a folder and its content
      # To serve everything under /css from Nginx-only
      location /css { }
      # Example 2 - Specify a RegEx pattern such as file extensions
      # Serve certain files directly from Nginx
      location ~* ^.+\.(jpg|jpeg|gif|png|css|zip|pdf|txt|js|flv|swf|html|htm)$ {
            # this will basically match any file of the above extensions
            # of course if the file does not exist you'll see an Nginx error page as opposed
            # to apache's own error page. 
      }
}

Personally I prefer to let Nginx handle everything and specify exactly what needs to be passed back to Apache. But the last configuration above would more accurately allow .htaccess to function during almost every request, since almost everything gets passed back to Apache. The css folder, or any files matched by the other location block, would be served directly by Nginx regardless of whether there's an .htaccess file there, since only Apache processes .ht* files.

PHP Considerations

If you are not familiar with Nginx, it should be noted that Nginx does not have a PHP module like Apache's mod_php; instead you either need to build PHP with FPM (i.e. php-fpm/FastCGI), or you need to pass the request to something that can handle PHP.

In the case of using Nginx with Apache you essentially have two choices:

1) Compile and install PHP-FPM, thus running a separate PHP process to be used by Nginx. This option is usually good if you want to keep the two webserver configurations separated from each other, so that any changes to one won't affect the other. But of course it adds additional memory usage to the system.

2) Utilize Apache’s mod_php. This option is ideal when you will be passing data to apache as a backend. Even in a side-by-side scenario, you can utilize mod_php by proxying any php request to Apache’s IP. But in the side-by-side scenario you have to make sure that Apache is also configured to serve the same virtualhost that Nginx is requesting.

Speed-wise both methods are about the same when PHP is configured with the same set of options on both. Which option you choose depends solely on your needs. Another article that may be of interest in relation to this one is Nginx as a Proxy to your Blog, which can just as easily be applied on a single server (especially if you have multiple IP ranges like I do).

Mimic Apache mod_geoip in Nginx

November 12th, 2010

Maxmind makes a variety of APIs and tools to use their geolocation database, and one such tool is the mod_geoip module for Apache. Using the GeoIP module at the Apache level means that PHP can access the visitor's country code simply by means of an environment variable such as this:

$_SERVER['GEOIP_COUNTRY_CODE']
To set up Nginx with this capability we'll need to recompile Nginx if you have not already used the --with-http_geoip_module compile option.

First we’ll need to install the GeoIP API system-wide:

CentOS (yum)

yum install GeoIP-devel

Debian/Ubuntu (aptitude)

apt-get install libgeoip-dev

Mac OS X 10.5+ (via HomeBrew)

brew install geoip
sudo brew link geoip

Once you have the GeoIP library installed you can proceed to recompile Nginx. This is rather simple if you installed Nginx from source (this assumes you already have the nginx source unpacked somewhere):

$ cd src/nginx-0.8.53
$ nginx -V 
nginx version: nginx/0.8.53
built by gcc 4.4.4 (Debian 4.4.4-6) 
TLS SNI support enabled
configure arguments: --prefix=/opt --with-pcre=/root/src/pcre-8.02 --with-md5=/usr/lib 
--with-sha1=/usr/lib --with-http_ssl_module --with-http_realip_module --with-http_gzip_static_module 
--with-openssl=/root/src/openssl-1.0.0a/ --without-mail_pop3_module
 --without-mail_imap_module --without-mail_smtp_module
$ ./configure --prefix=/opt --with-pcre=/root/src/pcre-8.02 --with-md5=/usr/lib --with-sha1=/usr/lib \
--with-http_ssl_module --with-http_realip_module --with-http_gzip_static_module \
--with-openssl=/root/src/openssl-1.0.0a/ --without-mail_pop3_module \
--without-mail_imap_module --without-mail_smtp_module --with-http_geoip_module
$ make && sudo make install
$ /etc/init.d/nginx restart

We’ll want to download the Maxmind Geolite Country Database some place Nginx can use it.

$ cd /opt/conf
$ wget
$ gunzip ./GeoIP.dat.gz
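If you have the geoiplookup utility from the GeoIP package installed, you can sanity-check the downloaded database before touching Nginx (the IP here is just Google's public resolver used as a test value):

geoiplookup -f /opt/conf/GeoIP.dat 8.8.8.8
# should print something along the lines of: GeoIP Country Edition: US, United States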

Then we need to tell Nginx where to find that file, so in your nginx.conf add this in your http { } block:

geoip_country  /opt/conf/GeoIP.dat;
#geoip_city     /opt/conf/GeoLiteCity.dat;
#Uncomment the above if you also wish to lookup cities

Once that is done you can restart Nginx, and you'll be able to use variables such as $geoip_country_code to obtain the visitor's country code. Full details of this module can be found at HttpGeoIPModule.

We're not done yet; now we need to make it so that PHP sees these results in the same fashion it would with the Apache module. In your fastcgi_params, or wherever you are passing fastcgi_param values to PHP, you'll want to add at least these two lines:

	fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code; 
	fastcgi_param GEOIP_COUNTRY_NAME $geoip_country_name;

Once that is done, restart Nginx. Now you'll be able to access the visitor's country code in PHP via $_SERVER['GEOIP_COUNTRY_CODE'].

You can instead use the module to redirect visitors straight from nginx in the following fashion:

    server {
	root /opt/html/;
        location / {
                try_files /index_$geoip_country_code.html /index.html;
        }
    }
If a visitor from Russia visits your site it will try to load index_RU.html, and if that is not found it will fall back to index.html; likewise, if you created an index_US.html, a visitor from the United States will see that page's content.

Nginx as a Proxy to your Blog

July 23rd, 2010

Some time ago, back when Google Biz Ops were really popular, I had quite a number of clients use this method of changing the IP address of their auto-blogs. I once had a server hosted by ServerPronto*. The server was physically located in Panama (despite ServerPronto being advertised as a US hosting provider), but the IPs were cheap.

The Idea

The client had a good number of auto-blogs, landing pages and so forth on the same hosting provider (in this case his own dedicated server with multiple non-class-C IPs). The problem was making one or more of those domains appear not to be on the same server, or in this case the same datacenter. The solution was to use various class-C and class-B IPs on my own server, and use Nginx to proxy any web request to his existing websites, thus making those sites appear to be on a server in Panama as opposed to their actual location. The same could work if you wanted to use US IPs on a cheap provider and proxy to an offshore provider with cheaper hardware.

The Solution

For this we’ll need a Dedicated server or VPS to host Nginx on. (I’ll have an article later on how to install Nginx from scratch). You’ll probably want a couple extra IPs on hand, in case you want to proxy multiple sites of the same niche.

For every hostname you wish to proxy you’ll need a server block similar to this in your nginx configuration.

server { 
	#Turns access log off
	access_log off;
	#Set this to the IP on the nginx server you want to represent this proxy
	listen nginx-ip-here:80;
	#Matches the requested hostname
	server_name yourdomain.com www.yourdomain.com;
	location / {
		# Tells the destination server the IP of the visitor (some take X-Real-IP, 
		# some take X-Forwarded-For as their header of choice
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Forwarded-For $remote_addr;
		# Tells the destination server which hostname you want
		proxy_set_header Host $host;
		# Performs a Proxy Pass to that server
		proxy_pass http://ip-address-of-your-server;
	}
}

From here you would point your domain name to the Nginx server. If your domain currently uses your existing host's nameservers, the easiest way to change this would be to use your registrar's DNS, such as Namecheap or GoDaddy (via total domain control), or a free DNS provider. In any of those you would simply change the A record of your domain name to the IP address of the Nginx server.
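Once DNS has propagated you can verify the record from any machine (yourdomain.com is a placeholder):

dig +short yourdomain.com A
# should return the IP you assigned on the Nginx proxy, not your real webserver's IP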

Once that is setup any request on port 80 (standard http port) will be proxied to the server actually hosting the website or blog.

The Problems

A couple of downsides may exist with this method.

Control panel access via your domain name
When you point your domain to the Nginx IP, you not only direct web traffic there, but all other forms of traffic such as SSH, cPanel/WHM, FTP, and so forth. While you can in theory forward those additional ports via iptables and other methods, it would be difficult and not without problems. To get around this, you'll want to access SSH, cPanel/DirectAdmin, FTP, and so forth via your actual server's IP address, or use a non-proxied domain. You can use any of your other domains on the same server, especially if using a control panel, since the login is what determines which account you access, not the domain itself.

Email access will of course cease if you don't set up Nginx as a mail proxy for that specific domain name. The workaround would be to add an A record for a mail subdomain that points at your actual server, and then change your MX records to point to it. Of course, the better alternative if you must have email at that domain would be to use Google Apps, so that a simple whois or DNS lookup won't reveal your actual server's IP address. The best alternative is to simply not use email at proxied domains.

Normally a visitor takes the route provided from their location directly to your server over whatever backbone connects them. But with a proxied domain the visitor is first sent to the nginx server, wherever it may be, and the response time is increased by how long it takes for the nginx server to communicate with your own webserver and back. If the latency between the Nginx server and your destination is short, the speed difference may not even be noticeable to visitors.

Security Concerns
Some of you are no doubt looking at this configuration and wondering if it could be changed so you don't have to make a server block for each hostname you have. While it is possible to make server_name and the proxy_pass destination dynamic based on the requested hostname, this is a very bad idea. Doing so will allow not only your own domains to be proxied, but any domain pointed at your nginx server. This can result in abuse or even bandwidth overload.

For example, don’t do this:

server { 
	access_log off;
	listen nginx-ip-here:80;
	location / {
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Forwarded-For $remote_addr;
		proxy_set_header Host $host;
		proxy_pass http://$host;
	}
}

While it may be easy for you, it opens your server up to a floodgate of abuse, with your server acting as a kind of public proxy point. Instead, to tighten security of the nginx box, we should deny (or redirect) any traffic to the nginx server that either does not supply a hostname, or supplies one that has not been defined. You can do so like this:

    server {
	#visits not supplying hostname
	server_name _;
	location / { return 444; }
    }

The return 444 causes nginx to close the connection without sending any response back to the browser.
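You can see the effect from the command line; curl reports the closed connection as an empty reply (the Host value here is just a made-up, undefined name):

curl -v -H "Host: not-a-configured-domain.example" http://nginx-ip-here/
# curl: (52) Empty reply from server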

* ServerPronto: While I did use them at one point in time for this proxy purpose, I would not recommend them as a hosting provider or dedicated server reseller.

SSL: Untrusted Connection in Firefox

June 8th, 2010

This is primarily for PositiveSSL Certificates

If you’ve ever received an “Untrusted Connection” from Firefox, specifically with the error “Error code: sec_error_unknown_issuer”, then one possible fix is to append your issuer’s own certificates to yours.

For example, if you have a domain.crt file, download the PositiveSSL CA bundle, append its contents to the end of your own certificate, and then restart the webserver.
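The append itself is a one-liner; the bundle filename below is an assumption, so use whatever PositiveSSL actually sent you:

# keep a backup of the original first
cp domain.crt domain.crt.bak
cat positivessl-ca-bundle.crt >> domain.crt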

For example if your certificate looks something like this:

-----BEGIN CERTIFICATE-----
(your domain's certificate data)
-----END CERTIFICATE-----
It would become something like this:

-----BEGIN CERTIFICATE-----
(your domain's certificate data)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(the PositiveSSL CA bundle certificates)
-----END CERTIFICATE-----
Upon restarting, stricter browsers should be able to look up and identify the issuer correctly.

Apache to Nginx Migration Tips

April 11th, 2010

For a simple example configuration to start with, please view my example configuration.

Nginx currently holds just shy of 6.5% of the known webserver market, roughly 13 million servers. This little lightweight webserver, created by a sole Russian developer, has been gaining a great deal of popularity over the last few years and is used by sites such as WordPress, Texts from Last Night and Hulu.

This guide will show you common examples in Nginx to make migration from Apache a bit easier.

HTTP Basic Authentication or htpasswd support

If you have been playing with nginx for the first time you may have noticed that there is no .htaccess support. However, Nginx can still utilize HTTP Basic Authentication with existing htpasswd files using the HTTP Auth Basic Module. The module is compiled in by default.

If you still have Apache installed (but perhaps turned off, or on a different port) you can still use htpasswd to generate a password file. But in the event that you don't have, or don't want to install, Apache you can use Perl or Ruby to generate at least the encrypted passwords. The .htpasswd file should be in the following format:

username:encrypted_password
To generate a password with Ruby in irb:

"your-password".crypt("salt-hash")
Likewise to generate with perl from the terminal:

perl -le 'print crypt("your-password", "salt-hash")'

The two commands above will utilize a 56-bit DES encryption which can be used in the htpasswd file.

Once you have your password file saved, store it outside of the web-accessible location like you would have when using Apache (likewise, you can ensure that web access to hidden files is denied; more on that later).

Let's say you have a subfolder on your server called 'admin' and you wish to protect it. Here is an example configuration:

location  /admin  {
  auth_basic            "Admin Access";
  auth_basic_user_file  conf/.htpasswd;
}

If the above did not work, check your error logs as sometimes it will notify you if the path to the password file could not be found.

Ignoring robots.txt, favicon.ico and hidden files

Sometimes you may wish to omit commonly accessed files from being recorded in the access log as well as block access to hidden files (filenames beginning with a period such as .htpasswd and .htaccess). Even though nginx doesn’t support htaccess, you may still wish to secure files left behind from the migration. You can easily do this by adding a couple location blocks to your domain’s server block.

location = /favicon.ico { access_log off; log_not_found off; }	
location = /robots.txt { access_log off; log_not_found off; }
location ~ /\. { deny  all; access_log off; log_not_found off; }

The above will not record access to the favicon.ico or robots.txt in the root of your site, as well as ignore the file-not-found error if you do not have either file present. Also the last line will deny access to any file beginning with a period anywhere within the root path. You can actually save the above into a file such as /conf/drop.conf, and include it at the bottom of each of your server blocks like so.

include drop.conf;

Extremely Easy WordPress Migration

Most people nowadays take advantage of WordPress' permalink feature, which normally requires mod_rewrite enabled via the .htaccess file. However, as of Nginx 0.7.26 and above, this can be taken care of very easily with a single directive.

location / { 
   try_files $uri $uri/ /index.php;
}

The above will attempt to try the requested URI first as a file, then as a folder, and if both of those come back negative it will fall back to /index.php. WordPress can simply parse the request URI sent by the webserver, so there is no need to manually add arguments onto index.php. If you need to do something further, you can change the fallback from /index.php to a named location such as @wordpress, like so:

location / {
   try_files $uri $uri/ @wordpress;
}
location @wordpress {
     # Additional rewrite rules or whatever else you need here.
}

For Nginx versions older than 0.7.26 you can use the older method shown below (however it's strongly advised to upgrade to 0.7.26 or newer).

location / { 
        if (-f $request_filename) {
            expires 30d;
        }
        # Ultimately 'if' should only be used in the context of rewrite/return:
        if (!-e $request_filename) {
            rewrite ^(.+)$ /index.php?q=$1 last;
        }
}

The two if blocks replace the common RewriteCond rules used by WordPress to test whether the requested filename is an existing file or folder.

On the next page: Utilizing WP Super Cache, rewrite examples and more.

My Nginx Configuration

March 14th, 2010

I'm creating this page by popular request, as I've had to paste my configuration for people a number of times, especially on IRC. Below is an example of how this site is set up, with some comments.

My primary nginx.conf file located in /conf

# Normally you don't want to run a webserver as root
# so you set www-data (debian/ubuntu) or nobody (centos/rhel)
# you'll want to make sure your web root is owned by www-data group
user www-data;
# 4 worker processes is usually sufficient for a webserver serving
# both static files and passing dynamic requests back to apache, fastcgi or an app server
worker_processes     4;
# normally you leave this at the default of 1024
events {
    worker_connections  1024;
}
http {
    # General Settings
    gzip  on;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay off;
    server_tokens off;
    include mime.types;
    keepalive_timeout 5;
    default_type  application/octet-stream;
    # If we set index here, we won't have to anywhere else
    index index.php index.html index.htm;
    # I prefer nginx to show the errors rather than "No Input Files Specified"
    # If you're using wordpress you want to turn this off so WordPress
    # shows the error. You can turn it off at the server or location level.
    # ONLY works if the server block has error pages defined for 4xx/5xx
    fastcgi_intercept_errors on;
    # We don't want someone to visit a default site via IP
    # So we catch all non-defined Hosts or blank hosts here
    # the default listen will cause this server block to be used
    # when no matching hostname can be found in other server blocks
    server {
	# use default instead for nginx 0.7.x, default_server for 0.8.x+
	listen 80 default_server;
	# if no listen is specified, all IPv4 interfaces on port 80 are listened to
	# to listen on IPv6 as well, a separate listen [::]:80 line must also be specified
	server_name _;
	return 444; 
    }
    include sites-enabled/*;
}

A site configuration located inside the /conf/sites-enabled folder

# Wordpress Example
server {
	# The usual names, starting with the base, then www., subdomains or *. wild cards.
	server_name kbeezie.com www.kbeezie.com;
	# Keep a root path in the server level, this will help automatically fill
	# Information for stuff like FastCGI Parameters
	root html/;
	# You can set access and error logs at http, server and location level
	# Likewise means you turn them off at specific locations
	access_log logs/kbeezie.access.log;
	error_log logs/kbeezie.error.log;
	# For my wordpress configuration, I prefer try_files
	# It will try for static file, folder, then falls back to index.php
	# The wordpress index.php is capable of parsing the URI automatically
	location / { try_files $uri $uri/ /index.php; }
	# Where I turned off intercept errors for WordPress
	fastcgi_intercept_errors off;
	# Includes my PHP location block and parameters
	include php;
	# My all in one settings to hide stuff like .invisible files
	# or turn off access/error logs to favicon/robots.txt
	include drop;
}

# Proxy_Pass example (backend server, or in my case a Python App)
# For Python WSGI or Ruby/Rails you can check out 
server {
	# You can choose to remove this if you wish to
	# see requested URIs
	access_log off;
	# If your application returns any errors they can be logged by nginx
	# However if the application fails, or is not started, you'll see
	# a 502 Bad Gateway error
	error_log logs/python.error.log;
	# I usually run my apps from base domains or subdomains rather than
	# folders, though it is possible. 
	# a root definition where you can store static files
	# if not served by the application
	root html/python-static/;
	# Since we have a static root defined, we can check
	# for static files there, otherwise goes to the backend
	location / { try_files $uri $uri/ @backend; }
	# The backend for either backend servers or apps
	location @backend {
		# Lets the app/backend know the visitor's IP
		# otherwise it only sees 127.0.0.1
		proxy_set_header X-Real-IP  $remote_addr;
		proxy_set_header X-Forwarded-For $remote_addr;
		# Some app servers need to be made aware of the hostname
		proxy_set_header Host $host;
		# example on how to connect to a unix socket
		proxy_pass	http://unix:/opt/apps/ipn/ipn.sock:/;
		# Example via TCP location of the backend server
		# proxy_pass http://127.0.0.1:port-here;
	}
	# you could copy drop into drop_deny to outright deny favicon and robots.txt for apps
	include drop;
}

A single file called php located in the /conf folder. Using this method makes it easy to enable PHP on a per-server basis.

location ~ \.php {
	fastcgi_param  QUERY_STRING       $query_string;
	fastcgi_param  REQUEST_METHOD     $request_method;
	fastcgi_param  CONTENT_TYPE       $content_type;
	fastcgi_param  CONTENT_LENGTH     $content_length;
	fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
	fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;
	fastcgi_param  REQUEST_URI        $request_uri;
	fastcgi_param  DOCUMENT_URI       $document_uri;
	fastcgi_param  DOCUMENT_ROOT      $document_root;
	fastcgi_param  SERVER_PROTOCOL    $server_protocol;
	fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
	fastcgi_param  SERVER_SOFTWARE    nginx;
	fastcgi_param  REMOTE_ADDR        $remote_addr;
	fastcgi_param  REMOTE_PORT        $remote_port;
	fastcgi_param  SERVER_ADDR        $server_addr;
	fastcgi_param  SERVER_PORT        $server_port;
	fastcgi_param  SERVER_NAME        $server_name;
	# I use a socket for php, tends to be faster
	# for TCP just use: fastcgi_pass 127.0.0.1:9000;
	fastcgi_pass   unix:/opt/php-fpm.sock;
	# Not normally needed for wordpress since you are
	# sending everything to index.php in try_files
	# this tells it to use index.php when the url
	# ends in a trailing slash
	fastcgi_index  index.php;
}

A file called drop in the /conf folder, including this into your server configuration will drop/block
common files you do not wish to be exposed to the web.

# Most sites won't have configured favicon or robots.txt
# and since it's always grabbed, turn it off in the access log
# and turn off its not-found error in the error log
location = /favicon.ico { access_log off; log_not_found off; }	
location = /robots.txt { access_log off; log_not_found off; }
# Rather than just denying .ht* in the config, why not deny
# access to all .invisible files 
location ~ /\. { deny  all; access_log off; log_not_found off; }

Further comments will be appended here.

Configuring SNI with NginX

December 28th, 2009

Traditionally, for every SSL certificate issued you needed a separate and unique IP address. However, if you compile OpenSSL and Nginx with TLS SNI (Server Name Indication) support, you can install multiple SSL certificates without having to bind a domain name to a specific IP address or require each certificate to have its own unique IP.

First thing we need to do is actually compile OpenSSL with TLS SNI support. We’ll start by downloading the latest source and unpacking it into a temporary directory. For this article I keep all sources in ~/src.

$ cd ~/src
$ wget
$ tar zxvf ./openssl-0.9.8l.tar.gz
$ cd ./openssl-0.9.8l

We’ll then configure with TLS support:

$ ./config enable-tlsext
$ make
$ make install
$ cd ..

Assuming the Nginx source also resides in ~/src, you can add the following option to your configure statement when compiling Nginx from source (you can also use an absolute path such as /root/src/openssl-0.9.8l/):

--with-openssl=~/src/openssl-0.9.8l/
Once compiled and installed you can check to see if TLS SNI is enabled:

[root@host src]# nginx -V
nginx version: nginx/0.8.31
built by gcc 4.1.2 20080704 (Red Hat 4.1.2-46)
TLS SNI support enabled
configure arguments: --prefix=/usr/local/nginx 
--pid-path=/usr/local/nginx/logs/ --sbin-path=/usr/local/sbin/nginx 
--with-md5=/usr/lib --with-sha1=/usr/lib --with-http_ssl_module 
--with-http_realip_module --with-http_gzip_static_module 

From there you will no longer be required to have an SSL-enabled server block on its own unique IP. You won't even have to have the block listening on a specific IP, since the certificate will now be selected based on the server name the client requests.
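To confirm SNI is actually being honored, you can ask for each hostname explicitly and see which certificate comes back (hostname and IP are placeholders; the -servername option is what triggers SNI):

openssl s_client -connect your-server-ip:443 -servername yourdomain.com </dev/null 2>/dev/null | openssl x509 -noout -subject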