Posts Tagged ‘performance’

Search Page Getting Hammered?

March 27th, 2011

On my WordPress Caching write-up, someone asked a very good question in the comments: what good is the caching if your site gets brought down by excessive search queries?

Great article.

However I have following problem.

My WP setup is like this.

Currently I am on shared hosting with WP + W3 Total Cache and during peak hours, my site is very slow. That is mainly because I have huge traffic from Google.

My website caches plenty of keywords with AskApache Google 404 and Redirection.

What happens is that traffic from Google goes to /search/what-ever-keywords, dynamically created every time. And that is killing my system.
The problem is I have no idea how to help my poor server and cache that kind of traffic.

Would you have any advice for that?

Fortunately the Nginx web server has a way to soften the impact with the limit_req module (HttpLimitReqModule). For WordPress, this is how you would implement it:

http {
	limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
	#... the rest of your content
	server {
		#... your server content
		location /search {
			limit_req zone=one burst=3 nodelay;
			try_files $uri /index.php;
		}
	}
}

What we have here is almost identical to the example provided in the Nginx wiki (HttpLimitReqModule). Essentially we’ve created a zone that allows a user no more than 1 request per second, tracking users via $binary_remote_addr (which is smaller than $remote_addr while still accomplishing the same goal), within a shared memory space of 10MB.

Then we’ve placed a limit_req directive into the /search location. (Remember to include the try_files fallback so that index.php receives the search request in WordPress.) This directive uses the zone we created earlier, allowing a burst of up to 3 requests from a single user. If the user exceeds the limit by more than the burst allows, they’re presented with a 503 error, which Google and others treat as a sort of ‘back off’ response.
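If you’d rather not serve the bare default 503 page to throttled visitors, you can point Nginx at a static error page of your own. A minimal sketch (the /errors/503.html path is my own example; you would create that file yourself):

	error_page 503 /errors/503.html;
	location = /errors/503.html {
		internal;   # only reachable via error_page, not by direct request
	}

The internal directive keeps the error page from being requested (and rate-limited) directly.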

By default the burst value is zero.
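Under the hood, limit_req does leaky-bucket style accounting: each request adds one to a per-client counter, the counter drains at the configured rate, and anything that climbs past the burst allowance is rejected. Here is a minimal Python sketch of that bookkeeping (the LeakyBucket class and its exact draining math are my own illustration, not Nginx source code):

```python
class LeakyBucket:
    """Toy model of limit_req's per-client accounting.

    rate mirrors rate=Nr/s, burst mirrors the burst= parameter.
    Nginx keeps this state per $binary_remote_addr in the shared zone.
    """

    def __init__(self, rate, burst):
        self.rate = float(rate)   # requests per second allowed to drain
        self.burst = burst        # extra requests tolerated above the rate
        self.excess = 0.0         # how far ahead of the allowed rate we are
        self.last = None          # timestamp of the previous request

    def allow(self, ts):
        """Return True if a request arriving at time ts is served."""
        if self.last is not None:
            # Drain the bucket for the time elapsed since the last request.
            self.excess = max(0.0, self.excess - (ts - self.last) * self.rate)
        self.last = ts
        if self.excess > self.burst:
            return False          # nginx answers these with a 503
        self.excess += 1.0
        return True
```

With rate=1 and burst=3, a handful of back-to-back requests are absorbed, but a sustained flood from one address gets rejected until the bucket drains.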

We use try_files here because a rewrite would occur before the limiting has a chance to act, since the rewrite phase takes precedence over most other processing.
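To make the ordering concrete, here is a hypothetical broken variant next to the working one. A rewrite with the last flag restarts location matching during the rewrite phase, so limit_req (which runs in the later preaccess phase) never gets to act; try_files is evaluated after the limiting has already run:

	# Broken: the rewrite jumps to the PHP location before limit_req runs
	location /search {
		limit_req zone=one burst=3 nodelay;
		rewrite ^ /index.php last;
	}

	# Working: try_files is checked after the request-limiting phase
	location /search {
		limit_req zone=one burst=3 nodelay;
		try_files $uri /index.php;
	}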

Here is an example WordPress configuration utilizing the above, set up for W3TC with memcache (or opcode caching). Please see the article [ The Importance of Caching WordPress] for details on caching WordPress in other ways, and why you should be using caching with WordPress.

This config is loosely based on the configuration I currently use on a FreeBSD server.

user kb244 www;
worker_processes  2;

events {
	worker_connections	2048;
}

http {
	sendfile on;
	tcp_nopush on; # Generally on for Linux, off for FreeBSD/Unix
	tcp_nodelay on;
	server_tokens off;
	include mime.types;
	default_type  application/octet-stream;
	index index.php index.htm index.html redirect.php;

	gzip  on;
	gzip_vary on;
	gzip_proxied any;
	gzip_comp_level 6;
	gzip_buffers 16 8k;
	gzip_http_version 1.1;
	gzip_disable "MSIE [1-6]\.(?!.*SV1)";
	gzip_types text/plain text/css application/json application/x-javascript text/xml
	           application/xml application/xml+rss text/javascript;

	fastcgi_intercept_errors on;
	fastcgi_ignore_client_abort on;
	fastcgi_buffers 8 16k;
	fastcgi_buffer_size 32k;
	fastcgi_read_timeout 120;
	fastcgi_index  index.php;

	limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

	server {
		listen 80;
		# Replace with your domain(s)
		server_name example.com www.example.com;

		# Setting your root at the server level helps a lot,
		# especially if you don't like having to customize your
		# PHP configuration every single time you create a vhost
		root /usr/local/www/;

		# Always a good idea to log your vhosts separately
		access_log /var/log/nginx/kbeezie.access.log;
		error_log /var/log/nginx/kbeezie.error.log;

		# We want WordPress to handle the error pages
		fastcgi_intercept_errors off;

		# I'm using W3TC with memcache, otherwise this block
		# would look a lot more complicated for handling file
		# based caching.
		location / {
			try_files $uri $uri/ /index.php;
		}

		# And here we have the search block limiting the requests
		location /search {
			limit_req zone=one burst=3 nodelay;
			try_files $uri /index.php;
		}

		# Handle static file requests here, setting appropriate headers
		location ~* \.(ico|css|js|gif|jpe?g|png)$ {
			expires max;
			add_header Pragma public;
			add_header Cache-Control "public, must-revalidate, proxy-revalidate";
		}

		# I prefer to save this location block in its own php.conf
		# in the same folder as nginx.conf so I can just use:
		# include php.conf;
		# at the bottom of each server block I want PHP enabled on
		location ~ \.php$ {
			fastcgi_param  QUERY_STRING       $query_string;
			fastcgi_param  REQUEST_METHOD     $request_method;
			fastcgi_param  CONTENT_TYPE       $content_type;
			fastcgi_param  CONTENT_LENGTH     $content_length;
			fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
			fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;
			fastcgi_param  REQUEST_URI        $request_uri;
			fastcgi_param  DOCUMENT_URI       $document_uri;
			fastcgi_param  DOCUMENT_ROOT      $document_root;
			fastcgi_param  SERVER_PROTOCOL    $server_protocol;
			fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
			fastcgi_param  SERVER_SOFTWARE    nginx;
			fastcgi_param  REMOTE_ADDR        $remote_addr;
			fastcgi_param  REMOTE_PORT        $remote_port;
			fastcgi_param  SERVER_ADDR        $server_addr;
			fastcgi_param  SERVER_PORT        $server_port;
			fastcgi_param  SERVER_NAME        $server_name;
			# I use a Unix socket for PHP-FPM; yours instead may be:
			# fastcgi_pass 127.0.0.1:9000;
			fastcgi_pass unix:/tmp/php5-fpm.sock;
		}

		# Browsers always look for a favicon; I'd rather not flood my logs
		location = /favicon.ico { access_log off; log_not_found off; }

		# Make sure you hide your .hidden linux/unix files
		location ~ /\. { deny all; access_log off; log_not_found off; }
	}
}

As always, a little research and tweaking to your own requirements is advised.