I am building two app boxes per site. They will host mail, web and DNS for all the applications I’m hosting. If I were building a larger implementation I’d separate those roles onto their own servers, but the scale doesn’t justify it just yet.
This week we’re going to take one of the app servers we built previously and install the web server components. I am compiling NGINX from source because I want to include the Google PageSpeed module (ngx_pagespeed), which rewrites and optimises pages on the fly to make them load faster.
Let’s get into it! This tutorial assumes you’ve got working BTSync and MySQL installs and that the data for all your sites is replicated between your app servers.
I’ve written a script that installs most of the major stuff. The script is pretty well commented and explains what it’s doing throughout. To use it, put it into a file called install.sh and run “sh install.sh” at the command line.
Web Server Install Script
#!/bin/sh

# Update apt
apt-get update

# Install VMware Tools
apt-get -y install open-vm-tools

# Install NTP
apt-get -y install ntp

# OS upgrades
# Upgrade ulimits
echo "* soft nofile 9000" >> /etc/security/limits.conf
echo "* hard nofile 65000" >> /etc/security/limits.conf
echo "session required pam_limits.so" >> /etc/pam.d/common-session

# Make /tmp nice & fast
echo "tmpfs /tmp tmpfs defaults,noexec,nosuid 0 0" >> /etc/fstab
mount -a

# Install packages required for NGINX install
apt-get -y install build-essential zlib1g-dev libpcre3 libpcre3-dev unzip libssl-dev libxslt1-dev libgd2-xpm-dev libgeoip-dev libperl-dev postfix

# Prepare ngx_pagespeed
cd /usr/src/
wget https://github.com/pagespeed/ngx_pagespeed/archive/master.zip
unzip master.zip
cd ngx_pagespeed-master
wget https://dl.google.com/dl/page-speed/psol/1.5.27.3.tar.gz
tar -xzvf 1.5.27.3.tar.gz

# Prepare for NGINX install
cd /usr/src/
wget http://nginx.org/download/nginx-1.4.1.tar.gz
tar -zxvf nginx-1.4.1.tar.gz
cd nginx-1.4.1/

# Configure NGINX for local environment
./configure --add-module=/usr/src/ngx_pagespeed-master \
  --sbin-path=/usr/sbin \
  --conf-path=/etc/nginx/nginx.conf \
  --error-log-path=/var/log/nginx/error.log \
  --pid-path=/var/run/nginx.pid \
  --lock-path=/var/lock/nginx.lock \
  --http-log-path=/var/log/nginx/access.log \
  --http-client-body-temp-path=/var/lib/nginx/body \
  --http-proxy-temp-path=/var/lib/nginx/proxy \
  --http-fastcgi-temp-path=/var/lib/nginx/fastcgi \
  --with-debug \
  --with-http_addition_module \
  --with-http_dav_module \
  --with-http_flv_module \
  --with-http_geoip_module \
  --with-http_gzip_static_module \
  --with-http_image_filter_module \
  --with-http_mp4_module \
  --with-http_perl_module \
  --with-http_random_index_module \
  --with-http_realip_module \
  --with-http_secure_link_module \
  --with-http_stub_status_module \
  --with-http_ssl_module \
  --with-http_sub_module \
  --with-http_xslt_module \
  --with-ipv6

# Compile NGINX
make

# Install NGINX
make install
mkdir /var/lib/nginx/
mkdir /var/lib/nginx/body

# Make tmpfs mount for pagespeed data
echo "tmpfs /var/cache/pagespeed tmpfs size=256m,mode=0775,uid=www-data,gid=www-data 0 0" >> /etc/fstab
mkdir /var/cache/pagespeed
chown www-data:www-data /var/cache/pagespeed
mount /var/cache/pagespeed

# Create init script
cd /etc/init.d/
wget http://www.hooton.org/downloads/scripts/nginx.txt -O nginx
chmod +x nginx
update-rc.d nginx defaults

# Install php-fpm
apt-get install -y php5 php5-xmlrpc php5-mysql php5-mcrypt php5-intl php5-gd php5-dev php5-curl php5-common php5-cli php5-cgi php-pear php-apc php5-fpm php5-imap php5-memcache php5-memcached libssh2-php php5-tidy

# Install varnish cache & memcached
apt-get -y install varnish memcached

# Make varnish faster
echo "tmpfs /var/lib/varnish tmpfs size=256m,mode=0775,uid=root,gid=root 0 0" >> /etc/fstab

# Copy default configs to BTSync share (skip if this is not the first server)
cd /data/config/
wget http://www.hooton.org/downloads/configs/appserverconfig.tgz
tar -zxvf appserverconfig.tgz

# Create symlinks for configs
cd /etc
mv nginx nginx_old
mv php5 php5_old
mv varnish varnish_old
ln -s /data/config/etc/nginx/
ln -s /data/config/etc/varnish/
ln -s /data/config/etc/php5/
cd /etc/default
rm -rf varnish
rm -rf memcached
ln -s /data/config/etc/default/varnish
ln -s /data/config/etc/default/memcached
cd /etc/
rm -rf memcached.conf
ln -s /data/config/etc/memcached.conf
Once you’ve finished that script, reboot the server and it should come up running a default install of everything.
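If you want to double check that the PageSpeed module actually made it into the build, nginx prints its configure arguments with the -V flag (it writes them to stderr), so something like this should show the module:

# Should list --add-module=/usr/src/ngx_pagespeed-master among the configure arguments
nginx -V 2>&1 | grep ngx_pagespeed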
MySQL – HAProxy
Because we’re running a MySQL cluster for databases and I want as few moving parts as possible, I’ve chosen to run HAProxy locally on each web node. If one of the database servers goes away, HAProxy marks it as failed and diverts traffic to the other database servers until it returns.
I used the following two articles to get to the solution below:
- http://alinux.web.id/2011/08/17/load-balancing-mysql-replication-master-to-master-with-haproxy.html
- http://www.alexwilliams.ca/blog/2009/08/10/using-haproxy-for-mysql-failover-and-redundancy/
Install MySQL checks on each MySQL server
apt-get -y install xinetd
Add the following into /etc/xinetd.d/mysqlchk
# /etc/xinetd.d/mysqlchk
# default: on
# description: mysqlchk
service mysqlchk
{
flags = REUSE
socket_type = stream
port = 9200
wait = no
user = nobody
server = /opt/mysqlchk
log_on_failure += USERID
disable = no
only_from = 0.0.0.0/0 # recommended to put the IPs that need
# to connect exclusively (security purposes)
per_source = UNLIMITED # Recently added (May 20, 2010)
# Prevents the system from complaining
# about having too many connections open from
# the same IP. More info:
# http://www.linuxfocus.org/English/November2000/article175.shtml
}
Create a MySQL user called “mysqlchk” on each database server for the check script to log in with.
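Something along these lines should do the trick – the password here is just a placeholder, use your own:

# Create a low-privileged user for the health check (placeholder password)
mysql -u root -p -e "GRANT SELECT ON mysql.* TO 'mysqlchk'@'localhost' IDENTIFIED BY 'xxxxx'; FLUSH PRIVILEGES;"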
Create /opt/mysqlchk and put your new MySQL user’s details into this script:
#!/bin/bash
#
# /opt/mysqlchk
#
# This script checks if a mysql server is healthy running on localhost. It will
# return:
#
# "HTTP/1.x 200 OK\r" (if mysql is running smoothly)
#
# - OR -
#
# "HTTP/1.x 503 Service Unavailable\r" (else)
#
# The purpose of this script is to make haproxy capable of monitoring mysql properly
#
# Author: Unai Rodriguez
#
# It is recommended that a low-privileged mysql user is created to be used by
# this script. Something like this:
#
# mysql> GRANT SELECT on mysql.* TO 'mysqlchkusr'@'localhost' \
#     -> IDENTIFIED BY '257retfg2uysg218' WITH GRANT OPTION;
# mysql> flush privileges;

MYSQL_HOST="localhost"
MYSQL_PORT="3306"
MYSQL_USERNAME="mysqlchk"
MYSQL_PASSWORD="xxxxx"
TMP_FILE="/tmp/mysqlchk.out"
ERR_FILE="/tmp/mysqlchk.err"

#
# We perform a simple query that should return a few results :-p
#
mysql --host=$MYSQL_HOST --port=$MYSQL_PORT --user=$MYSQL_USERNAME \
    --password=$MYSQL_PASSWORD -e "show databases;" > $TMP_FILE 2> $ERR_FILE

#
# Check the output. If it is not empty then everything is fine and we return
# something. Else, we just do not return anything.
#
if [ "$(/bin/cat $TMP_FILE)" != "" ]
then
    # mysql is fine, return http 200
    /bin/echo -e "HTTP/1.1 200 OK\r\n"
    /bin/echo -e "Content-Type: text/plain\r\n"
    /bin/echo -e "\r\n"
    /bin/echo -e "MySQL is running.\r\n"
    /bin/echo -e "\r\n"
else
    # mysql is down, return http 503
    /bin/echo -e "HTTP/1.1 503 Service Unavailable\r\n"
    /bin/echo -e "Content-Type: text/plain\r\n"
    /bin/echo -e "\r\n"
    /bin/echo -e "MySQL is *down*.\r\n"
    /bin/echo -e "\r\n"
fi
At the command prompt:
echo "mysqlchk 9200/tcp # mysqlchk" >> /etc/services chmod +x /opt/mysqlchk /etc/init.d/xinetd restart
Configure HAProxy on the web server
- apt-get -y install haproxy
- nano /etc/default/haproxy and change ENABLED=0 to ENABLED=1
- nano /etc/haproxy/haproxy.cfg and replace its contents with the following:
# this config needs haproxy-1.1.28 or haproxy-1.2.1
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    user haproxy
    group haproxy
    daemon
    #debug
    #quiet
defaults
    log global
    mode tcp
    option tcplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    contimeout 4000
    clitimeout 50000
    srvtimeout 30000
    stats enable
    stats scope .
frontend mysql_cluster_read
    bind 0.0.0.0:3306
    default_backend mysql_cluster_read
backend mysql_cluster_read
    mode tcp
    balance roundrobin
    stats enable
    option tcpka
    option httpchk
    server mysql01.pen01 10.0.0.30:3306 weight 1 check port 9200 inter 5s rise 2 fall 2
    server mysql02.pen01 10.0.0.31:3306 weight 1 check port 9200 inter 5s rise 2 fall 2 backup
    server mysql01.syd01 10.1.0.30:3306 weight 2 check port 9200 inter 5s rise 2 fall 2 backup
listen stats 10.0.0.20:31337
    mode http
    option httpclose
    balance roundrobin
    stats uri /
    stats realm Haproxy\ Statistics
    stats auth haproxy:xxxx
- Watch the stats page at http://10.0.0.20:31337
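To confirm the whole chain works, you can connect to MySQL through the local HAProxy and ask which backend answered. This is just a rough sketch – the username is a placeholder for an account that already exists on your database servers and is allowed to connect from the web server’s IP:

# Connects to HAProxy on 127.0.0.1:3306, which forwards to a live MySQL backend
mysql -h 127.0.0.1 -P 3306 -u someuser -p -e "SELECT @@hostname;"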
Kernel Tuning
These are a few kernel adjustments I’ve found useful on web servers; your mileage may vary. Add them to /etc/sysctl.conf:
net.ipv4.ip_local_port_range = 2000 65000
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_max_syn_backlog = 3240000
net.core.somaxconn = 3240000
net.ipv4.tcp_max_tw_buckets = 1440000
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_congestion_control = cubic
# To apply the changes
sysctl -p /etc/sysctl.conf
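You can spot check that a setting took effect by querying it back, for example:

# Should print the value set above
sysctl net.core.somaxconn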
That’s it!
You’ve now got a working NGINX, PHP-FPM and Google PageSpeed web server. You’ll probably want to create a couple of virtual hosts; a template for a virtual host with PageSpeed enabled can be found here. To create a new virtual host, modify the template file for your site, add it to /data/config/etc/nginx/sites-available/, symlink to it from /data/config/etc/nginx/sites-enabled/ and restart NGINX.
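As a rough sketch, assuming you’ve saved a copy of the template as example.com (a hypothetical site name) in sites-available, that looks something like this:

# Edit the pagespeed-enabled template for your site
nano /data/config/etc/nginx/sites-available/example.com

# Enable the site and restart nginx
ln -s /data/config/etc/nginx/sites-available/example.com /data/config/etc/nginx/sites-enabled/example.com
/etc/init.d/nginx restart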
Benchmarking
What good is a new car unless you know how fast it goes? So let’s see how quickly we can make this thing run compared to a standard install of NGINX & PHP-FPM.
Setup
I’ve got two web servers with a pair of database servers behind them. The web servers sit behind a pair of load balancers that face the outside world. The server I’m testing from is in the same datacentre and has a 100Mbit port to the internet, the same as the load balancers.
Testing Methodology
I am using “ab”, the Apache benchmarking tool, to run these tests. The command line I’m using is “ab -n 1000 -c 10 http://www.domain.com/”.
Once ab has finished its work it spits out a set of results like this:
Server Hostname:        www.domain.com
Server Port:            80

Document Path:          /
Document Length:        90276 bytes

Concurrency Level:      10
Time taken for tests:   8.483 seconds
Complete requests:      1000
Failed requests:        0
   (Connect: 0, Receive: 0, Length: 0, Exceptions: 0)
Write errors:           0
Total transferred:      71812359 bytes
HTML transferred:       71413359 bytes
Requests per second:    117.88 [#/sec] (mean)
Time per request:       84.830 [ms] (mean)
Time per request:       8.483 [ms] (mean, across all concurrent requests)
Transfer rate:          8267.08 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        5    7   1.5      7      22
Processing:    38   77  23.0     72     343
Waiting:        8   11   9.8     10     228
Total:         47   84  22.9     79     349

Percentage of the requests served within a certain time (ms)
  50%     79
  66%     90
  75%    101
  80%    103
  90%    109
  95%    115
  98%    126
  99%    135
 100%    349 (longest request)
While this is useful data, I’m more of a visual kinda guy, so I have stuck the results into gnuplot charts below. If you want to know how to do this, check out this article.
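If you don’t want to dig through that article, here’s a minimal sketch of the idea, assuming the ab run is repeated with “-g results.tsv” so it writes a gnuplot-friendly data file:

# Plot the total time of each request from ab's gnuplot output (column 5 is ttime)
gnuplot <<'EOF'
set terminal png size 800,600
set output "pageload.png"
set xlabel "request"
set ylabel "page load time (ms)"
# skip the header line in the data file
plot "results.tsv" every ::1 using 5 with points title "page load time"
EOF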
Benchmarking Results
hooton.org Page Load – Pre-Optimised
As you can see from this plot, initial page loads take almost 2000ms, then they speed up to anywhere between 500ms and 1000ms. This is ok, but we can do better than that!!
The site is running WordPress so some PHP enhancements might help it. This test is after I enabled PHP-APC.
Page load times drop quite a bit immediately and appear to be much less scattered. APC has taken about 300ms off the average page load time. Nice.
What happens when we add varnish into the mix to cache static content?
Ok, something happened but I don’t really know how to interpret it! I think this means things got faster, but why let a set of weird results get in the way of a good story. Let’s just say that adding Varnish to the mix makes three small red dots on your gnuplot graphs for the moment.
Let’s press on and add PageSpeed into the mix.
If Varnish adds three red dots, Varnish & PageSpeed make four red lines. I’m still a bit lost, but hey, we made some graphs and now we can look at them; that in itself is a great success.
So what happens if we remove Varnish from the mix? It certainly seems to have made this story lose the plot a little… (see what I did there?)
Now this looks like a graph I can make up a more believable story about! Removing Varnish shows a nice low page load time that averages around 80ms. This is pretty good, but adding Varnish back in definitely speeds things up, so I’m guessing the insanity in the graphing just happens because Varnish is insanely fast. That’s my scientific opinion and I’m sticking to it for the moment.
So there you go, web servers built. You’ve got some pretty cool new features built into this server that should make even the most dreadful coder’s application run at least a little faster than dreadful.
Next week… an FTP server. Or something.
Like what I’ve written? Hate it? Think I’m going about things the wrong way? Tell me in the comments!