Friday, April 11, 2014

Rebuild your Lighttpd / PHP / MySQL box in AWS

Update: see here for LAMP on Amazon Linux 2

After observing a weird attack, I've decided to rebuild my Lighttpd / PHP / MySQL box whose original setup is documented here.

New Instance

Login to the AWS Console and go to the EC2 Dashboard. Your existing security group should look something like this, which lets only your IP access your custom SSH port.

 HTTP             TCP    80
 HTTPS            TCP   443
 Custom TCP Rule  TCP 12345

You'll temporarily need to open port 22 since that's what's in the default SSHD config.

 HTTP             TCP    80
 HTTPS            TCP   443
 Custom TCP Rule  TCP 12345
 SSH              TCP    22

Launch a new instance.

Launch Instance
Amazon Linux 64bit > Select
t1.micro > Review And Launch
Edit Security Groups > Select Existing Group > WebServerSecurityGroup
Review And Launch > Launch
Choose an existing key pair > WebServerKey
Launch Instances

After that, navigate to the EC2 dashboard where you can see the instance starting up.

Left-click the blank instance name and give it a meaningful name.

Right-click your instance > Change Termination Protection > Yes, Enable

That prevents you from accidentally terminating your instance when you only meant to power it off.

In the Description tab below your new instance, take note of its Public IP.

Clone your existing putty config but use the new IP and port 22. Connect to your instance with putty.

Basic Setup

sudo yum -y update

That will probably pick up a bunch of updates. Next we'll harden the SSHD config.

sudo vi /etc/ssh/sshd_config

Make the following edits.

# change the port to the custom SSH port from your security group
# this makes it a little bit harder for people to attack you as they
# now have to scan all ports to discover which is your SSH port
Port 12345

# Explicitly require strong protocol 2 (which is the default)
Protocol 2

# change this to no, we never want root access over SSH
PermitRootLogin no

# explicitly disable weak authentication systems
RhostsRSAAuthentication no
HostbasedAuthentication no

# Explicitly disable Kerberos Authentication
KerberosAuthentication no

# Explicitly disable GSSAPI Authentication
GSSAPIAuthentication no

# explicitly disable x11 forwarding, we will never connect with a gui
X11Forwarding no
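Before rebooting, it's worth confirming the directives actually took (on the server, `sudo /usr/sbin/sshd -t` also validates the syntax). Here's a sketch of a grep check; the sample file below is a stand-in, on the real box you'd point cfg at /etc/ssh/sshd_config.

```shell
# Build a stand-in config to demonstrate the check (on the real box,
# set cfg=/etc/ssh/sshd_config instead).
cfg=/tmp/sshd_config.sample
printf '%s\n' 'Port 12345' 'Protocol 2' 'PermitRootLogin no' \
  'RhostsRSAAuthentication no' 'HostbasedAuthentication no' \
  'KerberosAuthentication no' 'GSSAPIAuthentication no' 'X11Forwarding no' > "$cfg"

# Fail loudly if any hardened setting is missing.
for want in 'Port 12345' 'PermitRootLogin no' 'X11Forwarding no'; do
  grep -qx "$want" "$cfg" && echo "ok: $want" || echo "MISSING: $want"
done
```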

Reboot. This will pick up the SSHD changes and anything from the yum update.

sudo shutdown -r now

In the AWS Console remove the temporary port 22 entry from your security group so that you just have something like this.

 HTTP             TCP    80
 HTTPS            TCP   443
 Custom TCP Rule  TCP 12345

Re-connect with putty now using the custom port. Note that the instance may have a new public IP.

Install Lighttpd / PHP / MySQL

sudo yum -y install lighttpd lighttpd-fastcgi
sudo yum -y install mysql mysql-server
sudo yum -y install php-cli php-mysql php-mbstring php-xml

The php-mbstring gives you utf8 support and the php-xml gives you the DOMDocument object.

Configure Lighttpd and MySQL to auto-start.

sudo chkconfig --levels 235 lighttpd on
sudo chkconfig --levels 235 mysqld   on

Configure Lighttpd / PHP

sudo vi /etc/lighttpd/lighttpd.conf
# comment out this line so that startup doesn't complain
#server.use-ipv6 = "enable"

# insert this right after "modules.conf"
# we'll create it later and put url rewrite commands in it
# to support multiple virtual hosts
include "/etc/lighttpd/rewrites.conf"

# add this near the commented out 443 block
# we'll create the ssl.pem file later 
$SERVER["socket"] == ":443" {
  ssl.engine  = "enable"
  ssl.pemfile = "/etc/lighttpd/ssl/ssl.pem"
}

sudo vi /etc/lighttpd/modules.conf
# uncomment this to get access to fastcgi, which is required for php
include "conf.d/fastcgi.conf"

# uncomment these modules to support the commands we'll put in rewrites.conf
# they're in the server.modules block near the top
# the trailing comma is okay
  "mod_rewrite",
  "mod_redirect",

sudo vi /etc/lighttpd/conf.d/fastcgi.conf
# leave the example commented blocks untouched
# and just add this new one
# it tells fastcgi that we'll be using php
fastcgi.server = ( ".php" =>
                   ( "php-local" =>
                     (
                       "socket" => socket_dir + "/php-fastcgi-1.socket",
                       "bin-path" => "/usr/bin/php-cgi",
                       "max-procs" => 1,
                       "broken-scriptfilename" => "enable",
                     )
                   )
                 )

Create a new file to support all the redirects you require. I've included some examples. You can rewrite URLs, issue redirects, and define custom 404 handlers. My strategy for hosting multiple domains is to define a document-root and error handler for each. I want *.mydomain to fail, so I don't include a rule for it. I want www.mydomain to 301 redirect, so I define a special document-root for that subdomain, at which I locate a custom 404 handler that explicitly performs a 301 redirect to the naked domain.

sudo vi /etc/lighttpd/rewrites.conf
url.rewrite-once = (
 "^/folder/whatever.html$" => "/somewhere/else.html",
 "^/something/else.html$" => "/another/destination.html"
)

url.redirect = (
 "^/obsolete/.*" => ""
)

server.error-handler-404 = "/_error.php"

$HTTP["host"] =~ "^www\.site1\.com$" {
  server.document-root = "/var/www/lighttpd/_redirects/site1"
}
else $HTTP["host"] =~ "^site1\.com$" {
  server.document-root = "/var/www/lighttpd/site1"
  server.error-handler-404 = "/_error.php"
}
else $HTTP["host"] =~ "^www\.site2\.com$" {
  server.document-root = "/var/www/lighttpd/_redirects/site2"
}
else $HTTP["host"] =~ "^site2\.com$" {
  server.document-root = "/var/www/lighttpd/site2"
  server.error-handler-404 = "/_error.php"
}

In the above example, the following files would exist for site2. The handler at /var/www/lighttpd/_redirects/site2/_error.php performs the 301 redirect to the naked domain; a sketch (the Location line is reconstructed, substitute your own domain):

  header('HTTP/1.1 301 Moved Permanently');
  header('Location: http://site2.com' . $_SERVER['REQUEST_URI']);

The main site keeps its own custom 404 handler at /var/www/lighttpd/site2/_error.php.
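The host-matching regexes in rewrites.conf are easy to sanity-check with grep before touching lighttpd. A sketch using the site1 patterns; note how sub.site1.com matches neither rule, which is the intended failure for *.mydomain:

```shell
# Emulate lighttpd's =~ host matching with grep -E.
for h in www.site1.com site1.com sub.site1.com; do
  if   echo "$h" | grep -qE '^www\.site1\.com$'; then echo "$h -> _redirects/site1"
  elif echo "$h" | grep -qE '^site1\.com$';      then echo "$h -> site1"
  else echo "$h -> no match (request fails, as intended)"
  fi
done
```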

Give read permissions to Lighttpd.

sudo chmod 640 /etc/lighttpd/rewrites.conf

Create some required folders.

# I added this a long time ago to fix a php startup error.
# I believe it needs a writable location for session data.
# Without this, lighttpd fails to start.
sudo mkdir -p /var/lib/php/session
sudo chown lighttpd:lighttpd /var/lib/php/session

# I added this because the default conf specifies var.home_dir as /var/lib/lighttpd
# I believe it needs a writable location for socket data.
# Without this, lighttpd fails to start.
sudo mkdir -p /var/lib/lighttpd/sockets
sudo chown lighttpd:lighttpd /var/lib/lighttpd/sockets

I'm assuming you already have your SSL configured and just need to migrate the file.

sudo mkdir /etc/lighttpd/ssl
sudo vi /etc/lighttpd/ssl/ssl.pem
# paste your cert and key into vi and save and close

Now for the php config.

sudo vi /etc/php.ini
; uncomment this (I forget why, something didn't work without it)
cgi.fix_pathinfo = 1

; set short tags to on, that's not necessary, just my style
; it lets you use <? instead of <?php
; don't be confused by the first hit during find, the actual setting is lower in the file
short_open_tag = On

; tell error reporting to ignore E_NOTICE so that your logs don't fill up
error_reporting = E_ALL & ~E_NOTICE & ~E_DEPRECATED

; append this to the [mbstring] block
; the existing mbstring should all be just commented out examples
; these changes make utf8 the default
mbstring.language = Neutral
mbstring.internal_encoding = UTF-8
mbstring.encoding_translation = Off
mbstring.http_input = auto
mbstring.http_output = UTF-8
mbstring.detect_order = auto
mbstring.substitute_character = none
default_charset = UTF-8

; decrease max post size so you don't waste time on bogus payloads
; this also limits the size of attack packets and reduces the risk of overflow
post_max_size = 1M

I find it useful to create the following showme script to display recent entries from the php error log.

mkdir ~/bin
vi ~/bin/showme
sudo tail -n 200 /var/log/lighttpd/error.log
chmod 744 ~/bin/showme

Give ownership of the web files folder to your ec2-user since that's how you'll connect with WinSCP. And get rid of the default website.

sudo chown ec2-user:ec2-user /var/www/lighttpd
rm -f /var/www/lighttpd/*

Create a WinSCP connection like your putty connection and upload your website. For the above rewrites example you would create the following layout (reconstructed from the rewrites config). The root error and index files are only served when your site is accessed by IP.

 /var/www/lighttpd/_error.php
 /var/www/lighttpd/index.html
 /var/www/lighttpd/site1/
 /var/www/lighttpd/_redirects/site1/_error.php
 /var/www/lighttpd/site2/
 /var/www/lighttpd/_redirects/site2/_error.php

While that's uploading, we'll setup MySQL.

Configure MySQL

sudo /etc/init.d/mysqld start

That gives a long prompt about security. Take its advice and run this.

sudo /usr/bin/mysql_secure_installation
[enter]   # existing root mysql password is blank
Y         # yes set a root mysql password
password  # choose a password
password  # enter it again
Y         # remove anonymous users
Y         # disallow root login remotely
Y         # remove the test db
Y         # reload privilege tables now

The default config listens to the network.

# this shows that it is listening
sudo netstat -tap | grep mysql
sudo vi /etc/my.cnf
# append this line to the [mysqld] block to disable TCP/IP listening
skip-networking

# append these to the [mysqld] block to set utf8 as your default
character-set-server = utf8
collation-server = utf8_general_ci
Restart MySQL for the settings to take effect and look at netstat again to see that it's no longer listening.

sudo /etc/init.d/mysqld restart
sudo netstat -tap | grep mysql

Note that even with all that utf8 config, you still need to explicitly select utf8 on your php mysqli connection, like this: $db->set_charset('utf8'); if you want $db->character_set_name() to return utf8 instead of latin1.

We will now create all the databases that we wish to migrate. If you don't remember which databases you created, you can use show databases; in mysql on your live box.

mysql -u root -p 

CREATE DATABASE db1name;
CREATE USER 'db1user'@'localhost' IDENTIFIED BY 'db1password';
GRANT ALL PRIVILEGES ON db1name.* TO 'db1user'@'localhost';

CREATE DATABASE db2name;
CREATE USER 'db2user'@'localhost' IDENTIFIED BY 'db2password';
GRANT ALL PRIVILEGES ON db2name.* TO 'db2user'@'localhost';

FLUSH PRIVILEGES;
quit


Once your files are uploaded, you'll want to perform some sanity checks on them. First note that if Lighttpd is expected to be able to create symlinks within a site then it will need write access to that location. You can achieve this by setting the group ownership to the lighttpd group. But this is a broad stroke and risks letting an attacker write to your file system.

sudo chown ec2-user:lighttpd /var/www/lighttpd/site1

You'll need to create any symlinks that exist on your source server. Run these commands on both servers to get lists of files and symlinks.

sudo find /var/www/lighttpd -type f | sort > /tmp/files.txt
sudo find /var/www/lighttpd -type l | sort > /tmp/links.txt

Create necessary symlinks on the new box. Use WinSCP to copy the file and link lists to your local box and use kdiff3 to confirm they are identical.
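If you don't have kdiff3 handy, plain diff works too. A sketch with made-up sample lists; on the real boxes you'd diff the files.txt and links.txt copied from each server:

```shell
# Fake lists standing in for the old-server and new-server output.
printf '%s\n' /var/www/lighttpd/site1/index.php /var/www/lighttpd/site1/a.html > /tmp/old_files.txt
printf '%s\n' /var/www/lighttpd/site1/index.php /var/www/lighttpd/site1/b.html > /tmp/new_files.txt

# diff exits non-zero when the lists differ.
if diff /tmp/old_files.txt /tmp/new_files.txt > /tmp/files.diff; then
  echo "lists match"
else
  echo "lists differ:"
  cat /tmp/files.diff
fi
```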

You should also fetch copies of the config from the old and new server and compare them. There will be differences since the new build will have the latest config files, so you have to use your judgement here. Copy them to /tmp then WinSCP them to your local box.

sudo mkdir /tmp/etc
sudo cp /etc/my.cnf /tmp/etc
sudo cp /etc/php.ini /tmp/etc
sudo cp -r /etc/lighttpd /tmp/etc
sudo chown -R ec2-user /tmp/etc

Make sure to delete them when you're done.


You're now ready to migrate the data. Dump the data from the live box.

mysqldump --user=user1 --password=password1 --skip-lock-tables --databases database1 > /tmp/database1.sql
mysqldump --user=user2 --password=password2 --skip-lock-tables --databases database2 > /tmp/database2.sql
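It's worth checksumming the dumps before and after the move. A sketch with a stand-in file; on the real boxes you'd run md5sum on /tmp/database1.sql at both ends and compare the hashes:

```shell
# Stand-in for a real dump file.
echo "INSERT INTO t VALUES (1);" > /tmp/database1.sql

# Simulate the transfer, then compare checksums.
cp /tmp/database1.sql /tmp/database1.transferred.sql
src=$(md5sum /tmp/database1.sql | cut -d' ' -f1)
dst=$(md5sum /tmp/database1.transferred.sql | cut -d' ' -f1)
[ "$src" = "$dst" ] && echo "checksums match" || echo "TRANSFER CORRUPTED"
```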

Move those files to the new box and import them.

mysql -D database1 -u user1 -p
source /tmp/database1.sql

mysql -D database2 -u user2 -p
source /tmp/database2.sql

Be sure to delete the SQL from the /tmp folder once you're done.

Go Live

Restart your new box to ensure all the settings take effect. Edit your hosts file to test the new server before swapping your elastic IP.

From the AWS Console, disassociate your elastic IP from your running live server and associate it with the new running server.

You are now live. You can shutdown and archive the old box. Now is a good time to create an AMI from the new box.

Local Backup

You really should setup AMI backups, but it's not unreasonable to want a local copy of the data as well. Having rebuilt the box, you now have a local copy of everything, but you'll want a way to fetch nightly backups of your database. You can achieve that by hosting an encrypted dump of the db at a randomly named folder and getting your local machine to fetch it each night.

Choose a web location for the backup.

mkdir /var/www/lighttpd/site1/randomFolderName/

Create the backup script.

sudo mkdir /backup
sudo chown ec2-user:ec2-user /backup
vi /backup/

# call this script with no arguments to create an encrypted dump of all your databases
# call this script with a file name as the only argument to decrypt that file
# here's how to setup a cron job to call this script nightly
# this example runs on the 5th minute of the 4th hour each day, i.e. 4:05am
# watch out: that's the server's time, which may not be your local time
# the output of the cronjob will be appended to /backup/chron.log
# crontab -e
# 5 4 * * * /backup/ >> /backup/chron.log 2>&1

if [ "$1" = "" ]; then

  today=`date +%Y_%m_%d`
  if [ -e $today -o -e $today.dat ]; then
    echo "$today already exists"
    exit 1
  fi
  echo `date`
  mkdir $today

  mysqldump --user=user1 --password=password1 --skip-lock-tables --databases database1 > ./$today/database1_$today.sql
  mysqldump --user=user2 --password=password2 --skip-lock-tables --databases database2 > ./$today/database2_$today.sql

  tar czf - $today | openssl des3 -salt -k password | dd of=$today.dat
  rm -rf $today
  rm -f /var/www/lighttpd/site1/randomFolderName/*
  mv $today.dat /var/www/lighttpd/site1/randomFolderName/
  echo "finished"

elif [ -f "$1" ]; then

  dd if="$1" | openssl des3 -d -k password | tar xzf -

else

  echo "$1 doesn't exist"

fi

chmod 700 /backup/
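The tar-through-openssl pipeline in that script is easy to verify end to end before you trust it with real data. A sketch with throwaway data; "password" is a placeholder passphrase, as in the script above:

```shell
cd /tmp
mkdir -p 2014_04_11
echo "-- sample dump" > 2014_04_11/database1.sql

# Encrypt exactly as the backup branch does...
tar czf - 2014_04_11 | openssl des3 -salt -k password | dd of=2014_04_11.dat 2>/dev/null
rm -rf 2014_04_11

# ...then decrypt as the restore branch does, and confirm the round trip.
dd if=2014_04_11.dat 2>/dev/null | openssl des3 -d -k password | tar xzf -
cat 2014_04_11/database1.sql
```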

Create the cronjob.

crontab -e
5 4 * * * /backup/ >> /backup/chron.log 2>&1

Windows Scheduler

To get a Windows box to automatically wake up each night and download your backup, you can use the Windows Task Scheduler and the Background Intelligent Transfer Service (BITS).

First create a local bat file that will fetch your backup from the web. The URL below reconstructs the randomFolderName location used by the backup script; substitute your own domain and folder name.

bitsadmin /TRANSFER jobname /DOWNLOAD http://site1.com/randomFolderName/%DATE:~6,4%_%DATE:~3,2%_%DATE:~0,2%.dat C:\Backups\%DATE:~6,4%_%DATE:~3,2%_%DATE:~0,2%.dat

Next, schedule a task to wake the computer and run the script. I'm assuming that your computer is configured to automatically go back to sleep after an idle period.

Start > Control Panel > Administrative Tools > Task Scheduler

[right sidebar] > Create Task

 Name: Fetch Database
 [checked] Run with highest privileges

  Start: 2am (some time shortly after your web backup becomes available)

  Action: Start a program
  Program/script: C:\Backups\fetch.bat (or whatever you named the above script) 

 [checked] Wake the computer to run this task

Your task now appears in the "Active Tasks" list in the Task Scheduler.


Alarms

It's a good idea to setup some AWS alarms to let you know when your systems are operating outside their expected range. For example, I have a billing alarm that lets me know if my estimated monthly charge exceeds a set value.

Basic Monitoring metrics (at five-minute frequency) for Amazon EC2 instances and EBS volumes are free of charge.

In the AWS Console, on your new instance:

[right-click] > Add/Edit Alarms > Create Alarm

[checked] send a notification to: your existing contact info
Whenever: Average of CPU Utilization
Is: >= 40 Percent
For at least 1 consecutive period of 5 minutes

Create Alarm > Close

I've only exceeded that CPU range when there was a bug in my code and php was stuck in a loop.

[right-click] > Add/Edit Alarms > Create Alarm

[checked] send a notification to: your existing contact info
Whenever: Average of Network Out
Is: >= 150000 Bytes
For at least 1 consecutive period of 6 hours

Create Alarm > Close

I've only exceeded that Network range when re-imaging a box or when I was under attack.

It's fairly easy to adjust the alarms to your system after it's been running for a few days as the alarms console shows you a graph of each and the red line after which the alarm would fire.

AMI Backups

The AMI is your best backup option. It's a snapshot of your disk plus your instance details (micro, etc). From an AMI you can launch a new box and move over your IP in a few minutes. Ideally you'll create an AMI from your instance, take snapshots every 24 hours, and keep the last week's worth plus perhaps a few older copies. We'll configure the creation of these snapshots to happen automatically.

We'll setup nightly Amazon EBS Snapshots of our instance. They can later be used as the basis for an AMI.

Something has to issue the nightly command, and that something must contain an unprotected copy of the credential that allows the snapshot to occur. First we'll create a constrained credential to reduce the risk of its exposure, then we'll piggyback off our existing local database backup script to kick off the Amazon snapshot. Your server is probably more likely to be attacked than your dev box.

IAM Credential

In the AWS Console, select IAM. If you haven't used this yet, you'll have zero groups, users and roles. First we'll create a group that can create snapshots and then we'll create a user and assign them that group. The backup script will connect as this user.

Create New Group
 Group Name: snapshot
 Policy Generator
   Effect: Allow
   AWS Service: Amazon EC2
   Actions: Create Snapshot
            Delete Snapshot
            Describe Snapshots
   ARN: *
   Add Statement
Create Group

Create New Users
 User Name 1: snapshot
 [checked] generate an access key for each user
 Download Credentials
 Close Window

 Groups > Add User to Groups > snapshot > Add to Groups


We'll use the following command from the AWS API Reference.

ec2-create-snapshot volume_id -d "Nightly Backup"

To find your volume_id, go to the AWS Console, select EC2, select Volumes from the sidebar, scroll right to the Attachment Information column which will show your WebServer instance, then scroll left and record the Volume ID.

First we have to setup the tools.

Download and unzip the latest Amazon EC2 API tools. No install is required.

You have to have Java installed.

Run the following commands from a DOS console to create your first snapshot.

SET JAVA_HOME=C:\Program Files (x86)\Java\jre7

SET EC2_HOME=C:\Amazon\ec2-api-tools-
SET PATH=%PATH%;%EC2_HOME%\bin

SET AWS_ACCESS_KEY=your_access_key
SET AWS_SECRET_KEY=your_secret_key

ec2-create-snapshot vol-11111111 -d "Nightly Backup"

Run the following command to show existing snapshots.

ec2-describe-snapshots
You can filter that to only show the snapshots of a particular volume.

ec2-describe-snapshots --filter "volume-id=vol-11111111"

You can further restrict that to only show the snapshots of a particular volume that are tagged as "Nightly Backup", thus avoiding any ones you created manually.

ec2-describe-snapshots --filter "volume-id=vol-11111111" --filter "description=Nightly Backup"

Here's a windows script to create a snapshot and delete old nightly backup snapshots from a particular volume. You can call this from the script that you already setup to run nightly.

@echo off

SET JAVA_HOME=C:\Program Files (x86)\Java\jre7

SET EC2_HOME=C:\Amazon\ec2-api-tools-
SET PATH=%PATH%;%EC2_HOME%\bin

SET AWS_ACCESS_KEY=your_access_key
SET AWS_SECRET_KEY=your_secret_key

SET EC2_VOLUME=vol-22222222

REM  This command lists all snapshots:
REM  ec2-describe-snapshots
REM  SNAPSHOT        snap-11111111   vol-22222222    completed       2014-04-12T20:29:58+0000        100%    333333333333    8       Nightly Backup
REM  SNAPSHOT        snap-11111111   vol-22222222    completed       2014-04-12T20:29:58+0000        100%    333333333333    8       Nightly Backup
REM  SNAPSHOT        snap-11111111   vol-22222222    completed       2014-04-12T20:29:58+0000        100%    333333333333    8       Created by CreateImage(i-44444444) for ami-55555555 from vol-22222222
REM  SNAPSHOT        snap-11111111   vol-33333333    completed       2014-04-12T20:29:58+0000        100%    333333333333    8       Nightly Backup
REM  This command lists only snapshots:
REM  - from the given volume
REM  - with the nightly backup tag
REM  - sorted from oldest to newest,
REM  - note that whitespace above is actually a tab character so it counts as one space
REM  ec2-describe-snapshots --filter "volume-id=vol-22222222" --filter "description=Nightly Backup" | sort /R /+49

echo List interesting snapshots:
call ec2-describe-snapshots --filter "volume-id=%EC2_VOLUME%" --filter "description=Nightly Backup" | sort /R /+49

REM  This loop finds the selected snapshots that are older than 7 days:
REM  usebackq - use `` to delimit the command to be executed so that it can contain ""
REM  skip=7   - skip the first 7 rows, so we keep a week's worth of backups
REM  tokens=2 - select the 2nd column, delimited by spaces
REM  note that the | must be escaped as ^|

echo Delete old snapshots:
FOR /F "usebackq skip=7 tokens=2" %%G IN (`ec2-describe-snapshots --filter "volume-id=%EC2_VOLUME%" --filter "description=Nightly Backup" ^| sort /R /+49`) DO (
  echo Delete %%G
  call ec2-delete-snapshot %%G
)

REM  Create the snapshot after we delete old snapshots so that our list won't contain any 
REM  pending entries that would mess up our assumptions about the position of the date at /+49

echo Create a snapshot:
call ec2-create-snapshot %EC2_VOLUME% -d "Nightly Backup"
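The skip-the-newest-7 selection can be prototyped in shell against fake describe-snapshots output before you wire it into the batch script. The snapshot ids, volume id, and dates below are all made up:

```shell
# Ten fake SNAPSHOT rows, tab-delimited like the real tool's output,
# one per day from 2014-04-01 to 2014-04-10.
for d in $(seq 1 10); do
  printf 'SNAPSHOT\tsnap-%08d\tvol-22222222\tcompleted\t2014-04-%02dT04:05:00+0000\t100%%\t333333333333\t8\tNightly Backup\n' "$d" "$d"
done > /tmp/snaps.txt

# Sort newest first on the date column, keep the newest 7,
# and list the snapshot ids that would be deleted.
sort -r -k5 /tmp/snaps.txt | tail -n +8 | cut -f2
```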