Let’s Encrypt – Encrypt All The Things!

I’ve recently switched over a bunch of sites, including this blog, to using SSL.

I got my SSL certificate through a very interesting project called “Let’s Encrypt”. The goal of the project is to increase the amount of encryption used on the internet by offering free, trusted, domain-validated certificates. Right now they are still in a limited beta stage, but the go-live date is currently set to the 3rd of December.

It seems to me that the recommended way to make use of Let’s Encrypt certificates is to have the Let’s Encrypt client on each and every server that will use the certificates. This lets the authentication work properly, makes automation easier, and makes renewing your certificates easy.

I didn’t really want to have the client on every server, so instead I added a proxy pass in my front-end Nginx boxes as follows:

location /.well-known/ {
    proxy_pass http://letsencrypt-machine.local/.well-known/;
    proxy_redirect http://letsencrypt-machine.local/ http://$host:$server_port/.well-known/;
}

I have this block before my / proxy pass, so any requests for /.well-known/ will go to the machine the Let’s Encrypt client is running on.

Next, I ran the client to request the certificate as follows:

./letsencrypt-auto --agree-dev-preview --server https://acme-v01.api.letsencrypt.org/directory auth -d blog.hamzahkhan.com

Magic! I now have a freshly signed certificate, key, and certificate chain sitting in /etc/letsencrypt/live/blog.hamzahkhan.com/.
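
For anyone curious how the issued files get used: pointing Nginx at them looks something like this (a hypothetical server block; `backend.local` stands in for whatever you proxy to, and fullchain.pem is the certificate-plus-chain file the client writes out):

```nginx
server {
    listen 443 ssl;
    server_name blog.hamzahkhan.com;

    # Files written by the Let's Encrypt client under /etc/letsencrypt/live/
    ssl_certificate     /etc/letsencrypt/live/blog.hamzahkhan.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/blog.hamzahkhan.com/privkey.pem;

    location / {
        proxy_pass http://backend.local/;
    }
}
```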

I’m not sure if there is a better way to do this, but it works for me, and I’m happy with it.

The only downside is that the certificates are only valid for 90 days, after which you have to renew them. I believe this is one of the reasons it is recommended to have the client on every machine, as it makes the renewal process a lot less work.

That said, I don’t have such a large number of sites that managing it manually would be difficult, so I’m just going to stick with my way for now. 🙂

Securing Your Postfix Mail Server with Greylisting, SPF, DKIM, DMARC and TLS

28/03/2015 UPDATE: Steve Jenkins has created a more up-to-date version of this post, which is definitely worth checking out if you are looking into deploying OpenDMARC. 🙂

A few months ago, while trying to debug some SPF problems, I came across “Domain-based Message Authentication, Reporting & Conformance” (DMARC).

DMARC basically builds on top of two existing frameworks, Sender Policy Framework (SPF), and DomainKeys Identified Mail (DKIM).

SPF is used to define who can send mail for a specific domain, while DKIM signs the message. Both of these are pretty useful on their own, and reduce incoming spam A LOT, but the problem is you don’t have any “control” over what the receiving end does with email. For example, company1’s mail server may just give the email a higher spam score if the sending mail server fails SPF authentication, while company2’s mail server might outright reject it.

DMARC gives you finer control, allowing you to dictate what should be done. DMARC also lets you publish a forensics address. This is used to send back a report from remote mail servers, and contains details such as how many mails were received from your domain, how many failed authentication, from which IPs and which authentication tests failed.
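
To make this concrete, both SPF and DMARC policies are just DNS TXT records. An illustrative pair for an example domain might look like this (all values here are placeholders; `rua` and `ruf` are the mailboxes that receive the aggregate and forensic reports respectively):

```text
example.org.        IN TXT "v=spf1 mx -all"
_dmarc.example.org. IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.org; ruf=mailto:dmarc@example.org"
```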

I’ve had a DMARC record published for my domains for a few months now, but I had not set up any filter to check incoming mail against DMARC records, or to send back forensic reports.

Today, I was in the process of setting up a third backup MX for my domains, so I thought I’d clean up my configs a little, and also set up DMARC properly on my mail servers.

So in this article, I will be discussing how I set up my Postfix servers with Greylisting, SPF, DKIM and DMARC, as well as TLS for incoming/outgoing mail. I won’t be going into full detail on how to set up a Postfix server, only the specifics needed for SPF/DKIM/DMARC and TLS.

We’ll start with TLS as that is easiest.


I wanted all incoming and outgoing mail to use opportunistic TLS.

To do this all you need to do is create a certificate:
[root@mx1 ~]# cd /etc/postfix/
[root@mx1 postfix]# openssl genrsa -des3 -out mx1.example.org.key 2048
[root@mx1 postfix]# openssl rsa -in mx1.example.org.key -out mx1.example.org.key-nopass
[root@mx1 postfix]# mv mx1.example.org.key-nopass mx1.example.org.key
[root@mx1 postfix]# openssl req -new -key mx1.example.org.key -out mx1.example.org.csr

Now, you can either self-sign the certificate request, or do as I have and use CAcert.org. Once you have a signed certificate, dump it in mx1.example.org.crt, and tell Postfix to use it in /etc/postfix/main.cf:
# Use opportunistic TLS (STARTTLS) for outgoing mail if the remote server supports it.
smtp_tls_security_level = may
# Tell Postfix where your ca-bundle is or it will complain about trust issues!
smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.trust.crt
# I wanted a little more logging than default for outgoing mail.
smtp_tls_loglevel = 1
# Offer opportunistic TLS (STARTTLS) to connections to this mail server.
smtpd_tls_security_level = may
# Add TLS information to the message headers
smtpd_tls_received_header = yes
# Point this to your CA file. If you used CAcert.org, this is
# available at http://www.cacert.org/certs/root.crt
smtpd_tls_CAfile = /etc/postfix/ca.crt
# Point at your cert and key
smtpd_tls_cert_file = /etc/postfix/mx1.example.org.crt
smtpd_tls_key_file = /etc/postfix/mx1.example.org.key
# I wanted a little more logging than default for incoming mail.
smtpd_tls_loglevel = 1

Restart Postfix:
[root@mx1 ~]# service postfix restart

That should do it for TLS. I tested by sending an email from my email server, to my Gmail account, and back again, checking in the logs to see if the connections were indeed using TLS.


Greylisting is a method of reducing spam which is so simple, yet so effective, that it’s quite amazing!

Basically, incoming relay attempts are temporarily delayed with an SMTP temporary reject for a fixed amount of time. Once this time has passed, any further attempts to relay from that IP are allowed to progress further through your ACLs.

This is extremely effective, as a lot of spam bots will not have any queueing system, and will not re-try to send the message!
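
The idea is simple enough to sketch in a few lines of shell (a toy illustration of the decision logic only, not how Postgrey actually works):

```shell
# Toy greylisting sketch: first attempt from an IP records a timestamp and is
# deferred; a retry after GREYLIST_DELAY seconds is allowed through.
STATE_FILE="${STATE_FILE:-/tmp/greylist.db}"
GREYLIST_DELAY="${GREYLIST_DELAY:-300}"

greylist_check() {
    ip="$1"
    now=$(date +%s)
    first_seen=$(grep "^${ip} " "$STATE_FILE" 2>/dev/null | cut -d' ' -f2)

    if [ -z "$first_seen" ]; then
        # Never seen before: remember it and defer (an SMTP 450).
        echo "${ip} ${now}" >> "$STATE_FILE"
        echo "DEFER"
    elif [ $((now - first_seen)) -ge "$GREYLIST_DELAY" ]; then
        # Retried after the delay window: a real MTA with a queue. Accept.
        echo "ACCEPT"
    else
        # Retried too soon: still within the greylist window.
        echo "DEFER"
    fi
}
```

Postgrey itself keeps this state in a BerkeleyDB database and keys on the (client IP, sender, recipient) triplet rather than just the IP, but the decision flow is essentially the one above.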

EPEL already has an RPM for Postgrey, so I’ll use that for Greylisting:
[root@mx1 ~]# yum install postgrey

Set it to start on boot, and manually start it:

[root@mx1 ~]# chkconfig postgrey on
[root@mx1 ~]# service postgrey start

Next, we need to tell Postfix to pass messages through Postgrey. By default, the RPM-provided init script sets up a unix socket at /var/spool/postfix/postgrey/socket, so we’ll use that. Edit /etc/postfix/main.cf, and in your smtpd_recipient_restrictions, add “check_policy_service unix:postgrey/socket”, like I have:

check_policy_service unix:postgrey/socket,
reject_rbl_client dnsbl.sorbs.net,
reject_rbl_client zen.spamhaus.org,
reject_rbl_client bl.spamcop.net,
reject_rbl_client cbl.abuseat.org,
reject_rbl_client b.barracudacentral.org,
reject_rbl_client dnsbl-1.uceprotect.net,

As you can see, I am also using various RBLs.
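
For context, the whole parameter in my main.cf ends up shaped roughly like this (the surrounding restrictions are illustrative, not copied from my real config; what matters is that the relay checks come before the Postgrey policy service and the RBLs):

```text
smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination,
    check_policy_service unix:postgrey/socket,
    reject_rbl_client dnsbl.sorbs.net,
    reject_rbl_client zen.spamhaus.org,
    reject_rbl_client bl.spamcop.net,
    reject_rbl_client cbl.abuseat.org,
    reject_rbl_client b.barracudacentral.org,
    reject_rbl_client dnsbl-1.uceprotect.net
```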

Next, we restart Postfix:

[root@mx1 ~]# service postfix restart

All done. Greylisting is now in effect!


Next we’ll setup SPF.

There are many different SPF filters available, and probably the most popular one to use with Postfix would be pypolicyd-spf, which is also included in EPEL, but I was unable to get OpenDMARC to see the Received-SPF headers. I think this is due to the order in which a message passes through a milter versus a Postfix policy engine, and I was unable to find a workaround. So instead I decided to use smf-spf, which is currently unmaintained, but from what I understand is quite widely used and quite stable.

I did apply some patches to smf-spf which were posted by Andreas Schulze on the OpenDMARC mailing lists. They are mainly cosmetic patches and aren’t necessary, but I liked them, so I applied them.

I was going to write an RPM spec file for smf-spf, but I noticed that Matt Domsch has kindly already submitted packages for smf-spf and libspf2 for review.

I did have to modify both packages a little. For smf-spf, I pretty much only added the patches I mentioned earlier, plus a few minor changes I wanted. For libspf2, I had to re-run autoreconf and update Matt Domsch’s patch, as it seemed to break on EL6 boxes due to incompatible autoconf versions. I will edit this post and add links to the SRPMS later.

I built the RPMs, signed them with my key, and published them in my internal RPM repo.
I won’t go into detail on that, and will continue from installation:

[root@mx1 ~]# yum install smf-spf

Next, I edited /etc/mail/smfs/smf-spf.conf, set smf-spf to start on boot and started smf-spf:

[root@mx1 ~]# cat /etc/mail/smfs/smf-spf.conf | grep -v "^#" | grep -v "^$"
RefuseFail on
AddHeader on
User smfs
Socket inet:8890@localhost

Set smf-spf to start on boot, and also start it manually:
[root@mx1 ~]# chkconfig smf-spf on
[root@mx1 ~]# service smf-spf start

Now we edit the Postfix config again, and add the following to the end of main.cf:
milter_default_action = accept
milter_protocol = 6
smtpd_milters = inet:localhost:8890

Restart Postfix:
[root@mx1 ~]# service postfix restart

Your mail server should now be checking SPF records! 🙂
You can test this by trying to forge an email from Gmail or something.


DKIM was a little more complicated to setup as I have multiple domains. Luckily, OpenDKIM is already in EPEL, so I didn’t have to do any work to get an RPM for it! 🙂

Install it using yum:
[root@mx1 ~]# yum install opendkim

Next, edit the OpenDKIM config file. I’ll just show what I did using a diff:
[root@mx1 ~]# diff /etc/opendkim.conf.stock /etc/opendkim.conf
< Mode v
> Mode sv
< Selector default
> #Selector default
< #KeyTable /etc/opendkim/KeyTable
> KeyTable /etc/opendkim/KeyTable
< #SigningTable refile:/etc/opendkim/SigningTable
> SigningTable refile:/etc/opendkim/SigningTable
< #ExternalIgnoreList refile:/etc/opendkim/TrustedHosts
> ExternalIgnoreList refile:/etc/opendkim/TrustedHosts
< #InternalHosts refile:/etc/opendkim/TrustedHosts
> InternalHosts refile:/etc/opendkim/TrustedHosts

Next, I created a key:
[root@mx1 ~]# cd /etc/opendkim/keys
[root@mx1 keys]# opendkim-genkey --append-domain --bits=2048 --domain example.org --selector=dkim2k --restrict --verbose

This will give you two files in /etc/opendkim/keys:

  • dkim2k.txt – Contains your public key, which can be published in DNS. It’s already in a BIND-compatible format, so I won’t explain how to publish it.
  • dkim2k.private – Contains your private key.
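
For reference, the generated dkim2k.txt comes out looking roughly like this (the public key here is truncated and invented; yours will be a full base64 blob):

```text
dkim2k._domainkey.example.org. IN TXT ( "v=DKIM1; k=rsa; "
  "p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A..." )  ; ----- DKIM key dkim2k for example.org
```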

Next, we edit /etc/opendkim/KeyTable. Comment out any of the default keys that are there and add your own:
[root@mx1 ~]# cat /etc/opendkim/KeyTable
dkim2k._domainkey.example.org example.org:dkim2k:/etc/opendkim/keys/dkim2k.private

(Thank you to andrewgdotcom for spotting the typo here)

Now edit /etc/opendkim/SigningTable, again commenting out the default entries and entering our own:
[root@mx1 ~]# cat /etc/opendkim/SigningTable
*@example.org dkim2k._domainkey.example.org

Repeat this process for as many domains as you want. It would also be quite a good idea to use different keys for different domains.

We can now start opendkim, and set it to start on boot:
[root@mx1 ~]# chkconfig opendkim on
[root@mx1 ~]# service opendkim start

Almost done with DKIM!
We just need to tell Postfix to pass mail through OpenDKIM to verify signatures of incoming mail, and to sign outgoing mail. To do this, edit /etc/postfix/main.cf again:
# Pass SMTP messages through smf-spf first, then OpenDKIM
smtpd_milters = inet:localhost:8890, inet:localhost:8891
# This line is so mail received from the command line, e.g. using the sendmail binary or mail() in PHP
# is signed as well.
non_smtpd_milters = inet:localhost:8891

Restart Postfix:
[root@mx1 ~]# service postfix restart

Done with DKIM!
Now your mail server will verify incoming messages that have a DKIM header, and sign outgoing messages with your own!


Now it’s the final part of the puzzle.

OpenDMARC is not yet in EPEL, but again I did find an RPM spec awaiting review, so I used it.

Again, I won’t go into the process of how to build an RPM; let’s assume you have already published it in your own internal repo, and continue from installation:
[root@mx1 ~]# yum install opendmarc

First I edited /etc/opendmarc.conf:
< # AuthservID name
> AuthservID mx1.example.org
< # ForensicReports false
> ForensicReports true
< # HistoryFile /var/run/opendmarc/opendmarc.dat
> HistoryFile /var/run/opendmarc/opendmarc.dat
< # ReportCommand /usr/sbin/sendmail -t
> ReportCommand /usr/sbin/sendmail -t -F 'Example.org DMARC Report' -f 'dmarc@example.org'
< # Socket inet:8893@localhost
> Socket inet:8893@localhost
< # SoftwareHeader false
> SoftwareHeader true
< # Syslog false
> Syslog true
< # SyslogFacility mail
> SyslogFacility mail
< # UserID opendmarc
> UserID opendmarc

Next, set OpenDMARC to start on boot and manually start it:
[root@mx1 ~]# chkconfig opendmarc on
[root@mx1 ~]# service opendmarc start

Now we tell postfix to pass messages through OpenDMARC. To do this, we edit /etc/postfix/main.cf once again:
# Pass SMTP messages through smf-spf first, then OpenDKIM, then OpenDMARC
smtpd_milters = inet:localhost:8890, inet:localhost:8891, inet:localhost:8893

Restart Postfix:
[root@mx1 ~]# service postfix restart

That’s it! Your mail server will now check the DMARC record of incoming mail, and check the SPF and DKIM results.

I confirmed that OpenDMARC is working by sending a message from Gmail to my own email, and checking the message headers, then also sending an email back and checking the headers on the Gmail side.

You should see that SPF, DKIM and DMARC are all being checked when receiving on either side.

Finally, we can also set up forensic reporting for the benefit of others who are using DMARC.

DMARC Forensic Reporting

I found OpenDMARC’s documentation to be extremely limited and quite vague, so there was a lot of guesswork involved.

As I didn’t want my mail servers to have access to my DB server, I decided to run the reporting scripts on a different box I use for running cron jobs.

First I created a MySQL database and user for opendmarc:
[root@db ~]# mysql -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 1474392
Server version: 5.5.34-MariaDB-log MariaDB Server

Copyright (c) 2000, 2013, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE opendmarc;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON opendmarc.* TO 'opendmarc'@'script-server.example.org' IDENTIFIED BY 'supersecurepassword';

Next, we import the schema into the database:

[root@script-server ~]# mysql -h mysql.example.org -u opendmarc -p opendmarc < /usr/share/doc/opendmarc-1.1.3/schema.mysql

Now, to actually import the data from my mail servers into the DB and send out the forensic reports, I have the following script running daily:


#!/bin/bash
set -e

cd /home/mhamzahkhan/dmarc/

DBHOST="mysql.example.org"
DBUSER="opendmarc"
DBPASS="supersecurepassword"
DBNAME="opendmarc"

HOSTS="mx1.example.org mx2.example.org mx3.example.org"

for HOST in $HOSTS; do
    # Pull the history from each host.
    scp -i /home/mhamzahkhan/.ssh/dmarc root@${HOST}:/var/run/opendmarc/opendmarc.dat ${HOST}.dat
    # Purge the history on each host.
    ssh -i /home/mhamzahkhan/.ssh/dmarc root@${HOST} "cat /dev/null > /var/run/opendmarc/opendmarc.dat"

    # Merge the history files. Not needed, but this way opendmarc-import only needs to run once.
    cat ${HOST}.dat >> merged.dat
done

/usr/sbin/opendmarc-import --dbhost=${DBHOST} --dbuser=${DBUSER} --dbpasswd=${DBPASS} --dbname=${DBNAME} --verbose < merged.dat
/usr/sbin/opendmarc-reports --dbhost=${DBHOST} --dbuser=${DBUSER} --dbpasswd=${DBPASS} --dbname=${DBNAME} --verbose --interval=86400 --report-email 'dmarc@example.org' --report-org 'Example.org'
/usr/sbin/opendmarc-expire --dbhost=${DBHOST} --dbuser=${DBUSER} --dbpasswd=${DBPASS} --dbname=${DBNAME} --verbose

rm -rf *.dat

That’s it! Run that daily, and you’ll send forensic reports to those who want them. 🙂

You now have a nice mail server that checks SPF, DKIM, and DMARC for authentication, and sends out forensic reports!

With this setup, I haven’t received any spam in the last two months, at least as far as I can remember. I’m sure it’s actually been longer than that! 🙂

Any comments, and suggestions welcome!

Home Lab Network Redesign Part 1: The Remote Dedicated Server

Home Lab Diagram
As promised, here is a very, very basic diagram of my home lab. This is quite a high-level overview, and the layer 2 information is not present, as I suck at Visio and all the connectors were getting messy with the layer 2 stuff present! What is not shown in the diagram:

  1. There are two back-to-back links between the edge routers which are in an active-passive bond.
  2. Each edge router has two links going into two switches (one link per switch), both these links are in an active-passive bonded interface.
  3. The two edge firewalls each have just two links, one going to each of those switches. One port is in the “inside” VLAN, and the other is in the “outside” VLAN. I wanted to have two links per VLAN, going to both switches, but the Cisco ASAs don’t do STP or Port-Channels, so having two links would have made a loop.
  4. The link between the two ASAs actually goes through a single switch on a dedicated failover VLAN. From reading around, the ASAs can go a little crazy if you use a crossover cable, as the secondary will see its own port go down as well when the primary fails, and this can cause some funny things to happen. Using a switch between them means that if the primary goes down, the secondary ASA’s port will still stay up, avoiding any funny behaviour.
  5. The core gateway only has two interfaces, each going to a different switch. One port is on the “inside” VLAN that the firewalls are connected to, and the other port is a trunk port with all my other VLANs. This isn’t very redundant, but I’m hoping to put in a second router when I have some more rack space, and use HSRP to allow high availability.

As I mentioned in my previous post, I have a dedicated server hosted with Rapid Switch, through which I wanted to route all my connections. There were a few reasons I wanted to do this:

  1. Without routing through the dedicated server, if one of my internet connections went down and I failed over to the other, my IP would be different from my primary line’s. This would mess up some sessions, and create a problem for DNS, as I can only really point records at one line or the other.
  2. My ISP only provides dynamic IP addresses. Although the DHCP lease is long enough to not make the IP addresses change often, it’s a pain updating DNS everywhere on the occasions that it does change. Routing via my dedicated server allows me to effectively have a static IP address, I only really need to change the end point IPs for the GRE tunnels should my Virgin Media provided IP change.
  3. I also get the benefit of being able to order more IPs if needed; Virgin Media do not offer more than one!
  4. Routing via my dedicated server at Rapid Switch also has the benefit of keeping my IP even if I change my home ISP.

The basic setup of the dedicated server is as follows:

  1. There is a GRE tunnel going from the dedicated server (diamond) to each of my edge routers. Both GRE tunnels have a private IPv4 address, and an IPv6 address. The actual GRE tunnel is transported over IPv4.
  2. I used Quagga to add the IPv6 address to the GRE tunnels as the native RedHat ifup scripts for tunnels don’t allow you to add an IPv6 address through them.
  3. I used Quagga’s bgpd to create an iBGP peering over the GRE tunnels to each of the Mikrotik routers, and push down a default route to them. The edge routers also announce my internal networks back to the dedicated server.
  4. I originally wanted to use eBGP between the dedicated server and the edge routers, but I was having some issues where the BGP session wouldn’t establish if I used different ASNs. I’m still looking into that.
  5. There are some basic iptables rules just forwarding ports, doing NAT, and cleaning up some packets before passing them over the GRE tunnels, but that’s all really.
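
To give an idea of the shape of this, the dedicated-server side of one tunnel plus its iBGP peering could be sketched as follows (the addresses and ASN are invented for illustration; my real configs differ):

```shell
# Bring up a GRE tunnel to one edge router (public endpoint IPs invented).
ip tunnel add gre-edge1 mode gre local 198.51.100.10 remote 203.0.113.20 ttl 255
ip addr add 172.16.0.1/30 dev gre-edge1
ip link set gre-edge1 up

# Matching iBGP stanza for Quagga's /etc/quagga/bgpd.conf: the same ASN on
# both ends, with a default route pushed down to the edge router.
cat >> /etc/quagga/bgpd.conf <<'EOF'
router bgp 64512
 neighbor 172.16.0.2 remote-as 64512
 neighbor 172.16.0.2 default-originate
EOF
```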

Other than that, there isn’t much to see on the dedicated server. It’s quite a simple setup on there. If anyone would like to see more, I can post any relevant config.

Upgrading Disks in my Home NAS

It’s been a few weeks since I switched my NAS from LVM (no RAID, so pretty much RAID0) to ZFS. It’s been working great, but a couple of days ago I received a nice email from Smartmontools informing me that one of my disks was about to die! I noticed that I was also getting errors on one of the disks during a scrub!

So it’s very lucky that I decided to change to ZFS in time! Otherwise I would have had a bit of a problem (yes, I realise it was quite silly of me to use RAID0 on my NAS in the first place!). 🙂

Anyway, instead of just replacing the single failed disk, I decided to take the opportunity to instead buy brand new disks.

The old disks were:

I decided to upgrade to:

I’m not really a big fan of Western Digital disks, as I’ve had a lot of issues with them in the past; I usually favour Seagate. The reason I chose to give WD another chance is that I have read a lot of reviews rating these disks quite highly for performance and reliability. Looking at Seagate’s site, meanwhile, they rank their “consumer” grade disks pretty poorly in terms of reliability (MTBF), only seem to provide a pretty ridiculous 1-year warranty on them, and their higher-end disks cost a little too much for home use!

I was unable to just do a “live” switch of the disks, because ZFS was using ashift=9 even though I had specified ashift=12 when creating my ZFS pool. The new disks use 4-kbyte sectors, meaning that if ZFS was aligning for 512-byte sectors, I’d get quite a large performance drop. My only option was to create a new pool and use “zfs send” to copy my datasets over to the new pool.

It wasn’t a huge problem, I just put the old disks into a different machine, and after creating my new pool in the N40L, I used zfs send to send all the datasets from my old disks over. Fast forward a day or so, and everything was back to normal. 🙂
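
The migration itself boils down to a couple of commands. A rough sketch (`oldpool` stands in for whatever the old pool was called, and `DiskArray` is the pool name used elsewhere in this post; flags as I understand them, so check the man pages before copying):

```shell
# On the machine holding the old disks: snapshot every dataset recursively.
zfs snapshot -r oldpool@migrate

# Stream the whole pool (all datasets and snapshots) to the new pool over SSH.
# -R sends a full replication stream; on the receive side, -F rolls back the
# target if needed, -d preserves the dataset paths, and -u leaves the
# received datasets unmounted.
zfs send -R oldpool@migrate | ssh root@n40l.example.org "zfs recv -Fdu DiskArray"
```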

Performing the same test I did originally with my old disks, I get quite a good performance increase with these new disks and SSD!

[root@n40l ~]# spew -b 20m --write 20g /DiskArray/scratch/test.bin
WTR:   329368.36 KiB/s   Transfer time: 00:00:15    IOPS:       16.08
[root@n40l ~]# spew -b 20m --read 20g /DiskArray/scratch/test.bin
RTR:  1140657.64 KiB/s   Transfer time: 00:00:04    IOPS:       55.70

I’m satisfied with those numbers, to be honest. It’s performing well enough for me, with no lag or slow IO, so I’m happy with it!

As another follow-up to my previous post, I did end up buying two Cisco Catalyst 3508G XL aggregation switches. They aren’t the best gigabit switches; they’re actually quite old and cruddy, but I got them pretty cheap, and they are managed. They don’t even support jumbo frames, but considering the price I paid, I’m happy with them for now, until I can find better gigabit switches to replace them with.

In my previous post I was also thinking about buying another MicroServer, as HP had a £50 cash-back deal. The cash-back has actually been increased to £100, meaning that an N54L with the cash-back offer works out to only £109.99! So the temptation got to me, and I have ordered two more MicroServers.

I’ll use one for backups, and the other I’m not sure about yet! 🙂

HP MicroServer N40L with ZFS on Red Hat Enterprise Linux (RHEL)

Some time in November 2012, I decided that keeping my media (photos, videos and music) on my main storage server was a little annoying, as my family would complain each time I was making changes and the server was unavailable. So I decided I wanted a small box that would just sit in the corner, quietly and reliably serving out disk space with very little intervention from me.

After a little browsing on Ebuyer, I found the HP ProLiant MicroServer N40L. It’s a small little box with four disk bays and one optical drive bay, and it officially takes up to 8GB RAM, though people have put in 16GB without any issues. At the time, HP had a £100 cash-back deal as well, so without any delay I bought it.

The machine has sat quietly on top of my rack, running OpenFiler to serve files from a 4TB RAID-5 array for months, and it’s been great!

I have been using Linux RAID with LVM on top, and I’m a big fan of this combination, BUT I have some issues with it:

  • Running a consistency check on the RAID-5 array takes a very long time. A consistency check on a Linux RAID array requires reading every block of the array, even blocks that have not been allocated. This is a long, slow process, and the extra work the disks have to do increases the chances of one dying!
  • The above is also true when replacing a failed disk. When replacing a RAID-5 disk, the resyncing process is very long and slow: all the disks are read in order to recalculate parity for the replacement disk, and the chances of a second disk failure during the rebuild are actually quite high due to the large capacity of the disks and the amount of time the rebuild takes. There are quite a few interesting articles about this around.
  • I haven’t done any proper tests, but the performance wasn’t as great as I had hoped for; I was noticing quite bad performance when dealing with lots of small files. (I know it’s a little silly, but this was probably the biggest problem I was having, and it’s what caused me to research ZFS!)

So, in light of these issues (mainly the performance one!), and from hearing a lot of good things about it from friends (Michael and Jamie), I decided to look into ZFS.

Although I have heard a lot of good things about ZFS in the past, I always avoided it due to having to use either Solaris or a BSD variant, as licensing issues meant ZFS couldn’t be included with Linux. While there was a FUSE module for ZFS, the performance was quite bad, so I never really considered using it for a NAS.

Recently, there was an article on The Register about ZFS on Linux being “production ready”, so I decided to take the leap and move from OpenFiler to RHEL6 with ZFS on Linux!

Here is how I did so, and my experiences of it.

Hardware Specs


I will be creating a single RAIDZ pool with the four drives, with the SSD as an L2ARC cache drive.

The N40L has an internal USB port, so I will be using a SanDisk 16GB Flash drive for the OS.

I don’t plan on putting an optical drive into my N40L, so I decided to use the SATA port which HP have designated the ODD port for my SSD. In order to put the ODD port into AHCI mode, and bring the port up at 3Gb/s instead of 1.5Gb/s, I had to apply a BIOS hack which can be easily found on Google.

As I put in the note above, the two Seagate drives are terrible, and have a pretty high failure rate. I’ve had these for a few years, and they have failed and been replaced by Seagate many times. I’m only using them temporarily, and planning to replace all the drives with 2TB drives soon, keeping a backup on my main storage server.

The SSD will also be replaced later on with something a little newer, that can offer more IOPS than the current SSD I am using.


As the N40L has an internal USB port, I decided to use a USB flash drive for the OS.

I don’t think I had to do anything special during the installation of RHEL. I used my PXE booting environment and my kickstart scripts to do the base RHEL installation, but it’s nothing really fancy, so I won’t go into the installation process.

Once I had a clean RHEL environment, I added the EPEL and ZFS on Linux repositories:

[root@n40l ~]# yum localinstall --nogpgcheck http://mirror.us.leaseweb.net/epel/6/i386/epel-release-6-7.noarch.rpm http://archive.zfsonlinux.org/epel/zfs-release-1-2.el6.noarch.rpm

Next, we install ZFS:

[root@n40l ~]# yum install zfs

The ZFS on Linux documentation recommends using the vdev_id.conf file to allow the use of easy-to-remember aliases for disks. Basically, this creates symlinks in /dev/disk/by-vdev/ pointing to your real disks.

I created my vdev_id.conf file as follows:

[root@n40l ~]# cat /etc/zfs/vdev_id.conf
alias HDD-0 pci-0000:00:11.0-scsi-0:0:0:0
alias HDD-1 pci-0000:00:11.0-scsi-1:0:0:0
alias HDD-2 pci-0000:00:11.0-scsi-2:0:0:0
alias HDD-3 pci-0000:00:11.0-scsi-3:0:0:0

alias SSD-0 pci-0000:00:11.0-scsi-5:0:0:0

Once we have made the changes to the vdev_id.conf file, we trigger udev to create our symlinks:

[root@n40l ~]# udevadm trigger
[root@n40l ~]# ls -l /dev/disk/by-vdev/
total 0
lrwxrwxrwx 1 root root 9 Apr 27 16:56 HDD-0 -> ../../sda
lrwxrwxrwx 1 root root 9 Apr 27 16:56 HDD-1 -> ../../sdb
lrwxrwxrwx 1 root root 9 Apr 27 16:56 HDD-2 -> ../../sdc
lrwxrwxrwx 1 root root 9 Apr 27 16:56 HDD-3 -> ../../sdd
lrwxrwxrwx 1 root root 10 Apr 27 16:56 SSD-0 -> ../../sde

Now we can create our pool!

I decided to go with RAIDZ1, which is effectively RAID-5. I regret this decision now and should have gone with RAIDZ2 (RAID-6), but it’s too late now. :/

Although my drives use 2^9 (512) byte sectors, I decided to tell ZFS to align the partitions for Advanced Format (AF) disks, which use 2^12 (4k) byte sectors. The reasoning is that once the pool has been created, the alignment cannot be changed without destroying the pool and recreating it. I’d prefer not to destroy the pool when upgrading the disks, and if I aligned the partitions for 512-byte drives, upgrading to AF drives in the future would mean performance degradation due to bad partition/sector alignment. As far as I know, the only disadvantage of aligning for AF drives on 512-byte-sector drives is some space overhead, so you lose a little usable disk space, but I think that’s better than the alternative of having to destroy the pool to upgrade the drives!

[root@n40l ~]# zpool create -o ashift=12 DiskArray raidz HDD-0 HDD-1 HDD-2 HDD-3 cache SSD-0
[root@n40l ~]# zpool status
  pool: DiskArray
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        DiskArray   ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            HDD-0   ONLINE       0     0     0
            HDD-1   ONLINE       0     0     0
            HDD-2   ONLINE       0     0     0
            HDD-3   ONLINE       0     0     0
        cache
          SSD-0     ONLINE       0     0     0

errors: No known data errors

Magic! Our pool has been created! 😀

Now we can create a few data sets:

[root@n40l ~]# zfs create DiskArray/home
[root@n40l ~]# zfs create DiskArray/photos
[root@n40l ~]# zfs create DiskArray/scratch

Now you can fill it up! 🙂

I went further and set up NFS exports and Samba. I opted to share my data stores the normal way, using the /etc/exports and smb.conf files, though for this, Samba and NFS have to be started after ZFS has mounted the pool. ZFS does have the sharesmb and sharenfs options, which basically add the export/share to Samba and NFS as soon as it is available, but I prefer the traditional way as I am used to it. 🙂


I haven’t really done too many tests, but using spew, I get the following results:

[root@n40l ~]# spew -b 20m --write 20g /DiskArray/scratch/test.bin
WTR:    63186.63 KiB/s   Transfer time: 00:05:31    IOPS:        3.09
[root@n40l ~]# spew -b 20m --read 20g /DiskArray/scratch/test.bin
RTR:   190787.30 KiB/s   Transfer time: 00:01:49    IOPS:        9.32

It's not the greatest performance, and I'm not 100% sure whether this is what should be expected. I wish the IOPS were higher, but comparing these results to a stand-alone Seagate Barracuda 7200.12 500 GB (ST3500418AS) drive with an ext4 file system (I realise this isn't really a fair or accurate comparison!), I don't think it's too bad:

[root@desktop ~]# spew -b 20m --write 20g /mnt/data/spew.bin
WTR:   125559.47 KiB/s   Transfer time: 00:02:47    IOPS:        6.13
[root@desktop ~]# spew -b 20m --read 20g /mnt/data/spew.bin
RTR:   131002.84 KiB/s   Transfer time: 00:02:40    IOPS:        6.40

The write speed of my ZFS RAIDZ pool is about half that of the stand-alone disk, which is expected as it's calculating parity and writing to multiple disks at the same time, while the read speed is actually faster on the RAIDZ pool!

Also, as I am only on 100mbit ethernet at the moment, I am able to fully saturate the pipe when transferring large files, and I have noticed that things feel a lot more responsive now with ZFS than they were with Linux RAID + LVM + XFS/EXT4, but I haven’t got any numbers to prove that. 🙂

What’s next?

Well, as I'm using 100 mbit switches at home, not much. I'm planning on buying a SAS/SATA controller so I can add a few more drives, and maybe a ZIL drive. As mentioned above, I'm also thinking about upgrading to 2TB drives and replacing the SSD with something better, as the current one has terrible read/write speeds and doesn't even offer a very good number of IOPS.
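For what it's worth, adding a dedicated log device later shouldn't require rebuilding the pool. Assuming a hypothetical device alias SSD-LOG, it would look something like:

```shell
# Attach a dedicated ZIL (SLOG) device to the existing pool
zpool add DiskArray log SSD-LOG

# Log (and cache) devices can also be removed again without destroying the pool
zpool remove DiskArray SSD-LOG
```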
HP currently has a £50 cash-back deal on the N54L, so I’m also really tempted to buy one for backups, but we’ll see about that! 🙂


If you decide to go down the ZFS road (on Linux, BSD, or Solaris) on your N40L, I'd be very interested in hearing about your experiences, hardware specs, and performance so I can figure out whether I'm getting expected performance or terrible performance, so please leave a comment!

Open vSwitch 1.9.0 on Red Hat Enterprise Linux (RHEL) 6.4

I’ve been using Open vSwitch as a replacement for the stock bridge module in Linux for a few months now.

Open vSwitch is basically an open source virtual switch for Linux. It gives you much greater flexibility than the stock bridge module, effectively turning it into a managed, virtual layer 2 switch.

Open vSwitch has a very long list of features, but I chose it over the stock bridging module because it offers much greater flexibility with VLANs on virtual machines than is possible with the stock Linux bridge module.
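As a taste of what that flexibility looks like, putting a VM's interface on a VLAN is a one-liner with Open vSwitch (the bridge and interface names here are made up for illustration):

```shell
# Create a bridge and add the physical uplink as a trunk port
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0

# Attach a VM's tap interface as an access port on VLAN 10
ovs-vsctl add-port br0 vnet0 tag=10
```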

As my KVM servers are running an older version of Open vSwitch (1.4.6), I decided to upgrade to the latest version, which is 1.9.0 at the time of writing.

RedHat actually provide RPMs for Open vSwitch as part of a tech preview in the Red Hat OpenStack Folsom Preview repository. They also include the Open vSwitch kernel module in their kernel, but those are at version 1.7.1. I wanted 1.9.0, so I decided to write this blog post.

EDIT: 10/04/2013 – Looking closer, it looks like RedHat also have an RPM for 1.9.0, but they do not include the brcompat module. If you need this module, then you’ll have to build your own RPMs.

RedHat have actually back-ported a number of functions from newer kernels into the kernel provided with RHEL. This causes a problem when compiling the Open vSwitch kernel module as the OVS guys have also back-ported those functions and were using kernel version checks to apply those backports.

The OVS guys have pushed a patch into the OVS git repo which fixes this problem, so I won’t be using the tarball provided on the OVS site, but rather building from the OVS 1.9 branch of the git repository.

When using the git version of Open vSwitch, we need to run the bootstrap script to create the configure script etc., but this requires a newer version of autoconf than RHEL6 provides. You could compile autoconf yourself, and I'm sure someone has created a RHEL6 RPM for a newer autoconf somewhere, but I just did this part on a Fedora machine instead as it was easier:
git clone git://openvswitch.org/openvswitch
cd openvswitch
git checkout -b branch-1.9 origin/branch-1.9
./boot.sh
./configure
make dist

Now you’ll have a shiny new tarball: openvswitch-1.9.1.tar.gz

I moved this over to my dedicated RPM building virtual machine and extracted it:
tar -xf openvswitch-1.9.1.tar.gz
cd openvswitch-1.9.1

I got a compilation error when trying to build the Open vSwitch tools inside mock, as openssl-devel isn't listed as a build requirement in the spec file, so mock doesn't pull it in. It's an easy fix: I edited the spec file and added openssl/openssl-devel to it:
--- openvswitch.spec.orig 2013-04-01 18:43:50.337000000 +0100
+++ openvswitch.spec 2013-04-01 18:44:10.612000000 +0100
@@ -19,7 +19,8 @@ License: ASL 2.0
 Release: 1
 Source: openvswitch-%{version}.tar.gz
 Buildroot: /tmp/openvswitch-rpm
-Requires: openvswitch-kmod, logrotate, python
+Requires: openvswitch-kmod, logrotate, python, openssl
+BuildRequires: openssl-devel
 
 Open vSwitch provides standard network bridging functions and

Next, I created the SRPMs using mock:

mock -r epel-6-x86_64 --sources ../ --spec rhel/openvswitch.spec --buildsrpm
mv /var/lib/mock/epel-6-x86_64/result/*.rpm ./

mock -r epel-6-x86_64 --sources ../ --spec rhel/openvswitch-kmod-rhel6.spec --buildsrpm
mv /var/lib/mock/epel-6-x86_64/result/*.rpm ./

Then, actually build the RPMs:

mkdir ~/openvswitch-rpms/

mock -r epel-6-x86_64 --rebuild openvswitch-1.9.1-1.src.rpm
mv /var/lib/mock/epel-6-x86_64/result/*.rpm ~/openvswitch-rpms/

mock -r epel-6-x86_64 --rebuild openvswitch-kmod-1.9.1-1.el6.src.rpm
mv /var/lib/mock/epel-6-x86_64/result/*.rpm ~/openvswitch-rpms/

All done! Next, either sign the freshly built RPMs from ~/openvswitch-rpms/ and dump them into your yum repository, or scp them over to each host you will be installing them on, and use yum to install:
yum localinstall openvswitch-1.9.1-1.x86_64.rpm kmod-openvswitch-1.9.1-1.el6.x86_64.rpm

I won’t go into configuration of Open vSwitch in this post, but it’s not too difficult, and there are many posts elsewhere that go into this.

Connecting to Usenet via Two Internet Connections

As I mentioned in an earlier post, I have two connections from Virgin Media at home, and I wanted to use them both to grab content from usenet.

My Usenet provider is Supernews, I’ve used them for a couple of months, and from what I understand they are actually just a product of Giganews.

Supernews only actually allow you to connect to their servers from one IP per account, so even if I had set up load balancing to split connections over both lines, it would not have worked very well for usenet as I would be going out via two IP addresses! For this reason, I decided to take another route.

I have a dedicated server with OVH which has a 100mbit line, my two lines with Virgin Media are 60mbit and 30mbit, so I figured if I route my traffic out via my dedicated server, I should be able to saturate my line when usenetting. 🙂

So what I did was create two tunnels from my Cisco 2821 Integrated Services Router to my dedicated server, one tunnel per WAN connection, and basically "port forward" ports 119 and 443 coming over the tunnels to Supernews. It's working great so far and saturating both lines fully!

Here's how I did it. First, I set up the tunnels on my trusty Cisco 2821 ISR:

interface Tunnel1
description Tunnel to Dedi via WAN1
ip address
no ip redirects
no ip unreachables
no ip proxy-arp
ip tcp adjust-mss 1420
tunnel source GigabitEthernet0/0.10
tunnel mode ipip
tunnel destination

interface Tunnel2
description Tunnel to Dedi via WAN2
ip address
no ip redirects
no ip unreachables
no ip proxy-arp
ip tcp adjust-mss 1420
tunnel source GigabitEthernet0/1.11
tunnel mode ipip
tunnel destination

That isn't the complete configuration; I also decided to NAT my home network to the IPs of the two tunnels, simply because it was the quick way to do it. Without NAT on the tunnels, I would have had to put a route on my dedicated server for my home network's private IP range. Although that is easy, doing it without NAT would also have meant figuring out policy based routing on Linux to overcome asymmetric routing, and since I was mainly doing this out of curiosity to see if it would work, that can be a project for another day. 🙂
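For the curious, the NAT-free version would look roughly like this on the Linux side: one routing table per tunnel, with ip rules steering traffic that arrived via a tunnel back out the same tunnel. The table names are hypothetical and I haven't tested this:

```shell
# Give each tunnel its own routing table
echo "100 tunl1_table" >> /etc/iproute2/rt_tables
echo "101 tunl2_table" >> /etc/iproute2/rt_tables

# Default route per table, out of the matching tunnel
ip route add default dev tunl1 table tunl1_table
ip route add default dev tunl2 table tunl2_table

# Forwarded traffic that came in via a tunnel is routed back via the same one
ip rule add iif tunl1 lookup tunl1_table
ip rule add iif tunl2 lookup tunl2_table
```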

My dedicated server is running RHEL6, so to set up the tunnels on it I created the relevant ifcfg-tunl* files:

[root@dedi ~]# cat /etc/sysconfig/network-scripts/ifcfg-tunl1

[root@dedi ~]# cat /etc/sysconfig/network-scripts/ifcfg-tunl2

I don't really want to go into detail on how to configure netfilter rules using iptables, so I will only paste the relevant lines of my firewall script:

# This rule masquerades all packets going out of eth0 to the IP of eth0
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Forward packets coming in from tunl1 with the destination IP of and a destination port of either 119 or 443 (Supernews use 443 for NNTP over SSL) to Supernews' server IP
iptables -t nat -A PREROUTING -p tcp -i tunl1 -d --dport 119 -j DNAT --to
iptables -t nat -A PREROUTING -p tcp -i tunl1 -d --dport 443 -j DNAT --to

# Forward packets coming in from tunl2 with the destination IP of and a destination port of either 119 or 443 (Supernews use 443 for NNTP over SSL) to Supernews' server IP
iptables -t nat -A PREROUTING -p tcp -i tunl2 -d --dport 119 -j DNAT --to
iptables -t nat -A PREROUTING -p tcp -i tunl2 -d --dport 443 -j DNAT --to

That’s all there is to it really! Of course I have a more complete rule set, but I don’t really want to go into that in this post!

Next, I just added two servers in my usenet client, one pointing at each tunnel's IP. And magic! Now both lines are used when my usenet client is doing its thing!

Note: If you got to the end of this post, I apologize if I make no sense; I was pretty tired while writing it and really just wanted to go to sleep. If you have any questions or suggestions on how to do this better, I'd be very interested in hearing them. :~)

My Goals for 2013: CCNP and RHCE?

I've been thinking about renewing my RHCE and completing my CCNP for quite some time now, but I haven't really got around to it, mainly due to the exams being a little pricey (if I remember correctly!), but also due to not having enough time.

So for this year, I wanted to set myself a deadline to complete them. With a deadline, it is easier to visualize and plan what to study and when, and it allows you to see your progress better.

So, my goal is to complete the CCNP ideally by the end of May, or by the end of June at the latest. I think it should be possible! There are three parts to the CCNP: ROUTE, SWITCH, and TSHOOT. If I complete one per month, it should be achievable!

Like most people, I am using the Cisco CCNP Routing and Switching Official Certification Library books as my study material, and highly recommend them.

On that note, I have added a “Home Lab” page where you can see pretty pictures of my rack, and my “CCNP Lab”. It’s nothing close to something as awesome as Scott Morris’ Lab, but it’s coming along! 😉

I have read that RedHat will be releasing RHEL7 in the second half of this year, so it is a perfect opportunity to renew my RHCE! My goal end date will depend on when RHEL7 will be released, and when the test centers are actually testing under RHEL7.

Hopefully this will be earlier into the second half of the year, so I have plenty of time to take the exam before the end of December!

Both CCNP and RHCE are great certifications, which are very highly regarded by employers and professionals.

A lot of people seem to think they don't go well together, as the RHCE is better suited to system administrators and the CCNP to network administrators, but I totally disagree: I feel the line between sysadmins and netadmins is very quickly disappearing thanks to virtualization and "cloud" technologies.

Apache Traffic Server Basic Configuration on RHEL6/CentOS 6

In this guide, I will explain how to set up Apache Traffic Server with a very, very basic configuration.

I will be using RHEL6/CentOS 6, but actually creating the configuration files for Traffic Server is exactly the same on all distributions.

As a pre-requisite for setting up Traffic Server, you must know a little about the HTTP protocol, and what a reverse proxy’s job actually is.

What is Apache Traffic Server?

I don't really want to go into too much detail on this as there are many sites which explain it better than I ever could, but in short, Traffic Server is a caching proxy originally developed by Inktomi, later acquired and used by Yahoo!, and eventually donated to the Apache Software Foundation.


Apache Traffic Server is available from the EPEL repository, and this is the version I will be using.

Firstly, you must add the EPEL repositories if you haven’t already:
rpm -ivh http://mirror.us.leaseweb.net/epel/6/i386/epel-release-6-7.noarch.rpm
Next, we can just use yum to install Traffic Server:
yum install trafficserver
While we are at it, we might as well set Traffic Server to start at boot:
chkconfig trafficserver on


In this tutorial, I will only configure Apache Traffic Server to forward all requests to a single webserver.

For this, we really only need to edit two files:

  • /etc/trafficserver/records.config
    This is the main configuration file which stores all the “global” configuration options.
  • /etc/trafficserver/remap.config
    This contains mapping rules for which real web server ATS should forward requests to.

Firstly, edit records.config.

I didn’t really have to change much initially for a basic configuration.

The lines I changed were these:
CONFIG proxy.config.proxy_name STRING xantara.web.g3nius.net
CONFIG proxy.config.url_remap.pristine_host_hdr INT 1

Next we can edit remap.config.

Add the following line to the bottom:
regex_map http://(.*)/ http://webservers.hostname:80/
This should match everything and forward it to your web server.
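If you later want different backends for different sites, plain map rules work too. The hostnames below are placeholders, not part of my setup:

```
map http://static.example.com/ http://static-backend.hostname:80/
map http://blog.example.com/   http://blog-backend.hostname:80/
```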

Start traffic server:
service trafficserver start
And that’s it! It should now just work! 🙂

SSL Certificates for XMPP

Over the last few months, I have been slowly switching all my hostnames and service names from using my personal domain name “hamzahkhan.com” to another domain I have.

This is mainly because I am sharing some of the services I run with other people, and also because… well… I don’t like having my name in hostnames to be honest! 🙂

Today I finally got around to updating my Jabber/XMPP server.

In the process, I had to update the SSL certificate.

Quite some time ago, a friend of mine told me that I had created the certificate for my XMPP server incorrectly for the case where a single server serves multiple domains.

For this, you are actually supposed to have a few extra attributes in the certificate.

To add these attributes, create a file called "xmpp.cnf" with the following contents:
HOME = .

oid_section = new_oids

[ new_oids ]
xmppAddr = 1.3.6.1.5.5.7.8.5
SRVName = 1.3.6.1.5.5.7.8.7

[ req ]
default_bits = 4096
default_keyfile = privkey.pem
distinguished_name = distinguished_name
req_extensions = v3_extensions
x509_extensions = v3_extensions
prompt = no

[ distinguished_name ]

# This is just your standard stuff!
countryName = GB
stateOrProvinceName = England
localityName = Cambridge
organizationName = G3nius.net
organizationalUnitName = XMPP Services
emailAddress = [email protected]

# Hostname of the XMPP server.
commonName = xmpp.g3nius.net

[ v3_extensions ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature,keyEncipherment
subjectAltName = @subject_alternative_name

[ subject_alternative_name ]

# Do this for each of your domains
DNS.1 = domain1.com
otherName.0 = xmppAddr;FORMAT:UTF8,UTF8:domain1.com
otherName.1 = SRVName;IA5STRING:_xmpp-client.domain1.com
otherName.2 = SRVName;IA5STRING:_xmpp-server.domain1.com

DNS.2 = domain2.com
otherName.3 = xmppAddr;FORMAT:UTF8,UTF8:domain2.com
otherName.4 = SRVName;IA5STRING:_xmpp-client.domain2.com
otherName.5 = SRVName;IA5STRING:_xmpp-server.domain2.com

DNS.3 = domain3.com
otherName.6 = xmppAddr;FORMAT:UTF8,UTF8:domain3.com
otherName.7 = SRVName;IA5STRING:_xmpp-client.domain3.com
otherName.8 = SRVName;IA5STRING:_xmpp-server.domain3.com

Then you just continue the "certificate request" creation as normal, specifying the configuration file on the command line:

# Create the private key
openssl genrsa -des3 -out xmpp.g3nius.net.key 4096

# Create the certificate request:
openssl req -config xmpp.cnf -new -key xmpp.g3nius.net.key -out xmpp.g3nius.net.csr

That’s all!

Now you can either use the CSR to request a certificate from CACert.org or anywhere else, or you could self-sign it and point your XMPP server at your shiny new certificate!
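If you go the self-signed route, the signing step would look something like this. The -extfile/-extensions flags should carry the subjectAltName entries from xmpp.cnf into the final certificate, and 365 days is just an example validity:

```shell
# Self-sign the CSR with the same private key
openssl x509 -req -days 365 -in xmpp.g3nius.net.csr \
    -signkey xmpp.g3nius.net.key \
    -extfile xmpp.cnf -extensions v3_extensions \
    -out xmpp.g3nius.net.crt

# Check that the SAN entries made it into the certificate
openssl x509 -in xmpp.g3nius.net.crt -noout -text | grep -A1 'Subject Alternative Name'
```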