Home Lab Network Redesign with Mikrotik Routers

I have two cable connections from Virgin Media coming into my house due to some annoying contract problems.

I originally had one line on the 60Mbit package and the other on 100Mbit, but when Virgin Media upgraded me to 120Mbit I downgraded the 60Mbit line to 30Mbit to reduce costs.

Since I got into this strange arrangement with Virgin Media, I have been using a Cisco 1841 Integrated Services Router on the 30Mbit line and a Cisco 2821 Integrated Services Router on the 120Mbit line, but I found that I wasn’t able to max out the faster line with the Cisco 2821. Looking at Cisco’s performance sheet, the 2821 is only really designed to support lines of up to around 87Mbit.

So naturally, it was time to upgrade! Initially I wanted to get a faster Cisco router, but looking at the second-generation ISRs, that would be a bit pricey!

I did actually upgrade all my 7204 VXRs to have NPE-400 modules, which according to the performance sheet should do around 215Mbit, but the 7204s are extremely loud, so I only switch them on when I am using them.

Michael and Jamie have always been talking about Mikrotik routers, so I figured that since Cisco is a no-go, I’d give Mikrotik a chance. I ended up buying two RouterBOARD 2011UAS-RM units from WiFi Stock.

To put the RB2011UAS-RM boxes in, I decided I was going to restructure my network a bit. I will be making a series of posts discussing my redesigned network.

My goals for the redesign were as follows:

  • The RB-2011UAS-RM boxes will only function as edge routers, encapsulating traffic in GRE tunnels, and that’s all.
  • There will be a link between both edge routers, with a BGP peering for redirecting traffic should one of my lines go down.
  • They will have GRE tunnels to all my dedicated servers/VPSs.
  • I will use Quagga on all dedicated servers, and VPSs outside my network to create BGP peerings with my edge routers.
  • I wanted to route all my internet out of a server I currently have hosted with Rapid Switch, so BGP on the RapidSwitch box (called diamond) will have to push down a default route.
  • I wanted to use my Cisco ASA 5505 Adaptive Security Appliances as firewalls between the edge routers and the core.
  • I recently bought a Cisco 2851 Integrated Services Router, which I will use as a “core” router.
  • I wanted as much redundancy as possible.

In my next post I will create a diagram of what I will be doing, and discuss the setup of the server I have hosted at RapidSwitch.

As I have never used Mikrotik routers before, I will also attempt to discuss my experiences of RouterOS so far as I go along.

Upgrading Disks in my Home NAS

It’s been a few weeks since I switched my NAS from LVM (no RAID, pretty much RAID0) to ZFS. It’s been working great, but a couple of days ago, I received a nice email from Smartmontools informing me that one of my disks was about to die! I noticed that I was also getting errors on one of the disks during a scrub!

So it’s very lucky that I decided to change to ZFS in time! Otherwise I would have had a bit of a problem (yes, I realise it was quite silly for me to use RAID0 on my NAS in the first place!). 🙂

Anyway, instead of just replacing the single failed disk, I decided to take the opportunity to instead buy brand new disks.

The old disks were:

I decided to upgrade to:

I’m not really a big fan of Western Digital disks as I’ve had a lot of issues with them in the past; I usually favour Seagate. The reason I chose to give WD another chance is that I have read a lot of reviews rating these disks quite highly for performance and reliability. On top of that, looking at Seagate’s site, they rank their “consumer” grade disks pretty poorly in terms of reliability (MTBF), only seem to provide a pretty ridiculous 1 year warranty on them, and their higher end disks cost a little too much for home use!

I was unable to just do a “live” switch of the disks because ZFS was using ashift=9, even though I had specified ashift=12 when creating my ZFS pool. The new disks use 4 KiB sectors, meaning that if ZFS was aligning for 512 byte sectors I’d get quite a large performance drop. My only option was to create a new pool and use “zfs send” to copy my datasets over to the new pool.
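If you want to check what alignment a pool actually ended up with, zdb can show the ashift recorded in the pool’s configuration (pool name as above; depending on your ZFS on Linux version you may need to point zdb at the pool cachefile):

```
# Print the cached pool config and pull out the ashift values
zdb -C DiskArray | grep ashift
```

An ashift of 9 means 512-byte alignment, 12 means 4 KiB alignment.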

It wasn’t a huge problem: I just put the old disks into a different machine, and after creating my new pool in the N40L, I used zfs send to send all the datasets from my old disks over. Fast forward a day or so, and everything was back to normal. 🙂

Performing the same test I did originally with my old disks, I get quite a good performance increase with these new disks and SSD!

[root@n40l ~]# spew -b 20m --write 20g /DiskArray/scratch/test.bin
WTR:   329368.36 KiB/s   Transfer time: 00:00:15    IOPS:       16.08
[root@n40l ~]# spew -b 20m --read 20g /DiskArray/scratch/test.bin
RTR:  1140657.64 KiB/s   Transfer time: 00:00:04    IOPS:       55.70

I’m satisfied with those numbers to be honest, it’s performing well enough for me, no lag, or slow IO, so I’m happy with it!

As another follow up to my previous post, I did end up buying two Cisco Catalyst 3508G XL Aggregation Switches. They aren’t the best gigabit switches; they are actually quite old and cruddy, but I got them pretty cheap, and they are managed. They don’t even support jumbo frames, but considering the price I got them for, I’m happy with them for now, until I can find better gigabit switches to replace them with.

In my previous post I was also thinking about buying another MicroServer, as HP had a £50 cash-back deal. The cash-back has actually been increased to £100, meaning that buying an N54L with the cash-back offer works out to only £109.99! So the temptation got to me, and I have ordered two more MicroServers.

I’ll use one for backups, and the other I’m not sure about yet! 🙂

HP MicroServer N40L with ZFS on Red Hat Enterprise Linux (RHEL)

Some time back in November 2012, I decided that keeping my media (photos, videos and music) on my main storage server was a little annoying, as my family would complain each time I was making changes and the server was unavailable. So, I decided I wanted a small box that would just sit in the corner quietly and reliably serve out disk space with very little intervention from me.

After a little browsing on Ebuyer, I found the HP ProLiant MicroServer N40L. It’s a neat little box with four disk bays, one optical drive bay, and officially takes up to 8GB RAM, though people have put in 16GB without any issues. At the time, HP had a £100 cash-back deal as well, so without any delay I bought it.

The machine has sat quietly on top of my rack, running OpenFiler to serve files from a 4TB RAID-5 array for months, and it’s been great!

I have been using Linux RAID with LVM on top, and I’m a big fan of this combination, BUT I have some issues with it:

  • Running a consistency check on the RAID-5 array takes a very long time. A check on a Linux RAID array has to read every block of the array, even blocks that have never been allocated. This is a long, slow process, and the extra work the disks have to do increases the chances of one of them dying!
  • The same is true when replacing a failed disk. When replacing a RAID-5 disk, the resyncing process is very long and slow: all the disks are read in order to recalculate parity for the replacement disk, and the chances of a second disk failure during the rebuild are actually quite high due to the large capacity of the disks and the amount of time the rebuild takes. There are quite a few interesting articles about this around.
  • I haven’t done any proper tests, but the performance wasn’t as great as I had hoped for; I was noticing quite bad performance when dealing with lots of small files. (I know it’s a little silly, but this was probably the biggest problem I was having, and it’s what caused me to research ZFS!)

So, in light of these issues (mainly the performance one!), and from hearing a lot of good things about it from friends (Michael and Jamie), I decided to look into ZFS.

Although I have heard a lot of good things about ZFS in the past, I always avoided it due to having to use either Solaris or a BSD variant, as licensing issues meant ZFS couldn’t be included with Linux. While there was a FUSE module for ZFS, the performance was quite bad, so I never really considered using it for a NAS.

Recently, there was an article on The Register about ZFS on Linux being “production ready”, so I decided to take the leap and move from OpenFiler to RHEL6 with ZFS on Linux!

Here’s how I did it, and my experiences of it.

Hardware Specs


I will be creating a single RAIDZ pool with the four drives, with the SSD as an L2ARC cache device.

The N40L has an internal USB port, so I will be using a SanDisk 16GB Flash drive for the OS.

I don’t plan on putting an optical drive into my N40L, so I decided to use the SATA port which HP have designated as the ODD port for my SSD. In order to put the ODD port into AHCI mode, and bring the port up at 3Gb/s instead of 1.5Gb/s, I had to apply a BIOS hack which can be easily found on Google.

As I put in the note above, the two Seagate drives are terrible, and have a pretty high failure rate. I’ve had these for a few years, and they have failed and been replaced by Seagate many times. I’m only using them temporarily, and planning to replace all the drives with 2TB drives soon, and keep a backup on my main storage server.

The SSD will also be replaced later on with something a little newer, that can offer more IOPS than the current SSD I am using.



I don’t think I had to do anything special during the installation of RHEL. I used my PXE booting environment and my kickstart scripts to do the base RHEL installation, but it’s nothing really fancy, so I won’t go into the installation process.

Once I had a clean RHEL environment, I added the EPEL and ZFS on Linux repositories:

[root@n40l ~]# yum localinstall --nogpgcheck http://mirror.us.leaseweb.net/epel/6/i386/epel-release-6-7.noarch.rpm http://archive.zfsonlinux.org/epel/zfs-release-1-2.el6.noarch.rpm

Next, we install ZFS:

[root@n40l ~]# yum install zfs

The ZFS on Linux documents recommend using the vdev_id.conf file to allow the use of easy to remember aliases for disks. Basically, what this does is create symlinks in /dev/disk/by-vdev/ pointing to your real disks.

I created my vdev_id.conf file as follows:

[root@n40l ~]# cat /etc/zfs/vdev_id.conf
alias HDD-0 pci-0000:00:11.0-scsi-0:0:0:0
alias HDD-1 pci-0000:00:11.0-scsi-1:0:0:0
alias HDD-2 pci-0000:00:11.0-scsi-2:0:0:0
alias HDD-3 pci-0000:00:11.0-scsi-3:0:0:0

alias SSD-0 pci-0000:00:11.0-scsi-5:0:0:0

Once we have made the changes to the vdev_id.conf file, we must trigger udev to create our symlinks:

[root@n40l ~]# udevadm trigger
[root@n40l ~]# ls -l /dev/disk/by-vdev/
total 0
lrwxrwxrwx 1 root root 9 Apr 27 16:56 HDD-0 -> ../../sda
lrwxrwxrwx 1 root root 9 Apr 27 16:56 HDD-1 -> ../../sdb
lrwxrwxrwx 1 root root 9 Apr 27 16:56 HDD-2 -> ../../sdc
lrwxrwxrwx 1 root root 9 Apr 27 16:56 HDD-3 -> ../../sdd
lrwxrwxrwx 1 root root 10 Apr 27 16:56 SSD-0 -> ../../sde

Now we can create our pool!

I decided to go with RAIDZ1, which is effectively RAID-5. I regret this decision now and should have gone with RAIDZ2 (RAID-6), but it’s too late now. :/

Although my drives use 2^9 (512) byte sectors, I decided to tell ZFS to align the partitions for Advanced Format (AF) disks, which use 2^12 (4096) byte sectors. The reasoning is that, once the pool has been created, the alignment cannot be changed without destroying the pool and recreating it. I’d prefer not to destroy the pool when upgrading the disks, and aligning the partitions for 512 byte drives would mean that if I upgrade to AF drives in the future, I’d see performance degradation due to bad partition/sector alignment. As far as I know, the only disadvantage to aligning for AF drives on 512-byte sector drives is some disk space overhead, so you lose a little usable space, but I think it’s better than the alternative of having to destroy the pool to upgrade the drives!

[root@n40l ~]# zpool create -o ashift=12 DiskArray raidz HDD-0 HDD-1 HDD-2 HDD-3 cache SSD-0
[root@n40l ~]# zpool status
  pool: DiskArray
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        DiskArray   ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            HDD-0   ONLINE       0     0     0
            HDD-1   ONLINE       0     0     0
            HDD-2   ONLINE       0     0     0
            HDD-3   ONLINE       0     0     0
        cache
          SSD-0     ONLINE       0     0     0

errors: No known data errors

Magic! Our pool has been created! 😀

Now we can create a few data sets:

[root@n40l ~]# zfs create DiskArray/home
[root@n40l ~]# zfs create DiskArray/photos
[root@n40l ~]# zfs create DiskArray/scratch

Now you can fill it up! 🙂

I went further and set up NFS exports and Samba. I opted to share my data stores the normal way using the /etc/exports and smb.conf files, but for this, Samba and NFS have to be started after ZFS has mounted the pool. ZFS does have the sharesmb and sharenfs options, which basically add the export/share to Samba and NFS as soon as it is available, but I prefer the traditional way as I am used to it. 🙂
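For anyone who prefers the native route I skipped, it’s roughly this (dataset names from above; the NFS export options are just an example):

```
# Let ZFS manage the exports itself, per dataset
zfs set sharenfs='rw=@192.168.0.0/24' DiskArray/photos
zfs set sharesmb=on DiskArray/photos
# Verify what's being shared
zfs get sharenfs,sharesmb DiskArray/photos
```

With this approach the shares come and go with the pool automatically, which is exactly the ordering problem the traditional way makes you solve yourself.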


I haven’t really done too many tests, but using spew, I get the following results:

[root@n40l ~]# spew -b 20m --write 20g /DiskArray/scratch/test.bin
WTR:    63186.63 KiB/s   Transfer time: 00:05:31    IOPS:        3.09
[root@n40l ~]# spew -b 20m --read 20g /DiskArray/scratch/test.bin
RTR:   190787.30 KiB/s   Transfer time: 00:01:49    IOPS:        9.32

It’s not the greatest performance, and I’m not 100% sure if this is what should be expected; I wish the IOPS were higher. But comparing these results to a stand-alone Seagate Barracuda 7200.12 500 GB (ST3500418AS) drive with an ext4 file system (I realise this isn’t really a good or accurate way to compare!), I don’t think it’s too bad:

# spew -b 20m --write 20g /mnt/data/spew.bin
WTR:   125559.47 KiB/s   Transfer time: 00:02:47    IOPS:        6.13
# spew -b 20m --read 20g /mnt/data/spew.bin
RTR:   131002.84 KiB/s   Transfer time: 00:02:40    IOPS:        6.40

The write speed of my ZFS RAIDZ pool seems to be half of the stand-alone disk, which is totally expected as it’s calculating parity and writing to multiple disks at the same time, and the read speed actually seems to be faster for my RAIDZ pool!

Also, as I am only on 100Mbit Ethernet at the moment, I am able to fully saturate the pipe when transferring large files, and I have noticed that things feel a lot more responsive now with ZFS than they did with Linux RAID + LVM + XFS/EXT4, but I haven’t got any numbers to prove that. 🙂

What’s next?

Well, as I’m using 100Mbit switches at home, not much. I’m planning on buying a SAS/SATA controller so I can add a few more drives and maybe a ZIL drive. As mentioned above, I’m also thinking about upgrading the drives to 2TB drives, and replacing the SSD with something better, as the current one has terrible read/write speeds and doesn’t even offer a very good number of IOPS.
HP currently has a £50 cash-back deal on the N54L, so I’m also really tempted to buy one for backups, but we’ll see about that! 🙂


If you decide to go down the ZFS road (on Linux, BSD or Solaris) on your N40L, I’d be very interested in hearing your experiences, hardware specs, and performance so I can figure out if I’m getting expected performance or terrible performance, so please leave a comment!

Open vSwitch 1.9.0 on Red Hat Enterprise Linux (RHEL) 6.4

I’ve been using Open vSwitch as a replacement for the stock bridge module in Linux for a few months now.

Open vSwitch is basically an open source virtual switch for Linux. It gives you much greater flexibility than the stock bridge module, effectively turning it into a managed, virtual layer 2 switch.

Open vSwitch has a very long list of features, but I chose to use it instead of the stock bridging module because Open vSwitch offers much greater flexibility with VLANing on Virtual Machines than what is possible with the stock Linux bridge module.
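To give an idea of what that flexibility looks like in practice, tagging a VM’s interface into a VLAN is a one-liner with ovs-vsctl (the bridge and interface names here are made up):

```
# Create a bridge for the VMs to attach to
ovs-vsctl add-br ovsbr0
# Add a VM's tap device as an access port on VLAN 10
ovs-vsctl add-port ovsbr0 vnet0 tag=10
# Add another as a trunk carrying only VLANs 10 and 20
ovs-vsctl add-port ovsbr0 vnet1 trunks=10,20
```

Doing the same with the stock bridge module means juggling VLAN subinterfaces and one bridge per VLAN, which gets messy quickly.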

As my KVM servers are running an older version of Open vSwitch (1.4.6), I decided to upgrade to the latest version, which is 1.9.0 at the time of writing this post.

RedHat actually provide RPMs for Open vSwitch as part of a tech preview in the Red Hat OpenStack Folsom Preview repository. They also include the Open vSwitch kernel module in their kernel, but they are using version 1.7.1. I wanted 1.9.0, so I decided to write this blog post.

EDIT: 10/04/2013 – Looking closer, it looks like RedHat also have an RPM for 1.9.0, but they do not include the brcompat module. If you need this module, then you’ll have to build your own RPMs.

RedHat have actually back-ported a number of functions from newer kernels into the kernel provided with RHEL. This causes a problem when compiling the Open vSwitch kernel module as the OVS guys have also back-ported those functions and were using kernel version checks to apply those backports.

The OVS guys have pushed a patch into the OVS git repo which fixes this problem, so I won’t be using the tarball provided on the OVS site, but rather building from the OVS 1.9 branch of the git repository.

When using the git version of Open vSwitch, we need to run the bootstrap script to create the configure script etc., but this requires a newer version of autoconf. You could compile autoconf yourself, and I’m sure someone has created a RHEL6 RPM for a newer autoconf somewhere, but I just did this part on a Fedora machine instead as it was easier:
git clone git://openvswitch.org/openvswitch
cd openvswitch
git checkout -b branch-1.9 origin/branch-1.9
./boot.sh
./configure
make dist

Now you’ll have a shiny new tarball: openvswitch-1.9.1.tar.gz

I moved this over to my dedicated RPM building virtual machine and extracted it:
tar -xf openvswitch-1.9.1.tar.gz
cd openvswitch-1.9.1

I got a compilation error when trying to build the Open vSwitch tools inside mock, as openssl-devel isn’t listed as a requirement in the spec file, so mock doesn’t pull it in. It’s an easy fix: I edited the spec file and added openssl/openssl-devel to it:
--- openvswitch.spec.orig 2013-04-01 18:43:50.337000000 +0100
+++ openvswitch.spec 2013-04-01 18:44:10.612000000 +0100
@@ -19,7 +19,8 @@ License: ASL 2.0
Release: 1
Source: openvswitch-%{version}.tar.gz
Buildroot: /tmp/openvswitch-rpm
-Requires: openvswitch-kmod, logrotate, python
+Requires: openvswitch-kmod, logrotate, python, openssl
+BuildRequires: openssl-devel

Open vSwitch provides standard network bridging functions and

Next, I created the SRPMs using mock:

mock -r epel-6-x86_64 --sources ../ --spec rhel/openvswitch.spec --buildsrpm
mv /var/lib/mock/epel-6-x86_64/result/*.rpm ./

mock -r epel-6-x86_64 --sources ../ --spec rhel/openvswitch-kmod-rhel6.spec --buildsrpm
mv /var/lib/mock/epel-6-x86_64/result/*.rpm ./

Then, actually build the RPMs:

mkdir ~/openvswitch-rpms/

mock -r epel-6-x86_64 --rebuild openvswitch-1.9.1-1.src.rpm
mv /var/lib/mock/epel-6-x86_64/result/*.rpm ~/openvswitch-rpms/

mock -r epel-6-x86_64 --rebuild openvswitch-kmod-1.9.1-1.el6.src.rpm
mv /var/lib/mock/epel-6-x86_64/result/*.rpm ~/openvswitch-rpms/

All done! Next, either sign and dump the freshly built RPMs from ~/openvswitch-rpms/ into your yum repository, or scp them over to each host you will be installing them on, and use yum to install:
yum localinstall openvswitch-1.9.1-1.x86_64.rpm kmod-openvswitch-1.9.1-1.el6.x86_64.rpm

I won’t go into configuration of Open vSwitch in this post, but it’s not too difficult, and there are many posts elsewhere that go into this.

Connecting to Usenet via Two Internet Connections

As I mentioned in an earlier post, I have two connections from Virgin Media at home, and I wanted to use them both to grab content from usenet.

My Usenet provider is Supernews, I’ve used them for a couple of months, and from what I understand they are actually just a product of Giganews.

Supernews only actually allow you to connect to their servers from one IP per account, so even if I had set up load balancing to split connections over both my lines, it would not have worked very well for usenet, as I would be going out via two IP addresses! So for this reason I decided to take another route.

I have a dedicated server with OVH which has a 100Mbit line, and my two lines with Virgin Media are 60Mbit and 30Mbit, so I figured that if I route my traffic out via my dedicated server, I should be able to saturate both lines when usenetting. 🙂

So the way I did this was to create two tunnels on my Cisco 2821 Integrated Services Router going to my dedicated server, one tunnel per WAN connection, and basically “port forward” ports 119 and 443 coming over the tunnels to Supernews. It’s working great so far and saturating both lines fully!

Here’s how. First, I set up the tunnels on my trusty Cisco 2821 ISR:

interface Tunnel1
description Tunnel to Dedi via WAN1
ip address
no ip redirects
no ip unreachables
no ip proxy-arp
ip tcp adjust-mss 1420
tunnel source GigabitEthernet0/0.10
tunnel mode ipip
tunnel destination

interface Tunnel2
description Tunnel to Dedi via WAN2
ip address
no ip redirects
no ip unreachables
no ip proxy-arp
ip tcp adjust-mss 1420
tunnel source GigabitEthernet0/1.11
tunnel mode ipip
tunnel destination

That isn’t the complete configuration: I also decided to NAT my home network to the IPs of the two tunnels, just to get it done quickly. If I had not used NAT on the two tunnels, I would have had to put a route on my dedicated server for my home network’s private IP range. Although that is easy, I was mainly doing this out of curiosity to see if it would work, and to do it without NAT on the tunnels I would have had to figure out how to do policy based routing in order to overcome asymmetric routing on Linux. That can be a project for another day. 🙂
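For reference, the policy-based routing approach would go roughly like this on the Linux side (the LAN range, marks and table numbers are hypothetical; this is the part I skipped):

```
# Mark connections by the tunnel they arrived on, and restore the
# mark on packets so replies can be routed back the same way
iptables -t mangle -A PREROUTING -i tunl1 -j CONNMARK --set-mark 1
iptables -t mangle -A PREROUTING -i tunl2 -j CONNMARK --set-mark 2
iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark

# One routing table per tunnel, each routing the home LAN back
# over its own tunnel (assuming the home LAN is 192.168.1.0/24)
ip route add 192.168.1.0/24 dev tunl1 table 101
ip route add 192.168.1.0/24 dev tunl2 table 102

# Use the restored mark to pick the right table for reply traffic
ip rule add fwmark 1 lookup 101
ip rule add fwmark 2 lookup 102
```

The connection marks are what solve the asymmetry: replies leave via whichever tunnel the original connection came in on.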

My dedicated server is running RHEL6, so to set up the tunnels on it I created the relevant ifcfg-tunl* files:

[root@dedi ~]# cat /etc/sysconfig/network-scripts/ifcfg-tunl1

[root@dedi ~]# cat /etc/sysconfig/network-scripts/ifcfg-tunl2
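For reference, an IPIP tunnel ifcfg file on RHEL6 generally looks along these lines (the addresses here are placeholders rather than my real ones):

```
# /etc/sysconfig/network-scripts/ifcfg-tunl1 -- illustrative values only
DEVICE=tunl1
TYPE=IPIP
ONBOOT=yes
BOOTPROTO=none
# The far endpoint of the tunnel (my home WAN IP)
PEER_OUTER_IPADDR=198.51.100.10
# The address on the tunnel interface itself
MY_INNER_IPADDR=10.99.0.2/30
```

The inner addresses need to match whatever you put on the Tunnel interfaces on the Cisco side.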

I don’t really want to go into detail on how to configure netfilter rules using iptables, so I will only paste the relevant lines of my firewall script:

# This rule masquerades all packets going out of eth0 to the IP of eth0
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Forward packets coming in from tunl1 with the destination IP of and a destination port of either 119 or 443 (Supernews use 443 for NNTP over SSL) to Supernews' server IP
iptables -t nat -A PREROUTING -p tcp -i tunl1 -d --dport 119 -j DNAT --to
iptables -t nat -A PREROUTING -p tcp -i tunl1 -d --dport 443 -j DNAT --to

# Forward packets coming in from tunl2 with the destination IP of and a destination port of either 119 or 443 (Supernews use 443 for NNTP over SSL) to Supernews' server IP
iptables -t nat -A PREROUTING -p tcp -i tunl2 -d --dport 119 -j DNAT --to
iptables -t nat -A PREROUTING -p tcp -i tunl2 -d --dport 443 -j DNAT --to

That’s all there is to it really! Of course I have a more complete rule set, but I don’t really want to go into that in this post!

Next, I just added two servers in my usenet client, one pointing at the IP of each tunnel. And magic! Now both lines will be used when my usenet client is doing its thing!

Note: If you got to the end of this post, I apologize if I make no sense, I was pretty tired while writing this post, and really just wanted to go to sleep. If you have any questions or suggestions on how to do this better, I’d be very interested in hearing them.  :~)

Dead Cisco Catalyst 3560

I’ve been trying to acquire a Cisco Catalyst 3560, as it provides features which are not supported by my Catalyst 3550s, such as Private VLANs. I believe the QoS features differ on the 3560 as well.

So, as I was browsing eBay (one of my favourite pastimes! :P), I found an auction for a WS-C3560-8PC-S which had been labelled “untested”. From past experience, I have found that listings stating the device hasn’t been tested are usually faulty devices, but I thought I would take the risk anyway. I was hoping it would be some small issue which I could either work around or repair, such as a bad port, or a screwed up IOS image which I could just reload myself (hey, I’ve seen devices sell on eBay pretty cheap because non-techy people assumed they were broken when the IOS image was just missing!). But I guess my luck was bad, and two days after the end of the auction, I received a large green paperweight. 🙁

After plugging the power in, the LEDs on the front of the Catalyst 3560 come on, but they just stay solid, whereas they should blink during the boot process. I plugged the console cable in, only to find that there is no output whatsoever, not even from ROMMON, which runs before IOS is even loaded.

I have pretty little knowledge of electronics, but I did test the basic things I knew how to, such as checking that the PSU was giving out the correct voltages, which it was, but that’s pretty much all I know how to check!

From my limited knowledge of electronics, I assume that something must be wrong with the Boot ROM chip, since not even ROMMON is able to start. None of the parts in a Catalyst 3560 are field replaceable, so I don’t think I can test any parts by swapping them around either.

I am quite disappointed that this Catalyst 3560 is dead, but I tried my luck, and it turned out bad, no biggie. 🙂

Hopefully I will be able to find a working Catalyst 3560 soon!

If anyone has any ideas I can try in order to fix this device, I would be quite eager to make an attempt! 🙂

Two more Cisco 7204 VXRs Added to My Home Lab!

Last week, I was browsing eBay (as you do!), and noticed two Cisco 7204 VXR router auctions which were about to end pretty soon. The price was £0.99, and there were no bids! So, I figured I would go ahead and bid. To my surprise, I won both!

I managed to win one of them for £20, and the other for £0.99! £20.99 for two 7204 VXRs isn’t bad at all; a quick search on eBay shows that the NPE-300, which came with both routers, generally sells for around £30 on its own, so I’m quite pleased.

The I/O controllers (C7200-I/O) are a bit old, and use a DB-25 connector for the console port rather than the normal RJ-45 that most Cisco devices use. The I/O controllers don’t have any Ethernet ports either, but I did get some FastEthernet modules with both routers. I will probably upgrade the I/O controllers to C7200-I/O-2FE/E some time this year, but for now, it’ll do. 🙂

I now have three 7204 VXRs in my rack, the first one I bought last year some time.

In the picture:

  • Top 7204 VXR has: NPE-225, 128MB RAM, C7200-I/O, Dual FastEthernet Module and an Enhanced ATM module (ATM PA-A3).
  • Middle 7204 VXR has: NPE-300 with 256MB RAM (if I remember correctly), C7200-I/O, Single Ethernet Module, and an Enhanced ATM module (ATM PA-A3).
  • Bottom 7204 VXR has: NPE-300 with 256MB RAM (if I remember correctly), C7200-I/O-2FE/E, and an Enhanced ATM module (ATM PA-A3).

I’m not really sure if the Enhanced ATM modules will be of any use to me, as I don’t think it is possible to use them back-to-back (please correct me if I am wrong!). I do want to get a few Cisco PA-4T+ 4 Port Serial modules but that’s for later on.

Cisco ASA 5505 RAM Upgrade

Edit: 3rd June 2014 – If you are reading this post, you should check out my follow up post: Cisco ASA 9.2 on Cisco ASA 5505 with Unsupported Memory Configuration Fail.

I have two Cisco ASA 5505s in my home lab which I acquired almost two years ago from eBay. I was pretty lucky, as I paid under £70 for each because the seller wasn’t too sure what they were! Looking on eBay now, they are selling for around £120! 🙂

Pretty much straight away, I wanted to upgrade to the ASA 8.3 code, which required a RAM upgrade, so I upgraded it.

Starting from ASA 8.3, the minimum RAM required to run the 8.3 code and newer on a 5505 is 512MB. This is also the maximum officially supported amount of RAM.

Buying official Cisco RAM is, as always, quite expensive, but since the ASA 5505 uses standard DDR RAM, it is actually possible to use third-party RAM in the ASA 5505.

When I originally performed this upgrade, I found that on various forums many people had actually upgraded past the official supported amount of RAM, and upgraded their ASA 5505s to 1GB RAM.

Intrigued by this, and needing the extra RAM for the 8.3 code anyway, I decided to upgrade both my ASAs to 1GB as well!

There aren’t any groundbreaking advantages to upgrading to 1GB as far as I know. I’m guessing the ASA will be able to hold a lot more entries in the NAT table, but I don’t really push my ASAs to their limits anyway.

I ended up buying two CT12864Z40B sticks from Crucial, which have worked flawlessly for the past year.

Almost 14 months later, I needed to crack open the case of the ASAs again to get to the CompactFlash. I thought I’d make a quick post about the RAM upgrade process while I’m at it.

The upgrade is very easy, anyone could do it, but I was bored, and wanted to write a blog post! 🙂

  1. Place the ASA upside down, and unscrew the three screws at the bottom.
    Cisco ASA 5505 Screws
  2. Remove the cover
    Cisco ASA 5505 Internals
  3. Take out the old RAM, and put in the new RAM.
    Cisco ASA 5505 RAM
  4. You can optionally also upgrade the CompactFlash at this time. I’m using the stock 128MB that came with the ASAs at the moment, but I will probably upgrade sometime soon. 🙂
    Cisco ASA 5505 CompactFlash
  5. Close everything up, and plug in the power!
    Cisco ASA 5505 Failover Pair

All done! I haven’t got a screenshot of it booting at the moment, but I will probably update this post tomorrow with one.

I plan to upgrade the CompactFlash to 4GB too, so I have more working space when I am using the “packet sniffer” built into the ASA. This is also a very easy process, but you have to be careful to copy over your licence files as well. I will make a post about it when I have done the upgrade.

My Goals for 2013: CCNP and RHCE?

I’ve been thinking about renewing my RHCE and completing my CCNP for quite some time now, but I haven’t really got around to it, mainly due to the exams being a little pricey (if I remember correctly!), but also due to not having enough time.

So for this year, I wanted to set a deadline for myself to complete them. With a deadline, it is easier to visualize and plan what to study and when, and it allows you to see your progress better.

So, my goal is to complete the CCNP ideally by the end of May, or by the end of June at the latest. I think it should be possible! There are three parts to the CCNP: ROUTE, SWITCH, and TSHOOT. If I complete one per month, it should be achievable!

Like most people, I am using the Cisco CCNP Routing and Switching Official Certification Library books as my study material, and highly recommend them.

On that note, I have added a “Home Lab” page where you can see pretty pictures of my rack, and my “CCNP Lab”. It’s nothing close to something as awesome as Scott Morris’ Lab, but it’s coming along! 😉

I have read that RedHat will be releasing RHEL7 in the second half of this year, so it is a perfect opportunity to renew my RHCE! My goal end date will depend on when RHEL7 will be released, and when the test centers are actually testing under RHEL7.

Hopefully this will be earlier into the second half of the year, so I have plenty of time to take the exam before the end of December!

Both CCNP and RHCE are great certifications, which are very highly regarded by employers and professionals.

A lot of people seem to think they don’t go well together, as RHCE is better for system administrators and CCNP is more for network administrators, but I totally disagree, since I feel the lines between sysadmins and netadmins are very quickly disappearing thanks to virtualization and “cloud” technologies.

Moving Back to London and Virgin Media

Two weeks ago, I moved back to London.

At my place in Cambridge, I had a 100mbit connection from Virgin Media, which I wanted to cancel as my parents already have a connection from Virgin Media so there wasn’t any need for me to move mine along with me!

So I called up VM, and they informed me that as I was still in contract, I would have to pay some ridiculous amount to cancel the contract (I think it may have been £280).

The other option, though, was one I was not aware of!

Usually, Virgin Media do not allow people to have two connections from them under a single address, BUT, in cases such as mine, they allow it!

So, instead of paying £280 or whatever it was, I decided it’d work out much cheaper if I just move my connection with me. It’d be nice to have anyway! 🙂

Today, the Virgin Media guys arrived at my house. Their first reaction was shock at seeing my server rack, but they were pretty nice guys. I did have to explain what I use all this equipment for, and had to explain how terrible the SuperHub actually is! 🙂

They didn’t really have to do much: they just had to add a splitter and give me two new coax cables going from the splitter to the two modems. They did mention their frustration at Virgin Media about having to do installations for the more technical people when those people could really just do it themselves, and I totally agree! 😀

For some reason, they had to switch out my old SuperHub, and gave me a new one which has a matte finish instead of the glossy look my old one had. I’m not sure if there is any other difference, not that I care; I enabled modem mode ASAP so I don’t have to deal with this terrible device too much. 🙂

I was a little worried that I might not get the full bandwidth on both connections, but it looks like I am!

Next steps are to figure out how to do load-balancing on my Cisco 2821 ISR.
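My first stab at that will probably just be two equal static default routes, which lets CEF load-share per destination flow (the gateway addresses below are placeholders, and this ignores the fact that my two lines have different speeds):

```
! Two default routes, one per WAN; CEF balances per destination
ip cef
ip route 0.0.0.0 0.0.0.0 203.0.113.1
ip route 0.0.0.0 0.0.0.0 203.0.113.129
```

Weighting the traffic in proportion to the line speeds will need something smarter, which is what I’ll be figuring out.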

Virgin Media Cable Wall Outlet
Two way splitter
Virgin Media SuperHub and other goodies