Apache Traffic Server Basic Configuration on RHEL6/CentOS 6

In this guide, I will explain how to set up Apache Traffic Server with a very basic configuration.

I will be using RHEL6/CentOS 6, but creating the configuration files for Traffic Server is exactly the same on all distributions.

As a prerequisite for setting up Traffic Server, you should know a little about the HTTP protocol and what a reverse proxy actually does.

What is Apache Traffic Server?

I don’t want to go into too much detail here, as there are many sites which explain this better than I ever could, but in short, Traffic Server is a caching proxy created by Yahoo! and donated to the Apache Foundation.

Installation

Apache Traffic Server is available from the EPEL repository, and this is the version I will be using.

Firstly, you must add the EPEL repositories if you haven’t already:
rpm -ivh http://mirror.us.leaseweb.net/epel/6/i386/epel-release-6-7.noarch.rpm
Next, we can just use yum to install Traffic Server:
yum install trafficserver
While we are at it, we might as well set Traffic Server to start at boot:
chkconfig trafficserver on

Configuration

In this tutorial, I will only configure Apache Traffic Server to forward all requests to a single webserver.

For this, we only need to edit two files:

  • /etc/trafficserver/records.config
    This is the main configuration file which stores all the “global” configuration options.
  • /etc/trafficserver/remap.config
This contains the mapping rules that tell ATS which real web server to forward requests to.

Firstly, edit records.config.

I didn’t really have to change much initially for a basic configuration.

The lines I changed were these:
CONFIG proxy.config.proxy_name STRING xantara.web.g3nius.net
CONFIG proxy.config.url_remap.pristine_host_hdr INT 1
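
After saving records.config, you don’t necessarily need a full restart for the changes to take effect. As a sketch (assuming the traffic_line utility shipped with the EPEL package is in your PATH), you can ask Traffic Server to re-read its configuration:

# Apply configuration changes without a full restart
traffic_line -x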

Next we can edit remap.config.

Add the following line to the bottom:
regex_map http://(.*)/ http://webservers.hostname:80/
This should match everything and forward it to your web server.
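
This regex_map rule is a catch-all; if you later want different origins per hostname, remap.config also supports plain map rules. A quick sketch (the hostnames here are just placeholders):

# Requests for each site go to their own backend
map http://site1.example.com/ http://backend1.example.com:80/
map http://site2.example.com/ http://backend2.example.com:80/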

Start Traffic Server:
service trafficserver start
And that’s it! It should now just work! 🙂
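
To check that the proxy is actually answering, you can throw a request at it with curl. This assumes Traffic Server is listening on its default port of 8080, and that the Host header matches one of your remap rules:

# Request a page through the proxy, forcing the Host header
curl -v -H "Host: your-site.com" http://localhost:8080/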

SSL Certificates for XMPP

Over the last few months, I have been slowly switching all my hostnames and service names from using my personal domain name “hamzahkhan.com” to another domain I have.

This is mainly because I am sharing some of the services I run with other people, and also because… well… I don’t like having my name in hostnames to be honest! 🙂

Today I finally got around to updating my Jabber/XMPP server.

In the process, I had to update the SSL certificate.

Quite some time ago, a friend of mine told me that I had created the certificate for my XMPP server incorrectly for the case where a single server serves multiple domains.

For this, you are actually supposed to have a few extra attributes in the certificate.

To add these attributes, create a file called “xmpp.cnf” with the following contents:
HOME = .
RANDFILE = $ENV::HOME/.rnd

oid_section = new_oids

[ new_oids ]
xmppAddr = 1.3.6.1.5.5.7.8.5
SRVName = 1.3.6.1.5.5.7.8.7

[ req ]
default_bits = 4096
default_keyfile = privkey.pem
distinguished_name = distinguished_name
req_extensions = v3_extensions
x509_extensions = v3_extensions
prompt = no

[ distinguished_name ]

# This is just your standard stuff!
countryName = GB
stateOrProvinceName = England
localityName = Cambridge
organizationName = G3nius.net
organizationalUnitName = XMPP Services
emailAddress = [email protected]

# Hostname of the XMPP server.
commonName = xmpp.g3nius.net

[ v3_extensions ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature,keyEncipherment
subjectAltName = @subject_alternative_name

[ subject_alternative_name ]

# Do this for each of your domains
DNS.1 = domain1.com
otherName.0 = xmppAddr;FORMAT:UTF8,UTF8:domain1.com
otherName.1 = SRVName;IA5STRING:_xmpp-client.domain1.com
otherName.2 = SRVName;IA5STRING:_xmpp-server.domain1.com

DNS.2 = domain2.com
otherName.3 = xmppAddr;FORMAT:UTF8,UTF8:domain2.com
otherName.4 = SRVName;IA5STRING:_xmpp-client.domain2.com
otherName.5 = SRVName;IA5STRING:_xmpp-server.domain2.com

DNS.3 = domain3.com
otherName.6 = xmppAddr;FORMAT:UTF8,UTF8:domain3.com
otherName.7 = SRVName;IA5STRING:_xmpp-client.domain3.com
otherName.8 = SRVName;IA5STRING:_xmpp-server.domain3.com

Then you just continue the “certificate request” creation as normal, specifying the configuration file on the command line:

# Create the private key
openssl genrsa -des3 -out xmpp.g3nius.net.key 4096

# Create the certificate request:
openssl req -config xmpp.cnf -new -key xmpp.g3nius.net.key -out xmpp.g3nius.net.csr

That’s all!

Now you can either use the CSR to request a certificate from CACert.org or anywhere else, or you could self-sign it and point your XMPP server at your shiny new certificate!
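
If you go down the self-signed route, here is a minimal sketch. Note that you have to pass the extensions file again, otherwise the subjectAltName entries will be dropped from the final certificate:

# Self-sign the CSR, keeping the XMPP extensions
openssl x509 -req -days 365 -in xmpp.g3nius.net.csr -signkey xmpp.g3nius.net.key -extfile xmpp.cnf -extensions v3_extensions -out xmpp.g3nius.net.crt

# Check that the SAN/otherName entries made it into the certificate
openssl x509 -in xmpp.g3nius.net.crt -noout -text | grep -A 1 "Subject Alternative Name"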

RedHat Enterprise Linux 6 Beta

RedHat Enterprise Linux 6 Beta 1 has finally been released as a public beta. It is available as an ISO from the public RedHat FTP site.

A couple of days ago, I decided to play with the beta, and I discovered, as expected, there are a lot of significant differences between RHEL5 and RHEL6.

The main difference, which I found very frustrating, is that there is no longer any support for Xen dom0.

I had heard about RedHat’s decision to stop supporting Xen, but I did not think that this would mean they would stop shipping it with the distribution.

The loss of dom0 support means that you can no longer use RHEL as a Xen virtualization host, only as a guest under other Xen-supporting distributions.

Xen has been dropped in favour of the Kernel-based Virtual Machine (KVM), a virtualization infrastructure included in the Linux kernel. KVM relies on hardware-assisted virtualization, which requires the CPU to support Intel VT-x on Intel processors or AMD-V on AMD processors.
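
You can quickly check whether your CPU has one of these extensions; vmx is the Intel VT-x flag and svm is the AMD-V one:

# Any output at all means the CPU supports hardware virtualization
egrep '(vmx|svm)' /proc/cpuinfo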

KVM has limited paravirtualization support, but in my very simple tests I found that fully paravirtualized guests under Xen had much better performance.

This latest release of RHEL also means that my RHCE will soon expire. I am hoping to get re-certified as soon as I can. At the same time, I am also considering taking the “Red Hat Certified Virtualization Administrator” course and exam, but I still have some time to think over that. 🙂

Cisco Unified IP Phone 7912G – SIP to SCCP

As stated in my last post, I received my CCNP Lab Kit in the post last week.

In my excitement, I decided to switch my IP Phones from the SCCP firmware (which was the software originally on the phones) to the SIP firmware so that I could connect to VoIPTalk.

Now that the excitement has died down a little, I wanted to switch back to SCCP as, from what I can tell, it provides more features than SIP.

As I’m not too familiar with Cisco IP Phones, I started Googling for instructions on how to switch back, but I couldn’t really find any instructions on how to do so.

In the end, I tried the same method I had originally used to upgrade to the SIP firmware. I edited my gkdefault.txt, which originally contained the following line:
upgradecode:3,0x601,0x0400,0x0100,0.0.0.0,69,0x060111a,CP7912080000SIP060111A.sbin
And replaced it with:

upgradecode:3,0x601,0x0400,0x0100,0.0.0.0,69,0x070409a,CP7912080003SCCP070409A.sbin

You can read what the values mean on the Cisco site, but all I had to change was the last two values of the line:
0x060111a -> 0x070409a

CP7912080000SIP060111A.sbin -> CP7912080003SCCP070409A.sbin

The first value I had to change was the build ID/date, which is (from what I can tell) the last few characters in the file name after the “SIP” or “SCCP” bit.

The second value is pretty self-explanatory; it’s just the file name of the firmware file you have.

Next, I used cfgfmt to convert the file into a .cfg file compatible with the phones, and put it on my TFTP server.

I then restarted the phones, and behold! They were downloading the SCCP firmware image. 🙂

I’m not sure if this is the “correct” way to switch back to the SCCP firmware, but it worked for me, and I don’t see why it wouldn’t be correct; it seems pretty obvious. The only reason I am a little confused is that while searching for instructions, I found a lot of people having difficulty switching back, and even some companies offering a “recovery” service for people in this situation.

Hopefully my post will help other people who are in this situation.

Now I just need to figure out how to get the CTU ringtone onto the phone. 😀

Cisco CCNP Lab Kit

Cisco CCNA Lab Kit

UPDATE: You can see the latest pictures of my home lab on my “Home Lab” page

As I have pretty much completed my studies for the Cisco CCNA exams, I decided I would build up my lab so I could “practice” for the Cisco CCNP exams. A lot of people recommend using a simulator/emulator such as Dynamips, but I don’t think that works out to be as good as using real hardware – but that’s a different matter. 🙂

I had originally bought my CCNA Lab Kit from the nice people at ITelligentsia so I decided I would buy the rest of my equipment from them as well.

My current lab consists of the following:

  • Cisco 1800 Series : 1x Cisco 1841 (I bought this separately from someone else)
  • Cisco 2600 Series: 1x Cisco 2610, 2x Cisco 2511XM, 1x Cisco 2621XM
  • Cisco 2500 Series: 2x Cisco 2501, 1x Cisco 2509
  • Cisco 1700 Series: 1x Cisco 1721 (I bought this separately from someone else)
  • Cisco Catalyst 3550 Series: 2x WS-C3550-24 SMI
  • Cisco Catalyst 2950 Series: 3x WS-C2950-12
  • Catalyst 2900 Series XL: 2x Cisco 2924XL
  • Cisco 2000 Series Wireless LAN Controller: AIR-WLC2006-K9
  • Cisco Aironet 1200 Series: Cisco Aironet 1231 (AIR-LAP1231G-E-K9)
  • 3x Cisco Unified IP Phone 7912G

Hopefully this will be enough to allow me to get going, although I REALLY need a new rack. My 24U rack is already full, so my UPS (4U), server (4U) and new lab equipment are sitting on the floor, where they are very difficult to get access to.

Hopefully I will be able to get two from work in March as we will be moving offices, and from what I can tell, they will be getting new server racks. 🙂

I also bought a UPS a few weeks ago, but I’ve had some trouble with it. The UPS is a PowerWare 5119 RM 3000VA. I have connected a few of my routers to it and left it charging for over 24 hours, but when I kill the power, the UPS goes into a strange state in which it seems to keep switching on and off and lighting up random lights on the front. From Googling a bit, I found that I might need to change some settings using the management serial port. Unfortunately, the UPS does not use a “standard” serial pinout, so I will have to build a cable when I can. Hopefully I will be able to sort out the issue; otherwise I will have to send it back to the place I bought it from for repair. 🙁

Nginx, Varnish, HAProxy, and Thin/Lighttpd

Over the last few days, I have been playing with Ruby on Rails again and came across Thin, a small, yet stable web server which will serve applications written in Ruby.

This is a small tutorial on how to get Nginx, Varnish, and HAProxy working together with Thin (for dynamic pages) and Lighttpd (for static pages).

I decided to take this route because, from reading in many places, I found that separating static and dynamic content can improve performance significantly.

Nginx

Nginx is a lightweight, high performance web server and reverse proxy. It can also be used as an email proxy, although this is not an area I have explored. I will be using Nginx as the front-end server for serving my rails applications.

I installed Nginx using the RHEL binary package available from EPEL.

Configuration of Nginx is very simple, and I have kept mine minimal. My current configuration file consists of the following:

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] $request "$status" $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for"';

    sendfile on;
    tcp_nopush on;
    tcp_nodelay off;

    keepalive_timeout 5;

    # This section enables gzip compression.
    gzip on;
    gzip_comp_level 2;
    gzip_proxied any;
    gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    # Here you can define the varnish server(s) that nginx will proxy to. You can place multiple servers here, and nginx will load balance between them.
    upstream cache_servers {
        server localhost:6081 max_fails=3 fail_timeout=30s;
    }

    # This is the default virtual host.
    server {
        listen 80 default;
        access_log /var/log/nginx/access.log main;
        error_log /var/log/nginx/error.log;
        charset utf-8;

        # This is optional. It serves up a 1x1 blank gif image from RAM.
        location = /1x1.gif {
            empty_gif;
        }

        # This is the actual part which will proxy all connections to varnish.
        location / {
            proxy_pass http://cache_servers/;
            proxy_redirect http://cache_servers/ http://$host:$server_port/;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
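
Before putting this live, it is worth letting nginx check the file for syntax errors and then reloading it; assuming the init script from the EPEL package, something like:

# Test the configuration, then reload without dropping connections
nginx -t
service nginx reload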

Varnish

Varnish is a high performance caching server. We can use Varnish to cache content which does not change often.

I installed Varnish using the RHEL binary package available from EPEL as well. Initially, I only needed to edit /etc/sysconfig/varnish and configure the address on which Varnish will listen.

DAEMON_OPTS="-a localhost:6081 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-u varnish -g varnish \
-s file,/var/lib/varnish/varnish_storage.bin,10G"

This will make Varnish listen on port 6081 for normal HTTP traffic, and on port 6082 for administration.
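
You can talk to the administration port with varnishadm to confirm the daemon is up; a quick sketch, assuming the management address from the DAEMON_OPTS above:

# Varnish should answer with a PONG and a timestamp
varnishadm -T localhost:6082 ping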

Next, you must edit /etc/varnish/default.vcl to actually cache data. My current configuration is as follows:

backend thin {
    .host = "127.0.0.1";
    .port = "8080";
}

backend lighttpd {
    .host = "127.0.0.1";
    .port = "8081";
}

sub vcl_recv {
    if (req.url ~ "^/static/") {
        set req.backend = lighttpd;
    } else {
        set req.backend = thin;
    }

    # Allow purging of the cache using shift + reload
    if (req.http.Cache-Control ~ "no-cache") {
        purge_url(req.url);
    }

    # Unset any cookies and authorization data for static content and icons, and fetch from the cache
    if ((req.request == "GET" && req.url ~ "^/static/") || (req.request == "GET" && req.url ~ "^/icons/")) {
        unset req.http.cookie;
        unset req.http.Authorization;
        lookup;
    }

    # Look for images in the cache
    if (req.url ~ "\.(png|gif|jpg|ico|jpeg|swf|css|js)$") {
        unset req.http.cookie;
        lookup;
    }

    # Do not cache any POST'ed data
    if (req.request == "POST") {
        pass;
    }

    # Do not cache any non-standard requests
    if (req.request != "GET" && req.request != "HEAD" &&
        req.request != "PUT" && req.request != "POST" &&
        req.request != "TRACE" && req.request != "OPTIONS" &&
        req.request != "DELETE") {
        pass;
    }

    # Do not cache anything that has an authorization header
    if (req.http.Authorization) {
        pass;
    }

    lookup;
}

sub vcl_fetch {
    # Remove cookies and cache static content for 12 hours
    if ((req.request == "GET" && req.url ~ "^/static/") || (req.request == "GET" && req.url ~ "^/icons/")) {
        unset obj.http.Set-Cookie;
        set obj.ttl = 12h;
        deliver;
    }

    # Remove cookies and cache images for 12 hours
    if (req.url ~ "\.(png|gif|jpg|ico|jpeg|swf|css|js)$") {
        unset obj.http.Set-Cookie;
        set obj.ttl = 12h;
        deliver;
    }

    # Do not cache anything that does not return a 2xx status
    if (obj.status >= 300) {
        pass;
    }

    # Do not cache content which varnish has marked uncacheable
    if (!obj.cacheable) {
        pass;
    }

    # Do not cache content which sets a cookie
    if (obj.http.Set-Cookie) {
        pass;
    }

    # Do not cache content with cache control headers set
    if (obj.http.Pragma ~ "no-cache" || obj.http.Cache-Control ~ "no-cache" || obj.http.Cache-Control ~ "private") {
        pass;
    }

    if (obj.http.Cache-Control ~ "max-age") {
        unset obj.http.Set-Cookie;
        deliver;
    }

    pass;
}
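
After editing the VCL, you do not have to restart Varnish (and empty the cache) to pick up the changes; you can load the new configuration into the running daemon instead. The name "newconf" below is just an arbitrary label:

# Compile and load the new VCL, then switch to it
varnishadm -T localhost:6082 vcl.load newconf /etc/varnish/default.vcl
varnishadm -T localhost:6082 vcl.use newconf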

HAProxy

HAProxy is a high performance TCP/HTTP load balancer. It can be used to load balance almost any type of TCP connection, although I have only used it with HTTP connections.

We will be using HAProxy to balance connections over multiple Thin instances.

HAProxy is also available in EPEL. My HAProxy configuration is as follows:

global
    daemon
    log 127.0.0.1 local0
    maxconn 4096
    nbproc 1
    chroot /var/lib/haproxy
    user haproxy
    group haproxy

defaults
    mode http
    clitimeout 60000
    srvtimeout 30000
    timeout connect 4000

    option httpclose
    option abortonclose
    option httpchk
    option forwardfor

    balance roundrobin

    stats enable
    stats refresh 5s
    stats auth admin:123abc789xyz

listen thin 127.0.0.1:8080
    server thin1 10.10.10.2:2010 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin2 10.10.10.2:2011 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin3 10.10.10.2:2012 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin4 10.10.10.2:2013 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin5 10.10.10.2:2014 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin6 10.10.10.2:2015 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin7 10.10.10.2:2016 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin8 10.10.10.2:2017 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin9 10.10.10.2:2018 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin10 10.10.10.2:2019 weight 1 minconn 3 maxconn 6 check inter 20000
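
Note that I gave each server line a unique name (thin1 to thin10), as HAProxy uses the name to track each backend individually. HAProxy can also validate a configuration file before you (re)start it, which is worth doing after every change:

# Check the configuration file for errors
haproxy -c -f /etc/haproxy/haproxy.cfg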

Thin

My Thin server is actually run on a separate Gentoo box. I installed Thin using the package in Portage.

To configure Thin, I used the following command:

thin config -C /etc/thin/config-name.yml -c /srv/myapp --servers 10 -e production -p 2010

This configures Thin to start 10 servers, listening on ports 2010 to 2019. If you want an init script for Thin, so you can start it at boot, run

thin init

This will create the init script, and you can set it to start at boot using the normal method (rc-update add thin default or chkconfig thin on).
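
To bring the cluster up by hand rather than through the init script, you can point thin at the same YAML file:

# Start all 10 servers defined in the config
thin start -C /etc/thin/config-name.yml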

You should now be able to reach your rails app through http://nginx.servers.ip.address
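
A quick way to confirm that a request really travels through the whole chain is to look at the response headers; Varnish adds an X-Varnish header to everything it handles:

# The X-Varnish header shows the request passed through the cache
curl -I http://nginx.servers.ip.address/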

Next, we must configure the static webserver.

Lighttpd

I decided to go with Lighttpd as it is a fast, stable and lightweight webserver which will do the job perfectly with little configuration.

You could also use nginx as the static server instead of using lighttpd, but I decided to separate it.

I decided to use the package from EPEL for Lighttpd, and found that most of the default configuration was as I wanted it to be. The only thing I needed to change was the port and address the server was listening on:

server.port = 8081
server.bind = "127.0.0.1"

And that’s pretty much it! Now you just have to dump any static content into /var/www/lighttpd/ (the default location that the Lighttpd package in EPEL is configured to use) and reference it using “/static/document_path_of_file”. For example, if I put an image called “bg.png” into /var/www/lighttpd/images/, I can reach it at http://servers_hostname/static/images/bg.png.

I have not really done any performance tests on how well this works, and there are probably many things which I could have done better. This is my first attempt at HTTP performance tuning, so I am always looking for feedback or tips on how to make this better – please do contact me if you have any suggestions! 🙂

Server Upgrade / Disk Failure

Last month I had a disk failure which made most of my data inaccessible, which is the main reason my blog was down for so long.

I had three 1TB hard drives in an LVM VG… without any RAID. This means that if one drive fails, it is very unlikely I will be able to recover any data. It was very stupid of me, and I regret it VERY much. 🙁

The drives I was using in the LVM VG were Seagate Barracuda 7200.11 1TB (ST31000333AS) drives.

I originally bought these drives in January 2009, but since then I’ve had multiple issues with them, so I don’t actually have the drives I originally bought; I sent them back for replacement as they all showed signs of failure sooner or later. Luckily, I was able to catch those failures pretty early, thanks to smartmontools. This time I was unable to do so, as I had upgraded my SATA controller to an Adaptec 2820SA, which does not allow SMART commands to be passed through to the drives.

After a bit of Googling, I discovered that there have been a few reports of these drives failing; unfortunately, these reports were not available back in January.

This frustrated me, as not only did I lose 500GB worth of important data, I will now have to scrap these drives and buy new ones if Ebuyer or Seagate is unwilling to give me a different model of 1TB drive – I don’t think it should be an issue for them to give me the Barracuda 7200.12, which seems to have much better reports, but I don’t think they will agree to this.

At the moment, I’ve sent the drives to Seagate’s i365 Data Recovery service, and they are building a list of files which they will be able to recover.

Around the time the drives failed, I decided it would be a good idea to upgrade my server too. My new server’s specs are as follows:

  • Intel Quad-Core Xeon E5405 2 GHz
  • 2×4 GB DDR2 PC2-5300 RAM
  • Tyan Tempest i5100X (S5375)
  • Norco RPC-4220 case

The Norco RPC-4220 is a 4U rack-mountable server case with 20 hot-swappable hard drive bays, which allows a good amount of room for storage expansion. When I first powered on the machine, I noticed that the fans which cool the hard drives are amazingly loud, so I switched them for quieter ones (relax! the quieter fans still provide enough airflow to cool six drives!). The case comes with five SAS/SATA backplanes, each with a Mini-SAS connector. As I don’t have a SAS controller, I had to buy Mini-SAS reverse breakout cables, which allowed me to connect the backplanes to my standard SATA cards. This was quite a pain, as I had no idea that there are two types of Mini-SAS to SATA cables: one with Mini-SAS on the backplane side and SATA on the controller side (which is what I needed), and one with SATA on the backplane side and Mini-SAS on the controller side. Of course, I only discovered this after I had already bought the wrong cables.

The Tempest i5100X supports two quad-core Xeon processors, although I only bought one for the time being. The board also takes up to 32GB of RAM, which leaves a lot of room for expansion.

Thanks to this upgrade, I was finally able to play with Xen’s full virtualization (HVM) functionality, as the E5405 has the Intel VT-x extension.

When I get my drives back from i365, I will be sure to use RAID 5 on the drives AND make regular backups… although I haven’t really found a workable solution (price-wise, and in terms of the time to actually do it) for backing up 500GB worth of data, so if anyone has any suggestions, please let me know!

I have looked at Bacula, and I really like it, but I still need media onto which I can back up the data.

I have lost my trust in hard drives for keeping my backups, and burning to DVDs or Blu-ray would not be very sensible, as I would need 63 dual-layer DVDs or 10 dual-layer Blu-ray discs to back up 500GB worth of data, and neither is very reliable either (they are easily scratched!).

I also looked at online backup services, but I don’t think this is workable either, as backing up 500GB over a connection with only 1.3Mbit/s upload would take far too long.