Growing Date Palms from Seed

Recently my auntie gave me some Ajwah dates she brought back from Medina in Saudi Arabia. I absolutely love dates, and I've always heard that they have a lot of health benefits. While I was enjoying my dates, I decided to Google what those benefits actually are. Somehow, along the way, I came across an article and discovered that it's actually possible to grow date palms indoors from the seeds. I'm not sure why, but the thought hadn't crossed my mind that they are grown from seed. Some people have even managed to have some success in crappy weather like we have in the UK.

While they didn't end up with a big, beautiful date palm tree, they did get some nice-looking palms.

I started to wonder if I could grow some date palms in my small flat. My curiosity got the better of me, and I decided to give it a go and see what happens. It could be a fun little side experiment. 🙂

Living in London, I don't expect them to live very long, if it even works. The weather isn't really suited to growing date palms, especially as I'm starting at the beginning of November, when it's already cold and only going to get colder, but I still want to try and see how it goes.

I started by gathering a bunch of date seeds. I am using the seeds from the Ajwah dates I received from my auntie, plus seeds from Jordanian Medjool dates I bought from a local grocery store.

Medjool dates are absolutely delicious. They are large and very sweet. Ajwah dates aren’t as big or as sweet as Medjool dates, but they are still extremely good.

After reading various pages and watching YouTube videos, I soaked the seeds in water for around one week, changing the water every day to avoid the growth of mould. I don't really know much about plants or gardening, but I believe this is done to soften the outer shell, speed up germination, and dissolve away any remaining fruit and sugars.

After one week, I put the seeds into a damp kitchen towel, placed that in a cheap plastic food container, and put the container on top of my water boiler, where it should stay quite warm.

I forgot to take photos before I started, but if the seeds start to grow roots, I will make a follow-up post with updates and photos. 🙂

Upgrading Disks in my Home NAS

It's been a few weeks since I switched my NAS from LVM (no RAID, so pretty much RAID0) to ZFS. It's been working great, but a couple of days ago I received a nice email from smartmontools informing me that one of my disks was about to die! I noticed that I was also getting errors on one of the disks during a scrub!
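For anyone wanting to check up on their own disks, this is roughly what I look at (the pool name is mine from later in this post; the device path is just an example):

# Show pool health and any read/write/checksum errors the scrub found
zpool status -v DiskArray

# Double-check the disk's own SMART data with smartmontools
smartctl -H /dev/sdb        # quick health verdict
smartctl -a /dev/sdb        # full attribute dump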

So it's very lucky that I decided to change to ZFS in time! Otherwise, I would have had a bit of a problem (yes, I realise it was quite silly of me to use RAID0 on my NAS in the first place!). 🙂

Anyway, instead of just replacing the single failed disk, I decided to take the opportunity to instead buy brand new disks.

The old disks were:

I decided to upgrade to:

I'm not really a big fan of Western Digital disks, as I've had a lot of issues with them in the past; I usually favour Seagate. The reason I chose to give WD another chance is that these disks are rated quite highly for performance and reliability in the reviews I've read. Also, looking at Seagate's site, they rank their "consumer" grade disks pretty poorly in terms of reliability (MTBF), only seem to provide a rather ridiculous 1-year warranty on them, and their higher-end disks cost a little too much for home use!

I was unable to just do a "live" switch of the disks, because ZFS was using ashift=9 even though I had specified ashift=12 when creating my pool. The new disks use 4 KiB sectors, meaning that if ZFS aligns for 512-byte sectors, I'd see quite a large performance drop. My only option was to create a new pool and use "zfs send" to copy my datasets over to it.
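For anyone who wants to check before finding out the hard way, here's a rough sketch of how to inspect and set the alignment (the pool and device names are just examples for illustration):

# Show the ashift the existing pool was actually created with
zdb -C DiskArray | grep ashift

# Create the replacement pool with 4 KiB alignment explicitly
zpool create -o ashift=12 DiskArray2 mirror /dev/sdc /dev/sdd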

It wasn't a huge problem: I just put the old disks into a different machine, and after creating my new pool in the N40L, I used zfs send to send all the datasets from the old disks over. Fast forward a day or so, and everything was back to normal. 🙂
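The transfer itself boils down to something like this (a minimal sketch; the pool names and snapshot name are placeholders for my setup):

# On the machine holding the old disks: snapshot everything recursively...
zfs snapshot -r OldArray@migrate

# ...then replicate the whole dataset hierarchy to the new pool over SSH
zfs send -R OldArray@migrate | ssh filer01 zfs receive -u -d -F DiskArray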

Performing the same test I did originally with my old disks, I get quite a good performance increase with these new disks and the SSD!

[root@filer01 ~]# spew -b 20m --write 20g /DiskArray/scratch/test.bin
WTR:   329368.36 KiB/s   Transfer time: 00:00:15    IOPS:       16.08
[root@filer01 ~]# spew -b 20m --read 20g /DiskArray/scratch/test.bin
RTR:  1140657.64 KiB/s   Transfer time: 00:00:04    IOPS:       55.70

I'm satisfied with those numbers, to be honest. It's performing well enough for me, with no lag or slow IO, so I'm happy with it!

As another follow-up to my previous post, I did end up buying two Cisco Catalyst 3508G XL aggregation switches. They aren't the best gigabit switches; they are actually quite old and cruddy, but I got them pretty cheap, and they are managed. They don't even support jumbo frames, but considering the price I paid, I'm happy with them for now, until I can find better gigabit switches to replace them with.

In my previous post, I was also thinking about buying another MicroServer, as HP had a £50 cash-back deal. The cash-back has actually been increased to £100, meaning that buying an N54L with the cash-back offer works out to only £109.99! So the temptation got to me, and I have ordered two more MicroServers.

I'll use one for backups, and the other I'm not sure about yet! 🙂

Dead Cisco Catalyst 3560

I've been trying to acquire a Cisco Catalyst 3560, as it provides features which are not supported by my Catalyst 3550s, such as Private VLANs. I believe the QoS features differ on the 3560 as well.

So, as I was browsing eBay (one of my favourite pastimes! :P), I found an auction for a WS-C3560-8PC-S which had been labelled "untested". From past experience, I have found that listings stating a device hasn't been tested are usually for faulty devices, but I thought I would take the risk anyway. I was hoping it would be some small issue which I could either work around or repair, such as a bad port, or a screwed-up IOS image which I could just reload myself (hey! I've seen devices sell on eBay for pretty cheap because non-techy people assumed they were broken when the IOS image was just missing!). But I guess my luck was bad, and two days after the end of the auction, I received a large green paperweight. 🙁

After plugging the power in, the LEDs on the front of the Catalyst 3560 come on, but they just stay solid, whereas they should be blinking during the boot process. I plugged the console cable in, only to find that there is no output whatsoever, not even from ROMMON, which runs before IOS is even loaded.

I have very little knowledge of electronics, but I did test the basic things I knew how to, such as checking whether the PSU was giving out the correct voltages (it was), but that's pretty much all I know how to check!

From my limited knowledge of electronics, I assume that something must be wrong with the boot ROM chip, since not even ROMMON is able to start. None of the parts in a Catalyst 3560 are field replaceable, so I don't think I can test any parts by swapping them around either.

I am quite disappointed that this Catalyst 3560 is dead, but I tried my luck, and it turned out bad, no biggie. 🙂

Hopefully I will be able to find a working Catalyst 3560 soon!

If anyone has any ideas I can try in order to fix this device, I would be quite eager to make an attempt! 🙂

My Goals for 2013: CCNP and RHCE?

I've been thinking about renewing my RHCE and completing my CCNP for quite some time now, but I haven't really got around to it, mainly because the exams are a little pricey (if I remember correctly!), but also because I haven't had enough time.

So for this year, I wanted to set a deadline for myself to complete them. With a deadline, it is easier to visualise and plan what to study and when, and it lets you see your progress better.

So, my goal is to complete the CCNP ideally by the end of May, or by the end of June at the latest. I think it should be possible! There are three parts to the CCNP: ROUTE, SWITCH, and TSHOOT. If I complete one per month, it should be achievable!

Like most people, I am using the Cisco CCNP Routing and Switching Official Certification Library books as my study material, and highly recommend them.

On that note, I have added a "Home Lab" page where you can see pretty pictures of my rack and my "CCNP Lab". It's nothing close to something as awesome as Scott Morris' lab, but it's coming along! 😉

I have read that Red Hat will be releasing RHEL 7 in the second half of this year, so it is a perfect opportunity to renew my RHCE! My goal end date will depend on when RHEL 7 is released, and when the test centres are actually testing on RHEL 7.

Hopefully this will be early in the second half of the year, so I have plenty of time to take the exam before the end of December!

Both CCNP and RHCE are great certifications, which are very highly regarded by employers and professionals.

A lot of people seem to think they don't go well together, as RHCE is aimed at system administrators and CCNP more at network administrators, but I totally disagree: I feel the line between sysadmins and netadmins is very quickly disappearing thanks to virtualisation and "cloud" technologies.

Moving Back to London and Virgin Media

Two weeks ago, I moved back to London.

At my place in Cambridge, I had a 100Mbit connection from Virgin Media, which I wanted to cancel, as my parents already have a Virgin Media connection, so there wasn't any need for me to move mine along with me!

So I called up VM, and they informed me that as I was still in contract, I would have to pay some ridiculous amount to cancel the contract (I think it may have been £280).

The alternative was an option I wasn't even aware of!

Usually, Virgin Media do not allow people to have two connections from them under a single address, BUT, in cases such as mine, they allow it!

So, instead of paying £280 or whatever it was, I decided it'd work out much cheaper if I just moved my connection with me. It'd be nice to have anyway! 🙂

Today, the Virgin Media guys arrived at my house. Their first reaction was shock at seeing my server rack, but they were pretty nice guys. I did have to explain what I use all this equipment for, and had to explain how terrible the SuperHub actually is! 🙂

They didn't really have to do much: they just had to add a splitter and run two new coax cables from the splitter to the two modems. They did mention their frustration at Virgin Media about having to do installations for the more technical people, when those people could really just do it themselves, and I totally agree! 😀

For some reason, they had to switch out my old SuperHub, and gave me a new one which has a matte finish instead of the glossy look my old one had. I'm not sure if there is any difference beyond that, not that I care: I enabled modem mode ASAP so I don't have to deal with this terrible device too much. 🙂

I was a little worried that I might not get the full bandwidth on both connections, but it looks like I am!

Next steps are to figure out how to do load-balancing on my Cisco 2821 ISR.

[Photos: Virgin Media cable wall outlet, two-way splitter, Virgin Media SuperHub and other goodies]

Nginx, Varnish, HAProxy, and Thin/Lighttpd

Over the last few days, I have been playing with Ruby on Rails again and came across Thin, a small, yet stable web server which will serve applications written in Ruby.

This is a small tutorial on how to get Nginx, Varnish, and HAProxy working together with Thin (for dynamic pages) and Lighttpd (for static pages).

I decided to take this route because, from reading in many places, I found that separating static and dynamic content can improve performance significantly.

Nginx

Nginx is a lightweight, high performance web server and reverse proxy. It can also be used as an email proxy, although this is not an area I have explored. I will be using Nginx as the front-end server for serving my rails applications.

I installed Nginx using the RHEL binary package available from EPEL.

Configuration of Nginx is very simple, and I have kept it that way. My current configuration file consists of the following:

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] $request "$status" $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for"';

    sendfile on;
    tcp_nopush on;
    tcp_nodelay off;

    keepalive_timeout 5;

    # This section enables gzip compression.
    gzip on;
    gzip_comp_level 2;
    gzip_proxied any;
    gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    # Here you can define the addresses on which Varnish will listen. You can place multiple servers here, and Nginx will load balance between them.
    upstream cache_servers {
        server localhost:6081 max_fails=3 fail_timeout=30s;
    }

    # This is the default virtual host.
    server {
        listen 80 default;
        access_log /var/log/nginx/access.log main;
        error_log /var/log/nginx/error.log;
        charset utf-8;

        # This is optional. It serves up a 1x1 blank gif image from RAM.
        location = /1x1.gif {
            empty_gif;
        }

        # This is the actual part which proxies all connections to Varnish.
        location / {
            proxy_pass http://cache_servers/;
            proxy_redirect http://cache_servers/ http://$host:$server_port/;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
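After any change, I check the syntax and reload, then make a request to confirm the proxying works (just a quick sanity check, nothing fancy):

# Validate the configuration, then reload without dropping connections
nginx -t && service nginx reload

# Request through Nginx; the response should come back via Varnish
curl -I http://localhost/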

Varnish

Varnish is a high performance caching server. We can use Varnish to cache content which will not be changed often.

I installed Varnish using the RHEL binary package available from EPEL as well. Initially, I only needed to edit /etc/sysconfig/varnish and configure the addresses on which Varnish will listen:

DAEMON_OPTS="-a localhost:6081 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-u varnish -g varnish \
-s file,/var/lib/varnish/varnish_storage.bin,10G"

This will make Varnish listen on port 6081 for normal HTTP traffic, and on port 6082 for administration.
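You can poke the management interface to make sure it's up (this assumes the stock Varnish 2.x setup from EPEL, where no admin secret is configured):

# Ask the management port for the child process status
varnishadm -T localhost:6082 status

# Watch live hit/miss counters while testing
varnishstat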

Next, you must edit /etc/varnish/default.vcl to actually cache data. My current configuration is as follows:

backend thin {
    .host = "127.0.0.1";
    .port = "8080";
}

backend lighttpd {
    .host = "127.0.0.1";
    .port = "8081";
}

sub vcl_recv {
    if (req.url ~ "^/static/") {
        set req.backend = lighttpd;
    } else {
        set req.backend = thin;
    }

    # Allow purging of the cache using shift + reload
    if (req.http.Cache-Control ~ "no-cache") {
        purge_url(req.url);
    }

    # Unset any cookies and authorization data for static links and icons, and fetch from cache
    if (req.request == "GET" && req.url ~ "^/static/" || req.request == "GET" && req.url ~ "^/icons/") {
        unset req.http.cookie;
        unset req.http.Authorization;
        lookup;
    }

    # Look for images in the cache
    if (req.url ~ "\.(png|gif|jpg|ico|jpeg|swf|css|js)$") {
        unset req.http.cookie;
        lookup;
    }

    # Do not cache any POST'ed data
    if (req.request == "POST") {
        pass;
    }

    # Do not cache any non-standard requests
    if (req.request != "GET" && req.request != "HEAD" &&
        req.request != "PUT" && req.request != "POST" &&
        req.request != "TRACE" && req.request != "OPTIONS" &&
        req.request != "DELETE") {
        pass;
    }

    # Do not cache data which has an authorization header
    if (req.http.Authorization) {
        pass;
    }

    lookup;
}

sub vcl_fetch {
    # Remove cookies and cache static content for 12 hours
    if (req.request == "GET" && req.url ~ "^/static/" || req.request == "GET" && req.url ~ "^/icons/") {
        unset obj.http.Set-Cookie;
        set obj.ttl = 12h;
        deliver;
    }

    # Remove cookies and cache images for 12 hours
    if (req.url ~ "\.(png|gif|jpg|ico|jpeg|swf|css|js)$") {
        unset obj.http.set-cookie;
        set obj.ttl = 12h;
        deliver;
    }

    # Do not cache anything that does not return a status in the 200s
    if (obj.status >= 300) {
        pass;
    }

    # Do not cache content which Varnish has marked uncacheable
    if (!obj.cacheable) {
        pass;
    }

    # Do not cache content which has a cookie set
    if (obj.http.Set-Cookie) {
        pass;
    }

    # Do not cache content with cache control headers set
    if (obj.http.Pragma ~ "no-cache" || obj.http.Cache-Control ~ "no-cache" || obj.http.Cache-Control ~ "private") {
        pass;
    }

    if (obj.http.Cache-Control ~ "max-age") {
        unset obj.http.Set-Cookie;
        deliver;
    }

    pass;
}
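To verify that things are actually being cached, I just request the same object twice and compare the headers (the URL is only an example; the X-Varnish header carries one XID on a miss and two on a hit, and Age climbs on cached objects):

# First request populates the cache, the second should be a hit
curl -s -D - -o /dev/null http://localhost:6081/static/images/bg.png | grep -E 'X-Varnish|Age'
curl -s -D - -o /dev/null http://localhost:6081/static/images/bg.png | grep -E 'X-Varnish|Age'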

HAProxy

HAProxy is a high performance TCP/HTTP load balancer. It can be used to load balance almost any type of TCP connection, although I have only used it with HTTP connections.

We will be using HAProxy to balance connections over multiple thin instances.

HAProxy is also available in EPEL. My HAProxy configuration is as follows:

global
    daemon
    log 127.0.0.1 local0
    maxconn 4096
    nbproc 1
    chroot /var/lib/haproxy
    user haproxy
    group haproxy

defaults
    mode http
    clitimeout 60000
    srvtimeout 30000
    timeout connect 4000

    option httpclose
    option abortonclose
    option httpchk
    option forwardfor

    balance roundrobin

    stats enable
    stats refresh 5s
    stats auth admin:123abc789xyz

listen thin 127.0.0.1:8080
    server thin01 10.10.10.2:2010 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin02 10.10.10.2:2011 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin03 10.10.10.2:2012 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin04 10.10.10.2:2013 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin05 10.10.10.2:2014 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin06 10.10.10.2:2015 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin07 10.10.10.2:2016 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin08 10.10.10.2:2017 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin09 10.10.10.2:2018 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin10 10.10.10.2:2019 weight 1 minconn 3 maxconn 6 check inter 20000
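With stats enabled in the defaults section, HAProxy serves its status page on the same listener, which is handy for checking that all ten Thin instances are seen as up (the credentials are the ones from the config above):

# Fetch the built-in stats page (the default URI is /haproxy?stats)
curl -u admin:123abc789xyz 'http://127.0.0.1:8080/haproxy?stats'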

Thin

My Thin server is actually run on a separate Gentoo box. I installed Thin using the package in Portage.

To configure Thin, I used the following command:

thin config -C /etc/thin/config-name.yml -c /srv/myapp --servers 10 -e production -p 2010

This configures Thin to start 10 servers, listening on ports 2010 to 2019. If you want an init script for Thin, so you can start it at boot, run:

thin install

This will create the init script, and you can then set it to start at boot using the normal method (rc-update add thin default on Gentoo, or chkconfig thin on on RHEL).
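Once it's running, a quick loop confirms that every instance is answering (the IP is the one the HAProxy config above points at):

# Expect an HTTP status code from each of the ten Thin ports
for port in $(seq 2010 2019); do
    curl -s -o /dev/null -w "$port: %{http_code}\n" http://10.10.10.2:$port/
done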

You should now be able to reach your Rails app through http://nginx.servers.ip.address.

Next, we must configure the static webserver.

Lighttpd

I decided to go with Lighttpd as it is a fast, stable and lightweight webserver which will do the job perfectly with little configuration.

You could also use nginx as the static server instead of using lighttpd, but I decided to separate it.

I decided to use the package from EPEL for Lighttpd, and found that most of the default configuration was as I wanted it to be. The only thing I needed to change was the port and address the server was listening on:

server.port = 8081
server.bind = "127.0.0.1"

And that's pretty much it! Now you just have to dump any static content into /var/www/lighttpd/ (the default location that the Lighttpd package in EPEL is configured to use) and reference static links using "/static/path_to_file". For example, if I put an image called "bg.png" into /var/www/lighttpd/images/, I can reach it at http://servers_hostname/static/images/bg.png.
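One detail worth mentioning: since Varnish passes the URL through unchanged, the /static prefix has to map onto Lighttpd's document root somehow. A minimal sketch using mod_alias (this is my assumption about how to make the paths line up; yours may already match if you keep a static/ directory inside the document root):

server.modules += ( "mod_alias" )

# Map /static/images/bg.png onto /var/www/lighttpd/images/bg.png
alias.url = ( "/static/" => "/var/www/lighttpd/" )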

I have not really done any performance tests on how well this works, and there are probably many things which I could have done better. This is my first attempt at HTTP performance tuning, so I am always looking for feedback or tips on how to make this better; please do contact me if you have any suggestions! 🙂

Server Upgrade / Disk Failure

Last month I had a disk failure which caused most of my data to become inaccessible, which is the main reason my blog has been down for so long.

I had three 1TB hard drives in an LVM VG.... without any RAID. This means that if one drive fails, it is very unlikely I will be able to recover any data. It was very stupid of me, and I regret it VERY much. 🙁

The drives I was using in the LVM VG were Seagate Barracuda 7200.11 1TB (ST31000333AS) drives.

I originally bought these drives in January 2009, but I've had multiple issues with them since then, so I don't actually have the drives I originally bought; I sent them back for replacement as they all showed signs of failure sooner or later. Luckily, I was able to catch those failures pretty early, thanks to smartmontools. This time I was unable to do so, as I had upgraded my SATA controller to an Adaptec 2820SA, which does not allow SMART commands to be passed through to the drives.

After a bit of Googling, I discovered that there have been a few reports of these drives failing; unfortunately, these reports were not available back in January.

This frustrated me, as not only did I lose 500GB worth of important data, I will now have to scrap these drives and buy new ones if eBuyer or Seagate is unwilling to give me a different model of 1TB drive. I don't think it should be an issue for them to give me the Barracuda 7200.12, which seems to have much better reports, but I don't think they will agree to this.

At the moment, I’ve sent the drives to Seagate’s i365 Data Recovery service, and they are building a list of files which they will be able to recover.

During the time the drives failed, I decided it would be a good idea to upgrade my server too. My new server's specs are as follows:

Intel Quad-Core Xeon E5405 2 GHz
2×4 GB DDR2 PC2-5300 RAM
Tyan Tempest i5100X (S5375)
Norco RPC-4220 case

The Norco RPC-4220 is a 4U rack-mountable server case with 20 hot-swappable hard drive bays, which leaves a good amount of room for storage expansion. When I first powered on the machine, I noticed that the fans which cool the hard drives are amazingly loud, so I switched them for quieter ones (relax! they still provide enough air flow to cool six drives!). The case comes with five SAS/SATA backplanes, each with a Mini-SAS connector. As I don't have a SAS controller, I had to buy Mini-SAS reverse breakout cables, which let me connect the backplanes to my standard SATA cards. This was quite a pain, as I had no idea that there are two types of Mini-SAS to SATA cable: one for Mini-SAS on the backplane side going to SATA on the controller, like I needed, and one for SATA on the backplane going to Mini-SAS on the controller. It was a pain that I discovered this only after I had already bought the wrong cables.

The Tempest i5100X supports two quad-core Xeon processors, although I only bought one for the time being. The board also takes up to 32GB of RAM, which leaves a lot of room for expansion.

Thanks to this upgrade, I was finally able to play with Xen's full-virtualisation (HVM) functionality, as the E5405 has the Intel VT-x extension.

When I get my drives back from i365, I will be sure to use RAID5 on the drives AND make regular backups..... although I haven't really found a workable solution (price-wise, and in terms of the time to actually do it) for backing up 500GB worth of data, so if anyone has any suggestions, please let me know!

I have looked at Bacula, and I really like it, but I still need media onto which I can back up the data.

I have lost my trust in hard drives for keeping my backups, and burning to DVDs or Blu-ray would not be very sensible, as I would need 63 dual-layer DVDs or 10 dual-layer Blu-ray discs to back up 500GB worth of data, and neither is very reliable either (they are easily scratched!).

I also looked at online backup services, but I don't think that's workable either, as backing up 500GB over a connection with only 1.3Mbit upload would take way too long.
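Just to put a number on "way too long", here's a quick back-of-the-envelope calculation (assuming a perfectly saturated uplink, which is optimistic):

# Days needed to push 500 GB through a 1.3 Mbit/s uplink
echo "500 * 8 * 10^9 / (1.3 * 10^6) / 86400" | bc -l    # ~35.6 days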