Athan on Google Home (via Home Assistant)

I have a Google Home which I have been using for various things as I very slowly build my collection of “smart” devices.

One thing I was very interested in making my Google Home do is play the Athan when it is time for prayer. Unfortunately, there isn’t any native way to do this with a Google Home at the moment.

I have seen people do it using IFTTT, but as I am already using Home Assistant as my automation platform, I wanted to keep everything within it.

What is very interesting about doing it with Home Assistant is that, as well as the basic functionality of playing the Athan, I can also perform other automations that may be useful.

For example, I can switch off or pause whatever is on the TV, switch the lights on dimly for Fajr prayer, and maybe even switch on the Ambi Pur 3volution air freshener that I have plugged into an Espurna-flashed Sonoff S20, so my flat smells nice during salah time!

The way I have implemented this is as follows:

  • Add a REST sensor which fetches the prayer times using the Al Adhan service’s API
  • Add template sensors which extract the timings for each prayer
  • Create automations to play the Athan

I haven’t put my Home Assistant configuration on GitHub, so I’ll put it all here for now in case anyone else wants to do something similar.

sensor:
  - platform: rest
    name: "Prayer Times"
    resource: "http://api.aladhan.com/v1/timings?latitude=52.587904&longitude=-0.1458179&method=3"
    value_template: '{{ value_json["data"]["meta"]["method"]["name"].title() }}'
    json_attributes:
      - data
    scan_interval: 86400

  - platform: template
    sensors:
      fajr:
        friendly_name: 'Fajr Prayer Time'
        value_template: '{{ states.sensor.prayer_times.attributes.data.timings["Fajr"] | timestamp_custom("%H:%M") }}'
      dhuhr:
        friendly_name: 'Dhuhr Prayer Time'
        value_template: '{{ states.sensor.prayer_times.attributes.data.timings["Dhuhr"] | timestamp_custom("%H:%M") }}'
      asr:
        friendly_name: 'Asr Prayer Time'
        value_template: '{{ states.sensor.prayer_times.attributes.data.timings["Asr"] | timestamp_custom("%H:%M") }}'
      maghrib:
        friendly_name: 'Maghrib Prayer Time'
        value_template: '{{ states.sensor.prayer_times.attributes.data.timings["Maghrib"] | timestamp_custom("%H:%M") }}'
      isha:
        friendly_name: 'Isha Prayer Time'
        value_template: '{{ states.sensor.prayer_times.attributes.data.timings["Isha"] | timestamp_custom("%H:%M") }}'
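For reference, the shape of the JSON the API returns is roughly as follows (trimmed down to just the fields used above; fetch the endpoint yourself to see the full output):

{
  "code": 200,
  "status": "OK",
  "data": {
    "timings": {
      "Fajr": "05:55",
      "Dhuhr": "12:07",
      "Asr": "14:06",
      "Maghrib": "16:10",
      "Isha": "17:59"
    },
    "meta": {
      "method": {
        "name": "Muslim World League"
      }
    }
  }
}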

automation:
  - alias: "Fajr Athan"
    initial_state: true
    hide_entity: true
    trigger:
      - platform: template
        value_template: '{{ states.sensor.time.state == states("sensor.fajr") }}'
    action:
      - service: media_player.volume_set
        data_template:
          entity_id: media_player.living_room_speaker
          volume_level: 0.75
      - service: media_player.play_media
        data:
          entity_id: media_player.living_room_speaker
          media_content_id: https://my_local_webserver/1003.mp3
          media_content_type: audio/mp3

  - alias: "Athan"
    initial_state: true
    hide_entity: true
    trigger:
      - platform: template
        value_template: '{{ states.sensor.time.state == states("sensor.dhuhr") }}'
      - platform: template
        value_template: '{{ states.sensor.time.state == states("sensor.asr") }}'
      - platform: template
        value_template: '{{ states.sensor.time.state == states("sensor.maghrib") }}'
      - platform: template
        value_template: '{{ states.sensor.time.state == states("sensor.isha") }}'
    action:
      - service: media_player.volume_set
        data_template:
          entity_id: media_player.living_room_speaker
          volume_level: 0.75
      - service: media_player.play_media
        data:
          entity_id: media_player.living_room_speaker
          media_content_id: https://my_local_webserver/1001.mp3
          media_content_type: audio/mp3
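
One thing to note: the automations above compare against sensor.time, which comes from Home Assistant’s time_date sensor platform. If it isn’t already in your configuration, you’ll need something like this under sensor: as well:

  - platform: time_date
    display_options:
      - 'time'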

This is just a basic automation that sets the volume and plays the Athan. I will expand this so that it only plays the Athan when someone is home, and use input booleans so it can be disabled if needed (for example, during Ramadan when we switch on Islam Channel for the Maghrib Athan). Now that I think of it, it’s also possible to make Home Assistant automatically switch the TV on, and over to Islam Channel during Ramadan!
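
As a rough sketch of that, the extra conditions might look something like this (group.family and input_boolean.athan_enabled are hypothetical entities that would still need defining separately):

condition:
  - condition: state
    entity_id: group.family
    state: 'home'
  - condition: state
    entity_id: input_boolean.athan_enabled
    state: 'on'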

One annoyance I have found is that before anything is cast to the Google Home, it makes a “blomp” sound. Unfortunately, there isn’t any way to disable this, but there are some tricks on the Home Assistant forums which let you work around it.

I hope this is helpful for anyone else trying to achieve something similar.

Growing Date Palms from Seed

Recently my auntie gave me some Ajwah dates she got while she was in Medina in Saudi Arabia. I absolutely love dates and have always heard that they have a lot of health benefits. While I was enjoying my dates, I decided to Google what the health benefits actually are. Somehow, I came across an article and discovered that it’s actually possible to grow date palms indoors from the seeds. I’m not sure why, but the thought had never crossed my mind that they are grown from seed. Some people have even managed to have some success in crappy weather like we have in the UK.

While they didn’t really get a big, beautiful date palm tree, they did get some nice-looking palms.

I started to wonder if I could grow some date palms in my small flat. My curiosity got the better of me, and I decided to give it a go and see what happens. It could be a fun little side experiment. 🙂

Living in London, I don’t expect them to live very long, if it even works. The weather isn’t really suited to growing date palms, especially as I’m starting at the beginning of November when it’s already cold and going to get colder, but I still want to try and see how it goes.

I started by gathering a bunch of date seeds. I am using the seeds from the Ajwah dates I received from my auntie, and seeds from Jordanian Medjool dates I bought from a local grocery store.

Medjool dates are absolutely delicious. They are large and very sweet. Ajwah dates aren’t as big or as sweet as Medjool dates, but they are still extremely good.

Following various pages and YouTube videos, I soaked the seeds in water for around one week, changing the water every day to avoid the growth of mold. I don’t really know much about plants or gardening, but I think this is done to soften the outer shell, dissolve away any remaining fruit and sugars, and speed up germination.

After one week, I wrapped the seeds in a damp kitchen towel, placed them in a cheap plastic food container, and put the container on top of my water boiler, where it should stay quite warm.

I forgot to take photos before I started, but if the seeds start to grow roots, I will make a follow-up post with updates and photos. 🙂

Let’s Encrypt – Encrypt All The Things!

I’ve recently switched over a bunch of sites, including this blog, to using SSL.

I got my SSL certificate through a very interesting project called “Let’s Encrypt”. The goal of the project is to increase the amount of encryption used on the internet by offering free, trusted, domain-validated certificates. Right now they are still in a limited beta stage, but the go-live date is currently set for the 3rd of December.

It seems that the recommended way to make use of Let’s Encrypt certificates is to have the Let’s Encrypt client on each and every server that will use them. This is so that the authentication works properly, automation is easier, and certificates can be renewed easily.

I didn’t really want to have the client on every server, so instead I added a proxy pass in my front-end Nginx boxes as follows:

location /.well-known/ {
 proxy_pass http://letsencrypt-machine.local/.well-known/;
 proxy_redirect http://letsencrypt-machine.local/ http://$host:$server_port/.well-known/;
}

I have this block before my / proxy pass, so any requests for /.well-known/ will go to the machine I have the Let’s Encrypt client running on.
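
A quick way to sanity-check the proxying is to request something under /.well-known/ from outside and confirm it hits the Let’s Encrypt machine (the token path here is just a made-up example):

curl -I http://blog.hamzahkhan.com/.well-known/acme-challenge/test-token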

Next, I ran the client to request the certificate as follows:

./letsencrypt-auto --agree-dev-preview --server https://acme-v01.api.letsencrypt.org/directory auth -d blog.hamzahkhan.com

Magic! I now have a freshly signed certificate, key, and certificate chain sitting in /etc/letsencrypt/live/blog.hamzahkhan.com/.

I’m not sure if there is a better way to do this, but it works for me, and I’m happy with it.

The only downside is that the certificates are only valid for 90 days, after which you have to renew them. I believe this is one of the reasons it is recommended to have the client on every machine, as it makes the renewal process a lot less work.

That said, I don’t have such a large number of sites that managing it manually would be difficult, so I’m just going to stick with my way for now. 🙂

Cisco ASA 9.2 on Cisco ASA 5505 with Unsupported Memory Configuration Fail

EDIT: 16/11/2015 – It looks like it now works. I am currently running asa924-2-k8.bin on my 5505s, with my 1GB sticks of RAM, and it hasn’t complained! 🙂

The Cisco ASA 5505 officially supports a maximum of 512MB RAM.

Last year I wrote a post detailing a small experiment where I upgraded both of my Cisco ASA 5505s to use 1GB sticks of RAM, double the officially supported maximum.

Since then, it has worked great and both boxes have been chilling out in my rack, but recently Cisco released ASA 9.2.

The full list of new features and changes can be read in the release notes, but the feature I was most excited about was BGP support being added.

The ASA has had OSPF support for some time, but it was lacking BGP, which I always thought was a feature it should have. Now that it has been added, I was quite excited to play with it!

So I grabbed the latest 9.2 image (asa921-k8.bin), dropped it on both my ASAs, and switched the bootloader configuration to load the new image. Next I reloaded the secondary device and waited for it to come back up. Half an hour later, nothing. So I connected a serial cable to see what was up, and to my surprise found that it wasn’t doing anything. It was just stuck saying:

Loading disk0:/asa921-k8.bin...

Initially I wasn’t really sure what was causing this, so I tried switching out the RAM and putting back the stock 512MB stick that came with the box, and magic! It worked.

I’m quite disappointed that my 1GB sticks won’t work with 9.2, but it’s not a huge loss. According to my Cacti graphs, I only use around 300MB anyway!

Memory Usage on my Cisco ASA 5505s

I’m going to have to buy a 512MB stick for my secondary ASA, as the pair now refuse to run in a failover configuration due to having different software versions and different memory sizes.

Alternatively, I’m thinking of just replacing these boxes with something else. My ISP (Virgin Media) will be upgrading my line to 152Mbit/s later this year. The ASA 5505 only has 100Mbit ports, so I would be losing 52Mbit/s! I don’t want that, so I’ll have to get something faster. I’ll probably either go with a custom Linux box with iptables, or maybe a virtual ASA now that Cisco offers that! 🙂

Securing Your Postfix Mail Server with Greylisting, SPF, DKIM, DMARC and TLS

UPDATE 28/03/2015 – Steve Jenkins has created a more up-to-date version of this post, which is definitely worth checking out if you are looking into deploying OpenDMARC. 🙂

A few months ago, while trying to debug some SPF problems, I came across “Domain-based Message Authentication, Reporting & Conformance” (DMARC).

DMARC basically builds on top of two existing frameworks, Sender Policy Framework (SPF), and DomainKeys Identified Mail (DKIM).

SPF is used to define who can send mail for a specific domain, while DKIM cryptographically signs the message. Both of these are pretty useful on their own and reduce incoming spam A LOT, but the problem is you don’t have any “control” over what the receiving end does with a failing email. For example, company1’s mail server may just give the email a higher spam score if the sending mail server fails SPF authentication, while company2’s mail server might reject it outright.
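
As a quick refresher, an SPF policy is just a TXT record. For example, something like this (using the example domain from later in this post) says that only the domain’s MX hosts may send its mail:

example.org. IN TXT "v=spf1 mx -all"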

DMARC gives you finer control, allowing you to dictate what should be done. DMARC also lets you publish a forensics address, which remote mail servers use to send back reports containing details such as how many mails were received from your domain, how many failed authentication, from which IPs, and which authentication tests failed.
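
A DMARC policy is also published as a TXT record, under the _dmarc label. A hedged example along the lines of the setup described below (rua is where aggregate reports go, ruf is the forensics address mentioned above):

_dmarc.example.org. IN TXT "v=DMARC1; p=quarantine; rua=mailto:sysops@example.org; ruf=mailto:sysops@example.org"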

I’ve had a DMARC record published for my domains for a few months now, but I had not set up any filter to check incoming mail against senders’ DMARC records, or to send back forensic reports.

Today, I was in the process of setting up a third backup MX for my domains, so I thought I’d clean up my configs a little and also set up DMARC properly on my mail servers.

So in this article, I will discuss how I set up my Postfix servers with Greylisting, SPF, DKIM and DMARC, plus TLS for incoming and outgoing mail. I won’t go into full detail on how to set up a Postfix server, only the specifics needed for SPF/DKIM/DMARC and TLS.

We’ll start with TLS as that is easiest.

TLS

I wanted all incoming and outgoing mail to use opportunistic TLS.

To do this all you need to do is create a certificate:
[root@servah ~]# cd /etc/postfix/
[root@servah ~]# openssl genrsa -des3 -out mx1.example.org.key
[root@servah ~]# openssl rsa -in mx1.example.org.key -out mx1.example.org.key-nopass
[root@servah ~]# mv mx1.example.org.key-nopass mx1.example.org.key
[root@servah ~]# openssl req -new -key mx1.example.org.key -out mx1.example.org.csr

Now, you can either self-sign the certificate request, or do as I have and use CAcert.org. Once you have a signed certificate, dump it in mx1.example.org.crt, and tell Postfix to use it in /etc/postfix/main.cf:
# Use opportunistic TLS (STARTTLS) for outgoing mail if the remote server supports it.
smtp_tls_security_level = may
# Tell Postfix where your ca-bundle is or it will complain about trust issues!
smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.trust.crt
# I wanted a little more logging than default for outgoing mail.
smtp_tls_loglevel = 1
# Offer opportunistic TLS (STARTTLS) to connections to this mail server.
smtpd_tls_security_level = may
# Add TLS information to the message headers
smtpd_tls_received_header = yes
# Point this to your CA file. If you used CAcert.org, this is
# available at http://www.cacert.org/certs/root.crt
smtpd_tls_CAfile = /etc/postfix/ca.crt
# Point at your cert and key
smtpd_tls_cert_file = /etc/postfix/mx1.example.org.crt
smtpd_tls_key_file = /etc/postfix/mx1.example.org.key
# I wanted a little more logging than default for incoming mail.
smtpd_tls_loglevel = 1

Restart Postfix:
[root@servah ~]# service postfix restart

That should do it for TLS. I tested by sending an email from my email server, to my Gmail account, and back again, checking in the logs to see if the connections were indeed using TLS.

Greylisting

Greylisting is a method of reducing spam which is so simple, yet so effective, that it’s quite amazing!

Basically, incoming relay attempts from unknown sources are temporarily rejected with an SMTP temporary failure for a fixed amount of time. Once this time has passed, any further attempts to relay from that IP are allowed to progress further through your ACLs.

This is extremely effective, as a lot of spam bots do not have any queueing system and will not retry sending the message!
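
From the sending server’s point of view, a greylisted attempt just looks like a temporary rejection at RCPT time, roughly along these lines (the exact text depends on your Postgrey settings):

450 4.2.0 <user@example.org>: Recipient address rejected: Greylisted, see http://postgrey.schweikert.ch/help/example.org.html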

EPEL already has an RPM for Postgrey, so I’ll use that for greylisting:
[root@servah ~]# yum install postgrey

Set it to start on boot, and manually start it:

[root@servah ~]# chkconfig postgrey on
[root@servah ~]# service postgrey start

Next, we need to tell Postfix to pass messages through Postgrey. By default, the RPM-provided init scripts set up a Unix socket at /var/spool/postfix/postgrey/socket, so we’ll use that. Edit /etc/postfix/main.cf, and in your smtpd_recipient_restrictions, add “check_policy_service unix:postgrey/socket”, like I have:

smtpd_recipient_restrictions=
permit_mynetworks,
reject_invalid_hostname,
reject_unknown_recipient_domain,
reject_non_fqdn_recipient,
permit_sasl_authenticated,
reject_unauth_destination,
check_policy_service unix:postgrey/socket,
reject_rbl_client dnsbl.sorbs.net,
reject_rbl_client zen.spamhaus.org,
reject_rbl_client bl.spamcop.net,
reject_rbl_client cbl.abuseat.org,
reject_rbl_client b.barracudacentral.org,
reject_rbl_client dnsbl-1.uceprotect.net,
permit

As you can see, I am also using various RBLs.

Next, we restart Postfix:

[root@servah ~]# service postfix restart

All done. Greylisting is now in effect!

SPF

Next we’ll setup SPF.

There are many different SPF filters available, and probably the most popular one to use with Postfix is pypolicyd-spf, which is also included in EPEL. However, I was unable to get OpenDMARC to see the Received-SPF headers; I think this is due to the order in which a message passes through a milter versus a Postfix policy engine, and I was unable to find a workaround. So instead I decided to use smf-spf, which is currently unmaintained, but from what I understand is quite widely used and quite stable.

I did apply some patches to smf-spf which were posted by Andreas Schulze on the OpenDMARC mailing lists. They are mainly cosmetic and aren’t strictly necessary, but I liked them, so I applied them.

I was going to write a RPM spec file for smf-spf, but I noticed that Matt Domsch has kindly already submitted packages for smf-spf and libspf2 for review.

I did have to modify both packages a little. For smf-spf I pretty much only added the patches I mentioned earlier, plus a few minor changes I wanted. For libspf2 I had to re-run autoreconf and update Matt Domsch’s patch, as it seemed to break on EL6 boxes due to incompatible autoconf versions. I will edit this post later to add links to the SRPMs.

I built the RPMs, signed them with my key, and published them in my internal RPM repo.
I won’t go into detail on that, and will continue from installation:

[root@servah ~]# yum install smf-spf

Next, I edited /etc/mail/smfs/smf-spf.conf. Here it is with the comments and blank lines stripped:

[root@servah ~]# cat /etc/mail/smfs/smf-spf.conf|grep -v "^#" | grep -v "^$"
WhitelistIP 127.0.0.0/8
RefuseFail on
AddHeader on
User smfs
Socket inet:8890@localhost

Set smf-spf to start on boot, and also start it manually:
[root@servah ~]# chkconfig smf-spf on
[root@servah ~]# service smf-spf start

Now we edit the Postfix config again, and add the following to the end of main.cf:
milter_default_action = accept
milter_protocol = 6
smtpd_milters = inet:localhost:8890

Restart Postfix:
[root@servah ~]# service postfix restart

Your mail server should now be checking SPF records! 🙂
You can test this by trying to forge an email from Gmail or something.

DKIM

DKIM was a little more complicated to setup as I have multiple domains. Luckily, OpenDKIM is already in EPEL, so I didn’t have to do any work to get an RPM for it! 🙂

Install it using yum:
[root@servah ~]# yum install opendkim

Next, edit the OpenDKIM config file. I’ll just show what I changed as a diff:
[root@servah ~]# diff /etc/opendkim.conf.stock /etc/opendkim.conf
20c20
< Mode v
---
> Mode sv
58c58
< Selector default
---
> #Selector default
70c70
< #KeyTable /etc/opendkim/KeyTable
---
> KeyTable /etc/opendkim/KeyTable
75c75
< #SigningTable refile:/etc/opendkim/SigningTable
---
> SigningTable refile:/etc/opendkim/SigningTable
79c79
< #ExternalIgnoreList refile:/etc/opendkim/TrustedHosts
---
> ExternalIgnoreList refile:/etc/opendkim/TrustedHosts
82c82
< #InternalHosts refile:/etc/opendkim/TrustedHosts
---
> InternalHosts refile:/etc/opendkim/TrustedHosts

Next, I created a key:
[root@servah ~]# cd /etc/opendkim/keys
[root@servah ~]# opendkim-genkey --append-domain --bits=2048 --domain example.org --selector=dkim2k --restrict --verbose

This will give you two files in /etc/opendkim/keys:

  • dkim2k.txt – Contains your public key, which can be published in DNS. It’s already in a BIND-compatible format, so I won’t explain how to publish it (there’s an example record just after this list).
  • dkim2k.private – Contains your private key.
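
For reference, the published record ends up looking roughly like this (public key heavily shortened here for readability):

dkim2k._domainkey.example.org. IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A...QAB"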

Next, we edit /etc/opendkim/KeyTable. Comment out any of the default keys that are there and add your own:
[root@servah ~]# cat /etc/opendkim/KeyTable
dkim2k._domainkey.example.org example.org:dkim2k:/etc/opendkim/keys/dkim2k.private

(Thank you to andrewgdotcom for spotting the typo here)

Now edit /etc/opendkim/SigningTable, again commenting out the default entries and entering our own:
[root@servah ~]# cat /etc/opendkim/SigningTable
*@example.org dkim2k._domainkey.example.org

Repeat this process for as many domains as you want. It would also be quite a good idea to use different keys for different domains.

We can now start opendkim, and set it to start on boot:
[root@servah ~]# chkconfig opendkim on
[root@servah ~]# service opendkim start

Almost done with DKIM!
We just need to tell Postfix to pass mail through OpenDKIM to verify signatures of incoming mail, and to sign outgoing mail. To do this, edit /etc/postfix/main.cf again:
# Pass SMTP messages through smf-spf first, then OpenDKIM
smtpd_milters = inet:localhost:8890, inet:localhost:8891
# This line is so mail received from the command line, e.g. using the sendmail binary or mail() in PHP
# is signed as well.
non_smtpd_milters = inet:localhost:8891

Restart Postfix:
[root@servah ~]# service postfix restart

Done with DKIM!
Now your mail server will verify incoming messages that have a DKIM header, and sign outgoing messages with your own!

OpenDMARC

Now it’s the final part of the puzzle.

OpenDMARC is not yet in EPEL, but again I found an RPM spec awaiting review, so I used it.

Again, I won’t go into the process of how to build an RPM; let’s assume you have already published it in your own internal repo, and continue from installation:
[root@servah ~]# yum install opendmarc

First I edited /etc/opendmarc.conf:
15c15
< # AuthservID name
---
> AuthservID mx1.example.org
121c121
< # ForensicReports false
---
> ForensicReports true
144,145c144
< # HistoryFile /var/run/opendmarc/opendmarc.dat
<
---
> HistoryFile /var/run/opendmarc/opendmarc.dat
221c220
< # ReportCommand /usr/sbin/sendmail -t
---
> ReportCommand /usr/sbin/sendmail -t -F 'Example.org DMARC Report' -f 'sysops@example.org'
236c235
< # Socket inet:8893@localhost
---
> Socket inet:8893@localhost
246c245
< # SoftwareHeader false
---
> SoftwareHeader true
253c252
< # Syslog false
---
> Syslog true
261c260
< # SyslogFacility mail
---
> SyslogFacility mail
301c300
< # UserID opendmarc
---
> UserID opendmarc

Next, set OpenDMARC to start on boot and manually start it:
[root@servah ~]# chkconfig opendmarc on
[root@servah ~]# service opendmarc start

Now we tell postfix to pass messages through OpenDMARC. To do this, we edit /etc/postfix/main.cf once again:
# Pass SMTP messages through smf-spf first, then OpenDKIM, then OpenDMARC
smtpd_milters = inet:localhost:8890, inet:localhost:8891, inet:localhost:8893

Restart Postfix:
[root@servah ~]# service postfix restart

That’s it! Your mail server will now check the DMARC record of incoming mail, and check the SPF and DKIM results.

I confirmed that OpenDMARC is working by sending a message from Gmail to my own email, and checking the message headers, then also sending an email back and checking the headers on the Gmail side.

You should see that SPF, DKIM and DMARC are all being checked when receiving on either side.
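
Concretely, smf-spf adds a Received-SPF header, while OpenDKIM and OpenDMARC each add an Authentication-Results header; a passing message ends up with headers roughly like these (abbreviated, with the hostname and domains following the examples in this post):

Received-SPF: pass (example.com: 203.0.113.25 is authorized to use 'sender@example.com' in 'mfrom' identity)
Authentication-Results: mx1.example.org; dkim=pass header.d=example.com
Authentication-Results: mx1.example.org; dmarc=pass header.from=example.com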

Finally, we can also setup forensic reporting for the benefit of others who are using DMARC.

DMARC Forensic Reporting

I found OpenDMARC’s documentation to be extremely limited and quite vague, so there was a lot of guesswork involved.

As I didn’t want my mail servers to have access to my DB server, I decided to run the reporting scripts on a different box I use for running cron jobs.

First I created a MySQL database and user for opendmarc:
[root@mysqlserver ~]# mysql -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 1474392
Server version: 5.5.34-MariaDB-log MariaDB Server

Copyright (c) 2000, 2013, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE opendmarc;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON opendmarc.* TO opendmarc@'script-server.example.org' IDENTIFIED BY 'supersecurepassword';

Next, we import the schema into the database:

[root@scripty ~]# mysql -h mysql.example.org -u opendmarc -p opendmarc < /usr/share/doc/opendmarc-1.1.3/schema.mysql

Now, to actually import the data from my mail servers into the DB, and send out the forensics reports, I have the following script running daily:

#!/bin/bash

set -e

cd /home/mhamzahkhan/dmarc/

HOSTS="mx1.example.org mx2.example.org mx3.example.org"
DBHOST='mysql.example.org'
DBUSER='opendmarc'
DBPASS='supersecurepassword'
DBNAME='opendmarc'

for HOST in $HOSTS; do
    # Pull the history from each host.
    scp -i /home/mhamzahkhan/.ssh/dmarc root@${HOST}:/var/run/opendmarc/opendmarc.dat ${HOST}.dat
    # Purge the history on each host.
    ssh -i /home/mhamzahkhan/.ssh/dmarc root@${HOST} "cat /dev/null > /var/run/opendmarc/opendmarc.dat"

    # Merge the history files. Not needed, but this way opendmarc-import only needs to run once.
    cat ${HOST}.dat >> merged.dat
done

/usr/sbin/opendmarc-import --dbhost=${DBHOST} --dbuser=${DBUSER} --dbpasswd=${DBPASS} --dbname=${DBNAME} --verbose < merged.dat
/usr/sbin/opendmarc-reports --dbhost=${DBHOST} --dbuser=${DBUSER} --dbpasswd=${DBPASS} --dbname=${DBNAME} --verbose --interval=86400 --report-email 'sysops@example.org' --report-org 'Example.org'
/usr/sbin/opendmarc-expire --dbhost=${DBHOST} --dbuser=${DBUSER} --dbpasswd=${DBPASS} --dbname=${DBNAME} --verbose

rm -f *.dat

That’s it! Run that daily, and you’ll send forensic reports to those who want them. 🙂
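
For the daily run, a crontab entry along these lines does the job (the script path and name are hypothetical; adjust to wherever you saved it):

0 2 * * * /home/mhamzahkhan/dmarc/dmarc-reports.sh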

You now have a nice mail server that checks SPF, DKIM, and DMARC for authentication, and sends out forensic reports!

With this setup, I haven’t received any spam in the last two months! That’s just as far back as I can remember; I’m sure it’s actually been a lot longer than that! 🙂

Any comments, and suggestions welcome!

Home Lab: Added a Cisco 3845 ISR

Why? Well, I wanted more ISRs in my home lab.

That, plus my ISP (Virgin Media) will be upgrading my line from 120Mbit to 152Mbit in the second half of 2014. Looking at the Cisco docs, the 2851 ISR I am using can only do up to around 112Mbit/s.

Although it will be a while before my ISP goes ahead with this upgrade, I saw the 3845 going reasonably cheap on eBay, cheaper than I expect it to be next year once my ISP has upgraded my line. So, I decided to just buy it now. 🙂

I am really starting to have a problem with space for my home lab. My rack is already pretty much fully populated, so I now have equipment on top of, and surrounding, my rack. I don’t really have space for a second rack at the moment, so it looks like I can’t expand my lab any more for a while. Oh well. 🙁

Upcoming Games I Want

I’m not really much of a “gamer”. I wouldn’t even call myself a casual gamer; I really only play when there is a new game coming out that I like.

For example, I love the Final Fantasy and Metal Gear games, and luckily for me there are a few new games coming up which I am extremely excited about.

The first one is Lightning Returns: Final Fantasy XIII, which will be released in February 2014 in Europe and America, and in November 2013 in Japan. I’m a little annoyed that the gap between the Japanese and EU/US releases is so big; I don’t think there should be such a gap these days, considering how easy it is for spoilers to spread thanks to the internet! 🙂

I’ve already pre-ordered Lightning Returns from Amazon, and can’t wait to play it. The opening cut scene looks awesome:

Having played through the original Final Fantasy XIII, Lightning is probably my favourite character after Cloud from Final Fantasy VII. A lot of people didn’t really like XIII because the world is restricted and quite linear, unlike the other Final Fantasy games: you pretty much just go forward as the story progresses, with very little opportunity to explore the world. Although this did bother me, I found the story good enough to keep playing, so I didn’t really mind so much.

I haven’t played through Final Fantasy XIII-2 yet, since I got a little side tracked with some other things, but I guess I have more than enough time before February to play it!

The next game I’m looking forward to is another Final Fantasy game, Final Fantasy XV. This one isn’t a sequel or prequel to any existing Final Fantasy game, although it is based on the Fabula Nova Crystallis mythology, which the Final Fantasy XIII series was also based on, so it will be interesting to see if there are any links back to FFXIII.

I’m guessing Final Fantasy XV will be released in 2014, and will be PS4 only.

The next game I’m looking forward to is Metal Gear Solid V: The Phantom Pain. There is quite a long time until release, some time in 2014, but the gameplay videos and trailers that have been released look amazing.

Having played Metal Gear Rising: Revengeance earlier this year, I wasn’t really satisfied, as it’s not the traditional Metal Gear type of game. That’s not to say I didn’t enjoy it; it’s just that the story was rather weak and cheesy.

Metal Gear Solid V takes place after the events of Metal Gear Solid: Peace Walker. Since I don’t have a PSP, I am playing Peace Walker using the Metal Gear Solid HD Collection, which I highly recommend if you are a Metal Gear fan.

Metal Gear Solid V will be released on the PS3 and PS4. I’d really prefer to buy it on the PS4, but I don’t want to buy a PS4 so early, while the price of the console is still high.

Those are the main three games I’m looking forward to. There are some others, but I’m less excited about them.

At the moment I’m, very slowly, playing The Last Of Us, which is REALLY awesome so far. For some reason I get really bad motion sickness while playing it, so I can only play for around 30 minutes before I have to lie down for a bit, which is making it a little difficult to finish the game!

Home Lab Network Redesign Part 2: The Edge Routers

As I have never used a Mikrotik router before, there was quite a big learning curve.

I’ve only really used Cisco/Juniper-like interfaces to configure routers, and I’m a fan of them. Even though I have gotten a little more used to the RouterOS command line, I must say I’m not a huge fan of it. Most of the reasons are quite minor, but some of the things I don’t really like are:

  • I find it silly how the menus are structured. For example, I have to configure an interface in the “/interface” context first, then switch to the “/ip address” context to add an IP address. The same goes for just getting an IP from a DHCP server: you can’t do it from the “/ip address” context, but rather the “/ip dhcp-client” context. There are many other cases of this, and while none of it is really a big deal, I find it quite inconvenient. I want to configure the options for a single interface in one place.
  • There are a lot of little things I think RouterOS is lacking. For example, when creating a GRE tunnel from the “/interface gre” context, you have to provide a local-address to source the packets from. This is a pain if you are on a dynamic IP address, because it adds the extra step of editing the address every time it changes. On Cisco routers, you can just do “tunnel source $INTERFACE” and it’ll automagically use the correct source address. The same goes for adding routes via the DHCP-provided default gateway. On IOS, I can just do “ip route 8.8.8.8 255.255.255.255 dhcp” to route some packets explicitly via the DHCP-assigned default gateway. This is useful because, in order to reach my dedicated server, I need a single route via my DHCP-assigned default gateway before BGP from my dedicated server pushes down a new default route. In RouterOS you can’t do this; you have to add a static route manually and edit it each time your address changes (see the sketch after this list). Again, these are minor things, but I’m sure there are some bigger ones which I cannot remember at the moment.
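
To illustrate the difference, here’s a rough side-by-side (all addresses and interface names are made up for the example):

! Cisco IOS: the tunnel source tracks the interface, and the route follows DHCP
interface Tunnel0
 tunnel source FastEthernet0/0
 tunnel destination 198.51.100.10
!
ip route 8.8.8.8 255.255.255.255 dhcp

# RouterOS: local-address and gateway are literal IPs, so both need editing
# whenever the DHCP lease hands out a new address
/interface gre add name=gre-diamond local-address=82.1.2.3 remote-address=198.51.100.10
/ip route add dst-address=8.8.8.8/32 gateway=82.1.2.1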

To be fair, these are quite minor complaints, and considering the price difference between a Mikrotik router and a Cisco/Juniper router, I guess they are acceptable.

In terms of setting up the RB2011UAS-RM, I wanted to keep the config as simple as possible:

  • Make the DHCP client add the default route with a distance of 250. This lets the default route pushed from my dedicated server take priority and be the active route.
  • Add a static route to my dedicated server via the DHCP-assigned default gateway.
  • Set up VRRP on the “inside” interfaces of both edge routers.
  • Set up GRE tunnels back to my dedicated server.
  • Configure BGP from both edge routers to the dedicated server, and a BGP peering between the two routers via the point-to-point connection.
  • Add static routes to my internal network behind my ASAs.
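
As a rough RouterOS sketch of the first few steps (interface names and addresses are invented for the example):

/ip dhcp-client add interface=ether1-wan add-default-route=yes default-route-distance=250
/ip route add dst-address=198.51.100.10/32 gateway=82.1.2.1 comment="dedicated server via ISP gateway"
/interface vrrp add name=vrrp-inside interface=ether2-inside vrid=10 priority=200
/ip address add address=10.0.0.2/24 interface=ether2-inside
/ip address add address=10.0.0.1/32 interface=vrrp-inside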

I didn’t want to add any masquerading/NAT rules on the edge routers, because I felt it would add extra CPU load for no reason: the default route is via the dedicated server, and NAT is done there. However, I decided it might be better to add a rule to NAT any traffic going straight out to the internet (not via the GRE tunnels), just in case the BGP sessions on both routers went down and traffic was no longer going via my dedicated server.

That’s pretty much it for the edge routers. It’s simple, and it’s working well so far!

Again, I can share config files if anyone wants to look at them!

Home Lab Network Redesign Part 1: The Remote Dedicated Server

Home Lab Diagram
As promised, here is a very basic diagram of my home lab. This is quite a high-level overview, and the layer 2 information is not present: I suck at Visio, and all the connectors were getting messy with the layer 2 stuff included! What is not shown in the diagram:

  1. There are two back-to-back links between the edge routers which are in an active-passive bond.
  2. Each edge router has two links going into two switches (one link per switch), both these links are in an active-passive bonded interface.
  3. The two edge firewalls each have only two links, one going to each of those switches. One port is in the “inside” VLAN, and the other is in the “outside” VLAN. I wanted to have two links per VLAN going to both switches, but the Cisco ASAs don’t do STP or port-channels, so having two links would have made a loop.
  4. The link between the two ASAs actually goes through a single switch on a dedicated failover VLAN. From reading around, the ASAs go a little crazy sometimes if you use a crossover cable, as the secondary will see its own port go down as well in the event the primary fails, which can cause some funny things to happen. Using a switch between them means that if the primary goes down, the secondary ASA’s port will still stay up, avoiding any funny business.
  5. The core gateway only has two interfaces, each going to a different switch. One port is on the “inside” VLAN that the firewalls are connected to, and the other port is a trunk port with all my other VLANs. This isn’t very redundant, but I’m hoping to put in a second router when I have some more rack space, and use HSRP to allow high availability.

As I mentioned in my previous post, I have a dedicated server hosted with RapidSwitch, through which I wanted to route all my connections. There were a few reasons I wanted to do this:

  1. Without routing through the dedicated server, if one of my internet connections went down and I failed over to the other, my IP would be different from my primary line’s. This would mess up some sessions and create a problem for DNS, as I can only really point records at one line or the other.
  2. My ISP only provides dynamic IP addresses. Although the DHCP lease is long enough that the addresses don’t change often, it’s a pain updating DNS everywhere on the occasions when they do. Routing via my dedicated server effectively gives me a static IP address; I only need to change the endpoint IPs for the GRE tunnels should my Virgin Media-provided IP change.
  3. I also get the benefit of being able to order more IPs if needed; Virgin Media do not offer more than one!
  4. Routing via my dedicated server at RapidSwitch also has the benefit of keeping my IP even if I change my home ISP.

The basic setup of the dedicated server is as follows:

  1. There is a GRE tunnel going from the dedicated server (diamond) to each of my edge routers. Both GRE tunnels have a private IPv4 address, and an IPv6 address. The actual GRE tunnel is transported over IPv4.
  2. I used Quagga to add the IPv6 address to the GRE tunnels as the native RedHat ifup scripts for tunnels don’t allow you to add an IPv6 address through them.
  3. I used Quagga’s bgpd to create an iBGP peering over the GRE tunnels to each of the Mikrotik routers, and to push down a default route to them. The edge routers also announce my internal networks back to the dedicated server.
  4. I originally wanted to use eBGP between the dedicated server and the edge routers, but I was having some issues where the BGP session wouldn’t establish if I used different ASNs. I’m still looking into that.
  5. There are some basic iptables rules just forwarding ports, doing NAT, and cleaning up some packets before passing them over the GRE tunnels, but that’s all really.
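
For anyone curious, here is a rough sketch of the sort of bgpd.conf that handles the iBGP part (the ASN and tunnel addresses are invented for the example):

router bgp 64512
 bgp router-id 10.255.0.1
 ! one neighbour per GRE tunnel, each being sent a default route
 neighbor 10.255.1.2 remote-as 64512
 neighbor 10.255.1.2 default-originate
 neighbor 10.255.2.2 remote-as 64512
 neighbor 10.255.2.2 default-originate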

Other than that, there isn’t much to see on the dedicated server. It’s quite a simple setup on there. If anyone would like to see more, I can post any relevant config.

Home Lab Network Redesign with Mikrotik Routers

I have two cable connections from Virgin Media coming into my house due to some annoying contract problems.

I originally had one line on the 60Mbit package and the other on 100Mbit, but when Virgin Media upgraded me to 120Mbit, I downgraded the 60Mbit line to 30Mbit to reduce costs.

Since I got into this strange arrangement with Virgin Media, I have been using a Cisco 1841 Integrated Services Router on the 30Mbit line and a Cisco 2821 Integrated Services Router on the 120Mbit line, but I found that I wasn’t able to max out the faster line using the 2821. Looking at Cisco’s performance sheet, the Cisco 2821 ISR is only really designed to support lines of up to around 87Mbit.

So naturally, it was time to upgrade! Initially I wanted to get a faster Cisco router, but looking at the second-generation ISRs, that would be a bit pricey!

I did actually upgrade all my 7204 VXRs to have NPE-400 modules, which according to the performance sheet should do around 215 mbits, but the 7204s are extremely loud, and I only switch them on when I am using them.

Michael and Jamie have always been talking about Mikrotik routers so I figured since Cisco is a no go, I’ll give Mikrotik a chance. I ended up buying two RouterBOARD 2011UAS-RM from WiFi Stock.

To put the RB2011UAS-RM boxes in, I decided to restructure my network a bit. I will be making a series of posts discussing the redesigned network.

My goals for the redesign were as follows:

  • The RB2011UAS-RM boxes will only function as edge routers, encapsulating traffic in GRE tunnels, and that’s all.
  • There will be a link between both edge routers, with a BGP peering for redirecting traffic should one of my lines go down.
  • They will have GRE tunnels to all my dedicated servers/VPSs.
  • I will use Quagga on all dedicated servers, and VPSs outside my network to create BGP peerings with my edge routers.
  • I wanted to route all my internet out of a server I currently have hosted with Rapid Switch, so BGP on the RapidSwitch box (called diamond) will have to push down a default route.
  • I wanted to use my Cisco ASA 5505 Adaptive Security Appliances as firewalls between the edge routers and the core.
  • I recently bought a Cisco 2851 Integrated Services Router, which I will use as a “core” router.
  • I wanted as much redundancy as possible.

In my next post I will create a diagram of what I will be doing, and discussing the setup of the server I have hosted at RapidSwitch.

As I have never used Mikrotik routers before, I will also attempt to discuss my experiences with RouterOS as I go along.