Friday, December 7, 2012

Radio NZ's New Hosting

Since 2005 Radio NZ has hosted its web servers at ICONZ's hosting facility in Auckland. We also had a connection in Wellington that allowed us to publish content to the live system via a VPN.

Also, since 2005, we've used Citylink for audio content delivery. Their CDN, initially run as a side project by Richard Naylor, was the only economic way for us to deliver our audio streaming and downloads.

Until this year.

The Background

We rely on computer-based audio tools, and as a life-line utility we also need to have resilient infrastructure including full UPS and generator back-up for our main offices.

So self-hosting was always possible, but the cost of bandwidth to our premises made it uneconomic. In late 2004 we were quoted $87,000 a month for a 1 Gbit/s internet connection.

As late as 2010 we were still being quoted prices of $4,000 per month for local connectivity to all NZ consumers. Fibre connectivity and International data was on top of that!

Earlier this year we issued an RFP for gigabit fibre and bandwidth from our Wellington and Auckland offices. We became an APNIC member and have our own IP addresses and AS number, enabling us to participate directly in the internet at a technical level.

FX Networks won the tender process with an innovative data plan and is now the ISP for all our public-facing web services. These include our website, the live streaming service and our podcasts.

Because of FX's extensive peering arrangements, all traffic is delivered from New Zealand. Customers of two large national ISPs will no longer have their content imported from a server in the U.S.

International visitors make up between 5% and 15% of traffic (depending on the time of day), and they'll be served from NZ as well. This means that for the vast majority of visitors latency will drop, and download speeds may go up.

The Migration

The major constraint on the migration was downtime: there was to be none. The second was that we could not buy any more servers, having bought eight only three years ago. That meant we had to work out a way to run the existing site at ICONZ and a synchronised clone of it from one of our offices via FX.

As it happens our old CMS (MySource Matrix) required six servers, and we had two originating servers for the Citylink CDN (one, a live spare). But we have since replaced our CMS with something that needed fewer resources, freeing up some servers to allow us to build the new system in tandem with the old.

So far, so good.

The new infrastructure uses virtual machines (VMs), running on GNU/Linux (except for Windows Media Server). All VMs must be able to run in Auckland or Wellington to allow services to continue if connectivity to one office is lost (such as in a disaster).

Each VM has a static IP address that is usable at either location, and a combination of Routing Information Protocol (RIP) and Border Gateway Protocol (BGP) is used to ensure requests go to the right place within our network.

With two servers relocated to our Wellington office and two to Auckland, a basic cluster was built. At each site dual edge routers connect to FX Networks over fibre, and an SRX240 handles firewall duties. A number of VPN tunnels run between the sites to allow disk replication and communication between VMs in the cluster.

The backup media and podcast server was moved from the ICONZ address space to the FX connection and prepped to take over from the CDN.

Once the cluster's bare-metal hardware was in place, the existing web servers were cloned into new VMs and reconfigured to work within the new environment.

The audio service change-over was the simplest, requiring only a DNS change. The DNS records for the streaming and podcast hostnames were updated to point to our server, and over the space of a day or so all load moved from Citylink's CDN to our server in Wellington. The second server will be moved from ICONZ to our Auckland office, to act as a live spare.
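As a sketch, that change amounts to repointing A records in the zone file. The hostnames and addresses below are placeholders, not our actual records:

```
; hypothetical zone fragment - names and addresses are placeholders
stream    300  IN  A  203.0.113.10  ; previously a Citylink CDN address
podcast   300  IN  A  203.0.113.10  ; now served from our own box
```

A short TTL lets the change propagate quickly, though as noted below, not every resolver honours it.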

The website change-over was more complicated. Because we publish so frequently, we could not do a straight DNS change: some people would be getting the old (no longer updated) site for a period of time. There were several ways to keep the sites in sync, but given the complexity (we are still using parts of MySource Matrix), it wasn't worth the pain to cover the one- to eight-hour DNS propagation window. (And before you ask, not all ISPs honour a short TTL.)

The option I chose was to set up a Varnish caching server, point it at the current site, and change the DNS to direct traffic to Varnish. As requests moved from the existing site to Varnish, they were silently forwarded back to the old servers. Once all requests were arriving at Varnish, we could synchronise the new VM servers with the old live servers, and update Varnish to fetch from the new ones.
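A minimal sketch of that Varnish arrangement, in Varnish 2-era VCL (the hostname is a placeholder, not our real configuration):

```
# the origin Varnish fetches from - initially the old live site
backend origin {
    .host = "old-live.example.net";  # placeholder hostname
    .port = "80";
}

# once the new VMs were synchronised, .host was repointed at the
# new servers and Varnish reloaded; clients never saw the switch
```

Because clients only ever talk to Varnish, the origin behind it can be swapped without any client-visible change.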

This approach meant we could synchronise the new servers with the old (which took about 30 minutes), and then instantly switch Varnish to send requests to the new machines. This we did at 8:45 pm last night. During the syncing process our newsroom could still publish updates to the old server, and the first publish on the new server brought everything up to date.

It is interesting to note that we started directing traffic to the Varnish host just before a tornado hit Auckland, and during this period we saw four times the usual amount of traffic to the site. All of it went via Varnish, and there was not a single outage.

We will soon move the remaining servers out of ICONZ and add them to the new cluster, improving redundancy and load-balancing capabilities further. From there we'll be fine-tuning the overall performance of the cluster, and next year launching a new design.

I'm happy to answer any technical questions in the comments.

Saturday, June 30, 2012

Latest Stats for Radio NZ

I periodically publish stats on browser and OS use on the site, and this time I'm going to include some content and mobile stats.

Size of Content Library

The audio library now contains 17,000 hours of material, all of it searchable from the website. Most programmes go back to 2008; some (Our Changing World, to 2005, and Spectrum, to 2001) go back further.

We are adding to this library at the rate of 13.5 hours a day. Almost everything is downloadable and embeddable.


Browser    06/2012   11/2011   11/2010   11/2009   11/2008
IE           37.5      41.2      50.6      56        63
Firefox      19.8      23.2      25.52     27.5      27.73
Safari       17.3       5.6      13.1      10         5.66
Chrome       14.6      13.8       8.75      4.2       1.47
Android       7.5         -         -        -          -
Opera         0.69        -         -        -          -

IE is in decline, and IE6 is currently 3.25% of IE browser share.

Operating System

OS         06/2012   11/2011   11/2010   11/2009   11/2008
Windows      67.3      72        81        84.8      89.3
Mac          14.7      15.6      14.2      12.6       8.5
iOS           7.88        -         -        -          -
Android       7.79        -         -        -          -
iPad            -       2.4       0.63      0          0
Linux         1.27      1.53      1.4       1.45       1.72
iPod            -       0.5       0.35      0.22       0.08


The proportion of visits from mobile devices is high: 16%, in fact.
Here is a breakdown of the top ten devices.
Device                          %       Avg. visit duration   Pages/visit
iPad                            25.5    2:13                  2.3
iPhone                          21.0    1:52                  1.7
(not set)                       11.12   1:14                  1.56
Samsung GT-I9100 Galaxy S II     6.85   0:54                  1.43
Huawei U8150 Ideos               3.36   0:53                  1.26
Samsung GT-S5570 Galaxy Mini     3.34   0:37                  1.27
SonyEricsson LT15i Xperia Arc    3.1    1:00                  1.51
iPod Touch                       3.01   3:42                  2.26
Samsung GT-S5830 Galaxy Ace      2.73   0:41                  1.25
Motorola MB525 DEFY              2.73   0:48                  1.42

Site Speed

We have shaved nearly a second off our Google site speed in the last 12 months, although we are still hampered by having two CMSs running different parts of the site. I suspect speed is one reason for the high mobile use of the site - it is fast on these devices.

Tuesday, June 26, 2012

The Homeopathic Hot Water Bottle

I am departing from my usual posts about technology to share a remarkable discovery.

For years I have been skeptical about the claims of homeopathy - that minute amounts of a substance can have therapeutic properties. In the last few weeks I have changed my mind after finding proof that anyone can try for themselves. And the discovery was made completely by chance!

Everyone knows that a hot water bottle (HWB) gets cold at a certain (fixed) rate. The rate depends on the amount of bedding and any other factors which determine heat loss. One night I recycled the cold water from the night before, and a family member commented that the HWB was still warm in the morning. This seemed odd, so I investigated further.

What I found is that the rate of heat loss can be altered using homeopathic techniques. Here is how.

On the first night make the water for your bottle with 400 mL of cold tap water and 800 mL of boiling water. This mix gives a temperature which is safe - hotter than this can cause the bottle to perish more quickly and leak. Extensive scientific testing has also found this temperature to be about right for certain family members.

On the second night use 400mL of cold water taken from the bottle, adding 800 mL of fresh boiling water. Repeat each night for a week.

You will find that each night the HWB retains its heat better. The effect seems to flatten out after a week.

The only explanation I have for this is that the recycled cold water carries the memory of being hot, and for some reason, when exposed to fresh hot water, prefers to stay hot.

Amazing (and useful)!

Wednesday, March 14, 2012

Rails - How to compile and commit assets locally

The Rails asset pipeline has one major drawback: if you have a large number of assets, compiling them during deploy can take many minutes. The worst I've heard of is six minutes! If you deploy frequently this is a major nuisance.

One answer to this problem is to compile the assets within your development environment, commit the files to source control, and deploy them along with the rest of the code.

There is one major issue with this approach: in development those precompiled files will be served as static files by the web server instead of the requests being handled dynamically by Rails and Sprockets. That stops changes to the underlying assets from showing up on the front-end.

This is very simple to fix.

In development.rb place the following line:

config.assets.prefix = "/dev-assets"

This overrides whatever is set in application.rb (normally "/assets"), so in development the precompiled files in public/assets are ignored and requests still go to Sprockets.

You will also need this in application.rb:

config.assets.initialize_on_precompile = false

That stops the precompile task trying to connect to your database. (Beware: if you are referring to ActiveRecord models in your assets, this won't work.)

These changes allow you to compile and commit the assets to your repository locally, and have those files in your working development tree, but for development requests to still be sent to Sprockets. Plus, you only have to precompile and commit when something has actually changed.
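The local workflow might then look like the following (a sketch, assuming a standard Rails 3 app layout):

```
# compile the assets locally, then commit the generated files
RAILS_ENV=production bundle exec rake assets:precompile
git add public/assets
git commit -m "Precompile assets"
```

Because the compiled files only change when the underlying assets do, most deploys involve no asset work at all.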

NB: You will need to make sure any JavaScript and CSS compressors you use are available on your local machine. And you will need to change the deploy task to NOT precompile assets or create an 'assets' symlink.
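If you deploy with Capistrano 2 and its standard assets recipe, one way to do that (a sketch, assuming `load 'deploy/assets'` was previously in use) is to override the relevant tasks with no-ops:

```ruby
# deploy.rb - replace the remote precompile and symlink steps,
# since the compiled assets are already committed to the repository
namespace :deploy do
  namespace :assets do
    task :precompile, :roles => :web do
      # no-op: assets are compiled locally and checked in
    end
    task :symlink, :roles => :web do
      # no-op: public/assets ships with the code, no shared symlink
    end
  end
end
```

Redefining a task in Capistrano 2 replaces the earlier definition, so the built-in behaviour is simply skipped.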