As well as a new design, we built a new content management system to handle our publishing requirements.
Starting from the inside out, this is what our technology stack looks like.
The Hardware
We have eight HP servers, all with 8-core hyper-threading CPUs and a minimum of 12 gigs of RAM. Four of these are at our Wellington office, and four are at our Auckland office. We'd previously hosted five machines at the ICONZ data centre in Auckland.

Each group of machines is connected to the internet via a Juniper firewall, with dual edge routers (for redundancy) connecting to FX Networks via gigabit fibre. And yes, we can run both sites at 1 gigabit if we need to.
Self-hosting was, as they say, a no-brainer. We have a server room at both locations, each with a UPS and an on-site generator. The deal we have with FX provides price certainty under a range of extreme scenarios. For example, the traffic we experienced after the Christchurch earthquake in February 2011 was 50 times higher than normal and stayed 15% higher for days afterwards. We already move about 500 gigabytes of data a day, so large peaks in traffic can cause large peaks in cost.
The dual-site approach allows us to continue operating if one site is cut off for some reason, and to share load between the sites if required.
We did consider hosting in the cloud, but two things counted against this approach: first, we had eight high-end servers with three years of depreciation still to run; and second, the cost of provisioning for peaky and unpredictable traffic patterns. Our strong preference is to host in New Zealand to give local visitors the fastest possible service, and the price point of local cloud services is not quite there for an operation of our size.
The Operating Systems
Our primary operating system (on six servers) is Debian GNU/Linux. All run on bare metal, with virtual machines (VMs) running on top. VMs can be moved between cities with almost no gap in service.

We also have a pair of Windows Media servers (one in each town) to handle live streaming and on-demand audio, but these will be retired later in the year and replaced with a Wowza streaming server. That change will give us much better cross-platform streaming capability.
The whole cluster was set up by Simon Blake of Katipo.
The Content Management System
Our content management system (CMS) is called ELF, and was built using the Ruby on Rails framework. We chose Rails because it is used internally for some of our intranet applications, and because it was a good fit for the type of content we offer.

ELF replaces MySource Matrix, which we used from 2005 until we started phasing it out in 2010.
The first code for ELF was written on 8 March 2010, and in the month that followed Nigel Ramsay and Marcus Baguley from AbleTech wrote a proof of concept based around the recipes section of the site.
I was learning Rails myself at this point, from a background in 8086 Assembly, C, Perl and PHP. Later in the project we used external contractors for larger blocks of work and what I call 'heavy lifting' - complex features that require extended periods of work. Shevaun Coker has done much of this work and was joined by Cameron Prebble recently (both from AbleTech).
The proof of concept was used to assess the search tool (Apache Solr) and the load profile of the system (how many pages it could serve). It was successful, with the platform able to serve around 850 requests a second. This compared very well to the 20-30 requests a second upper limit of the old system.
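For the curious, the Solr side of a proof of concept like this can be very small in Rails. The sketch below is illustrative only - the post doesn't say which Solr client library ELF uses, and the model and field names are invented - but it shows the general shape of indexing and querying a recipe with the Sunspot gem:

    # Gemfile (assumption): gem 'sunspot_rails'
    # A hypothetical recipe model indexed in Apache Solr via Sunspot.
    class Recipe < ActiveRecord::Base
      searchable do
        text :title, :ingredients, :instructions   # full-text fields
        time :published_at                         # for sorting and filtering
      end
    end

    # Querying Solr for matching recipes, newest first.
    search = Recipe.search do
      fulltext 'pumpkin soup'
      order_by :published_at, :desc
      paginate :page => 1, :per_page => 20
    end
    search.results   # => array of Recipe records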
Over the following 18 months large sections of the site were replaced, with news (the largest) replaced just weeks before the 22 February earthquake.
The last major section of Matrix was replaced late last year. In all, several hundred thousand pieces of content were migrated from Matrix to ELF.
The administration section of ELF has been written specifically to cater to the needs of busy broadcast producers and is highly optimised for the work that we do. We have built only the features that we need, avoiding clutter from features we don't want or use.
A producer can learn all they need to know about publishing their content in about 10 minutes.
ELF is about 126,000 lines of code, and the test suite has 2,200 tests that help us ensure the system remains free of bugs. A recent check of the system found that, on average, visitors to the site got a system error (a 500 error page) once every 40,000 page views. All bugs get attended to, so that number is now even lower.
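The tests themselves are mostly ordinary Rails tests. As a flavour of what a suite like this contains (the framework, controller and fixture names here are guesses, not ELF's actual code):

    require 'test_helper'

    # A hypothetical functional test for a recipes controller.
    class RecipesControllerTest < ActionController::TestCase
      test "index responds successfully" do
        get :index
        assert_response :success
      end

      test "show renders a recipe's title" do
        recipe = recipes(:pumpkin_soup)   # fixture name is invented
        get :show, :id => recipe.to_param
        assert_response :success
        assert_select 'h1', recipe.title
      end
    end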
We use a Varnish cache in front of the Rails application server, and they work in concert to deliver pages as quickly as possible, even when there is high demand.
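The post doesn't detail the caching rules, but a common way to make Rails and Varnish cooperate is for the application to set Cache-Control headers that Varnish then honours. A rough sketch, with invented controller and model names:

    class NewsItemsController < ApplicationController
      def show
        @item = NewsItem.find(params[:id])
        # Mark the response as publicly cacheable for 60 seconds, so a
        # cache in front of the app (Varnish here) can serve repeat
        # requests without touching Rails.
        expires_in 60.seconds, :public => true
      end
    end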
There is a very detailed series of posts about the design and building of ELF, and the migration of content, right here on this blog. The series is called Rebuilding Radio NZ.
Publishing Systems
We publish content in three main ways:
- manually, via the administration interface of ELF;
- from our news editing system (iNews);
- from our on-air audio system (CoSTAR).
News content takes 15 seconds to update once the publish key is pressed, while audio takes a little longer due to the need to encode it for web delivery. We can typically have a piece of audio on the web a few minutes after broadcast.
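The encoding step is the slow part of that pipeline. The post doesn't name the encoder ELF drives, but the job is conceptually simple - something along these lines (ffmpeg, the method name and the file names are assumptions):

    # Hypothetical sketch: encode a broadcast WAV to MP3 for web delivery.
    def encode_for_web(source_wav, destination_mp3, bitrate = '128k')
      ok = system('ffmpeg', '-y',
                  '-i', source_wav,          # audio captured off air
                  '-codec:a', 'libmp3lame',  # encode to MP3
                  '-b:a', bitrate,
                  destination_mp3)
      raise "encoding failed for #{source_wav}" unless ok
    end

    encode_for_web('morning_report_0700.wav', 'morning_report_0700.mp3')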
Pre-recorded programmes (such as Our Changing World, Insight, Spectrum and many others) are pre-published, with content going live automatically after broadcast.
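Pre-publishing is straightforward to model in Rails. One common pattern is a release timestamp plus a query scope, so a page only appears once its broadcast time has passed; the model and column names below are illustrative, not ELF's actual schema:

    # Hypothetical model: content is created ahead of broadcast with a
    # release time, and the public site only ever queries the :live scope.
    class Programme < ActiveRecord::Base
      scope :live, lambda { where("release_at <= ?", Time.zone.now) }
    end

    # Producers can load and check content before broadcast; it becomes
    # visible automatically once release_at has passed.
    Programme.live.order("release_at DESC").limit(10)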
All these processes have been automated wherever possible, and they are highly optimised for speed, flexibility and reliability.
The Design
The aim was to simplify the site and let the content shine through.

We have retained many familiar elements in the design, improved others, and thrown away stuff that did not work. In the last three months there has been a vast amount of polishing work done - selecting great images, taking new staff photos, and reviewing all the content. It's been a long process!
Two years ago I wrote a post about dealing with doubt, and I wondered if we'd ever finish the project. I think if I'd known it would take another two years I might have given up then. Part of the problem was that a number of large projects I was working on got delayed, and I ended up working on them all at the same time.
Now that we're done with this phase of the project, I am more than satisfied that the journey has been worth the trouble. We have a great new design and a CMS that'll cope with all our needs and can continue to grow and be modified as we evolve the site.
The next phase is....
...you'll have to wait and see!
The Credits
My colleagues Frances Hopkins and Helena Nimmo deserve special mention. Their commitment and drive have ensured that the finished site has reached a level that is much, much more than the sum of its parts.

We've been supported by a great group of contractors (in no particular order): Shevaun Coker, Nigel Ramsay, Cameron Prebble, Marcus Baguley, Michael Koziarski, Alison Green, Susannah 'Roxy' Field, David Buck, Simon Blake, Amnon Ben-Or, Amanda Dorrell, John Moore, Phillipa Devenport-Johns, and Emma Harrison.
Feedback or comment about the site can be sent to rnzwebsite@radionz.co.nz.
I am happy to answer any technical questions in the comments.
4 comments:
Congratulations on such a cool, clean new Radio NZ on-line layout. I also welcome the frank open story of how you got there from a techy perspective.
For this septuagenarian you fulfill a lifelong dream to control and order my news and current affairs environment on a time-warp 24/7 basis. I love it. Of course my grandchildren will accept it all without comment as "normal" without wonder.
David Hindin
Christchurch
Can you explain why you decided to migrate from Matrix?
OK... ignore that last question about Matrix! Just been back and looked through your wonderfully thorough blog. Thanks ... most interesting.
Congrats on the new Radio NZ site Richard et al. It's a fantastic piece of work. Love the new design, the typography and structure are much improved, and the focus on the user experience is a great example to other NZ websites. Radio NZ is a taonga on the airwaves and the internet.