I have added a new JavaScript function to the Radio NZ site called Oggulate.
This function scans pages that have Ogg download links and replaces each one with a Play/Pause button. The button plays (and pauses) the Ogg file using Firefox 3.1's built-in support for Ogg Vorbis.
The Oggulator can be activated by installing a small bookmarklet:
javascript:oggulate();
Clicking on the bookmarklet runs the function and gives you a nice page full of buttons for playing Ogg files. You will need one of the latest Firefox 3.1 builds for this to work.
The functionality in the script is rudimentary, and 3.1 is still in development, so don't expect it to be perfect. (I notice there is often a delay before playback starts for the first clip you play on a page, and I have had a few crashes and lock-ups).
This feature of the RNZ site is experimental at the moment, so I'm not able to offer support. It is primarily so people can test Firefox.
If anyone wants to add features or improve the function, send the changes to webmaster at radionz dot co dot nz and I'll upload them (after review) so everyone can access them.
(NB: As at 2013 this feature is no longer supported.)
Wednesday, September 10, 2008
Clearing the Cache on Matrix
Here's the problem:
We run a master/slave server pair. Each has a web server and database server. The master is accessed via our intranet, and is where we do all our editing and importing of content. This is replicated to the public slave server.
The slave server has a Squid reverse proxy running in front of it to cushion the site against large peaks in traffic. These peaks occur when our on-air listeners are invited to go to the site to get some piece of information related to the current programme. The cache time in Matrix (and therefore in Squid) is 20 minutes.
The database is replicated with Slony, while the filesystem is synchronised with a custom tool based on rsync.
If we update content it can take up to 20 minutes for that content to show on the public side of the site. This is a problem when we want to do fast updates, especially for news content.
We've looked at a number of solutions, but none quite did what we wanted.
In a stand-alone (un-replicated) system clearing the cache is simple. There is a trigger in Matrix called Set Cache Expiry that allows you to expire the Matrix cache early. This works fine on a single-server system, but not if you have a cluster and use Squid. The main issue in that case is that even though the trigger is synchronous, the clearing is not. If Slony has a lot of work to do, there is still a chance that the expiry date has passed before the asset is actually updated on the slave.
A clearing system needs to be 100% predictable, which led me to devise an alternative solution.
We needed to do three things:
a) Determine when changes made on the Master have been replicated to the Slave.
b) Collect ids of assets that are changed and the pages they appear on.
c) Clear the assets collected in b) when we know a).
This is how we do it.
a) There are two queries that can be run on the Master database to get this information:
psql -U postgres -h server_name db_name -qAtc "SELECT st_last_event FROM _replication.sl_status"
returns a sequence number representing the Slony master's current position.
psql -U postgres -h server_name db_name -qAtc "SELECT st_last_received FROM _replication.sl_status"
returns the sequence number the slave has reached.
If you grab the master sequence number just after a content change (which is itself a database query), you can tell the change has reached the slave once the slave's sequence number is the same or greater.
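To make that concrete, here is a minimal sketch of how the two queries can be combined to wait for a change to replicate. The server and database names are placeholders, and the three-second poll interval is an assumption:

#!/usr/local/bin/perl
# Sketch: block until the Slony slave has caught up with the master
# sequence number recorded just after a content change.
# server_name/db_name are placeholders; the poll interval is a guess.
use strict;
use warnings;

my $psql = 'psql -U postgres -h server_name db_name -qAtc';

# Record the master's position immediately after the content change.
chomp( my $master_seq = `$psql "SELECT st_last_event FROM _replication.sl_status"` );

# Poll until the slave reports that sequence number (or a later one).
while (1) {
    chomp( my $slave_seq = `$psql "SELECT st_last_received FROM _replication.sl_status"` );
    last if $slave_seq >= $master_seq;
    sleep 3;
}
print "Replicated: slave has reached $master_seq\n";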
b) We have a script that imports news items to Matrix. One of the attributes in the imported data is a list of asset ids that are affected by the import action. We know in advance which asset lists and pages the content will show on.
When the script runs it collects these for each imported asset, and compiles a list of asset ids (with no duplicates).
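In Perl a hash makes the de-duplication trivial. The fragment below is only a sketch - @imported_items and its affected_assets key are hypothetical stand-ins for the import script's real data structures:

# Sketch: compile a duplicate-free, comma-separated asset list.
# @imported_items and 'affected_assets' are hypothetical names.
my %seen;
my @asset_ids = grep { !$seen{$_}++ }
                map  { @{ $_->{affected_assets} } } @imported_items;
my $asset_list = join ',', @asset_ids;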
c) This is how it is bolted together.
After some assets have been imported, the import script calls a second script which adds the items to a queue:
system( 'perl add_to_cache_queue.pl --assets="' . $asset_list . '"' );
This script is short, so I'll reproduce it here.
#!/usr/local/bin/perl
# This script adds items to the DirQueue on the current machine.
use strict;
use Getopt::Long;
use IPC::DirQueue;

my $assets_to_clear = '';
GetOptions( "assets=s" => \$assets_to_clear );
if( $assets_to_clear eq '' ){
    exit(1);
}

# Record where the Slony master is at the moment the assets are queued.
my $command = 'psql -U postgres -h host db -qAtc "SELECT st_last_event FROM _replication.sl_status"';
my $last_event = `$command`;
chomp $last_event;
print "Last event: $last_event\n";

# A queue to add items to. Locks can last for 2 minutes.
my $dq = IPC::DirQueue->new({ dir => "matrix-cache-queue", data_file_mode => 0777, active_file_lifetime => 120 });
if( $dq->enqueue_string( $assets_to_clear, { 'id' => $last_event, 'time' => scalar localtime(time) } ) ){
    exit 0;
}
print "could not queue assets\n";
exit 1;

A second script runs on the same machine as a worker process, watching the queue. This uses the loop code I outlined in my last post.
The queue itself is Perl's IPC::DirQueue, a very cool module for managing a filesystem-based queue.
When an item is found on the queue, the worker checks the Slony sequence number that was saved with the data. If the slave has reached or passed that number, yet another script is run, this time on the slave (public) server:
php ./matrixFlushCache.php --site="/path/to/site/radionz" --assets="comma_sep_list"
This last script resolves the asset id numbers in Matrix to a list of file system cache buckets and URLs. The cache buckets are removed, and the URLs are also cleared from the Squid cache. The cache is then primed with the new page. The script was written by Colin Macdonald.
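Colin's script isn't reproduced here, but the Squid half of the job can be sketched in a few lines of Perl. This assumes Squid is configured to accept the PURGE method from the web server, and @urls is a hypothetical list of the resolved URLs:

# Sketch: purge each URL from Squid, then fetch it again to re-prime
# the cache. Assumes Squid accepts PURGE from this host; @urls is a
# hypothetical list of resolved page URLs.
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request;

my $ua   = LWP::UserAgent->new( timeout => 10 );
my @urls = ( 'http://www.radionz.co.nz/', 'http://www.radionz.co.nz/home' );

for my $url (@urls) {
    $ua->request( HTTP::Request->new( PURGE => $url ) );   # drop from Squid
    my $res = $ua->get($url);                              # re-prime the cache
    printf "%s ... %s\n", $url, $res->code;
}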
This is what the output of the actual flush script looks like:
** lock gained Sat Aug 30 20:03:01 2008 running jobs: * * * * ?
Slony slave is at 485328, Got a job at: 485327
data:200,1585920,1584495,764616,etc
Clearing cache for #200
Unlinking cache/1313/e8f12c0d7b5889d748872bdad215c0cf
Unlinking cache/1113/aaa1666c26af81a0b044ab2fecb950ae
Deleting DB records (2 reported, 2 deleted)
[snip - the same output for the other assets]
Refreshing urls:
http://www.radionz.co.nz/ ... 200
http://www.radionz.co.nz/home ... 200
http://www.radionz.co.nz/news/business ... 200
http://www.radionz.co.nz/news/business/ ... Skipped
[snip - more URLs]
- lock released
The script loops, checking every three seconds for a new job (queued asset ids to clear). The * means it checked for a queued job and found none. The ? means a job was found, but the slave's sequence number was still lower than the one stored with the job.
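The loop code itself was covered in my last post, so here is just a minimal sketch of its shape, using IPC::DirQueue's documented job API (pickup_queued_job, return_to_queue, get_data_path, finish); the host and database names are placeholders:

#!/usr/local/bin/perl
# Sketch of the worker loop: '*' = no queued job, '?' = a job was
# found but the slave hasn't replicated far enough yet.
use strict;
use warnings;
use IPC::DirQueue;

my $dq = IPC::DirQueue->new({ dir => "matrix-cache-queue" });

while (1) {
    my $job = $dq->pickup_queued_job();
    if ( !$job ) {
        print '* ';
        sleep 3;
        next;
    }

    # The Slony sequence number saved by add_to_cache_queue.pl.
    my $job_seq = $job->{metadata}->{id};
    chomp( my $slave_seq = `psql -U postgres -h host db -qAtc "SELECT st_last_received FROM _replication.sl_status"` );

    if ( $slave_seq < $job_seq ) {
        print '? ';
        $job->return_to_queue();    # not replicated yet - try again later
        sleep 3;
        next;
    }

    # Replicated: read the asset list and flush the caches on the slave.
    open( my $fh, '<', $job->get_data_path() ) or die "cannot read job data: $!";
    chomp( my $assets = <$fh> );
    close $fh;
    system( qq{php ./matrixFlushCache.php --site="/path/to/site/radionz" --assets="$assets"} );
    $job->finish();
}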
The top 5 stories (the ones on the home page) are also cleared and refreshed.
The flush script has a URL filter so you can exclude certain URLs from being flushed - an example is the query-ridden script-kiddie hacks that people try to run against sites. There is no point in re-caching those.
Another is URLs ending in /. In our case this mostly means that someone has deleted the story from the end of the URL to see what they get, so there is no reason to refresh these either.
A feature I'm working on will clear just the Matrix and Squid caches for the non-front-page stories, without re-priming them. These pages all have a 2 minute expiry time, so if we simply expire the caches the end user will re-prime them. There is no performance hit in doing this, as browsers and Squid come back for these pages every two minutes anyway.
The system I have just outlined allows us to remotely add and update items in Matrix and for those changes to appear on the site within 5 minutes. I hope someone finds this useful.