Friday, 29 March 2013

I see you are wounded, my Lord


Thursday, 28 March 2013

Bookmarks for 20130328

Hogeweyk, a "hyperrealistic" care-village in the Netherlands, offering an environment for dementia patients that closely resembles normal life, but is a carefully maintained illusion - front-stage / back-stage. This is a page about the architecture of the place.

Short film version of Tim Maughan's excellent "Paintwork" story (augmented reality, graffiti)

Thursday, 21 March 2013

Bookmarks for 20130321

Instant Healing Gel. This is so awesome I can't even.

Behold, the travesty that is the F-35; feast your eyes on this AlterNet article:

Also, I had no idea AlterNet were still going. Yay!

Epic snow crystals under electron microscope:

Bookmarks for 20130320 (ish)

After a lot of messing about tonight, finally got some sense out of XmlReader after reading the following codeproject article:

Tuesday, 19 March 2013

Bookmarks for 20130319

Fantastic article about Sean Smith, aka Vile Rat, Eve Online's greatest diplomat, killed in action in the attack on the US Embassy in Benghazi on September 11th 2012. It explains a little bit about the game and, in some part, about the community that I love. Reading it brought back the initial shock at hearing about the attack and then the ripples it sent through the game's players. Memento mori; Vita brevis.

And a first look at new Mozilla Firefox dev tools:

Monday, 18 March 2013

Friday, 15 March 2013

Bookmarks for 20130315

"Huginn is a system for building agents that perform automated tasks for you online. They can read the web, watch for events, and take actions on your behalf. Huginn's Agents create and consume events, propagating events along a directed event flow graph. Think of it as Yahoo! Pipes plus IFTTT on your own server. You always know who has your data. You do."

The Lunar Orbiter Image Recovery Project is seeking funds to continue saving the original imagery. Last few days for this!

Thursday, 14 March 2013

Bookmarks for 20130314

"@sup3rmark: Two weeks of no pope: baby cured of HIV, breath test for cancer, salt water found on moon of Jupiter. Day one with pope: Google Reader dies."


De facto list of Google Reader alternatives:

Google, you f***ing suck sometimes; and just so you know, this is one of those times.

Wednesday, 13 March 2013

Bookmarks for 20130313

Retinal implant allowing blind people to navigate doors, read words. Amazing stuff. (via @realityisbuggy)

The HTML5 scene seems to be replete with people making the same old mistakes about website UI.

Tuesday, 12 March 2013


Metro story today:

"How David Bowie infiltrated our minds with his viral ad campaign"

Did he now?

There seem to be a lot of people like myself who, for whatever reason, have simply opted out of being exposed to mainstream media. This could in some cases be described as a coping mechanism - the individual knows that being immersed in mainstream media is akin to drowning oneself in the constant push of stories, celebrities and happenings that vie for attention from resources which that person may think better directed elsewhere. So the act of avoidance becomes a kind of self-defence against distraction.

I don't have a TV. In truth, I haven't had a TV for many years. Oh, I have certainly watched TV; I think I spent most of my teenage years doing that (when I wasn't playing bass, smoking weed or failing at maths) (those items are vaguely related, by the way). Teenage Fractos can be found on the sofa, lying on one side, the remote held in an outstretched, balanced grip; watching every detail of a programme before discarding it and flicking channel - a remarkably cyclic act in pre-Cable-TV days. At some point, after years of this behaviour, I was distracted by other things and that was the end of that.

The principal reason I avoid TV now is simply the wall-to-wall advertising. That said, I don't think TV has been the same since Horizon went shit, although that was somewhere in the mid-1990s. But there is something else, too, and I think it has to do with not connecting with the attitudes and personalities that present news and media. Programmes have personalities; channels have personalities, and it is my belief that I have found them to be more compatible with my own personality in the past, but no longer.

Now, we can choose our news; we can choose our streams of information. For example, I may not have a TV, but I will catch up on Twitter every chance I get. Within that medium are things of my own choosing: channels and conversations I follow because I am interested in them, in the people behind those words, or in particular flavours of news reporting.

Often I will learn of news items from Twitter before they ever hit the TV or news websites, and indeed some will never appear elsewhere at all. That matters because there is a sense of beating the media at its own game. Media dictates its own presentation pace and content: all will be revealed... right after this commercial, provided the board and the editorial team let it through! I don't like that and I definitely do not want that. News should not be squandered, and we are not all huddled around the radio any more.

When important things happen, news travels faster through some media than others. Even if I have turned away from the mainstream, I still feel superluminal.

Bookmarks for 20130312

If you haven't read/heard this, then you really should. Bradley Manning's court statement leaked. Sounds to me like they didn't actually break him. \o/

Another Guardian link: Cory Doctorow's piece on why Tim Berners-Lee is wrong to come close to endorsing DRM in HTML5, and why the W3C should send packing all those who want to bring DRM to the browser.

Monday, 11 March 2013

Saturday, 9 March 2013


Now I have played Dear Esther, which is less of a game and more of one of those dreams where you're a ghost.

But this looks... mental.

Bookmarks for 20130309

This was pretty cool. Coder breaks down the technique behind Amazon's "instant" sub-menu presentation.

Friday, 8 March 2013

Bookmarks for 20130308

.Net object-oriented question that should be on interview tests

Amazing video showing retired lab chimps stepping outside for the first time into a more natural habitat. I fully cried.

"@slugnads: ...and see a sky without bars. RT @wiredscience: Video: Retired lab chimps step outside for the first time. ... " - Nadia Drake, reporter for @wiredscience

Wednesday, 6 March 2013

Bookmarks for 20130306

I've been playing this to death. To Death.

How To Destroy Angels - "Welcome oblivion" - live on Soundcloud now.

Turns out it's (probably) not a Hadley cell. Lost that bet.

Some extra bits for that SqlBulkCopy blog post: default behaviours which mean I have to correct my article a little bit.

Migration to Blogger

The process of moving my Dirty Fire Project blog to Blogger has been, frankly, hair-pullingly annoying.

However, I would like to save for posterity how I managed to get it working.

Importing the articles proved impossible as I couldn't just give it an RSS feed and let it do the rest, which was a shame; the utility of that would be immense. So I ended up copy-pasting the work in and then stripping the formatting (after realising, the first time I did it, that it had kept the background colour of the text). It did give me the chance to go through every article and add edits, fix links and so on (desperately trying to find a silver lining here).

The main problem with migration was the use of a custom domain, or rather, persuading Blogger to accept that I had authority for my existing domain.

What it boils down to is that, to make it work, I have the following zone entries:
(my verify key) CNAME (my verify token)
where (my verify key) is the CNAME name and (my verify token) is the CNAME target that Blogger asks you to add in the domain authentication settings.

The most critical part was the inclusion of the full stops after the two CNAME values.

Basically, the first page of instructions - the one on the Settings->Basic->Publishing screen, and critically the one you see when the information you entered hasn't worked (giving the infamous Error 12 / Error 13 messages) - has the wrong information. It advises putting the address in without the full stop.

If I were clearer on how DNS records are put together, I probably would have spotted this sooner. As it is, it took until the next morning, when I stumbled across a different page - the one linked as "settings instructions", which tailors its information to your domain's registrar - before I noticed that it included a trailing full stop in the domain verification value.
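To make the trailing-dot point concrete, here is roughly what the working records look like in a BIND-style zone file. The verification name and token below are made up for illustration; is the standard Google hosting target that Blogger points www at. Note the full stop at the end of both targets - without it, the resolver treats the value as relative to your own zone:

; hypothetical zone fragment for
; both CNAME targets end in a full stop, marking them as fully qualified
www           IN  CNAME
k7x2example   IN  CNAME

Without those trailing dots, the first record would effectively point at "" - which is exactly the Error 12/13 failure mode.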

Once that was set up, it all went quite well. The "missing files host" facility works really well, so I can point that back at my old server and it picks up any files that are not present on the new www host.

The site looks a lot better. And I don't have to run a really contrived set of operations to add an article any more. Though, in all fairness, I did *write* that contrived set of operations. My bad :)

I've isolated the confusion. The set of instructions that appears when you first type in your custom domain, where the operation fails with an Error 12/13 - THOSE INSTRUCTIONS ARE WRONG. Then, the linked page called "settings instructions" claims that you need a full stop after the verify token's target (which is correct), but doesn't include one after the Google hosting address - THOSE INSTRUCTIONS ARE THEREFORE ALSO WRONG.
The only page with the correct information on it is the "webmaster's verification tool" page, which you have to click around to find. The CNAME verification page in that area has the full stops added to the end of both the www target and the verify token addresses.

Tuesday, 5 March 2013

Don't fear the Bulk Copy

While searching for a way to optimise parts of a particular system today, I had managed to get a write to SQL Server (2005, non-local) down to about 0.5 seconds for 121 rows. Not great, but I was prepared to believe it was working hard: it is a fat table with a dodgy-looking schema, and the less said about the network in between, the better.

My boss pointed me at a page which has details on the SqlBulkCopy object in the System.Data library.

To be honest, I was a bit cagey about this. My belief has always been that there is a lower limit of rows before the pros of using a bulk copy operation outweigh the cons. Perhaps that is true, but certainly for a mere 121 rows, it is not an issue at all.

Previously, I had implemented this insert operation using a variety of methods:

  • Injection via an XML parameter, processed by use of @xml.nodes style queries. 
  • Dynamically generated SQL, creating a large INSERT statement with multiple rows using SELECT and UNION ALL. 
  • Individual parameterised INSERT statements. 

Of these, the XML method had the worst performance; SQL Server may have tools to navigate and process XML, but they are definitely not quick, and this was to be expected. The XML process is *marginally* quicker than individual insert statements, but only once the number of rows has increased past, say, 50.
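For reference, the XML-parameter approach looks something like this. This is a hypothetical sketch - the element and attribute names are made up, not the production schema:

```sql
-- @xml is an xml-typed parameter, e.g.
-- <rows><r uid="1001" cd="2013-03-04T00:14:30" v="25"/></rows>
INSERT INTO [dbo].[tblData] ([UserId], [CreateDate], [Value])
SELECT  x.r.value('@uid', 'int'),
        x.r.value('@cd',  'datetime'),
        x.r.value('@v',   'int')
FROM    @xml.nodes('/rows/r') AS x(r);
```

The .value() call on every shredded row is where most of the time goes, which is why this method fares worst.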

The dynamically generated query looked a bit like this:

INSERT INTO [dbo].[tblData] ( [UserId], [CreateDate], [Value] )
SELECT 1001, '2013-03-04 00:14:30.00', 25
UNION ALL SELECT 1023, '2013-03-04 00:14:30.15', 67
UNION ALL SELECT 1038, '2013-03-04 00:14:30.32', 21
Examining the execution plan showed that it spent most of its time - about 98%, to be exact - on a clustered index insert. Although performance improved between runs (eventually down to 21 milliseconds for the write of 121 rows), there was still room for improvement. The time for 1000 users in a Parallel.ForEach loop of the entire operation (which included this write) was 00:02:30.

It was hoped that using parameterised INSERT statements would allow SQL Server to cache an execution plan and reuse it. In tests, it does perform faster: the time for 1000 users, as above, was 00:01:45.
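For comparison, the parameterised version ran along these lines. This is a sketch, not the production code - the connection, MyData type and column names are assumed from the examples above:

```csharp
using (SqlCommand cmd = new SqlCommand(
    "INSERT INTO [dbo].[tblData] ([UserId], [CreateDate], [Value]) " +
    "VALUES (@UserId, @CreateDate, @Value)", connection))
{
    // declare the parameters once so the plan can be cached and reused
    cmd.Parameters.Add("@UserId", SqlDbType.Int);
    cmd.Parameters.Add("@CreateDate", SqlDbType.DateTime);
    cmd.Parameters.Add("@Value", SqlDbType.Int);

    foreach (MyData row in myData)
    {
        cmd.Parameters["@UserId"].Value = row.UserId;
        cmd.Parameters["@CreateDate"].Value = row.CreateDate;
        cmd.Parameters["@Value"].Value = row.Value;
        cmd.ExecuteNonQuery();   // one round trip per row
    }
}
```

Still one network round trip per row, though - which is the overhead SqlBulkCopy removes.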

So... on to an implementation using SqlBulkCopy.

If you are throwing arbitrary data at it then this entails a little bit of set up. The WriteToServer method accepts a DataTable object and this must be tailored exactly to the schema for the bulk copy to work.

(Please note that this is an extremely cut-down / Noddy version of the table, for brevity.)
public override void Put(int userId, List<MyData> myData)
{
    DataTable dt = new DataTable();
    dt.Columns.Add(new DataColumn("DataId", typeof(System.Int32)));
    dt.Columns.Add(new DataColumn("UserId", typeof(System.Int32)));
    dt.Columns.Add(new DataColumn("CreateDate", typeof(System.DateTime)));
    dt.Columns.Add(new DataColumn("Value", typeof(System.Int32)));
After this, you add the data, row by row, into the table. If you have a nullable field, test for HasValue and assign either the Value or DBNull.Value.
    foreach (MyData row in myData)
    {
        DataRow dr = dt.NewRow();
        dr["DataId"] = DBNull.Value;
        dr["UserId"] = row.UserId;
        dr["CreateDate"] = row.CreateDate;
        dr["Value"] = row.Value;
        dt.Rows.Add(dr);
    }
Now we set up the SqlBulkCopy object. I've added two option flags - KeepNulls and KeepIdentity. KeepNulls so it will honour the DBNull.Value encountered on some fields, and KeepIdentity with the intention of leaving the destination table in control of assigning row identity. I have included the row ID in the DataTable's columns and set it to DBNull.Value in the rows themselves, but I make sure it is excluded from the column mappings by only adding the columns I require.

To be clear: the KeepIdentity flag does NOT do that by itself. This code only works because I do not include the identity column in the ColumnMappings collection.

    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(
        connectionString,   // connection string, defined elsewhere
        SqlBulkCopyOptions.KeepNulls | SqlBulkCopyOptions.KeepIdentity))
    {
        bulkCopy.DestinationTableName = "dbo.tblData";
        bulkCopy.ColumnMappings.Add("UserId", "UserId");
        bulkCopy.ColumnMappings.Add("CreateDate", "CreateDate");
        bulkCopy.ColumnMappings.Add("Value", "Value");
Then I can perform the write.
        try
        {
            bulkCopy.WriteToServer(dt);
        }
        catch (Exception)
        {
            throw;   // log or handle as appropriate
        }
    }
}
The performance of the write, at 121 rows, was 0.025 seconds, instead of 0.5 seconds. Time for 1000 users, in the parallel test mentioned above, was 00:00:45.

This technique is *lightning* fast and totally worth doing for much smaller numbers of rows than I originally thought.

Don't fear the Bulk Copy.

It should be noted that, by default, the WriteToServer operation behaves as follows:
  • Table constraints will not be enforced
  • Insert triggers will not fire
  • The operation will use Row locks
This behaviour can be tailored using the SqlBulkCopyOptions enumeration, as detailed here:
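So if you do need constraints enforced, triggers fired, or a table lock held for the duration, you opt back in via the same options flags. A sketch, reusing the connectionString and DataTable from the example above:

```csharp
// opt back in to constraint checking and triggers, and take a table lock
SqlBulkCopyOptions options = SqlBulkCopyOptions.CheckConstraints
                           | SqlBulkCopyOptions.FireTriggers
                           | SqlBulkCopyOptions.TableLock;

using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connectionString, options))
{
    bulkCopy.DestinationTableName = "dbo.tblData";
    bulkCopy.WriteToServer(dt);
}
```

Expect the extra checks and trigger firing to claw back some of the speed advantage, so measure before turning them on.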

Bookmarks for 20130305

DesignModo's Flat-UI, a free UI kit based on Bootstrap, JQuery:

EDIT: This is now defunct. GitHub got hit with a DMCA takedown on the software, as the author's previous employer deemed it to be their property.

TIL that SqlBulkCopy is really awesome:

Saturday, 2 March 2013

Wine and DNS

Wake up. Blink. Rehydrate. Discover emailed receipt for two domains purchased sometime on Friday night.


I'd better finish this project then!