Comment

Christchurch attack of and for the internet

We don’t know for sure when Brenton Tarrant was radicalised, or how. We know from his manifesto that he’d like us to think he wasn’t always a racist, and that certain things he witnessed during his life led him down the dark path of white supremacy. 

But to say that this is the unimpeachable truth is to miss the point. This attack was conceived of and for the internet. Every piece of information has passed through its digital hall of mirrors and must be judged as such.

As David D. Kirkpatrick wrote in The New York Times, Tarrant “seemed to be inspired by the social media world and performing for it”.

But it appears both Tarrant’s radicalisation and the spreading of his crimes were aided and abetted not just by the internet, but by the algorithms that lie at the heart of what makes some of its largest companies so profitable.

Shitposting

Yesterday we added “shitposting” to the list of darkly ironic e-ologisms we use to describe these distressing times.

The term describes posting large amounts of aggressive and ironic content. It doesn’t appear to be widely known: Netsafe chief executive Martin Cocker told me that although his frontline staff were aware of the term, he himself had never heard of it.

Some shitposts are dark and distressing in an obvious way, others seem blithely ironic. Many, so far as I can tell, would make little sense to people not in the know.

Friday’s attack was begotten by pages of shitposts, some on mainstream social media, but mainly on a website called 8chan, which is subject to very little policing of content. Tarrant’s manifesto seems relatively conventional for a killer of his kind: in equal parts deranged and depraved.

But the document is more than that. It’s a product of internet culture (complete with poor phrasing and unusual punctuation), and includes a variety of memes designed to amuse some people and puzzle the rest of us.

Tarrant’s last words before committing his crime were “remember lads, subscribe to PewDiePie”, a YouTube celebrity currently trying to boost his following to outflank his rivals. It appears this, too, was some kind of joke on his part.

Meme culture clearly isn’t at the heart of Tarrant’s crimes — it’s more like an ‘easter egg’ — a silly joke often hidden in films and recognisable only to those in the know. But it’s also clear that people’s relative ignorance about these memes can be exploited. 

The symbol Tarrant made with his hands in court was initially interpreted as a far-right gesture; in fact, he was making a simple ‘A-OK’ sign. This, too, appears to be something of a joke. While some far-right activists have tried to appropriate the symbol, it’s also associated with “Operation O-KKK”, a dark attempt to trick liberals into thinking the symbol was somehow racist.

Meme culture raises other concerns too. Today, we learned that the attacker was not known to security services either in New Zealand or in Australia, in spite of leaving a lengthy trail of far-right writing online.

But Cocker noted that the volume of shitposting on the internet is so great it can be difficult to distinguish posters who represent a real threat from those who don’t. The only way, he said, was through greater inter-agency cooperation, helping to match people in the real world with their aliases online.

Algorithmic violence 

Questions will rightly be asked about how Tarrant was radicalised. His manifesto lists people whose views he admired, views he presumably accessed on the internet.

If so, he will join a long list of people radicalised online. Questions have rightly been asked about how the recommendation algorithms used by companies like YouTube keep people hooked by serving up ever harder content. In what we now call the attention economy, keeping people on an even keel simply isn’t great for your bottom line.
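To make that incentive concrete, here is a deliberately simplified sketch in Python. It is not YouTube’s recommender, whose workings are not public; the video titles and scores are invented, and `rank_by_engagement` is a hypothetical helper. It shows only the core dynamic: if a feed is ranked purely by predicted engagement, whatever holds attention longest, however extreme, rises to the top.

```python
# A toy illustration of engagement-driven ranking, not any real
# platform's recommender. Scores are invented for the example.

# (video title, predicted minutes watched) pairs.
candidates = [
    ("calm explainer", 3.1),
    ("heated debate", 6.4),
    ("conspiracy deep-dive", 9.8),
]

def rank_by_engagement(items):
    """Order items purely by predicted engagement, with no penalty
    for how extreme the content is."""
    return sorted(items, key=lambda item: item[1], reverse=True)

for title, minutes in rank_by_engagement(candidates):
    print(f"{title}: {minutes} predicted minutes")
```

Under this objective, the “conspiracy deep-dive” tops the feed every time, which is the critics’ point in miniature.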

Yesterday, of course, Tarrant himself became a vehicle for the radicalisation of others. He live-streamed his deadly attack. Facebook acted swiftly, removing the video, but it was already too late. It spread, eventually reaching YouTube, where it was shared further still.

A spokesperson for Facebook told Newsroom the video was “hashed”, allowing algorithms to detect similar videos for eventual deletion. Google told Newsroom its “smart-detection” technology also used algorithms to detect and take down harmful content. 

But Netsafe told Newsroom that minor edits will allow videos to slip past algorithmic detection. Those edited videos must themselves be reflagged and rehashed for blocking.
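The gap between those two claims is worth unpacking. An exact cryptographic hash identifies only identical copies: change a single byte and the fingerprint changes completely. Perceptual hashes are designed to survive small edits, though crude enough changes will still defeat them. The Python sketch below is a toy illustration of the difference, not the systems Facebook or Google actually run (their details are not public); the `dhash` and `hamming` helpers are hypothetical, illustrative implementations.

```python
# A minimal sketch of why exact hashes break under trivial edits,
# while perceptual hashes tolerate them. Purely illustrative.
import hashlib

def exact_hash(data: bytes) -> str:
    """Cryptographic hash: changing a single byte changes everything."""
    return hashlib.sha256(data).hexdigest()

def dhash(pixels, width=9, height=8):
    """Toy difference hash over a 9x8 grayscale frame (values 0-255).
    Each bit records whether a pixel is brighter than its right
    neighbour, so small brightness tweaks rarely flip many bits."""
    bits = []
    for y in range(height):
        for x in range(width - 1):
            bits.append(pixels[y * width + x] > pixels[y * width + x + 1])
    return sum(bit << i for i, bit in enumerate(bits))

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two perceptual hashes."""
    return bin(a ^ b).count("1")

frame = bytes(range(72))                         # stand-in for a 9x8 frame
edited = bytes(min(p + 1, 255) for p in frame)   # a barely visible edit

print(exact_hash(frame) == exact_hash(edited))   # False: exact match fails
print(hamming(dhash(frame), dhash(edited)))      # 0: still a "near" match
```

Aggressive re-encoding, cropping or mirroring can push the perceptual distance past any sensible threshold, which is why, as Netsafe says, edited copies must be flagged and hashed anew.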

Once the video has left the main social-media and search sites, it can be uploaded to the darker corners of the web, whose moderators, if there are any, make little effort to remove it.

The problem for big tech is that virality is central to its business model. Moderating content before it is uploaded would be one way to stamp out objectionable material.

But hiring enough moderators to vet everything before publication would have a dramatic effect on those companies’ bottom lines and on their ability to generate vast amounts of clickable, viral content.

Then again, many media companies, hardly as flush with cash as Facebook and Google, take this approach to their online comments. When they face a deluge of comments they cannot moderate, they simply turn them off.

No doubt big tech will continue to beef up algorithmic policing of its content. But the problem is bigger and darker than simple policing, and may in fact be baked into the very model of the companies themselves.
