Twitter’s ‘huge fail’ on online abuse

Online abuse is more than personal bullying and it won’t be adequately controlled through policies, laws or regulatory responses that treat it as if it is, writes Marianne Elliott.

Last week Twitter CEO Jack Dorsey admitted social media companies, including his own, have not done enough to prevent online abuse, describing it as a ‘huge fail’ and saying Twitter had put most of the burden of response on the victims of abuse.

"We’ve made progress, but it has been scattered and not felt enough. Changing the experience hasn’t been meaningful enough. And we’ve put most of the burden on the victims of abuse (that’s a huge fail)," Dorsey said in a tweet.

Lani Wendt Young knows that burden very well. An award-winning writer, publisher and journalist of Samoan and Māori heritage, she is also someone who is consistently and viciously attacked online. I’ve read some of the attacks Lani has been subjected to, and they are horrifyingly violent. They include death threats, rape threats and threats to her family - enough to make her fear for her life, and to report them to the police.

There are arguments against writing and talking about online abuse. Some research suggests one of the key motivations for online harassment is the social pleasure derived from knowing that others are harmed by it. As one researcher put it, “The more negative social impact the troll has, the more their behaviour is reinforced.”  

On the other hand, until more of the people responsible for responding to the threat of online abuse really grasp the extent and severity of the harm done by that abuse, we will keep waiting for sufficiently serious and robust policy responses from both the platforms and governments. Which makes a case for being explicit about the violent nature of this abuse and the severity of its impact.

Systematic abuse requires systematic responses

Online abuse is already being used as a political tactic to distort debate and silence certain groups, and it demands systematic responses grounded in an understanding of both the collective impact of abuse and the political motivations behind some of it.

A recent report by Amnesty International showed how online abuse was being used against women in politics around the world. A joint statement by the United Nations Special Rapporteurs on Violence against Women and Freedom of Expression highlighted how violence and abuse against women online can “chill and disrupt the online participation of women journalists, activists, human rights defenders, artists and other public figures and private persons.”

None of this will come as a surprise to women journalists, politicians, activists or artists in New Zealand, like Lani Wendt Young.

The response of tech giants

For an industry committed to analytics, agility and iteration, the tech giants have been remarkably slow to notice, and uninterested in mitigating, the harm they are doing. Even the best systems for reviewing and revising your impact will only work if you involve the right people and ask the right questions. And of course, you have to actually care.

As Jack admitted last week, at least one of the reasons Twitter’s response to abuse on its platform has been so inadequate is that key decision-makers at the company had no personal experience of how bad that abuse can be.

Journalist Kara Swisher asked Dorsey whether one of the blocks to action had been that, “you all could not conceive of what it is to feel unsafe”. The Twitter CEO agreed, “No question. Our org has to be reflective of the people we’re trying to serve.”

The idea that the people making policy, or building technology, need to be as diverse as the people whom that policy or technology is supposed to serve is not a new one. Some tech journalists (predominantly women and minorities) have been reporting on the inadequate response to abuse on digital platforms, and the role that a lack of diversity in senior management was playing in that failure, for many years.

In 2016 journalist Queena Kim reported on the impact that abusive behaviour on Twitter was having on user growth and advertising sales, and explained that “women and minorities who work at Twitter had brought the issue to the attention of senior management as far back as 2008, and how that leadership remains largely white and male”.

Which raises the question: how is it agile, progressive or innovative to accept the abuse and alienation of a huge proportion of your customer base as part of your business model? Is it surprising that teams of predominantly white, middle-class men trying to create something ‘disruptive’ nevertheless managed to build a platform that just replicates the social structures of the 1950s? No, but that doesn’t make it acceptable.

So there is clearly a case for telling the stories of online abuse, for researching and reporting on the scale and severity of that abuse. And for identifying the systematic gaps and failures that mean neither the platforms nor the relevant local authorities have adequately prevented or responded to this abuse when it happens.

A new report launched by ActionStation on Monday does just that. Along with new quantitative research on the incidence of online abuse in New Zealand, it sets out several illustrative and alarming case studies. One of those case studies documents the lengths Lani Wendt Young had to go to before the police and the relevant social media platforms took any action to stop the attacks on her and her family.

Eventually, after repeated requests by Netsafe, Facebook did act.

“[Netsafe] filed with the website host provider to have abusive content removed. They were unsuccessful. They filed with Facebook to have abusive content removed. Facebook refused. Netsafe appealed. Facebook agreed and shut down the lead abusers and their anonymous pages. But it was a temporary respite only because the pages appealed and Facebook put them back up within a few weeks, only this time, they were more cocky and assured of their untouchability.”

Widespread abuse

ActionStation’s report shows that online abuse is common and widespread in New Zealand, and worse for people of colour, young people, LGBT folk and women. Sadly, this won’t be news to most of us. Research by Netsafe last year found that one in ten New Zealanders experienced hate speech online and three in ten encountered hateful content. And nearly one in five New Zealand teenagers received an unwanted digital communication that had a negative impact on their daily life in 2018. Unsurprisingly, it’s worse for teenage girls, teenagers who are Māori or teenagers with a disability.

Over the past six months, as part of research funded by the Law Foundation’s Information Law and Policy Project, I’ve interviewed 36 New Zealand experts on the impacts of digital media on our democracy. Alongside the positive impacts, I’ve heard concerns about the impact of digital media on the advertising income of traditional media, the spread of mis- and dis-information on matters of public interest and the lack of transparency in political advertising online.

I asked each expert which of the potentially harmful impacts of social media on democracy was most urgent in New Zealand. I’m still processing the data from these interviews, and the full report won’t be out for a few months, but one of the common answers to that question was online abuse. There was widespread agreement that online abuse was more than a problem of bad behaviour from certain individuals directed at other individuals. Interviewees described ‘swarms’ of abusive actors online, acting en masse to attack people who were expressing political opinions they wanted to shut down.

Several interviewees questioned whether our existing regulatory framework, which depends heavily on individuals bringing complaints of abuse to the attention of the authorities, was fit for the purpose of responding to large scale, even coordinated, online attacks. Especially as those attacks are more likely to be on people who - as ActionStation’s report shows - belong to a group already experiencing discrimination, marginalisation and harassment offline.  

As I work my way through hundreds of pages of interview transcripts, one thing is already clear: online abuse is more than personal bullying, and it won’t be adequately controlled through laws or regulatory responses that treat it as such.

Almost exactly a year ago, the Twitter CEO seemed to recognise this when he wrote that Twitter had witnessed “abuse, harassment, troll armies, manipulation through bots and human-coordination, misinformation campaigns, and increasingly divisive echo chambers” and admitted the company was not proud of its inability to address these issues fast enough.

"We’ve focused most of our efforts on removing content against our terms, instead of building a systemic framework to help encourage more healthy debate, conversations, and critical thinking. This is the approach we now need," Dorsey tweeted.

Last week Jack returned to the topic of what it would take to fix Twitter, and concluded that they were likely to “have to change more fundamentals to do so”, saying: “Most of our system today works reactively to someone reporting it. If they don’t report, we don’t see it. Doesn’t scale. Hence the need to focus on proactive.”

This is the key recommendation of ActionStation’s report as well - the need for more systemic and proactive responses not only from the platforms, but also from our government. The report calls on the New Zealand government to do more to ensure that platforms are both removing harmful content quickly, and reducing the reach of harmful content. Like several of the people I interviewed for my research, they also call on the government to review our current laws - including the Harmful Digital Communications Act - to ensure they are protecting people online.

Finally, and in my view most significantly, the report calls for a recalibration of our approach to online abuse ‘to attend not just to individualised concerns but also to collective dynamics’ and to ‘ensure that all internet safety and hate speech agencies funded by the Crown reflect the increasing diversity of our country.’

Online abuse has become a tactic used not only to attack individuals, but to manipulate our democracy by distorting online debate and silencing certain groups of people, most of whom are already marginalised. It’s doing real and severe harm to individual people and their communities, and is also undermining our capacity for informed public conversation between a diverse range of people online.

An adequate response to this problem will require a recalibration of our policy approach, it’ll almost certainly take some international diplomacy and co-operation, and it will need a sufficiently diverse group of decision-makers at the helm.

All of that is within the capacity of the New Zealand government, perhaps more than some others. So there is likely to be a leadership role for our country in global efforts to combat online abuse and, as Sir Tim Berners-Lee has put it, ‘fight for the web’.
