4 billion messages are sent on Facebook every single day. Among these billions of communications are innocuous remarks, such as compliments and LOLs. There are also rape and death threats.
While the popular view of cyber-bullies is that they operate behind a cloak of anonymity, the sheer number of online harassers willing to post abusive messages to Facebook, right next to their name and face, paints a very different picture. And that so many trolls aren’t even bothered by the notion of being de-anonymised can probably be linked to the fact that so many of them get away with it.
Left to right: Katherine Cross (sociologist), Caroline Sinders (IBM), Randi Harper (Online Abuse Prevention Initiative) and Kami Huyse (Civilination).
Assembling panelists from the worlds of technology, politics, law, government and academia, the SXSW Online Harassment Summit asks what can be done, and by whom, to prevent or mitigate the seemingly ubiquitous practice of cyber-bullying and abuse.
It’s hard to talk about online abuse without talking about bigotry; as Massachusetts Congresswoman Katherine Clark points out, victims of online harassment are 27 per cent more likely to be women, especially LGBT women and women of colour. There is a huge danger, says Clark, that these women will be forced offline, move home, and change their profession in attempts to escape abuse — and these are the best case scenarios when contrasted with incidents of suicide by victims of bullying who feel they have no recourse. “The abuse may start online, but it creates a 24/7 dynamic where the victim has no relief at all,” says IBM’s Lisa Hammitt.
So whose responsibility is it to address this social issue? Do tech companies need to look at their user guidelines, or should they actually design around the problem? Do judges need to take instances of cyber-bullying more seriously? Is it incumbent upon parents to learn about all of the platforms through which their kids live their lives? The simple answer is yes to all of the above. “This is a multi-front war that we all need to fight,” says Hammitt. And this is a war in its infancy.
“The easiest way to get tech companies involved is to put it to them as a quality of content problem,” says Randi Harper, founder of the Online Abuse Prevention Initiative. Caroline Sinders, a Design Researcher at IBM, posits a number of simple design solutions. “What if, in moments of harassment, you can turn notifications off?” she asks, suggesting that if people are unable to retweet, reply to, or embed a target’s original tweet, it may deprive the fire of oxygen.
The leading tech firms might be adopting best practices regarding cyber safety, but sociologist Katherine Cross points out that while moderation is in place for abuse, it’s almost all outsourced. “There are whole buildings in the Philippines dedicated to moderating Facebook comments,” she says. And those moderation centres, grim as the idea may be, will only multiply; Cross believes that while algorithms can “pick up the slack,” human beings will always be the final arbiter for ethics.
It’s not just about jailing trolls
There is a huge awareness gap among the public, law enforcement and judges when it comes to this technology, and the laws which can be used to go after harassers. For many, the Internet is still very much “other,” and harassment in the online world is not taken as seriously as it would be in the physical world. And local law enforcement agencies, no matter how well-intentioned, simply don’t know how to respond. The consensus among victims of online harassment is that the police are there to punish, but not necessarily protect.
Katherine Clark has authored anti-online harassment legislation, but is constantly encountering ignorance of the topic among her peers in Congress. “When I use words like swatting and doxxing, people look at me like I’ve lost my mind,” she says. Clark sees a number of similarities between the issues of online harassment and domestic violence, especially regarding how domestic violence was initially perceived by the law, and the culture change that has since taken place around that subject. “We don’t just need laws that punish,” says Ari Waldman, Associate Professor of Law at New York Law School; “we need laws that educate.”
Even putting that knowledge gap aside for a moment, the issue of what counts as harassment is highly nebulous, and so much harmful content is, arguably, constitutionally protected speech. Take the practice of sharing “jailbait” photos on Reddit, or, more actively, taking a woman’s face and photoshopping it onto a pornographic image. Perhaps clearer designations for these activities would enable a more effective legal response?
Clark and Waldman both acknowledge that in some ways the law will always be racing to catch up with the ever-evolving, increasingly sophisticated tools and perpetrators of online abuse. But at the same time, they insist that educating lawmakers and law enforcers will make a measurable difference in both penalising abuse and protecting people online. As Cisco’s Michelle Dennedy puts it: “The assholes know they’re assholes. It’s the good guys that need to know.”