What Facebook, Twitter, YouTube and others are doing to tackle hate speech

This weekend’s events in Charlottesville, VA left one person dead and 30 injured, and poured more kerosene onto the fire of the national debate around hate groups and free speech.
For all the decades of talk about the internet as a great uniter across geographical and ideological divides, it’s just as often used as a tool to deepen them, as many users are content to stay within their ideological echo chambers. There’s no easy fix, of course, but when hateful rhetoric begins to manifest itself as real-world violence, a failure to take a stand moves beyond the theoretical.
In the wake of recent events, more companies are making their stance on the issue known, whether by choice or under threat of public accountability or boycott. Whatever the case, it’s clear that most public companies don’t want to be seen as propping up hate groups. On Monday, GoDaddy announced it was canceling the Daily Stormer’s domain registration, and Google quickly followed suit, stating that the neo-Nazi news site violated its terms of service. Responding to social pushback, Zoho and SendGrid also cut ties with the site.
The president’s initial ineffectual response to the events that led to the killing of counter-protester Heather Heyer has resulted in the withdrawal of executives from Trump’s advisory council. Earlier today, Intel CEO Brian Krzanich stepped down, writing, “I resigned because I wanted to make progress, while many in Washington seem more concerned with attacking anyone who disagrees with them.”
Against the backdrop of all this news, we’ve reached out to a number of top sites to gauge what proactive steps they’re taking to address the rising tide of online hate speech.
Facebook: A spokesperson provided the following statement. “Our hearts go out to the people affected by the tragic events in Charlottesville. Facebook does not allow hate speech or praise of terrorist acts or hate crimes, and we are actively removing any posts that glorify the horrendous act committed in Charlottesville.” The site has also reportedly removed a number of pages since this weekend, with names like “Right Wing Death Squad” and “White Nationalists United.”
Kickstarter: A spokesperson for the company pushed back on a recent report that its policies have changed following this weekend’s events. “To be clear,” the spokesperson told TechCrunch, “there’s been no change on our stance. Our policies have never allowed hate speech and projects that promote it are not permitted on Kickstarter.” The site’s guidelines specifically prohibit projects that promote “Offensive material (e.g., hate speech, encouraging violence against others, etc).”
Patreon: While not a direct response to this weekend’s events, a Patreon spokesperson pointed us to a video posted by CEO Jack Conte late last month that addresses the company’s decision to remove certain content from its platform. The cases he discusses brush up against some of these issues, but don’t really address them directly. The closest is probably Defend Europe, a right-wing, anti-immigrant group patrolling the waters off North Africa in an attempt to block refugees. In the video, Conte explains that political ideology played no role in the site’s decision to remove the campaign.
Reddit: A spokesperson for the site tells TechCrunch, “Reddit is the home of some of the most authentic conversations online. We strive to be a welcoming, open platform for all by trusting our users to maintain an environment that cultivates genuine conversation and adheres to our content policy. We are very clear in our site terms of service that posting content that violates our policies will get users and/or communities banned from Reddit. We work continuously to refine our policies to ensure the integrity of the site.”
The site’s guidelines acknowledge Reddit’s “leeway in what content is acceptable,” while calling out language that is threatening or “encourages or incites violence.” The ToS seems less concerned with hate speech than those of many other prominent platforms, however.
Snapchat: According to a spokesperson, “Hate speech, and any group that promotes it, will never be tolerated on Snapchat.” The company also directed us to its community guidelines, which separately call out “Terrorism, Threats & Violence” and “Hate Speech.” The latter prohibits content that “demeans, defames, or promotes discrimination on the basis of race, ethnicity, national origin, religion, sexual orientation, gender, disability, or veteran status.”
Tumblr: A spokesperson for the site directed us to a statement that mostly points users toward advocacy and empowerment groups like the NAACP and Black Girls Code. The statement doesn’t really address the site’s own response, though it does acknowledge the complexities. “The Tumblr team is continually having important and tough conversations around the complexities of free speech,” a spokesperson added, “and we are dedicated to empowering positive and productive conversations among the communities that thrive on the platform.”
That said, the company does address “Malicious Speech” in its community guidelines, encouraging users to “dismantle negative speech through argument rather than censorship.” It’s a nice thought, certainly, but how often are internet arguments won through carefully crafted reasoning? For direct threats of violence, Tumblr encourages users to report incidents. “Don’t encourage violence or hatred,” the guidelines read. “Don’t make violent threats or statements that incite violence, including threatening or promoting terrorism. Especially don’t do so on the basis of things like race, ethnic origin, religion, disability, gender, gender identity, age, veteran status, or sexual orientation.”
Twitter: We received the following statement from a spokesperson. “The Twitter Rules prohibit violent threats, harassment, hateful conduct, and multiple account abuse, and we will take action on accounts violating those policies.” The company also lists Hateful Conduct among its bannable offenses. That section of the terms and conditions prohibits conduct that will “promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.”
Of course, Twitter has been in a fair bit of hot water in recent months over a perceived lack of action against threats made on its platform, leading one activist to stencil hateful tweets on the pavement outside the company’s German headquarters in Hamburg. Trump’s own tweets, which teeter on the edge of threatening nuclear war, also seem to walk that line. The site has a policy of not commenting on action taken against specific accounts, though some, like the Daily Stormer’s, do appear to have already been taken offline.
YouTube: Earlier this month, the video site announced plans to institute stricter guidelines with regard to hate speech. “We’ll soon be applying tougher treatment to videos that aren’t illegal but have been flagged by users as potential violations of our policies on hate speech and violent extremism. If we find that these videos don’t violate our policies but contain controversial religious or supremacist content, they will be placed in a limited state. The videos will remain on YouTube behind an interstitial, won’t be recommended, won’t be monetized, and won’t have key features including comments, suggested videos, and likes.”