Twitter to put warnings before swastikas, other hate images
Twitter said on December 18 it would begin showing a warning before users can view pictures containing Nazi swastikas and other images it determines to be hateful imagery, and would prohibit their use in profile photos on its social media network.
The new policies also ban users who associate either online or offline with organizations that promote violence against civilians. The step is one of several that Twitter said it would take to crack down on white nationalists and other violent or hateful groups, which have become unwelcome on a service that once took an absolutist view of free speech.
Twitter said in a statement that it would shut down accounts affiliated with non-government organizations that promote violence against civilians and ban usernames that constitute a violent threat or racial slur. It said it would also remove tweets that it determines celebrate violence or glorify those who commit it.
Twitter suspended an unknown number of accounts on December 18, including one belonging to Jayda Fransen, the Britain First leader whose videos critical of Islam were retweeted multiple times by US President Donald Trump last month.
A Twitter spokeswoman declined to comment on Fransen’s ban or whether it was due to the new policies.
Founded in 2006, the San Francisco company had called itself “the free speech wing of the free speech party” and tried to stay out of battles among users. But that has changed as persistent harassers have driven some women and minorities off Twitter, limiting their ability to express themselves.
A rise in white nationalism in the United States has also changed tech industry standards. In August, social media networks began removing white nationalists after hundreds gathered in Charlottesville, Virginia, and one of them was charged with murdering a 32-year-old woman by running her down in a car.
In October, Twitter vowed to toughen rules on online sexual harassment, bullying and other forms of misconduct. Under the new policies, tweets can still include hateful imagery, but users will have to click through a warning to see it, the company said. Besides being banned from profile photos, hate images may be further restricted where national laws require, as in Germany.
The Nazi swastika was the only specific example of a hateful image that Twitter gave, but the company said it would try to give warnings for all symbols historically associated with hate groups or that depict people as less than human. Twitter said it had decided not to categorize the US Confederate flag as hateful imagery, citing its place in history.