Twitter's New Rules: An Attempt To #Stopharassment

The announcement of changes is the latest in a series of attempts by the social-media powerhouse to fix its poor reputation for dealing with harassment.

In an attempt to limit harassment of its users, Twitter is changing the rules for what you are allowed to tweet.

Among the policy updates Twitter announced last week:
  • Abusive behavior, once folded into an "abuse and spam" section, now gets its own section, the largest in the rules. It states, "We do not tolerate behavior that crosses the line into abuse, including behavior that harasses, intimidates, or uses fear to silence another user's voice."
  • Users cannot tweet "hateful conduct," which means: "You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease."
  • Twitter will attempt to assist people who have threatened suicide or self-harm on the site, including "reaching out to that person expressing our concern and the concern of other users on Twitter or providing resources such as contact information for our mental health partners."
  • The definition of "violence" now includes "threatening or promoting terrorism."

If users do not follow the rules, their accounts may be temporarily locked or permanently suspended.

The announcement of these changes is the latest in a series of attempts by the social-media powerhouse to fix its poor reputation for dealing with harassment. Last year, Twitter chief executive Dick Costolo wrote in a memo that he was "ashamed" of how poorly Twitter had handled trolls.

"We're going to start kicking these people off right and left and making sure that when they issue their ridiculous attacks, nobody hears them. Everybody on the leadership team knows this is vital," he wrote.

The memo signaled a long-awaited move for those who deal with digital harassment - which turns out to be almost everyone. The Pew Research Center found that 73 percent of Internet users have witnessed online abuse, from name-calling and physical threats to stalking and sexual harassment. High-profile hotbeds of abuse - such as the attacks on people advocating for inclusion of women in gaming, better known as "Gamergate" - are just a slice of the world's largest harassment pie, which targets minorities, religious groups, journalists, people who express political viewpoints, celebrities, gay people, homophobic people and elderly people - like we said, almost everyone.

Perhaps that's why some feel that changing the rules to ban speech against specific groups is going too far: It is vague enough to frame any non-positive speech as "hateful conduct." The National Review's Katherine Timpf appeared on the "Fox and Friends" TV show to argue that Twitter executives are harder on conservatives than they are on liberals, and that this policy of trying to make the site a "nice happy place-land" will make the situation worse.

"This language is so vague that you could really get anyone in trouble that you want to," Timpf said.

The rule-change announcement did not name specific groups it was trying to shoo or protect. But an obvious target is the Islamic State, the terrorist group whose social-media savvy has immensely accelerated its growth. The Brookings Institution found that there were at least 46,000 Islamic State-supporting Twitter accounts in 2014. Accounts such as these, which have helped the Islamic State assert responsibility for terrorist acts, pose a significant problem for a platform that prides itself on promoting free speech.

In its rule changes, Twitter followed in the footsteps of Facebook by explicitly calling out all terrorism, rather than simply "violence" or a specific terrorist organization. The rules now state, "You may not make threats of violence or promote violence, including threatening or promoting terrorism."

© 2015 The Washington Post