01 December 2017
Twitter has published a timeline of changes it plans to make to its safety features, including a ban on “hateful display names,” the implementation of “witness reporting,” and changes to the content allowed on its platform. The company’s timeline shows the new safety rules will begin rolling out next week and continue into next year.
The microblogging site has also apologized in a blog post for its previous failure to prevent abuse on the platform: “Far too often in the past, we’ve said we’d do better and promised transparency but have fallen short in our efforts.”
“This won’t be a quick or easy fix, but we’re committed to getting it right,” Twitter said.
“Making a policy change requires in-depth research around trends in online behaviour, developing language that sets expectations around what’s allowed, and reviewer guidelines that can be enforced across millions of Tweets,” Twitter said. “Once drafted, we gather feedback from our teams and Trust & Safety Council.”
Twitter’s head of policy wrote in an internal email, “We realize that a more aggressive policy and enforcement approach will result in the removal of more content from our service. We are comfortable making this decision, assuming that we will only be removing abusive content that violates our Rules. To help ensure this is the case, our product and operational teams will be investing heavily in improving our appeals process and turnaround times for their reviews.”
Twitter’s CEO Jack Dorsey revealed last week that the company has been “working intensely” over the past few months on reducing online harassment. He added that Twitter will soon see “new rules around unwanted sexual advances, non-consensual nudity, hate symbols, violent groups, and tweets that glorify violence.”