The microblogging service on Tuesday announced it has extended its violent threats policy to explicitly prohibit "threats of violence against others or promot[ing] violence against others." Previously, the prohibition was limited to "direct, specific threats of violence against others."

"Our previous policy was unduly narrow and limited our ability to act on certain kinds of threatening behavior," Twitter's Director of Product Management, Shreyas Doshi, wrote in a blog post. "The updated language better describes the range of prohibited content and our intention to act when users step over the line into abuse."

Twitter is also beefing up its efforts on the enforcement side. From now on, the company's support team will be able to lock users out of their accounts for specific periods of time. Those who get locked out may be asked to complete additional tasks before they can resume using Twitter, such as verifying their phone number and deleting offending tweets.

"This option gives us leverage in a variety of contexts, particularly where multiple users begin harassing a particular person or group of people," Doshi wrote.

Finally, Twitter has also begun testing a new feature designed to help limit the reach of abusive tweets. The feature takes into account a range of signals that frequently correlate with abuse — such as the age of the account itself, and the similarity of a tweet to other content that has been flagged as abusive in the past.

"It will not affect your ability to see content that you've explicitly sought out, such as Tweets from accounts you follow, but instead is designed to help us limit the potential harm of abusive content," Doshi wrote. "This feature does not take into account whether the content posted or followed by a user is controversial or unpopular."

The changes come after Twitter CEO Dick Costolo earlier this year admitted that the company sucks at dealing with abusive trolls, but promised to do something about it. Since then, Twitter has also streamlined the process of reporting harassment and overhauled how it reviews reports of abuse.

PC Magazine