Twitter announced today that it will use "new behavioral signals" to pinpoint troll tweets and stop them from popping up at the top of your conversations and searches.

According to a company spokeswoman, the approach will rely on machine learning, and the goal is to stop the trolling before people need to report it. "The result is that people contributing to the healthy conversation will be more visible in conversations and search," Twitter said in a blog post.

Twitter is using "many" behavioral signals to identify trolls, like "if an account has not confirmed their email address, if the same person signs up for multiple accounts simultaneously, accounts that repeatedly Tweet and mention accounts that don't follow them, or behavior that might indicate a coordinated attack," the company said.

"We're also looking at how accounts are connected to those that violate our rules and how they interact with each other," Twitter added.

Critics were quick to pounce on the policy, raising concerns that it will unfairly censor users. But a Twitter spokeswoman said the company is looking at behaviors, not the content of tweets.

Twitter will not shut down the accounts, just make their tweets harder to spot. "Because this content doesn't violate our policies, it will remain on Twitter, and will be available if you click on 'Show more replies' or choose to see everything in your search setting," Twitter said.

Still, the new system won't be perfect; Twitter expects its detection methods to make mistakes, including "false positives." Nevertheless, the company remains upbeat about the troll-fighting tactic.

"We've already seen this new approach have a positive impact, resulting in a 4 percent percent drop in abuse reports from search and 8 percent fewer abuse reports from conversations," it said. "That means fewer people are seeing Tweets that disrupt their experience on Twitter."

PC Magazine