Twitter updates algorithm to battle online trolls

Twitter have announced changes to their algorithm in an attempt to improve the health of conversations on the platform and fight back against online trolls.

The changes will use behavioral signals to detect “troll-like behaviors” and decide how tweets from certain accounts are presented. Most of the behavioral signals won’t be visible externally, but some examples include the following (a hypothetical sketch of how such signals might be combined appears after the list):

  • Accounts not confirming their email address
  • Accounts that repeatedly tweet and mention accounts that don’t follow them
  • Accounts that are connected to or interact with accounts that violate Twitter’s rules
  • Behavior that might indicate a coordinated attack
  • People signing up for multiple accounts simultaneously
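
To make this concrete, here’s a minimal sketch of how signals like these could roll up into a single score. Twitter hasn’t published its actual model, so every signal name, weight, and formula below is an invented assumption, purely for illustration:

```python
# Hypothetical sketch only: all signal names, weights, and the formula
# are invented to illustrate how behavioral signals could combine into
# one "troll-likelihood" score. This is not Twitter's actual model.
from dataclasses import dataclass


@dataclass
class AccountSignals:
    email_confirmed: bool                # unconfirmed email counts against the account
    unsolicited_mentions_per_day: float  # mentions of accounts that don't follow back
    interacts_with_violators: bool       # connected to accounts that violate the rules
    coordinated_activity: bool           # pattern resembling a coordinated attack
    simultaneous_signups: int            # accounts created in the same session


def troll_likelihood(s: AccountSignals) -> float:
    """Combine weighted signals into a score in [0, 1]; higher = more troll-like."""
    score = 0.0
    if not s.email_confirmed:
        score += 0.15
    score += 0.30 * min(s.unsolicited_mentions_per_day / 50.0, 1.0)
    if s.interacts_with_violators:
        score += 0.20
    if s.coordinated_activity:
        score += 0.25
    score += 0.10 * min(s.simultaneous_signups / 5.0, 1.0)
    return min(score, 1.0)
```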

Tweets from accounts that are deemed to exhibit “troll-like behaviors” but do not violate Twitter’s policies will remain on the platform, but they will only be visible when users click “Show more replies” or choose to see everything in their search settings.
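
Based on that description, a rough sketch of the visibility rule might look like the following; the threshold and the idea of pairing each reply with a score are assumptions for illustration, not Twitter’s published behavior:

```python
# Hypothetical sketch only: the cutoff and function are invented to
# illustrate the rule described above (demote, don't delete).
TROLL_SCORE_THRESHOLD = 0.6  # made-up cutoff for illustration


def partition_replies(replies, show_all=False):
    """Split (tweet, score) pairs into replies shown by default and replies
    tucked behind the "Show more replies" control."""
    visible, demoted = [], []
    for tweet, score in replies:
        if show_all or score < TROLL_SCORE_THRESHOLD:
            visible.append(tweet)
        else:
            demoted.append(tweet)  # still on the platform, just demoted
    return visible, demoted
```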

Twitter say they are making these changes as part of their new approach to improving the health of public conversation on the platform, which CEO Jack Dorsey announced in March.

They also report that early testing of these algorithm changes has led to a 4% drop in abuse reports from search and an 8% drop in abuse reports from conversations.

The algorithm changes will roll out to all Twitter users this week, and more are expected to follow. Twitter’s blog post states, “This is only one part of our work to improve the health of the conversation and to make everyone’s Twitter experience better.”

While Twitter’s intentions are noble, the algorithm changes may have unintended consequences for users of the platform.

For example, active users who regularly tweet out articles and mention the authors to give them credit could be flagged as exhibiting “troll-like behaviors” under these rules, since they repeatedly mention accounts that don’t follow them.

Using algorithms to police content often proves problematic. The recent backlash against YouTube’s use of machine learning to demonetize videos on its platform shows how an automated system designed to target bad actors can unintentionally impact the wider community.

For now, we’ll have to wait and see how these changes play out, but some fallout seems likely.
