Following Elon Musk’s takeover of Twitter, the platform was hit by a coordinated trolling campaign. According to Yoel Roth, the company’s head of Trust and Safety, the attackers spread hateful content through roughly 300 accounts. Their goal was to make users think Twitter had changed its content moderation policies. Roth previously said Twitter had taken the necessary steps to counter the campaign, and he now says more than 1,500 accounts have been removed following the attack.
Roth notes that the 1,500 removed accounts don’t correspond to 1,500 individual people; many of them are “repeat bad actors.” He describes the trolling campaign as “focused, short-term,” and says impressions on this content were reduced to nearly zero to keep the attackers from spreading hateful material. According to Roth, impressions are the company’s primary success measure for content moderation, and Twitter is also investing in policy and technology to improve things.
Since Saturday, we’ve been focused on addressing the surge in hateful conduct on Twitter. We’ve made measurable progress, removing more than 1500 accounts and reducing impressions on this content to nearly zero. Here’s the latest on our work, and what’s next.
— Yoel Roth (@yoyoel) October 31, 2022
1,500 accounts removed from Twitter following the trolling campaign attack
Roth also explained Twitter’s policy toward hateful conduct and how such material is handled. First, users who report hateful conduct to Twitter might receive a message saying the material is not a violation.
Roth says Twitter treats first-person and bystander reports differently. A first-person report means the hateful interaction is happening to, or targeting, the reporter; a bystander report means the reporter is flagging hateful conduct aimed at another user. The Twitter executive added that bystanders don’t always have the full context, so the company sets a higher bar for bystander reports to find a violation. That’s why some tweets that do violate Twitter’s policies are labeled non-violative on first review. Ultimately, he said, Twitter is changing how it enforces these policies, not the policies themselves.
Twitter content moderators reportedly lost their access to internal tools
A report by Bloomberg claims that some Twitter employees on the Trust and Safety team have lost access to the internal tools used for content moderation and other policy enforcement. The move comes ahead of the US midterms, and it remains unclear how the platform plans to fight misinformation during the election. Musk has already announced the formation of a content moderation council with “widely diverse viewpoints” to review Twitter’s policies.
The news outlet also says the new owner wants to freeze Twitter’s software code to prevent employees from making changes to the website.
2022-11-02 15:07:35