NEW YORK: Elon Musk’s Twitter is leaning heavily on automation to moderate content, doing away with certain manual reviews and favoring restrictions on distribution rather than removing speech outright, its new head of trust and safety told Reuters.

Twitter is also more aggressively restricting abuse-prone hashtags and search results in areas including child exploitation, regardless of potential impacts on “benign uses” of those terms, said Twitter Vice President of Trust and Safety Product Ella Irwin.

“The biggest thing that’s changed is the team is fully empowered to move fast and be as aggressive as possible,” Irwin said on Thursday, in the first interview a Twitter executive has given since Musk’s acquisition of the social media company in late October.

Her comments come as researchers are reporting a surge in hate speech on the social media service, after Musk announced an amnesty for accounts suspended under the company’s previous leadership that had not broken the law or engaged in “egregious spam.” The company has faced pointed questions about its ability and willingness to moderate harmful and illegal content since Musk slashed half of Twitter’s staff and issued an ultimatum to work long hours that resulted in the loss of hundreds more employees.

And advertisers, Twitter’s main revenue source, have fled the platform over concerns about brand safety.

On Friday, Musk vowed “significant reinforcement of content moderation and protection of freedom of speech” in a meeting with French President Emmanuel Macron.

Irwin said Musk encouraged the team to worry less about how their actions would affect user growth or revenue, saying safety was the company’s top priority. “He emphasizes that every single day, multiple times a day,” she said.

The approach to safety Irwin described at least in part reflects an acceleration of changes that were already being planned since last year around Twitter’s handling of hateful conduct and other policy violations, according to former employees familiar with that work.

One approach, captured in the industry mantra “freedom of speech, not freedom of reach,” entails leaving up certain tweets that violate the company’s policies but barring them from appearing in places like the home timeline and search.

Twitter has long deployed such “visibility filtering” tools around misinformation and had already incorporated them into its official hateful conduct policy before the Musk acquisition. The approach allows for more freewheeling speech while cutting down on the potential harms associated with viral abusive content.

The number of tweets containing hateful content on Twitter rose sharply in the week before Musk tweeted on Nov. 23 that impressions, or views, of hateful speech were declining, according to the Center for Countering Digital Hate – one example of researchers pointing to the prevalence of such content even as Musk touts a reduction in its visibility.

Tweets containing words that were anti-Black that week were triple the number seen in the month before Musk took over, while tweets containing a gay slur were up 31%, the researchers said.—Reuters
