Did loopholes in Twitter’s moderation allow Holocaust denial posts to stay online? – Libération

In the name of freedom of expression, will Twitter stop moderating? Concern has reigned since libertarian billionaire Elon Musk’s takeover of the social network on October 28, followed a week later by the announcement of layoffs affecting half of the American company’s staff. A textbook case soon reinforced these fears: it took Twitter almost a day to remove a tweet denying the reality of the Holocaust.

“The Holocaust never happened, it’s just fake news to sugarcoat what they did to our ancestors,” former American football player Junior Galette wrote on the platform on Friday morning. That night, long hours after its publication, some Internet users denounced the fact that this content was still online, linking it to the staff cuts at Twitter decided by Elon Musk. “Holocaust denial from verified accounts, hours after Elon Musk fired most of Twitter’s content moderation team,” one of them complained in a tweet (in English) that was liked more than 38,000 times. The complaint was echoed by others, such as the Quebec Twitch content creator TheLeoVinci.

This tweet from Junior Galette was followed by another, posted at the end of the day. The former football player reacted to photos taken in the concentration and extermination camps set up by Nazi Germany: “Die laughing. Where are these invented people? I guess you have moon landing videos too?” Before being deleted, this post remained online for a long time: the last time CheckNews consulted it, it had been up for seventeen hours.

Both tweets were eventually removed from the platform. And Junior Galette’s account has not posted, liked or commented on anything since Friday.

Twitter’s moderation “philosophy”

This silence may be the consequence of the application of Twitter’s standard policy on “hateful conduct”, which has not (for the moment) changed. The social network explains in its support center that it forbids “targeting individuals or groups of people with content that references forms of violence or violent events where a protected category was the primary target or victim, with the intent to harass. This includes, but is not limited to, media or text that refers to or depicts: genocides (e.g., the Holocaust); lynchings”.

When the policy is violated, Twitter may “require a person to remove the infringing content and use their account in read-only mode for a specified period of time before they can tweet again.” If the user reoffends, these violations “may result in a longer period of read-only mode, before possible permanent account suspension.” In addition, when an offending account is “mainly exhibiting inappropriate behavior”, or when one of its posts constitutes “a violent threat”, it may be permanently suspended after the first report reviewed by Twitter.

Twitter’s moderation “philosophy”, however, allows certain content to remain online even when it violates the policies in place. Tweets that concern “a matter of legitimate public interest”, especially when they come from the accounts of a country’s political representatives and institutions, are particularly affected by this exception.

Hateful posts shared by hundreds of people

As for certified accounts (whose blue badge indicates that their identity has been verified), such as Junior Galette’s, this certification can be withdrawn if they “commit serious or repeated violations of Twitter’s policies”. These include all violations falling under “hateful conduct”. “Badge removal following repeated violations is considered on a case-by-case basis and is not automatic”, Twitter states. Galette’s badge has, for the moment, been spared.

Even though his two tweets denying the reality of the Holocaust have now disappeared from the social network, should we be surprised that they survived so long, and were shared by hundreds of people along the way? For the Internet users cited above, the responsibility lies with a workforce reduced by Elon Musk’s decisions and therefore unable to ensure satisfactory moderation. Since the Tesla boss took control of the site, 50% of the staff has been fired, including in the marketing and design departments and among front-line managers.

Management, for its part, is trying to be reassuring. “Our core moderation capabilities remain,” defended the platform’s head of safety and integrity, Yoel Roth, in a tweet posted on Friday. He indicated that the cuts affected about 15% of his Trust & Safety department, and that the “frontline moderation staff” was the most preserved. Moreover, he asserted, the “daily volume of moderation actions” has remained stable since the change of ownership.

“Firm and unwavering commitment”

On the same day, Elon Musk hammered home that “Twitter’s steadfast commitment to content moderation remains completely unchanged.” He also welcomed the fact that his teams had “again found this week that hate speech is below previous levels.” The new boss, known for promoting an absolutist view of freedom of expression, nonetheless stated that he does not want to radically relax content moderation, and intends to form a moderation council composed of “representatives of very different opinions”. Musk also said that he had spoken with several leaders of minority rights organizations about “how Twitter will continue to address hate speech and harassment, and enforce its election integrity policy”.

This moderation system, beyond “human” interventions, relies essentially on software that detects certain words, expressions or images. Such automatic moderation has been in place since 2017. Layoffs or not, Junior Galette’s tweets could, or should, have been deleted. But Twitter’s moderation, faced with millions of pieces of content published every day, suffers from many flaws. And this does not date from the Musk era, as several earlier investigations have shown.

Holocaust denial content not moderated as a priority

In May 2020, four French associations (SOS Racisme, the Union of Jewish Students of France, SOS Homophobie and J’accuse) took Twitter to court over the platform’s “chronic failures” in processing reports of illegal content. The plaintiff organizations relied on a test conducted from March 17 to May 5 of that year, which showed that of more than 1,100 reported racist, anti-Semitic and homophobic tweets, only 12% were removed from the platform within five days.

As for Holocaust denial content specifically, former CEO Jack Dorsey indicated, in a hearing before the United States Senate in October 2020, that it was not treated as a moderation priority because Twitter did not classify it as misinformation (unlike posts denying the severity of the Covid-19 pandemic, for example). This despite the company insisting, at the same time, that it wanted to punish posts that “deny or minimize” violent events, including the Holocaust.

Yet the need exists: a study commissioned by UNESCO and the UN from researchers at the Oxford Internet Institute, whose results were released in July, found that 19% of Holocaust-related content on Twitter contains distortions and inaccuracies. And of the 137 anti-Semitic posts reported from May to June 2021 by the British NGO Center for Countering Digital Hate, because they amounted to Holocaust denial or conspiracy theories, Twitter removed only 11%. The rate even dropped to 5% in a survey by the Anti-Defamation League, the world’s leading anti-hate organization, which reported 225 tweets from February 18 to April 21, 2022.
