The latest batch of Twitter Files shows the CIA and FBI were involved in content moderation

The CIA has been meddling in Twitter’s internal content moderation for years, according to the latest dispatches from Elon Musk’s ‘Twitter Files’ – which also revealed “mountains of moderation requests” from the Democratic National Committee, but none from the GOP.

Two separate threads in the Elon Musk-sponsored deep dive into internal social media documents were released Saturday by freelance journalist Matt Taibbi, who documented how the platform often bowed to government and political pressure.

On June 29, 2020, Taibbi revealed, the FBI’s Elvis Chan — who has featured prominently in previous versions of the Twitter Files — asked company executives to “invite an OGA” to an upcoming conference.

“OGA, or ‘Other Government Organization,’ may be a euphemism for the CIA, according to some former intelligence officials and contractors,” Taibbi said.

A week later, Stacia Cardille, a senior Twitter attorney, made the link explicit.

“I invited the FBI and the CIA will almost certainly attend,” Cardille wrote to her colleague – and former FBI general counsel – James Baker on July 8, 2020. “You don’t have to attend.”

Baker, one of dozens of former FBI agents and executives in Twitter’s ranks at the time, was fired this month for interfering with Musk’s efforts to expose the company’s past misconduct.

Since then, Taibbi writes, there have been “regular meeting[s] of the multi-agency Foreign Influence Task Force (FITF)” – attended by Twitter and “virtually every major technology company,” including Facebook, Microsoft, Verizon, Reddit, even Pinterest and more – along with “FBI personnel and – almost always – one or two participants marked ‘OGA.’”

“The meeting agenda almost always includes, at or near the beginning, an ‘OGA briefing,’ usually on foreign issues,” Taibbi wrote.

Through the FITF, US intelligence tasked Twitter analysts with investigating domestic accounts suspected of having nefarious connections abroad, the documents show – a practice that intensified as the 2020 presidential election neared and continued into 2022.

Twitter’s content monitors checked user IP data and phone numbers, and even assessed whether usernames “sounded Russian,” in attempts to confirm the government’s accusations – but often failed to do so.

Taibbi also showed how a series of intelligence reports in 2022 worked to shape news narratives related to Ukraine and the Russian invasion.

One such report, which listed accounts allegedly linked to neo-Nazi “propaganda” out of Ukraine, prompted Twitter to block accounts highlighting Hunter Biden’s role on the board of Burisma, the Ukrainian energy company long under a cloud of official suspicion.

Other reports, including one from August 2022, included “a long list of newspapers, tweets or YouTube videos” that US intelligence deemed guilty of spreading “anti-Ukrainian narratives.”

“The information about the dubious origin of these accounts may be true,” Taibbi wrote. “But so is some of the information they contain – about neo-Nazis, rights abuses in Donbass, even about our own government. Should we block this material?”

Meanwhile, a separate Taibbi thread documented that “Twitter has a clear political monoculture” — one that favors Democrats.

Democratic Party operatives, and one staffer in particular, bombarded Twitter moderators with complaints about Republican memes and parodies in the run-up to the 2020 election.

In one instance, Twitter refused to remove an obviously comedic parody of a “Todos Con Biden” event – in which then-candidate Joe Biden appeared to play a pro-Trump song for a crowd of Hispanic voters.

Moderators also refused to label as “misleading” a video mashup of Biden coughing repeatedly at a campaign event.

“Because the video is an unedited excerpt from the vice president’s speech, our teams consider it out of context, but not misleading,” Twitter told DNC staffer Timothy Durigan.

“These policies need to be changed,” fumed Durigan – the senior analyst for the DNC’s counter-disinformation program, according to his LinkedIn profile.

In a cautiously polite response, Twitter sent Durigan what Taibbi called a “strange moderation flowchart” that “shows they can still apply labels to non-misleading material.”

“If this type of mechanized speech control can be used in one way today, it can be used in another tomorrow, especially if invisible enforcers push the levers,” Taibbi said.
