Brands have criticized Twitter for ads linked to child pornography accounts.

Several major advertisers, including Dyson, Mazda, Forbes and PBS Kids, have suspended their marketing campaigns or removed their ads from parts of Twitter because their promotions appeared alongside tweets asking for child pornography, the companies said.

Brands ranging from Walt Disney Co, NBCUniversal and Coca-Cola Co to a children’s hospital are among more than 30 advertisers who have appeared on the profile pages of Twitter accounts peddling links to exploitative material, according to a Reuters analysis of accounts identified in new research into online child sexual abuse from cybersecurity group Ghost Data.

Some of these tweets include keywords related to “rape” and “teenagers,” and appeared alongside promoted tweets from advertisers, according to a Reuters analysis. In one example, a promoted tweet for footwear and accessories brand Cole Haan appeared next to a tweet in which a user said they were “trading teen/kids content.”

“We’re scared,” Cole Haan brand president David Maddocks told Reuters after being told the company’s ads were appearing alongside the tweets. “Twitter will fix this or we will fix this in any way possible, which includes not buying ads on Twitter.”

In another example, a user seeking such content tweeted “Yung girls ONLY, NO Boys,” which was immediately followed by a promoted tweet for Texas-based Scottish Rite Children’s Hospital. Scottish Rite did not respond to multiple requests for comment.

In a statement, Twitter spokeswoman Celeste Carswell said the company has “zero tolerance for child sexual exploitation” and is investing more resources in child safety, including hiring new staff to write policy and implement solutions.

She added that Twitter is working closely with its customers and advertising partners to investigate the situation and take steps to prevent it from happening again.

Twitter’s difficulties in identifying child abuse content were first reported in an investigation by tech news site The Verge at the end of August. Reuters is reporting for the first time the reaction of advertisers, who are central to Twitter’s revenue stream.

Like all social media platforms, Twitter prohibits depictions of child sexual exploitation, which is illegal in most countries. But it allows adult content in general and is home to a thriving exchange of pornographic images, which make up about 13% of all content on Twitter, according to an internal company document seen by Reuters.

Twitter declined to comment on the amount of adult content on the platform.

Ghost Data identified more than 500 accounts that openly shared or requested child sexual abuse material over a 20-day period this month. Twitter failed to remove more than 70% of those accounts during the study, according to the group, which shared its findings exclusively with Reuters.

Reuters could not independently confirm the accuracy of Ghost Data’s findings in full, but reviewed dozens of accounts that remained online soliciting material for “13+” and “nude youth.”

After Reuters shared a sample of 20 accounts on Thursday, Twitter removed about 300 additional accounts from the network, but more than 100 remained on the site the next day, according to Ghost Data and a Reuters analysis.

Reuters then shared the full list of more than 500 accounts provided by Ghost Data on Monday; those accounts were reviewed and permanently suspended by Twitter for violating its rules, Carswell said on Tuesday.

In an email to advertisers Wednesday morning before this story was published, Twitter said it “discovered that ads were running within profiles involved in the public sale or solicitation of child pornography.”

Andrea Stroppa, the founder of Ghost Data, said that the study was an attempt to assess Twitter’s ability to remove this material. He said he personally funded the research after receiving a tip on the topic.

Twitter’s transparency reports on its website show it suspended more than a million accounts last year for child sexual exploitation.

The company made about 87,000 reports to the National Center for Missing and Exploited Children, a government-funded nonprofit that facilitates information sharing with law enforcement, according to that organization’s annual report.

“Twitter needs to address this issue as soon as possible, and until it does, we will cease all paid activity on Twitter,” a Forbes spokesperson said.

“There is no place for this type of content online,” a spokeswoman for automaker Mazda USA said in a statement to Reuters, adding that in response, the company is now banning its ads from appearing on profile pages on Twitter.

A Disney spokesperson called the content “reprehensible” and said the company is “working hard to ensure that the digital platforms we advertise on, and the media buyers we use, are stepping up their efforts to prevent mistakes from happening again.”

A spokesperson for Coca-Cola, whose promoted tweet appeared on an account monitored by the researchers, said it does not allow such material to be associated with its brand, adding that “any violation of these standards is unacceptable and taken seriously.”

NBCUniversal said it has asked Twitter to remove ads associated with inappropriate content.


Twitter is not alone in grappling with moderation failures related to online child safety. Child protection advocates say the number of known child sexual abuse images has jumped from thousands to tens of millions in recent years, as predators have taken to social media, including Meta’s Facebook and Instagram, to groom their victims and exchange explicit images.

Among the accounts identified by Ghost Data, nearly all sellers of child sexual abuse material advertised it on Twitter and then directed buyers to messaging services such as Discord and Telegram to complete payment and receive the files, which were stored in cloud storage services such as New Zealand-based Mega and US-based Dropbox, according to the group’s report.

A Discord spokesperson said the company banned a server and a user for violating its rules against sharing links or content that sexualizes children.

Mega said a link referenced in the Ghost Data report was created in early August and shortly thereafter deleted by the user, whom it declined to identify. Mega said it permanently closed the user’s account two days later.

Dropbox and Telegram said they use different tools to moderate content, but did not provide further details on how they would respond to the report.

The advertiser backlash poses a risk to Twitter’s business, which derives more than 90% of its revenue from selling digital ad placements to brands seeking to market products to the service’s 237 million daily active users.

Twitter is also in a legal battle with Tesla CEO and billionaire Elon Musk, who is trying to walk away from a $44 billion deal to buy the social media company over complaints about the prevalence of spam accounts and their impact on the business.

A team of Twitter employees concluded in a report dated February 2021 that the company needed more investment to identify and remove child exploitation material at scale, and noted that the company had a backlog of cases to review for possible reporting to law enforcement.

“While the amount of (child sexual exploitation content) has increased dramatically, Twitter’s investment in technologies to detect and manage this growth has not increased,” according to the report, which was prepared by an internal team to provide an overview of the state of child exploitation material on Twitter and receive legal advice on suggested strategies.

“The recent reports about Twitter provide only a cursory, dated view of one aspect of our work in this space, and do not accurately reflect the current situation,” Carswell said.

Traffickers often use code words such as “cp” for child pornography and are “deliberately as vague as possible” to avoid detection, according to internal documents. The more Twitter cracks down on certain keywords, the more users are incentivized to use obfuscated text, which “tends to be more difficult for (Twitter) to automate,” according to the documents.

Ghost Data’s Stroppa said such tricks would complicate surveillance efforts, but noted that his small team of five researchers, without access to Twitter’s internal resources, found hundreds of accounts within 20 days.

Twitter did not respond to a request for further comment. (Reporting by Sheila Dang in New York and Katie Paul in Palo Alto; additional reporting by Dawn Chmielewski in Los Angeles; editing by Kenneth Li and Edward Tobin)