Drastic dismissals, waves of resignations, ultimatum-style exchanges… Elon Musk’s first weeks at the helm of Twitter were incredibly chaotic. This turbulent takeover will at least have the merit of shining a light on a major problem for social networks: the armies of fake profiles that grow there a little more every day, the famous “bots” Elon Musk keeps talking about, having vowed that they would perish under his rule. In recent years, social networks have been invaded by artificial accounts, churned out en masse by malicious computer programs. In 2022, Twitter announced that it was deleting an average of one million fake accounts… every day.
It must be said that abusing the system could not be simpler. A first and last name invented from scratch, combined with a working email address, are enough to create a profile on many platforms. For anyone with some computer knowledge, automating this process is child’s play. That opened the door to creating fake accounts on social networks en masse and, with them, to manipulating debates on certain topics.
“What would you do if you could count as one million people?” asks Tamer Hassan, co-founder and CEO of Human, a company specializing in detecting fake accounts. “You can give visibility to a story. You can target and troll public figures or elected officials.” The possibilities are huge.
Insults and hate… automated
“Some groups of fake accounts are programmed to automatically send insults to people who talk about certain topics,” says Emmanuelle Patry, founder of Social Media Lab, a training organization for communication professionals. Sending is triggered as soon as the bots detect certain keywords (for example: “Putin”, “Covid-19 vaccine”, etc.) in a post. “The goal is to frighten Internet users so that, in the future, they hesitate to venture into these topics,” analyzes Emmanuelle Patry. Groups of fake accounts are sometimes used to target specific people. According to a Mediapart investigation published in October, PSG allegedly paid for the services of a “digital army” of this kind to influence certain debates on the internet and to intimidate public figures.
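The trigger mechanism Emmanuelle Patry describes boils down to simple keyword matching. A minimal sketch in Python (the keyword list and function name are purely illustrative assumptions; real operators configure their own watchlists):

```python
# Illustrative trigger list; real bot operators configure their own.
TRIGGER_KEYWORDS = {"putin", "covid-19 vaccine"}

def contains_trigger(post_text, keywords=TRIGGER_KEYWORDS):
    """Return True if the post mentions any monitored keyword
    (case-insensitive substring match)."""
    text = post_text.lower()
    return any(kw in text for kw in keywords)

print(contains_trigger("New study on the Covid-19 vaccine rollout"))  # True
print(contains_trigger("A post about gardening"))                     # False
```

The same matching logic also works on the defensive side, for example to filter or quarantine suspicious automated replies before they reach a user.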
These battalions of robots also spread their administrators’ ideas. Fake news, political messages, polemical phrases… a few clicks are enough to organize mass publications, which are then amplified by fake likes and fake shares. “Bots can spread specific content at such high frequency and volume that the message will trend and gain a lot of visibility,” the Mandiant Intelligence team told us.
Finally, beware of still waters. Many fake accounts are inactive, but that doesn’t diminish their capacity for harm. For a long time, these dormant accounts were bought by people or companies who wanted to artificially inflate their popularity. Today, they are used in a more insidious way. “Some people order fake followers for their opponents’ accounts, because they know that social networks reduce the visibility of accounts with too many fake profiles in their circle,” explains Emmanuelle Patry, founder of the Social Media Lab. A real virtual stab in the back: if the victim doesn’t think to clean up their follower base, they will suffer the penalty without understanding what is holding them back.
When Elon Musk put the question of fake profiles on the table in the spring, the signal was quite encouraging. “We’ll kill the spam bots or die trying,” he joked at the time. When asked about his vision for the network, he also emphasized that he considers Twitter to be the global benchmark and that “the business of this network is the dissemination of truth”.
Elon Musk made the bot problem worse
Barely arrived, the businessman announced the launch of a feature that was supposed to curb the phenomenon of fake profiles while generating new revenue for Twitter. However, he came up short on the subject. The entrepreneur simply had the idea of opening up to everyone, for a fee, a mechanism that already exists on Twitter: certification. Until then, this system was reserved for certain profiles (governments, companies, influential personalities, media, etc.) particularly exposed to identity theft attempts. These people can apply for the small blue badge free of charge by sending Twitter documents that confirm their identity. If their application is approved, Twitter attaches a small colored badge to their profile, confirming that this is the person’s official account and distinguishing it from possible imitators.
Arbitrary and imperfect, this system still has the merit of fairly effectively guaranteeing the authenticity of the lucky few whose applications were accepted. When Elon Musk proposed setting up a parallel, paid certification track, however, he opened the door wide to abuse. Because the new owner of Twitter says little about a key element of this new option: paid certification does not include any verification of identity. The use of the term “certification” therefore strongly misleads uninformed Internet users, especially since this term was previously reserved for people whose identity had been verified. Some consequences of an error of this magnitude are amusing; others are more serious.
Certification turns to disaster on Twitter
Following the announcement, pranksters had fun creating fake certified Elon Musk accounts and a fake official Pepsi profile that declared “Coke is better.” The pharmaceutical group Eli Lilly, however, paid the price for Musk’s hasty changes: an Internet user who created a fake certified profile in its name posted a shock message in mid-November announcing that “insulin is now free,” causing a 6% drop in the company’s share price in the hours that followed. The failure was such that Elon Musk had to put his paid subscription project on hold while he reworks it.
In his defense, the army of fake accounts is genuinely hard to beat. “The bots of ten years ago – or even three years ago – are not the bots of today. Before, it was easier to catch them by monitoring abnormal behavior. For example, a profile that posts messages 24 hours a day is suspicious, because that’s not something a human being who needs to eat and sleep does. Nowadays, bots mimic human behavior and are harder to detect,” says Tamer Hassan, co-founder and CEO of Human.
People themselves behave in very different ways (some publish a little, others a lot; some post content prohibited by the platforms, others respect the terms of use). All this makes it difficult to detect fake profiles. “A bot can gain people’s trust by posting normally for a while, then suddenly switch programs,” says Jason Soroko, cybersecurity expert at Sectigo. But the main strength of bots is the ease of their creation: “You kill one, ten reappear,” explains Loïc Guézo, Director of Cybersecurity Strategy for Southern Europe, the Middle East and Africa at Proofpoint. A digital hydra that is hard to fight.
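The round-the-clock heuristic Tamer Hassan mentions can be expressed in a few lines. A minimal sketch in Python (the 20-hour threshold and the synthetic timestamps are illustrative assumptions, not a production detector):

```python
from datetime import datetime, timezone

def active_hours(timestamps):
    """Return the set of distinct UTC hours in which an account posted."""
    return {datetime.fromtimestamp(ts, tz=timezone.utc).hour for ts in timestamps}

def looks_automated(timestamps, hour_threshold=20):
    """Naive heuristic: a human who eats and sleeps rarely posts in 20+
    distinct hours of the day; round-the-clock activity is suspicious."""
    return len(active_hours(timestamps)) >= hour_threshold

# Synthetic "bot": one post every 30 minutes for two days (covers all 24 hours)
bot_posts = [1_700_000_000 + i * 1800 for i in range(96)]
# Synthetic "human": posts spread over 13 consecutive waking hours only
human_posts = [1_700_000_000 + h * 3600 for h in range(9, 22)]

print(looks_automated(bot_posts))    # True
print(looks_automated(human_posts))  # False
```

As the article notes, modern bots defeat exactly this kind of rule by sleeping on schedule, which is why single-signal heuristics like this one no longer suffice on their own.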
Elon Musk’s ambiguity on fake news
However, one can doubt the sincerity of Elon Musk’s crusade against this scourge of the web. To begin with, Twitter teams warned their new boss that this new subscription risked making the problem of fake accounts worse. The very maximalist view Elon Musk holds on freedom of expression also makes his pro-truth positions rather vague. Crying censorship has in fact become the favorite sport of Internet users who spread fake news or hateful content whenever social networks rightly remove it.
When Elon Musk says he thinks Twitter “censors” its users too much, he is really opening the door to more misinformation. He himself has shared some, most notably in October, when he relayed false information about the attack on Paul Pelosi, husband of the American Speaker of the House of Representatives Nancy Pelosi (Editor’s note: he ended up deleting his tweet). Cuts to Twitter’s moderation teams also raise questions. “Having responsive teams at this level is very important when you want to fight fake accounts,” notes Emmanuelle Patry.
Elon Musk’s fight against fake accounts is undoubtedly driven primarily by monetary motives. This summer, he tried to use the existence of fake Twitter profiles as a motive to back out of his $44 billion takeover bid. This month, he used it as a selling point for a poorly made $8 subscription that, as it stands, has no chance of working.
If Elon Musk really wants to tackle the problem of fake accounts, he needs to seriously rethink his certification concept, or put other measures in place. “The most effective approach seems to be to fight as far upstream as possible in the account-creation process (with verification of a real identity),” argues Loïc Guézo, of Proofpoint. The whole difficulty, the expert notes, is finding the right balance: if creating a profile is too tedious (with a lot of information to provide and checks to pass), the risk is discouraging people from signing up at all. Yet the presence of fake profiles, which often have malicious intentions, can also drive users away. Social networks therefore have every interest in engaging more vigorously in this battle.