United States: Facebook, Twitter, Microsoft and others ask Supreme Court not to allow lawsuits against algorithms
A wide range of corporations, internet users, academics and even human rights experts on Thursday defended Big Tech’s liability shield in a landmark Supreme Court case involving YouTube’s recommendation algorithms, with some arguing that excluding AI-driven recommendation engines from federal legal protections would lead to sweeping changes to the open internet.
The diverse group that weighed in before the court ranged from big tech companies like Meta, Twitter and Microsoft to some of Big Tech’s most vocal critics, including Yelp and the Electronic Frontier Foundation. Even Reddit and a set of volunteer Reddit moderators got involved.
In amicus curiae filings, the companies, organizations and individuals said the federal law the court could narrow in the case — Section 230 of the Communications Decency Act — is essential to the basic functioning of the web. Section 230 has been used to shield all websites, not just social media platforms, from lawsuits over third-party content.
The issue at the heart of the case, Gonzalez v. Google, is whether Google can be sued for recommending pro-ISIS content to users through its YouTube algorithm; the company has argued that Section 230 shields it from such litigation. But the plaintiffs in the case, family members of a person killed in a 2015 ISIS attack in Paris, argue that YouTube’s recommendation algorithm can be held liable under US anti-terrorism law.
In their filing, Reddit and Reddit moderators argued that a decision allowing litigation against the tech industry’s algorithms could lead to future lawsuits against even non-algorithmic recommendation methods, and potentially targeted lawsuits against individual Internet users.
“Reddit’s entire platform is built around users ‘recommending’ content for the benefit of others by taking actions such as upvoting and pinning content,” their filing states. “The consequences of petitioners’ claim in this case should not be underestimated: their theory would greatly expand the potential for Internet users to be sued over their online interactions.”
Yelp, a longtime antagonist of Google, has argued that its business depends on delivering relevant, non-misleading reviews to its users, and that a ruling creating liability for its recommendation algorithms could break Yelp’s core functions by effectively forcing it to stop curating reviews altogether, even those that may be manipulative or fake.
“If Yelp cannot verify and recommend reviews without facing liability, the costs of submitting a fraudulent review disappear,” Yelp wrote. “If Yelp had to display every review submitted […], business owners could submit hundreds of positive reviews for their own business with minimal effort or risk of penalty.”
Section 230 ensures that platforms can moderate content to show users the most relevant data from the massive amount of information being added to the internet every day, Twitter said.
“It would take an average user nearly 181 million years to download all the data from the web today,” the company wrote.
If the Supreme Court advances a new interpretation of Section 230 that protects platforms’ right to remove content but excludes protections for their right to recommend content, it would raise new questions about what it means to recommend something online, Meta argued in its filing.
“If the mere act of displaying third-party content in a user’s feed qualifies as a ‘recommendation,’ then many services are potentially liable for nearly all of the third-party content they host,” Meta wrote, “because almost any decision about how to classify, select, organize and display third-party content can be considered a ‘recommendation’ of that content.”
A ruling that tech platforms can be sued for their recommendation algorithms would jeopardize GitHub, the vast online code repository used by millions of programmers, Microsoft said.
“The feed uses algorithms to recommend software to users based on projects they have worked on or shown interest in previously,” Microsoft wrote, adding that for “a platform with 94 million developers, the consequences [of limiting Section 230] could potentially destroy the global digital infrastructure.”
Microsoft’s search engine, Bing, and its social network, LinkedIn, also rely on Section 230’s protections for their algorithms, the company said.
According to New York University’s Stern Center for Business and Human Rights, it would be nearly impossible to craft a rule that singles out algorithmic recommendation as a meaningful category of liability, and doing so could “result in the loss or obfuscation of enormous amounts of valuable speech,” especially speech from marginalized or minority groups.
CNN