E.U. Takes Aim at Social Media’s Harms With Landmark New Law

The European Union reached a deal on Saturday on landmark legislation that would force Facebook, YouTube and other internet services to combat misinformation, disclose how their services amplify divisive content and stop targeting online ads based on a person’s ethnicity, religion or sexual orientation.

The law, called the Digital Services Act, is intended to address social media’s societal harms by requiring companies to more aggressively police their platforms for illicit content or risk billions of dollars in fines. Tech companies would be compelled to set up new policies and procedures to remove flagged hate speech, terrorist propaganda and other material defined as illegal by countries within the European Union.

The law aims to end an era of self-regulation in which tech companies set their own policies about what content could stay up or be taken down. It stands out from other regulatory attempts by addressing online speech, an area that is largely off limits in the United States because of First Amendment protections. Google, which owns YouTube, and Meta, the owner of Facebook and Instagram, would face yearly audits for “systemic risks” linked to their businesses, while Amazon would confront new rules to stop the sale of illegal products.

{snip}

“This will be a model,” Alexandra Geese, a Green party member of the European Parliament from Germany, said of the new law. Ms. Geese, who helped draft the Digital Services Act, said she had already spoken with legislators in Japan, India and other countries about the legislation.

European policymakers reached the deal in Brussels early Saturday after 16 hours of negotiations.

“Platforms should be transparent about their content moderation decisions, prevent dangerous disinformation from going viral and avoid unsafe products being offered on marketplaces,” said Margrethe Vestager, who has spearheaded much of the bloc’s work to regulate the tech industry as the executive vice president of the European Commission, the executive arm of the European Union.

Yet even as the European authorities gain newfound legal powers to rein in the tech behemoths, critics wondered how effective they would be. Writing laws can be easier than enforcing them, and while the European Union has a reputation as the world’s toughest regulator of the tech industry, its actions have sometimes appeared tougher on paper than in practice.

An estimated 230 new workers will be hired to enforce the new laws, a figure that critics said was insufficient when compared with the resources available to Meta, Google and others.

{snip}

Tech companies and industry trade groups have warned that the laws could have unintended consequences, like harming smaller businesses and undercutting Europe’s digital economy.

{snip}

Backers of the new laws said they had learned from past mistakes. While enforcement of the General Data Protection Regulation (G.D.P.R.), the bloc’s sweeping privacy law, was left to regulators in individual countries, which many felt were overmatched by multinational corporations with seemingly bottomless legal budgets, the new laws will largely be enforced out of Brussels by the European Commission, a major shift in approach.

{snip}

The law, which would begin taking effect by next year, does not order internet platforms to remove specific forms of speech, leaving that to individual countries to define. (Certain forms of hate speech and references to Nazism are illegal in Germany but not in other European countries.) The law forces companies to add ways for users to flag illicit content.

Prompted by the war in Ukraine and the pandemic, policymakers gave regulators additional power to force internet companies to respond quickly during a national security or health crisis. This could include stopping the spread of certain state propaganda on social media during a war, or the online sale of bogus medical supplies and drugs during a pandemic.

Google would face new obligations to stop the spread of illegal content on its search engine.

Many provisions related to social media track closely with recommendations made by Frances Haugen, the former Facebook employee who became a whistle-blower. The law requires companies to offer a way for users to turn off recommendation algorithms that use their personal data to tailor content.

{snip}