“Every major internet company now has a group of haters who will never be satisfied,” said Eric Goldman, who codirects the High Tech Law Institute at the Santa Clara University School of Law. “They are opposed to anything that would benefit their target. It leads to wacky situations.”
One such wacky situation: Fox News and the Wall Street Journal have spent years attacking Section 230 for protecting the platforms they allege are prejudiced against conservatives. Now their owner, Rupert Murdoch, potentially faces a new universe of defamation claims in the country of his birth, where he still owns a media empire.
Another: A tech watchdog group that includes Laurence Tribe, the constitutional law scholar, and Maria Ressa, the Filipina journalist who has been hounded by the Duterte regime through the country’s libel laws, has released a public statement welcoming the expansion of defamation liability — an expansion that, as Joshua Benton suggested at Nieman Lab, presents a tempting model for authoritarians around the world.
Launched in September 2020, the Real Facebook Oversight Board promised to provide a counterweight to the actual Oversight Board. The official board, itself a global superteam of law professors, technologists, and journalists, is where Facebook now sends thorny public moderation decisions. Its most important decision so far, to temporarily uphold Facebook’s ban of former president Trump while asking the company to reassess the move, was seen paradoxically as both a sign of its independence and a confirmation of its function as a pressure relief valve for criticism of the company.
On its website and elsewhere, the Real Facebook Oversight Board criticizes the original board for its “limited powers to rule on whether content that was taken down should go back up” and its timetable for reaching decisions: “Once a case has been referred to it, this self-styled ‘Supreme Court’ can take up to 90 days to reach a verdict. This doesn’t even begin to scratch the surface of the many urgent risks the platform poses.” In other words: We want stronger content moderation, and we want it faster.
Given the role many allege Facebook has played around the world in undermining elections, spreading propaganda, fostering extremism, and eroding privacy, this might seem like a no-brainer. But there’s a growing acknowledgment that moderation is a problem without a one-size-fits-all solution, and that sweeping moderation comes with its own set of heavy costs.
In a June column for Wired, the Harvard Law lecturer evelyn douek wrote that “content moderation is now snowballing, and the collateral damage in its path is too often ignored.” Definitions of bad content are political and inconsistent. Content moderation at an enormous scale has the potential to undermine the privacy many tech critics want to protect — particularly the privacy of racial and religious minorities. And perhaps most importantly, it’s hard to prove that content moderation decisions do anything more than remove preexisting problems from the public eye.
Journalists around the world have condemned the Australian court’s decision, itself a function of that country’s famously plaintiff-friendly defamation laws. But the Real Facebook Oversight Board’s statement is a reminder that the impulses of the most prominent tech watchdog groups can be at odds with a profession that depends on free expression to thrive. Once you get past extremely obvious cases for moderation — images of child sexual abuse, incitements to violence — suppressing bad content inevitably involves political judgments about what, exactly, is bad. Around the world, those judgments don’t always, or even usually, benefit journalists.
“Anyone who is taking that liability paradigm seriously isn’t connecting the dots,” Goldman said.