The New York Times Warns That Freedom of Speech ‘Threatens Public Health’ and ‘Democracy Itself’

Are federal officials violating the First Amendment when they pressure social media companies to suppress “misinformation”? That is the question posed by a federal lawsuit that the attorneys general of Missouri and Louisiana filed last May.

New York Times reporter Steven Lee Myers warns that the lawsuit “could disrupt the Biden administration’s already struggling efforts to combat disinformation.” He worries that “the First Amendment has become, for better or worse, a barrier to virtually any government efforts to stifle a problem that, in the case of a pandemic, threatens public health and, in the case of the integrity of elections, even democracy itself.” As Myers frames the issue, freedom of speech is a threat to “public health” and “even democracy itself.”

There is no denying that when people are free to express their opinions, no matter how misguided, ill-informed, or hateful, some of them will say things that are misleading, demonstrably false, or divisive. The First Amendment nevertheless guarantees their right to say those things, based on the premise that the dangers posed by unfettered speech are preferable to the dangers posed by government attempts to regulate speech in what it perceives as the public interest.

Myers may disagree with that calculation or recoil at its implications. But the First Amendment clearly bars the government from banning speech it views as hazardous to public health or democracy. The plaintiffs in Missouri v. Biden, who include individual social media users represented by the New Civil Liberties Alliance (NCLA), argue that federal officials have violated the First Amendment by trying to accomplish that goal indirectly, blurring the distinction between private moderation and state censorship. The government “can’t use third parties to do what it can’t do,” NCLA attorney Jenin Younes tells the Times.

Myers does not buy it. He thinks the private communications that the plaintiffs see as evidence of censorship by proxy actually show that social media companies made independent decisions about which speech and speakers they were willing to allow on their platforms.

Those emails were produced during discovery in response to orders from U.S. District Judge Terry A. Doughty, whom Myers portrays as biased against the Biden administration. He notes that Doughty was “appointed by [Donald] Trump in 2017” and “has previously blocked the Biden administration’s national vaccination mandate for health care workers and overturned its ban on new federal leases for oil and gas drilling.” In this case, Myers says, Doughty “granted the plaintiffs’ request for extensive discovery even before considering their request for a preliminary injunction.”

Myers also suggests that the plaintiffs are motivated by dubious ideological grievances. “Their claims,” he says, “reflect a narrative that has taken root among conservatives that the nation’s social media companies have joined with government officials to discriminate against them, despite evidence showing the contrary.”

Although Myers implies that the case and Doughty’s handling of it are driven by partisan animus, he notes that “many of the examples cited in the lawsuit also involved official actions taken during the Trump administration, including efforts to fight disinformation ahead of the 2020 presidential election.” That suggests the plaintiffs’ objections to government meddling in moderation decisions go beyond a desire to score political points.

The emails revealed by this litigation, like the internal Twitter communications that Elon Musk has been sharing with journalists, indicate that social media platforms generally were eager to address the content concerns raised by public health and law enforcement officials. They responded promptly to take-down requests and solicited additional suggestions. The tone of the communications is, by and large, cordial and collaborative.

The plaintiffs in Missouri v. Biden see that coziness as troubling. But Myers emphasizes the exceptions. “The growing trail of internal communications,” he writes, “suggests a more convoluted and tortured struggle between government officials frustrated by the spread of dangerous falsehoods and company officials who resented and often resisted government entreaties.”

Myers concedes that “government officials” were trying to prevent “the spread of dangerous falsehoods” by encouraging Facebook et al. to delete specific posts and banish specific users. He also concedes that the people running those platforms “resented and often resisted” those efforts. But he does not think those facts are grounds for concern that officials used their positions to shape moderation decisions, resulting in less speech than otherwise would have been allowed.

Myers misrepresents the context of these “government entreaties,” which is important in assessing the extent to which they increased suppression of disfavored speech. He notes a June 16, 2021, text message in which Nick Clegg, Facebook’s vice president of global affairs, “testily” told Surgeon General Vivek Murthy, “It’s not great to be accused of killing people.”

In Myers’ telling, that remark was prompted by Murthy’s conclusion that COVID-19 “misinformation” had resulted in “avoidable illnesses and death,” which prompted him to demand “greater transparency and accountability” from social media companies. Myers does not mention that Clegg sent that message after President Joe Biden publicly accused Facebook and other platforms of “killing people” by failing to suppress misinformation about COVID-19 vaccines. Nor does Myers mention that Murthy had just published an advisory in which he urged a “whole-of-society” effort to combat the “urgent threat to public health” posed by “health misinformation,” possibly including “appropriate legal and regulatory measures.”

Myers also omits something else that Clegg said in that text message: He was “keen to find a way to deescalate and work together collaboratively.” What Myers presents as evidence that Facebook “testily” resisted “government entreaties,” in other words, is actually evidence that the platform was desperate to assuage the president’s anger.

Toward that end, Facebook did what Biden and Murthy demanded. "Thanks again for taking the time to meet earlier today," Clegg said in an email to Murthy a week later. "I wanted to make sure you saw the steps we took just this past week to adjust policies on what we are removing with respect to misinformation, as well as steps taken to further address the 'disinfo dozen.'" He bragged that his company had removed objectionable pages, groups, and Instagram accounts; taken steps to make several pages and profiles "more difficult to find on our platform"; and "expanded the group of false claims that we remove to keep up with recent trends."

As White House spokeswoman Robyn M. Patterson describes it, the administration is merely asking Facebook et al. to enforce “their own policies to address misinformation and disinformation.” But federal officials also have pressed social media platforms to expand their definitions of those categories. And according to Clegg, Facebook responded to Biden’s homicide charge by “adjust[ing] policies on what we are removing with respect to misinformation.”

Myers thinks there is nothing to see here. “The legal challenge for the plaintiffs is to show that the government used its legal or regulatory power to punish the companies when they did not comply,” he says. But the companies typically did “comply,” and it is not a stretch to suggest that they did so because they anticipated how that “legal or regulatory power” might be deployed against them.

“As evidence of pressure,” Myers writes, “the lawsuit cites instances when administration officials publicly suggested that the companies could face greater regulation.” In her interview with the Times, for example, Patterson “reiterated President Biden’s call for Congress to reform Section 230 of the Communications Decency Act, a law that broadly shields internet companies from liability for what users post on their sites.” But Myers suggests fear of losing that protection is implausible, because the Biden administration “could not repeal the law on its own” and “Congress has shown little appetite for revisiting the issue, despite calls by Mr. Biden and others for greater accountability of social media companies.”

Since scaling back or repealing Section 230 is a bipartisan cause, it is hardly crazy to think that angering federal officials by refusing to “work together collaboratively” would make such legislation more likely. Complaints about unrestrained misinformation would strengthen Biden’s argument that “greater accountability” requires increased exposure to liability, and Congress might be more inclined to agree.

Even without new legislation, the administration could make life difficult for social media companies through regulation, litigation, and antitrust enforcement. As Myers sees it, that would not be a problem unless officials threatened the companies with retaliation and then delivered on that threat. That standard would leave the government free to regulate online speech as long as it never engaged in explicit extortion.

The post The New York Times Warns That Freedom of Speech 'Threatens Public Health' and 'Democracy Itself' appeared first on Reason.com.