All posts by JP Messina

Fact-Checking and the Conditions of Responsible Citizenship

The history of classical liberal thought is replete with (empirical) arguments that run basically this way: If the government increases its involvement in X, then ordinary people will stop seeing X as their responsibility. Instead of being concerned about X and working to advance X, they will leave care of X to the state, which will do a worse job at it.

Perhaps the most frequent context in which this argument is invoked involves care for the less fortunate. To wit, if we take it that the government bears responsibility for caring for the poor and downtrodden, this will predictably (and unfortunately) undercut support for mutual aid organizations that can often leverage local knowledge to be more effective at alleviating problems than large, centralized bureaucracies like states. Here’s Wilhelm von Humboldt in a characteristic passage (from The Limits of State Action).

As each individual abandons himself to the solicitous aid of the State, so, and still more, he abandons to it the fate of his fellow-citizens. This weakens sympathy and renders mutual assistance inactive; or, at least, the reciprocal interchange of services and benefits will be most likely to flourish at its liveliest, where the feeling is most acute that such assistance is the only thing to rely upon; and experience teaches us that oppressed classes of the community which are, as it were, overlooked by the government, are always bound together by the closest ties.

https://oll.libertyfund.org/titles/humboldt-the-sphere-and-duties-of-government-1792-1854

My fellow blogger Andrew (J.) Cohen recently advanced a similar argument in the case of state-provided education: the more we see the education of children as the state’s responsibility, the less we (particularly parents) see it as something that we ought to look after.

There are many worries one might have about such arguments. First, is the empirical claim that state solutions crowd out non-state solutions even true? Second, even if the empirical claim is true and private individuals and mutual aid organizations are more effective in some ways, still their help can be bad news for freedom insofar as it can be withheld unless recipients meet oppressive conditions. Third, decentralized efforts to address public problems lack mechanisms for ensuring competence and fairness. Even if fully supported, perfectly fair, and much more effective where they operate, such organizations may under-provide needed services elsewhere. And so on.

One thing my own work has forced me to think about lately is the increase in calls for fact-checking and the labeling of misinformation by social media giants.

My previous posts (here and here) have briefly touched upon reasons for worrying that social media censors and fact-checkers are bound to be fallible. (Indeed, fact-checkers have long shown troubling signs of fallibility, see here, here, here, here, here and here—though also here and here for some reasons for optimism that these shortcomings might be overcome by more thoughtful fact-checking strategies.)

But set aside these issues with the quality of the fact-checking and the political power it might or might not involve. Suppose that the fact-checkers do a decent enough job. Still, the old classical liberal argument above provides reason to worry that widespread fact-checking of this kind might undermine conditions of epistemic responsibility. In short, if we come to expect others to do the hard work of fact-checking for us, we will lose the skills and sense of responsibility for doing it ourselves.

Of course, fact-checking and labeling misinformation is often proposed as an alternative to outright censorship, and it likely is better than outright censorship. After all, it allows individuals to access and assess the mistaken content for themselves, rather than blocking it from view altogether. Moreover, labeling false or misleading content in this way might well improve our epistemic situation by stopping the spread of misinformation that might otherwise “go viral”. But even if we accept that these benefits of the practice reliably obtain, they need to be weighed against its costs. And one set of costs I’ve heard little about involves those associated with the kinds of people an over-reliance on fact-checking might produce. I’m wary (reasonably, I think, but maybe not) of anything that will encourage average people to be even lazier about their epistemic duties than they already are.

Now, social media giants are not states. Accordingly, their efforts to take greater responsibility for fact-checking the content they host might best be interpreted as an instance of voluntary organizations doing what the state is not now doing (and doing it better than the state could), rather than as a threat to voluntary solutions to misinformation. And it is clear to me that it is preferable to have non-state entities in charge of fact-checking than to empower the state to do it. In general, it’s healthy to have lots of different institutions with lots of different norms surrounding what kinds of content they tolerate in their jurisdictions.

Still, lots of people get their information on social media platforms. Many have argued that this gives those platforms certain state-like powers. Though I’m skeptical of the strongest of these claims, it’s reasonable to be concerned that, under conditions of widespread fact-checking across platforms, users might come to accept what they read in these spaces somewhat uncritically. After all, people might develop the reasonable expectation that someone is looking out to ensure that nothing misleading is to be found there. And even if we ignore the fact that, in practice, fact-checking will be “gappy” (with much factually inaccurate information making it through the filters), it is difficult to overstate the dangers associated with allowing other people to do our reading and thinking for us.

It’s fair to object that, because the impetus for further fact-checking is itself the fact that people are bad at processing information, likely to believe lots of foolish nonsense for bad reasons, and so on, there’s nowhere to go from our present situation but up. Still, this seems to admit that the root of the problem lies with how individuals are trained to evaluate information and its sources. Widespread, public fact-checking can at best ensure that the worst of the problem’s consequences are averted. But it does nothing to address the problem itself; indeed, it may even make it worse.

In a provocative passage in The Conflict of the Faculties, Immanuel Kant reminds us that many calls to “take human beings as they are” rather than as “good-natured visionaries fancy they ought to be” ignore the role that political institutions play in making people the way they are. The lesson is that, if we find that we are bad at discharging our epistemic duties, it is worth asking whether this is because of the incentives we face or whether it is a fixed feature of human nature. If the former, then, other things equal, we should avoid strengthening those bad incentives and should instead work to improve them.

For various reasons, I suspect that the trend of increased reliance on independent fact-checkers is here to stay. If I’m right, we must take care to avoid a situation in which we become complacent, off-loading the difficult work of responsible citizenship to strangers with their own sets of interests (which might not track our own). It is true that this is demanding work. But if we can’t figure out how to do what it takes (or if indeed failure is inevitable given deep features of human nature), then it is harder to gainsay the increasingly popular (but in fact ancient) claim that there might be more attractive alternatives for governance than democracy (CE*).

(Thanks to Andrew Cohen for his thoughts on a previous version of this piece.)

CE*=RCL earns commissions if you buy from this link; commissions support this site.

Social media censorship: Further reflections on suppressed coronavirus disinformation

On Wednesday August 5th, Donald Trump posted a snippet of a Fox and Friends segment to his social media accounts. Discussing the important matter of school reopenings, the president said the following:

Schools should be reopened. When you look at children, children are almost—and I would almost say definitely, but almost immune from this disease…they’ve got stronger—it’s hard to believe depending how you feel about it—much stronger immune systems than we do somehow for this and they…don’t have a problem…and I’ve seen some doctors say that they’re totally immune.

Trump goes on to cite as evidence the fact that only one person under 18 died from the virus in the state of New Jersey, which he (no less than his viewers) should know falsifies any claim of total immunity. Charitably, he likely means that children are shielded from the worst effects of the virus. Evidence: just weeks ago, Trump made the more reasonable claims (1) that children face less risk from the virus than adults (they recover more quickly) and (2) that they may transmit it less readily than adults. As in the case of transmission to and from animals, the evidence concerning children’s role in transmitting the virus is still coming in. Whereas about a week ago experts were optimistic about children’s role in transmission (believing, on the basis of limited evidence, that it might be lower), a recent German study has raised doubts (though it has yet to pass peer review). But regarding this risk, Trump admitted that further research was necessary and that his administration was taking this factor seriously.

Speculation about what he really means aside, the false and misleading nature of the letter of his claim (that children are immune) led Facebook and Twitter to remove the video for violating their policies around misinformation and covid-19. Were they right to do so?

In my previous post, I indicated that there was good reason to worry that this was actually the best way of promoting what might be dangerous content. Once again, I awoke to numerous headlines which repeated Trump’s claim. Thanks to the Streisand effect, people will see this claim, that children are ‘almost immune’ to coronavirus, repeated over and over again; thanks to the illusory truth effect, people may be more susceptible to believing it, even if they know it’s false.

Here, I want to emphasize a different strategic aspect of all of this. Suppose that Trump knows that the more strident claims are strictly false and that they will cause controversy. (If you watch the video closely, he indicates that he knows as much when he hedges: “I hate to use the word totally because the news will say, ‘oh, he made the word totally and he shouldn’t have used that word.’”) Might censoring it frustrate the aims of the censoring parties and ultimately serve Trump’s interests?

Perhaps. It is unprecedented for social media platforms to remove the president’s speech. Their policy, to date, has (reasonably in my view) been that, though what the president says might be false, misleading, or harmful, the people have a right to know that he’s said it (even if they should also be informed that it is misleading). But such platforms have been facing increased pressure by representatives of more traditional media, by politicians, by advertisers, and by some users to exercise a heavier hand in this regard, and to stop exempting Trump’s speech from their community standards. Trump, already so annoyed by the ways in which social media platforms have handled his content as to issue an executive order barring them from engaging in censorship, presumably knows this. The more he can get social media companies to censor him, the more he may be able to convince his base that these platforms are untrustworthy.

Supposing it is true that a majority of users of social media platforms (including 38% of Democrats) already believe that these platforms are biased against conservatives, censoring the president’s speech in this manner might further damage the platforms’ reputations (which have already taken a large hit in the past year). Not only can this sort of censorship further increase polarization by leading conservatives to disengage (costing the platforms active users and, ultimately, advertising revenue); it may also cause people more generally to doubt the platforms’ disclaimers about the dangerous or misleading content they choose to leave up, reducing their credibility and leaving vulnerable persons more susceptible to misinformation (though see Goldman’s Knowledge in a Social World, ch. 7). In the particular case at hand, these effects may be amplified because the censored party is the president, and it is reasonable to believe that voters have a legitimate interest in knowing what their leaders are saying, true or false, good or evil (though some evidence suggests that, with respect to offensive content in particular, many think this kind of censorship, even of the U.S. president, is desirable).

One thing that these considerations put into sharp relief is that, despite the bare facts that social media companies have a right to censor and legitimate interests in censoring, there is no guarantee that they will censor well, even relative to their own goals (somewhat narrowly construed). If they are sufficiently bad at choosing which content to censor to advance their ends (was this really the most dangerous segment we’ve heard?), this establishes a weak presumption against their censorship—not as a matter of law or even of ethics, but as a matter of organizational rationality.

Still, the claim that children are basically immune from coronavirus is false and may mislead parents into taking risks with their children that they ought not to take. While I think Trump did not mean exactly what he said (and that most people can understand this), surely an interest in protecting children favors censoring the content, outweighing this presumption.

Yet, whether this consideration is decisive in favor of censorship does not simply depend upon the magnitude of the risks unreflective uptake of the content poses (which might be slight). It also depends crucially upon censorship’s being sufficiently effective in stopping parents from taking such risks as to outweigh people’s legitimate interest in hearing what our elected officials say about important topics and the costs to credibility that platforms might incur as a result. Here, it is important that the censorship will not achieve this much unless enough people who would have otherwise seen and taken Trump’s words at face value are now shielded from their harmful effects. My other worries about the unintended consequences of censorship aside, I wonder how many now find themselves in such a position.

In the end, though Trump is wrong that children are immune to covid-19, he might well be right that the evidence favors reopening schools. Given plausible hypotheses about the importance of early education in socializing children, for adding meaning and purpose to their lives, for helping parents get back to work, for taking children out of deprived and abusive environments, and for ensuring that vulnerable children are not left behind, reasons to favor reopening hang heavy in the balance. These reasons must be weighed carefully against risks to children and to teachers and family members from reopening, risks that Trump himself has previously acknowledged. If the United States chooses to reopen its schools, it will not be alone. Sweden never closed them, and a number of other countries (many of them apparently faring better in the fight against coronavirus) have similar plans. All other matters aside, it would be unfortunate if discussion of these serious issues were to take a back seat to the political theater of a battle between Silicon Valley executives and the commander in chief. Sadly, all parties involved are acting in ways that may predictably realize this unfortunate outcome.

Social Media Censorship: Four Lessons from the Recent Suppression of Covid-19 ‘Disinformation’

On July 27th 2020, a group of physicians calling themselves “Frontline Doctors” posted a video to Facebook, YouTube and Twitter. The video displays licensed medical doctors in front of the Supreme Court building (1) advocating the reopening of schools, (2) suggesting that there are public health costs of lockdowns (e.g., excess suicides, cases of depression, domestic violence, and substance abuse) and (3) extolling the virtues of zinc and hydroxychloroquine (a drug whose robust supply is essential for managing lupus and other ailments) in treating and preventing COVID-19 infections. By the morning of July 28th, the video had roughly 14 million views and had been removed from every mainstream platform that had initially hosted it for violating their coronavirus misinformation policies. On the same morning, I became curious and watched the video elsewhere. It was not hard to find.

Lesson one: Despite claims that private social media companies regularly violate persons’ free speech rights, actions by private companies to censor content are much less worrying than similar actions by state agents. This is partially because it’s relatively easy to access content that private parties take down. Less so when the state does it.

On July 29th, the New York Times’ David Leonhardt ran a “morning briefing” indicating that the video had been removed for suggesting that hydroxychloroquine was an effective cure and that masks were unnecessary. The remark on masks was a mere snippet of the much broader message. “You don’t need a mask,” Stella Immanuel said, “there’s a cure.” She herself admits to wearing a surgical mask, so presumably she does not mean that there is no reason to wear a mask in the absence of the drug’s widespread deployment. Other doctors who spoke at the event clearly advocated social distancing and mask-wearing practices.

But leaving this claim aside, there is at least some truth in much of what these doctors were saying. The segment lasted over 45 minutes, only a small portion of which contained anything about masks and only some of which concerned hydroxychloroquine. Many of the group’s claims about the safety of reopening schools and the hidden public health costs of lockdowns are largely uncontroversial. Others, e.g., the claim that Sweden’s response represents an alternative approach to locking down, are likewise true, even if the results of Sweden’s approach have been mixed. Labeling the entire segment false or misleading thus does a disservice to what’s true in it.

Lesson two: John Stuart Mill was right that censored content that is false often contains important half-truths and that this matters when considering whether to suppress it.

In the same piece, Leonhardt claimed that confusion induced by social media platforms’ failure to aggressively censor content is among the most noteworthy causes of the United States’ comparatively bad coronavirus outcomes. (Leonhardt also cited Sinclair’s media network, which broadcasts content downplaying the risks of the virus.) Let’s leave aside the fact that the causal explanation of the U.S.’s performance relative to its peers is a matter of some complexity and focus instead on something striking about the causal claim he in fact makes: that social media companies’ lack of censorship deserves a large portion of the blame for these outcomes.

But notice that reporters like Leonhardt at mainstream media outlets have likely done more than any social media platform to spread this particular video’s message. Had the message merely remained on Twitter, YouTube, and Facebook (as so much content does) I would not have watched it. The same is surely true for countless others. But because the video’s content, which might have otherwise maintained a kind of cult viewership, was covered by all of the major news outlets, lots of people sought it out. This is the Streisand Effect in action: very often, attempts to suppress information lead to its viral spread. This matters because there are in effect two possibilities: either the ineptly suppressed content is dangerous or it isn’t. If it is genuinely dangerous, then Leonhardt (and others like him) have acted irresponsibly by their own lights by drawing much more attention to it. If the content of the video is not genuinely dangerous, on the other hand, then the main justification for removing the content in the first place is implausible.

Now, you might say that the way mainstream outlets spread the speech was not dangerous insofar as it was framed explicitly as containing disinformation. The problem is twofold. First, the current media climate is so polarized that even once-reputable outlets like the New York Times are deemed untrustworthy by a significant subset of the population. (Some go so far as to claim that these outlets are anti-reliable.) When such outlets declare something to be disinformation, then, there is real reason to worry that people skeptical of the outlet will be more favorably disposed to the bad speech than they’d otherwise have been. Second, some research has uncovered an Illusory Truth Effect, according to which people are more likely to believe things that they hear constantly repeated, even if listeners know the repeated claim is false.

Lesson Three: If there’s dangerous content out there, it’s often better to ignore it than draw increased attention to it. Paradoxically, censoring content is among the best ways of promoting it. Given the newsworthiness of social media censorship, were these companies to do what Leonhardt wants them to do and censor content more often, the effect might well be that the allegedly dangerous content reaches a wider audience than it otherwise would.

None of this is to deny that some of what these doctors said sounds crazy. (Though, notably, for some of them, their professional views on the efficacy of hydroxychloroquine are among their most innocuous.) Still, it’s important not to pretend that the coronavirus treatment science is settled—there is still much that we don’t know, and mainstream medical researchers at least deem the hypothesis that hydroxychloroquine is an effective treatment worthy of study in high-profile scientific outlets. Until these questions are settled, it’s important for professionals, even fringe professionals, to be able to make their arguments without being dismissed out of hand and derided. Importantly, the arguments regarding hydroxychloroquine offered by the so-called “Frontline Doctors” are largely anecdotal, rely on small sample sizes (n=350), and are afflicted with other problems evident to anyone remotely well-versed in critical thinking. Were these arguments to become widely accepted, it would be important to recognize their flaws and to draw public attention to them. But to think that the conclusions of such arguments are beyond the pale—especially in the context of the broader pandemic, during which those insisting on proper data collection techniques have been derided for not acting quickly enough—is, frankly, not credible. Thus even if these arguments should be discredited and derided, it’s important to take care not to similarly deride and discredit those who argue for similar conclusions from more solid grounds.

Lesson four: If you must draw attention to a bad argument that someone makes on some important issue, focus on the argument’s substance, rather than discrediting what speakers say by taking small claims they make out of context. Doing so is a small first step toward establishing credibility with those who disagree with you. Again, there is no First Amendment issue here, but even the most fastidious protection of our rights to speak against government interference is insufficient for ensuring a healthy atmosphere for discourse.