A bunch of comments to my Large Libel Models posts suggest that, when users believe (say) ChatGPT-4’s fake quotes about others, the true responsibility is on the supposedly gullible users and not on OpenAI. I don’t think this is consistent with how libel law sees things, and I want to explain why.
Say that the Daily Tattler, a notoriously unreliable gossip rag, puts out a story about you, saying that “rumor has it that Dr. [you] had been convicted of child molestation ten years ago in Florida, as the Miami Herald reported.” This is utterly false, and the result of careless reporting on their part; there was no conviction and no Miami Herald report. Yet some people believe the story, and as a result stop doing business with you. (Say you’re a doctor, so your business relies on people’s confidence in you.)
Now there are three parties here we can think about.
There’s you, and you’re completely innocent.
There’s the Daily Tattler, which published a story that’s negligently false.
And there are the people who stop doing business with you. They too might be viewed negatively: Perhaps they’re gullible for believing what the Daily Tattler says. Perhaps they’re unfair in not looking things up themselves (maybe checking the Miami Herald’s archives), or calling you and asking your side of the story.
But the premise of libel law is that you can sue the Daily Tattler, even though, in a perfect world, the readers would have done better. You can’t, after all, sue the readers—it’s not a tort for them to avoid you based on their gullibility. And the Daily Tattler is at fault for negligently putting out the false assertion of fact that could deceive the unwise reader. Yes, perhaps people should be educated not to trust gossip rags. But so long as readers do in some measure trust them (at least as to matters where the reader lacks an incentive to do further research), libel law takes that into account.
Now to be sure the law doesn’t always allow liability for publishers based on all unwise reactions by readers. In particular, the question whether the statement “state[s] or impl[ies] assertions of objective fact” turns on the reaction of a reasonable reader. A statement that a reasonable reader would recognize is parody, for instance, wouldn’t be actionable even if some readers might miss the joke.
But when it comes to statements that a reasonable reader would perceive as factual assertions, they are potentially actionable if they are false and reputation-damaging. That the reader might be unwise for trusting the source doesn’t get the source off the hook.
So if you sue the Daily Tattler for negligently publishing the false allegation against you, the Tattler can’t turn around and say, “It’s not our fault! It’s the fault of the stupid readers who trusted us, notwithstanding our having specifically labeled this as ‘rumor.'” Under well-established libel law, it would lose.
Now maybe there’s some public policy reason why OpenAI should be off the hook for ChatGPT-4 communications, because it has warned people that the communications may be inaccurate, when the Daily Tattler isn’t off the hook for its communications, despite its warning people that the communications may be inaccurate (since they’re just rumor). But standard libel law seems to take a different view.
[* * *]
Here’s what Part I.C of my Large Libel Models? Liability for AI Output article has to say about the general legal background here; note, though, that I had posted an earlier version of that chapter last week.
AIs could, of course—and probably should—post disclaimers that stress the risk that their output will contain errors. Bard, for instance, includes under the prompt box, “Bard may display inaccurate or offensive information that doesn’t represent Google’s views.” But such disclaimers don’t immunize AI companies against potential libel liability.
To begin with, such disclaimers can’t operate as contractual waivers of liability: Even if the AIs’ users are seen as waiving their rights to sue based on erroneous information when they expressly or implicitly acknowledge the disclaimers, that can’t waive the rights of the third parties who might be libeled.
Nor do the disclaimers keep the statements from being viewed as actionable false statements of fact. Defamation law has long treated false, potentially reputation-damaging assertions about people as actionable even when there’s clearly some possibility that the assertions are false. No newspaper can immunize itself from libel lawsuits for a statement that “Our research reveals that John Smith is a child molester” by simply adding “though be warned that this might be inaccurate” (much less by putting a line on the front page, “Warning: We may sometimes publish inaccurate information”). Likewise, if I write “I may be misremembering, but I recall that Mary Johnson had been convicted of embezzlement,” that could be libelous despite my “I may be misremembering” disclaimer.
This is reflected in many well-established libel doctrines. For instance, “when a person repeats a slanderous charge, even though identifying the source or indicating it is merely a rumor, this constitutes republication and has the same effect as the original publication of the slander.”[1] When speakers identify something as rumor, they are implicitly saying “this may be inaccurate”—but that doesn’t get them off the hook.
Indeed, according to the Restatement (Second) of Torts, “the republisher of either a libel or a slander is subject to liability even though he expressly states that he does not believe the statement that he repeats to be true.”[2] It’s even more clear that a disclaimer that the statement merely may be inaccurate can’t prevent liability.
Likewise, say that you present both an accusation and the response to the accusation. By doing that, you’re making clear that the accusation “may [be] inaccurate.” Yet that doesn’t stop you from being liable for repeating the accusation.
To be sure, there are some narrow and specific privileges that defamation law has developed to free people to repeat possibly erroneous content without risk of liability, in particular contexts where such repetition is seen as especially necessary. For instance, some courts recognize the “neutral reportage” privilege, which immunizes “accurate and disinterested” reporting of “serious charges” made by “a responsible, prominent organization” “against a public figure,” even when the reporter has serious doubts about the accuracy of the charges.[3] But other courts reject the privilege altogether.[4] And even those that accept it apply it only to narrow situations: Reporting false allegations remains actionable—even though the report makes clear that the allegations may be mistaken—when the allegations relate to matters of private concern, or are made by people or entities who aren’t “responsible” and “prominent.”[5] It certainly remains actionable when the allegations themselves are erroneously recalled or reported by the speaker.
The privilege is seen as needed precisely because of the general rule that—absent such a privilege—passing on allegations can be libelous even when it’s made clear that the allegations may be erroneous. And the privilege is a narrow exception justified by the “fundamental principle” that, “when a responsible, prominent organization . . . makes serious charges against a public figure,” the media must be able to engage in “accurate and disinterested reporting of those charges,” because the very fact that “they were made” makes them “newsworthy.”[6]
Likewise, the narrow rumor privilege allows a person to repeat certain kinds of rumors to particular individuals to whom the person owes a special duty—such as friends and family members—if the rumors deal with conduct that may threaten those individuals. (This stems from what is seen as the special legitimacy of people protecting friends’ interests.[7]) This is why, for instance, if Alan tells Betty that he had heard a rumor that Betty’s employee Charlie was a thief, Alan is immune from liability.[8] But the privilege exists precisely because, without it, passing along factual allegations to (say) a stranger or to the general public—even with an acknowledgement that they “may [be] inaccurate”—may be actionable.[9]
Now a disclaimer that actually describes something as fiction, or as parody or as a hypothetical (both forms of fiction), may well be effective. Recall that, in libel cases, a “key inquiry is whether the challenged expression, however labeled by defendant, would reasonably appear to state or imply assertions of objective fact.”[10] It’s not actionable to state something that obviously contains no factual assertion at all—as opposed to just mentioning a factual assertion about which the speaker expresses uncertainty, or even disbelief.[11] But neither ChatGPT nor Bard actually describes itself as producing fiction, since that would be a poor business model for them. Rather, they tout their general reliability, and simply acknowledge the risk of error. That acknowledgment, as the cases discussed above show, doesn’t preclude liability.
[1] Ringler Associates Inc. v. Maryland Casualty Co., 80 Cal. App. 4th 1165, 1180 (2000).
[2] Restatement (Second) of Torts § 578 cmt. e; see also Martin v. Wilson Pub. Co., 497 A.2d 322, 327 (R.I. 1985); Hart v. Bennet, 267 Wis. 2d 919, 944 (App. 2003).
[3] Edwards v. National Audubon Soc’y, 556 F.2d 113 (2d Cir. 1977). A few later cases have extended this to certain charges on matters of public concern against private figures. Others have rejected the privilege as to statements about private figures, without opining on its availability as to public figures. See, e.g., Khawar v. Globe Int’l, Inc., 965 P.2d 696, 707 (Cal. 1998); Fogus v. Cap. Cities Media, Inc., 444 N.E.2d 1100, 1102 (App. Ct. Ill. 1982).
[4] Norton v. Glenn, 860 A.2d 48 (Pa. 2004); Dickey v. CBS, Inc., 583 F.2d 1221, 1225–26 (3d Cir. 1978); McCall v. Courier-J. & Louisville Times, 623 S.W.2d 882 (Ky. 1981); Postill v. Booth Newspapers, Inc., 325 N.W.2d 511 (Mich. App. 1982); Hogan v. Herald Co., 84 A.D.2d 470, 446 (N.Y. App. Div. 1982).
[5] A few authorities have applied this privilege to accurate reporting of allegations on matters of public concern generally, but this appears to be a small minority rule. Barry v. Time, Inc., 584 F. Supp. 1110 (N.D. Cal. 1984); Tex. Civ. Prac. & Rem. Code § 73.005.
[6] Edwards, 556 F.2d at 120. Likewise, the fair report privilege allows one to accurately repeat allegations that were made in government proceedings, because of the deeply rooted principle that the public must be able to know what was said in those proceedings, even when those statements damage reputation. But it too is sharply limited to accurate repetition of allegations originally made in government proceedings.
[7] Restatement (Second) of Torts § 602.
[8] Id. cmt. 2. Another classic illustration is a parent warning an adult child about a rumor that the child’s prospective spouse or lover is untrustworthy. Id. cmt. 1.
[9] See, e.g., Martin v. Wilson Pub. Co., 497 A.2d 322, 327 (R.I. 1985).
[10] Takieh v. O’Meara, 497 P.3d 1000, 1006 (Ariz. Ct. App. 2021).
[11] See, e.g., Greene v. Paramount Pictures Corp., 813 F. App’x 728, 731–32 (2d Cir. 2020). Even then, a court might allow liability if it concludes that a reasonable person who knows plaintiff would understand that defendant’s ostensible fiction is actually meant as a roman à clef that conveys factual statements about plaintiff. The presence of a disclaimer wouldn’t be dispositive then. See, e.g., Pierre v. Griffin, No. 20-CV-1173-PB, 2021 WL 4477764, *6 n.10 (D.N.H. Sept. 30, 2021).
The post Defamation, Responsibility, and Third Parties appeared first on Reason.com.