Why § 230 Likely Doesn’t Provide Immunity for Libels Composed by ChatGPT, Bard, etc.

This week and likely next, I’ll be serializing my Large Libel Models? Liability for AI Output draft. I had already posted on why I think such AI programs’ communications are reasonably perceived as factual assertions, and why disclaimers about possible errors are insufficient to avoid liability. Here, I want to explain why I think § 230 doesn’t protect the AI companies, either.

[* * *]

To begin with, 47 U.S.C. § 230 likely doesn’t immunize material produced by AI programs. Section 230 states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” “[I]nformation content provider” is in turn defined to cover “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.”[1] A lawsuit against an AI company would aim to treat it as the publisher or speaker of information provided by itself, as an entity “that is responsible, in whole or in part, for the creation or development of [such] information.”[2]

As the leading early § 230 precedent, Zeran v. AOL, pointed out, in § 230 “Congress made a policy choice . . . not to deter harmful online speech through the . . . route of imposing tort liability on companies that serve as intermediaries for other parties’ potentially injurious messages.”[3] But Congress didn’t make the choice to immunize companies that themselves create messages that have never been expressed by third parties.[4] Section 230 thus doesn’t immunize defendants who “materially contribut[e] to [the] alleged unlawfulness” of online content.[5]

An AI company, by making and distributing an AI program that creates false and reputation-damaging accusations out of text that entirely lacks such accusations, is surely “materially contribut[ing] to [the] alleged unlawfulness” of that created material.[6] Recall that the AI programs’ output doesn’t consist merely of quotations from existing sites (as with the snippets of sites offered by search engines[7]) or from existing user queries (as with some forms of autocomplete that recommend the next word or words by essentially quoting them from user-provided content).

To be sure, LLMs appear to produce each word based on word-frequency connections drawn from sources in the training data. Their output is thus in some measure derivative of material produced by others.[8]
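For readers who want a concrete picture of that word-by-word process, here is a deliberately toy sketch in Python. It assumes a simple bigram model that picks each next word in proportion to how often that word followed the previous one in some training text; real LLMs use neural networks over tokens rather than raw word counts, and the training_text and generate names here are purely illustrative. But the sketch captures the relevant point: the output is assembled statistically, one word at a time, rather than quoted from any one source.

```python
import random
from collections import defaultdict

# Illustrative "training data"; a real system would use a huge corpus.
training_text = "the mayor said the budget passed and the council said the plan failed"

# Count how often each word follows each other word (a bigram table).
counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def generate(start, length=8):
    """Assemble a sentence one word at a time from the frequency table."""
    out = [start]
    for _ in range(length):
        followers = counts[out[-1]]
        if not followers:  # no word ever followed this one in training
            break
        # Sample the next word in proportion to its observed frequency.
        choices, weights = zip(*followers.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g., "the mayor said the plan failed"
```

Note that a run of this sketch can output a sentence (such as “the mayor said the plan failed”) that appears nowhere in the training text, even though every word and word pair was drawn from it. That is the sense in which the output is derivative of others’ material and yet newly created.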

But of course all of us rely almost exclusively on words that exist elsewhere, and then arrange them in an order that likewise stems in large part from our experience reading material produced by others. Yet that can’t justify immunity for us when we assemble others’ individual words in defamatory ways. Courts have read § 230 as protecting even individual human decisions to copy and paste into their own posts particular material that they found online: If I get some text that was intended for use on the Internet (for instance, because it’s already been posted online), I’m immune from liability if I post it to my blog.[9] But if I don’t just repost such text, and instead write a new defamatory post about you, I lack § 230 immunity even if I copied each word from a different web page and then assembled them together: I’m responsible in part (or even in whole) for the creation of the defamatory information. Likewise for AI programs.

And this makes sense. If Alan posts something defamatory about Betty on his WordPress blog, that can certainly damage her reputation, especially if the blog comes up on Google searches—but at least people will recognize it as Alan’s speech, not Google’s or WordPress’s. Section 230 immunity for Google and WordPress thus makes some sense. But something that is distributed by an AI company (via its AI program) and framed as the program’s own output will be associated in the public’s mind with the credibility of the program. That may make it considerably more damaging, and would make it fair to hold the company liable for that.

Relatedly, traditional § 230 cases at least in theory allow someone—the actual creator of the speech—to be held liable for it (even if in practice the creator may be hard to identify, or outside the jurisdiction, or lack the money to pay damages). Allowing § 230 immunity for libels output by an AI program would completely cut off any recourse for the libeled person, against anyone.

In any event, as noted above, § 230 doesn’t protect entities that “materially contribut[e] to [the] alleged unlawfulness” of online content.[10] And when AI programs output defamatory text that they have themselves assembled, word by word, they are certainly materially contributing to its defamatory nature.

[1] 47 U.S.C. §§ 230(c)(1), (f)(3).

[2] I thus agree with Matt Perault’s analysis on this score. [Cite forthcoming J. Free Speech L. article.]

[3] 129 F.3d 327, 330–31 (4th Cir. 1997).

[4] The statement in Fair Housing Council, 521 F.3d at 1175, that “If you don’t encourage illegal content, or design your website to require users to input illegal content, you will be immune,” dealt with websites that republish “user[]” “input”—it didn’t provide immunity to websites that themselves create illegal (e.g., libelous) content based on other material that they found online.

[5] Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157, 1167–68 (9th Cir. 2008) (en banc). Many other courts have endorsed this formulation. See, e.g., FTC v. LeadClick Media, LLC, 838 F.3d 158, 174 (2d Cir. 2016); Jones v. Dirty World Ent. Recordings LLC, 755 F.3d 398, 410 (6th Cir. 2014); FTC v. Accusearch Inc., 570 F.3d 1187, 1200 (10th Cir. 2009); People v. Bollaert, 248 Cal. App. 4th 699, 719 (2016); Vazquez v. Buhl, 150 Conn. App. 117, 135–36 (2014); Hill v. StubHub, Inc., 219 N.C. App. 227, 238 (2012).

[6] If the AI program merely accurately “restat[es] or summariz[es]” material in its training data, even if it doesn’t use the literal words, it may still be immune. See Derek Bambauer, Authorbots, 3 J. Free Speech L. __ (2023). But I’m speaking here of situations where the AI program has “produced . . . new semantic content” rather than “merely repackag[ed] existing content.” Id. at __.

[7] See O’Kroley v. Fastcase, Inc., 831 F.3d 352 (6th Cir. 2016) (“Under [§ 230], Google thus cannot be held liable for these claims — for merely providing access to, and reproducing, the allegedly defamatory text.”).

[8] See Bambauer, supra note 6, at __; Jess Miers, Yes, Section 230 Should Protect ChatGPT and Other Generative AI Tools, Techdirt (Mar. 17, 2023, 11:59 AM).

[9] See, e.g., Batzel v. Smith, 333 F.3d 1018, 1026 (9th Cir. 2003), superseded in part by statute on other grounds as stated in Breazeale v. Victim Servs., Inc., 878 F.3d 759, 766–67 (9th Cir. 2017); Barrett v. Rosenthal, 146 P.3d 510 (Cal. 2006); Phan v. Pham, 182 Cal. App. 4th 323, 324–28 (2010); Monge v. Univ. of Pennsylvania, No. CV 22-2942, 2023 WL 2471181, *3 (E.D. Pa. Mar. 10, 2023); Novins v. Cannon, No. CIV 09-5354, 2010 WL 1688695, *2 (D.N.J. Apr. 27, 2010).

[10] Fair Housing Council, 521 F.3d at 1167–68; see supra note 5 (collecting cases endorsing this formulation).
