Can Fake News Be Regulated Without Violating the First Amendment?
Most fake news is shared on social media platforms, most prominently Facebook,[1] which have failed to contain the spread of such misinformation, exacerbating the problems discussed above. Given the sheer volume of users on these platforms and of fake news traffic, they have been the target of much concern in the fight against fake news.[2] Proponents of fake news regulation argue that Facebook’s importance as a vehicle is clear given its large user base and the potential for misinformation to reach large numbers of people.[3] Worldwide, Facebook has 1.79 billion users.[4] In the U.S., Facebook is “used by more than two hundred million people … each month, out of a total population of three hundred twenty million,” and it “reaches approximately sixty-seven percent of U.S. adults.”[6] On average, U.S. adult Facebook users spend over fifty minutes per day on the platform, and about forty-four percent report that they get news there.[7] Additionally, of U.S. adults surveyed, about forty-eight percent reported getting news from news sites or applications and forty-four percent from social networking sites.[8] While other countries have taken aggressive action to stop the spread of fake news, the U.S. has been slow to embrace proactive regulatory measures because such regulation presents distinct challenges.[9] In addition to public resistance to restrictions on self-expression, lawmakers in the United States face an obstacle that countries with aggressive fake news laws do not: the First Amendment to the United States Constitution.[10]
A. First Amendment Limitations on Regulating Fake News
Does the fact that fake news is often disseminated online affect its protection under the First Amendment? The answer is no: speech on the Internet enjoys “the same level of constitutional protection as traditional forms of speech.”[11] The plain language of the First Amendment states that “Congress shall make no law … abridging the freedom of speech, or of the press,” limiting the government’s power to restrict the speech of its citizens.[12] When the founding fathers drafted the First Amendment, they “did not trust any government to separate the truth from the false for us.”[13] In turn, in “the free marketplace of ideas, true ideas are supposed to compete with false ones until the truth wins.”[14] Consistent with this notion, the classic “marketplace of ideas”[15] model posits that truth can be discovered through “robust, uninhibited debate.”[16]
Content-based laws are defined as “those that target speech based on its communicative content” or those that “apply to particular speech because of the topic discussed or message expressed.”[17] Although there are limits on what constitutes protected speech, restrictions on speech that are based on its content are subject to strict scrutiny by courts.[18] To survive strict scrutiny, the government must show that a content-based law “is necessary to serve a compelling state interest and is narrowly drawn to achieve that end,” a showing the government rarely makes.[19] Although content-based laws “may be justified only if the government proves that they are narrowly tailored to serve compelling state interests,” the Court has recognized a compelling interest in maintaining the integrity of the electoral process.[20] Commercial speech, on the other hand, receives a lesser degree of protection under the First Amendment than other forms of speech. Commercial speech is “speech that proposes a commercial transaction.”[21] When the government seeks to impose a restriction on commercial speech, it must show that the speech in question is commercial within the parameters set by the Constitution.[22] Thus, even content that is considered commercial speech retains the protections of the First Amendment where the speech is associated with lawful activities.[23] Congress recognized the threat that tort-based lawsuits pose to freedom of speech on the Internet through its enactment of § 230 of the Communications Decency Act (“CDA”).
B. Section 230 of the Communications Decency Act
Congress enacted § 230 of the CDA as part of the Telecommunications Act of 1996.[24] In short, it is perhaps the most influential law protecting the kind of innovation that has allowed the Internet to thrive since 1996.[25] Section 230 states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[26] Section 230 was enacted in part to maintain the robust nature of Internet communication and to keep government interference in the interactive medium to a minimum. Congress recognized the Internet and interactive computer services as offering “a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.”[27] It also found that the Internet and interactive computer services “have flourished, to the benefit of all Americans, with a minimum of government regulation.”[28] Moreover, in enacting the legislation, Congress made a policy choice not to deter harmful online speech through the separate route of imposing tort liability on companies that serve as intermediaries for third parties’ potentially harmful messages.[29] While Congress acted to keep government regulation of the Internet to a minimum, it also declared it the policy of the United States to “ensure vigorous enforcement of Federal criminal laws to deter and punish trafficking in obscenity, stalking, and harassment by means of computer.”[30] Congress further stated that it is “the policy of the United States … to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.”[31] The legislation does not mean, however, that the original party at fault who posted the defamatory messages escapes liability.
The CDA grants immunity to a defendant if: (1) the defendant is a provider or user of an interactive computer service;[32] (2) the information for which the plaintiff seeks to hold the defendant liable is information provided by another information content provider;[33] and (3) the plaintiff’s claim seeks to hold the defendant liable as the publisher or speaker of that information.[34][35] In other words, the statutory language creates a broad federal immunity[36] to “any cause of action that would make service providers liable for information originating with a third-party user of the service.”[37] Courts have also interpreted § 230 to give broad immunity to website administrators irrespective of whether they exercise editorial control over defamatory content posted on their platforms, a factor that conventionally would have been significant under common law defamation.[38] Thus, the immunity not only shields Facebook and other social media websites from liability for defamatory fake news content posted on their sites by others, but it also safeguards users of the sites who share that content.[39] In sum, the immunity attaches even if ISPs or users know that the stories they are republishing are false or defamatory, and presumably even if such knowledge satisfies the actual malice standard. The immunity does not, however, shield the actual authors of the defamatory content; they may still be held liable.[40]
As algorithms and artificial intelligence become more advanced and social media platforms exert a greater impact on our daily lives, declining to hold service providers liable has significant negative societal consequences. The immunity was provided because interactive computer services have millions of users, and the volume of information communicated through these providers would make it all but impossible for service providers to screen each and every post for potential issues. Faced with potential liability for each message republished by their services, interactive computer service providers might choose to severely restrict the number and type of messages posted. Congress considered the weight of the speech interests implicated and chose to immunize service providers to avoid any such restrictive effect. Moreover, Congress wanted to encourage service providers to self-regulate the dissemination of harmful content on their platforms and to remove the disincentives to self-regulation.
C. ISP Self-Regulation of Fake News
Under the current framework, the U.S. government calls on ISPs to remove content or delete user accounts deemed problematic, but such requests present challenges.[41] First, many technology companies, such as Facebook, are financially incentivized to promote and enable social engagement.[42] When companies’ financial incentives turn on maximizing user engagement and retention, their users can be, and have been, exposed to harmful content.[43] Companies are “trusted” to make their own rules, often changing them in response to business decisions or public pressure.[44] In other words, when social media companies create and enforce their own rules, they must traverse a landscape of demonstrable harms that may result from both their action and their inaction.[45] Second, in the process of self-regulation, companies selectively choose which accounts to suspend, block, or permit, thereby deciding which content to regulate and how, through processes such as posting warning notices, fact-checking, or using algorithms to make content less visible.[46] Company staff have even admitted to getting such decisions wrong, whether by leaving certain content up or by taking it down.[47]
In the last few years, under pressure to regulate more effectively, social media platforms have hired thousands more moderators, developed software to detect misleading content, and repeatedly modified their rules or made exceptions to them.[48] Facebook,[49] arguably the primary vehicle for fake news, is striving to make it easier for users to report fake news stories appearing on its platform.[50] Facebook is also partnering “with outside fact-checking organizations to help it indicate when articles are false”; those organizations will have the ability to “label stories in the News Feed as fake.”[51] Facebook will also be “changing some advertising practices to stop purveyors of fake news from profiting from it.”[52] Adhering to a single body of standards, as opposed to self-regulation, would give companies a source of “forceful normative responses against undue State restrictions.”[53] Legislators have suggested that Internet media companies are abusing their immunity under the CDA and that removal of such immunity is not out of the question.[54] If this immunity were removed, the self-regulation framework in the U.S. would end, moving U.S. regulation more in line with the E.U. model.[55]
[1] Although this paper frequently references Facebook, fake news can appear on any ISP and/or social media website and is not exclusive to Facebook.
[2] Note, Protecting the Democratic Role of the Press: A Legal Solution to Fake News, 96 Wash. U. L. Rev. 419, 436-437.
[3] 35 Cardozo Arts & Ent LJ 669, 672-673.
[4] 35 Cardozo Arts & Ent LJ 669, 672-673.
[6] Id. at 672.
[7] Id. at 673.
[8] 35 Cardozo Arts & Ent LJ 669, 672-673.
[9] Id. at 675.
[10] 71 Fed. Comm. L.J. 105, 113-114.
[11] 35 Cardozo Arts & Ent LJ 669, 686.
[12] 71 Fed. Comm. L.J. 105, 113-114.
[13] 35 Cardozo Arts & Ent LJ 669, 677.
[14] Id. at 677.
[15] The value of free speech under this model is derived from unimpeded discussion where “any loss from allowing speech is so small, that society should tolerate no restraint on the verbal search for truth.” While several free speech scholars, like Baker, defer to this model and system of free expression, the objectives of the Constitution require more than a limited application of the clause as a mere prohibition on government interference. Id.
[16] 46 Hastings Const. L.Q. 1, 7-8.
[17] 71 Fed. Comm. L.J. 105, 113-114.
[18] Id. at 113.
[19] 71 Fed. Comm. L.J. 105, 114.
[20] 43 Hastings Comm. & Ent. L.J. 81, 87.
[21] 71 Fed. Comm. L.J. 105, 115.
[22] Id. at 114.
[23] 71 Fed. Comm. L.J. 105, 113-114.
[24] 65 Buffalo L. Rev. 1101, 1138.
[25] Id.
[26] 47 U.S.C. § 230(c)(1).
[27] § 230(a)(3).
[28] § 230(a)(4).
[29] 71 Fed. Comm. L.J. 105, 115.
[30] § 230(b)(5).
[31] § 230(b)(2).
[32] To satisfy the first prong of the CDA’s immunity test, the defendant must be an “interactive computer service.” An “interactive computer service” is defined as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet . . . .” 47 U.S.C. § 230(f)(2). Courts have held that Facebook is an “interactive computer service” because Facebook “provides or enables computer access by multiple users to a computer server.” Sikhs for Justice I, 144 F. Supp. 3d at 1093. Facebook is also considered an interactive computer service because “it is a service that provides information to multiple users by giving them computer access . . . to a computer server, namely the servers that host its social networking website.” Klayman v. Zuckerberg, 753 F.3d 1354, 1357 (D.C. Cir. 2014).
[33] To satisfy the second prong of the CDA’s immunity test, the information for which the plaintiff seeks to hold Facebook liable must be information provided by an “information content provider” that is not Facebook. An “information content provider” is defined as “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.” 47 U.S.C. § 230(f)(3).
[34] The third and final prong of the CDA’s immunity test requires that the plaintiff seek to hold Facebook liable as a publisher or speaker of that content. “[P]ublication involves reviewing, editing, and deciding whether to publish or to withdraw from publication third-party content.” Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1102 (9th Cir. 2009).
[35] Sikhs for Justice, Inc. v. Facebook, Inc., 144 F. Supp. 3d 1088, 1092 (N.D. Cal. 2015).
[36] In Zeran v. America Online, one of the first cases interpreting § 230, the Fourth Circuit held that, “by its plain language, § 230 creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.” The court reasoned that Congress intended § 230 to immunize both publishers and distributors because, while a distributor is indeed distinct from a publisher in determining the standard of liability, both can be considered a subset within the broader definition of publisher for defamation purposes. Few courts have challenged this interpretation of the CDA, though in Doe v. GTE Corp. the Seventh Circuit questioned whether disclaiming all liability for ISPs achieves the goals of § 230, the title of which promises protection for “Good Samaritan” screening of offensive materials. The court correctly pointed out that because websites and ISPs that screen for offensive material and those that refrain from screening are both granted immunity, websites and ISPs can be expected to take the less expensive, non-screening route. Thus, an interpretation of § 230 that treats websites and ISPs exercising editorial control the same as those that do not defeats the original policy goals of the “Good Samaritan” provision and likely serves few of the purposes Congress intended.
[37] 35 Cardozo Arts & Ent LJ 669, 687-689.
[38] 96 Wash. U. L. Rev. 419, 433.
[39] 35 Cardozo Arts & Ent LJ 669, 690.
[40] Id. at 687.
[41] 43 Hastings Comm. & Ent. L.J. 81, 100-101.
[42] 46 Hastings Const. L.Q. 1, 7.
[43] Article, But Facebook’s Not a Country: How to Interpret Human Rights Law for Social Media Companies, 38 Yale J. on Reg. Bulletin 86, 87-88.
[44] Id.
[45] Id. at 97.
[46] Id. at 87.
[47] Id. at 88.
[48] 38 Yale J. on Reg. Bulletin 86, 88.
[49] On Facebook, fake news articles look almost identical to those from reputable news organizations. Each article displays a headline, a picture, the originating website, the person or company who posted it, and the number of likes, shares, and comments. 65 Buffalo L. Rev. 1101, 1115.
[50] 65 Buffalo L. Rev. 1101, 1117.
[51] 35 Cardozo Arts & Ent LJ 669, 699.
[52] Id.
[53] 38 Yale J. on Reg. Bulletin 86, 89.
[54] 43 Hastings Comm. & Ent. L.J. 81, 94.
[55] Id. at 94-95.