ARTICLE | To what extent, if any, can fake news be regulated without violating the First Amendment?

 Introduction

In today’s digital landscape, “fake news” can go viral in minutes.[1] While fake news is not a new phenomenon, what is new is the ease and speed with which it can be disseminated to large audiences.[2] Anyone with an Internet connection can record or livestream events in real time or post “news” that has the look and feel of actual news.[3] As digital media becomes increasingly utilized, the low barriers to widespread publication can be easily exploited, turning speech into a “weaponized tool” to suppress basic dialogue and generate civil unrest.[4] Although misinformation has long been present in public discourse, the phenomenon of fake news has become exceptionally problematic in recent years.[5] Pursuant to First Amendment principles, which seek to maintain a system of free expression, the federal government has a compelling interest in addressing the concerns raised by such rapid dissemination of misinformation.[6] As researchers Alice Marwick and Rebecca Lewis describe, the spread of fake news leads to a lack of trust in media, an impact that “weakens the political knowledge of citizens, inhibits its watchdog function, and may impede the full exercise of democracy.”[7] This phenomenon is further magnified by social and political divides, which undermine the traditional ways in which truth customarily prevails.[8] Thus, when private interests impair citizens’ ability to make well-informed decisions, the government has a duty to intervene.[9]

The question thus arises: to what extent, if any, can fake news be regulated without violating the First Amendment? What changes could be made to current norms? This Article seeks to answer those questions. Part I seeks to define fake news, a term that lacks an exact definition, a gap that itself adds to the difficulty of regulation. Part II turns to an examination of the dangers of fake news and its societal impacts. Part III analyzes how fake news can be regulated without violating the First Amendment, taking a closer look at § 230 of the Communications Decency Act and internet service provider (ISP) self-regulation. Part IV then provides recommendations for rectifying the issues related to § 230, suggesting that ISPs should bear some liability.

I. What is Fake News?

The Collins English Dictionary named “fake news” its “Word of the Year for 2017” due to a 365 percent increase in the term’s usage from the year prior.[10] Fake news, in the traditional sense, is “a media product fabricated and disguised to look like credible news that is posted online and circulated via social media.”[11] The New York Times defined fake news as “a made-up story with an intention to deceive, often geared toward getting clicks.”[12] Former President Donald Trump has often used the term to refer to the media and to news stories that reflect poorly on his administration and himself.[13] According to Danielle Kurtzleben, an NPR political reporter, although contemporary political figures have chosen the term to devalue unflattering news, fake news has traditionally referred to “lies posing as news.”[14] However, as these differing definitions show, there is no exact definition of “fake news,” a gap that further magnifies the difficulty of its regulation.[15] Nonetheless, as journalist Larry Atkins explains, extreme criticism or even bias does not make a news story fake if the article does not falsify or misrepresent the facts.[16] For the purposes of this Article, fake news is defined as “an article that is intentionally and verifiably false and distributed via social media with the purpose of:

(1) Swaying opinion, sparking emotion, or even causing outrage among individuals who, believing the information to be true, click, comment, and/or spread the information and/or take some form of action that supports a particular cause or point of view

(2) Getting the reader to click through the content, driving ‘click revenue,’ and view and even click on web ads, driving more revenue and, potentially, purchases.”[17]


II. Societal Dangers of Fake News

The manufacturing of fake news causes a host of problems that stem from both financial and ideological motivations.[18] Such misinformation is oftentimes “masterfully manipulated” to look like reliable news reports and serves to “inflame tensions and deepen partisan divisions,” causing people to “double down on opinions they already have.”[19] According to one journalistic study, fake news stories spread faster because they tend to have outrageous, sensationalist headlines that draw in viewer “clicks,” which are then converted into revenue dollars.[20] It could be argued that fake news and misinformation have always been around and are an age-old problem. That argument fails, however, because the same characteristics that incentivize the creation of fake news now make it easier than ever to spread. In turn, fake news poses a serious threat to society as a whole by eroding the public’s trust in established, reputable sources of reliable information.[21] Indeed, a recent poll reported that trust in the mainstream media dropped so sharply that only thirty-two percent of respondents claimed to have “a great deal” or “a fair amount” of trust[22] in established news outlets, the lowest figure in the report’s history.[23] Such growing distrust is not restricted to the media alone, as the credibility of intelligence agencies and scientists is increasingly being called into question.[24] Pew Research Center found in a recent survey that sixty-two percent of U.S. adults get at least some of their news from social media platforms.[25] Of this sixty-two percent, eighteen percent get their news from social media “often,” twenty-six percent “sometimes,” and eighteen percent “hardly ever.”[26] On the one hand, established news organizations have reputational concerns that deter the reporting of false or unverified information.[27] On the other hand, fake news publishers do not share these concerns, yet their stories are often more widely shared on social media than the top stories from actual news outlets.[28] As a result, fake news creates confusion and fools people into believing deceptive information.[29] Thus, the erosion of public trust in traditional news sources creates a vicious cycle with regard to which sources users can trust, further creating a vacuum that fake news is quick to fill.[30]

III. Can Fake News Be Regulated Without Violating the First Amendment?

Most fake news is shared on social media platforms, most prominently Facebook,[31] which have failed to contain the spread of such misinformation, further exacerbating the problems discussed above. Given the sheer volume of users on these platforms and of fake news traffic, the platforms have been the target of much concern in the fight against fake news.[32] Proponents of fake news regulation argue that Facebook’s importance as a vehicle is clear given its large user base and the potential for misinformation to reach large numbers of people.[33] Worldwide, Facebook has 1.79 billion users.[34][35] In the U.S., Facebook is “used by more than 200 million people … each month, out of a total population of 320 million,” and it “reaches approximately 67% of U.S. adults.”[36] On average, U.S. adult Facebook users spend over fifty minutes per day on the platform, and about forty-four percent report that they get news from it.[37] Additionally, of U.S. adults surveyed, about forty-eight percent reported getting news from news sites or applications and forty-four percent from social networking sites.[38] While other countries have taken aggressive action to stop the spread of fake news, the U.S. has been slow to embrace proactive regulatory measures because such regulation presents challenges.[39] In addition to public outcry over restrictions on self-expression, lawmakers in the United States face a unique obstacle compared to countries that have been able to pass aggressive laws: the First Amendment to the United States Constitution.[40]

A. First Amendment Limitations of Fake News

Does the fact that fake news is often disseminated online affect its protection under the First Amendment? The answer is no: speech on the Internet enjoys “the same level of constitutional protection as traditional forms of speech.”[41] The plain language of the First Amendment states that “Congress shall make no law … abridging the freedom of speech, or of the press,” essentially limiting the government’s power to restrict the speech of its citizens.[42] When the founding fathers drafted the First Amendment, they “did not trust any government to separate the truth from the false for us.”[43] In turn, in “the free marketplace of ideas, true ideas are supposed to compete with false ones until the truth wins.”[44] With respect to this notion, the classic “marketplace of ideas”[45] model posits that truth can be discovered through “robust, uninhibited debate.”[46]

Content-based laws are “those that target speech based on its communicative content” or that “apply to particular speech because of the topic discussed or message expressed.”[47] Although there are limits on what constitutes protected speech, restrictions on speech that are based on content are subject to strict scrutiny by courts.[48] To survive strict scrutiny, the government must show that a content-based law “is necessary to serve a compelling state interest and is narrowly drawn to achieve that end,” a showing that rarely succeeds.[49] Although content-based laws “may be justified only if the government proves that they are narrowly tailored to serve compelling state interests,” the Court has recognized a compelling interest in maintaining the integrity of the electoral process.[50] Commercial speech, on the other hand, receives a lesser degree of protection under the First Amendment than other forms of speech. Commercial speech is “speech that proposes a commercial transaction.”[51] When the government seeks to impose a restriction on commercial speech, it must show that the speech in question is commercial within the parameters set by the Constitution.[52] Thus, even content considered commercial speech retains the protections of the First Amendment where the speech concerns lawful activity.[53] Congress recognized the threat tort-based lawsuits pose to freedom of speech on the Internet through its enactment of § 230 of the Communications Decency Act (“CDA”).

B. Section 230 of the Communications Decency Act

Congress enacted § 230 of the CDA as part of the Telecommunications Act of 1996.[54] In short, it is perhaps the most influential law protecting the kind of innovation that has allowed the Internet to thrive since 1996.[55] Section 230, codified at 47 U.S.C. § 230, states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[56] Section 230 was enacted partly to maintain the robust nature of Internet communication and to keep government interference in the interactive medium to a minimum. Congress recognized the Internet and interactive computer services as offering “a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.”[57] It also found that the Internet and interactive computer services “have flourished, to the benefit of all Americans, with a minimum of government regulation.”[58] Moreover, Congress enacted the legislation as a policy choice not to deter harmful online speech through the separate route of imposing tort liability on companies that serve as intermediaries for third parties’ potentially harmful messages.[59] While Congress acted to keep government regulation of the Internet to a minimum, it also declared it the policy of the United States to “ensure vigorous enforcement of Federal criminal laws to deter and punish trafficking in obscenity, stalking, and harassment by means of computer.”[60] Congress further stated that it is “the policy of the United States … to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.”[61] The legislation does not, however, mean that the original party at fault who posted the defamatory message escapes liability.

The CDA grants immunity to a defendant if: (1) the defendant is a provider or user of an interactive computer service;[62] (2) the information for which the plaintiff seeks to hold the defendant liable is information provided by another information content provider;[63] and (3) the plaintiff’s claim seeks to hold the defendant liable as the publisher or speaker of that information.[64][65] In other words, the current legislative language creates broad federal immunity[66] from “any cause of action that would make service providers liable for information originating with a third-party user of the service.”[67] Courts have also interpreted § 230 to give broad immunity to website administrators irrespective of whether they exercise editorial control over defamatory content posted on their platforms, a factor that conventionally would have been significant under common law defamation.[68] Thus, the immunity not only shields Facebook and other social media websites from liability for defamatory fake news content posted on their sites by others, but it also safeguards users of the sites who share that content.[69] In sum, the immunity attaches even if ISPs or users know that the stories they are republishing are false or defamatory, and presumably even if such knowledge satisfies the actual malice standard. The immunity does not, however, shield the actual authors of the defamatory content; they may still be held liable.[70]

As algorithms and artificial intelligence become more advanced and social media platforms exert more influence on our daily lives, not holding service providers liable has significant negative societal impacts. The immunity was provided because interactive computer services have millions of users, and the volume of information communicated through these providers would make it nearly impossible for them to screen every post for potential issues. Faced with potential liability for each message republished by their services, interactive computer service providers might choose to severely restrict the number and type of messages posted. Congress weighed the speech interests implicated and chose to immunize service providers to avoid any such restrictive effect. Moreover, Congress wanted to encourage service providers to self-regulate the dissemination of harmful content on their platforms and to remove the disincentives to self-regulation.

C. ISP Self-Regulation of Fake News

Under the current framework, the U.S. government calls on ISPs to remove content or delete user accounts that are deemed problematic, but such requests present challenges.[71] First, many technology companies, such as Facebook, are financially incentivized to promote and enable social engagement.[72] When companies profit from maximizing user engagement and retention, their users can be, and have been, exposed to harmful content.[73] Companies are “trusted” to make their own rules, oftentimes changing them in response to, for example, business decisions or public pressure.[74] In other words, when social media companies create and enforce their own rules, they must traverse a landscape of demonstrable harms that may result from both their action and their inaction.[75] Second, in the process of self-regulation, companies selectively choose which accounts to suspend, block, or permit, thus deciding which content to regulate and how, through processes such as posting warning notices, fact-checking, or making content less visible by means of algorithms.[76] Company staff have even admitted to wrongdoing, whether in leaving up or taking down certain content.[77]

In the last few years, under pressure to regulate more effectively, social media platforms have hired thousands more moderators, developed software to detect misleading content, and repeatedly modified their rules or made exceptions to them.[78] Facebook,[79] arguably the primary vehicle for fake news, is striving to make it easier for users to report fake news stories appearing on its platform.[80] Facebook is also partnering “with outside fact-checking organizations to help it indicate when articles are false,” organizations that will have the ability to “label stories in the News Feed as fake.”[81] Facebook will additionally be “changing some advertising practices to stop purveyors of fake news from profiting from it.”[82] Adhering to a single body of standards, as opposed to self-regulation, would give companies a source of “forceful normative responses against undue State restrictions.”[83] Legislators have suggested that Internet media companies are abusing their immunity under the CDA and that removal of that immunity is not out of the question.[84] If the immunity were removed, the self-regulation framework in the U.S. would end, moving regulation more in line with the E.U. model.[85]

IV. Reforming § 230 of the CDA to Combat Fake News

The implications of the CDA’s immunity provisions are clear given the continuous rise of fake news; legal solutions are essential to address and prevent its spread.[86] Although the U.S. Supreme Court has declined to address the scope of § 230, lower courts have interpreted its immunity provision broadly, stating that “there has been near-universal agreement that § 230 should not be construed grudgingly.”[87] Because social media companies such as Facebook and Twitter heavily influence public discourse, reliance on ISP self-regulation is idealistic, as these providers have self-interested financial motives.[88] Moreover, concerns have been raised that the content moderation strategies adopted by social media platforms may not comport with customary First Amendment norms and doctrine.[89] Congress[90] should rein in the comprehensive problems of fake news by clarifying the “intended implications of § 230 of the CDA on defamation liability for internet distributors.”[91] Amending the CDA to enact a modified version of “common law distributor liability” for ISPs and websites could neutralize the threat of fake news, or at a minimum substantially decrease it, because it would hold social media companies liable for fake news they know is posted on their platforms.[92] “Applying a modified standard of common law distributor liability specifically targeted to address fake news to ISPs and websites would hold social media websites like Facebook responsible for fake news that site administrators have been informed is defamatory.”[93]

Opponents may argue that subjecting ISPs to common law defamation distributor liability years after the CDA’s initial enactment would lead to ruinous liability for some of the most profitable companies in our economy, but these fears are overwrought.[94] Indeed, as discussed above, websites such as Facebook are already making efforts to combat the plague of fake news.[95] These voluntary efforts, though encouraging, are not enough: the threat that fake news poses to our society is substantial enough that we should not rely exclusively on these companies’ self-monitoring and self-regulation, especially given their revenue models.[96] Social media sites would also contend that CDA § 230 and the First Amendment protect them from any attempt to regulate content.[97] These platforms would likely point back to the classic “marketplace of ideas” model, in which the belief that “the more information in the marketplace, the better” governs the system of free speech and free expression.[98] Nevertheless, it is hard to deny that today’s social media sites operate like information content providers, and they should therefore be treated as such under the law.[99] Until the regulatory environment catches up with technology (if it ever does), leaders of all companies are on the hook for making ethical decisions about their use of AI applications and products. Therefore, as a matter of public policy, Congress should establish a bright-line rule holding platforms accountable for harmful content, or at a minimum requiring them to remove false or misleading information that negatively impacts the lives of their users.

V. Conclusion

Worldwide, fake news presents an ever-growing concern. Fake news sites masquerade as online news outlets, leading readers to believe that what they are reading comes from a reputable source and fueling the sites’ success.[100] Whatever the reasons for the success of fake news, what is clear is that the average person, regardless of his or her critical thinking capabilities, now has a difficult time discerning what is accurate news.[101] Furthermore, “individuals and entities are able to profit quickly off this misunderstanding in larger quantities than their predecessors experienced.”[102] Creating liability for ISPs and social media websites would incentivize websites to take down fake news once they are made aware of it.[103] Modifying the CDA also provides a solution that would be not only feasible but effective.[104] Although fake news is not a new concept, its reach and influence are unlike those of past iterations of misinformation due to the technological advances and online communication platforms we have developed as a society.[105] “Understanding the motives behind fake news and the effects it can have is crucial to developing an effective solution to combat the issue of rapidly spreading misinformation without unduly treading on rights of free expression.”[106]


[1] Essay, Separating Fact From Fiction: The First Amendment Case for Addressing “Fake News” on Social Media, 46 Hastings Const. L.Q. 1, 2.

[2] Article, Fighting Falsity: Fake News, Facebook, and the First Amendment, 35 Cardozo Arts & Ent. L.J. 669, 672.

[3] 46 Hastings Const. L.Q. 1, 3.

[4] Id.

[5] Note, From Diet Pills to Truth Serum: How the FTC Could Be a Real Solution to Fake News, 71 Fed. Comm. L.J. 105, 107-108.

[6] 46 Hastings Const. L.Q. 1, 2.

[7] Article, Combating Fake News with “Reasonable Standards,” 43 Hastings Comm. & Ent. L.J. 81, 85.

[8] Id.

[9] 46 Hastings Const. L.Q. 1, 5.

[10] 46 Hastings Const. L.Q. 1, 3.

[11] 43 Hastings Comm. & Ent. L.J. 81, 84-85.

[12] Comment, Fake News: No One Is Liable, and That Is a Problem, 65 Buffalo L. Rev. 1101, 1102-1103.

[13] 71 Fed. Comm. L.J. 105, 107-108.

[14] 43 Hastings Comm. & Ent. L.J. 81, 82.

[15] 71 Fed. Comm. L.J. 105, 107-108.

[16] Id.

[17] Id.

[18] 46 Hastings Const. L.Q. 1, 3-4.

[19] 35 Cardozo Arts & Ent. L.J. 669, 670.

[20] 46 Hastings Const. L.Q. 1, 3-4.

[21] 71 Fed. Comm. L.J. 105, 107-108.

[22] The decline of trust in the mainstream media was reportedly more pronounced among Republicans than Democrats, plummeting to less than twenty percent in 2016. Id.

[23] 71 Fed. Comm. L.J. 105, 107-108.

[24] Id.

[25] Id.

[26] Id. at 106.

[27] Id.

[28] Id. at 107.

[29] Id. at 107.

[30] Id. at 108.

[31] Although this paper frequently references Facebook, it should be noted that fake news can be on any ISP and/or social media website and is not exclusive only to Facebook.

[32] Note, Protecting the Democratic Role of the Press: A Legal Solution to Fake News, 96 Wash. U. L. Rev. 419, 436-437.

[33] 35 Cardozo Arts & Ent. L.J. 669, 672-673.

[34] Worldwide, Facebook has 1.79 billion users. In the U.S. Facebook is “used by more than 200 million people … each month, out of a total population of 320 million,” and it “reaches approximately 67% of U.S. adults.” Facebook users spend an average of over fifty minutes a day on the site. About 44% of U.S. adults say they get news from Facebook. 35 Cardozo Arts & Ent. L.J. 669, 672-673.

[35] Id.

[36] Id. at 672.

[37] Id.  at 673.

[38] 35 Cardozo Arts & Ent. L.J. 669, 672-673.

[39] Id. at 675.

[40] 71 Fed. Comm. L.J. 105, 113-114.

[41] 35 Cardozo Arts & Ent. L.J. 669, 686.

[42] 71 Fed. Comm. L.J. 105, 113-114.

[43] 35 Cardozo Arts & Ent. L.J. 669, 677.

[44] Id. at 677.

[45] The value of free speech under this model is derived from unimpeded discussion where “any loss from allowing speech is so small, that society should tolerate no restraint on the verbal search for truth.” While several free speech scholars like Baker defer to this model and system of free expression, the objectives of the Constitution require more than a limited application of the clause as a mere prohibition on government interference. Id.

[46] 46 Hastings Const. L.Q. 1, 7-8.

[47] 71 Fed. Comm. L.J. 105, 113-114.

[48] Id.  at 113.

[49] 71 Fed. Comm. L.J. 105, 114.

[50] 43 Hastings Comm. & Ent. L.J. 81, 87.

[51] 71 Fed. Comm. L.J. 105, 115.

[52] Id.  at 114.

[53] 71 Fed. Comm. L.J. 105, 113-114.

[54] 65 Buffalo L. Rev. 1101, 1138.

[55] Id.

[56] 47 U.S.C. § 230(c)(1).

[57] § 230(a)(3).

[58] § 230(a)(4).

[59] 71 Fed. Comm. L.J. 105, 115.

[60] § 230(b)(5).

[61] § 230(b)(2).

[62] To satisfy the first prong of the CDA’s immunity test, the defendant must be an “interactive computer service.” An “interactive computer service” is defined as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet . . .” 47 U.S.C. § 230(f)(2). Courts have held that Facebook is an “interactive computer service” because Facebook “provides or enables computer access by multiple users to a computer service.” Sikhs for Justice I, 144 F. Supp. 3d at 1093. Additionally, Facebook is considered an interactive computer service because “it is a service that provides information to multiple users by giving them computer access . . . to a computer server, namely the servers that host its social networking website.” Klayman v. Zuckerberg, 753 F.3d 1354, 1357, 410 U.S. App. D.C. 187 (D.C. Cir. 2014).

[63] To satisfy the second prong of the CDA’s immunity test, the information for which Plaintiffs seeks to hold Facebook liable must be information provided by an “information content provider” that is not Facebook. An “information content provider” is defined as “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.” 47 U.S.C. § 230(f)(3).

[64] The third and final prong of the CDA’s immunity test requires that Plaintiffs seek to hold Facebook liable as a publisher or speaker of Plaintiffs’ content. “Publication involves reviewing, editing, and deciding whether to publish or to withdraw from publication third-party content.” Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1102 (9th Cir. 2009).

[65] Sikhs for Justice, Inc. v. Facebook, Inc., 144 F. Supp. 3d 1088, 1092 (N.D. Cal. 2015).

[66] In Zeran v. America Online, one of the first cases interpreting § 230, the Fourth Circuit stated that, “by its plain language, § 230 creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.” The court reasoned that Congress intended § 230 to immunize both publishers and distributors because, while a distributor is indeed distinct from a publisher in determining the standard of liability, both can be considered a subset within the broader definition of publisher for defamation purposes. Few courts have challenged this interpretation of the CDA, though in Doe v. GTE Corp. the Seventh Circuit questioned whether disclaiming all liability for ISPs achieves the goals of § 230, the title of which promises protection for “Good Samaritan” screening of offensive materials. The court correctly pointed out that because both websites and ISPs that screen for offensive material and those that refrain from screening are granted immunity, websites and ISPs can be expected to take the less expensive, non-screening route. Thus, an interpretation of § 230 that treats websites and ISPs exercising editorial control the same as those that do not defeats the original policy goals of the “Good Samaritan” provision and likely serves few of the purposes Congress intended.

[67] 35 Cardozo Arts & Ent. L.J. 669, 687-689.

[68] 96 Wash. U. L. Rev. 419, 433.

[69] 35 Cardozo Arts & Ent. L.J. 669, 690.

[70] Id. at 687.

[71] 43 Hastings Comm. & Ent. L.J. 81, 100-101.

[72] 46 Hastings Const. L.Q. 1, 7.

[73] Article, But Facebook’s Not a Country: How to Interpret Human Rights Law for Social Media Companies, 38 Yale J. on Reg. Bulletin 86, 87-88.

[74] Id.

[75] Id. at 97.

[76] Id. at 87.

[77] Id. at 88.

[78] 38 Yale J. on Reg. Bulletin 86, 88.

[79] On Facebook, fake news articles look almost identical to those from reputable news organizations. Each article displays a headline, a picture, the originating website, the person or company who posted it, and the number of likes, shares, and comments. 65 Buffalo L. Rev. 1101, 1115.

[80] 65 Buffalo L. Rev. 1101, 1117.

[81] 35 Cardozo Arts & Ent. L.J. 669, 699.

[82] Id.

[83] 38 Yale J. on Reg. Bulletin 86, 89.

[84] 43 Hastings Comm. & Ent. L.J. 81, 94.

[85] Id. at 94-95.

[86] 96 Wash. U. L. Rev. 419, 437-438.

[87] 46 Hastings Const. L.Q. 1, 10-11.

[88] 43 Hastings Comm. & Ent. L.J. 81, 90-91.

[89] 43 Hastings Comm. & Ent. L.J. 81, 100-101.

[90] Congress has already recognized the importance of holding these platforms liable. The Platform Accountability and Consumer Technology Act (PACT) was introduced this past June. If passed, the new legislation would require interactive computer services that provide internet platforms to issue public statements about their content policies.

[91] 96 Wash. U. L. Rev. 419, 420.

[92] Id. at 435.

[93] 96 Wash. U. L. Rev. 419, 420.

[94] Id. at 439.

[95] Id.

[96] 96 Wash. U. L. Rev. 419, 439.

[97] 46 Hastings Const. L.Q. 1, 10-11.

[98] Id. at 11.

[99] 46 Hastings Const. L.Q. 1, 10-11.

[100] 65 Buffalo L. Rev. 1101, 1112.

[101] Id. at 1122.

[102] Id.

[103] 96 Wash. U. L. Rev. 419, 440.

[104] Id.

[105] 71 Fed. Comm. L.J. 105, 107-108.

[106] Id.
