Fake News, ISPs and Section 230 of the CDA

I. Reforming Sec. 230 of the CDA to Combat Fake News

The implications of the Communications Decency Act’s (“CDA”) immunity provisions are clear given the continuous rise of fake news; legal solutions are essential to address and prevent its further spread.[1] Although the U.S. Supreme Court has declined to address the scope of § 230, lower courts have interpreted its immunity provision broadly, observing that “there has been near-universal agreement that § 230 should not be construed grudgingly.”[2] While social media companies such as Facebook and Twitter heavily influence public discourse, reliance on ISP self-regulation is idealistic because these providers have self-interested financial motives.[3] Moreover, commentators have raised concerns that the content moderation strategies adopted by social media platforms may not comport with customary First Amendment norms and doctrine.[4] Congress[5] should rein in the comprehensive problems of fake news by clarifying the “intended implications of § 230 of the CDA on defamation liability for internet distributors.”[6] Amending the CDA to impose a modified version of “common law distributor liability” on ISPs and websites could eliminate the threat of fake news, or at a minimum substantially reduce it, because it would hold social media companies liable for fake news they know has been posted on their platforms.[7] As one commentator explains, “[a]pplying a modified standard of common law distributor liability specifically targeted to address fake news to ISPs and websites would hold social media websites like Facebook responsible for fake news that site administrators have been informed is defamatory.”[8]

Opponents may argue that subjecting ISPs to common law distributor liability for defamation years after the CDA’s initial enactment would lead to ruinous liability for some of the most profitable companies in our economy, but these fears are overwrought.[9] As discussed above, websites such as Facebook are already making efforts to combat the plague of fake news.[10] Although these voluntary efforts are encouraging, the threat that fake news poses to our society is substantial enough that we should not rely exclusively on these companies to monitor and regulate themselves, especially given their revenue models.[11] Social media sites would also contend that CDA § 230 and the First Amendment protect them from any attempt to regulate content.[12] These platforms would likely fall back on the classic “marketplace of ideas” model, in which the belief that “the more information in the marketplace, the better” governs the system of free speech and free expression.[13] Nevertheless, it is difficult to argue that today’s social media sites do not operate like information content providers, and they should therefore be treated as such under the law.[14] Until the regulatory environment catches up with technology (if it ever does), company leaders are on the hook for making ethical decisions about their use of AI applications and products. Therefore, as a matter of public policy, Congress should establish a bright-line rule holding platforms accountable for harmful content or, at a minimum, requiring them to remove false or misleading information that negatively impacts the lives of their users.

II. Moving Forward

Worldwide, fake news presents an ever-growing concern. Fake news sites masquerade as online news outlets, leading readers to believe that what they are reading comes from a reputable source and fueling those sites’ success.[15] Whatever the reasons for the success of fake news, what is clear is that the average person, regardless of his or her critical thinking capabilities, now has a difficult time discerning what is accurate news.[16] Furthermore, “individuals and entities are able to profit quickly off this misunderstanding in larger quantities than their predecessors experienced.”[17] Creating liability for ISPs and social media websites would incentivize those websites to take down fake news once they are made aware of it.[18] Modifying the CDA thus offers a solution that would be not only feasible but also effective.[19] Although fake news is not a new concept, its reach and influence are unlike past iterations of misinformation because of the technological advances and online communication platforms we have developed as a society.[20] “Understanding the motives behind fake news and the effects it can have is crucial to developing an effective solution to combat the issue of rapidly spreading misinformation without unduly treading on rights of free expression.”[21]


III. Sources

[1] 96 Wash. U. L. Rev. 419, 437-438.

[2] 46 Hastings Const. L.Q. 1, 10-11.

[3] 43 Hastings Comm. & Ent. L.J. 81, 90-91.

[4] 43 Hastings Comm. & Ent. L.J. 81, 100-101.

[5] Congress has already recognized the importance of holding these platforms liable. The Platform Accountability and Consumer Transparency Act (“PACT Act”) was introduced this past June. If passed, the legislation would require interactive computer services that provide internet platforms to issue public statements about their content policies.

[6] 96 Wash. U. L. Rev. 419, 420.

[7] Id. at 435.

[8] 96 Wash. U. L. Rev. 419, 420.

[9] Id. at 439.

[10] Id.

[11] 96 Wash. U. L. Rev. 419, 439.

[12] 46 Hastings Const. L.Q. 1, 10-11.

[13] Id. at 11.

[14] 46 Hastings Const. L.Q. 1, 10-11.

[15] 65 Buffalo L. Rev. 1101, 1112.

[16] Id. at 1122.

[17] Id.

[18] 96 Wash. U. L. Rev. 419, 440.

[19] Id.

[20] 71 Fed. Comm. L.J. 105, 107-108.

[21] Id.
