
Authors

Michal Lavi

Abstract

On May 26, 2020, the forty-fifth President of the United States, Donald Trump, tweeted: “There is NO WAY (ZERO!) that Mail-In Ballots will be anything less than substantially fraudulent. Mail boxes will be robbed, ballots will be forged & even illegally printed out & fraudulently signed.” Later that same day, Twitter appended an addendum to the President’s tweets so viewers could “get the facts” about California’s mail-in ballot plans and provided a link. In contrast, Facebook’s CEO Mark Zuckerberg refused to take action on President Trump’s posts. Only when it came to Trump’s support of the Capitol riot did both Facebook and Twitter suspend his account. Differences in attitude between platforms are reflected in their policies toward political advertisements. While Twitter bans such ads, Facebook generally neither bans nor fact-checks them.

The dissemination of fake news increases the likelihood of users believing it and passing it on, consequently causing tremendous reputational harm to public representatives, impairing the general public interest, and, over the long term, eroding democracy. Such dissemination depends on online intermediaries that operate platforms, facilitate dissemination, and govern the flow of information by moderating, providing algorithmic recommendations, and targeting third-party advertisers. Should intermediaries bear liability for moderating or failing to moderate? And what about providing algorithmic recommendations and allowing data-driven advertisements directed toward susceptible users?

In A Declaration of the Independence of Cyberspace, John Perry Barlow introduced the concept of internet exceptionalism, differentiating it from other existing media. Internet exceptionalism is at the heart of Section 230 of the Communications Decency Act, which provides intermediaries immunity from civil liability for content created by other content providers. Intermediaries like Facebook and Twitter are thereby immune from liability for content created by users and advertisers. However, Section 230 is currently under attack. In 2020, Trump issued an “Executive Order on Preventing Online Censorship” that aimed to limit platforms’ protections against liability for intermediary-moderated content. Legislative bills seeking to narrow Section 230’s scope soon followed. From another direction, attacks on the overall immunity provided by Section 230 emerged alongside the transition from an internet society to a data-driven algorithmic society—one that changed intermediaries’ scope and role in information dissemination. These changes in the function of intermediaries require a reevaluation of their duties; that is where this Article steps in.

This Article focuses on the dissemination of fake news stories as a test case. It maps the roles intermediaries play in the dissemination of fake news by hosting and moderating content, deploying algorithmically personalized recommendations, and using data-driven targeted advertising. The first step toward developing a legal policy for intermediary liability is identifying the different roles intermediaries play in the dissemination of fake news stories. After mapping these roles, this Article examines intermediary liability case law and reflects on the current approach to internet exceptionalism and recent developments. It further examines normative free speech considerations regarding intermediary liability within the context of the different roles intermediaries play in fake news dissemination and argues that the liability regime must correspond with the intermediary’s role in dissemination. By targeting exceptions to internet exceptionalism, this Article outlines a nuanced framework for intermediary liability. Finally, it proposes subjecting intermediaries to transparency obligations regarding moderation practices and imposing duties to conduct algorithmic impact assessments as part of consumer protection regulation.
