
Supreme Court seems hesitant to transform online discourse

The Supreme Court recently heard oral arguments in Gonzalez v. Google and Twitter v. Taamneh.

Last week, the Supreme Court heard arguments in two related cases that we have discussed before in this newsletter and that have the potential to completely transform liability for internet platforms.

Together, the cases raise two issues: first, whether internet platforms can be liable for “aiding and abetting” terrorism under the federal Anti-Terrorism Act by recommending terrorist content to their users. If the Court answers no, it need not reach the second and more fundamental question: whether internet platforms can be held liable for the content they “recommend.” To answer that second question, the Supreme Court must interpret the scope of 47 U.S.C. § 230, a provision of the Communications Decency Act that was enacted in the early days of the internet and prevents internet platforms from being held liable for certain third-party content. (The Supreme Court has never interpreted the law, but the federal appeals courts that have addressed the issue have taken a uniformly broad view of it.)

Gonzalez v. Google, the case that raises the Section 230 issue, has received a tremendous amount of public attention because of its potential impact on the internet as we know it. But during oral argument, justices from across the ideological spectrum said they were confused by the arguments made by Gonzalez’s lawyer and seemed reluctant to take big swings at the law.

In a friend-of-the-court brief, the Reporters Committee and the Media Law Resource Center, represented by attorneys at Debevoise & Plimpton LLP, point out that this confusion demonstrates how untenable the line is between recommendations and hosting content. What is the difference between the search results generated by Google every time we type something into the search bar (which are tailored to our location, language, and preferences based on search history) and the recommended videos that show up after we’ve watched something on YouTube? The arguments did not yield a clear answer to that question, and we think that is because there isn’t one.

It is hard to come up with a narrow way to define this nebulous category of “recommendations” without undoing the liability shield that Section 230 created for all internet platforms that host third-party content and engage in some amount of content moderation. And as Professor Eric Goldman points out, the simple act of removing content that violates a platform’s policies “prioritizes” the content that stays up.

The Court can avoid all of this by holding that an internet company cannot be held liable under the Anti-Terrorism Act merely for making “recommendations,” and by instead requiring “actual knowledge that a specific piece of user-generated content on its platform provides substantial assistance to a terrorist act before imposing aiding-and-abetting liability on the basis of its function as a speech intermediary.” The brief we joined in Twitter v. Taamneh, the second of the companion cases, advocated for this outcome out of concern about what the alternative could mean for the First Amendment and press freedom.

Drawing on an example very similar to one we offered in our brief, Justice Brett Kavanaugh asked Taamneh’s lawyer at oral argument whether, under his theory, CNN could be held liable for aiding and abetting the 9/11 terrorist attacks by airing a 1997 interview in which Osama bin Laden declared war against the United States to a Western audience for the first time. Taamneh’s lawyer did not offer a clear answer, except to suggest that the First Amendment would preclude such liability.

But this line of questioning puts the Reporters Committee’s concerns about this theory of liability under the Anti-Terrorism Act into sharp relief. As was clear at the argument, it is hard (probably impossible) to articulate a rule that would allow liability for YouTube’s content recommendation algorithm but not sweep in newsworthy national security journalism.

Though the parties did not brief these cases as cases about free speech and the free press, they have big implications for both. We’re glad that these issues got some attention at oral argument, and we hope that the Court skirts the Section 230 issue by rejecting this dangerous theory of liability under the Anti-Terrorism Act.

Stay tuned for updates!


Like what you’ve read? Sign up to get The Nuance newsletter delivered straight to your inbox!

The Technology and Press Freedom Project at the Reporters Committee for Freedom of the Press uses integrated advocacy — combining the law, policy analysis, and public education — to defend and promote press rights on issues at the intersection of technology and press freedom, such as reporter-source confidentiality protections, electronic surveillance law and policy, and content regulation online and in other media. TPFP is directed by Reporters Committee attorney Gabe Rottman. He works with RCFP Staff Attorney Grayson Clary and Technology and Press Freedom Project Fellow Emily Hockett.
