New York wades into social media regulation waters with ‘hateful conduct’ law
New York Gov. Kathy Hochul recently signed a legislative package into law that includes new regulations governing how platforms police what the law calls “hateful conduct” online, making New York the latest state to attempt to control how platforms moderate content. While the law takes a different approach, it suffers from a constitutional flaw similar to the one in the measures currently blocked in Florida and Texas that purport to regulate “bias” by platforms.
The bill requires “social media networks to provide and maintain mechanisms for reporting hateful conduct on their platform.” It defines hateful conduct broadly as “the use of a social media network to vilify, humiliate, or incite violence against a group or a class of persons on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression.” Platforms must have a “clear and concise policy readily available and accessible on their website and application which includes how such social media network[s] will respond and address the reports of incidents of hateful conduct on their platform[s].”
Despite the reference to “conduct,” this bill has free speech issues, ably articulated by Mike Masnick at Techdirt.
Mainly, much of what the bill defines as “hateful conduct” is protected by the First Amendment. Despite a disclaimer that “[n]othing in this section shall be construed … as an obligation imposed on a social media network that adversely affects the rights or freedoms of any persons, such as exercising the right of free speech pursuant to the [F]irst [A]mendment,” the bill’s constraint on content that “vilif[ies]” or “humiliate[s]” people would clearly apply to some protected speech, as those terms are so broad and vague they could be read or misread to encompass plenty of public discourse. (It’s also worth noting that disclaimers like that are usually a blinking red alert that a law has some constitutional flaw.)
Further, the definition of “social media network” under the statute — “service providers, which, for profit-making purposes, operate internet platforms that are designed to enable users to share any content with other users or to make such content available to the public” — would also apply broadly, including possibly to journalistic sites. As Masnick notes, Techdirt and other traditional blogs and outlets could be included in this definition. And the law’s mandate that those sites articulate “hateful conduct” policies would clearly interfere with their editorial choices, something expressly prohibited by the First Amendment, as the Reporters Committee has noted in multiple friend-of-the-court briefs opposing the Florida and Texas laws.
The law plainly creates more problems than it solves, constraining platforms’ editorial discretion, targeting constitutionally protected speech and dissuading smaller websites with fewer resources from even venturing up to the line of a possible violation.
Finally, those concerns put a fine point on the broader flaws in states’ efforts to regulate how platforms moderate lawful content. This bill is geared toward encouraging more moderation, whereas the Texas and Florida laws are focused on deterring it. Not only does that place platforms in an untenable position, but those editorial choices simply cannot be left in the hands of the state, as the end of that road is censorship.
Sara Grace Kennedy is an intern at the Reporters Committee, where she works with the First Amendment Clinic at the University of Virginia School of Law.
The Technology and Press Freedom Project at the Reporters Committee for Freedom of the Press uses integrated advocacy — combining the law, policy analysis, and public education — to defend and promote press rights on issues at the intersection of technology and press freedom, such as reporter-source confidentiality protections, electronic surveillance law and policy, and content regulation online and in other media. TPFP is directed by Reporters Committee attorney Gabe Rottman. He works with Stanton Foundation National Security/Free Press Legal Fellow Grayson Clary and Technology and Press Freedom Project Legal Fellow Gillian Vernick.