Facebook Oversight Board issues its first decisions

Its decisions largely hinged on interpretations of Facebook's policies, though some language the Board used echoed First Amendment law.

Facebook’s independent Oversight Board has issued its first decisions reviewing Facebook content moderation determinations, affirming the company’s removal of a post in one instance but overturning the company in four others. Although none of these cases dealt with journalists or news outlets, the Reporters Committee is following the Oversight Board’s work because content moderation can have implications for online news — for instance, in the implementation of policies concerning misinformation.

The first case concerned a removed post from a user in Myanmar that appeared to disparage Muslims. The Board overturned the decision to remove it, stating: “While the post might be considered pejorative or offensive towards Muslims, it did not advocate hatred or intentionally incite any form of imminent harm.”

The second case concerned a similar removed post, but about Azerbaijanis. In this circumstance, the Board affirmed the removal, stating that, in this case, “the context in which the term was used makes clear it was meant to dehumanize its target.” In both this case and the first case, the Board emphasized the importance of the context of speech in making content moderation decisions.

The third case involved a removed Instagram post intended to promote breast cancer awareness, which featured women’s nipples. The Board reversed the removal decision, pointing to Facebook’s own existing policy exception for breast cancer awareness. More important, though, were the Board’s recommendations about the automated technology that flagged the post in the first place. Among other things, the Board recommended that Facebook inform users when posts have been flagged using automation and audit the accuracy of its automated moderation systems.

The fourth case involved a removed post that included a quote from Joseph Goebbels, the Nazi propaganda minister. Facebook’s internal policies state that quotes attributed to individuals on a certain list (including Goebbels) are treated, by default, as an expression of support for the individual unless otherwise clarified. The Board found, however, that the particular post at issue did not “support the Nazi party’s ideology or the regime’s acts of hate and violence.”

The fifth case dealt with a removed post containing COVID-19 disinformation and criticizing the French government’s response to the pandemic. The Board found that the post was primarily a statement of opposition to a government policy and that the specifics of the disinformation meant that few people would act on it.

The Board held that a sixth case, involving comments by one user on another user’s post, was rendered moot when the creator of the underlying post removed the original content.

Although the decisions each included an analysis of the case under international human rights law, the Board’s determinations largely hinged on interpretations of Facebook’s own internal policies. Many decisions also used this or similar language: “The decision to remove the content was not consistent with Facebook’s values.”

In a few instances, language the Board used echoed First Amendment law; for example, the Board invoked the First Amendment concepts of imminent harm and the use of least restrictive means when regulating speech in several of its decisions. Overall, though, the theme of the Board’s recommendations and decisions appears to be that Facebook should ensure that its internal moderation policies are clear, be transparent about those policies, and follow them scrupulously.


The Technology and Press Freedom Project at the Reporters Committee for Freedom of the Press uses integrated advocacy — combining the law, policy analysis, and public education — to defend and promote press rights on issues at the intersection of technology and press freedom, such as reporter-source confidentiality protections, electronic surveillance law and policy, and content regulation online and in other media. TPFP is directed by Reporters Committee attorney Gabe Rottman. He works with Stanton Foundation National Security/Free Press Legal Fellow Grayson Clary and Technology and Press Freedom Project Legal Fellow Mailyn Fidler.
