Some reflections on the announced Facebook Oversight Board

Author: Marta Maroni, PhD candidate in Constitutional Law, Faculty of Law, University of Helsinki*

After announcing the idea less than a year ago, Facebook has unveiled its experiment to establish an independent Oversight Board, known as the “Facebook Supreme Court”, which will rule on content moderation issues.[1] In other words, when users are unhappy about the removal of content they posted on Facebook, they are granted a second chance: the possibility of appealing to the newly established Oversight Board, whose decision will be binding on Facebook itself.

Second chances are indeed welcome. They sound fairer for those whose content has been removed, and at the same time they forge a space for some self-reflection on Facebook’s part: an additional assessment of its own decisions to remove content might be needed. The Oversight Board might be a plus in light of the intense use that Facebook makes of AI to “proactively enforce policies”,[2] given AI’s ability to identify and remove harmful content, or more generally content which is against the Community Standards, a global list of rules that determine what content stays up and what comes down on Facebook.[3] The Oversight Board might help to prevent public scandals caused by wrong and hasty decisions. In addition, the creation of the Oversight Board meets the request of international human rights bodies, which encourage self-regulation as a regulatory model and have requested platforms to provide “access to remedies and redresses for the decision adopted by platforms”. For example, Recommendation CM/Rec(2018)2 of the Committee of Ministers to member States on the roles and responsibilities of internet intermediaries remarks that

“[They] [Platforms] should furthermore ensure that intermediaries provide users or affected parties with access to prompt, transparent and effective reviews for their grievances and alleged terms of service violations, and provide for effective remedies, such as the restoration of content, apology, rectification or compensation for damages. Judicial review should remain available, when internal and alternative dispute settlement mechanisms prove insufficient or when the affected parties opt for judicial redress or appeal”[4]

Regardless of these beneficial aspects, one cannot help but feel uneasy about Facebook’s Oversight Board: it is an attempt to enhance and legitimize Facebook’s position of power by strengthening the very “values” that underpin Facebook’s activities. One might rightly note that Facebook’s Oversight Board cannot substitute the role of domestic courts and that public intervention will indeed not disappear. While the Oversight Board as a redress mechanism within the sphere of “content moderation” is unique in its genre, the overall dynamic looks rather familiar to transnational law studies, which can thus offer some insights on private companies’ self-constitutionalizing tendency. Bearing previous experience in mind helps to anticipate potential problems that might emerge when Facebook’s Board becomes operative.

Drawing on Teubner’s Societal Constitutionalism, one might realise how a private organization develops its own set, or understanding, of fundamental rights, coupled with stronger administrative procedures to reinforce the organization’s position, legitimacy and autonomy. Teubner’s description of these dynamics suits this case very well, as he illustrates how the emergence of what he calls self-contained regimes is intertwined with the creation of substantive rules in special fields of law and the production of procedural norms.[5]

Teubner’s model can be inspirational for analysing Facebook’s “Court”: 1) a parallel can be drawn between such substantive rules and Facebook’s Community Standards, and 2) Artificial Intelligence and the Oversight Board are administrative tools functional to enforcing and implementing those Community Standards. These standards, in turn, express Facebook’s understanding of what counts as freedom of expression. While the Community Standards are drafted to promote freedom of expression, they also promise to keep Facebook a safe place,[6] and consequently they do not allow hate speech, terrorist content and, now, misinformation. While moderation is necessary, and platforms are in the best position to exert control over content, the problem might turn out to be that Facebook develops its own standards for freedom of expression regardless of the protection that different legal cultures afford it.

It is noticeable that, together with the launch of this “Court”, Facebook has refined its Community Standards, which the Oversight Board is bound to implement. In this light, Facebook’s values will guide the decisions of the Board. Human rights are somehow in the picture,[7] but it will also be important to assess how they are qualitatively understood and elaborated, because the framework adopted by Facebook requires members of the Board to “ensure their commitment to the principles they must uphold”.[8]

This framing could weaken the decisional autonomy of the members, as they have little room for manoeuvre to analyse cases beyond Facebook’s policies. Because the Community Standards are limited, the Board members do not have much space for judgments based on a wider consideration of interests, rights, experience and conflicts.

In a nutshell, according to the current Board Charter, the Board has no mandate to shape or challenge Facebook’s Community Standards. The Board has an advisory capacity, which can be exercised upon Facebook’s request or when one of its resolutions includes policy guidance; even then, Facebook will merely analyse the suggestion according to its formal policy process.[9] In addition, Facebook is not bound to extend a Board decision to “identical content with parallel context”.[10]

Scholars have also argued that the Board’s capacity to assess moderation problems is restricted to content which has been removed: it does not extend to evaluating what stays up or political ads, nor does the Board have the possibility to assess the way algorithms arrange the visibility of information.[11]

 In such a setting, the Board merely has the role of checking whether or not Facebook complies with its own rules, and this corroborates the idea that the Board is constructed to protect Facebook as such.

One might correctly claim that this is a premature evaluation, since we are still missing important governing documents of the Board (the Bylaws and the Code of Conduct of the Members[12]), which will confirm, or refute, this interpretation of the Board as a device for Facebook’s self-reinforcing rationale. However, Facebook anticipated this strategy in “A Blueprint for Content Governance and Enforcement”,[13] where it emphasises the need to reduce human subjectivity and to have consistent decisions adopted according to the Community Standards, which are ultimately adopted by Facebook.[14]

It goes without saying that the use of AI is at the core of Facebook’s “perfect enforcement” of its Community Standards, and this is mostly done proactively, that is, even before the allegedly illegal content becomes visible. While no one wants to be exposed to terrorist or harmful content, AI might also be used to eliminate content that is protected by freedom of expression but that Facebook does not regard as such.

To test the substance of this critique, let us consider a few characteristics of the Oversight Board Charter.

1) Facebook has a set of values that guide its content policies and decisions. The Board will review content enforcement decisions and determine whether they are consistent with Facebook’s content policies and values. Elsewhere, the Charter also reiterates that “The board will review and decide on content in accordance with Facebook’s content policies and values”.[15]

2) The Charter indicates that the Board “interpret[s] Facebook’s Community Standards and other relevant policies (collectively referred to as ‘content policies’)”, and this interpretation is done considering “Facebook’s articulated values”.

3) The Charter further establishes that “Facebook’s content policies and values” are the basis for the Board’s decision-making. The Charter recognizes an important role for precedents and establishes that, “For each decision, any prior board decisions will have precedential value and should be viewed as highly persuasive when the facts, applicable policies, or other factors are substantially similar.”[16] Whilst precedents ensure continuity in the Board’s decisions, they will also keep re-routing the Board’s actions in the same direction.

In this way the whole machinery (proactive AI enforcement, the Board and Facebook’s Community Standards) becomes a self-enhancing device for strengthening Facebook’s policies.

This criticism is not an end in itself, but is raised in light of the consequences that Facebook’s activities have globally. While Facebook remains a private platform, it has 2.4 billion regular users worldwide and is one of the main internet actors in terms of network capacity, and its impact on freedom of expression, politics and political economy has been widely discussed.

The criticism raised above seems to clash with Facebook’s intent to establish a body “designed to oversee important matters of expression and to make independent final decisions”.[17]

As Article 1 states, “The board will be composed of a diverse set of members whose names will be public. They will exercise neutral, independent judgment and render decisions impartially.”[18] Yet to assess the neutrality and impartiality of the Board, one might focus on three questions while reading the Charter: how is independence ensured? Who decides that the decisions are neutral and impartial? Who appoints the Board?

On the independence of the Board

Admittedly, the Charter formally offers guarantees of independence.

For example, members must not have conflicts of interest that could compromise their independent judgment. Although their membership is public, panel decisions “will remain anonymous to ensure the safety and independent judgment of panel members”, and a member’s compensation does not depend on the outcome of decisions. Finally, the Board will have discretionary power in the choice of cases, but it will mostly have to select those with the greatest potential to guide future decisions and policies. A drawback might lie in the three-year term, with possible renewal, which could (though not necessarily) affect the independence of the members of the Board, because it could make the renewal of their contract dependent on their performance.

The substantive side of independence is the most troublesome, and it is relevant for the whole narrative pursued by Facebook.

Facebook will establish a Trust to ensure the governance and accountability of the Board.[19] Facebook will both appoint the trustee and fund the trust, which formally appoints the Board, whose members are in turn chosen by Facebook.

Facebook will first select a group of co-chairs, and together with the co-chairs it will select the candidates to serve as Board members. On top of this, both Facebook and the public can suggest candidates for the Board. This position of remote control over the Board runs contrary to the traditional understanding of independence under constitutional law.

So far, the design of the Board suggests neither that the members will enjoy autonomy in their professional judgments nor that the Board will be independent of Facebook. Further scrutiny will be required to check that the Board does not become a tool to reinforce Facebook’s position on global regulatory problems and on the interpretation of what counts as freedom of expression.

*Marta Maroni would like to acknowledge the support received from Reconfiguring Privacy – A Study of the Political Foundations of Privacy Regulation, funded by the University of Helsinki.

[1] For a more optimistic and detailed reading see Evelyn Douek, ‘Facebook’s “Oversight Board”: Move Fast with Stable Infrastructure and Humility’ (4 April 2019), North Carolina Journal of Law and Technology, Vol. 21, No. 1, 2019.

[2] Mark Zuckerberg, ‘Proactively Identifying Harmful Content’, in A Blueprint for Content Governance and Enforcement. Zuckerberg’s plan is to develop the technology further so “as to understand content well enough to proactively remove harmful content and reduce the distribution of borderline content”, and only later render it more flexible in its standards.

[3] Zuckerberg, ‘Community Standards’, in A Blueprint for Content Governance and Enforcement.

[4] ‘Access to an effective remedy’, Recommendation CM/Rec(2018)2 of the Committee of Ministers to member States on the roles and responsibilities of internet intermediaries (adopted by the Committee of Ministers on 7 March 2018 at the 1309th meeting of the Ministers’ Deputies).

[5] Teubner, ‘Fragmented Foundations: Societal Constitutionalism beyond the Nation State’, in Dobner, P. and Loughlin, M. (eds), The Twilight of Constitutionalism? (Oxford University Press, 2010), p. 33. I should point out that Teubner is not necessarily critical of these dynamics.

[6] Monika Bickert, Publishing Our Internal Enforcement Guidelines and Expanding Our Appeals Process

[7] Facebook’s commitment to the Oversight Board

[8] A Blueprint for Content Governance and Enforcement.

[9] Oversight Board Charter, Article 4.

[10] Oversight Board Charter, Article 4. Implementation.

[11] Evelyn Douek, ‘How Much Power Did Facebook Give Its Oversight Board?’; and Quirin Weinzierl, ‘Difficult Times Ahead for the Facebook “Supreme Court”’, VerfBlog, 21 September 2019.

[12] These will be adopted by the Members of the Board with some input from Facebook.

[13] A Blueprint for Content Governance and Enforcement.



[15] Oversight Board Charter, Article 2. Authority to Review.

[16] Oversight Board Charter, Article 2, Section 2.

[17] Oversight Board Charter.

[18] Ibid.

[19] Oversight Board Charter, Article 5, Section 2.