Limits of International Human Rights Law in Digital Content Moderation

Introduction

With the rise of social media platforms, people have been handed an unsolicited power to voice their opinions, and this freedom of speech carries its own share of repercussions, triggering the application of international human rights law (IHRL). Advances in technology and the internet have wrought fundamental changes to the speech ecosystems of societies and legal systems around the world. Yet even IHRL offers no ready blueprint for dealing with such changes. IHRL's role in content moderation within the digital space remains unsettled owing to its presently generic state, and invoking IHRL without attending to its limitations can frustrate the goals of those seeking to promote human rights in the digital space.

This piece aims to describe the human rights-based approach to content moderation in its current form, the considerable limitations of IHRL, and the way IHRL serves the interests of digital platforms, which reap undeserved legitimacy dividends and pay lip service to its language despite the enduring indeterminacy of what it actually requires of them.

I. A Human Rights-Based Approach to Digital Platform Moderation – Identifying the Appropriate Benchmark

While IHRL is primarily addressed to states, in 2011 the UN Human Rights Council adopted the United Nations Guiding Principles on Business and Human Rights (UNGPs), which encompass both state obligations and corporate responsibilities and set out a tripartite framework designed to mitigate and redress business-related human rights abuses, routinely referred to as the "Protect, Respect and Remedy" Framework.[1]

Since digital platforms are corporate-owned entities, a human rights-based approach to content moderation can be framed in light of the second pillar, i.e. the corporate responsibility to 'respect' human rights by avoiding infringement of the human rights of others and addressing adverse human rights impacts with which the platforms are involved.[2] The corporate responsibility to respect constitutes a "global standard of expected conduct" applicable to business enterprises, one that exists independently of states' ability or willingness to fulfil their own human rights obligations. The human rights-based approach and its common conceptual language promise to reorient the predominantly ad hoc and reactive development of platform moderation policies towards a more principled and structured approach.[3] The major tech platforms are inherently international and generally insist on a single global set of content standards to the extent possible, as is evident from the concession of Facebook CEO Mark Zuckerberg: "…lawmakers often tell me we have too much power over speech, and frankly I agree. . . . We need a more standardized approach."[4]

IHRL and the UNGPs are, therefore, a seemingly obvious place to turn for global rules affecting rights: many large, multinational corporations already have a human rights policy designed around the guidelines set by the UNGPs.[5]

II. Limitations of IHRL in Content Moderation

i) Indeterminacy of IHRL

The indeterminacy of IHRL creates room for its cooptation by platforms rather than constraining them. This mirrors charges of "bluewashing,"[6] where companies enjoy the goodwill benefits of program membership without making costly changes to their human rights and environmental practices,[7] a dynamic that the absence of oversight only entrenches. Platforms do make commitments to uphold human rights and have been hiring personnel to remedy their longstanding lack of human rights expertise; however, some reports suggest that those involved are either marginalized within the companies or denied the resources and authority needed to make a real difference,[8] relegating such initiatives to window-dressing. Facebook recently commissioned human rights impact assessments (HRIAs)[9] in developing countries, yet the recommendation reports, once released, disclosed little more than the basic due diligence required before entering a volatile market.

Given this indeterminacy, it would be unwise for companies not to embrace IHRL's minimum standards, especially where doing so requires no substantive changes to their operations or business models. Yet as more companies sign on, IHRL risks becoming mere phraseology if it brings no actual accountability or constraint.[10]

ii) Risk of Cooptation

Private companies and platforms may bend the vocabulary of IHRL to legitimize only minor reforms at the expense of structural or systemic changes to their moderation processes. Because content moderation systems operate at immense pace and volume, such risks run deep. The extent of the influence that private companies may have on "international normative developments and discourse on freedom of expression"[11] remains unclear. When companies coopt IHRL rhetoric to their advantage, the development of IHRL itself can be set back. Since that development, whether through state practice or authoritative interpretation by treaty bodies and relevant experts, tends to be ad hoc and slow,[12] the influence of private companies and the distorting effects of cooptation may be profound.

Aswad observes that,

“…if companies begin applying Article 19(3)[13] in their content moderation operations and take up the Special Rapporteur’s call to produce ‘case law,’ there could be an active fountain of new ‘jurisprudence’ involving the ICCPR’s speech protections, which could influence the direction of international freedom of expression rights.”[14]

In 2012, in a landmark publication, Rebecca MacKinnon called the companies "…sovereigns operating without the consent of the networked,"[15] while the Facebook CEO remarked, "…we're really setting policies."[16] This illustrates the risk of cooptation: when IHRL is left in the hands of private companies, it becomes whatever they make of it.

iii) Challenge of Enforcement

Although a human rights-based approach can, in principle, be implemented through self-regulation alone, platforms are likely to resist wherever their commercial interests and profitability are at stake. From this perspective, social pressure, accompanied by smart governmental intervention, is required to enhance the human rights compatibility of content moderation systems. In the absence of any mechanism to compel compliance, the lack of transparency, the information asymmetries and the difficulty of discerning the exact nature of a company's obligations leave direct enforcement at the mercy of the platforms.

Conclusion

IHRL can proffer a standardized structure for content moderation, but only if, as Benesch argues, it is "properly interpreted and explained by experts"; platforms are not well placed to serve as trusted arbiters of whether a particular restriction on freedom of expression is necessary and proportionate. Merely outsourcing the interpretation and application of these norms to private companies, as content moderation itself has been outsourced so far, will only allow the norms to be coopted in service of the platforms' own agenda.

In the absence of an established mechanism for checking and contesting the platforms' use of IHRL, one that incorporates state consent and carries democratic legitimacy, the platforms' embrace of IHRL cannot be declared a meaningful victory. Otherwise we risk a situation where platforms' use of IHRL is praised when it aligns with the outcomes advocates favour and dismissed when it does not.

We should be cautious of letting platforms clothe themselves in the language of IHRL and accrue legitimacy dividends merely for meeting the bare minimum requirements of transparency and justification.[17] Moreover, social media platforms should, regardless of their status as private, profit-maximizing businesses, respect and uphold the rights of their users. Their rules and decisions have profound impacts on individuals and societies, so they should exercise this power in a public-regarding way and be held accountable for doing so.

It should be noted that content moderation and its interrelation with IHRL are still at an infant stage. More tensions and discrepancies are likely to ensue, and further discourse on this area of the law is awaited.


References

[1] EMILY B. LAIDLAW, REGULATING SPEECH IN CYBERSPACE: GATEKEEPERS, HUMAN RIGHTS AND CORPORATE RESPONSIBILITY 2 (2015).
[2] Id. at 1.
[3] Dennis Redeker, Lex Gill & Urs Gasser, Towards Digital Constitutionalism? Mapping Attempts to Craft an Internet Bill of Rights, 80 INT'L COMMUNICATION GAZETTE 302 (2018); NICOLAS P. SUZOR, LAWLESS: THE SECRET RULES THAT GOVERN OUR DIGITAL LIVES 173 (2019).
[4] Mark Zuckerberg, Opinion, Mark Zuckerberg: The Internet Needs New Rules. Let's Start in These Four Areas., WASH. POST (Mar. 30, 2019), https://www.washingtonpost.com/opinions/mark-zuckerberg-the-internet-needs-new-rules-lets-start-in-these-four-areas/2019/03/29/9e6f0504-521a-11e9-a3f7-78b7525a8d5f_story.html.
[5] Monika Bickert, Defining the Boundaries of Free Speech on Social Media, in THE FREE SPEECH CENTURY 254, 260 (Lee C. Bollinger & Geoffrey R. Stone eds., 2018).
[6] Daniel Berliner & Aseem Prakash, “Bluewashing” the Firm? Voluntary Regulations, Program Design, and Member Compliance with the United Nations Global Compact, 43 POL’Y STUD. J. 115, 116 (2015) (defining “bluewashing” as when firms use engagement with United Nations initiatives to figuratively drape themselves in the blue UN flag in order to distract stakeholders from their real, as opposed to cosmetic, poor environmental or human rights records).
[7] Id.
[8] Nitasha Tiku, Google's Human Rights Principles Come Under Fire by a Former…, WASH. POST (Jan. 2, 2020), https://www.washingtonpost.com/2020/01/02/top-g….
[9] Miranda Sissons & Alex Warofka, An Update on Facebook's Human Rights Work in Asia and Around the World, FACEBOOK NEWSROOM (May 12, 2020), https://about.fb.com/news/2020/05/human-rights-work-in-asia/.
[10] Evelyn Douek, The Limits of International Law in Content Moderation, 6:1 UCI J. INT'L, TRANSNAT'L & COMP. L. 40 (2021).
[11] Sejal Parmar, Facebook's Oversight Board: A Meaningful Turn Towards International Human Rights Standards?, JUST SECURITY (May 20, 2020), at 6, https://www.justsecurity.org/70234/facebooks-oversightboard-a-meaningful-turn-towards-international-human-rights-standards.
[12] Douek, supra note 10, at 60.
[13] International Covenant on Civil and Political Rights art. 19(3), Dec. 16, 1966, 999 U.N.T.S. 171.
[14] Evelyn Mary Aswad, The Future of Freedom of Expression Online, 17 DUKE L. & TECH. REV. 26, 37 (2018).
[15] REBECCA MACKINNON, CONSENT OF THE NETWORKED: THE WORLDWIDE STRUGGLE FOR INTERNET FREEDOM xxiii–xxiv (2012).
[16] Susan Wojcicki, Expanding Our Work Against Abuse of Our Platform, YOUTUBE OFFICIAL BLOG (Dec. 5, 2017), https://blog.youtube/news-and-events/expanding-our-work-against-abuse-of-our [https://perma.cc/EB4B-9MCG]; Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 HARV. L. REV. 1598, 1648-58 (2018).
[17] Douek, supra note 10, at 74.

The views expressed in this article are those of the author and do not necessarily represent the views of CourtingTheLaw.com or any other organization with which she might be associated.


Author: Hafsa Saleem Jam

The writer works as a research associate at the Centre for Law, Justice & Policy and as an intern at the ADR – Innovation & Justice SDG Project and ADR initiative. She has also published in the SZABIST Law Journal and with the Legal Journalism Society at City, University of London.

Comments

Excellent article, Hafsa! It's just that content moderation is extremely complex; it is rarely black and white, mostly a gray area. IHRL, or any body of law, must first reconsider questions in the ever-evolving philosophy of law before criticizing platforms over content moderation. For example: is an explicit painting of the human body artistic or pornographic? Will discussing dangerous criminality on platforms tempt users to imitate it, or will it be therapeutic in easing victims' suffering? Is war footage gratuitously graphic, or newsworthy as evidence of war crimes? Will footage of animal abuse or child abuse raise awareness or cause harm? Can tracking potential criminals through their digital footprint, with the help of platforms, aid predictive policing, or does it also violate the right to privacy? Is leaking confidential documents on platforms anti-state, or an exercise of the right to information? Are caricatures artistic or blasphemous?

Great article. But recently our state media regulator, PEMRA, took a news channel off air on the pretext of a violation of Article 19 of the Constitution, invoking the 'reasonable restrictions' on the fundamental right to freedom of speech. Apparently, content aired on the mainstream media platform "maligned the federal government" and was "hateful, seditious, and malicious". In protest, a lawyer and columnist for Pakistan's oldest English daily newspaper comically proposed in his column to draft a 'Fragile Emotions Protection Bill (2022)' to codify such censure and draconian content moderation by the state. Rigid IHRL structures coupled with state consent can sometimes have despotic and autocratic outcomes too, be it on social media or mainstream media.
