As the sun rises, a worker prepares for yet another gruelling day. For eight hours, their weary eyes confront a barrage of disturbing posts that plunge into the abyss of humanity’s darkest corners: texts, videos and images of suicides, beheadings, explicit sexual acts and other harmful content demand their unwavering attention. They are tasked with preventing that harmful content from spreading across the Internet, applying and interpreting the company’s policies and guidelines. Their lunch break offers only a brief respite: half an hour to eat in a race against the clock, fully aware that even a momentary delay could incur penalties for their temporary absence. All this anxiety in return for very low pay, as little as USD 2.20 per hour in some cases, with minimal or non-existent support for the inevitable psychological toll. The mental scars they carry are the hidden costs of protecting the online space we often take for granted.
This snapshot illustrates the conditions under which content moderators work around the world. They are not employees but an invisible outsourced workforce, relegated to the shadows and mostly forgotten by the platforms they serve. This vital task is often delegated to workers in Kenya, the Philippines, India and other countries lacking union recognition, mental health protections and basic workers’ rights.
This grim reality raises important questions about the societal consequences of outsourcing content moderation and what responsibility technology giants should bear for the well-being and fair treatment of their outsourced workforce.
Last year, the public launch of ChatGPT sparked strong interest in conversational AI and large language models, but those models rely on low-paid workers operating behind the scenes in the Global South. OpenAI sent extensive text snippets to Sama, an outsourcing firm with a base in Kenya, where workers earned a meagre USD 2 per hour to label the data used to fine-tune ChatGPT. The texts they had to review depicted graphic and disturbing situations involving child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest. OpenAI is not alone in outsourcing this work; contracting companies also handle content moderation for major platforms like Facebook, YouTube, and TikTok.
In the realm of social media content moderation, 200 outsourced workers who had been dismissed filed collective lawsuits in the Kenyan courts. While all the workers were based in Kenya, some hailed from other African countries such as Rwanda and South Africa, making it a multilingual and multicultural workforce. The lawsuits were followed by the Content Moderators Summit in Nairobi, where dozens of workers formally resolved to register the first Content Moderators Union. The resulting Kenyan court ruling favoured the plaintiffs, recognizing Meta as the employer regardless of the intermediary contracting partner.
That decision not only sheds light on the exploitative practices that have long plagued the content moderation industry but may also pave the way for significant changes in workers’ rights and conditions. The Kenyan court’s acknowledgment that Meta can be held liable for employment-related issues was a significant and visible victory. Although the ruling does not determine the final outcome of the case, it provides the plaintiffs with a legal vehicle for adjudication, a noteworthy achievement considering that tech giants have been known to employ tactics to sidestep legal proceedings and seek more favourable jurisdictions.
The court determined that Meta exercises control over the content moderators’ practices by providing them with a digital or virtual workspace and establishing specific guidelines for the moderation process. As a result, Meta sets operational requirements that define and shape the workers’ performance. The contracting firm, on the other hand, is seen as an intermediary, similar to an agent or manager, not a substitute for Meta. The court’s decision highlights that Meta holds a major role, responsibility and influence over content moderation operations, making it a relevant party in employment matters raised by workers.
Naturally, Meta has announced that it intends to appeal the decision, and it would be no surprise if Big Tech platforms began to explore alternative Global South countries offering more favourable legal frameworks for outsourcing. However, the developments in Kenya foster a growing awareness of the need to acknowledge and protect the rights of content moderation workers, and they have elevated the visibility and legal standing of this workforce.
The Kenyan decision is part of a broader trend in which gig workers and trade unions have sought legal clarification of their employment relationship with platform companies. In 2021, the UK Supreme Court ruled in favour of Uber drivers, recognizing them as ‘workers’ entitled to protections such as the minimum wage and paid holidays. Similar rulings have occurred in the Netherlands, where food delivery couriers, previously considered independent contractors, were reclassified as “regular employees.” Of course, outsourcing practices are not exclusive to platform capitalism, but the unique ability of platforms to break complex work down into simple tasks while remotely monitoring worker performance has made the practice an intrinsic part of their business models.
For workers, navigating the legal process to establish an employment relationship is daunting. Courts typically evaluate factors such as the degree of control, dependence and supervision involved in the relationship to establish whether a direct employment relationship exists. Proving that gig workers are under the direct control of the platforms they serve can be particularly difficult. In the Kenyan ruling, the clarity of content moderation policies served as evidence of Meta’s control over the workers’ performance. However, where algorithmic management, gamification and customer reviews come into play, proving control becomes even harder. On top of that, workers need to be organised to take on the task. While there is a clear benefit to being recognized as employees, without the backing of an organised workforce the burden of time, cost and potential work-related harassment (often from the employer) may make the risks seem not worth the rewards.
And while courts have played a crucial role in defending workers’ rights and addressing misclassification within the platform economy, relying solely on court adjudication does not necessarily lead to comprehensive, systemic changes in the working conditions of gig workers. A more effective approach lies in legislative reforms that empower workers to challenge potential misclassification and secure their rights. Spain’s ‘Riders Law’, enacted in 2021, is an example of such reform. It recognizes delivery riders working through platforms as formal employees, ensuring they receive social protections and benefits from their companies. This proactive measure relieves workers of the burden of litigating to establish their employment status; the law explicitly recognizes the employment relationship regardless of the novel digital environment. The burden of proof is on the platforms to demonstrate that their workers should be classified as independent contractors, rather than the other way around.
The issues surrounding digital platform workers cut across borders, but law is territorial by nature. Against this backdrop, the Kenyan ruling emphasizes the significance of the control exerted by technology giants over virtual workspaces and content moderation policies. It sets a clear boundary of responsibility that these companies cannot easily evade.
Interestingly, the leading role unions played in the wake of the industrial revolution may represent a way forward for tackling the issues of the platform economy. Policymakers, especially those in parties that purport to support labour rights, should continue to back the unionisation of gig workers as a crucial mechanism for sectoral agreements that define fair compensation and working conditions in the industry. Unions can also take a proactive role in determining, alongside managers, how technology is introduced in the workplace and how to ensure that automation enhances workers’ productivity instead of replacing them.
While the nature of workspaces, tools and activities may evolve over time, corporations will continue to bear the fundamental responsibility of ensuring that the rights and dignity of their workers are upheld and protected. The awareness and momentum generated by recent legal developments in Kenya are pivotal in advancing the cause of content moderators and platform workers at large, pushing for more equitable treatment and greater attention to their well-being.
Mauricio Figueroa holds an LL.M. from Tel Aviv University, and is currently pursuing a Ph.D. at Newcastle Law School, UK. @mfiguerres_
Jose L Gallegos has an MPP from Harvard Kennedy School of Government, and is a Doctoral Researcher at Erasmus School of Management, Rotterdam. @jlgallegos_