Mauricio Figueroa argues that we need to look beyond the conventions of international law to regulate AI
Artificial Intelligence (AI) systems are being deployed across industries and sectors of the economy worldwide; their governance is certainly a global problem. At first glance, international law seems like an obvious way forward: a platform designed to address cross-border issues. But this is far from reality. The international law lens is in fact superficial, relying on categories that do not reflect the political economy and dynamics of AI.
I. The Nature of International Law
Traditionally, public international law has been primarily – but not exclusively – concerned with the obligations of states, the interactions of international organisations, and the recognition of individuals as rights bearers. Private international law, by contrast, has been concerned with market dynamics: handling contractual transactions, jurisdictional issues, and commercial rights. The generation, commercialisation, and adoption of algorithmic systems was, until recently, largely governed by the latter, resting on principles and legal institutions of private law such as tort, copyright, or contract law.
The landscape, however, began to shift as AI’s potential risks and impacts gained attention, prompting international bodies to act. UNESCO issued its Recommendation on the Ethics of Artificial Intelligence in November 2021, adopted by all 193 Member States and intended to be universally applicable despite its non-binding nature. In a similar vein, the United Nations Secretary-General’s High-Level Advisory Body on Artificial Intelligence released its final report, Governing AI for Humanity, on September 19, 2024. The report contains non-binding recommendations intended to guide the behaviour of states and international organisations, but it also symbolises how AI governance has become a matter of global relevance, and it signals to member states that they should engage in international efforts to address such a complex global issue. Furthermore, the Council of Europe’s recent adoption of the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (CETS No. 225) may be seen as a landmark in the evolution of legal frameworks around AI, as it is the first binding agreement with a global focus.
II. Wrong Targets
The divergence of interests among states, dictated by disparate economic structures and varying levels of technological sophistication, profoundly complicates the drafting and adoption of a global instrument to govern AI. This difficulty is compounded by the dominance of large technology firms, commonly referred to as Big Tech, which wield significant influence over the AI landscape. These corporations often operate beyond the regulatory reach of individual states. Crafting international treaties that effectively regulate them is thus fraught with difficulty, as such instruments must tackle, or at least acknowledge, the interplay between state obligations and corporate power.
The Framework Convention, for instance, stops short of addressing the substance of AI-related risks. Across its 36 articles, it encapsulates a broad consensus on notions already established in the international discourse on AI: human dignity, individual autonomy, transparency, accountability, privacy, risk assessments, procedural safeguards, remedies, and so forth. It does not introduce innovative mechanisms or profound insights into managing AI’s complex socio-legal implications.
But the core issue is not the limitations of this particular Convention per se but rather the very nature of AI. AI systems are part of a highly complex landscape in which a handful of corporations exercise power well beyond the realm of corporate affairs. That power operates within the new economic logic that scholars like Julie E. Cohen and Manuel Castells have referred to as informational capitalism.
The challenge is that domestic legal frameworks, both directly and indirectly, support the prevailing informational capitalist economy: contract law through terms of service and non-disclosure agreements, intellectual property regimes that protect corporate secrecy, corporate law that governs financial allocations, tort law that limits extra-contractual liability, and even privacy law, in the form of data protection statutes that confer predominantly individual rather than collective rights – rights which could otherwise address issues such as data collection, processing, and web scraping more effectively.
Public international law’s approach to AI regulation, which predominantly places the burden of enforcing compliance on states, tends to overlook the profit-driven dynamics and influence of the major technology powerhouses and, on a deeper level, the stark disparities in enforcement capability among nations, particularly in the expertise and resources needed to establish effective regulatory oversight.
This disparity in oversight and compliance is particularly evident in emerging economies within the Global South, or Majority World, where the capacity to regulate algorithmic decisions and to contend with the corporations behind them is often inadequate. The information collection and processing activities embedded in algorithmic systems increasingly mediate the citizen-state relationship, yet national and subnational governments find it difficult to enforce corrections or even to understand the full extent of the technology’s impacts. Big Tech corporations, with their extensive resources and global reach, present a major challenge to national governance structures, making the enforcement of international AI governance mostly aspirational.
The significant influence and power of multinational technology corporations pose a unique challenge, as these entities often command more resources than the countries in which they operate, and more often than not they outsource corporate activities to firms in the Global South. The case of OpenAI and Kenyan content moderators illustrates how enforcing AI-related rights and rules in emerging economies is certainly challenging.
Moreover, as Chinmayi Arun points out with precision, the regulatory discourse surrounding AI must contend with a globalised operational framework in which companies outsource data annotation to one nation, test algorithms in another, and deploy products with societal risks across multiple jurisdictions. Legal frameworks that fail to interrogate the power dynamics of these private actors, and the economic systems that propel their actions, risk perpetuating regulatory blind spots. International law in particular appears ill-suited to engage with these complexities, a deficiency rooted in its historical detachment from the analytical tools of political economy.
III. A Word on Political Economy
To grapple with the pressing legal challenges posed by AI, lawyers and legal scholars must confront the political economy underpinning these systems. Sadly, despite valuable academic efforts, the term often escapes the notice of practitioners and scholars alike, largely because it is unfamiliar. Its marginalisation may also arise, in part, from an unfounded assumption that it entails a wholesale critique of capitalism as the force shaping AI’s development and the cause of its negative externalities. Such a reading misses the mark: it would reduce complex social, political, and technological phenomena to a general critique of profit-driven motives.
First, while many of the harms associated with AI systems may certainly be characterised as consequences of profit maximisation, in line with a capitalism-focused approach, the lens of political economy interrogates the why, who, and how: why these harms arise, who benefits and who is oppressed, and how policy, governance, and power relations shape technological outcomes.
Second, a capitalist critique may well frame digital technologies, and their development and deployment, as tools for capital accumulation. Political economy, by contrast, reveals how AI is co-constituted by economic and political forces alike, rather than being driven purely by market dynamics.
Third, while critiques of capitalism may highlight global exploitation, political economy shows how AI systems reinforce dependencies and inequalities between nations, and how a few corporations have come to monopolise AI infrastructure.
Finally, political economy recognises that capitalism is not monolithic. It does not necessarily aim to overthrow capitalism outright but instead offers a more granular analysis of its intersections with other forces, allowing for critique without fatalism and reflecting on spaces for reform, resistance, and alternative approaches.
With notable exceptions, international law rarely engages with general capitalism-focused critiques, and it has yet to become attentive to more specific political economy approaches. The latter are crucial for understanding the AI landscape and its affordances.
IV. Ways Forward
The recent interest and involvement of international law scholars and practitioners in the governance of algorithmic systems is notable on several levels. However, initiatives like the first global Framework Convention, or the global recommendations of international bodies, seem to be missing the wider picture. The lifecycle of AI systems – from design and development, through deployment, to eventual decommissioning – is intertwined with the political economy of major tech companies. This relationship complicates any attempted regulatory paradigm and highlights the challenge of establishing a governance model that adequately addresses AI and its broad implications.
The real value of recent international law involvement in this field lies in the opportunity it presents for widespread dialogue within the legal community at large, beyond legal silos, as well as with other disciplines, urging a re-evaluation of traditional legal frameworks in light of AI’s complex and dynamic nature. This collaborative approach is essential for developing legal responses that are not only technically effective but also socially responsible, ensuring that AI development, commercialisation, and deployment align with the values and principles that international bodies have enshrined.
The increasing prominence of AI within international law thus offers space for richer dialogue across legal subdisciplines, as well as with disciplines beyond the legal realm. A more granular understanding of the political economy of AI and of the lifecycle of AI systems will not only highlight the potential relevance of international law and its instruments, but also reveal their shortcomings and the limits of their applicability. Public international law will be part of the solution, but not the solution, for AI governance.
Mauricio Figueroa is a legal researcher and the host of the SCL Podcast, Privacy and Technology Laws Around the World. Discover more about his work at figuerres.net