Should AI Be a Legal Person? Why the Debate Exists and What We Really Need Instead

The public relationship with artificial intelligence is becoming deeply personal. When OpenAI’s GPT-4o was briefly retired, users did not simply complain about losing an app; they described it as losing a companion. One person even wrote that they had lost their “only friend overnight.” Around the same time, the tragic case of Adam Raine, a sixteen-year-old in the United States who died after months of interaction with an AI chatbot, led his parents to file a wrongful death lawsuit against OpenAI. Whether or not the courts find legal responsibility, the case highlights a simple fact: people are no longer relating to AI as mere tools.

This shift has forced a difficult question onto the legal stage: should AI ever be treated as a legal person? The idea may sound far-fetched, yet it is gaining ground because AI already appears to play roles once reserved for humans alone. This blog explores that debate, asking why the idea of AI personhood is being raised at all, what its relevance might be for law and society, and why the immediate need is to build accountability structures for human protection rather than to confer rights on machines.

The Provocation: AI as a “Second Apex Species”

Jacy Reese Anthis’s “second apex species” metaphor has shaped how the public thinks about AI. Surveys in the United States show that people expect some form of sentient AI within just a few years. While some also support banning such systems altogether, others believe they should be granted rights if they ever emerge. What this reveals is that ordinary people are no longer imagining AI as mere software but as something that might carry moral weight. The metaphor may be dramatic, but it captures a genuine anxiety about power and control in a future shaped by digital minds.

The Roots of the Legal Debate

The legal conversation about AI personhood did not start yesterday. More than thirty years ago, legal theorist Lawrence Solum argued that personhood in law is not limited to human beings. It is a legal fiction, something the law creates for practical reasons. Corporations, ships, and even trusts have been treated as “legal persons” because it made economic or administrative sense. Solum suggested that, at least in principle, AI could one day be added to this list. The strength of his argument lies in its practicality: he was not claiming that machines think or feel like us, only that the law has always extended personhood when doing so was useful for society. If treating an AI as a legal person solved problems of responsibility or accountability, then the law could adapt.

This early work matters because it opened the door. Once we admit that personhood is flexible, the question is no longer “can AI ever be persons?” but “should we make them persons, and under what conditions?” That shift explains why the debate has returned with such urgency now that AI systems are embedded in everyday life.

Contemporary Theories of AI Personhood

While Solum focused on the flexibility of legal fictions, more recent work has tried to pin down what qualities would actually make an AI a candidate for personhood. AI researcher Francis Rhys Ward suggests that three conditions would have to be met: agency, theory of mind, and self-awareness.

Agency means the ability to act with some independence, to make choices rather than just following instructions. Theory of mind implies the capacity to understand that other beings have their own beliefs, desires, and intentions. And self-awareness is the recognition of oneself as an individual that exists over time. Taken together, these are the building blocks we usually associate with persons, whether human or not.

The problem is that today’s AI systems do not convincingly satisfy these tests. They produce fluent text, realistic images, or complex strategies, but that does not show genuine understanding or self-reflection. Ward notes that the evidence for any of these traits in AI remains “surprisingly inconclusive.” In other words, there is no solid proof that machines know what they are doing, know what others are thinking, or even know who they are.

This line of thinking highlights an important contrast. On one side, public debate and media coverage often treat AI as if it already possesses humanlike traits. On the other side, serious philosophical inquiry shows how far we still are from the foundations of personhood.

The Counterpoint: Why AI Personhood Is Premature

Not all scholars agree that AI personhood is even worth debating right now. Brandeis Marshall has strongly argued that these conversations are premature and potentially harmful. She is concerned that giving rights to AI could distract us from the fact that millions of people still do not enjoy basic civil rights. Around the world, questions of racial equality, gender justice, and economic fairness remain unresolved. To talk about rights for machines while these struggles continue risks turning the legal spotlight away from human beings who actually need it.

Marshall also points out that AI systems lack the qualities that make personhood meaningful. They do not bear accountability or moral responsibility, and they do not make independent ethical choices. Whatever they produce, from text to images, is still the result of human-designed data and algorithms. Her arguments reminded me of a lecture by Professor Dev Gangjee of Oxford University at the WIPO Summer School, where he contended that whatever image or data an AI generates is ultimately derived from its training data. Granting such systems personhood could even allow corporations to escape responsibility by shifting blame onto “the AI” whenever harm is done, a dangerous prospect given that the legal frameworks for holding transnational corporations accountable are already struggling.

Her argument reframes the debate. Instead of asking what rights AI should have, she urges us to ask what responsibilities humans should bear when building and using it. The urgent task is not to extend personhood but to create strong accountability frameworks ensuring transparency, oversight, and liability for those who deploy AI, so that harms are not excused as the fault of a machine.

Why the Debate Exists and What Is Really Needed

If today’s AI systems are not truly self-aware, and if the law already insists on human accountability, why is the debate about AI personhood so active? The answer lies in how people, scholars, and the law itself are responding to these technologies.

Socially, people are already relating to AI in human terms. Chatbots are described as friends, therapists, and even partners. When users mourn the loss of a model like GPT-4o, it creates the impression that these systems are more than tools. The emotional bonds may be real, even if the “personality” behind them is only a simulation.

Philosophically, AI appears to mimic the very traits we associate with personhood, such as agency, theory of mind, and self-awareness. Even though the evidence is inconclusive, the outputs are so convincing that it becomes difficult to separate simulation from genuine capacity.

Legally, the debate is harder to ignore because the law has already recognized non-human entities as persons. Corporations can own property, rivers and forests have been granted legal standing in some jurisdictions, and animals are slowly being recognized as rights-bearing beings. If personhood can be extended in these ways, AI inevitably raises the same question: could it be next?

But here lies the real issue. Fascination with AI personhood risks hiding what matters most: responsibility. Treating AI as a legal person today would not make it more accountable; it would give corporations a way to deflect liability by blaming the machine. What is urgently needed is a clear framework of accountability and regulation. Developers and companies must remain legally answerable if AI tools cause harm, spread misinformation, or reproduce bias. Recent practice shows the law already leaning this way. In 2023, a U.S. federal court confirmed that AI-generated works cannot be copyrighted, declaring that “human authorship is a bedrock requirement.” Unlike corporations, which hold rights because they are ultimately controlled by humans, AI systems cannot own intellectual property because they lack accountability and intention. And so far, no court or legislative body in the world is ready to assign legal personality to AI. 

The lesson is straightforward. Before we talk about rights for machines, we must secure human rights such as privacy, dignity, and equality in the face of rapidly advancing technology. The EU’s AI Act is one attempt, focusing on liability and risk rather than personhood. For countries like Pakistan, where regulatory structures are still developing, the priority is also clear: building safeguards for people, not extending rights to code.

As Anthis himself warns, “If we never invest in the sociology of AI and in government policy to manage the rise of digital minds, we may find ourselves the Neanderthals.” What society needs is not a bill of rights for AI but a system of responsibilities for humans. Only by keeping accountability at the center can we ensure that technology serves people, rather than people serving technology.

References

1. Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims, The Guardian (27 August 2025) https://www.theguardian.com/technology/2025/aug/27/chatgpt-scrutiny-family-teen-killed-himself-sue-open-ai
2. Jacy Reese Anthis, It’s time to prepare for AI personhood, The Guardian (30 September 2025) https://www.theguardian.com/commentisfree/2025/sep/30/artificial-intelligence-personhood
3. Lawrence B. Solum, Legal Personhood for Artificial Intelligences (1992) 70 North Carolina Law Review 1231 https://ssrn.com/abstract=1108671
4. Francis Rhys Ward, Towards a Theory of AI Personhood (2024) AI and Ethics (forthcoming) https://arxiv.org/abs/2501.13533
5. Brandeis Marshall, No Legal Personhood for AI (2023) https://www.sciencedirect.com/science/article/pii/S2666389923002453
6. Dev Gangjee, Lecture on AI and Intellectual Property, WIPO Summer School on Intellectual Property (Zhongnan University of Economics and Law, Wuhan, 2025)
7. The Interpretation of Personhood: AI and Its Inability to Copyright Works of ‘Original’ Character (2023) Journal of Intellectual Property and Technology Law https://pmc.ncbi.nlm.nih.gov/articles/PMC10682746/
8. EU Artificial Intelligence Act (Regulation (EU) 2024/1689 of the European Parliament and of the Council of 12 July 2024)

Author: Ayesha Youssuf Abbasi

The author, Ayesha Youssuf Abbasi, is a distinguished lawyer and legal researcher currently pursuing a fully funded Ph.D. in Public International Law under the Chinese Government Scholarship (CSC) at Zhongnan University of Economics and Law in China. Hailing from Islamabad, she has actively contributed to the legal community as a member of the Islamabad Bar Council. Her academic career includes teaching positions at Bahria University Islamabad, International Islamic University Islamabad, and Rawalpindi Law College.

The author’s research interests are broad yet deeply interconnected, spanning Artificial Intelligence and Law, International Humanitarian Law (IHL), Public International Law, Human Rights, International Environmental Law (IEL), and the rights of women and children. She adopts a multidisciplinary approach to these subjects, contributing to critical global discussions on governance, legal innovation, and social justice.
