Judicial Use of AI in Pakistan: Promise, Peril, and Constitutional Boundaries

It was in April 2025, inside the Supreme Court, that Justice Syed Mansoor Ali Shah wrote into Pakistan’s judicial record a sentence that may echo longer than the dispute that occasioned it. He declared that Artificial Intelligence (AI) ought to be ‘welcomed with careful optimism’. The case, Ishfaq Ahmed vs Mushtaq Ahmed, was in form a civil appeal — routine, even forgettable — yet the judgment became something else entirely: a constitutional threshold, quietly crossed.

Behind this verdict lies the weary weight of a court system chronically burdened by delay. For decades, Pakistan’s judiciary has been tasked with doing more with less: fewer judges, thicker files, and slower processes. Litigants die before decisions arrive. The right to a fair trial begins to look aspirational. Into this scene steps a new character — AI — not with robes or oaths, but with algorithms, model prompts, and the promise of speed. The judgment in Ishfaq Ahmed did not close the gates on this new presence. Instead, it opened them, cautiously, while marking the path with constitutional stones.

Justice Shah grounded his reasoning in Article 10A of the Constitution, which guarantees due process and a fair trial, and Article 37(d), which promises inexpensive and expeditious justice. The first is a binding fundamental right; the second, though a Principle of Policy, is a constitutional commitment the courts have repeatedly invoked. Neither is an aspiration to be chased in the abstract. The court held that AI can support the judiciary — in legal research, linguistic clarity, and even case flow analytics — but its use must never usurp the judge’s interpretive role. Law may be aided by pattern recognition, but it is not reducible to it.

In doing so, the Court offered more than a ruling; it offered a paradigm. One in which AI is an assistant, not an authority. A mirror, not a mask. The danger is not in the use of tools — courts have long embraced new instruments, from typists to PDF readers. The danger lies in outsourcing judgment itself.

The ruling gave voice to three cautions: opacity, bias, and hallucination. Each has already scarred the reputation of AI in other jurisdictions. Algorithmic opacity refers to the inability to trace how an AI system arrives at its output. In judicial settings, where every decision must be justified, this is not just a technical flaw; it is a constitutional defect. An unexplained ruling is an unjust ruling.

Bias is the deeper, quieter danger. If the dataset on which the AI is trained encodes structural discrimination, as so many legal datasets inevitably do, then those patterns will be replicated in the AI’s outputs. The risk is not simply one of explicit prejudice, but of systemic reinforcement. Over-reliance on certain precedents. Neglect of evolving jurisprudence. Omission of minority voices. Such biases are hard to detect and harder still to root out once embedded in the adjudicative process.

And then there is hallucination — a term borrowed from machine learning, but disturbingly apt. These are instances where AI tools generate fictitious citations or invent case law outright. For a system built on binding precedent, such errors are corrosive. What is justice if it rests on fiction?

Yet the Court did not dismiss AI’s potential out of fear. It recognised, with sober realism, the possibilities. AI tools can sift through vast repositories of legal data in seconds — what once took interns hours to find in the All Pakistan Legal Decisions (PLD) may now be surfaced instantly. The drudgery of precedent-hunting can be delegated, freeing judicial time for actual deliberation.

Language is another frontier. Legal writing in Pakistan often labours under imprecise translation or bureaucratic syntax. AI can offer stylistic revisions, suggesting cleaner formulations without altering substance. This is not the crafting of judgments, which must remain deeply human, but rather the clearing of underbrush so that the legal reasoning can emerge without distortion.

Case flow management may be AI’s most immediate operational use. By analysing filing patterns, categorising delay points, and proposing reallocation, AI can assist court registrars and judicial policymakers in resource management. But only if such systems remain transparent, challengeable, and subject to human override.

In short: the promise is real. But so is the peril. And therein lies the balance.

Internationally, similar debates have already begun to take institutional form. The Council of Europe’s European Commission for the Efficiency of Justice (CEPEJ) adopted its European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems in 2018. It lays down five core principles: respect for fundamental rights; non-discrimination; quality and security of data and methods; transparency, impartiality and fairness; and user control, which keeps a human decision-maker in the loop. These are not ornamental guidelines. They are legal compass points.

Pakistan must now chart its own course — not by copy-pasting Western models, but by drawing from its constitutional roots. The Ishfaq Ahmed judgment is a beginning. The National Judicial (Policy Making) Committee (NJPMC) and the Law and Justice Commission of Pakistan (LJCP) should build on it.

A national framework is needed, resting on three pillars. First, human verification: every AI-assisted output must be reviewed, and responsibility for the decision must remain with the judge. Second, disclosure: every use of AI in legal research, drafting, or scheduling must be declared in the judgment, and audit trails must be maintained — litigants have a right to know which parts of their case were touched by code. Third, source verification: before any citation or legal argument derived from AI enters a decision, it must be checked against an authorised source — be it the PLD, SCMR, or an official court reporter.

Judicial academies must also evolve. Training on AI literacy should be mandatory — not just how to use these tools, but when not to. Judges must be equipped to recognise AI’s blind spots, its hallucinations, and its hidden biases.

Transparency is essential. Courts should publish annual AI usage reports: how often AI tools were deployed, for what tasks, and in which jurisdictions. This is not only a matter of good governance — it is a matter of public trust. As algorithms enter the courtroom, they must do so in the full light of public scrutiny.

There is also the matter of data protection. Court records contain confidential medical data, family histories, and financial transactions. These cannot be fed into offshore or commercial AI platforms without robust legal safeguards. Any judicial AI must be hosted locally, with full encryption, access controls, and subject to Pakistani data protection regulations.

In all of this, we must remember: the law evolves through context. It absorbs the innovations of its age — the printing press, the digital brief, the online cause list. AI is merely the latest. What matters is not the tool, but the ethos.

Justice Shah’s words — that AI may serve the judiciary, but not substitute it — are not a rejection of progress. They are a reaffirmation of what law is: an ethical endeavour, not a computational one. The court is not a machine. It is a moral space. A place where rights are not calculated but recognised. Where judgment is not rendered by probability, but by principle.

AI can speed the gears of justice, but only the judge can bear its weight. And in Pakistan’s long and unfinished journey toward constitutional justice, that weight remains sacred.

Author: Simra Sohail

Simra Sohail holds an LLB (Hons) from the University of London and is currently a law faculty member at LGS International Degree Programme. She is also a research fellow at the Global Institute of Law.
