AI at the Crossroads: Pakistan’s Regulatory Challenge

Artificial intelligence (AI) covers technologies such as machine learning, generative models, automated decision-making, and facial recognition. AI offers advances in healthcare, agriculture, and governance, but policymakers warn of ethical and privacy risks.

In July 2025, Pakistan’s Federal Cabinet approved a National AI Policy to “accelerate digital transformation” by building a national AI ecosystem. The policy sets ambitious targets: training one million AI professionals by 2030 and supporting 50,000 civic AI projects and 1,000 locally developed AI solutions over a five-year period. The plan creates new AI innovation and venture funds and establishes AI Centres of Excellence in major cities. A new AI Council, chaired by the IT Minister, will oversee this strategy. These reforms are intended to spur innovation, but if the legal framework is not robust, the threats to privacy, free expression, and fairness could outweigh the benefits.

The National AI Policy 2025 will operate in conjunction with Pakistan’s existing digital laws. The country’s main cybercrime statute, the Prevention of Electronic Crimes Act (PECA) 2016, has long been used to police online content. In early 2025, Parliament amended PECA to criminalise the spread of “false or fake information,” with penalties of up to three years in prison. Rights groups warn that this offence is vaguely defined and risks chilling speech. The amendments also establish a Social Media Protection Authority (under the telecom regulator) with broad powers to block or remove content on loosely specified grounds. Together with occasional bans on social media platforms, these changes have raised concerns about unchecked censorship in the digital sphere.

Constitutional protections and oversight remain limited. Article 14 of the Constitution protects the dignity of the person and the privacy of the home, and Pakistani courts have ruled that unlawful surveillance (such as unauthorised phone-tapping) violates this right. Article 19 guarantees freedom of speech and of the press, subject to reasonable restrictions. However, Pakistan has not yet enacted a comprehensive data-protection law – a Personal Data Protection Bill (2023) remains pending. In practice, PECA’s data provisions serve some of the same functions, but without clear consent or purpose limitations. The Ministry of IT & Telecom leads on technology policy, while agencies like the Pakistan Telecommunication Authority and the FIA’s Cyber Crime Wing enforce cyber rules. The proposed AI Council would sit atop this structure, yet its legal status remains unclear.

Pakistan’s nascent AI ecosystem faces serious privacy and surveillance risks. Without a strong privacy law, AI systems could collect and analyse personal data on a large scale with few limits. Although privacy is constitutionally protected, enforcement in the digital realm has been inconsistent. The Supreme Court has struck down illegal phone-tapping, but government surveillance powers (especially under laws like PECA) remain broad. In short, without clear legal limits on data collection and sharing, AI-driven monitoring may proceed with too little restraint.

Another problem is algorithmic bias and discrimination. If AI tools learn from flawed data, they can replicate or intensify social prejudices against women, minorities, or people from rural areas. The new policy does not mandate fairness standards or auditing, even though UNESCO’s guidance on AI ethics and the OECD AI Principles stipulate that AI must promote “fairness and non-discrimination” and respect human rights, including equality and privacy. Without clear rules for bias testing, explainability requirements, or grievance mechanisms, AI in Pakistan could exacerbate existing unfairness and erode public trust.
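To make “bias testing” concrete, here is a minimal sketch in Python of one widely used audit metric, the demographic parity gap – the difference in favourable-outcome rates between groups. The data, group labels, and flagging threshold are illustrative assumptions, not anything prescribed by the policy.

```python
# Minimal sketch of one common bias test: the demographic parity gap.
# All data below is hypothetical; a real audit would use a system's actual
# decisions and legally defined protected attributes.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical outputs from an automated credit-scoring system.
decisions = [
    ("urban", True), ("urban", True), ("urban", False), ("urban", True),
    ("rural", False), ("rural", False), ("rural", True), ("rural", False),
]

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(f"Approval rates by group: {rates}")
print(f"Demographic parity gap: {gap:.2f}")  # a regulator might flag gaps above a set threshold
```

The computation itself is simple; the hard regulatory questions are which metrics, groups, and thresholds an auditing mandate should specify.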

There is also a problem of legal uncertainty. Pakistan’s policies and laws set broad goals while leaving key terms undefined. For example, PECA employs terms such as “false information,” “public order,” and “national security” without defining them clearly. This gives officials wide discretion while leaving businesses and individuals unsure how to keep their AI systems compliant. Such uncertainty can slow innovation, as companies hesitate to invest in developing or deploying AI. Clear definitions and guidelines are therefore necessary to avoid arbitrary enforcement and to support innovation.

There is a high risk that AI could widen Pakistan’s existing inequalities. The country’s technology infrastructure and R&D resources are concentrated in urban centres, while rural and underprivileged areas lag behind. If AI development and regulation are driven mainly by major companies or central authorities, the benefits may accrue to cities and elites, leaving others behind. Indeed, observers note that the draft AI policy was unveiled with minimal public consultation. Without inclusive policy-making, AI applications (such as automated public-service prioritisation or credit scoring) could systematically disadvantage marginalised regions or groups. Ensuring broad participation and capacity-building is therefore crucial if AI’s benefits are to be shared more equitably.

Pakistan can learn from international models. The European Union’s AI Act adopts a risk-based approach: it bans the most hazardous uses (such as social scoring and untargeted scraping of facial images to build recognition databases) and strictly regulates other “high-risk” systems, imposing requirements for transparency, documentation, and human oversight. The AI Act works alongside the EU’s GDPR, which grants individuals strong data rights, including limits on solely automated decision-making. This illustrates how risk-based regulation, independent oversight bodies, and existing data-protection law can work together effectively.

International human rights law and soft-law instruments also set the ground rules. Pakistan is a party to the ICCPR and has endorsed the Universal Declaration of Human Rights, both of which guarantee privacy and freedom of expression. These obligations mean any AI regulation must not erode these fundamental freedoms. UNESCO’s Recommendation on the Ethics of AI (2021) makes human rights and human dignity the “cornerstone” of AI governance and emphasises values such as transparency, accountability, and inclusive governance. Similarly, the OECD AI Principles call for AI that is innovative but “trustworthy,” with human-centric values, fairness, and safety built in. These international standards outline best practices, including AI impact assessments, independent audits of high-risk systems, and public appeal processes. They also highlight pitfalls to avoid: vague laws, secret tribunals, or punitive regimes could do more harm than good.

Drawing on these lessons, Pakistan should strengthen its AI framework. The law needs clear definitions: it should specify what counts as “AI” and identify high-risk applications (for example, biometric identification systems or automated decision-making in welfare and security). A robust data-protection regime is also essential. The pending Personal Data Protection Bill should be enacted to establish an independent privacy authority. Such a law must enshrine principles like consent, data minimisation, and purpose limitation, and require privacy impact assessments for major AI deployments. Surveillance powers should be clearly limited, with judicial warrants required for access to communications or personal data.

Free expression safeguards are crucial, too. Provisions on “false information” should be sharply narrowed, with precise definitions. Any content takedown order must be subject to independent review (for instance, by a tribunal or court) and allow for timely appeals. Citizens should have clear rights to challenge any order that restricts their speech. Transparency and accountability must be built into the deployment of AI. Developers and deployers (especially of high-impact systems) should document their algorithms, disclose the sources of training data where possible, and make public the rationale behind major decisions. The government could maintain a registry of public-sector AI tools and establish regulatory sandboxes for testing new systems under supervision. Independent auditing mechanisms, whether through a regulator or third parties, should be mandated for critical AI.
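As an illustration of what a public-sector AI registry could record, the following Python sketch defines a hypothetical entry format; every field name, risk tier, and value here is an assumption made for illustration, not a prescribed standard.

```python
# Hypothetical sketch of a machine-readable entry in a public-sector AI registry.
# Field names and risk tiers are illustrative assumptions, not a mandated format.

from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIRegistryEntry:
    system_name: str                      # public name of the deployed system
    operator: str                         # agency responsible for the deployment
    purpose: str                          # what decisions the system informs or makes
    risk_tier: str                        # e.g. "high" for welfare, security, or biometric uses
    training_data_sources: list = field(default_factory=list)
    human_oversight: bool = True          # is a human reviewer in the loop?
    last_independent_audit: str = "none"  # ISO date of the most recent audit
    appeal_channel: str = ""              # how affected citizens can contest decisions

# Example entry for an imagined welfare-eligibility screening tool.
entry = AIRegistryEntry(
    system_name="Benefit Eligibility Screener (illustrative)",
    operator="Hypothetical Provincial Welfare Department",
    purpose="Prioritise applications for manual review",
    risk_tier="high",
    training_data_sources=["historical application records (anonymised)"],
    last_independent_audit="2025-06-30",
    appeal_channel="written appeal to a designated review officer",
)

print(json.dumps(asdict(entry), indent=2))  # a registry could publish entries in a form like this
```

Publishing entries in a standard, machine-readable form would let journalists, researchers, and affected citizens see which systems are in use and whether they have been audited.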

In conclusion, AI presents both promise and peril for Pakistan. It could help transform public services, education, and the economy; however, without strong rights-based rules, it could also deepen surveillance, censorship, and inequality. The National AI Policy 2025 is a welcome vision, but it must be followed by robust, transparent laws and institutions adapted to Pakistan’s context. Human rights and human dignity should remain at the core of AI regulation. Legislators, regulators, industry, and civil society will need to work together – not by merely copying foreign rules, but by crafting Pakistan’s own path – to ensure that AI innovation goes hand in hand with the protection of fundamental liberties.

Author: Simra Sohail

Simra Sohail holds an LLB (Hons) from the University of London and is currently a law faculty member at LGS International Degree Programme. She is also a research fellow at the Global Institute of Law.
