Artificial intelligence (AI) covers technologies such as machine learning, generative models, automated decision-making, and facial recognition. AI offers advances in healthcare, agriculture, and governance, but policymakers warn of ethical and privacy risks.
In July 2025, Pakistan’s Federal Cabinet approved a National AI Policy to “accelerate digital transformation” by building a national AI ecosystem. The policy sets ambitious targets: training one million AI professionals by 2030 and supporting 50,000 civic AI projects and 1,000 locally developed AI solutions over five years. The plan creates new AI innovation and venture funds and establishes AI Centres of Excellence in major cities. A new AI Council, chaired by the IT Minister, will oversee this strategy. These reforms are intended to spur innovation, but if the legal framework is not robust, the threats to privacy, free expression, and fairness could outweigh the benefits.
The National AI Policy 2025 will operate alongside Pakistan’s existing digital laws. Pakistan’s main cybercrime statute, the Prevention of Electronic Crimes Act (PECA) 2016, has been used to police online content. In early 2025, Parliament amended PECA to criminalise the spread of “false or fake information,” with penalties of up to three years in prison. Rights groups warn that this offence is vaguely defined and risks a chilling effect on speech. The amendments also establish a Social Media Protection Authority (under the telecom regulator) with broad power to block or remove content on vague grounds. Together with occasional bans on social platforms, these changes have raised fears of unchecked censorship of the digital sphere.
Constitutional protections and oversight remain limited. Article 14 guarantees the right to privacy, and Pakistani courts have ruled that unlawful surveillance (such as unauthorised phone-tapping) violates this right. Article 19 guarantees freedom of speech and of the press (subject to reasonable restrictions). However, Pakistan has not yet enacted a comprehensive data-protection law – a Personal Data Protection Bill (2023) remains pending. In practice, PECA’s data provisions serve some of the same functions, but without clear consent or purpose limitations. The Ministry of IT & Telecom leads on technology policy, while agencies like the Pakistan Telecommunication Authority and the FIA’s Cyber Crime Wing enforce cyber rules. The AI policy foresees an AI Council chaired by the IT Minister to oversee implementation, but its legal status is unclear.
Pakistan’s nascent AI ecosystem faces serious privacy and surveillance risks. Without a strong privacy law, AI systems could collect and analyse personal data on a large scale with few limits. Although privacy is constitutionally protected, enforcement in the digital realm has been inconsistent. The Supreme Court has struck down illegal phone-tapping, but government surveillance powers (especially under laws like PECA) remain broad. In short, without clear legal limits on data collection and sharing, AI-driven monitoring may proceed with too little restraint.
Another problem is algorithmic bias and discrimination. If AI tools learn from flawed data, they can replicate or intensify social prejudices against women, minorities, or people from rural areas. The new policy does not mandate fairness standards or auditing. By contrast, UNESCO’s AI ethics guidance and the OECD AI Principles state that AI must promote “fairness and non-discrimination” and respect human rights, including equality and privacy. Without rules for bias testing, explainability requirements, or grievance mechanisms, AI in Pakistan could entrench existing unfairness and erode public trust.
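To make “bias testing” concrete, the sketch below computes one widely used fairness check, the disparate-impact ratio, over hypothetical approval decisions. Everything here is an illustrative assumption: the data, the group labels, and the 0.8 threshold (the conventional “four-fifths rule” from fairness auditing) come from common practice, not from any Pakistani law or from the National AI Policy.

```python
# Minimal sketch of a disparate-impact audit for a binary decision system.
# All data is hypothetical; the 0.8 threshold follows the common
# "four-fifths rule" used in fairness auditing, not any Pakistani statute.
from collections import defaultdict

def approval_rates(decisions):
    """Per-group approval rates from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, reference_group):
    """Ratio of each group's approval rate to the reference group's rate."""
    rates = approval_rates(decisions)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Hypothetical credit-scoring outcomes: (group, approved?)
sample = ([("urban", True)] * 80 + [("urban", False)] * 20
          + [("rural", True)] * 45 + [("rural", False)] * 55)

for group, ratio in disparate_impact(sample, "urban").items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: ratio={ratio:.2f} [{flag}]")
```

A legal audit requirement would not prescribe this exact metric; the point is that checks of this kind are cheap to run and straightforward to mandate for high-impact systems.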
There is also a problem of legal uncertainty. Pakistan’s policies and laws set broad goals while leaving key terms undefined. For example, PECA uses terms like “false information,” “public order,” and “national security” without defining them clearly. This gives officials wide discretion, while businesses and individuals are left guessing what compliance requires of their AI systems. That uncertainty can slow innovation, as companies hesitate to invest in developing or deploying AI. Clear definitions and guidelines are therefore needed both to prevent arbitrary enforcement and to support innovation.
There is a high risk that AI could widen Pakistan’s existing inequalities. The country’s technology infrastructure and R&D resources are concentrated in urban centres, while rural and underprivileged areas lag. If AI development and regulation are driven mainly by major companies or central authorities, the benefits may accrue to cities and elites, leaving others behind. Indeed, observers note that the draft AI policy was unveiled with minimal public consultation. Without inclusive policy-making, AI applications (like automated public service prioritisation or credit scoring) could systematically disadvantage marginalised regions or groups. Ensuring broad participation and capacity-building is therefore crucial to share AI’s benefits more equitably.
Pakistan can learn from international models. The European Union’s AI Act takes a risk-based approach: it bans the most dangerous uses (such as untargeted facial recognition and social scoring) and strictly regulates other “high-risk” systems with requirements for transparency, documentation, and human oversight. The AI Act works alongside the EU’s GDPR, which grants individuals strong data rights (including limits on automated decision-making). This illustrates that a mix of risk-based regulation, independent oversight bodies, and existing data/privacy laws can be effective.
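As a rough illustration of what “risk-based” means in practice, the sketch below encodes the AI Act’s broad four-tier structure as a simple triage function. The tier names track the Act, but the matching rules are illustrative assumptions, not a reproduction of the Act’s actual annexes or definitions.

```python
# Simplified illustration of risk-based triage in the spirit of the EU AI Act.
# The four tiers mirror the Act's broad structure; the keyword rules below are
# illustrative assumptions, not the Act's actual annexes.
PROHIBITED = {"social scoring", "untargeted facial recognition"}
HIGH_RISK = {"biometric identification", "credit scoring",
             "welfare eligibility", "border control"}
LIMITED = {"chatbot", "deepfake generation"}  # transparency duties only

def classify(use_case: str) -> str:
    """Map a described use case to a regulatory tier."""
    if use_case in PROHIBITED:
        return "unacceptable risk: banned"
    if use_case in HIGH_RISK:
        return "high risk: conformity assessment, documentation, human oversight"
    if use_case in LIMITED:
        return "limited risk: transparency obligations"
    return "minimal risk: voluntary codes of conduct"

for case in ("social scoring", "credit scoring", "chatbot", "spam filtering"):
    print(f"{case} -> {classify(case)}")
```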
International human rights law and soft-law instruments also set the ground rules. Pakistan is party to the ICCPR, and the UDHR likewise affirms the rights to privacy and free expression. These obligations mean any AI regulation must not erode these fundamental freedoms. UNESCO’s Recommendation on the Ethics of AI (2021) makes human rights and human dignity the “cornerstone” of AI governance and emphasises values such as transparency, accountability, and inclusive governance. Similarly, the OECD AI Principles call for AI that is innovative but “trustworthy,” with human-centric values, fairness, and safety built in. These international standards point to best practices: for example, requiring AI impact assessments, independent audits of high-risk systems, and public appeals processes. They also highlight pitfalls to avoid: vague laws, secret tribunals, or punitive regimes could do more harm than good.
Drawing on these lessons, Pakistan should strengthen its AI framework. The law needs clear definitions: it should specify what counts as “AI” and identify high-risk applications (for example, biometric identification systems and automated decision-making in welfare or security). A robust data protection regime is also essential: the pending Personal Data Protection Bill should be enacted, establishing an independent privacy authority. Such a law must enshrine principles like consent, data minimisation, and purpose limitation, and require privacy impact assessments for major AI deployments. Surveillance powers should be clearly limited, with judicial warrants required for access to communications or personal data.
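Purpose limitation, in particular, can be enforced technically as well as legally. The sketch below shows one hypothetical way to do so in code: a record releases a field only for a purpose the data subject consented to at collection. The class, field names, and sample data are all invented for illustration.

```python
# Minimal sketch of purpose limitation enforced in code: data access must
# cite a purpose declared at collection time. This API is a hypothetical
# illustration, not an existing library or a legal mechanism.
class PersonalRecord:
    def __init__(self, data: dict, declared_purposes: set):
        self._data = data
        self._purposes = declared_purposes  # purposes consented to at collection

    def read(self, field: str, purpose: str):
        """Release a field only for a consented purpose."""
        if purpose not in self._purposes:
            raise PermissionError(
                f"purpose '{purpose}' not among declared: {self._purposes}")
        return self._data[field]

record = PersonalRecord(
    {"cnic": "xxxxx-xxxxxxx-x", "district": "Tharparkar"},
    declared_purposes={"welfare eligibility"},
)
print(record.read("district", "welfare eligibility"))  # allowed
# record.read("cnic", "ad targeting")  # would raise PermissionError
```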
Free expression safeguards are crucial, too. Provisions on “false information” should be sharply narrowed, with precise definitions. Any content takedown order must involve independent review (for instance, by a tribunal or court) and allow for timely appeals. Citizens should have clear rights to challenge any order that restricts their speech. Transparency and accountability must be built into AI deployment: developers and deployers (especially of high-impact systems) should document their algorithms, disclose training data sources where possible, and make public how major decisions are made. The government could maintain a registry of public-sector AI tools and run regulatory sandboxes for testing new systems under supervision. Independent auditing mechanisms, whether through a regulator or third parties, should be mandated for critical AI.
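One concrete shape such a registry could take is a public record per deployed system. The schema below is a hypothetical design choice, not drawn from the National AI Policy itself; it simply shows the kind of disclosure the paragraph above argues for.

```python
# Sketch of a public-sector AI registry entry. The schema is a hypothetical
# design, not taken from the National AI Policy 2025.
from dataclasses import dataclass

@dataclass
class RegistryEntry:
    system_name: str
    deploying_agency: str
    purpose: str                 # what decisions the system informs
    risk_tier: str               # e.g. "high" for welfare or biometrics
    training_data_sources: list  # disclosed where possible
    human_oversight: str         # who can override the system
    last_audit: str              # date of the most recent independent audit
    appeal_channel: str          # how affected citizens can contest outputs

entry = RegistryEntry(
    system_name="Hypothetical welfare triage system",
    deploying_agency="Example provincial agency",
    purpose="Rank applicants for cash-transfer review",
    risk_tier="high",
    training_data_sources=["household survey (illustrative)"],
    human_oversight="Caseworker sign-off required before any denial",
    last_audit="2025-06-30",
    appeal_channel="Written appeal to an independent review board",
)
print(entry.system_name, "->", entry.risk_tier)
```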
In conclusion, AI presents a dual promise and peril for Pakistan. It could help transform public services, education, and the economy, but without strong rights-based rules, it could deepen surveillance, censorship, and inequality. The National AI Policy 2025 is a welcome vision, but it must be followed by robust, transparent laws and institutions adapted to Pakistan’s context. Human rights and human dignity should remain at the core of AI regulation. Legislators, regulators, industry, and civil society will need to work together – not by merely copying foreign rules, but by crafting Pakistan’s own path – to ensure that AI innovation goes hand in hand with the protection of fundamental liberties.