Regulating Artificial Intelligence: Can the Law Keep Up with Innovation?

Artificial Intelligence (AI) has emerged as a transformative technology influencing key sectors such as healthcare, finance, law enforcement, and armed conflict. It is broadly defined as the capability of machines to execute functions traditionally associated with human cognition, such as learning, logical inference, complex decision-making, and linguistic comprehension.

AI is increasingly integrated into institutional and technological frameworks, with direct effects on human rights, personal freedoms, and standards of living. While technological innovation, exemplified by large language models (LLMs), generative AI, autonomous systems, and robotics, progresses at an unprecedented pace, legal and regulatory frameworks tend to develop far more slowly. For example, self-driving cars, such as those tested by Tesla and Waymo, are being trialled on public roads while liability frameworks for accidents caused by AI decisions remain underdeveloped. This temporal disconnect between rapid technological development and slower legal evolution has prompted researchers, legislators, and ethicists worldwide to reassess existing regulatory frameworks. Many now question whether legal systems rooted in judicial precedent and legal certainty can adequately respond to the evolving, complex, opaque, and inherently unpredictable nature of artificial intelligence systems[1].

This article reflects on the responsiveness of legal regimes to the challenges posed by the rapid development of AI. It explores initiatives to develop dedicated legal frameworks for artificial intelligence and evaluates whether existing legal doctrines, through regulatory flexibility, forward-thinking approaches, and international collaboration, can sustain coherence amid rapid technological innovation. Using examples from the UK, EU, and US, it critically examines the extent to which existing legal frameworks are sufficient to hold AI systems accountable and to provide oversight that ensures fairness and transparency.

The Nature of AI Innovation: A Legal Challenge

The accelerated development of artificial intelligence (AI) presents significant regulatory and jurisprudential challenges. At the heart of the problem lies a structural disconnect between the rapid pace of technological innovation and the comparatively slow evolution of legal and regulatory frameworks. This phenomenon, commonly described as “regulatory lag,” is not merely a timing issue; it creates normative uncertainty, weakens oversight, and risks ethical and societal harm[2]. Grasping the distinctive attributes of artificial intelligence systems, such as their inscrutability, operational independence, and adaptive learning abilities, demonstrates why standard legal mechanisms often prove inadequate in addressing AI’s complexities.

Technological advancement is unfolding at an unparalleled pace: innovations that once demanded decades of scientific inquiry, such as predicting protein structures, producing hyper-realistic video content from textual prompts, or deploying AI to perform legal interpretative functions, now take place within months[3]. The launch of ChatGPT by OpenAI in November 2022 quickly captured global attention and represented a watershed moment in the evolution of AI technologies, amassing over 100 million users within two months and spurring the rapid emergence of a broader generative AI ecosystem[4]. In less than two years, successor models such as GPT-4 and GPT-4o have further enabled cross-modal processing and interaction, allowing systems to interpret and generate content across text, visual, and auditory modalities and to process video data[5].

Conversely, law-making processes are inherently methodical, deliberative, and consensus-driven. They involve collaborative dialogue among policymakers, experts, and the public, requiring impact assessments and legislative committee scrutiny, followed by potential judicial oversight. A prominent example is the European Union’s Artificial Intelligence Act: initially proposed in 2021, it is not anticipated to become fully operative until 2026 or later. Although legal prudence and legislative diligence are necessary to safeguard fundamental rights, the result is a regulatory environment that consistently lags behind technological advancement, leaving legal gaps unaddressed[6]. This temporal mismatch and absence of timely regulation allow many AI innovations to be deployed across commercial domains, public sector operations, and essential national infrastructures before sufficient legal and regulatory mechanisms have been implemented.

Characteristics of AI That Complicate Regulation

AI systems do not simply enhance the speed or efficiency of existing technologies; they embody transformative ways of processing information and novel modes of reasoning that defy established legal frameworks. Three critical attributes create legal complexity: opacity, autonomy (operational independence), and learning capability.

a) Opacity (“Black Box” Problem):

Many sophisticated modern AI models, particularly deep learning architectures built on neural network designs, exhibit decision-making pathways that remain opaque even to their developers[7]. This phenomenon is often described as the “black box” effect: despite producing accurate outcomes, a system’s internal reasoning may be difficult to interpret and inaccessible to human understanding or scrutiny[8]. From a legal standpoint, this gives rise to several challenges (a toy sketch after these points makes the problem concrete):

Procedural fairness / right to due process: individuals whose rights are affected by AI-generated outcomes (such as the rejection of a loan application or identification by predictive policing software) may lack the means to comprehend the rationale underlying such decisions and ultimately be denied the opportunity to challenge the outcomes legally[9]. Ensuring compliance with due process therefore becomes legally challenging.

Legal accountability and governance: regulatory bodies and judicial institutions face significant difficulties in evaluating whether outcomes generated by AI systems adhere to statutory requirements when the mechanism that produced those outcomes cannot be examined and scrutinised[10].
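To make the “black box” point concrete, consider the following toy sketch in Python. The feature names are hypothetical and the random weights merely stand in for a genuinely trained model; the point is structural: the decision is a composition of dozens of numeric parameters, none of which corresponds to a legible rule of the kind a court or regulator could interrogate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical loan-application features: income, debt ratio,
# years employed, prior defaults (all scaled to [0, 1]).
applicant = np.array([0.42, 0.73, 0.15, 0.88])

# Random weights stand in for a trained model: a real system would have
# millions of such numbers, none of which encodes a legible rule
# like "reject if income below X".
W1, b1 = rng.normal(size=(16, 4)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

hidden = np.maximum(0, W1 @ applicant + b1)      # ReLU hidden layer
score = 1 / (1 + np.exp(-(W2 @ hidden + b2)))    # sigmoid output in (0, 1)

decision = "approve" if score[0] > 0.5 else "reject"
print(f"score = {score[0]:.3f} -> {decision}")
# Asking "why was the application rejected?" has no answer shorter than
# the arithmetic itself: the outcome is a composition of 97 parameters.
```

Scaled up to the millions of parameters of a production system, the explanatory gap only widens.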


b) Autonomy:

In contrast to traditional, static-code software, modern artificial intelligence systems, particularly those deployed in autonomous transportation, unmanned aerial systems, and high-frequency financial trading, are capable of operating without direct human intervention[11]. Their outputs may evolve dynamically in response to changing inputs and real-time environmental feedback.

This autonomous functioning triggers multiple legal complexities, for instance:

  • Who should legally be questioned or held responsible in the event of a crash caused by an autonomous vehicle?[12] Should liability be attributed to the vehicle manufacturer, hardware producer, software engineer, human operator, or the AI entity itself?[13]
  • Can the mens rea of a criminal offence be imputed to a human actor where the harmful conduct was not expressly directed by the human developer but was generated through the AI’s autonomous decision-making?[14]
  • Does the proliferation of autonomous decision-making necessitate the formulation of a new form of strict or vicarious liability expressly designed to address the unique characteristics of AI systems?[15]

c) Adaptive Learning and Dynamic Behaviour:

A hallmark of advanced AI technologies is their ability to iteratively learn, extract patterns, derive insights, and modify behaviour through exposure to data[16]. Via supervised, unsupervised, or reinforcement learning algorithms, these systems dynamically refine their outputs as they process new information, inherently adjusting their behaviour in line with novel input patterns[17]. Although this level of adaptability enhances functionality and operational efficiency, it simultaneously complicates regulatory oversight in the following ways (see the sketch after this list):

  • The same AI model may produce divergent outputs, shifting unpredictably in response to similar inputs or analogous circumstances at different points in time[18].
  • AI systems may gradually deviate, or experience functional drift, from their initial programming and design parameters as they continue to learn and adapt to environmental changes, complicating the assessment of whether resulting harm is attributable to defective design embedded in the initial programming or to the system’s later autonomous functioning[19].
  • Transparency and auditability are impaired when the rationale behind decisions is continuously reshaped by incoming data and successive iterations[20].
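A minimal sketch of this dynamic, using a synthetic data stream and a simple online logistic model (both illustrative assumptions, not any particular deployed system), shows how an adaptive system can give two different answers to the identical question before and after post-deployment learning:

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.zeros(3)                        # model weights, updated continuously

def predict(x):
    """Logistic score in (0, 1) for input x under the current weights."""
    return 1 / (1 + np.exp(-(w @ x)))

probe = np.array([1.0, 0.5, -0.2])     # the same fixed query, asked twice

print(f"answer at certification time:  {predict(probe):.3f}")

# Post-deployment, the system keeps learning from a stream of new examples,
# so its parameters -- and therefore its answers -- drift away from the
# configuration that was assessed at release.
for _ in range(500):
    x = rng.normal(size=3)
    y = float(x[0] + 0.3 * x[1] > 0)   # signal from the environment
    w += 0.1 * (y - predict(x)) * x    # one online gradient step

print(f"answer after 500 online steps: {predict(probe):.3f}")
```

The system certified at release and the system answering queries months later are, in a functional sense, no longer the same artefact, which is precisely what strains point-in-time certification.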

Where behavioural deviations occur post-deployment, conventional notions of product liability and regulatory certification are strained[21]. Under EU and UK tort frameworks, liability traditionally hinges on foreseeability and defect at the point of production. However, AI systems that autonomously evolve after market release raise a critical legal question: can developers or deployers be held accountable for harms that could not reasonably have been anticipated at the time of programming?[22] This tension underscores a potential inadequacy in existing ex ante regulatory approaches, which are predicated on static technological characteristics rather than iterative, adaptive behaviours.

Moreover, the adaptive nature of machine learning systems may perpetuate or exacerbate pre-existing societal biases encoded within training datasets[23]. Empirical evidence demonstrates that algorithmic recruitment tools have systematically disadvantaged female candidates, reflecting historical gender imbalances[24]. Such outcomes, though latent during initial development, may only manifest post-deployment, exposing individuals to discrimination in violation of anti-discrimination statutes, such as the UK Equality Act 2010, and potentially contravening EU directives on equal treatment in employment[25]. From a regulatory perspective, these latent harms highlight the limitations of purely ex ante oversight mechanisms. Static compliance standards, such as pre-market conformity assessments, may fail to capture discriminatory drift, emphasising the necessity for continuous, ex post monitoring and adaptive regulatory mechanisms[26].
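The mechanism is easy to reproduce. In the hedged sketch below (synthetic data invented for illustration, not the recruitment dataset reported in the press), a simple logistic model is fitted to a biased hiring history; gender is never given to the model, yet a correlated proxy feature transmits the historical bias into its predictions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Synthetic hiring history. "Coding-club membership" is a proxy feature
# that correlates with gender, and past hiring decisions favoured club
# members regardless of actual experience.
male = rng.random(n) < 0.5
experience = rng.normal(5, 2, n)
club = (rng.random(n) < np.where(male, 0.7, 0.2)).astype(float)
hired = (club + 0.05 * experience + rng.normal(0, 0.3, n) > 0.8).astype(float)

# Fit a plain logistic model on the biased history. Gender is NOT a feature.
X = np.column_stack([experience, club])
w, b = np.zeros(2), 0.0
for _ in range(300):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.01 * X.T @ (p - hired) / n
    b -= 0.01 * np.mean(p - hired)

p = 1 / (1 + np.exp(-(X @ w + b)))
print(f"predicted hire rate, men:   {p[male].mean():.2f}")
print(f"predicted hire rate, women: {p[~male].mean():.2f}")
# The proxy feature transmits the historical bias into new predictions.
```

Because no protected attribute appears in the model, a pre-market conformity check of its inputs would find nothing to object to; the disparity surfaces only in its outcomes.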

Existing Legal Frameworks: Are They Fit for Purpose?

As AI technologies continue to develop and integrate across various domains and sectors, scrutiny falls on whether current legal frameworks and regulatory mechanisms are sufficient to oversee their design, implementation, and resulting societal impacts. The European Union has taken a comprehensive approach through its proposed AI Act, which classifies AI systems into categories of risk, ranging from minimal to high, and imposes stringent requirements on systems used in critical infrastructure and employment. The United States, by contrast, has adopted a fragmented, sector-specific approach: agencies such as the Federal Trade Commission oversee general AI practices relating to consumer protection[27], while individual states such as California impose additional requirements through privacy legislation[28]. Although this framework allows for innovation and rapid technological deployment, it leaves gaps in comprehensive oversight. The United Kingdom has adopted a pro-innovation, sector-based approach outlined in its 2023 AI White Paper, which aims to maintain competitiveness by encouraging voluntary codes of conduct while fostering responsible AI deployment[29]. However, excessive reliance on such soft regulation raises concerns about the protection of fundamental rights, particularly for high-risk systems where voluntary compliance may be insufficient.

Although instruments like the General Data Protection Regulation (GDPR) and long-established tort principles provide a foundational basis for regulatory oversight, they were formulated at a time when technologies lacked the capacity for autonomy and continuous adaptation. Their underlying legal principles rest on assumptions of human control and static functioning, assumptions that prove increasingly inadequate when confronted with the realities of AI systems[30]. This part of the article examines the strengths and shortcomings of two foundational legal regimes: data protection legislation and criminal liability.

Data Protection Laws (e.g., GDPR):

The General Data Protection Regulation (GDPR) represents one of the most comprehensive and stringent data protection frameworks globally, with significant implications for AI systems that process personal information.[31] It establishes core principles, such as data minimisation, purpose limitation, lawful processing, and individual rights over personal data, that aim to ensure accountability and fairness in data handling[32]. However, the interaction between AI technologies and these principles presents profound legal and practical challenges.

Article 22 of the GDPR is particularly relevant to AI, prohibiting decisions based solely on automated processing, including profiling, that produce legal or similarly significant effects on individuals[33]. In principle, this provision is intended to safeguard against opaque and consequential decisions made without human oversight. From a legal perspective, Article 22 embodies the broader principle that accountability and reasoned decision-making must underpin technologically mediated outcomes, echoing foundational doctrines of administrative law and due process.[34]

Yet in practice, enforcing Article 22 is fraught with ambiguity. The regulation provides limited guidance on what constitutes a decision made “solely” by automated means or a consequence that is sufficiently “significant” to trigger protection.[35] These definitional uncertainties create enforcement gaps and raise questions about the scope of regulators’ supervisory powers. Moreover, the exceptions in Article 22(2), which permit automated decision-making when necessary for contractual performance or authorised by law, further dilute protections, potentially allowing high-risk AI applications to escape meaningful scrutiny.[36]

Deep learning and other complex AI architectures exacerbate these regulatory difficulties. Their opacity and non-linear decision-making processes render the logic behind outputs largely inscrutable to affected individuals, making it extremely challenging to exercise rights to explanation, objection, or judicial review[37]. This opacity undermines the GDPR’s core principles of fairness and due process, revealing a tension between the law’s human-centric assumptions and the operational realities of contemporary AI systems[38].

From a legal analysis standpoint, Article 22 demonstrates a structural limitation in ex ante regulatory frameworks. While it establishes formal safeguards, it does not adequately address the emergent risks of adaptive AI systems that evolve post-deployment, potentially producing discriminatory or harmful outcomes that were unforeseeable at the time of data collection.[39] This reveals a critical gap: conventional regulatory tools are insufficient for algorithmic systems whose autonomy, opacity, and continuous learning challenge traditional concepts of foreseeability, accountability, and enforceable rights.[40] The law must therefore consider complementary mechanisms, such as ongoing auditing obligations, algorithmic impact assessments, and dynamic regulatory oversight, to ensure meaningful protection for individuals in the AI era.
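What such an ongoing auditing obligation might look like in practice is sketched below. It is a hypothetical design, not a mechanism prescribed by the GDPR: decisions from a deployed system are batched periodically, the favourable-outcome rates of two groups are compared, and drift past an illustrative 80% threshold (borrowed, as an assumption, from the US “four-fifths” rule of thumb) raises an alert.

```python
import numpy as np

def impact_ratio(decisions: np.ndarray, group_a: np.ndarray) -> float:
    """Ratio of favourable-outcome rates between two groups (0..1]."""
    rate_a = decisions[group_a].mean()
    rate_b = decisions[~group_a].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def audit(decisions, group_a, threshold=0.8):
    ratio = impact_ratio(decisions, group_a)
    if ratio < threshold:
        # In a real regime this would trigger investigation or suspension.
        return f"ALERT: impact ratio {ratio:.2f} below {threshold}"
    return f"ok: impact ratio {ratio:.2f}"

# Simulated monthly batches of decisions from a deployed, still-learning
# system whose outcomes gradually drift against group B.
rng = np.random.default_rng(3)
for month in range(1, 4):
    group_a = rng.random(4000) < 0.5
    p = np.where(group_a, 0.60, 0.60 - 0.15 * (month - 1))
    decisions = rng.random(4000) < p
    print(f"month {month}: {audit(decisions, group_a)}")
```

The design point is that the audit runs against live outcomes rather than the model's specification, catching exactly the post-deployment drift that ex ante conformity assessment misses.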

Challenges of Data Minimisation and Consent in AI:

A major challenge arises from the tension between AI functionality and the GDPR’s principle of data minimisation. Under Article 5(1)(c), organisations may only collect data necessary for a specified purpose, yet machine learning systems often require vast datasets to perform effectively and improve over time.[41] This structural conflict exposes the limits of ex ante regulatory models, where legal obligations assume data requirements are fixed and foreseeable, whereas AI systems evolve dynamically.
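The structural conflict can be made visible with a toy experiment (synthetic task and least-squares "model", both illustrative assumptions): the same simple learner, fitted under identical conditions, typically becomes markedly more accurate as it is given more records, which is precisely the incentive that data minimisation pushes against.

```python
import numpy as np

rng = np.random.default_rng(4)
true_w = rng.normal(size=10)           # the unknown pattern to be learned

def accuracy(n_train: int) -> float:
    """Fit on n_train records, return accuracy on a fresh test set."""
    X = np.column_stack([np.ones(n_train), rng.normal(size=(n_train, 10))])
    y = (X[:, 1:] @ true_w > 0).astype(float)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares "model"
    Xt = np.column_stack([np.ones(5000), rng.normal(size=(5000, 10))])
    return float(((Xt @ w > 0.5) == (Xt[:, 1:] @ true_w > 0)).mean())

for n in (20, 200, 2000, 20000):
    print(f"{n:>6} training records -> accuracy {accuracy(n):.1%}")
```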

Consent under Article 7 illustrates a similar friction.[42] The GDPR mandates that consent be freely given, specific, informed, and unambiguous. However, adaptive AI systems, particularly those using reinforcement learning, may repurpose data in ways not covered by the original consent.[43] This raises doubts about whether consent remains valid over time and highlights the risk of “function creep,” undermining the GDPR’s protective intent.
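One way engineers might operationalise purpose limitation, sketched below as a hypothetical design rather than anything the GDPR mandates, is to tag each record with the purposes consented to and require every processing step to declare its purpose before touching the data:

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    subject_id: str
    payload: dict
    consented_purposes: set = field(default_factory=set)

class PurposeError(Exception):
    pass

def process(records, purpose: str):
    """Return only records whose recorded consent covers this purpose."""
    out = []
    for r in records:
        if purpose not in r.consented_purposes:
            # Proceeding here would be the "function creep" described
            # above: processing beyond the scope of the original consent.
            raise PurposeError(f"{r.subject_id}: no consent for '{purpose}'")
        out.append(r)
    return out

records = [
    Record("u1", {"age": 34}, {"loan_scoring"}),
    Record("u2", {"age": 29}, {"loan_scoring", "model_training"}),
]

process(records, "loan_scoring")        # fine: both subjects consented
try:
    process(records, "model_training")  # u1 never consented to this use
except PurposeError as e:
    print("blocked:", e)
```

Even so, such a guard only enforces the purposes as originally recorded; it cannot decide whether a new training objective is or is not "compatible" with them, which remains a legal judgment.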

Facial recognition technology (FRT) exemplifies these tensions in practice. Biometric data is a “special category” under Article 9 and generally prohibited without strict justification, yet FRT has been deployed extensively in public and private settings.[44] In R (Bridges) v Chief Constable of South Wales Police (2020)[45], the Court of Appeal held that live facial recognition lacked a proper legal basis and failed necessity and proportionality tests under both human rights and data protection law. The judgment underscores the difficulty of reconciling fast-moving AI deployment with regulatory frameworks conceived for static, human-directed data processing and exposes the need for ongoing judicial and regulatory scrutiny to enforce compliance effectively.

Criminal Law and AI Misconduct:

The rise of autonomous AI technologies presents fundamental challenges to traditional criminal law. Central to criminal liability are the elements of actus reus, the guilty act, and mens rea, the guilty mind. AI complicates both: autonomous systems can perform harmful acts without direct human intervention, raising the question of whether such acts can satisfy the actus reus requirement. Simultaneously, mens rea presumes human intention or recklessness, yet AI is devoid of awareness, ethical comprehension, and intentionality[46].

AI-assisted offences, such as algorithmically driven financial fraud, cyberattacks, or autonomous vehicle collisions, bring these complexities to the fore. Courts must then determine whether responsibility lies with the human programmer, operator, or owner, rather than the AI itself. Legal scholars have debated introducing doctrines such as strict liability, vicarious liability, or new forms of “AI liability” to cover harms caused by autonomous systems. The challenge is to balance accountability with the recognition that AI systems act independently within programmed parameters, often evolving in unanticipated ways. Current legal frameworks struggle to accommodate AI misconduct, necessitating careful reconsideration of liability doctrines[47].

The Case for New Legal Paradigms:

The rapid and unpredictable evolution of AI technologies has exposed the limitations of traditional regulatory frameworks and prompted calls for novel legal standards. One proposed solution is the concept of AI-specific legal personhood or agency, which would confer a form of legal responsibility directly upon autonomous systems[48]. While highly controversial, this approach aims to resolve the persistent accountability gap that arises when harms are generated by actions not directly attributable to human operators or developers. It challenges conventional doctrines of tort and criminal liability, which presuppose human agency, and raises complex questions about the applicability of corporate-style liability models to non-human entities.[49]

Another emerging strategy is dynamic regulation, exemplified by the use of regulatory sandboxes. Such mechanisms allow AI innovations to be deployed within controlled environments under continuous supervisory oversight, enabling regulators to monitor real-world impacts, adjust compliance requirements, and mitigate risks before full market release.[50] The European Union has explored sandboxes for financial AI applications, reflecting a recognition that traditional ex ante regulatory approaches cannot keep pace with rapid technological change.[51] From a legal perspective, these adaptive regulatory models represent a shift toward ex post and iterative supervision, bridging the regulatory lag and reinforcing accountability while preserving space for innovation.


[1] Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (4th edn, Pearson 2021).

[2] Gary E Marchant, ‘The Growing Gap Between Emerging Technologies and the Law’ (2011) 19 Science and Engineering Ethics 77.

[3] Richard Evans and others, ‘Protein Complex Prediction with AlphaFold-Multimer’ (bioRxiv, 2022); OpenAI, ‘GPT-4 Technical Report’ (2023) https://cdn.openai.com/papers/gpt-4.pdf.

[4] OpenAI, ‘ChatGPT: Optimizing Language Models for Dialogue’ (30 November 2022) https://openai.com/blog/chatgpt/; Krystal Hu, ‘ChatGPT Sets Record for Fastest-Growing User Base’ Reuters (2 February 2023) https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-2023-02-02/.

[5] OpenAI, ‘GPT-4 Technical Report’ (2023); OpenAI, ‘Hello GPT-4o’ (13 May 2024) https://openai.com/index/hello-gpt-4o/

[6] European Commission, ‘Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)’ COM(2021) 206 final.

[7] Jenna Burrell, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ (2016) 3 Big Data & Society 1.

[8] Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press 2015) 85–110.

[9] Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a “Right to an Explanation” Is Probably Not the Remedy You Are Looking For’ (2017) 16 Duke Law & Technology Review 18.

[10] Mireille Hildebrandt, Law for Computer Scientists and Other Folk (OUP 2020) 102–115.

[11] Ryan Calo, ‘Robotics and the Lessons of Cyberlaw’ (2015) 103 California Law Review 513.

[12] Joanna J Bryson, Mihailis E Diamantis and Thomas D Grant, ‘Of, for, and by the People: The Legal Lacuna of Synthetic Persons’ (2017) 25 Artificial Intelligence and Law 273.

[13] European Parliamentary Research Service, ‘Civil Law Rules on Robotics’ (IPOL STU 2016/01) 34–37.

[14] Gabriel Hallevy, When Robots Kill: Artificial Intelligence Under Criminal Law (Northeastern University Press 2013) 42–45.

[15] Matthew U. Scherer, ‘Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies’ (2016) 29 Harvard Journal of Law & Technology 353.

[16] Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (4th edn, Pearson 2021) 28–35.

[17] Kevin P Murphy, Machine Learning: A Probabilistic Perspective (MIT Press 2012) 1–10; Richard S Sutton and Andrew G Barto, Reinforcement Learning: An Introduction (2nd edn, MIT Press 2018).

[18] Mireille Hildebrandt, Law for Computer Scientists and Other Folk (OUP 2020) 103–110.

[19] European Parliament, ‘Civil Liability Regime for Artificial Intelligence’ (2020/2014(INL)) 15–18; Andrea Bertolini, ‘Artificial Intelligence and Civil Liability’ (2020) 24 Stanford Technology Law Review 1.

[20] Jenna Burrell, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ (2016) 3 Big Data & Society 1; Frank Pasquale, The Black Box Society (Harvard University Press 2015).

[21] Mireille Hildebrandt, Law for Computer Scientists and Other Folk (OUP 2020) 103–115.

[22] European Parliament, ‘Civil Liability Regime for Artificial Intelligence’ (2020/2014(INL)) 15–18.

[23] Solon Barocas and Andrew D. Selbst, ‘Big Data’s Disparate Impact’ (2016) 104 California Law Review 671.

[24] Jeffrey Dastin, ‘Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women’ Reuters (11 October 2018) https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.

[25] Equality Act 2010 (UK); Council Directive 2006/54/EC on the Implementation of the Principle of Equal Opportunities and Equal Treatment of Men and Women in Matters of Employment and Occupation [2006] OJ L204/23.

[26] Sandra Wachter, Brent Mittelstadt and Chris Russell, ‘Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR’ (2017) 31 Harvard Journal of Law & Technology 841.

[27] European Parliament, ‘Civil Liability Regime for Artificial Intelligence’ (2020/2014(INL)).

[28] Federal Trade Commission, ‘Using Artificial Intelligence and Algorithms’ (FTC Guidance, 2021).

[29] UK Government, A Pro-Innovation Approach to AI Regulation (White Paper, 2023).

[30] Mireille Hildebrandt, Law for Computer Scientists and Other Folk (OUP 2020) 103–115; Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a “Right to an Explanation” Is Probably Not the Remedy You Are Looking For’ (2017) 16 Duke Law & Technology Review 18.

[31] Regulation (EU) 2016/679 (General Data Protection Regulation) [2016] OJ L119/1.

[32] Paul De Hert and Vagelis Papakonstantinou, ‘The New General Data Protection Regulation: Still a Sound System for the Protection of Individuals?’ (2016) 32 Computer Law & Security Review 179.

[33] GDPR art 22.

[34] Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a “Right to an Explanation” Is Probably Not the Remedy You Are Looking For’ (2017) 16 Duke Law & Technology Review 18; Tom Bingham, The Rule of Law (Penguin 2010).

[35] Sandra Wachter, Brent Mittelstadt and Chris Russell, ‘Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR’ (2017) 31 Harvard Journal of Law & Technology 841.

[36] GDPR art 22(2).

[37] Jenna Burrell, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ (2016) 3 Big Data & Society 1; Frank Pasquale, The Black Box Society (Harvard University Press 2015) 85–110.

[38] Mireille Hildebrandt, Law for Computer Scientists and Other Folk (OUP 2020) 103–115.

[39] European Parliament, ‘Civil Liability Regime for Artificial Intelligence’ (2020/2014(INL)) 15–18.

[40] Ryan Calo, ‘Artificial Intelligence Policy: A Primer and Roadmap’ (2017) 51 U.C. Davis Law Review 399.

[41] Mireille Hildebrandt, Law for Computer Scientists and Other Folk (OUP 2020) 103–110; Kevin P Murphy, Machine Learning: A Probabilistic Perspective (MIT Press 2012) 1–10.

[42] GDPR art 7.

[43] Sandra Wachter, Brent Mittelstadt and Chris Russell, ‘Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR’ (2017) 31 Harvard Journal of Law & Technology 841.

[44] GDPR art 9; Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a “Right to an Explanation” Is Probably Not the Remedy You Are Looking For’ (2017) 16 Duke Law & Technology Review 18.

[45] R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058.

[46] Andrew Ashworth, Principles of Criminal Law (9th edn, OUP 2022).

[47] Gabriel Hallevy, When Robots Kill (Northeastern University Press 2013).

[48] Joanna J Bryson, Mihailis E Diamantis and Thomas D Grant, ‘Of, for, and by the People: The Legal Lacuna of Synthetic Persons’ (2017) 25 Artificial Intelligence and Law 273.

[49] Gabriel Hallevy, When Robots Kill: Artificial Intelligence Under Criminal Law (Northeastern University Press 2013) 42–45.

[50] UK Financial Conduct Authority, Regulatory Sandbox: Feedback and Lessons Learned (2019).

[51] European Commission, ‘FinTech Action Plan: Regulating Innovation in Financial Services’ COM(2018) 109 final.


Author: Aima Hassan

Aima Hassan is a second-year law student in the University of London LLB programme with a keen interest in technology and AI regulation. She writes on the legal, ethical, and societal implications of emerging technologies and aims to bridge the gap between law and innovation.
