ABSTRACT
The rapid deployment of artificial intelligence has intensified global debates on the regulation of high-risk algorithmic systems, particularly in contexts where statistical probabilities are translated into opaque and largely unaccountable surveillance practices. Among these applications, predictive policing stands out as a particularly contested use of artificial intelligence, raising questions about accountability, legal validity, and the role of executive discretion. Predictive policing differs from other uses of predictive technology in that its AI models can be used to immediately guide police action based on probability; as a result, there are fewer barriers between statistical inference and the exercise of state coercive power. As seen in deployments such as Maha Crime OS AI in Maharashtra[1] and Trinetra in Uttar Pradesh[2], AI-generated predictive outputs now influence how officers in India allocate their time, identify suspects, and determine which cases are prioritised for investigation.
This article examines whether the EU AI Act’s regulatory influence meaningfully reaches state-controlled uses of AI of this kind, using predictive policing in India as a case study. It argues that executive-centric governance, weak regulatory capacity, and limited ex-ante oversight constrain the Brussels Effect in policing contexts, highlighting the need for a rights-based constitutional framework rather than passive regulatory borrowing.
INTRODUCTION
Against the backdrop of wider concerns about the use of AI in high-stakes public decision-making, particularly where algorithmic outputs intersect with state coercive power, the European Union (EU) adopted the EU Artificial Intelligence Act in 2024,[3] with key compliance deadlines extending to 2027. Within the Act's regulatory architecture, predictive policing falls under the law-enforcement category, where certain practices are prohibited outright and others are classified as high-risk AI systems due to concerns about their impact on fundamental rights and public safety.[4] Although framed as a product-safety regulation,[5] the AI Act has positioned the EU as a global standard-setter in AI, reflecting what is commonly described as the Brussels Effect.[6]
This article therefore focuses on the Brussels Effect as a form of market-driven regulatory diffusion, rather than on the AI Act's formal territorial or extraterritorial reach. The inquiry is not whether the Act directly applies to non-EU actors as a matter of law, but whether its regulatory logic meaningfully travels beyond Europe through indirect channels of influence, in ways that produce observable changes in governance standards, procurement practices or institutional design. Predictive policing is a particularly hard case for this inquiry. Unlike commercial AI systems, policing technologies are deployed by the state, operate largely outside competitive markets, and escape consumer-facing compliance pressures. If the EU AI Act struggles to shape governance norms in these domains, that failure marks the limits of Brussels-style diffusion when applied to high-risk, state-controlled uses of AI. Using predictive policing as a case study, this article assesses whether the EU AI Act can meaningfully function as a global regulatory model beyond the European context, with particular reference to India. It further argues that weak domestic regulatory capacity constrains the reach of such diffusion.
PREDICTIVE POLICING UNDER THE EU ARTIFICIAL INTELLIGENCE ACT
The European Union's AI Act constitutes the first comprehensive, binding legal framework on AI. It is structured around a risk-based regulatory architecture that classifies AI systems on the basis of their potential impact on fundamental rights and democratic processes.[7] Predictive policing occupies a sensitive position within this framework. While certain forms of predictive policing, particularly those involving analysis of biometric data and social scoring, are prohibited, most predictive policing systems are treated as high-risk applications. Their deployment is therefore permitted, but only subject to stringent compliance obligations.[8] This approach reflects the broader design of the Act, which prioritises conditional permissibility and procedural safeguards over outright exclusion. Under the EU AI Act, regulatory responsibilities are split between providers and deployers. Providers bear primary responsibility at the design and development stage. They must ensure compliance with the Act's technical and organisational requirements, including quality management systems, technical documentation, logging, conformity assessments, and corrective measures, before a system is placed on the market.[9] These lifecycle obligations are intended to address systemic risks embedded in the AI system itself.
Deployers, including law enforcement authorities,[10] are subject to use-stage obligations focused on rights protection. Prior to deploying a high-risk AI system, deployers must conduct a Fundamental Rights Impact Assessment under Article 27,[11] assessing the system's foreseeable impact on fundamental rights and identifying mitigation measures. They are also required to ensure human oversight and compliance with operational use constraints. This allocation of responsibilities assumes the existence of strong compliance cultures and oversight institutions, an assumption that becomes strained when such systems are deployed in jurisdictions without comparable regulatory capacity.
PREDICTIVE POLICING UNDER INDIAN LAW
India's engagement with AI governance has been neither absent nor consistent. NITI Aayog's 2018 National Strategy for Artificial Intelligence[12] and its 2021 Responsible AI principles both reflected a degree of awareness of the risks that AI systems pose, but neither document carried any binding legal force and both remained aspirational in character. The proposed Digital India Act, widely expected to address emerging technologies including AI as a legislative successor to the Information Technology Act 2000, has not yet been introduced in Parliament. The Digital Personal Data Protection Act 2023, while a significant development in data governance more broadly, applies only incidentally to AI systems, and its broad exemptions for state processing mean that police use of AI falls largely outside its scope. The overall picture is therefore one of considerable policy activity, in the form of reports, consultations and advisories, that has not translated into enforceable legal standards governing high-risk AI.
In sharp contrast to the European Union's Artificial Intelligence Act, which embeds predictive policing within a structured, risk-based statutory framework, India's deployment of artificial intelligence in law enforcement operates in a conspicuous regulatory vacuum. There is no comprehensive, binding statute governing artificial intelligence, though some laws, such as the Digital Personal Data Protection Act, apply incidentally. Instead, such technologies are introduced through executive or state-level initiatives and procurement contracts between police departments and private technology vendors. These arrangements are often opaque and largely insulated from public debate and legislative scrutiny. India is therefore a particularly useful comparator: a large democratic jurisdiction rapidly adopting predictive policing and other high-risk uses of AI without a dedicated statutory framework. It is also a stress test for Brussels-style regulatory diffusion, since the standard channels through which EU norms travel, vendor competition for EU market access and alignment by domestic regulators, are structurally absent in the policing context.[13] This mode of governance stands in sharp contrast to the EU's emphasis on ex ante legal authorisation, protection of fundamental rights, and risk assessment mechanisms.
Predictive policing models in India are conditioned on prior crime records, complaint data, and surveillance inputs, which often reflect existing patterns of over-policing rather than actual crime rates.[14] In practice, design choices matter. Complaints from informal settlements are frequently recorded in a way that flags the entire settlement, whereas complaints from more affluent neighbourhoods are logged against the specific locality concerned. This has the effect of turning whole localities into permanent crime hotspots on the basis of socio-economic conditions.[15] Once a neighbourhood is designated as high risk, increased police presence leads to more recorded crime, which in turn reinforces the system's initial assumptions. Historical bias is thus internalised into machine-learning models and reproduced through data-driven outputs, making AI-enabled predictive policing substantially different from ordinary surveillance technologies that merely document events.
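To make this feedback loop concrete, the following minimal Python sketch simulates two districts with identical underlying offence rates. Everything in it is hypothetical: the district labels, rates, and hotspot-allocation rule are invented for illustration and are not drawn from any deployed system. The only asymmetry is an initial recording bias, yet the district that starts with more recorded crime remains the designated hotspot indefinitely, because intensified patrols generate the very records that justify them.

```python
import random

random.seed(42)

TRUE_RATE = 0.05        # identical underlying offence rate in both districts
POPULATION = 10_000
BASE_DETECTION = 0.3    # fraction of offences recorded under routine patrols
HOTSPOT_BONUS = 0.2     # extra recording produced by intensified patrols

# The historical record carries an initial bias: district A is
# over-represented relative to B although true rates are identical.
recorded = {"A": 120, "B": 80}

for year in range(1, 11):
    # "Prediction": the district with the most recorded crime is the hotspot.
    hotspot = max(recorded, key=recorded.get)
    for district in recorded:
        offences = sum(random.random() < TRUE_RATE for _ in range(POPULATION))
        detection = BASE_DETECTION + (HOTSPOT_BONUS if district == hotspot else 0)
        # More patrols -> more offences recorded -> the district stays a hotspot.
        recorded[district] += int(offences * detection)
    print(f"year {year}: recorded={recorded}, hotspot={hotspot}")
```

Running the sketch shows the recorded gap between the two districts widening every year while the true rates never diverge, which is precisely the sense in which the model "internalises" rather than merely documents historical bias.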
Tools such as Trinetra, the Crime Mapping Analytics and Predictive System (CMAPS), and the facial recognition platforms deployed by the Delhi Police, all procured from private technology vendors through opaque state contracts, have been operationalised without a clear statutory basis defining their scope, limits, or evidentiary value.[16] The absence of binding legal standards governing accountability, regulation and due process allows these systems to function as discretionary instruments of executive power within law enforcement, rather than as legally constrained support tools. This regulatory absence raises serious constitutional questions, as courts intervene only after alleged violations of fundamental rights have occurred, most often through ex post challenges under Articles 14 and 21[17] of the Constitution of India.[18] The absence of ex ante safeguards has allowed algorithmic outputs to acquire operational authority within policing practices, even when their reliability remains deeply contested. Investigative reports have revealed that the Delhi Police's facial recognition systems, which rely on machine-learning models to generate probabilistic identity matches from CCTV footage and digital images,[19] continue to subject individuals to investigative scrutiny even when algorithmic matches fall below accuracy standards, with no disclosed methodology for redress or verification.[20]
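The accuracy problem described above is, at bottom, a missing threshold-and-audit problem. The sketch below is a minimal illustration, not a description of any deployed system: the data structures and function names are hypothetical, and the 0.80 threshold merely echoes the 80 per cent figure reported in the RTI disclosures cited above. It shows the kind of gate, rejecting below-threshold matches and logging every decision for later contestation, whose absence the investigative reports document.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical threshold, loosely echoing the 80% accuracy figure in the
# RTI disclosures cited above; all structures here are illustrative.
MATCH_THRESHOLD = 0.80

@dataclass
class MatchResult:
    candidate_id: str
    confidence: float            # probabilistic similarity score in [0, 1]
    source_footage: str          # provenance of the probe image
    audit_log: list = field(default_factory=list)

def gate_match(result: MatchResult) -> bool:
    """Admit a match for investigative follow-up only above the threshold,
    logging every decision so that it can later be contested or verified."""
    admitted = result.confidence >= MATCH_THRESHOLD
    result.audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "confidence": result.confidence,
        "threshold": MATCH_THRESHOLD,
        "admitted": admitted,
    })
    return admitted

match = MatchResult("candidate-042", confidence=0.62, source_footage="cctv_cam_17")
if not gate_match(match):
    # A below-threshold match is logged but triggers no police action --
    # the verification and redress step the reported deployments omit.
    print("match rejected; audit trail:", match.audit_log)
```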
It is for these reasons that the EU AI Act treats predictive policing as a high-risk use of AI. Article 10(2)(f) requires providers to examine training data for biases that would influence inputs for future operations,[21] directly addressing the feedback-loop problem described above. Article 9 goes further by requiring this risk assessment to be continuous across the system's lifecycle,[22] meaning that bias which emerges after development must also be identified and mitigated.
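What might such a bias examination look like in practice? Article 10(2)(f) prescribes no particular statistical test, so the sketch below is only one illustrative approach, with invented figures: a simple disparity audit that flags districts whose share of recorded incidents in the training data far exceeds their population share, a proxy for over-policing feedback rather than underlying crime.

```python
# All figures below are invented; Article 10(2)(f) prescribes no particular
# statistical test, so this disparity audit is one illustrative approach.
training_data = {
    # district: (population share, share of recorded incidents in the data)
    "informal_settlement": (0.10, 0.34),
    "affluent_suburb":     (0.25, 0.08),
    "commercial_centre":   (0.30, 0.33),
    "mixed_residential":   (0.35, 0.25),
}

DISPARITY_LIMIT = 2.0  # flag districts recorded at over twice their population share

def audit_disparity(data: dict[str, tuple[float, float]]) -> list[str]:
    """Return districts whose weight in the training data is disproportionate
    to their population -- a candidate for mitigation before training."""
    flagged = []
    for district, (pop_share, incident_share) in data.items():
        ratio = incident_share / pop_share
        if ratio > DISPARITY_LIMIT:
            flagged.append(f"{district}: {ratio:.1f}x over-represented")
    return flagged

for finding in audit_disparity(training_data):
    print("BIAS FLAG:", finding)  # -> informal_settlement: 3.4x over-represented
```

Under Article 9's lifecycle obligation, a check of this kind would be re-run on operational data after deployment, not performed once at the design stage.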
This regulatory vacuum is exemplified by a reported incident involving an individual referred to as 'Ali' (a pseudonym), one of the 29 accused in the Ratan Lal murder case.[23] According to these reports, in the context of the 2020 North-East Delhi violence,[24] Ali was arrested after being identified through facial recognition software deployed by the Delhi Police, which generated probabilistic matches rather than a definitive identification. The identification relied on automated matching of CCTV footage and private videos, without corroborating eyewitness testimony or a Test Identification Parade, and without disclosure of the system's methodology or confidence thresholds, despite acknowledged accuracy concerns. Although the EU AI Act postdates the 2020 North-East Delhi violence and has no direct legal application to Indian policing, the reported incident remains analytically relevant as an illustration of the risks associated with unregulated, high-risk uses of AI in law enforcement, precisely the kinds of harms the Act seeks to address through ex ante safeguards. Disclosures obtained under India's Right to Information Act (RTI)[25] further indicate that facial recognition technology was used in over 750 riot-related investigations.[26] By contrast, the EU AI Act treats biometric identification as a high-risk use of AI and embeds this classification in mandatory ex ante accountability mechanisms, including impact assessments and documentation obligations that resemble algorithmic audits.
Indian policy discourse has not been entirely absent, though it has remained fragmented and non-binding. While NITI Aayog's strategy papers have emphasised innovation over regulation, civil society organisations have pushed back: the Vidhi Centre for Legal Policy has called for mandatory algorithmic impact assessments,[27] while the Software Freedom Law Centre has documented how RTI requests on predictive policing tools are routinely denied, making independent audit effectively impossible.[28] These concerns have not translated into legislative reform, partly because deployment is driven by executive procurement processes that remain insulated from public scrutiny, and partly because policing falls under the State List of the Indian Constitution,[29] fragmenting accountability across jurisdictions with no central oversight mechanism.
THE BRUSSELS EFFECT AND LIMITATIONS ON GLOBAL GOVERNANCE
The European Union's Artificial Intelligence Act must be situated within the broader phenomenon commonly described as the Brussels Effect.[30] Coined by Anu Bradford of Columbia University, the term refers to the EU's ability to project its own market standards across the globe, driven by the EU's insistence on 'autonomy' and 'European values'.[31] It is generally understood as a form of indirect regulatory diffusion, where EU rules ultimately shape practices outside Europe because market structures and compliance incentives push actors to comply with them. Companies across jurisdictions often adopt EU standards to avoid regulatory fragmentation or exclusion from the vast EU market.[32] In India, the phenomenon does not rest on any formal obligation to follow EU law; it operates through indirect channels. This dynamic is already visible in the Indian technology sector: major Indian IT firms such as TCS, Infosys, and Wipro have formally committed to early compliance with the EU AI Act, explicitly aligning their AI governance strategies with EU benchmarks to preserve access to EU markets.[33] Beyond this commercial compliance dynamic, domestic regulators and policymakers also look to EU frameworks as a source of guidance. In this way, EU standards can shape governance choices in jurisdictions that remain legally autonomous. This was seen explicitly with the EU's General Data Protection Regulation (GDPR), whose principles of consent, data minimisation, and purpose limitation were soon adopted across the globe. Albania's parliament, for example, acknowledged that its Personal Data Protection Law was aligned with the GDPR, alongside larger jurisdictions such as Brazil, through the Lei Geral de Proteção de Dados (LGPD),[34] and Japan, through amendments to the Act on the Protection of Personal Information (APPI),[35] both of which were closely modelled on the GDPR to ensure regulatory compatibility with the EU.[36] Critically, the GDPR achieved this in part through its direct extraterritorial reach: any organisation processing the personal data of individuals in the EU, wherever located, is bound by its obligations. The AI Act has no equivalent deployer-side hook for non-EU actors, meaning its diffusion relies entirely on market-access incentives that simply do not operate when the customer is a foreign state.
The AI Act rests on several institutional presumptions: that Member States possess regulatory authorities capable of acting independently, enforceable transparency obligations, and meaningful judicial review, alongside strong ex ante compliance cultures within deploying authorities. These assumptions hold, if imperfectly, within the EU's internal market architecture, where market surveillance authorities, conformity assessments, and rights impact assessments are embedded into the regulatory lifecycle of high-risk AI systems. When regulatory norms are imported into different jurisdictions through a market-led process, therefore, their potential to function effectively depends heavily on domestic institutional capacity. In India, this gap is visible in the way predictive policing systems are introduced and scaled with limited procedural transparency. Research tracking AI developments[37] over an extended period shows that tenders, and requests for information about tenders, for high-risk algorithmic systems such as policing technologies are often not publicly disclosed, making it difficult to assess whether basic safeguards such as testing, impact assessment, or oversight have been followed. Requests under the Right to Information (RTI) Act seeking details of AI-based policing projects frequently receive delayed or incomplete responses, offering little insight into how predictive models operate, how accuracy is evaluated, or how individuals may contest algorithm-driven police action.[38]
India's AI regulation oscillates between a non-regulatory innovation stance and ad hoc interventions triggered by harm, even as the state presents itself as a future global AI hub.[39] This regulatory ambivalence is reflected in a series of conflicting signals. In 2023, the government stated that it had no intention to regulate AI. This was soon followed by indications that high-risk AI systems would fall within the scope of the proposed Digital India Act. Subsequently, the Ministry of Electronics and Information Technology issued, and then partially withdrew, an advisory on the subject in March 2024.[40] The pattern has continued into 2026. In February 2026, the Indian government hosted the India AI Impact Summit[41] in New Delhi, convening global technology leaders, including the chief executives of OpenAI, Google and other tech giants, under the banner of 'From Vision to Action'. The Summit's framing around aspirational principles, organised under the IndiaAI Mission, reflects the same tension at the heart of India's regulatory trajectory: a state that presents itself as a global AI leader while the domestic legal framework governing high-risk AI deployments remains largely absent. India is pulled in two directions at once: a clear incentive to position itself as a global AI hub and keep regulatory friction low, and a recurring pressure to intervene when specific harms such as misinformation or discriminatory surveillance become practically impossible to ignore. This matters for the Brussels Effect argument because a regulatory environment that moves with political mood rather than institutional structure offers no stable ground on which imported norms can take root.
This legal ambiguity reflects the limits of the EU AI Act's extraterritorial reach, and those limits come down to how the Act was built. Because it was grounded in Article 114 TFEU, the EU's internal market legal basis, the Act functions structurally as a product-safety instrument rather than a fundamental rights one,[42] a design choice with real consequences for how it travels. What gets exported through Brussels Effect diffusion is the compliance form: technical documentation, conformity assessments, and impact assessment procedures. A vendor supplying algorithmic tools to an Indian state government could, in theory, produce EU-style documentation tomorrow; because its client is the Indian state rather than the EU market, that paperwork would not, in itself, address the underlying harms experienced by welfare beneficiaries. The same is true of facial recognition vendors supplying Indian police forces. In jurisdictions without strong ex ante accountability cultures, apparent compliance with the AI Act therefore risks becoming largely symbolic, lending political legitimacy to high-risk deployments rather than genuinely constraining executive power, which reinforces the limits of Brussels Effect diffusion in state-controlled domains like policing.[43] This dynamic is illustrated by the behaviour of AI vendors themselves. The same vendors aligned with EU standards, including smaller domestic suppliers like Posidex Technologies (builder of the Samagra Vedika welfare system, discussed further below), are also the primary contractors supplying high-risk AI to Indian state governments. This produces a distorted Brussels Effect: the vendor can point to EU-style technical documentation, and the deploying state authority can cite that documentation as evidence of due diligence, while conducting no independent FRIA and imposing no human oversight obligations of its own. The compliance form travels, but the rights-protective substance does not.
LESSONS FOR AI GOVERNANCE IN INDIA
Good AI governance cannot be imported passively by virtue of the Brussels Effect. India should proactively develop a rights-based ex ante legal regime that sets clear legal boundaries for high-risk AI. Rather than leaving high-risk deployments in the existing regulatory vacuum, such a regime would subject them to the proportionality and necessity tests of the Indian Constitution. Given the opacity of algorithmic systems, market forces and ex post judicial review are insufficient.
A rights-facing ex ante governance model for predictive policing would require police AI systems to clear a structured constitutional review before they are procured or deployed. In practical terms, this means moving beyond vague policy advisories and embedding Articles 14 and 21 of the Constitution directly into procurement and operational decision-making.
Operationally, this could be implemented through mandatory Fundamental Rights Impact Assessments (FRIAs) for high-risk policing AI. These assessments would be triggered before pilot deployment and would require disclosure of system purpose, data sources, accuracy benchmarks, and safeguards. Importantly, failure to conduct or comply with such assessments should carry consequences. Existing accountability mechanisms such as the National Human Rights Commission, state Police Complaints Authorities, and constitutional tort jurisdiction could be extended so that violations of authorisation conditions, non-disclosure of AI use in criminal proceedings, or continued reliance on opaque systems trigger institutional liability. Public registers of approved tools and basic impact summaries would help address the current lack of public information around policing AI. This could be operationalised immediately by amending the General Financial Rules[44] (the central government's executive procurement rulebook, which governs public spending without requiring parliamentary approval) to mandate a Fundamental Rights Impact Assessment as a prerequisite for any procurement involving a high-risk AI system above a specified contract value, before standalone AI legislation is introduced.
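As a rough illustration of how such a procurement gate might work, the sketch below encodes the disclosure items listed above as mandatory fields and blocks any qualifying procurement that lacks a completed assessment. The field names, contract-value threshold, and validation rule are all hypothetical; nothing here reflects the text of the General Financial Rules.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical contract-value trigger; the actual figure would be a policy
# choice made when amending the General Financial Rules.
FRIA_CONTRACT_THRESHOLD = 10_000_000  # rupees

@dataclass
class FRIARecord:
    """Disclosure items an FRIA would require before pilot deployment
    (field names are illustrative, not drawn from any existing rule)."""
    system_purpose: str
    data_sources: list[str]
    accuracy_benchmarks: dict[str, float]
    safeguards: list[str]
    mitigation_measures: list[str] = field(default_factory=list)

def procurement_gate(contract_value: int, fria: Optional[FRIARecord]) -> bool:
    """Approve a high-value AI procurement only if a completed FRIA exists;
    failure here would trigger institutional liability, not just a warning."""
    if contract_value < FRIA_CONTRACT_THRESHOLD:
        return True                    # below the trigger: FRIA not mandatory
    if fria is None:
        return False                   # no assessment conducted at all
    return all([
        fria.system_purpose.strip(),
        fria.data_sources,
        fria.accuracy_benchmarks,
        fria.safeguards,
    ])

# A tender missing its FRIA is blocked before deployment, shifting
# oversight upstream of any harm.
print("approved:", procurement_gate(contract_value=50_000_000, fria=None))  # False
```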
The Digital Personal Data Protection Act 2023[45] acknowledges large-scale data processing risks but leaves state surveillance largely exempt, creating precisely the accountability gap predictive policing now occupies. Transparency efforts using the Right to Information Act further show that tenders and technical details relating to AI-based policing systems are often not publicly disclosed, and RTI requests frequently receive delayed or incomplete responses. Ex ante authorisation requirements would help shift oversight upstream, rather than relying on fragmented post-hoc litigation after harm has occurred.[46]
The most immediate concern is institutional capacity: police forces and welfare agencies often lack the technical expertise needed to conduct meaningful rights impact assessments, and there is a real risk that such assessments become procedural formalities rather than genuine evaluations of harm. The Samagra Vedika system in Telangana illustrates this problem perfectly beyond the policing context. Deployed by the state government to determine welfare eligibility across housing, food and pension schemes, the system relied on an externally procured algorithmic tool supplied by Posidex Technologies, whose underlying methodology remained shielded from public scrutiny, with authorities invoking proprietary exemptions to resist RTI disclosure. Individuals were automatically excluded from social protection schemes on the basis of erroneous data attributions, with no meaningful mechanism for redress.[47] The governance gap is therefore not specific to policing but is a systemic feature of how India deploys high-risk AI across public services, making the case for an ex ante framework that extends well beyond law enforcement.
However, if India currently lacks this kind of regulatory capacity, how can it build what an ex ante framework demands? The answer is that the framework does not presuppose a fully developed regulatory architecture; it focuses on incrementally generating one. A qualified parallel can be drawn with India's Environmental Impact Assessment regime, which began with an administrative notification in 1994 under the Environment (Protection) Act 1986,[48] gradually gained judicial endorsement, and built its expert committees, appraisal bodies, and procedural infrastructure through the practice of conducting assessments over two decades, rather than through a comprehensive statute at the outset.[49] While the subject matter is different, the trajectory suggests that ex ante assessment frameworks can be developed through executive action, judicial review, and iterative capacity-building, without waiting for an omnibus legislative framework. India's 2025 AI Governance Guidelines[50] already lay down the institutional map needed, proposing an AI governance group, an AI safety institute, a Technology and Policy Expert Committee and sectoral regulators operating across ministries. Embedding ex ante authorisation requirements into procurement rules would give the framework an immediate operational anchor without waiting for a standalone AI Act. Each impact assessment conducted builds institutional knowledge that compounds across sectors, each procurement condition enforced creates an accountability record, and each mandatory disclosure requirement chips away at the opacity that currently allows systems like Samagra Vedika to function without scrutiny. The model therefore serves as both a governance tool and an institution-building mechanism operating in parallel.
The constitutional foundation for this already exists. Under the proportionality framework in Puttaswamy,[51] any state action interfering with the right to privacy must satisfy three requirements: it must be sanctioned by law, necessary for a legitimate aim, and proportionate to that aim. A rights-facing ex ante governance model operationalises this test at the procurement stage, requiring deploying authorities to show in advance that a proposed AI system clears each of these hurdles before it is approved. In practice, the deploying authority would need to produce a documented justification satisfying each limb, subject to review by the relevant sectoral regulator proposed under the 2025 AI Governance Guidelines. Until those institutions are fully operational, existing bodies such as the National Human Rights Commission, state Police Complaints Authorities, or courts exercising judicial review could perform these functions in the interim, meaning the framework need not wait for perfect institutional conditions to start working. The alternative is not neutrality but unchecked expansion, as systems continue to be deployed across policing, welfare and public services with no meaningful constitutional scrutiny at all. This reinforces the central argument of this article: without domestic institutional capacity and legally embedded safeguards, external regulatory influence such as the Brussels Effect cannot translate into meaningful rights protection where it matters most.
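In schematic terms, the review step might look like the following sketch: each Puttaswamy limb must carry a documented justification before the reviewing body approves deployment. The structure and sample entries are purely illustrative, not a statement of Indian law.

```python
# A sketch of a three-limb review mirroring the Puttaswamy test; the
# review logic and sample entries are illustrative, not a statement of law.
PUTTASWAMY_LIMBS = (
    "sanctioned_by_law",
    "necessary_for_legitimate_aim",
    "proportionate_to_aim",
)

def review_justification(justification: dict[str, str]) -> tuple[bool, list[str]]:
    """A sectoral regulator (or, in the interim, the NHRC or a court)
    approves deployment only if every limb carries a documented justification."""
    missing = [limb for limb in PUTTASWAMY_LIMBS
               if not justification.get(limb, "").strip()]
    return (not missing, missing)

submission = {
    "sanctioned_by_law": "Deployment authorised under an enabling statute",
    "necessary_for_legitimate_aim": "",   # limb left undocumented
    "proportionate_to_aim": "Least-intrusive-alternative analysis attached",
}
approved, gaps = review_justification(submission)
print("approved:", approved, "| undocumented limbs:", gaps)  # fails on necessity
```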
These problems are not unique to India. The United Kingdom (UK) has over thirty AI-related policing projects underway, yet oversight remains fragmented, relying primarily on local data ethics committees such as the West Midlands Police Data Ethics Committee and soft-law tools like ALGOCARE.[52] Recent scandals involving the use of false AI-generated information to make policing decisions[53] have made it evident that advisory frameworks alone cannot guarantee transparency or contestability. Predictive policing therefore illustrates how difficult it is to regulate AI-driven police power even within sophisticated regulatory systems, and the limitations of market-driven diffusion are equally apparent for India.
CONCLUSION
The EU and Indian approaches to AI-based predictive policing differ significantly, and the differences run deeper than regulatory design. They rest on radically different assumptions about the relationship between state power and individual rights. The EU's risk-based framework is effective because of the institutional architecture underpinning it, and that architecture does not travel. The Brussels Effect may push countries towards standardisation, but what is actually exported is the paperwork: impact assessments, conformity certificates, and technical documentation, not the deployer-side institutional obligations that give any of it meaning. Without those obligations, passive import does not improve governance; it provides cover for its absence. That is how legitimisation works in practice: through the gap between the formal adoption of a statute and its actual enforcement. In India, the issue is not whether a governance framework is required, but whether a state that implements predictive policing through opaque procurement procedures has sufficient institutional incentive to constrain its own use of power.
[1] Ankita Deshkar, ‘Satya Nadella launches AI-powered tool for Maharashtra police: How will it change crime investigation in state’ The Indian Express (Nagpur, 14 December 2025).
[2] Press Trust of India, ‘AI and facial recognition tech “Trinetra” launched to aid Uttar Pradesh Police’ (Firstpost, 27 December 2018) < https://www.firstpost.com/tech/news-analysis/ai-and-facial-recognition-tech-trinetra-launched-to-aid-uttar-pradesh-police-5799901.html > accessed 25 February 2026.
[3] European Parliament, 'EU AI Act: first regulation on artificial intelligence' (European Parliament, 8 June 2023) <http://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence> accessed 30 December 2025.
[4] Spyros Kaminaris, ‘AI Act: Prohibited AI Practices Become Applicable’ (EY, 4 February 2025) < https://www.ey.com/en_gr/technical/tax/tax-alerts/ai-act-prohibited-ai-practices-become-applicable > accessed 14 February 2026.
[5] European Commission, ‘European approach to artificial intelligence’ (European Commission Digital Strategy) < https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai > accessed 30 December 2025.
[6] Anu Bradford, The Brussels Effect: How the European Union Rules the World (Oxford University Press 2020).
[7] European Commission (n 5).
[8] Laura Lázaro Cabrera, ‘EU AI Act Brief – Pt. 2: Privacy & Surveillance’ (Center for Democracy & Technology, 30 April 2024) < https://cdt.org/insights/eu-ai-act-brief-pt-2-privacy-surveillance/ > accessed 30 December 2025.
[9] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) OJ L1689, art 16.
[10] Ilze Barkane and Liga Buka, 'Prohibited AI Surveillance Practices in the Artificial Intelligence Act: Promises and Pitfalls in Protecting Fundamental Rights' in Critical Perspectives on Predictive Policing (Edward Elgar Publishing 2025) 112.
[11] A&O Shearman, 'Zooming in on AI: EU AI Act Focus on Fundamental Rights Impact Assessment for High-Risk AI Systems' (A&O Shearman Tech Insights, 9 December 2024) <https://www.aoshearman.com/en/insights/ao-shearman-on-tech/zooming-in-on-ai-13-eu-ai-act-focus-on-fundamental-rights-impact-assessment-for-high-risk-ai-systems> accessed 30 December 2025.
[12] NITI Aayog, ‘National Strategy for Artificial Intelligence’ (Niti Aayog, 27 June 2018) < https://www.niti.gov.in/sites/default/files/2023-03/National-Strategy-for-Artificial-Intelligence.pdf > accessed 19 March 2026.
[13] Antara Vats, 'Building the Case for Restricted Use of Predictive Policing Tools in India' (2022) 32(11) International Review of Information Ethics <https://informationethics.ca/index.php/irie/article/view/487/467> accessed 12 December 2025.
[14] Ananya Singh, ‘Algorithmic Injustice: AI, Caste and Democracy in India’ (LHSS Collective, 4 August 2025) < https://lhsscollective.in/algorithmic-injustice-ai-caste-and-democracy-in-india/ > accessed 16 December 2025.
[15] Vidushi Marda and Shiv Narayan, ‘Data in New Delhi’s Predictive Policing System’ (Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAT), 22 January 2020).
[16] Vidushi Marda and Shiv Narayan (n 15).
[17] Constitution of India, arts 14 and 21.
[18] Justice K.S. Puttaswamy v Union of India [2017] 10 SCC 1 (SC).
[19] Abhay Singh, ‘How AI-based FRS is helping Delhi cops identify and track city villains’ The Times of India (New Delhi, 25 December 2024).
[20] Vallari Sanzgiri, 'Delhi Police's Reply to IFF RTI Shows FRT May Worsen Religious Bias in Policing' (MediaNama, 18 August 2022) <https://www.medianama.com/2022/08/223-delhi-police-iff-rti-facial-recognition-80-percent-accuracy-2/> accessed 14 December 2025.
[21] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) OJ L1689, art 10(2)(f).
[22] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) OJ L1689, Art 9.
[23] Mohd Wasim Bablu v State (NCT of Delhi) Delhi HC 8454.
[24] 'Ratan Lal Case: 2 Declared Proclaimed Offenders' (The Hindu, 6 September 2020) <https://www.thehindu.com/news/cities/Delhi/ratan-lal-case-2-declared-proclaimed-offenders/article32537952.ece> accessed 25 March 2026.
[25] Right to Information Act 2005.
[26] Astha Savyasachi, 'As AI Took Over Policing in Delhi, Who Bore the Brunt?' (The Wire, 2 July 2025) <https://thewire.in/law/as-ai-took-over-policing-in-delhi-who-bore-the-brunt> accessed 30 December 2025.
[27] Akriti Bopanna, 'India's Tryst with Predictive Policing' (Vidhi Centre for Legal Policy, 13 April 2020) <https://vidhilegalpolicy.in/blog/indias-tryst-with-predictive-policing/> accessed 20 March 2026.
[28] Software Freedom Law Centre, 'Tracing the Rise of Predictive Policing in India' (Software Freedom Law Centre, 26 February 2026) <https://sflc.in/tracing-the-rise-of-predictive-policing-in-india/> accessed 20 March 2026.
[29] Constitution of India 1950, Schedule VII, List II, Entry 2.
[30] Anu Bradford, 'The Brussels Effect' (2012) 107 Northwestern University Law Review 1.
[31] Dan Peeperkorn and Francesco Guarascio, ‘EU Autonomy, the Brussels Effect and the Rise of Global Economic Protectionism’ (ECIPE, 27 August 2025) < https://ecipe.org/publications/eu-autonomy-brussels-effect-rise-global-economic-protectionism/ > accessed 30 December 2025.
[32] Michael Rasmussen, ‘The Regulatory Divide: How EU and US Approaches Shape Business Strategy’ (GRC 2020, 6 March 2025) < https://www.grc2020.com/2025/03/06/the-regulatory-divide-how-eu-and-us-approaches-shape-business-strategy/ > accessed 21 February 2026.
[33] TOI Tech Desk, 'TCS, Infosys, Wipro, Google and Microsoft Among 100 Tech Companies to Sign Europe's First Ever AI Legal Framework' (Times of India, 29 September 2024) <http://timesofindia.indiatimes.com/articleshow/113763787.cms> accessed 20 March 2026.
[34] Brazil General Data Protection Law 2018.
[35] Act on the Protection of Personal Information (Japan), as amended in 2020.
[36] Ronan Murphy, 'Mapping the Brussels Effect: The GDPR Goes Global' (Center for European Policy Analysis, 7 August 2025) <https://cepa.org/comprehensive-reports/mapping-the-brussels-effect-the-gdpr-goes-global/> accessed 30 December 2025.
[37] Disha Verma, ‘What We Do in the Shadows: IFF Seeks Transparency in How Indian “Smart Governments” Are Using AI’ (Internet Freedom Foundation, 26 March 2024) < https://internetfreedom.in/transparency-in-government-ai/ > accessed 21 February 2026.
[38] Marco Almada and Anca Radu, 'The Brussels Side-Effect: How the AI Act Can Reduce the Global Reach of EU Policy' (2024) 25 German Law Journal 646.
[39] Shaoshan Liu, ‘India’s AI Regulation Dilemma’ (The Diplomat, 27 October 2023) < https://thediplomat.com/2023/10/indias-ai-regulation-dilemma/ > accessed 30 December 2025.
[40] Amlan Mohanty and Shatakratu Sahu, ‘India’s Advance on AI Regulation’ (Carnegie Endowment for International Peace, 21 November 2024) < https://carnegieendowment.org/research/2024/11/indias-advance-on-ai-regulation?lang=en > accessed 30 December 2025.
[41] IndiaAI Mission, ‘India AI Impact Summit 2026’ (IndiaAI, February 2026) < https://impact.indiaai.gov.in > accessed 19 March 2026.
[42] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) OJ L 2024/1689, Recital 3.
[43] Marco Almada and Anca Radu (n 38).
[44] General Financial Rules 2017.
[45] Digital Personal Data Protection Act 2023, s 2(x).
[46] Disha Verma (n 37).
[47] Amnesty International, 'Entity Resolution in India's Welfare Digitalisation' (Amnesty International, 30 April 2024) <https://www.amnesty.org/en/latest/research/2024/04/entity-resolution-in-indias-welfare-digitalization/> accessed 20 March 2026.
[48] Environment (Protection) Act 1986.
[49] Aakaanksha Tiwari, ‘The Emergence of Environmental Impact Assessment in India’ (CAG, 15 August 2020) < https://www.cag.org.in/blogs/emergence-environmental-impact-assessment-india > accessed 24 March 2026.
[50] Ministry of Electronics and Information Technology, 'India AI Governance Guidelines' (MeitY, November 2025) <https://static.pib.gov.in/WriteReadData/specificdocs/documents/2025/nov/doc2025115685601.pdf> accessed 20 March 2026.
[51] Puttaswamy (n 18).
[52] Tom Sorell, 'AI-related Data Ethics Oversight in UK Policing' (2024) 18 Policing: A Journal of Policy and Practice paae016.
[53] Your News Media Newsroom, ‘UK Police Move Toward Predictive AI Systems to Identify Potential Crimes Before They Occur’ (Your News, 21 January 2026) < https://yournews.com/2026/01/21/6225260/uk-police-move-toward-predictive-ai-systems-to-identify-potential/ > accessed 24 February 2026.
Neeve Anand
BBA.LLB (Hons.) Gujarat National Law University 2030
