This study will examine the future use of artificial intelligence (AI) in policing, using a survey to explore the relationship between AI and policing practice, current planning and possible future uses.
| Lead institution | |
|---|---|
| Principal researcher(s) | Paul A. Gowers |
| Police region | South West |
| Level of research | PhD |
| Project start date | |
| Date due for completion | |
Research context
The research adopts a critical realist methodological framework grounded in Roy Bhaskar’s stratified ontology, integrating philosophical insights from Luciano Floridi’s philosophy of information and Michel Foucault’s theory of governmentality with criminological analyses of organisational culture, surveillance and legitimacy. This interdisciplinary synthesis provides a framework for identifying the underlying causal mechanisms through which artificial intelligence (AI) and digital infrastructures reshape policing authority. Empirically, the study examines UK policing, with particular focus on Gloucestershire Constabulary, situating local practices within national governance frameworks, including the National Intelligence Model, the NPCC AI Strategy (2024–2027) and the UK National AI Strategy. The research demonstrates that routine recording practices constitute the foundational micro-mechanisms of informational power, activating algorithmic processes that influence risk classification, operational deployment and institutional authority.
The thesis concludes that democratic legitimacy in AI-enabled policing depends upon ethical information governance, transparency, contestability and accountability at the level of informational infrastructure. The research contributes to sociological theory, criminology and public policy by establishing information power as a central analytical concept for understanding governance in the digital age.
Research methodology
This study adopts a cross-sectional quantitative survey design to examine trust in artificial intelligence (AI) among police officers. The design is appropriate because it enables the systematic measurement of attitudinal constructs such as trust, perceived fairness and explainability, and allows for statistical testing of relationships between variables across a defined population.
Within the broader thesis, this quantitative component operates as the empirical layer (Bhaskar’s domain of the empirical), capturing observable perceptions and reported attitudes. These data are subsequently interpreted in relation to underlying generative mechanisms (the real domain), such as organisational culture, informational authority and technological governance structures.
The design aligns with established research on policing and decision-making, where structured survey instruments are used to quantify professional attitudes toward risk, discretion and technological systems. It also reflects the increasing emphasis in UK policing policy on understanding practitioner trust as a prerequisite for responsible AI deployment.
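The attitudinal constructs described above (trust, perceived fairness, explainability) are typically measured with multi-item Likert scales, checked for internal consistency and then tested for association. As an illustrative sketch only, the fragment below shows one common workflow: Cronbach's alpha for scale reliability and a Pearson correlation between scale totals. The item wordings, response data and variable names are hypothetical, not drawn from the study's actual instrument.

```python
from statistics import mean, pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a scale given one list of scores per item."""
    k = len(items)
    item_var_sum = sum(pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-5 Likert responses from six officers on a three-item trust scale.
trust_items = [
    [4, 3, 5, 2, 4, 3],  # "I trust AI-assisted risk assessments"
    [4, 2, 5, 2, 3, 3],  # "AI outputs are usually accurate"
    [5, 3, 4, 1, 4, 2],  # "I would act on an AI recommendation"
]
# Hypothetical single-item explainability rating from the same six officers.
explainability = [4, 2, 5, 1, 4, 3]

trust_totals = [sum(scores) for scores in zip(*trust_items)]
print(f"Cronbach's alpha (trust scale): {cronbach_alpha(trust_items):.2f}")
print(f"Pearson r (trust vs explainability): {pearson_r(trust_totals, explainability):.2f}")
```

In practice an alpha of roughly 0.70 or above is usually taken as acceptable internal consistency before scale totals are carried into correlational or regression analysis.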