Considerations for deciding when and where to use artificial intelligence (AI)
The Data-driven technologies authorised professional practice (APP) emphasises that decisions on whether to apply these technologies should be driven by your force’s strategic priorities.
Horizon scanning
Your force should consider having a system in place to horizon scan for AI-based opportunities available in the marketplace, then identifying potential solutions that might align with force and national priorities. The National Police Chiefs' Council (NPCC) AI Board has asked all forces to have a designated AI lead, ideally someone with digital delivery experience. This individual would be well placed to lead on horizon scanning. Some larger forces will be able to dedicate a team to these activities, in addition to an AI lead. The horizon scanning function could involve the following:
- Looking out for opportunities to receive central support in building AI capability from national partners, such as:
  - Digital Public Contact
  - the Office of the Police Chief Scientific Adviser (OPCSA)
  - the NPCC AI Board
- Looking out for opportunities to collaborate on AI regionally – for example, by linking in with the NPCC’s Regional Innovation Leads Network. Regional collaboration can be an effective means of deepening the skills base for a project and promoting interoperability.
- Acting as a single point of contact for approaches from prospective suppliers and for requests from within your force to procure AI-based products.
- Maintaining a central register of AI use in force. This will help your force to keep track of which AI-based products are being procured and used (an illustrative register entry is sketched below this list).
- Identifying whether other innovations being brought into force have AI components, even if the technology has not been badged as an AI product. To support this, your force may want to consider requiring all supplier contracts to stipulate whether products incorporate AI.
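As an illustration, a central register could be kept as simple structured records. The sketch below is a minimal example in Python; the field names are assumptions for illustration, not a prescribed national schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIUseRegisterEntry:
    """One entry in a force's central register of AI use (illustrative fields only)."""
    product_name: str     # name of the AI-based product or service
    supplier: str         # supplier or national partner providing it
    business_owner: str   # force lead accountable for the deployment
    use_case: str         # clearly defined problem the product addresses
    contains_ai: bool     # whether the contract confirms an AI component
    date_procured: date
    review_due: date      # date of the next ethics and performance review
    notes: str = ""

# Example entry
entry = AIUseRegisterEntry(
    product_name="Document redaction assistant",
    supplier="Example Supplier Ltd",
    business_owner="Force AI lead",
    use_case="Scrubbing sensitive data from disclosure material",
    contains_ai=True,
    date_procured=date(2024, 6, 1),
    review_due=date(2025, 6, 1),
)
```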
Criteria for a potential use case
The Data-driven technologies APP states that AI should be used in response to a clearly defined problem or opportunity for improvement. A starting point is to think about the challenges your force faces in trying to meet its top priorities, then to consider which of these are amenable to an AI solution. AI can help where there is a need to:
- accelerate and streamline processes
- optimise the use of resources
- introduce more data-driven decision-making
- perform complicated or specialist tasks more cost-effectively
- increase staff satisfaction
Need for an AI-based solution
AI should not be treated as a default solution. Alternative approaches may offer a more cost-effective and lower-risk means of achieving the same outcome. For example, if a force is considering investing in a predictive, AI-driven tool for identifying young people at risk of involvement in knife crime, it is essential to evaluate whether the tool offers a clear operational advantage over existing analytical capabilities and represents value for money. The Machine learning guide for policing includes a list of questions to support forces in determining whether they are pursuing the most appropriate and proportionate solution.
Appropriateness of AI
Your force’s data and AI maturity
You will need to consider whether a particular use of AI is appropriate for your force’s level of digital infrastructure, skills and experience. If your force’s use of AI to date is limited, consider starting with well-established uses that are less directly related to the front line, such as:
- automated document redaction – using AI-based tools to scrub sensitive and protected data from lengthy texts (a simplified sketch follows this list)
- synthesis of complex data – producing simple, structured summaries from long or complex material
- enhanced search – quick retrieval of relevant organisational information or case file content
- support for responding to requests – analysing voice calls to understand citizens’ needs and routing their requests to the place where they can best get help
- support for digital enquiries – enabling citizens to express their needs in natural language online, and helping them to find the content and services that are most useful to them
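As a simplified illustration of the first item above, the sketch below uses the open-source spaCy library to mask named entities in free text. This is a minimal example, not a production redaction tool: the model name, the entity types chosen and the redaction policy are all assumptions, and any operational tool would need testing against force disclosure standards.

```python
import spacy

# Load a small English NER model
# (assumes `python -m spacy download en_core_web_sm` has been run)
nlp = spacy.load("en_core_web_sm")

def redact(text: str, labels: set[str] = {"PERSON", "GPE", "DATE"}) -> str:
    """Replace named entities of the given types with a [REDACTED] placeholder."""
    doc = nlp(text)
    redacted = text
    # Work backwards through the entities so earlier character offsets stay valid
    for ent in reversed(doc.ents):
        if ent.label_ in labels:
            redacted = redacted[:ent.start_char] + "[REDACTED]" + redacted[ent.end_char:]
    return redacted

print(redact("John Smith was seen in Manchester on 3 May."))
# e.g. "[REDACTED] was seen in [REDACTED] on [REDACTED]."
```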
Safety, ethics and legal considerations
You also need to consider whether the use case and the context in which it will be applied are appropriate for AI. The ethical standards that force use of AI needs to meet are set out in the Data ethics APP, which incorporates the NPCC AI Covenant. The APP should be read in full. However, for ease of reference, the key principles are as follows.
Human intervention
Procedures should place human intervention at the core of decision-making. A human in the loop makes the decision, advised by AI and aware of its risks and limitations. The technology does not make the final decision.
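As a minimal sketch of this pattern (the names, fields and scenario below are hypothetical), an AI output can be framed as a recommendation that a named human reviewer must explicitly accept or reject before any action follows.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An AI system's output, framed as advice rather than a decision."""
    action: str
    confidence: float   # the model's own confidence, 0.0 to 1.0
    limitations: str    # known risks and caveats shown to the reviewer

def decide(rec: Recommendation, reviewer: str, approved: bool) -> str:
    """The human reviewer, not the model, makes the final decision."""
    if approved:
        return f"{reviewer} approved: {rec.action}"
    return f"{reviewer} rejected the recommendation; no action taken"

rec = Recommendation(
    action="Flag case for safeguarding review",
    confidence=0.72,
    limitations="Trained on historical data; may under-represent some groups",
)
print(decide(rec, reviewer="Duty inspector", approved=True))
```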
Safety and wellbeing
Procedures should prioritise the safety and wellbeing of the public.
Fairness and impartiality
Procedures must operate with fairness and impartiality. They must not discriminate against or disadvantage individuals, particularly:
- those who are most vulnerable
- individuals or groups based on their ‘protected characteristics’ as defined in the Equality Act 2010
Respect and dignity
Procedures should show the highest standards of respect and dignity towards individuals and groups, as set out in the Code of Ethics.
Transparency, proportionality and respect for rights
Procedures should be transparent and proportionate, and should respect sensitive personal data and human rights in accordance with the law. An important aspect of transparency is explainability – how clearly an AI system's decision-making can be understood by those interpreting or overseeing it. Relevant personnel should be able to access, assess and understand the system's reasoning, reliability and fairness. The AI playbook for UK government gives examples of where use of AI is inappropriate, including using AI on its own in high-risk, high-impact situations.
The Information Commissioner’s Office (ICO) and The Alan Turing Institute have produced joint guidance on explaining decisions made with AI.