The governance and risk assessment process for artificial intelligence (AI)
Artificial intelligence (AI) should not be deployed without first undergoing ethical scrutiny. Independent ethics committees are strongly encouraged as best practice for governance where your data protection impact assessment or other risk assessments indicate a high risk of harm to individuals’ fundamental rights, freedoms and interests. These committees should be independently chaired, and convened and managed by the Office of the Police and Crime Commissioner (OPCC).
These committees should include representatives from:
- the Police and Crime Commissioner’s (PCC’s) office
- relevant disciplines in academia (such as law, computer science, information security, safeguarding and ethics)
- individuals with policing backgrounds, with a strong emphasis on appointing members who are, as far as possible, representative of the force area’s diverse communities.
If your PCC’s office does not have an ethics committee, consider asking for one to be set up. The Association of Police and Crime Commissioners can be contacted for information on such committees, their structures and terms of reference. If setting up a committee is not feasible, you could explore with the OPCC whether one of its other independent advisory groups could be adapted for the purpose.
The Biometrics and Forensics Ethics Group can also provide ethical scrutiny for high-risk, nationally significant use cases.
For lower-risk deployments of AI, there should still be an ethical scrutiny forum that is separate from the day-to-day oversight provided by any project or delivery boards. This forum should bring in independent expertise (for example, legal, technical and analytical) as required.
Regardless of the risk level associated with the tool or system, you should tell the OPCC about any proposed adoption of AI by the force and make sure it is informed in advance of any deployment. To ensure that decisions are transparent and auditable, you should document:
- all decisions
- the AI tools and systems considered
- advice provided by the ethics committee (or other ethical scrutiny forum) and its impact on final decisions
- any trade-offs – for example, increased accuracy often means decreased explicability
- summaries of the key information about the datasets and model used
Where appropriate, you should publish these documents to demonstrate transparency.
You should also work with the OPCC and the ethics committee to establish what your policy will be on informing the public about how your force is using the AI tool or system. This policy should balance the need for transparency with the cybersecurity risks that can arise with these technologies. For more information, go to the National Police Chiefs' Council (NPCC) AI Playbook.
Independent scrutiny is vital to the effective and responsible deployment of AI technologies, although its role is advisory. The overarching decision to deploy AI technologies remains with the chief constable, who should consider their wide-ranging impacts and be informed by that scrutiny.