The UK government has published a paper outlining its approach to regulating AI. It says that its “ambition is to support responsible innovation in AI”. The proposed rules take a flexible approach to addressing future risks and opportunities, so that businesses are clear about how they can develop and use AI systems and consumers are confident that those systems are safe and robust.
The report says that AI has two particular characteristics that should be considered when regulating it:
- The ‘adaptiveness’ of the technology, which makes it difficult to explain the intent or logic behind its outcomes; and
- The ‘autonomy’ of the technology, which makes it difficult to assign responsibility for its actions.
The government’s approach is based on six core principles that regulators must apply, with the flexibility to implement them in ways that best suit the use of AI in their sectors. These build on the OECD Principles on AI and require developers and users to:
- Ensure that AI is used safely;
- Ensure that AI is technically secure and functions as designed;
- Make sure that AI is appropriately transparent and explainable: regulators may deem that decisions which cannot be explained should be prohibited entirely;
- Consider fairness: high-impact outcomes, and the data points used to reach them, should be justifiable and not arbitrary;
- Identify a legal person to be responsible for AI: accountability for the outcomes produced by AI and legal liability must always rest with an identified or identifiable legal person, whether corporate or natural; and
- Clarify routes to redress or contestability – the government expects regulators to implement proportionate measures to ensure the contestability of the outcome of the use of AI in relevant regulated situations.
It says that regulators such as the ICO or the FCA will be asked to interpret and implement the principles. They will be encouraged to consider lighter-touch options, which could include guidance and voluntary measures, or creating sandboxes – trial environments where businesses can test the safety and reliability of AI technology before bringing it to market. The government will also consider whether there are any regulatory gaps.
Commentators have expressed disappointment that the report does not specifically cover the use of AI in the health and safety or employment contexts, and have called for transparency on personal data usage. The government does reference the issue of bias and opaque decision-making in its report, which we have written about recently, and the White Paper is likely to consider these issues. The EU's approach to regulating AI may also render the government's approach somewhat redundant in relation to cross-border activities, as companies will have to comply with the EU AI Act in any event.
The government invites feedback on the report. The call for evidence ends on 26 September 2022, following which a White Paper is planned. The call for evidence comes as the government has introduced the Data Protection and Digital Information Bill to parliament. We reported on the government’s plans for the Bill here.
The paper itself acknowledges that this position may change: “While we currently do not see a need for legislation at this stage, we cannot rule out that legislation may be required as part of making sure our regulators are able to implement the framework.”