July 2023
Of course, the following summary is not legal advice, but it may help you decide whether you need to talk to your lawyers and update your AI Governance framework. (AI Governance refers to the policies, processes and tools that support responsible and effective use of AI and AI-enabled products throughout their lifecycle.) Please note that regulatory frameworks are developing rapidly, and you should always seek up-to-date legal advice on your obligations in each jurisdiction.
***************************************************************
Effective governance of AI is one of the three pillars underpinning the UK’s ten-year plan to make Britain “a global AI superpower”, as described in its National AI Strategy.
There are currently no UK laws written explicitly to regulate the use of AI technologies, although AI use is partially captured by existing data protection and processing laws. Post-Brexit, the UK is focused on gaining competitive advantage through a business-friendly approach to AI regulation and a “more nimble” regulatory framework than the EU’s. This approach is reflected in its March 2023 white paper, A pro-innovation approach to AI regulation, which proposes a decidedly light-touch regime.
Overview: The white paper specifies five high-level common principles to be issued on a non-statutory basis and implemented by existing regulators, making use of their domain-specific expertise to contextualise them. The common principles are: safety, security & robustness; appropriate transparency & explainability; fairness; accountability & governance; and contestability & redress (overall, broadly consistent with the OECD AI Principles). Following an initial period of implementation, and “when parliamentary time allows”, the government may introduce a statutory duty on regulators requiring them to have due regard to the principles. The paper identifies some central monitoring and coordination functions required to ensure the framework is effective, but rules out creation of a new AI regulator.
Who will be impacted: Suppliers, developers and users of AI in the UK will continue to be impacted by changes proposed to existing data protection and processing laws. Those operating in regulated sectors should also consult any specific guidance issued by their regulator to determine what (if any) additional obligations may apply to them.
How it will be enforced: This approach relies on the enforcement powers and processes of existing regulators such as the Information Commissioner’s Office (ICO), Competition and Markets Authority (CMA), Office of Communications (Ofcom), Medicines and Healthcare products Regulatory Agency (MHRA) and the Equality and Human Rights Commission (EHRC). The government may review and update the powers and remits of regulators as required, but overall takes the view that uniform powers are not necessary.
Timing: Submissions on the white paper were due by 21 June 2023, after which the government will publish its response. It aims to publish final cross-sectoral principles by the end of 2023, together with an AI Regulations Roadmap, which will describe how the principles will be monitored and coordinated.
Guidance: The white paper anticipates that UK regulators will publish non-statutory guidance over the next 12 months, including practical tools such as risk assessment templates and standards. It requires guidance to be pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative.
Some regulators have already issued guidance. The Government Digital Service and the Office for Artificial Intelligence published a joint Guide to using artificial intelligence in the public sector in 2019, and the EHRC followed up with its own advice and guidance on the use of AI in public services in 2022. The ICO’s more recent Guidance on AI and data protection, published in October 2022, is aimed at a broader audience including the private sector. It includes the first version of a risk management toolkit that identifies risks at each phase of the AI lifecycle, together with controls and practical steps to reduce them.
Comment: Assuming most suppliers and developers have their eye on the much larger EU market and will use the EU AI Act as their compliance benchmark, it may be sensible for the UK not to create yet another horizontal statutory regime.
A vertical approach risks creating a more difficult compliance environment for organisations active in several sectors, but if the regulators collaborate closely and adopt common international standards, that burden will be lightened.
Like the EU and the US, the UK sees technical standards as a means of implementing and harmonising AI governance. It is monitoring and participating in the work of international standards development organisations via its AI Standards Hub. It is also very focused on developing the audit and assurance services needed to certify compliance with standards. Its AI Assurance Roadmap describes a vision in which UK professional service firms provide “a range of services to build justified trust in AI” – a whole new line of business for auditors and consultants.
Given the number of regulators that will now need AI skills and expertise to carry out their role, some form of resource pooling may be required to manage the looming talent shortfall. Some key UK regulators, including the ICO, CMA, Financial Conduct Authority (FCA) and Ofcom, are already working together through the Digital Regulation Cooperation Forum to promote coherence, collaboration and capability building; it will be interesting to see whether membership of this forum is expanded in future.
Thanks for checking out our business articles. If you want to learn more, feel free to reach out to Red Marble AI. You can click the "Let's Talk" button on our website or email Bronwyn, our AI Governance expert, at b.ross@redmarble.ai.
We appreciate your interest and look forward to sharing more with you!