AI Regulatory Update – EU

By Bronwyn Ross

July 2023

The following summary is not legal advice, but it may help you decide whether you need to talk to your lawyers and update your AI Governance framework. (AI Governance refers to the policies, processes and tools that support responsible and effective use of AI and AI-enabled products throughout their lifecycle.) Please note that regulatory frameworks are developing rapidly, and you should always seek up-to-date legal advice on your obligations in each jurisdiction.

***************************************************************

Process: The first draft of Europe’s Artificial Intelligence Act (the “AI Act”) was introduced in April 2021 and is currently going through the EU legislative process. In June 2023 it entered three-way (“trilogue”) negotiations between the European Parliament, Commission and Council (representing the 27 EU Member States). The bill will become law once the parties agree on the final text, which is expected by the end of 2023.

Overview: The proposed AI Act applies “horizontally” (across all sectors, excluding military) and has extraterritorial reach (it can apply to non-EU organisations that supply AI systems into the EU).

It classifies AI applications and systems as unacceptable risk (to be banned), high risk (to be regulated) or limited / minimal risk (largely unregulated, though subject to transparency requirements). Providers of high-risk systems will be required to register them in an EU database operated by the Commission and to submit conformity assessments before placing them on the market. Changes introduced by the EU Parliament would also require users of high-risk systems to conduct a fundamental rights impact assessment before deploying them.

The current draft proposes six general principles applicable to all AI systems and their operators: (1) human agency and oversight; (2) technical robustness and safety; (3) privacy and data governance; (4) transparency; (5) diversity, non-discrimination and fairness; and (6) social and environmental well-being. General purpose AI (GPAI) systems aren’t necessarily regulated as high-risk AI systems, but they are subject to strict requirements around data governance, technical documentation and quality management systems. Deployers of generative AI (e.g. ChatGPT and DALL-E) need to notify individuals that they are interacting with an AI system, and disclose what copyrighted material has been used to train their systems to create text and images.

Deployers have various compliance obligations. These include the fundamental rights impact assessment noted above (similar to a data protection impact assessment) and a requirement to provide certain information to individuals affected by decisions made by a high-risk AI system.

Who will be impacted: The AI Act will apply to providers of AI systems in the EU market, no matter where the provider is located; users established in the EU; and to users or providers of AI systems in third countries if their output is used in the EU. It catches importers and distributors of AI systems, and product manufacturers who integrate AI systems with their products. It doesn’t apply to personal use, but it does catch small to medium enterprises.

How it will be enforced: Regulators in each of the EU member states will be responsible for enforcement of the AI Act, as with the General Data Protection Regulation (GDPR). They will be advised by an EU “AI Board” comprising representatives from member states and the Commission.

Penalties: Failing to comply with the rules around prohibited uses and data governance is punishable by a fine of up to €30M or 6 percent of worldwide annual turnover (whichever is higher). For high-risk AI systems, the upper limit is €20M or 4 percent of turnover. Failing to supply accurate and complete information to national bodies can result in a fine of up to €10M or 2 percent of turnover.
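In each tier the cap is the higher of a flat amount and a percentage of worldwide annual turnover. By way of a worked example, here is a minimal Python sketch (illustrative only; the tier labels and function name are our own shorthand, and the figures reflect the draft quoted above, which may change in the final text):

def max_fine_eur(annual_turnover_eur, tier):
    """Maximum fine under the draft AI Act: the higher of a flat cap
    and a percentage of worldwide annual turnover for each tier."""
    tiers = {
        "prohibited_or_data_governance": (30_000_000, 0.06),
        "high_risk": (20_000_000, 0.04),
        "inaccurate_information": (10_000_000, 0.02),
    }
    flat_cap, pct = tiers[tier]
    return max(flat_cap, pct * annual_turnover_eur)

# A provider with EUR 1bn worldwide turnover breaching the prohibited-use
# rules faces up to EUR 60m, since 6% of turnover exceeds the EUR 30m floor.
print(max_fine_eur(1_000_000_000, "prohibited_or_data_governance"))  # 60000000.0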

Timing: The Act is currently predicted to be adopted by the end of 2023. Once adopted, a grace period of up to 24 months will apply before enforcement begins.

Guidance: Compliance with EU technical standards will create a presumption of conformity for high-risk AI applications and services, creating a strong incentive for providers to adopt EU standards. Global standards bodies (ISO/IEC) have been working on their AI roadmap over the last 24 months, and an important AI management standard (ISO/IEC 42001) is expected to be published by the end of 2023. The European standards bodies are likely to adopt and align with the ISO/IEC standards to a large degree.

And furthermore: The EU’s draft AI Liability Directive, introduced on 28 September 2022, complements the draft AI Act. It aims to help civil plaintiffs obtain redress for damage caused by AI systems by updating the national liability rules in Member States. It is notable for introducing a presumption of causation in fault-based scenarios, and for allowing claims to be brought by a subrogated party or a representative of a claimant, including by class action. The approach is similar to product liability. Whilst the draft does not propose strict liability where there is no obvious defective product or fault of the defendant, it does propose that the European Commission review the need for no-fault strict liability rules five years after the AI Liability Directive comes into force.

Comment: The EU’s world-first draft legislation is likely to continue the “Brussels effect” if passed, entrenching Europe’s thought leadership in the field of consumer digital rights. Its extraterritorial effect could establish it as a global standard, in the same way the GDPR has become the gold standard for privacy. Given the length of time before it takes effect, however, the EU is now calling for an interim self-regulation initiative to cover generative AI products. At the US/EU Trade and Technology Council meeting in May 2023, the EU proposed an initiative with the US to establish a voluntary generative AI code of conduct, which would be put before G7 leaders and invited companies as a joint transatlantic proposal. It plans to develop a draft with industry input in the coming weeks.

Thanks for checking out our business articles. If you want to learn more, feel free to reach out to Red Marble AI. You can click on the "Let's Talk" button on our website or email Bronwyn, our AI Governance expert, at b.ross@redmarble.ai.

We appreciate your interest and look forward to sharing more with you!
