AI Regulatory Update – US

By Bronwyn Ross

July 2023
The following summary is not legal advice, but it may help you decide whether you need to talk to your lawyers and update your AI Governance framework. (AI Governance refers to the policies, processes and tools that support responsible and effective use of AI and AI-enabled products throughout their lifecycle.) Please note that regulatory frameworks are developing rapidly, and you should always seek up-to-date legal advice on your obligations in each jurisdiction.

***************************************************************

There is currently no comprehensive federal legislation on AI in the US, despite much activity: in 2022 alone, 88 bills relating to AI were introduced in Congress. State legislatures are also showing growing interest in AI policy, with 60 AI-related bills proposed and more than 21 passed into law in 2022.

In the absence of binding legislation, federal agencies have been busy issuing guidance and voluntary frameworks. On consumer rights, the Federal Trade Commission published an Advance Notice of Proposed Rulemaking (ANPR) on commercial surveillance and lax data security practices in August 2022. More recently, it issued a warning to businesses to keep AI claims in check, and a reminder that existing prohibitions against deceptive or unfair conduct can also apply to the development, sale or use of tools designed to deceive (such as synthetic media or deepfake technologies). In the field of intellectual property rights, the US Copyright Office issued an important statement of policy on the registration of works containing AI-generated material. And in the area of risk management, the US Department of Commerce's National Institute of Standards and Technology (NIST) published its Artificial Intelligence Risk Management Framework v1.0 (the AI RMF) in January 2023. The AI RMF is flexible and scalable, designed to be used by large and small organisations in the public, private and non-profit sectors.

In another interim move, the Biden administration announced on 21 July 2023 that it had secured pledges from seven tech vendors (Microsoft, OpenAI, Google, Meta, Amazon, Anthropic and Inflection) to make eight voluntary commitments promoting the safe, secure and trustworthy development and use of AI technology. These voluntary AI Commitments are somewhat limited in scope: they apply only to generative models that “are overall more powerful than the current industry frontier” (such as GPT-4 and DALL-E 2), and in some cases they replicate the companies’ existing policies. They are designed to bridge a gap until regulations covering substantially the same issues take effect.

The background to all this activity is the Biden-Harris Administration’s Blueprint for an AI Bill of Rights (AI BoR), released by the Office of Science and Technology Policy (OSTP) in October 2022. It provides a framework for how government, technology vendors and citizens can work together to ensure accountable AI. Here we summarise the content of the AI BoR.

Process:  The AI BoR was developed through consultation with the public, researchers, technologists, advocates, journalists, and US government agencies. 

Overview: The AI BoR comprises a set of five principles and associated practices guiding the design, use, and deployment of automated systems in order to protect the rights of the American public. The principles are: 1) protection from unsafe or ineffective AI systems; 2) protection against algorithmic discrimination; 3) protection from abusive data practices and agency over personal data; 4) notice when an automated system is being used and explanation of how it impacts you; 5) the ability to opt out of AI systems and seek remedy from an accountable person. The AI BoR includes actions that actors can take to translate the principles into practice.

Who will be impacted: The framework applies to (1) automated systems that (2) have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services, for example in the areas of education, health and housing. It has horizontal application, across all sectors.

How it will be enforced: The framework has no enforcement mechanism. It is intended to be used to inform policy decisions where existing law does not provide guidance.

Timing: Effective from date of publication (4 Oct 2022).

Comment: Whilst the AI BoR provides useful guidance on the rights to be preserved, it does not enforce compliance. In this regard, the patchwork of use-case-specific and geography-specific legislation being enacted at state level creates a difficult compliance environment for developers and users of AI in the US. One development worth watching is California’s Assembly Bill 311, which would regulate automated decision-making tools in a manner consistent with the AI BoR. If passed, this proposal would require deployers and developers of “consequential” AI products (related to employment, education, housing and health care) to conduct impact assessments; provide notice and opt-out rights to California residents; and implement reasonable administrative and technical safeguards to address risks of algorithmic discrimination. The law would be enforceable by the California attorney general and would also contain a limited private right of action. Assembly Bill 311 is currently still in committee, but is likely to be enacted given California’s proactive stance on tech policy, and other states may then follow suit.

On a separate issue, work is also being undertaken on mechanisms to promote public trust in AI systems. The federal Department of Commerce issued a request for comment on accountability measures for AI, which closed on 12 June 2023. And in June 2023, the US Congress introduced a bipartisan bill for a National AI Commission that would recommend what governmental structures may be needed to oversee and regulate AI systems, including general purpose AI (GPAI) systems.

Thanks for checking out our business articles. If you want to learn more, feel free to reach out to Red Marble AI. You can click on the "Let's Talk" button on our website or email Bronwyn, our AI Governance expert, at b.ross@redmarble.ai.

We appreciate your interest and look forward to sharing more with you!