
Context
There’s been a flurry of AI policy activity in Australia in the last month – we think it’s time to provide an update! By way of quick recap:
Australia currently has no “economy-wide” AI-specific legislation, although it has a number of general laws (e.g. privacy, anti-discrimination, intellectual property and product liability laws) and sector-specific laws (e.g. the therapeutic goods regime) that may apply to the use of AI.
Australia’s AI Ethics Principles, published in 2019, reflect Australia’s commitment to international frameworks such as the OECD AI Principles, but they are non-binding and high-level.
In 2023, the government invited public comment on the required regulatory settings for AI via its discussion paper ‘Safe and Responsible AI in Australia’. After receiving submissions, it published an Interim Response in January 2024, noting the public’s call for regulation of high-risk AI and of powerful ‘frontier’ models that could pose unforeseen risks. The Interim Response flagged 5 pillars for government action:
- delivering regulatory clarity and certainty
- supporting and promoting best practice
- supporting AI capability
- government as an exemplar in the use of AI
- international engagement
In line with those pillars, the government has recently published a blizzard of papers:
- Voluntary AI Safety Standard (published 5 Sep 2024)
- Proposals Paper for mandatory guardrails for AI in high-risk settings (published 5 Sep 2024)
- Policy for Responsible Use of AI in Government (published 15 Aug 2024)
- Standard for AI transparency statements (published 29 Aug 2024)
- National Framework for the Assurance of Artificial Intelligence in Government (agreed 21 June 2024).
The Standard and the Proposals Paper
Let’s summarise the two papers which have general application first:
The Voluntary AI Safety Standard (the Standard) covers all organisations developing and deploying AI. It proposes 10 voluntary guardrails that organisations can adopt in relation to AI systems of any risk level. This first version of the Standard focuses more closely on organisations that deploy AI systems, and it offers some high-level suggestions on procurement due diligence to help ensure third-party systems comply with the guardrails. The next version of the Standard is expected to expand on technical practices and guidance for AI developers.
The guardrails themselves are a mixture of organisational obligations and system-level obligations for each use case or AI system. They focus on 1) testing (pre- and post-deployment), 2) transparency, and 3) accountability for governance and risk management. As well as mapping to Australia’s AI Ethics Principles, the Standard draws on and is aligned with key international standards such as AS ISO/IEC 42001:2023 on AI management systems and NIST’s AI RMF 1.0.
The Proposals Paper, published at the same time as the Standard, seeks comment on mandatory regulatory settings that would apply to all developers and deployers of high-risk AI. The proposed mandatory guardrails are identical to those in the voluntary Standard, with one exception: voluntary guardrail 10 concerns stakeholder engagement, whereas mandatory guardrail 10 requires conformity assessments.
The Proposals Paper invites feedback on how to define high-risk AI (a principles-based versus a list-based approach), and on whether mandatory guardrails should apply to all General Purpose AI (GPAI) models or just a subset. It also seeks views on how the mandatory requirements should be implemented in law: whether government should amend existing laws on a case-by-case basis to include the guardrails (the “domain-specific approach”), define the guardrails in separate legislation that can be incorporated by existing laws (the “framework approach”), or introduce a new cross-economy AI-specific Act (the “whole of economy” approach).
Policy for Government Agencies
Now let’s turn to the policy settings for government agencies. Whilst not applicable to corporate organisations, they are useful exemplars:
The Policy for Responsible Use of AI in Government (the Policy) is a high-level document that aims to ensure the government leads by example in embracing AI responsibly. The Policy is mandatory for all non-corporate Commonwealth entities, except for the defence portfolio and the national intelligence community, and took effect on 1 September 2024.
Amongst other things, it requires agencies to designate accountable officials and to publish an AI transparency statement by 30 Nov 2024 outlining the agency’s approach to AI adoption. Helpfully, as noted above, the government has published a Standard for AI transparency statements, which defines a consistent format and classification system and should make it easier for the public to understand and compare how government agencies adopt AI.
The National Framework for the Assurance of Artificial Intelligence in Government (the Framework) is a slightly older document: it was agreed to by all state and territory governments at the Data and Digital Ministers Meeting on 21 June 2024. It provides a nationally consistent approach to the assurance of AI in government, based on Australia’s AI Ethics Principles and 5 “cornerstones”: AI governance, data governance, a risk-based approach, adoption of existing standards, and consideration of procurement practices.
Our Feedback on the Proposals Paper
The voluntary guardrails are intended to fill a gap while the government irons out its approach to mandatory legislation. In our view, the answers to some of the questions in relation to mandatory legislation are clear: in the interests of regulatory certainty and clarity, and following the EU precedent, we should adopt a list-based approach to defining high-risk AI.
Other questions deserve discussion: for example, should all GPAI models be deemed high-risk? Overseas jurisdictions such as the EU and the US apply maximum guardrails only to specific categories of GPAI models, defined by reference to systemic risk or compute-based thresholds (e.g. 10^26 FLOPs).
By contrast, Australia proposes applying maximum guardrails to all GPAI models, defined as any “AI model that is capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration in other systems”. Could this mean that US and EU vendors face stricter compliance obligations in Australia than at home? That would run counter to the government’s stated preference for international harmonisation.
In relation to the publications for government agencies, the Framework in particular is a useful primer for any organisation wondering how to operationalise Australia’s AI Ethics Principles.
Key Dates
The public consultation period for the Proposals Paper closes at 5pm on Friday, 4 Oct 2024.
For Further Information
Contact Bronwyn Ross at b.ross@redmarble.ai or Dave Timm at d.timm@redmarble.ai