July 2023
Of course the following summary is not legal advice, but it may help you decide if you need to talk to your lawyers and update your AI Governance framework. (AI Governance refers to the policies, processes and tools that support responsible and effective use of AI and AI-enabled products throughout their lifecycle.) Please note that regulatory frameworks are developing rapidly and you should always seek up-to-date legal advice on your obligations in each jurisdiction.
***************************************************************
Singapore’s National AI Strategy (NAIS) (Nov 2019) spells out its plan to become a leader in developing and deploying AI solutions by 2030, which includes “learning how to govern and manage the impact of AI”. Despite its small size, Singapore aims to become a common global reference point for AI governance by developing universal governance tools based on global principles, promoting them at international organisations such as the World Economic Forum (WEF) and the OECD, and obtaining industry buy-in via pilots with multinational companies.
Process: Singapore’s starting point is self-regulation. Two government agencies, the Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC), are responsible for iterating best-practice guidance via industry and international consultation.
Overview: Singapore released version 2 of its Model AI Governance Framework (the Model Framework) at the WEF in Davos in Jan 2020. The Model Framework provides detailed guidance on translating ethical principles into practice in four broad areas: internal governance structures, AI-augmented decision-making, operations management and stakeholder communications. It adopts a horizontal and technology-agnostic approach, intended to complement sector-specific guidance such as the FEAT principles (which were published by the Monetary Authority of Singapore to guide use of AI and data analytics in Singapore’s financial sector).
Who will be impacted: The Model Framework is designed for organisations that have chosen to deploy AI technologies at scale, rather than companies using off-the-shelf AI-enabled tools. Even for this limited audience, adoption of the Model Framework principles is voluntary. But all suppliers, developers and users of AI in Singapore remain subject to the Personal Data Protection Act, together with any other sector-specific laws governing data protection and security.
How it will be enforced: The Model Framework has no oversight or enforcement mechanism, and no sanctions are proposed at this point.
Timing: N/A
Guidance: The Model Framework comes with an Implementation and Self-Assessment Guide for Organisations (ISAGO) to help organisations align their business practices with its recommendations. It was further supplemented in May 2022 by A.I. Verify, a suite of tools designed to help organisations self-test their AI models. A.I. Verify packages open-source technical testing tools together with process checklists, and generates performance reports for business stakeholders. As at Jan 2023, A.I. Verify was in the pilot phase. (It is worth noting that several commercial software platforms with apparently similar capabilities are already available.)
Comment: Like most of the jurisdictions we have surveyed so far, Singapore seems to be trying to balance two competing objectives: attracting technology investment by minimising regulation, while confirming its credentials as a responsible regime so it can influence emerging international standards. It has tackled both objectives by forging strong relationships and wielding soft influence at multiple levels. It works with international organisations (OECD, WEF), selected jurisdictions (US Dept of Commerce), official standards bodies (ISO/IEC) and the private sector. On the last point, in a very astute move Singapore established an Advisory Council on the Ethical Use of AI & Data in 2018, comprising representatives from the tech sector such as Google, Salesforce, Microsoft, IBM and Alibaba. The Council advises the Government “on ethical, policy and governance issues arising from the use of data-driven technologies in the private sector”, among other things.
Singapore is a very good example of how small jurisdictions can leverage relationships to ensure they have a voice in the international discussion on AI ethics.
Thanks for checking out our business articles. If you want to learn more, feel free to reach out to Red Marble AI. You can click on the "Let's Talk" button on our website or email Bronwyn, our AI Governance expert, at b.ross@redmarble.ai.
We appreciate your interest and look forward to sharing more with you!