30 January 2024

Between 16 January 2024 and 15 March 2024, the Info-communications Media Development Authority (“IMDA”) is conducting a public consultation on a proposed framework (“proposed framework”) to govern generative artificial intelligence (“AI”). Developed by IMDA and its wholly owned, not-for-profit subsidiary, the AI Verify Foundation, the proposed framework, titled “Model AI Governance Framework for Generative AI”, expands on the existing Model AI Governance Framework, which covers traditional AI.

Singapore released the first version of the Model AI Governance Framework in 2019 and subsequently updated it in 2020. The recent advent of generative AI has reinforced some of the same AI risks (e.g. bias, misuse, lack of explainability) and introduced new ones (e.g. hallucination, copyright infringement, value alignment), creating a need to update the earlier model governance framework.

The proposed framework seeks to establish a systematic and balanced approach to address generative AI concerns while continuing to facilitate innovation.

Traditional AI and generative AI

The term traditional AI refers to AI models that make predictions by leveraging insights derived from historical data. Typical traditional AI models include logistic regression, decision trees, and conditional random fields. Generative AI refers to AI models capable of generating text, images, or other media; such models learn the patterns and structure of their training data and produce new data with similar characteristics.
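To make the distinction concrete, the minimal sketch below (our illustration, not part of the proposed framework) contrasts a toy predictive classifier with a toy generative text model; the data, threshold rule, and function names are hypothetical and chosen purely for exposition.

```python
import random
from collections import defaultdict

# --- Traditional (predictive) AI: learn a decision rule from historical data ---
# Toy one-feature classifier (hypothetical data): flag a message as spam when
# its exclamation-mark count meets a threshold derived from past examples.
history = [("win big!!!", 1), ("meeting at 3pm", 0), ("free cash!!", 1), ("lunch?", 0)]
spam_counts = [msg.count("!") for msg, label in history if label == 1]
threshold = sum(spam_counts) / len(spam_counts)  # a crude "fit" to the history

def predict(msg: str) -> int:
    return int(msg.count("!") >= threshold)

# --- Generative AI: learn the structure of the input, then produce new data ---
# Toy character-level Markov chain: after "training", it emits novel text that
# mimics the statistical patterns of its training corpus.
corpus = "the model learns the patterns of its data and produces similar data"
chain = defaultdict(list)
for current_char, next_char in zip(corpus, corpus[1:]):
    chain[current_char].append(next_char)

def generate(seed: str = "t", length: int = 40) -> str:
    out = [seed]
    for _ in range(length):
        out.append(random.choice(chain.get(out[-1], [" "])))
    return "".join(out)

print(predict("you won!!!"))  # 1: a prediction about an existing input
print(generate())             # novel text resembling the training corpus
```

The predictive model only maps inputs to labels learned from history, whereas the generative model produces new content shaped by its training data, which is precisely why risks such as hallucination and copyright infringement arise for the latter.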

Nine dimensions of the proposed framework

The proposed framework identifies nine dimensions to support a comprehensive and trusted AI ecosystem. Set out below is a snapshot of the nine dimensions:

  • Accountability: Putting in place the right incentive structure for different players (such as model developers, application deployers, and cloud service providers) in the AI system development life cycle to be responsible to end-users.
  • Data: Ensuring data quality (e.g. through using trusted data sources) and addressing potentially contentious training data (e.g. personal data and copyright material) in a pragmatic way.
  • Trusted development and deployment: Enhancing transparency around baseline safety and hygiene measures based on industry best practices in development, evaluation, and disclosure.
  • Incident reporting: Implementing an incident management system for timely notification, remediation, and continuous improvements.
  • Testing and assurance: Providing external validation and added trust through third-party testing and developing common AI testing standards to ensure quality and consistency.
  • Security: Adapting existing frameworks for information security and developing new testing tools to address new threat vectors that arise through generative AI models.
  • Content provenance: Exploring technical solutions such as digital watermarking and cryptographic provenance to provide transparency about where content comes from (a simplified illustration of cryptographic provenance appears after this list).
  • Safety and alignment research & development (R&D): Accelerating R&D through global cooperation among AI safety institutes to improve model alignment with human intention and values.
  • AI for public good: Responsible AI includes harnessing AI to benefit the public by democratising access to AI, improving public sector adoption, upskilling workers, and developing AI systems sustainably.
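By way of illustration only, the sketch below shows one simple notion of cryptographic provenance: content is tagged with a keyed digest so that a recipient can verify it has not been altered since publication. The key, function names, and scheme are our own hypothetical simplification; production provenance standards (such as C2PA) rely on public-key signatures and richer metadata rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical shared secret, for illustration only; real schemes would use
# public-key signatures so that verification needs no secret material.
PUBLISHER_KEY = b"demo-secret-key"

def tag(content: bytes) -> str:
    """Produce a provenance tag: a keyed digest bound to the exact content."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, provenance_tag: str) -> bool:
    """Check that the content still matches the tag issued at publication."""
    return hmac.compare_digest(tag(content), provenance_tag)

article = b"AI-generated summary of the consultation"
t = tag(article)
print(verify(article, t))                 # True: content is as published
print(verify(article + b" (edited)", t))  # False: tampering is detectable
```

Even in this simplified form, the mechanism conveys the policy goal of the content provenance dimension: giving end-users a technical means to establish where a piece of content originated and whether it has been modified.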

IMDA welcomes views on the proposed framework from the international community. The consultation closes on 15 March 2024, and IMDA aims to finalise the proposed framework in mid-2024.

Reference materials

The following materials are available on the IMDA website www.imda.gov.sg: