30 March 2026

Vietnam’s new Law on Artificial Intelligence (“Law”) came into effect on 1 March 2026. Passed by the National Assembly on 10 December 2025, the Law is the country’s first comprehensive, standalone legal framework regulating artificial intelligence (“AI”). It complements the Law on Digital Technology Industry No. 71/2025/QH15 (“DTI Law”) by introducing a dedicated, risk-based regulatory framework for AI systems, while the AI-related provisions of the DTI Law continue to apply alongside it. For more on the DTI Law, please see our article “Vietnam establishes digital technology framework and launches crypto asset market pilot program”.

The Law sets out Vietnam’s policy on AI activities, noting, among other things, the intent to develop AI into a key driver of growth, innovation, and sustainable development.

This article provides an overview of the Law’s key provisions.

Scope

The Law applies to Vietnamese agencies, organisations, and individuals, as well as foreign organisations and individuals participating in the research, development, provision, deployment, and use of AI systems (“AI activities”) in Vietnam. AI activities that serve only the purposes of national defence, security, and cryptography are not within the scope of this Law.

The Law regulates AI activities, the rights and obligations of relevant organisations and individuals, and state management of AI activities in Vietnam.

Responsibilities in using AI systems

The Law defines an “AI system” as a machine-based system designed to perform AI capabilities with varying degrees of autonomy and capable of self-adaptation after deployment; based on clearly defined or implicitly formed objectives, the system infers from input data to produce outputs such as predictions, content, recommendations, or decisions that can influence the physical or digital environment.

Where a serious malfunction occurs in an AI system, the following responsibilities apply:

  • Developers, being organisations or individuals that design, build, train, test, or fine-tune all or part of an AI model, algorithm, or system and exercise direct control over technical methods, training data, or model parameters, must promptly implement technical measures to rectify, suspend, or recall the AI system and simultaneously notify the competent authority.
  • Suppliers, being organisations or individuals that bring an AI system to the market or put it into use under their own name, brand, or trademark, regardless of whether the system was developed by them or by a third party (“Suppliers”), must promptly implement appropriate technical measures to rectify, suspend, or recall the AI system and notify the competent authority.
  • The implementing party (“IP”) and users must promptly record and report the incident through a centralised, one-stop electronic AI portal (“Portal”) and cooperate with the relevant parties in the incident resolution process. An IP is an organisation or individual that utilises AI systems under its control in professional, commercial, or service-providing activities (excluding personal or non-commercial use), and users are defined in the Law as organisations or individuals who directly interact with the AI system or use its output (“Users”).

The competent state management authority is responsible for receiving, verifying, and guiding the handling of incidents and may, where necessary, require the temporary suspension, withdrawal, or reassessment of the AI system. Incident reporting and resolution are carried out through the Portal.

Transparency

The Law requires the Supplier of an AI system to:

  • warrant that an AI system interacting directly with humans is designed and operated in a way that allows users to be aware that they are interacting with the system, unless otherwise provided by law; and
  • guarantee that audio, visual, and video content generated by the AI system is tagged in a machine-readable format as prescribed by the Government.

The IP is required to:

  • clearly notify the public when providing text, audio, images, or videos created or edited using AI systems if such content is likely to cause confusion regarding the authenticity of events or individuals, unless otherwise provided by law; and
  • ensure that audio, images, and videos created or edited using AI systems to simulate or mimic the appearance and voice of real people or to recreate real events are clearly labelled to distinguish them from real content.

Risk-based framework

The Law introduces a three-tier, risk-based classification framework for AI systems, aligning Vietnam with leading global regulatory models such as the European Union’s AI Act.

AI systems must be classified prior to deployment according to their potential impact and risk profile, with corresponding regulatory obligations. The risk classification assigned by a Supplier is inherited by the purchaser, who is responsible for ensuring the system’s safety and integrity during use.

Where the AI system is modified, the purchaser/owner is required to consult the Supplier to assess whether the original risk classification remains accurate. Suppliers must notify the Ministry of Science and Technology of the classification results through the Portal before putting the AI system into use.

AI systems classified as high-risk must undergo conformity assessment (a process confirming that the system meets applicable requirements) before deployment and upon any significant changes during operation.

The responsibilities attaching to each risk category are set out below.

High-risk: AI systems that may cause significant harm to life, health, the legitimate rights and interests of organisations and individuals, national interests, public interests, or national security.

  • Suppliers must comply with ongoing obligations, including maintaining risk management measures, technical documentation, and operational logs, and providing relevant authorities with the information necessary for inspection and audit purposes.
  • Parties deploying high-risk AI systems must operate and monitor the system within its classified scope and risk level, ensure data security and confidentiality, prevent unauthorised interference, and maintain compliance with applicable technical standards and AI regulations.
  • Foreign Suppliers of high-risk AI systems in Vietnam must maintain a legal point of contact in Vietnam. Where the system is also subject to mandatory conformity certification prior to use, the Supplier must additionally establish a commercial presence or appoint an authorised representative in Vietnam.

Medium-risk: AI systems that have the potential to confuse, influence, or manipulate users because users are unable to recognise that they are interacting with an AI system or AI-generated content.

  • Suppliers must provide, upon request by competent authorities, explanations of the intended use, functional operation, key input data, and risk management and security measures (without disclosing source code or trade secrets).
  • IPs are accountable for system operation, risk control, incident handling, and the protection of legitimate rights and interests during inspections, audits, or incident investigations.
  • Users must comply with applicable notification and labelling requirements.

Low-risk: AI systems that do not fall within the high- or medium-risk categories; these are subject to minimal regulatory intervention.

  • Suppliers must provide explanations to competent authorities where there are indications of legal violations or adverse impacts on the legitimate rights and interests of organisations or individuals.
  • IPs must likewise provide explanations in such circumstances.
  • Users may use the system for lawful purposes and are solely responsible for their use.

AI infrastructure and regulatory architecture

To implement the Law, Vietnam will establish the Portal and a National Database on Artificial Intelligence Systems. The Law also provides for an AI sandbox.

AI sandbox

The Law establishes a controlled testing mechanism for AI systems, implemented in accordance with science and technology regulations. Results of controlled testing may form the basis for recognition of conformity assessments and, in certain cases, exemption, reduction, or adjustment of compliance obligations. Competent authorities may suspend or terminate testing where safety, security, or rights-related risks arise.

AI portal

The Portal will function as the central interface for regulatory engagement, including the registration of controlled testing, submission of AI system classifications, reporting of serious incidents and periodic updates, and public disclosure of conformity assessment results and enforcement actions. It will also serve as a platform to connect support programmes, funding mechanisms, infrastructure, and shared data resources.

National database and infrastructure

In parallel, a centrally managed National Database on Artificial Intelligence Systems will support regulatory oversight, monitoring, and public transparency. The Government will issue detailed regulations governing the operation, management, and access mechanisms for both systems.

The Law further provides for the development of national AI infrastructure, including computing capacity, shared data resources, and testing and model platforms. AI infrastructure developed by the State, enterprises, and social organisations must be interconnected and operated in accordance with applicable safety, security, and data protection requirements. Notably, AI applications designated as “important” in essential sectors must be deployed on the national AI infrastructure to ensure safety, security, and effective state oversight. The Prime Minister will issue and periodically update a National Strategy on Artificial Intelligence, which ministries and local authorities must integrate into their development planning.

Prohibited acts

The Law expressly prohibits the misuse of AI systems, including exploiting or hijacking AI systems to commit unlawful acts or infringe the legitimate rights and interests of organisations and individuals. It also prohibits the development, deployment, or use of AI systems for deceptive manipulation, exploitation of vulnerable groups, dissemination of false content that seriously threatens national security or public order, unlawful data use, obstruction of required human oversight mechanisms, and concealment or falsification of mandatory disclosures or labelling.

These prohibitions apply irrespective of the risk classification of the AI system.

Inspections and enforcement

The Law provides that the inspection of AI activities shall be conducted in accordance with the Law on Inspection 2025 (No. 84/2025/QH15), with designated state management authorities responsible for overseeing compliance by organisations and individuals engaged in AI activities.

During inspections or audits, relevant organisations and individuals must provide technical documentation, operational logs, training data, and other necessary information to enable the investigation of violations or incidents and the allocation of responsibility, subject to applicable laws on state secrets, data protection, personal data, and intellectual property.

Handling of violations and civil liability

The Law provides for a range of penalties for breaches of its provisions, ranging from administrative sanctions to criminal liability.

Where a high-risk AI system has been managed, operated, and used in compliance with applicable regulations but nevertheless causes damage, the IP is responsible for compensating the affected party. Following compensation, the IP may seek reimbursement from the appropriate party. Liability for compensation will not arise where the damage is wholly attributable to the intentional fault of the injured party or where the damage occurs due to force majeure or an emergency situation, unless otherwise provided by law.

Where an AI system is unlawfully accessed, controlled, or interfered with by a third party, that third party will be liable for the resulting damage. If the IP or Supplier is at fault in permitting such unlawful access or interference, they will be jointly liable.

The Government will issue detailed regulations on administrative penalties relating to violations involving AI systems.

Ethics and impact assessment

The Law provides for the issuance of a National Artificial Intelligence Ethics Framework, which will guide the safe, transparent, and responsible development and deployment of AI systems.

In addition, organisations operating high-risk AI systems in state management or public service contexts must prepare impact assessment reports addressing risk identification, mitigation measures, and mechanisms for human oversight. These reports must be made public, subject to confidentiality protections.

Implementation

AI systems operational before the Law’s commencement date of 1 March 2026 benefit from the following grace periods, during which they may continue to operate:

  • Healthcare, education, and finance: 18 months from the effective date
  • All other fields: 12 months from the effective date

However, where the relevant authority determines that an AI system poses a serious risk of harm, it may order the temporary suspension or termination of the system, irrespective of whether the system falls within the grace periods stated above.
