
🌍 A 2025 Global Snapshot of AI Legislation & Regulation


If you are finding it challenging to stay on top of current AI legislation and regulation, here is a global snapshot of AI laws and near-laws worldwide, plus what they mean for you right now.


🌍 GLOBAL 

• Council of Europe AI Convention. First binding treaty on AI and human rights, open for signature since 5 September 2024. The EU, US and UK signed. It applies to public and private actors once ratified by each country. (Portal) 

• United Nations. The UN General Assembly adopted a consensus resolution on safe, secure and trustworthy AI on 21 March 2024. It guides national policy but is not binding. (Reuters) 

• OECD AI Principles. Updated in May 2024, now cover general purpose and generative AI. Useful as a policy and compliance baseline. (OECD)


🇪🇺 EUROPEAN UNION 

• EU AI Act. In force since 1 August 2024. Staggered application: bans and AI literacy duties apply from 2 February 2025, general-purpose model rules from 2 August 2025, most high-risk system duties from 2 August 2026, with some product-embedded systems out to 2 August 2027. Final text is in the Official Journal. (Digital Strategy)


🇬🇧 UNITED KINGDOM 

• Government signalled legislation to make developer safety commitments binding and to formalise the AI Safety Institute’s role. Policy documents set the agenda while the bill is prepared. The UK also signed the Council of Europe convention. (White & Case)


🇺🇸 UNITED STATES 

• No single federal AI statute. Policy is anchored in NIST’s AI Risk Management Framework, plus sector laws and enforcement. The 2023 Executive Order 14110 drove many agency actions, then the 2025 “America’s AI Action Plan” shifted federal posture and rescinded 14110. Check agency requirements you fall under. (NIST Publications) 

• State laws. Colorado’s AI Act (SB-205) is the first comprehensive state AI law. It targets high-risk systems and takes effect on 30 June 2026 after a 2025 delay. Expect more states to follow. (Future of Privacy Forum)


🇨🇳 CHINA 

• Binding rules already in place. Generative AI Interim Measures and Deep Synthesis rules set provider duties on safety, data, and labelling. Enforcement is administrative and active. (White & Case)


🇨🇦 CANADA 

• AIDA did not pass in 2025. There is no federal AI act at present. Government guidance and a voluntary code fill the gap for now. (Montreal AI Ethics Institute)


🇧🇷 BRAZIL 

• PL 2338/2023. Senate approved on 10 December 2024. The Chamber of Deputies is still considering it in 2025. Scope and duties are likely to echo the EU’s risk-based model. (Artificial Intelligence Act)


🇯🇵 JAPAN 

• Soft-law first. Government AI Guidelines for Business and the Japan AI Safety Institute’s evaluation guide were both updated in March 2025. Expect guidance-led oversight rather than a single statute. (International Bar Association)


🇸🇬 SINGAPORE 

• Governance through standards and testing. Model AI Governance Framework for Generative AI released May 2024. AI Verify and a global assurance pilot in 2025 operationalise testing and assurance. (AI Verify Foundation)


🇦🇺 AUSTRALIA 

• Mandatory guardrails coming for high-risk AI. Government consulted in 2024 and continued work in 2025. Final rules will set baseline duties for developers and deployers in high-risk settings. (Consult Industry)


🇮🇳 INDIA 

• No standalone AI law yet. The 2023 Digital Personal Data Protection Act governs data and now has 2025 rules in progress, which shape AI deployment. (Press Information Bureau)


🇦🇪 UAE AND 🇸🇦 SAUDI ARABIA 

• UAE. Active policy and sectoral guidance rather than a cross-sector AI act, plus an AI regulatory intelligence platform approved in April 2025. Check your sector regulator. (Latham & Watkins) 

• Saudi Arabia. National AI strategy and adoption framework through SDAIA guide practice and procurement. No overarching AI statute yet. (HSF Kramer)


👍 HOW TO STAY RIGHT-SIDE-UP ACROSS JURISDICTIONS


  1. Map uses and risks. Classify where your systems fall under EU risk tiers, Colorado “high-risk,” or China’s generative AI scope. Keep a live register per system. (Digital Strategy)

  2. Do impact assessments. For high-risk or sensitive uses, run pre-deployment and periodic assessments, with bias testing and red-teaming evidence retained. Align methods to NIST AI RMF to keep work reusable. (NIST Publications)

  3. Prove provenance and safety. Keep model and dataset lineage, training disclosures required in the EU for general purpose models, and deep synthesis labelling where applicable in China. (Digital Strategy)

  4. Be transparent with users. Provide clear notices when people interact with AI, opt-outs where required, and meaningful explanations for high-impact decisions. This is expected under the EU Act, Colorado law and most soft-law codes. (Digital Strategy)
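Step 1's "live register per system" can be as simple as a structured record per AI system, tracking where it falls under each regime. The sketch below is purely illustrative: the class name, field names, and risk labels are assumptions for this example, not terms defined by any of the statutes above.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical per-system register entry (step 1). Field names and
# risk labels are illustrative, not taken from any statute.

@dataclass
class AISystemRecord:
    name: str
    use_case: str
    eu_ai_act_tier: str          # e.g. "prohibited", "high-risk", "limited", "minimal"
    colorado_high_risk: bool     # consequential-decision system under Colorado's AI Act
    china_genai_in_scope: bool   # public-facing generative service under the Interim Measures
    last_assessed: date
    notes: list[str] = field(default_factory=list)

    def needs_impact_assessment(self) -> bool:
        # Flags systems that step 2 (impact assessments) should cover.
        return self.eu_ai_act_tier == "high-risk" or self.colorado_high_risk


register = [
    AISystemRecord(
        name="cv-screening-v2",
        use_case="Ranks job applicants",
        eu_ai_act_tier="high-risk",   # employment is a high-risk area under the EU Act
        colorado_high_risk=True,
        china_genai_in_scope=False,
        last_assessed=date(2025, 6, 1),
    ),
]

flagged = [r.name for r in register if r.needs_impact_assessment()]
print(flagged)  # → ['cv-screening-v2']
```

Keeping the register in code or a queryable store, rather than a spreadsheet, makes it easy to re-run the risk queries in steps 2–4 whenever a jurisdiction's rules change.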


Watch the treaty effect. If you operate in the EU, UK, or US public sector supply chain, expect human rights impact checks tied to the Council of Europe AI Convention once ratified domestically. (Portal)


 
 
 
