UNITED STATES
Co‑Governance and the Future of AI Regulation
Advocates for co-governance: shared decision-making among government, industry, and communities, acknowledging AI's complexity and societal impact.
Emphasizes democratic values in AI governance: accessibility, transparency, and participation, fostering a more democratic approach to AI development.
Proposes institutional reforms like participatory standard-setting, community input, and adaptive governance structures for AI, moving beyond 'one-size-fits-all' rules.
Source: Harvard Law Review (2025), "Co-Governance and the Future of AI Regulation"
How Generative AI Turns Copyright Law Upside Down — Columbia STLR (2024)
Source: Columbia Science & Technology Law Review (2024)
CRS Legal Sidebar: Generative AI & Copyright (2025 update, LSB10922)
Main Copyright Issues
Provides Congress with a neutral map of the key copyright issues: (1) whether AI‑generated outputs can be copyrighted, (2) whether training on copyrighted works infringes, and (3) when AI outputs themselves might infringe others' works.
Human Authorship & US Law
Explains current U.S. law on human authorship (Copyright Office guidance and cases like Thaler v. Perlmutter), concluding that works created without sufficient human creative input are generally not copyrightable, though human‑edited or curated AI outputs may be.
Fair Use & Policy Options
On training, it lays out arguments for and against fair use, referencing search‑engine case law and ongoing litigation, and notes policy options for Congress: amend the Copyright Act to clarify AI issues, or take a “wait‑and‑see” approach while courts and agencies develop doctrine.
Source: Congressional Research Service, Legal Sidebar LSB10922 (2025)
The Ethics and Challenges of Legal Personhood for AI — Yale Law Journal Forum (2024)
Historical Precedent
Reviews how U.S. law has extended “personhood” to non‑human entities (corporations, associations, some natural resources) and asks whether similar reasoning could ever justify rights for AI systems.
Practical Obstacles
Explores doctrinal pathways (e.g., equal protection and due process) but stresses practical obstacles: standing, representation (guardians, corporate wrappers, or organizational standing for AI), and the difficulty of tying rights to non‑sentient or opaque systems.
Ethical Focus & Unlikelihood
Concludes that while constitutional language about “persons” is flexible, current Supreme Court trends (e.g., Dobbs v. Jackson Women's Health Organization) make expansions of personhood unlikely; ethically, the author urges focusing on human accountability and corporate responsibility rather than granting AI its own legal personhood.
Source: Yale Law Journal Forum (2024)

ABA Business Law Today: Recent Developments in AI Cases & Legislation (2025)

Annual Update on AI Litigation & Legislation
Provides a practitioner‑oriented annual update on major U.S. AI lawsuits (especially generative‑AI copyright and privacy cases), enforcement actions, and state/federal legislative initiatives.
Tracking Emerging AI Trends & Regulations
Tracks trends such as state‑level AI and automated‑decision‑making bills; sector‑specific rules (employment, consumer protection, financial services); and the interaction of new state laws with the White House AI Executive Order and existing FTC/CFPB enforcement.
Practical Takeaways for Business Lawyers
Offers practical takeaways for business lawyers, e.g., the importance of AI governance programs, documentation, model‑risk management, and updated contracting and allocation of liability with AI vendors and customers.
Source: American Bar Association, Business Law Today (2025)

FRANCE

Artificial Intelligence in the French Law of 2024 — Alain Duflot (2024)

Surveys the French and EU regulatory landscape for AI, with a focus on the justice sector (predictive‑justice tools, DataJust, judicial open‑data reforms) and France's national AI strategy.
Uses the 2024 Olympic and Paralympic Games law (Law 2023‑380) and its “augmented video‑protection” experiments as a case study of AI‑driven surveillance and its implications for privacy and fundamental rights.
Argues that while the EU AI Act and the proposed AI Liability Directive create a common European framework, French law will still need to address gaps in liability, IP, and fundamental‑rights protection, especially as AI use in courts and administration intensifies.
Source: Alain Duflot (2024)

CNIL: Q&A on Entry‑into‑Force of the EU AI Act (2024)
Phased Implementation
The AI Act (RIA, in its French acronym) entered into force on 1 August 2024 and applies in phases: bans on “unacceptable‑risk” systems from 2 February 2025, GPAI rules and national‑authority designations from 2 August 2025, and most high‑risk obligations from 2 August 2026, with some extended to 2 August 2027.
New Governance Architecture
A new structure includes an EU‑level AI Board and AI Office, alongside national market‑surveillance authorities. In France, CNIL anticipates retaining its GDPR enforcement role while also engaging in AI supervision where personal data is involved.
Integrated Approach to Compliance
CNIL emphasizes an integrated approach, utilizing AI‑Act requirements (risk management, documentation, and bans on certain biometric and scraping practices) to facilitate simultaneous compliance with both GDPR and the AI Act. This signals continued scrutiny of practices such as large‑scale facial‑recognition scraping.
Source: CNIL (Commission Nationale de l'Informatique et des Libertés), 2024
Conseil d’État (2022): Intelligence artificielle et action publique
Core Principles of Trustworthy Public-Sector AI
Sets out the core doctrine of “trustworthy public‑sector AI”: when public authorities use algorithms/AI, they must preserve legality, transparency, and equality before the law, and maintain meaningful human oversight of administrative decisions.
Algorithmic Impact Assessments & Transparency
Recommends systematic impact assessments, documentation of algorithms, and mechanisms enabling citizens to understand and, where appropriate, challenge AI‑assisted decisions; insists strongly on open data and explainability for public‑sector tools.
Opportunities & Rights Protection
Frames AI in public administration as an opportunity to improve performance and service quality, but only if combined with governance measures that build public trust and protect fundamental rights.
Source: Conseil d'État (2022), Intelligence artificielle et action publique
Défenseur des droits (2024/2025): Algorithms, AI systems and public services
Risks of Opacity and Automation Bias
The 2024 report examines the growing use of algorithms and AI in French public services (benefits allocation, tax, education platforms like Parcoursup), highlighting risks of opacity and “automation bias” where officials rubber‑stamp system outputs.
Real Human Intervention
Emphasises the legal requirement that “partially automated” administrative decisions still include real human intervention—an active, informed review of the AI output, not a passive confirmation—and finds that this is often lacking in practice.
Stricter Procedural Safeguards
Calls for stricter procedural safeguards: clearer transparency duties toward users, criteria for the required human involvement, and stronger protections against automated discrimination, building on earlier reports on biometrics and algorithmic discrimination.
Source: Défenseur des droits (2024/2025)
"Artificial intelligence & public services: a doctrine for use?” (2025)
AI Use Case Classification
Reflects on how, following the arrival of ChatGPT‑style systems, French administrations should develop a “doctrine d'usage” for AI in public services, distinguishing low‑risk uses (e.g., drafting aids, summarisation) from sensitive uses that affect rights or entitlements.
Core Principles for Rights‑Affecting AI
Synthesises lessons from CNIL, the Défenseur des droits and practical pilots: reiterates that human oversight, transparency, and explainability are non‑negotiable where decisions impact individuals.
Centralized Coordination & Security
Encourages central coordination and shared public‑sector tools, avoiding fragmented, duplicative development of custom models in each ministry, while still ensuring data protection and security.
Source: Artificial intelligence & public services: a doctrine for use? (2025)
UNITED KINGDOM
UK Parliament POST Note: AI ethics, governance & regulation (2024)
UK AI Ecosystem Mapping
Maps the UK AI ecosystem: economic potential, uses in employment, public services and critical infrastructure, and major ethical risks (bias and discrimination, opacity, privacy, disinformation, cyber‑security and catastrophic‑risk concerns).
"Pro‑innovation" Framework
Explains the government’s “pro‑innovation” framework: a largely sector‑led, principle‑based approach that leans on existing regulators rather than a single AI act, while noting pressure from the EU AI Act and global developments.
Policy Options & Parliamentary Questions
Sets out policy options and “questions for Parliament”: whether to legislate for rights to human intervention, algorithmic impact assessments, transparency and certification schemes, and how to allocate liability among developers, deployers and users.
Source: UK Parliament POST (Parliamentary Office of Science and Technology) Note (2024)
White & Case: UK AI regulatory tracker (AI Watch: UK, 2025)
Summarises UK rules “directly regulating AI” (if/when a dedicated AI Regulation Bill appears) and “other laws affecting AI”, such as data‑protection law, the Online Safety Act, financial‑services rules, competition law and equality/human‑rights frameworks.
Highlights the UK’s incremental strategy: non‑statutory cross‑cutting principles, guidance from regulators, and forthcoming legislative proposals flagged in the 2024 King's Speech, alongside the creation and early work of the AI Safety Institute.
Offers practical guidance for businesses on territorial scope (when UK rules catch foreign providers), interplay with the EU AI Act for UK‑based operations, and the need to monitor rapid policy evolution across sectors.
Source: White & Case, AI Watch: UK (2025)
IJLIT (OUP): Risks, innovation & adaptability in the UK’s incrementalism (2024, Asress Gikay)
UK vs. EU AI Strategies
Compares the UK’s “incrementalist” AI strategy, which is flexible and innovation-friendly but less predictable and less protective of fundamental rights, with the EU’s comprehensive AI Act.
Accountability & Redress Gaps
Finds that relying on dispersed regulators and non-binding principles can leave gaps in accountability and citizen redress, particularly for high-risk AI in public services and employment.
Strengthening the UK Framework
Concludes that the UK must strengthen its framework—with clearer risk-classification, mandatory safeguards, and better resourcing of regulators—to achieve credibility and exportability, while avoiding wholesale copying of the AI Act.
Source: Asress Gikay, International Journal of Law and Information Technology (Oxford University Press, 2024)
AI, the Rule of Law & Public Administration (tax case study) – Cambridge‑linked research (2024)
Shift to Opaque AI Decisions
Examines how deploying AI and machine learning in HMRC’s automated decision‑making (ADM) could shift who is really making tax decisions—from human officers to opaque systems—and what that means for legality, discretion and accountability.
Advocating AI Tax Legal Safeguards
Argues for explicit legislative authorization for AI in tax ADM, limits on where AI can be used, rules on training data and bias, mandatory testing and auditing, and clear rights for taxpayers to be informed and to receive explanations.
Rule-of-Law Values & Sector Safeguards
Stresses that rule‑of‑law values (foreseeability, reason‑giving, effective remedies) require sector‑specific safeguards, not just general AI or data‑protection principles.
Source: Cambridge-linked research (2024)
House of Lords Library Briefing (2024): Algorithmic & Automated Decision‑Making Bill
Imposing Cross-Cutting Duties
Describes a private member’s bill that would impose cross‑cutting duties on public bodies (and contractors) using algorithmic or automated decision‑making: impact assessments, logging and documentation, transparency to affected persons, and audit obligations.
Addressing "Black-Box" Concerns
Emphasises parliament’s concerns about “black‑box” decisions in welfare, policing and other services, and how the bill would strengthen parliamentary and judicial scrutiny of such systems.
Integrating with Existing Frameworks
Notes that the bill interacts with existing frameworks (Data Protection Act 2018, equality law, judicial‑review principles) by making transparency and contestability more explicit, rather than creating an entirely separate AI regime.
Source: House of Lords Library Briefing (2024)
MEXICO
“Inteligencia artificial y derecho en México. Un dilema” — Revista (2024)
AI & Law Panorama
Offers a broad panorama of how Mexican civil and criminal law currently intersect with AI, highlighting autonomous vehicles as a key case study for fault, risk, and product liability.
Complex Liability
Argues that traditional theories of culpa and daño struggle with complex AI systems where multiple actors—developer, manufacturer, owner, operator—shape behaviour, making attribution of responsibility difficult.
Modernization Dilemma
Concludes that Mexico faces a “dilemma”: it must modernise liability, evidence and procedural rules for AI harms without stifling innovation, and should look to comparative models (EU AI Act, product‑liability reforms) when doing so.
Source: Revista (2024)
SciELO México (2025): Regulación de la IA: Desafíos para los derechos humanos
Human Rights & AI Analysis
A qualitative, rights‑based analysis of how AI both threatens and supports human rights in Mexico, focusing on privacy, non‑discrimination, access to information, and due process.
Incomplete Legal Framework
Reviews international standards and Mexican legislation, concluding that Mexico lacks a cohesive legal framework that puts human rights at the centre of AI regulation.
Human‑Centred Regulatory Strategy
Recommends a comprehensive, human‑centred AI regulatory strategy, including impact assessments, explicit prohibitions, and strong data‑protection enforcement.
Source: SciELO México (2025)
UNAM — Revista Derecho de la Información (2025): Protección de datos e IA en contratación inteligente
Interaction of Smart Contracts & Data Protection
Analyses how smart contracts and AI‑driven contracting interact with data‑protection rules, especially where personal data is written onto immutable ledgers or processed by autonomous agents.
Immutability & Responsibility Challenges
Highlights tensions between blockchain‑based “immutability” and rights like rectification and erasure, and warns that automated, AI‑assisted contracting can obscure who is the data controller or processor for compliance purposes.
Recommendations for Legal Adaptation
Suggests adapting Mexican and comparative data‑protection law via privacy‑by‑design requirements, contractual allocation of responsibilities, and technical solutions (off‑chain storage, encryption, pseudonymisation) to reconcile smart‑contract innovation with data‑subject rights (see the sketch below).
Source: UNAM, Revista Derecho de la Información (2025)
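To make the off‑chain pattern concrete, here is a minimal, illustrative Python sketch; it is not drawn from the UNAM article, and the in‑memory store, ledger list, and sample identity string are hypothetical stand‑ins. Only a salted hash is committed to the immutable ledger, while the personal data and the salt live in deletable off‑chain storage, so an erasure request can be honoured by deleting the off‑chain record and its salt (a "crypto‑shredding" approach), leaving the on‑chain entry unlinkable.

```python
import hashlib
import secrets

# Hypothetical stand-ins: a deletable off-chain store (e.g. a database)
# and an append-only list standing in for an immutable ledger.
off_chain_store = {}
ledger = []

def record_party(personal_data: str) -> str:
    """Commit only a pseudonymous reference on-chain; keep data off-chain."""
    salt = secrets.token_hex(16)
    commitment = hashlib.sha256((salt + personal_data).encode()).hexdigest()
    off_chain_store[commitment] = {"data": personal_data, "salt": salt}
    ledger.append(commitment)  # the immutable entry holds no personal data
    return commitment

def erase_party(commitment: str) -> None:
    """Honour an erasure request by deleting both the data and the salt."""
    off_chain_store.pop(commitment, None)

ref = record_party("Jane Doe, CURP XXXX000000XXXXXX00")  # illustrative data
erase_party(ref)  # the ledger is unchanged, but the personal data is gone
```

The design choice worth noting is that erasure deletes the salt along with the data: without the random salt, the surviving on‑chain hash cannot be matched against guessed identities, which is how this pattern squares ledger immutability with rectification and erasure rights.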
Universidad Veracruzana (2025): La inteligencia artificial y su regulación jurídica en México
Regulatory Gap Analysis
Finds that Mexico's current laws (constitutional guarantees, data protection, consumer protection, fintech, telecommunications) are fragmented and do not address AI's systemic risks.
Risk of External Dependence
Warns that, unlike jurisdictions with dedicated AI laws (the EU, parts of Latin America), Mexico risks depending on foreign regulatory frameworks if it does not develop its own regime.
Proposal for a Coherent Legal Framework
Proposes a general AI law that defines AI, classifies risks, establishes supervisory authorities, and imposes transparency and accountability obligations, harmonised with sectoral regulation.
Source: Universidad Veracruzana (2025)
REDILAT (2025): El servicio de la IA en el sistema legal mexicano
Experimentation & Ethical Challenges
Examines early experimentation with AI by the judiciary and the wider legal system (research assistants, document drafting, case‑law search tools) and the ethical questions around independence, bias, and unequal access between wealthy and poor litigants.
AI as a Support Tool Under Judicial Oversight
Argues that AI in the courts must be framed strictly as a support tool: judges must retain responsibility for the reasoning, and parties must be able to know when AI was used and to challenge its outputs where they affect case outcomes.
Regulatory Options & Transparency
Outlines regulatory options, ranging from soft‑law ethical guidelines to binding rules on algorithmic transparency, mandatory training for legal professionals, and audits of AI tools used in adjudication.
Source: REDILAT (Red Iberoamericana de Derecho y Tecnología, 2025)
COMPARING THE STUDIES
A Cross-Jurisdictional Legal Review
INDEX KEY
Type
A = academic
G = government/independent authority
P = practitioner/commentary
Level
N = national
Supran. = EU‑level focus