POLICIES
UNITED STATES
NIST AI Risk Management Framework (AI RMF 1.0)
What it is
A voluntary, sector-agnostic framework (January 2023) for managing AI risks across the lifecycle (NIST AI RMF 1.0, 2023). It is designed for both developers and deployers and is explicitly referenced in the Biden administration's AI Executive Order (EO 14110).
Core Focus
Primarily addresses risks to individuals and society, encompassing safety, civil rights, economic impact, and environmental concerns, rather than solely organizational risks.
Application
Utilized for establishing internal AI risk programs, ensuring compliance with US government expectations, and aligning with global "trustworthy AI" requirements like those from the EU and OECD.
Core Structure: Four Key Functions
Map
Identify and categorize AI risks.
Measure
Quantify and assess the identified risks.
Manage
Implement strategies to mitigate and respond to risks.
Govern
Establish oversight, accountability, and continuous improvement for risk management.
(NIST AI RMF 1.0, 2023)
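To make the four functions concrete, the sketch below shows how an internal risk register might tag each AI risk with the function that currently owns it. The schema, field names, and severity scale are illustrative assumptions; the RMF itself prescribes no data model or code.

```python
from dataclasses import dataclass, field
from enum import Enum

class RmfFunction(Enum):
    """The four AI RMF 1.0 functions."""
    MAP = "map"          # identify and categorize risks
    MEASURE = "measure"  # quantify and assess risks
    MANAGE = "manage"    # mitigate and respond to risks
    GOVERN = "govern"    # oversight, accountability, improvement

@dataclass
class RiskEntry:
    """One row in a hypothetical AI risk register (assumed schema)."""
    risk_id: str
    description: str
    stage: RmfFunction            # function that currently owns this risk
    severity: int                 # 1 (low) .. 5 (critical), an assumed scale
    owner: str                    # accountable person or team
    mitigations: list[str] = field(default_factory=list)

register = [
    RiskEntry(
        risk_id="R-001",
        description="Training data under-represents rural users",
        stage=RmfFunction.MEASURE,
        severity=4,
        owner="ml-governance-team",
        mitigations=["fairness testing on held-out rural cohort"],
    ),
]

# A Govern-style check: every high-severity risk must name mitigations.
for entry in register:
    if entry.severity >= 4 and not entry.mitigations:
        raise ValueError(f"{entry.risk_id}: high-severity risk lacks a mitigation plan")
```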
Characteristics of Trustworthy AI
  • Validity: Accuracy and appropriate performance.
  • Reliability: Consistent operation under varying conditions.
  • Safety: Protection from harm to users and society.
  • Security/Resilience: Ability to resist and recover from attacks.
  • Accountability: Clear responsibility for AI system outcomes.
  • Explainability: Transparency in decision-making processes.
  • Privacy: Protection of personal data and individual rights.
  • Fairness: Equitable treatment and avoidance of bias.
(NIST AI RMF 1.0, 2023)
The framework also encourages comprehensive documentation (assessments, impact analyses, data lineage) and multi-stakeholder engagement throughout the AI lifecycle. (NIST AI RMF 1.0, 2023)
NIST Generative AI Profile (2024)
What it is: A GenAI-specific “profile” that applies AI RMF concepts to generative models and applications, developed under EO 14110 (NIST Generative AI Profile, 2024).
1. GenAI-specific Harms
Identifies harms unique to generative AI: hallucinations, disinformation, synthetic CSAM, dual-use (bio/cyber), IP abuse, and privacy breaches.
2. Key Personas
Breaks down risks by specific roles: developers, deployers, cloud providers, evaluators, and policymakers.
3. Emphasis Areas
Stresses critical aspects such as content provenance, watermarking, model evaluation/red-teaming, access controls, and safety policies.
(NIST Generative AI Profile, 2024)
The profile also links to emerging AI Safety Institute testing work (NIST Generative AI Profile, 2024).
How it is Applied: Used for gap analysis of GenAI systems, particularly to demonstrate alignment with “frontier model” safety expectations to regulators and clients.
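To illustrate the profile's content-provenance emphasis, the sketch below signs generated output with a keyed hash so downstream consumers can verify origin and integrity. This is a toy scheme of our own, not C2PA or any mechanism NIST specifies; key management is assumed away.

```python
import hashlib, hmac, json, time

SIGNING_KEY = b"replace-with-managed-secret"  # assumed; real key management is out of scope

def provenance_record(model_id: str, output_text: str) -> dict:
    """Attach a verifiable provenance record to a generated output.

    Toy illustration of the 'content provenance' idea in the GenAI
    Profile; real deployments would use a standard such as C2PA.
    """
    payload = {
        "model_id": model_id,
        "generated_at": time.time(),
        "content_sha256": hashlib.sha256(output_text.encode()).hexdigest(),
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify(record: dict, output_text: str) -> bool:
    """Check both the signature and the content hash."""
    sig = record.pop("signature")
    body = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    record["signature"] = sig
    ok_sig = hmac.compare_digest(sig, expected)
    ok_hash = record["content_sha256"] == hashlib.sha256(output_text.encode()).hexdigest()
    return ok_sig and ok_hash
```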
Blueprint for an AI Bill of Rights (OSTP)
What it is: Non‑binding White House blueprint (2022) (OSTP Blueprint for an AI Bill of Rights, 2022) setting out five “rights‑like” principles for automated systems that affect the public.
Five Key Principles
1. Safe and effective systems
Pre‑deployment testing and ongoing monitoring.
2. Algorithmic discrimination protections
Proactive fairness testing and use of representative data.
3. Data privacy
Data minimization and heightened protections for sensitive data.
4. Notice and explanation
Disclosure when automation is used, alongside meaningful explanations.
5. Human alternatives and fallback
Timely, accessible opt‑outs or appeal mechanisms for users.
(OSTP Blueprint for an AI Bill of Rights, 2022)
Debates / Uptake
  • Critiqued for being guidance, not law, and for limited direct enforcement.
  • Still widely cited as a “rights‑preserving” benchmark and used as a design checklist in health, finance, and government pilots.
(OSTP Blueprint for an AI Bill of Rights, 2022)
U.S. Copyright Office: Generative‑AI Training Report (2025)
What it is: A policy study on the lawfulness of training GenAI models on copyrighted works without permission or payment, and on how copyright should treat AI‑generated outputs. (U.S. Copyright Office, 2025)
Core Tensions Explored
Training data
Whether large‑scale web scraping is fair use; impact on authors’ markets and moral rights.
Authorship
Reaffirms that copyright requires human authorship, so fully machine‑generated content is not protected, but human‑in‑the‑loop workflows may be.
Liability
Sketches models for responsibility (developer vs deployer) when outputs are infringing/derivative.
Policy options
Transparency obligations about training data, opt-out/collective licensing schemes, or new remuneration rights.
(U.S. Copyright Office, 2025)
How it is Applied: Understanding where US doctrine is likely to land on training data and why many companies are moving toward licensing plus transparency.
CDC (2024): Health Equity & Ethical Considerations in AI
What it is: Peer‑reviewed CDC commentary (Preventing Chronic Disease, Aug 2024) (CDC, 2024) on AI in public health and medicine through a health‑equity lens.
Main Themes
Amplify Health Disparities
AI can amplify health disparities via biased data, incomplete representation, and lack of community input.
Equity-by-Design
Calls for equity‑by‑design: community engagement, inclusive datasets, fairness testing, and culturally appropriate use.
Transparency & Accountability
Stresses transparency, explainability, and accountability in population‑level tools (surveillance, risk prediction, triage).
Governance
Highlights governance: ethics review, data governance boards, public‑health oversight.
(CDC, 2024)
How it is Applied: Health‑sector AI ethics policies, impact assessments, and justification for equity‑focused audits.
AI Now Institute (2024–25) – Lessons from the FDA for AI & Power and Governance in the Age of AI
What they are: Critical governance analyses (AI Now Institute, 2024–25) arguing for structural, regulatory solutions rather than self‑regulation by tech firms.
Key Ideas
FDA Analogy
Looks to pre‑market approval, post‑market surveillance, and independent expertise as models for AI regulation; the main leverage point comes before systems hit the market.
Regulatory Design Lessons
Skeptical of a literal “FDA for AI,” but extracts design lessons: clear mandates, strong conflict‑of‑interest rules, and meaningful enforcement.
Power Lens
AI as an extension of existing power (platforms, states, militaries), not a neutral technology; governance must confront concentration, geopolitical competition, and labor impacts.
Policy Recommendations
Pushes for independent public agencies, strong antitrust, procurement rules, and limits on high‑risk uses (e.g., military, surveillance).
(AI Now Institute, 2024–25)
How it is Applied: Normative critique and design principles for thinking about agencies, institutional design, and civil‑rights‑centric AI law.
DOJ Civil Rights Division – Artificial Intelligence and Civil Rights
What it is: The Civil Rights Division’s AI hub/page (2024) (DOJ Civil Rights Division, 2024) articulating DOJ’s enforcement posture on AI and discrimination under existing civil‑rights laws.
Core messages
Existing laws already apply: Fair housing, employment, disability, voting, education, and policing laws cover AI‑mediated decisions.
Warns that delegating decisions to AI does not shield entities from liability.
Focuses on discriminatory outcomes (disparate treatment/impact), not just intent or model internals.
Highlights collaboration with agencies (CFPB, EEOC, FTC) and encourages impact assessments, audits, and accessible complaint channels.
(DOJ Civil Rights Division, 2024)
How it is Applied: Risk registers and legal memos on discrimination, plus scoping algorithmic‑bias audit obligations for US deployments.
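The Division's disparate-impact focus maps onto a standard screening heuristic, the "four-fifths rule" from US employee-selection guidelines: a protected group's selection rate below 80% of the most-favored group's rate is treated as evidence of adverse impact. A minimal sketch of that check follows (hypothetical data; the legal test is more nuanced than any single ratio).

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate.

    Ratios below 0.8 are conventionally flagged under the four-fifths
    rule; this is a screening heuristic, not the legal standard itself.
    """
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical hiring-model outcomes: (selected, applicants) per group.
ratios = adverse_impact_ratios({"group_a": (48, 100), "group_b": (30, 100)})
flagged = {g: r for g, r in ratios.items() if r < 0.8}
print(ratios, flagged)  # group_b ratio = 0.625 -> flagged for review
```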
CNPEN Opinion No. 7 (2024): Ethical Issues of Generative AI
What it is
Opinion from the French Comité national pilote d’éthique du numérique (CNPEN) (CNPEN Opinion No. 7, 2024) on the societal and democratic risks of GenAI.
Key themes
  • Risks to information integrity, democratic debate, education, and creativity (deepfakes, mis/disinformation, cultural homogenization).
  • Calls for traceability and labelling/sourcing of synthetic media, plus education in media literacy.
  • Argues for proportional regulation: stricter for public‑sector and critical‑infrastructure uses; oversight for foundation models.
  • Emphasizes human agency and dignity against excessive automation.
(CNPEN Opinion No. 7, 2024)
How it is Applied: Normative justification for content provenance tools and stricter governance of GenAI in public communication and education.
CNIL Recommendations on Developing AI Systems (2024–25)
What it is
French data‑protection authority’s practical, GDPR‑grounded guidance for AI developers and deployers (released in stages and finalized in 2025) (CNIL, 2024–25).
Data Management
“Fiches pratiques” (practical guidance sheets) covering data minimization, legal bases, DPIAs, security, rights of access and erasure, and automated‑decision‑making rules.
Phased Distinction
Distinguishes training vs inference phases and clarifies controller/processor roles.
Practical Expectations
Practical expectations for documentation, human review, fairness testing, and explainability.
Risk-Based Approach
Aligns with EU AI Act risk‑based approach, anticipating high‑risk categories.
(CNIL, 2024–25)
How it is Applied: A ready‑made compliance checklist for GDPR‑aligned AI development; very useful template beyond France.
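The CNIL fiches are prose guidance rather than code, but one recurring expectation, filtering training corpora against exercised erasure rights and documented lawful bases, can be sketched as a pipeline step. The record schema and function names below are assumptions for illustration, not anything CNIL specifies.

```python
from dataclasses import dataclass

@dataclass
class Record:
    subject_id: str      # pseudonymous identifier for the data subject
    text: str
    lawful_basis: str    # e.g. "consent", "legitimate_interest" (GDPR Art. 6)

def apply_erasure_requests(corpus: list[Record], erased: set[str]) -> list[Record]:
    """Drop records whose subjects exercised their right to erasure.

    In practice erasure may also require retraining or unlearning;
    filtering the corpus before the next training run is the minimum.
    """
    return [r for r in corpus if r.subject_id not in erased]

def check_minimization(corpus: list[Record], allowed_bases: set[str]) -> list[Record]:
    """Keep only records with a documented lawful basis."""
    return [r for r in corpus if r.lawful_basis in allowed_bases]

corpus = [
    Record("u1", "support ticket text", "consent"),
    Record("u2", "scraped forum post", "unknown"),
]
corpus = apply_erasure_requests(corpus, erased={"u1"})
corpus = check_minimization(corpus, allowed_bases={"consent", "legitimate_interest"})
print(len(corpus))  # 0: u1 erased, u2 lacks a documented lawful basis
```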
CCNE/CNPEN Joint Opinion (2024) – Health Data Platforms
What it is
Joint ethics opinion on large‑scale health‑data platforms (research, AI, and public‑interest uses) (CCNE/CNPEN Joint Opinion, 2024).
Ethical tensions
Consent vs public interest
Challenges of re‑use at scale; proposes layered consent plus robust governance bodies.
Transparency
Emphasis on transparency to patients and public about uses, partners, and algorithms.
Independent oversight
Recommends independent ethics and data‑governance committees, including lay participation.
Risk mitigation
Warns against commercial capture and re‑identification risks.
(CCNE/CNPEN Joint Opinion, 2024)
How it is Applied: Designing health data/AI platforms (or data trusts) that must navigate consent, research, and public‑interest trade‑offs.
Conseil d’État Study (2022): IA & Action Publique: Construire la Confiance
What it is
France’s top administrative court’s study (Conseil d'État, 2022) on trustworthy AI use in the public sector.
Key pillars
Legality & proportionality of automated decision‑making in public administration.
Advocates for algorithmic transparency, including communication to citizens of the role of automation and main logic.
Calls for impact assessments, internal governance, and oversight authorities.
Differentiates between support tools and decisional tools, insisting on human responsibility.
(Conseil d'État, 2022)
How it is Applied: Public‑sector AI governance models, particularly around explainability, recourse, and proportionality.
CNPEN Opinion No. 8 (2025): Facial/Posture/Behavioural Recognition
What it is
Ethics opinion on facial recognition, biometric categorisation, and behaviour/posture analysis (CNPEN Opinion No. 8, 2025).
Main conclusions
These technologies pose high risks for privacy, autonomy, freedom of assembly, and non‑discrimination.
Strongly critical of mass or remote biometric surveillance; supports very strict limits and case‑by‑case proportionality tests.
Warns about chilling effects and potential abuse in workplaces, schools, and public spaces.
Recommends moratoria or bans in highly intrusive contexts, and strict safeguards elsewhere (independent oversight, narrow purpose).
(CNPEN Opinion No. 8, 2025)
How it is Applied: Designing or critiquing biometric systems and justifying restrictive policies in line with EU AI Act prohibitions/constraints.
CDEI Review into Bias in Algorithmic Decision‑Making
What it is
Flagship UK review (2020–21) by the Centre for Data Ethics and Innovation (CDEI, 2020–21) on bias in algorithmic decision‑making in recruitment, policing, financial services, and local government.
Maps sources of bias (data, design, deployment) and cross‑sector harms.
Recommends mandatory transparency obligations for public‑sector uses and standards for assurance/audits.
Pushes for regulatory coordination and professionalization of “algorithmic auditors”.
Emphasizes participatory design and robust evaluation, not just technical fixes.
(CDEI, 2020–21)
How it is Applied: Conceptual basis for bias mitigation plans, and to justify algorithmic‑audit requirements.
UK Gov: Ethics, Transparency & Accountability Framework for Automated Decision‑Making
What it is
A 7‑point framework for UK public‑sector bodies using automated decision‑making, originally published in 2021 and refreshed on the government's website in 2025 (UK Government, 2021/2025).
Seven areas (simplified)
Senior ownership & governance.
Clear user need and public benefit.
Data protection, security, and rights safeguards.
Fairness and non‑discrimination checks.
Transparency to affected individuals.
Human review and contestability.
Continuous monitoring and evaluation.
(UK Government, 2021/2025)
How it is Applied: A practical checklist for government AI services; maps well onto impact‑assessment templates.
AI Safety Institute: Approach to Evaluations (2024)
What it is
Statement of the UK AI Safety Institute’s mandate and methodology (UK AI Safety Institute, 2024) for evaluating advanced models, plus its Inspect evaluation platform.
Core elements
Focused on frontier models and catastrophic / systemic risks.
Evaluation pillars: automated capability assessments, red‑teaming, human‑uplift evaluations, agentic behaviour, misuse, societal impacts, safeguards testing.
Does not “certify safety” but generates evidence on capabilities and failure modes.
Linked to the UK’s international AI‑safety diplomacy and collaboration with the US AI Safety Institute.
(UK AI Safety Institute, 2024)
How it is Applied: Designing internal model‑eval pipelines and understanding emerging expectations for high‑capability systems.
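For orientation, Inspect expresses an evaluation as a task that pairs a dataset of samples with a solver and a scorer. The sketch below follows Inspect's published API at the time of writing (module names and signatures may differ across versions) and shows structure only, not an actual AISI evaluation.

```python
# pip install inspect-ai  -- the UK AISI's open-source evaluation framework
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import includes
from inspect_ai.solver import generate

@task
def capability_probe():
    """A toy capability evaluation: does the model answer correctly?

    Real AISI evaluations cover red-teaming, agentic behaviour, and
    misuse; this sketch only shows the task structure.
    """
    return Task(
        dataset=[
            Sample(
                input="What is the capital of France?",
                target="Paris",
            ),
        ],
        solver=generate(),   # query the model under test
        scorer=includes(),   # pass if the target appears in the output
    )

# Run from the command line, e.g.:
#   inspect eval capability_probe.py --model <provider/model>
```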
UK Parliament POSTnote (2025): AI & Mental Healthcare
What it is
Parliamentary Office of Science and Technology briefing (UK Parliament POSTnote, 2025) on opportunities/risks of AI in mental health care.
Surveys AI use in diagnosis, triage, monitoring via apps, chatbots, and clinical decision support.
Highlights ethical concerns: informed consent, safety and clinical validation, explainability, bias against marginalized groups, data protection, and crisis‑response limits.
Notes regulatory patchwork (medical‑device rules, data protection, clinical governance) and calls for robust evaluation and oversight before deployment.
(UK Parliament POSTnote, 2025)
How it is Applied: Sector‑specific ethics/regulatory mapping for mental‑health AI products.
NHS Confederation (2025): AI in NHS Comms / Comms AI Operating Framework
What it is
NHS Confederation taskforce report and operating framework on using AI in NHS communications; includes a draft ethical framework (2024) and a later operating framework (2025) (NHS Confederation, 2024–25).
Ethics & governance themes
Fairness & inclusion
Avoid discriminatory content; ensure accessibility and plain‑language communication.
Privacy & consent
Rules for data input, PHI handling, and DPIAs.
Transparency & human oversight
Disclose AI‑assisted content, require human review for sensitive comms.
Training and capacity-building
For comms staff; an AI Comms Network for knowledge‑sharing.
(NHS Confederation, 2024–25)
How it is Applied: Blueprint for responsible GenAI use in large public‑sector communications teams.
DRCF (2022): Auditing Algorithms – Existing Landscape, Role of Regulators, Future Outlook
What it is
Digital Regulation Cooperation Forum (CMA, ICO, Ofcom, FCA) discussion paper (DRCF, 2022) on algorithmic auditing and the role of regulators.
Key takeaways
Maps a nascent algorithmic‑audit ecosystem (internal, third‑party, and regulator‑led audits).
Identifies possible regulator roles: setting standards, accrediting auditors, conducting own audits, and using audits in enforcement.
Notes cost and capacity barriers, especially for SMEs.
Connects auditing to transparency, fairness, competition, consumer protection, and online‑safety mandates.
(DRCF, 2022)
How it is Applied: Reference point for building algorithmic‑audit programmes and for anticipating how UK regulators may set standards, accredit auditors, or use audits in enforcement.
MEXICO
INAI / RIPD: Recomendaciones generales para el tratamiento de datos en la IA
What it is
General recommendations from the Red Iberoamericana de Protección de Datos, strongly promoted by INAI (INAI/RIPD), on AI projects involving personal data.
Ethics‑by‑design guidance
Emphasizes privacy and ethics from the design phase, including DPIAs, security, accountability, and explainability.
Recommends governance structures, documentation, and user‑rights enablement (access, correction, objection).
Frames AI within existing data‑protection principles (necessity, proportionality, purpose limitation). (INAI/RIPD)
How it is Applied: A Latin American privacy‑by‑design guide for AI, especially useful where the GDPR is a reference point but not directly binding.
IA2030Mx: Agenda Nacional Mexicana de IA
What it is
Multi‑stakeholder roadmap (originally 2020, still influential) (IA2030Mx, 2020) developed by the IA2030Mx coalition, outlining Mexico’s AI vision to 2030.
Ethics & governance elements
Stresses responsible AI, human‑rights protection, and inclusion as pillars of national AI strategy.
Calls for regulatory frameworks on data governance, transparency, and accountability.
Highlights AI for development (health, education, public services) while warning against inequality and regional gaps.
(IA2030Mx, 2020)
How it is Applied: Context for policy projects and advocacy around a Mexican national AI strategy aligned with human rights.
UNAM (2024): Los desafíos éticos de la IA
What it is
Essay by Jorge Enrique Linares in Revista de la Universidad de México on the ethical challenges of AI (UNAM, 2024).
Core arguments
Critique of Technosolutionism
Questions the concentration of power in large technology companies.
Impact on Society
Analyzes AI's influence on autonomy, democracy, employment, surveillance, and the production of knowledge.
Democratic Control and Ethics
Advocates public, democratic control of AI infrastructures and robust ethics education.
(UNAM, 2024)
How it is Applied: Philosophical framing of the AI debate in Mexico and Spanish‑language teaching on AI ethics.
UNAM/PUB (2025): Bioética UNAM – Ethical Implications of AI
What it is
Bioética UNAM scholarly work (UNAM/PUB, 2025) (e.g., “The ethical implications of using artificial intelligence to manipulate or enhance natural ecosystems and biodiversity”) examining AI from a bioethics standpoint.
Key themes
Technological Intervention
AI's role in living systems, raising issues of ecological integrity, inter-species ethics, and long-term uncertainty.
Core Ethical Principles
Stresses precaution, justice, and intergenerational responsibility in AI development.
Participatory Governance
Advocates transparent, participatory governance when AI alters ecosystems or public health.
(UNAM/PUB, 2025)
How it is Applied: Extending AI ethics beyond human‑centered concerns to environmental/bioethical domains.
UDG/CEED (2025): La adopción de la IA en los gobiernos estatales de México
What it is
Research report on AI adoption in Mexican state governments (Centro de Estudios Estratégicos para el Desarrollo, Universidad de Guadalajara) (UDG/CEED, 2025)
Key Findings
Challenges & Potential
Public officials recognize AI's potential for efficiency and transparency, but are hindered by a lack of budget, infrastructure, skills, and clear regulation.
Ethical Concerns
The study highlights significant concerns regarding privacy, data protection, and fairness in public-sector AI implementations.
Strategic Imperatives
Calls are made for crucial capacity-building, robust governance frameworks, and citizen-centric design in AI strategy development.
(UDG/CEED, 2025)
How it is Applied: Empirical grounding for government AI strategies, particularly in addressing capacity and governance gaps.
Endeavor México (2025): La era de la IA en México
What it is
Data‑driven report by Endeavor México and Santander (Endeavor México, 2025) on Mexico’s AI ecosystem and policy proposals for ethical/responsible AI.
Highlights
AI Ecosystem Growth
Identifies ~362 AI companies and strong growth in jobs and investment within Mexico's burgeoning AI sector.
Institutional Architecture
Proposes a Digital National Agency with an AI Office, along with cybersecurity law and strengthened regulators.
Ethical AI Governance
Advocates for human‑rights‑based governance, multi‑actor coordination, and addressing talent and regional gaps.
(Endeavor México, 2025)
How it is Applied: Business‑policy bridge document—useful when arguing for institutional architecture and investment in responsible AI.
UNESCO Observatory – Mexico and AI Ethics Recommendation
What it is
UNESCO Global AI Ethics and Governance Observatory’s country page and readiness report for Mexico, tracking implementation of the 2021 UNESCO AI Ethics Recommendation (UNESCO Observatory).
Key points
National AI Strategy
Recommends developing a national AI strategy explicitly aligned with UNESCO’s ethics principles, integrated into the National Development Plan.
Agile Regulation & Participation
Emphasizes agile regulatory instruments, continuous learning, and broad stakeholder participation in AI governance.
Education & Literacy
Focuses on enhancing education, fostering AI literacy, and actively working to avoid new digital divides.
(UNESCO Observatory)
How it is Applied: Benchmarking Mexico’s alignment with global AI‑ethics norms; justification for ethics‑centric national policy.
COMPARING THE POLICIES
United States
Risk focus: safety/robustness, bias and civil-rights harms, GenAI mis/disinfo, IP/copyright, health equity.
Rights lens: civil rights + privacy + equity; “rights-preserving design” via AI Bill of Rights.
Governance tools: NIST-style risk frameworks, sector regulators (DOJ, health), doctrinal reports (Copyright Office), and emerging model-eval infrastructure.
France
Risk focus: privacy & data-protection, health-data reuse, democratic integrity, biometric surveillance.
Rights lens: GDPR rights, dignity, liberties (expression, assembly), proportionality.
Governance tools: CNIL-led GDPR compliance guidance; high-level ethics opinions; Conseil d’État doctrine for public-sector AI; willingness to recommend bans/moratoria on biometrics.
United Kingdom
Risk focus: bias/discrimination, consumer harms, frontier-model catastrophic risk, sectoral risks in mental health and comms.
Rights lens: fairness, transparency, safety, consumer protection.
Governance tools: algorithmic audits and assurance; public-sector ADM framework; AI Safety Institute evaluations; multi-regulator coordination via DRCF.
Mexico
Risk focus: personal-data misuse, inequality and digital divides, state capacity gaps, structural/democratic harms, environmental impacts.
Rights lens: human rights, privacy, inclusion, environmental and intergenerational justice.
Governance tools: ethics-by-design guidance (INAI); national-agenda roadmapping (IA2030Mx); UNESCO alignment; institutional and capacity proposals (digital agency, AI office, gov-AI adoption studies).
Germany
Risk focus: fundamental-rights and dignity harms, high-risk AI (biometrics, social scoring), trust in welfare systems, governance fragmentation.
Rights lens: strong emphasis on human dignity and autonomy, plus fundamental rights and democracy.
Governance tools: risk-tier regulation and bans (DEK); deep normative guidance (Ethics Council); national strategy; OECD/UNESCO implementation; sector-specific work on PES and welfare.