Blueprint for an AI Bill of Rights (OSTP, 2022)
What it is / status
Non‑binding White House policy “blueprint” from the Office of Science and Technology Policy (OSTP, 2022). It’s guidance for federal agencies and a normative signal to the private sector, not a statute or regulation, so it doesn’t itself create new legal rights or liabilities (OSTP, 2022).
Core principles (the 5 “rights”)
1
Safe and effective systems
Testing, risk assessment, and monitoring before and after deployment.
2
Algorithmic discrimination protections
Anti‑discrimination framing, calling for bias audits and civil‑rights‑aware design.
3
Data privacy
Limits on surveillance, use minimization, and stronger protections for sensitive data.
4
Notice and explanation
People should know when automated systems are used and get meaningful explanations.
5
Human alternatives, consideration, and fallback
Access to a human and the ability to opt out of automated decisions in many high‑impact contexts.
Use / impact
Serves as a reference point for agency policy (e.g. in AI procurement and civil‑rights enforcement) and for companies doing AI “responsible AI” programs, but is often criticized as too soft because it relies on existing laws and voluntary uptake (OSTP, 2022).
White House Office of Science and Technology Policy. (2022). Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. Retrieved from https://www.whitehouse.gov/ostp/ai-bill-of-rights/
NIST AI Risk Management Framework (AI RMF 1.0, 2023) + Generative AI Profile (2024)
What it is / status
A voluntary technical/policy framework from the U.S. National Institute of Standards and Technology (NIST, 2023). It gives a structured way for organizations to manage AI risks across the lifecycle; the 2024 Generative AI Profile (NIST, 2024) tailors the framework specifically to gen‑AI use cases (LLMs, image models, etc.).
Core ethical themes
“Trustworthy AI” is defined with characteristics like validity, safety, security, privacy enhancement, explainability, fairness, and accountability. The RMF is organized around four key functions, essentially embedding ethics and risk controls into governance, system design, evaluation, and operations (NIST, 2023).
Govern
Establish norms, policies, and procedures for AI risk management.
Map
Identify risks and contextual factors impacting AI systems.
Measure
Evaluate AI risks and the effectiveness of risk management activities.
Manage
Prioritize, respond to, and mitigate identified AI risks.
Use / impact
It’s quickly become a de‑facto reference in U.S. and international industry: companies map their internal “AI governance” and model‑risk processes to the RMF. The Generative AI Profile translates abstract principles into more concrete practices (NIST, 2024) (e.g. prompt‑injection testing, content‑filtering strategies, provenance/watermarking, and evaluation of hallucination risks).
Sources:
• NIST. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). https://doi.org/10.6028/NIST.AI.100-1
• NIST. (2024). Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile. https://doi.org/10.6028/NIST.AI.600-1
AI Now 2023 Landscape Report
What it is
A think‑tank report mapping power structures in AI, with a strong critical/civil‑rights lens. It focuses on the U.S. but situates it in a global political economy (AI Now Institute, 2023).
Amplified Harms & Concentrated Power
Argues that AI harms (bias, worker surveillance, policing tools, immigration control, etc.) track existing inequalities and are amplified by a highly concentrated industry (Big Tech) (AI Now Institute, 2023).
Structural Remedies Over Soft Ethics
Calls out the gap between “soft ethics” and harder regulatory constraints, and stresses labor rights, competition law, and democratic oversight as ethical tools—not just internal ethics boards.
Use / impact
Used heavily by advocacy groups, academics, and some regulators as a counterpoint to industry‑driven “trustworthy AI” narratives; it pushes for structural remedies (competition law, procurement rules, bans on some uses) over purely technical fairness fixes (AI Now Institute, 2023).
Source: AI Now Institute. (2023). 2023 Landscape: Confronting Tech Power. https://ainowinstitute.org/2023-landscape
Harvard Gazette: “Ethical concerns mount as AI takes bigger decision‑making role” (2020)
What it is
A symposium‑style popular piece summarizing views of Harvard scholars on the ethics of AI in domains like criminal justice, employment, finance, and health. (Laidlaw, 2020)
Key ethical worries
Opaque Decisions
Automated decisions undermining due process and public justification.
Algorithmic Bias
Harming marginalized groups.
Outsourced Judgment
The risk that public institutions “outsource” moral judgment to private algorithms.
(Laidlaw, 2020)
Role in your corpus
It doesn’t set rules, but it frames the mainstream U.S. debate: AI ethics isn’t just about accuracy; it’s a question of democratic legitimacy, contestability, and the moral limits of automation. (Laidlaw, 2020)
Source: Laidlaw, E. (2020, October 26). Ethical concerns mount as AI takes bigger decision-making role. Harvard Gazette. https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/
FRANCE
CNIL – “How can humans keep the upper hand? The ethical matters raised by algorithms and AI” (2017–2018)
A major public‑consultation report by the French data‑protection authority (CNIL), summarizing debates held in 2017 and published around 2017–2018. It’s not law, but it set the tone for France’s AI ethics agenda (CNIL, 2017).
Human Oversight
Humans must “keep the upper hand” over automated systems.
Fairness & Non‑discrimination
Concern about algorithmic profiling to ensure equitable treatment.
Transparency & Intelligibility
Algorithms should be understandable and open to independent auditing.
Stronger Checks
Calls for tools like algorithmic‑auditing platforms and institutional checks.
(CNIL, 2017)
Impact
It fed directly into broader French and EU conversations (e.g., on algorithmic transparency and the future AI Act) and established CNIL as a central ethics actor, not just a privacy regulator (CNIL, 2017).
Source: CNIL. (2017). How can humans keep the upper hand? The ethical matters raised by algorithms and artificial intelligence. https://www.cnil.fr/en/how-can-humans-keep-upper-hand-report-ethical-matters-raised-algorithms-and-artificial-intelligence
“For a Meaningful Artificial Intelligence: Towards a French and European Strategy” (Villani Report, 2018)
What it is
A strategic roadmap led by mathematician and MP Cédric Villani, commissioned by the French government. It underpins France’s “AI for Humanity” strategy (Villani et al., 2018).
Ethical and social pillars (Villani et al., 2018)
Human-centric AI
Strongly endorses human‑centric AI, with ethics embedded in R&D, education, and industrial policy.
Ethical Governance
Calls for an AI ethics committee, labels or certification for “ethical AI,” and robust governance for data sharing (especially health and mobility data).
Social & Ecological Impact
Emphasizes reducing inequalities, ecological sustainability, and public‑sector AI to support social welfare, not just competitiveness.
Use / impact
Influenced national funding priorities, regulatory initiatives, and France’s positioning in EU AI debates as a champion of “AI made in Europe” that’s both innovative and rights‑protective (Villani et al., 2018).
Source: Villani, C. et al. (2018). For a Meaningful Artificial Intelligence: Towards a French and European Strategy. https://www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf
CNPEN/CCNE opinions on generative AI & health‑data platforms (2023–2025)
What they are
The CNPEN (National Pilot Committee for Digital Ethics) and CCNE (National Consultative Ethics Committee) have issued key opinions.
CNPEN Avis n°7 (2023) examines the ethical issues of generative AI systems (e.g. ChatGPT‑like models) and offers 22 recommendations for designers, public authorities, and users (CNPEN, 2023).
Joint CCNE/CNPEN opinions, including Avis 141/4 on AI for medical diagnosis and Avis 143/5 on health‑data platforms, analyze AI in healthcare and "plateformes de données de santé" (like the Health Data Hub) (CCNE/CNPEN, 2023; 2024).
Key ethical themes
Generative AI
Risks to truthfulness, manipulation, dependency, and dignity; calls for traceability of AI‑generated content, strong human oversight, and attention to vulnerable groups (children, patients, workers).
Health‑data platforms
Proportionality and necessity of large‑scale data reuse, consent and information quality, governance structures with public representation, data security, environmental impact, and digital sovereignty.
Practical effects
These opinions give French regulators and ministries an ethics blueprint when implementing the EU AI Act in health and when designing national platforms; they are widely cited in French medical and digital‑policy debates (CNPEN, 2023; CCNE/CNPEN, 2023; 2024).
Sources:
• CNPEN. (2023). Avis n°7: Enjeux éthiques des systèmes d'intelligence artificielle générative.
• CCNE/CNPEN. (2023). Avis 141/4: Intelligence artificielle appliquée au diagnostic médical.
• CCNE/CNPEN. (2024). Avis 143/5: Plateformes de données de santé.
CNIL – “Algorithms and artificial intelligence: CNIL’s report on the ethical issues” (2018)
What it is
A synthesis report from CNIL summarizing the main ethical questions around AI and algorithms and orienting future regulatory work (CNIL, 2018).
Substantive content
1
Broad Themes
  • Transparency/explainability
  • Fairness
  • Autonomy
  • Data protection
  • Need for accountability mechanisms
2
Societal Stakes
  • Power asymmetries between platforms and individuals
  • Risk of “black box” public decisions
  • Importance of preserving individual freedoms
(CNIL, 2018)
Role
Works as a high‑level map for French AI governance options (codes of conduct, audits, sector‑specific rules) and complements the more detailed “Humans keep the upper hand” report (CNIL, 2018).
Source: CNIL. (2018). Algorithms and artificial intelligence: CNIL's report on the ethical issues. https://www.cnil.fr/en/algorithms-and-artificial-intelligence-cnils-report-ethical-issues
UNITED KINGDOM
Alan Turing Institute – “Understanding Artificial Intelligence Ethics and Safety” (Leslie, 2019)
What it is
A comprehensive guide for UK public‑sector organizations on designing and deploying ethical and safe AI systems (Leslie, 2019).
Potential Harms
  • Discrimination
  • Exclusion
  • Opacity
  • Safety risks
Mitigation Measures
  • Impact assessments
  • Stakeholder engagement
  • Documentation
  • Robust performance evaluation
  • Avenues for contestation
Interpretability & Communication
Decisions should be understandable to both officials and affected people, not just technically “explainable”.
(Leslie, 2019)
Use / impact
Adopted and referenced in UK government guidance on using AI in the public sector; widely cited in academic and practitioner work as a model of operational AI ethics guidance (Leslie, 2019).
Source: Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute. https://doi.org/10.5281/zenodo.3240529
CDEI Review into Bias in Algorithmic Decision‑Making (2020)
What it is
A review by the UK Centre for Data Ethics and Innovation (CDEI) focusing on algorithmic bias in policing, financial services, local government, and recruitment (CDEI, 2020).
Core findings / ethics content
Confirms that algorithmic systems can embed and amplify historical discrimination.
Emphasizes the importance of outcome fairness (not just process) and calls for better data quality, impact assessments, and continuous monitoring.
(CDEI, 2020)
Recommendations
Suggests measures like mandatory transparency obligations for public‑sector algorithmic tools, better regulator coordination, and sector‑specific guidelines. These feed into UK government policy discussions on “pro‑innovation” AI regulation (CDEI, 2020).
Source: Centre for Data Ethics and Innovation. (2020). Review into bias in algorithmic decision-making. https://www.gov.uk/government/publications/cdei-publishes-review-into-bias-in-algorithmic-decision-making
House of Lords report: “AI in the UK: ready, willing and able?” (2018)
What it is
A wide‑ranging inquiry by the House of Lords Select Committee on Artificial Intelligence (House of Lords, 2018).
Ethical framing
Positions the UK as aiming to be a global leader in “ethically sound” AI.
Calls for putting ethics at the centre of AI development, with recommendations around public engagement, education, diversity in tech, and scrutiny of public‑sector AI deployment.
(House of Lords, 2018)
Impact
The government issued a formal response; the report helped justify the creation/empowerment of bodies like the CDEI and shaped the narrative that UK AI policy should combine innovation with strong ethical safeguards (House of Lords, 2018).
Source: House of Lords Select Committee on Artificial Intelligence. (2018). AI in the UK: ready, willing and able? HL Paper 100. https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf
ICO Guidance on AI & Data Protection / Fairness (updated 2023)
Non‑statutory but authoritative guidance from the UK Information Commissioner’s Office on how data‑protection law (UK GDPR, etc.) applies to AI (ICO, 2023).
Ethical / legal substance
Interprets fairness in AI as a legal requirement, not just a moral one—covering training‑data bias, disparate impact, and opaque profiling.
Offers practical expectations: DPIAs for high‑risk AI, human‑in‑the‑loop requirements for significant decisions, meaningful explanations, and governance over automated decision‑making.
(ICO, 2023)
2023 updates
Specifically deepen the fairness chapter, reflecting workshops with the Alan Turing Institute and industry, providing more worked examples of what regulators consider unfair or non‑compliant AI (ICO, 2023).
Source: Information Commissioner's Office. (2023). Guidance on AI and data protection. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
MEXICO
Agenda Nacional Mexicana de IA (IA2030Mx, 2020)
¿Qué es? Una “Agenda Nacional Mexicana de IA” multi-stakeholder, desarrollada por la coalición IA2030Mx (sociedad civil, academia, sector privado) entre 2018 y 2020 (IA2030Mx, 2020).
Contenido Ético / Derechos:
Recomendaciones de Política Pública
Medidas para alinear el desarrollo de la IA mexicana con los ODS, los derechos humanos y el desarrollo inclusivo.
Principios de Gobernanza y Ética
Aboga por una gobernanza distribuida (consejo asesor multisectorial), transparencia en IA de alto riesgo, y atención a sesgos, impactos laborales y desigualdad regional.
(IA2030Mx, 2020)
Rol: No es una estrategia gubernamental oficial de IA, pero se considera (incluidas las de la OCDE y la UNESCO) un documento de referencia clave para una futura estrategia nacional de IA éticamente fundamentada (IA2030Mx, 2020).
Source: IA2030Mx. (2020). Hacia una Estrategia de IA en México: Aprovechamiento de la Inteligencia Artificial para el Bienestar. https://www.ia2030mx.org/
R3D (2025): “No nos vean la cara — Reconocimiento facial 101”
What it is: A civil‑society report by Mexican digital‑rights group R3D on facial‑recognition technology in public spaces. (R3D, 2025)
Ethical and human‑rights critique
Mechanism & Scenarios
Explains how facial recognition works, its lifecycle, and typical deployment scenarios.
Threats to Rights
Documents deployments in Mexican states like Coahuila and Aguascalientes and argues they threaten privacy, freedom of assembly, equality, and due process—particularly for marginalized communities.
Bias & Protest Impact
Highlights disability and biometric‑error concerns, bias, and the chilling effect on protest.
(R3D, 2025)
Policy stance
The report leans toward strong restrictions or moratoria for public‑space facial recognition, framing it as incompatible with robust human‑rights protections. (R3D, 2025)
Source: R3D: Red en Defensa de los Derechos Digitales. (2025). No nos vean la cara — Reconocimiento facial 101. https://r3d.mx/reconocimiento-facial/
UNAM – “Los desafíos éticos de la inteligencia artificial” (J. E. Linares, 2024)
What it is
A philosophical and societal essay in Revista de la Universidad de México (UNAM) examining ethical challenges of AI. (Linares, 2024)
Main themes
Critiques both utopian and dystopian narratives around AI, focusing instead on concrete issues: surveillance capitalism, labour precarity, concentration of power, and erosion of human autonomy.
Discusses generative AI, emotional‑monitoring tools, and their impact on subjectivity, democracy, and culture in Mexico and Latin America.
(Linares, 2024)
Contribution
Provides a conceptual, humanities‑oriented framing of AI ethics that can be put in dialogue with more legal/technical frameworks like UNESCO’s recommendation or IA2030Mx. (Linares, 2024)
Source: Linares, J. E. (2024). Los desafíos éticos de la inteligencia artificial. Revista de la Universidad de México.
Nakamura (2023): “La perspectiva ética y jurídica de la IA en la investigación jurídica de México”
What it is
A reflective academic article (Corona Nakamura & González Madrigal) in Revista Misión Jurídica about the ethical and legal implications of AI in Mexican legal research (Corona Nakamura & González Madrigal, 2023).
Ethical-legal focus
Explores how AI tools that search, classify, or predict legal outcomes introduce risks: bias, opacity, over‑reliance (“automation bias”), privacy violations, and unequal access.
Argues for ethical limits and regulatory frameworks to ensure that AI‑assisted legal research remains fair, transparent, and respectful of human dignity.
(Corona Nakamura & González Madrigal, 2023)
Takeaway
Positions AI in law as potentially beneficial (efficiency, better information access) only if bounded by human‑rights‑oriented norms and professional responsibility (Corona Nakamura & González Madrigal, 2023).
Source: Corona Nakamura, L. A., & González Madrigal, A. (2023). La perspectiva ética y jurídica de la IA en la investigación jurídica de México. Revista Misión Jurídica, 16(25).
UNESCO Recommendation on the Ethics of AI & Mexico's alignment
What it is (global)
UNESCO’s 2021 Recommendation on the Ethics of AI is the first global normative instrument on AI ethics, adopted by all member states (including Mexico). It emphasizes human rights, human dignity, transparency, accountability, and human oversight (UNESCO, 2021).
Mexico context
UNESCO’s Artificial Intelligence Readiness Assessment of Mexico stresses that Mexico could become a regional reference in AI governance if it develops an ethically oriented national strategy and legal framework (UNESCO, 2023).
Analyses of Mexico’s AI policy note that the country endorsed OECD and G20 AI principles and collaborated with UNESCO, but that there has been a lag in implementing comprehensive AI regulation after initial strategic work.
Ethical direction of travel
Together with IA2030Mx, this places Mexico on a path toward a human‑rights‑rooted AI strategy, even if the formal national plan is still emerging (UNESCO, 2021, 2023).
Sources:
  • UNESCO. (2023). Artificial Intelligence Readiness Assessment: Mexico.
Federal courts and AI: theses on “Inteligencia artificial aplicada en procesos jurisdiccionales” (2025)
1
Thesis 2031009
“Inteligencia artificial aplicada en procesos jurisdiccionales. Constituye una herramienta válida para calcular el monto de las garantías que se fijen en los juicios de amparo.” It upholds the use of AI tools to compute the amount of injunctive guarantees (bonds) in amparo proceedings. (Poder Judicial de la Federación, 2025)
2
Thesis 2031010
“Inteligencia artificial aplicada en procesos jurisdiccionales. Elementos mínimos que deben observarse para su uso ético y responsable con perspectiva de derechos humanos.” It sets minimum conditions for ethical and rights‑respecting use of AI in judicial processes (transparency, proportionality, data protection, and human supervision). (Poder Judicial de la Federación, 2025)
Ethical parameters
Commentaries and press reports highlight that:
AI is expressly framed as an auxiliary tool, not a decision‑maker. The judge retains full responsibility for the decision.
AI may be used, for example, for complex numerical operations (e.g., guarantees), provided that:
  • its use is disclosed;
  • outputs are verifiable and can be checked;
  • data‑protection and proportionality principles are respected;
  • no predictive or value‑laden determinations (e.g., of guilt) are delegated to AI.
(Poder Judicial de la Federación, 2025)
Significance
These theses are among the first in Latin America to explicitly regulate judicial use of AI, operationalizing an ethics‑of‑AI framework directly in appellate jurisprudence. (Poder Judicial de la Federación, 2025)
Source: Poder Judicial de la Federación. (2025). Tesis 2031009 y 2031010: Inteligencia artificial aplicada en procesos jurisdiccionales. Semanario Judicial de la Federación.
German Data Ethics Commission (Datenethikkommission) – Final Opinion (2019)
A comprehensive report commissioned by the federal government in 2018 and delivered in 2019, giving recommendations on data, AI, and digital-policy governance (Datenethikkommission, 2019).
Ethical framework & recommendations
Risk-Based Regulation
Proposes risk-based regulation of AI and algorithmic systems, with stricter requirements or bans for high-risk uses (e.g., social scoring, certain biometric surveillance).
Guiding Principles
Emphasizes human dignity, autonomy, democracy, and non-discrimination as guiding principles.
Oversight & Transparency
Recommends extensive transparency, auditing mechanisms, and institutional oversight.
Influential for German and EU debates; frequently cited in work on the UNESCO AI ethics recommendation and the EU AI Act as an example of a national ethics body pushing for strong rights-based controls (Datenethikkommission, 2019).
Source: Datenethikkommission der Bundesregierung. (2019). Gutachten der Datenethikkommission. https://www.bmi.bund.de/SharedDocs/downloads/DE/publikationen/themen/it-digitalpolitik/gutachten-datenethikkommission.pdf
Ethics Commission on Automated & Connected Driving – Report (2017)
A specialist ethics commission convened by the Federal Ministry of Transport to address self‑driving cars and connected vehicles (Ethics Commission on Automated and Connected Driving, 2017).
The “20 ethical rules”
1
Human life absolute priority
No algorithmic trade‑offs valuing one life over another based on characteristics like age or social status.
2
Harm reduction justification
Automated driving is only justifiable if it reduces overall harm compared to human driving.
3
Override & responsibility
No fully autonomous systems without a way to override; clear allocation of responsibility among manufacturers, operators, and drivers.
4
Data protection
Strong data‑protection and data‑minimization rules around vehicle and location data.
(Ethics Commission on Automated and Connected Driving, 2017)
Impact
These rules informed Germany’s approach to automated‑driving regulation and are widely cited internationally as an early attempt to translate abstract ethics into concrete sectoral norms (Ethics Commission on Automated and Connected Driving, 2017).
Source: Ethics Commission on Automated and Connected Driving. (2017). Report. Federal Ministry of Transport and Digital Infrastructure. https://www.bmvi.de/SharedDocs/EN/publications/report-ethics-commission.pdf
Germany’s AI Strategy – AI Watch Country Report
What it is
The AI Watch initiative (European Commission’s Joint Research Centre) tracks national AI strategies; for Germany it summarizes the “AI made in Germany” strategy and its updates.
Human-centric & Trustworthy Focus
Germany’s strategy emphasizes human‑centric and trustworthy AI, aligning with EU concepts.
Priority Areas
Research funding, digital infrastructure, SME support, skills, and ethical/legal frameworks that embed fundamental rights and democratic oversight.
Use
The AI Watch report is a comparative analytic tool, but it underscores that Germany aims to position itself as a leader in high‑quality, ethically governed AI rather than just low‑regulation innovation.
Source: European Commission Joint Research Centre. (2020). AI Watch: National strategies on Artificial Intelligence – Germany. https://publications.jrc.ec.europa.eu/repository/handle/JRC119974
Max Planck Society: AI’s social, ethical, and legal questions
The Max Planck Society hosts multiple research programs on AI’s societal impact (e.g., at Max Planck Institute for Intelligent Systems, Max Planck Law). Their public‑facing material explicitly addresses ethical and legal questions.
Algorithmic Bias & Black Boxes
Bias in algorithmic decision‑making (e.g., hiring algorithms disadvantaging women) and the difficulty of understanding “black box” models.
Interpretability & Safeguards
The need for causal reasoning, interpretability, and robust safeguards in AI design.
Beyond Technical Fixes
Recent research on how delegating unethical tasks to AI can increase dishonest human behavior, highlighting that technical safeguards alone are not enough and that legal/social frameworks are needed.
Role in your corpus
These are not regulatory acts, but they represent a major scientific actor in Germany helping to shape normative debates about AI, particularly around responsibility, accountability, and the limits of technical fixes.
Source: Max Planck Society. (Various years). Research on AI ethics, bias, and accountability. https://www.mpg.de/research/artificial-intelligence
COMPARING THE ETHICS
A comparative analysis of AI ethics frameworks across four countries, examining how different nations approach the governance of artificial intelligence through policy, regulation, academic discourse, and civil society engagement.
Approach & Authority
From binding judicial doctrine to voluntary frameworks, civil society reports to government commissions—each country employs different institutional mechanisms.
Ethical Orientation
Rights-based, risk-based, human-centric, or power-critical—frameworks reflect distinct philosophical and political traditions.
Key Mechanisms
Impact assessments, transparency requirements, auditing, oversight bodies, professional codes, and regulatory tiers shape how ethics translates into practice.
Scope & Focus
Some target specific sectors (autonomous vehicles, healthcare, facial recognition) while others address AI governance broadly across society.
The detailed comparison table below maps each instrument's year, type, scope, ethical orientation, and key mechanisms—revealing both convergence on core principles and divergence in implementation strategies.