HUMAN RIGHTS REVIEWS
INDEX KEY
(S) Surveillance
(P) Policing / criminal justice
(H) Hiring / labour
(W) Welfare / social services & broader equality rights
UNITED STATES
FTC v. Rite Aid (2023) (S, P, W)
The FTC found that Rite Aid deployed facial recognition across its stores without adequate notice, accuracy controls, or safeguards, generating thousands of false matches that led to misidentifications (especially of women and people of colour), humiliating searches, and wrongful police involvement. The settlement bans Rite Aid from using facial recognition for five years and frames biased “face surveillance” as a consumer‑protection and civil‑rights problem that harms dignity, privacy, and equal treatment.
FTC Decision and Order, In the Matter of Rite Aid Corporation, File No. 2023-0130 (Dec. 19, 2023)
AI Now Institute – Litigating Algorithms (2018 workshop & 2019 U.S. report) (P, W, H)
These reports and workshops synthesize U.S. court cases challenging government algorithmic systems used in areas like welfare benefits, housing, criminal justice risk scores, and automated hiring. They document how opaque “black box” systems can deny due process, reinforce discrimination, and obscure accountability, and they extract litigation strategies to force transparency, protect notice and appeal rights, and push agencies away from harmful scoring tools.
AI Now Institute, Litigating Algorithms: Challenging Government Use of Algorithmic Decision Systems (2018 workshop report); AI Now Institute, Litigating Algorithms 2019 US Report (2019)
Georgetown Law – The Perpetual Line-Up (2016) (S, P)
This foundational investigation shows that over half of U.S. adults—more than 117 million people—are in face recognition networks searchable by police, often via driver’s‑license databases never collected for law‑enforcement purposes. It flags how largely unregulated systems, skewed accuracy on Black people, “perpetual” mugshot‑style databases, and secret use in everyday policing threaten privacy, equal protection, and freedom of association.
Clare Garvie, Alvaro Bedoya & Jonathan Frankle, The Perpetual Line-Up: Unregulated Police Face Recognition in America, Georgetown Law Center on Privacy & Technology (Oct. 18, 2016)
U.S. DOJ & EEOC Technical Assistance on Algorithmic Hiring & the ADA (2022+, Title VII guidance 2023) (H)
EEOC and DOJ warn that AI‑driven hiring and evaluation tools (resume screeners, online tests, video interview scoring, productivity analytics) can illegally disadvantage disabled workers and other protected groups. They explain that employers remain responsible if vendor tools screen out people with disabilities, fail to provide accommodations, or create disparate impact under ADA or Title VII, and they set practical guardrails: testing for bias, offering alternative assessments, and ensuring explainable, contestable decisions.
U.S. Department of Justice & U.S. Equal Employment Opportunity Commission, The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees (May 12, 2022); EEOC, The ADA and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees (updated guidance); EEOC, Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII (May 18, 2023)
U.S. Commission on Civil Rights – Civil Rights Implications of the Federal Use of Facial Recognition Technology (2024) (S, P, W)
This federal report examines how agencies like DOJ, DHS, and HUD use FRT and finds serious risks around accuracy disparities, due process, discrimination in housing and policing, and weak oversight. It recommends stringent testing for bias, strict limits on deployment, transparency to affected communities, and suspending systems where racial or other disparities persist, positioning FRT as a structural civil‑rights issue rather than a neutral security tool.
U.S. Commission on Civil Rights, Civil Rights Implications of the Federal Use of Facial Recognition Technology (2024)
Amnesty International – Ban the Scan / NYC (2021–ongoing) (S, P, W – protest & assembly)
Amnesty’s “Ban the Scan” campaign and mapping project show that NYPD facial recognition is heavily concentrated in Black and brown neighbourhoods and around protest sites, effectively creating “digital stop‑and‑frisk.” The research ties FRT camera density to existing patterns of racialized policing and argues that this surveillance chills protest, assembly, and everyday movement, calling for bans on biometric identification in public spaces.
Amnesty International, Ban the Scan NYC (ongoing campaign, 2021–present); Amnesty International, Mapping NYPD Facial Recognition Surveillance (2021)
Investigative reporting – New Orleans live facial recognition alerts (2025) (S, P)
Reporting reveals that New Orleans police secretly used live facial recognition alerts from a private camera network for two years, despite a city ordinance limiting FRT to post‑incident searches for serious crimes. The system repeatedly fed real‑time alerts into everyday policing, including non‑violent cases, without disclosure in court records, raising serious concerns about unlawful surveillance, due‑process violations, and the privatization of coercive state functions.
Investigative reporting on New Orleans Police Department's use of live facial recognition (2025)
UNITED KINGDOM
R (Bridges) v Chief Constable of South Wales Police (Court of Appeal, 2020) (S, P)
This landmark UK judgment addressed police use of live facial recognition (LFR). The Court of Appeal held that South Wales Police's deployment of automated facial recognition (AFR) breached Article 8 ECHR because the legal framework left too much discretion to individual officers over watchlists and deployment locations, that its Data Protection Impact Assessment (DPIA) was deficient, and that the force had breached the public sector equality duty by failing to assess whether the software was biased on grounds of race or sex. The court did not ban LFR outright, but made clear that any future deployment must satisfy the "in accordance with the law", necessity, and equality-duty requirements.
R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058
ICO Opinion on Live Facial Recognition (LFR) in Public Places (2021) (S, P, H)
The UK Information Commissioner’s Opinion sets a very high bar for lawful LFR use, emphasizing that scanning everyone’s faces in public is inherently intrusive. It states that such use will rarely be “necessary and proportionate.” The Opinion requires organizations to demonstrate compelling justification, conduct robust bias and accuracy testing, implement strict data retention limits, and offer genuine alternatives. It specifically warns that routine deployment of LFR in shops, offices, or public venues for purposes like access control or staff monitoring is likely to be unlawful.
UK Information Commissioner's Office, ICO Opinion: The Use of Live Facial Recognition Technology in Public Places (June 2021)
EHRC Interventions & Statements on Met Police LFR (2025) (S, P)
The Equality and Human Rights Commission has intervened in a judicial review challenging the Metropolitan Police's live facial recognition (LFR) policy. The Commission argues that the policy is incompatible with ECHR Articles 8, 10, and 11, which protect privacy, freedom of expression, and assembly. Concerns raised include racial bias in LFR matches, the absence of a clear statutory basis, and inadequate safeguards. The regulator emphasizes that mass biometric checks in public spaces risk discriminatory policing and could chill protest.
Equality and Human Rights Commission, Intervention in judicial review of Metropolitan Police Live Facial Recognition policy (2025)
Ada Lovelace Institute: An Eye on the Future of Biometrics (S, P, H, W)
This report from the Ada Lovelace Institute highlights the UK’s current "legal grey area" in biometric rules, covering technologies such as Live Facial Recognition (LFR), emotion recognition, and biometrics in the workplace. It advocates for a comprehensive, risk‑based legal framework that includes bans on the most harmful uses (e.g., mass biometric surveillance), enhanced democratic oversight, and explicit rights to contest and opt out. The proposals are firmly grounded in privacy, non‑discrimination, and freedom‑of‑assembly protections.
Ada Lovelace Institute, An Eye on the Future of Biometrics: Towards a UK Regulatory Framework (report)
ICO Enforcement: Serco's Biometric Surveillance in the Workplace (S, H)
The ICO ordered Serco Leisure and partner trusts to stop using facial recognition and fingerprints to track over 2,000 workers’ attendance. The ruling also mandated the deletion of most biometric data, finding the scheme unnecessary and disproportionate given the availability of less intrusive options like ID cards or fobs. This decision underscores that biometric monitoring in the workplace can violate workers’ data-protection and human rights, particularly when consent is compromised by inherent power imbalances.
UK Information Commissioner's Office, Enforcement Notice against Serco Leisure and partner trusts regarding biometric surveillance in the workplace
Coverage of LFR roll‑out & critiques (2024–2025) (S, P, W)
Investigations reveal widespread use of Live Facial Recognition (LFR) by UK police, scanning millions of faces and conducting retrospective searches across massive image databases. Official tests confirm significantly higher false-match rates for Black and Asian individuals and for women, particularly Black women.
Civil rights groups caution that this rapid expansion, without dedicated legislation, entrenches racialized surveillance. They warn it could create “no‑go” zones for minorities who avoid heavily scanned areas, thereby undermining fundamental equality and freedom of movement.
Media coverage and civil society reports on UK police LFR deployment (2024–2025)
FRANCE
CNIL – Algorithmic Video Surveillance for Paris 2024 (S, P)
The CNIL’s Q&A clarifies that, for the Paris 2024 Olympics, France legalized experimental “augmented” cameras that algorithmically detect specified events such as crowd surges or abandoned bags; the law explicitly excludes facial recognition.
Despite the exclusion of facial recognition, CNIL highlights the significant risks to privacy and individual freedoms inherent in large-scale, real-time video analytics. They underscore the critical need for strict limitations, transparency, and clear avenues for people to exercise their data rights.
CNIL (Commission Nationale de l'Informatique et des Libertés), Q&A on Algorithmic Video Surveillance for Paris 2024 Olympics
EDRi dossier & civil‑society open letters on Olympics AVS law (2023–2024) (S, P, W)
Leading civil society organizations, including EDRi, La Quadrature du Net, and Amnesty, contend that the Olympics law's provisions for “intelligent video” normalize biometric-style mass surveillance in public spaces. Through open letters, they assert this scheme violates international human rights principles of necessity and proportionality, heightens discrimination risks through targeted monitoring, and establishes a hazardous precedent for expanding AI-driven surveillance beyond major events.
European Digital Rights (EDRi), La Quadrature du Net, Amnesty International, and other civil society organizations, Open letters and dossier on Olympics AVS law (2023–2024)
Paris 2024: A Testbed for Algorithmic Video Surveillance (S, P)
Commentary in venues like Lawfare and French media describes Paris 2024 as a real‑world testbed for Algorithmic Video Surveillance (AVS), with hundreds of AI‑enabled cameras tracking behaviors across the city’s public spaces and transport. Analyses question whether emergency security justifications truly met necessity and proportionality standards, highlight thin transparency (decrees published very late, unclear signage), and warn of “post‑Olympic drift” toward permanent AI video surveillance.
Commentary in Lawfare and French media on Paris 2024 as testbed for Algorithmic Video Surveillance (2024)
La Quadrature du Net – school FRT cases & Olympics “technopolice” work (2019–2024) (S, W)
La Quadrature du Net (LQDN) led litigation and advocacy against facial recognition gates piloted at two high schools in Nice and Marseille. CNIL and, subsequently, the Marseille administrative court found the pilots unlawful for minors, concluding they were neither necessary nor proportionate given less intrusive access-control alternatives. Building on this work, LQDN’s “Technopolice” campaign connects biometric surveillance, predictive policing, and the algorithmic video surveillance (AVS) deployed around Paris 2024, linking these technologies to structural racism and calling for outright bans on biometric mass surveillance.
La Quadrature du Net, litigation and advocacy on school facial recognition (2019–2024); La Quadrature du Net, Technopolice campaign materials
CNIL’s earlier stance against school FRT (2019) & court follow‑up (2020) (S, W)
In 2019, CNIL formally rejected facial recognition at school entrances, concluding that scanning minors’ faces to "speed up" entry was neither necessary nor proportionate. They highlighted that attendance and access control could be effectively managed with badges or cards. Following this, the Marseille administrative court echoed CNIL's reasoning in 2020, annulling the regional decision that authorized the FRT pilot programs and strongly emphasizing GDPR‑based rights to privacy and data minimization for children.
CNIL decision rejecting facial recognition at school entrances (2019); Marseille Administrative Court ruling annulling FRT pilot authorization (2020)
Reporting on Implementation During the Paris Games (2024) (S, P)
Post-Games reporting reveals that approximately 485 AI-equipped cameras were deployed across Paris and its transport hubs during the Olympics. This extensive use occurred with limited public notice, and relevant decrees were sometimes published only after deployments had already begun. Journalists and NGOs have since documented widespread confusion regarding the systems' actual detection capabilities, data storage durations, and whether the evaluation committee will assess not only technical performance but also critical impacts on fundamental rights.
Post-Games reporting by journalists and NGOs on Paris 2024 AI surveillance implementation (2024)
MEXICO
No nos vean la cara (2025) (S, P, W – protest & assembly)
R3D’s in-depth report thoroughly maps facial recognition deployments across Mexican streets, airports (trusted-traveler kiosks), and stadiums (“Fan ID”). It concludes that Mexico is rapidly constructing a mass biometric surveillance infrastructure with minimal regulation. The report documents significant risks, including false matches, arbitrary detentions, data breaches, and chilling effects on protest and public life. It recommends severely limiting or abandoning facial recognition for biometric identification.
Wired’s explainer condenses R3D’s findings for a general audience: facial recognition is expanding quickly in Mexican cities, airports, and stadiums without strong legal safeguards. This rapid expansion creates serious risks to privacy and other rights. The article emphasizes how opaque “security” deployments can normalize constant biometric monitoring and urges policymakers to scrutinize their necessity, legality, and potential discriminatory impacts.
R3D (Red en Defensa de los Derechos Digitales), No nos vean la cara: Reconocimiento facial en México (2025); Wired explainer on facial recognition in Mexico (2025)
INAI Order on Biometric Systems at Mexico City Airport (2023) (S, W)
In 2023, INAI issued an order to the National Migration Institute (INM) regarding the biometric identification systems deployed at Mexico City's airport. This directive mandated the INM to disclose the legal bases, impact assessments, and data-handling practices associated with these systems, which include facial and other biometrics collected via automated kiosks.
The decision underscores the acknowledged risks of false positives and arbitrary detentions that can arise from such technologies. It firmly establishes transparency and purpose limitation as crucial safeguards when the state utilizes AI-enabled biometrics to manage movement and border crossings.
INAI (Instituto Nacional de Transparencia, Acceso a la Información y Protección de Datos Personales), Order to National Migration Institute regarding biometric systems at Mexico City Airport (2023)
El País & civil‑society work – INAI action on Fan ID (2025) (S, P, W)
El País reports that INAI has initiated a sanctions procedure against the Mexican Football Federation concerning its Fan ID system. This system collects sensitive biometric and identity data from fans via Incode. Civil‑society analyses highlight significant issues such as a lack of explicit consent, unclear data‑sharing practices with authorities, and security vulnerabilities.
Furthermore, concerns are raised that mandatory biometric registration for football matches infringes upon privacy and participation rights by conditioning access to cultural life.
El País reporting on INAI sanctions procedure against Mexican Football Federation Fan ID system (2025); civil society analyses on Fan ID privacy concerns
AI & human rights in Mexico (2024–2025) (H, W – cultural & IP rights)
Recent Mexican legal scholarship around Amparo Directo 6/2025 and broader AI governance debates argues that AI systems must be integrated into existing human‑rights frameworks rather than treated as rights‑bearing entities. Commentators emphasize that only human beings can be authors for copyright purposes, and they warn that unregulated AI use in surveillance, automated decisions, and creative industries demands urgent legislative reform to safeguard privacy, labour rights, and cultural participation.
Mexican legal scholarship on AI and human rights (2024–2025), including commentary on Amparo Directo 6/2025
AI‑driven mass surveillance in Chihuahua – Plataforma / Torre Centinela (S, P)
Reports from EFF, R3D, and journalists detail “Plataforma Centinela,” a state‑wide AI surveillance system deployed in Chihuahua. This extensive system incorporates thousands of cameras, facial recognition technology, license‑plate readers, drones, and a prominent 20‑story “Torre Centinela” command centre. Critics argue that this creates an atmosphere of permanent, militarized surveillance, impacting both migrants and residents. Concerns are particularly high due to a perceived lack of transparency, insufficient legal safeguards, and an absence of democratic oversight, raising serious questions about privacy, freedom of movement, and the potential for abuse.
EFF, R3D, and journalistic reporting on Plataforma Centinela AI surveillance system in Chihuahua, Mexico
SCJN, Amparo Directo 6/2025 – AI‑generated works & authorship (2025) (W – cultural & IP rights)
In Amparo Directo 6/2025, Mexico’s Supreme Court ruled that works generated solely by AI cannot be registered as intellectual property, emphasizing that only natural persons can be authors. This decision is considered a landmark in Mexican AI law, effectively blocking “authorship” rights for AI systems.
Commentators highlight the urgent need to adapt legal frameworks to continue protecting human creators and prevent the undermining of cultural rights in an AI-mediated environment.
Suprema Corte de Justicia de la Nación (SCJN), Amparo Directo 6/2025 (2025)
GERMANY
Automating Injustice: “Predictive” Policing in Germany (2025) (P, S, W)
This report delves into predictive-policing, risk-assessment, and profiling tools deployed by German police, courts, and prisons. It highlights how these technologies disproportionately concentrate surveillance on already marginalized areas and racialized groups.
The report further argues that opaque algorithms, often trained on biased historical data, reinforce racial profiling and "targeted policing," thereby threatening fundamental principles of equality, due process, and the right to non-discriminatory law enforcement.
Automating Injustice: "Predictive" Policing in Germany (2025 report)
Federal Constitutional Court Ruling: Hesse & Hamburg Automated Data Analysis (2023) (S, P, W)
Germany’s Constitutional Court has invalidated key provisions that permitted Palantir-based “Hessendata” and similar tools to exploit extensive police and third-party datasets for crime prevention purposes. The court declared that such automated data analysis profoundly infringes upon the constitutional right to informational self-determination.
This ruling stipulates that such data analysis is only permissible under narrowly defined, specific laws directly linked to identifiable dangers, moving away from broad “pre-crime” style analytics.
German Federal Constitutional Court, Ruling on Hesse and Hamburg automated data analysis laws (2023)
GFF Challenges Renewed “Hessendata” Powers (2024) (S, P)
Following the 2023 ruling, Hesse amended its police law, yet it continued to authorize large-scale data mining. In response, GFF (the Society for Civil Rights) filed another constitutional complaint. They argue that this revised law still permits the analysis of data from uninvolved persons and mass incident logs. This practice, they contend, leads to false suspicion and invasive secret surveillance, directly contravening the Basic Law’s guarantees of privacy and informational self-determination.
GFF (Gesellschaft für Freiheitsrechte / Society for Civil Rights), Constitutional complaint challenging renewed Hessendata powers (2024)
EFF & civil‑society coverage of expanding biometric surveillance (S, P, W)
Despite existing court limits, German authorities are pushing new plans for AI‑enabled biometric surveillance, including large‑scale image scraping and expanded facial recognition. Critics argue that Germany's historical sensitivity to surveillance is being eroded by weak guardrails that fail to protect civil liberties against these expansive proposals, and warn that mining all law‑enforcement data risks turning victims, witnesses, and complainants into permanent subjects of predictive policing, undermining fundamental rights and public trust.
EFF and civil society coverage of expanding biometric surveillance in Germany
Freedom House – Germany country reports (2023–2025) (S, W)
Freedom House reports find that while German courts consistently protect digital rights, national surveillance laws still grant law-enforcement and intelligence agencies extensive powers. The reports single out predictive-policing tools and broad data-retention practices as persistent concerns for privacy and civil liberties, and situate Germany's use of AI-enabled surveillance within a global pattern of shrinking civic space both online and offline.
Freedom House, Germany country reports on digital rights and surveillance (2023–2025)