LAW CASES
UNITED STATES
Thaler v. Perlmutter, 130 F.4th 1039 (D.C. Cir. 2025)
1
Issue
Whether a work generated entirely by an AI system, with no asserted human creative contribution, is eligible for U.S. copyright registration, and whether the Copyright Office’s human‑authorship requirement is lawful.
2
Rule
Copyright protects “original works of authorship.” 17 U.S.C. § 102(a). See Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53 (1884); Feist Publications, Inc. v. Rural Tel. Serv. Co., 499 U.S. 340 (1991).
Historically, “author” under the Copyright Act and the Constitution has been interpreted to mean a human author.
The Copyright Office may refuse registration where a work lacks human authorship, consistent with this requirement.
3
Application
Thaler sought to register an AI-generated image, listing only his AI system as “author” and claiming rights via ownership of the machine. The D.C. Circuit looked to the statutory text, constitutional history, and precedent and found a long‑standing human-authorship requirement. Because Thaler never alleged any specific human creative contribution, his work fell outside the statute’s scope. Work‑for‑hire theories failed because there was no underlying human-authored work or qualifying “employee.” The court held the Office’s practice of refusing purely machine-generated works was reasonable and not arbitrary or unconstitutional.
4
Conclusion
The court affirmed summary judgment for the Copyright Office: a work created solely by AI, with no human author, is not registrable; “author” under the U.S. Copyright Act means a human being.
The New York Times Co. v. Microsoft & OpenAI (S.D.N.Y. Apr. 4, 2025)
1
Issue
At the pleading stage, which claims could proceed against Microsoft/OpenAI for: (1) training on news content, (2) “regurgitating” articles, (3) summarizing (“abridging”) articles, (4) removing copyright management information (CMI), and (5) related state-law and trademark claims?
2
Rule
  • Twombly/Iqbal plausibility standard for Rule 12(b)(6).
  • Copyright infringement under 17 U.S.C. § 501 requires copying of protected expression and substantial similarity.
  • DMCA § 1202(b)(1), (b)(3) prohibit removal/alteration and certain distributions of CMI.
  • Common-law misappropriation claims can be preempted by the Copyright Act.
3
Application
The court held that allegations of wholesale copying of Times and other outlets’ articles for training and “regurgitating” near‑verbatim outputs plausibly alleged direct and contributory infringement and could not be resolved as fair use on the pleadings. By contrast, “abridgments” alleged by the Center for Investigative Reporting—bullet‑point factual summaries with different wording, tone, and structure—were not substantially similar as a matter of law and were dismissed. The court dismissed common‑law unfair‑competition‑by‑misappropriation as preempted. On the DMCA, it trimmed § 1202(b)(3) theories but allowed certain § 1202(b)(1) claims (removal/alteration of CMI in outputs) to proceed. It also allowed trademark dilution claims in the Daily News action to go forward.
4
Conclusion
Core copyright and some DMCA and trademark claims survived; the court dismissed “abridgment”‑based copyright claims and certain DMCA/misappropriation theories, setting up later fights over fair use and damages.
Citation
The New York Times Co. v. Microsoft Corp., No. 1:23-cv-11195 (S.D.N.Y. Apr. 4, 2025) (Stein, J.)
Andersen v. Stability AI, Midjourney & DeviantArt (N.D. Cal. 2023– )
1
Issue
Whether artists stated viable claims that image‑generation models (e.g., Stable Diffusion, Midjourney) infringe their copyrights and other rights by using their works in training and producing allegedly similar outputs.
2
Rule
  • Direct copyright infringement requires ownership plus copying of protected expression.
  • Non‑copyright claims (right of publicity, unfair competition, etc.) cannot simply re‑package copyright claims and must satisfy their own elements.
  • Copyright does not protect artistic “style” as such.
3
Application
Judge Orrick held that allegations that Stability AI copied plaintiffs’ works into training datasets and could sometimes produce recognizable derivatives were enough to plausibly allege direct infringement against Stability AI (and certain claims against other defendants), so those copyright claims survived. But many non‑copyright theories—right of publicity, negligence, unjust enrichment, and broad competition claims—were dismissed as inadequately pleaded or preempted. To the extent claims were based only on copying “style” (rather than particular expressive works), they failed as a matter of copyright law.
4
Conclusion
Most of the “extra” claims were trimmed, but the core direct copyright infringement claims over training and outputs continue, making Andersen one of the key ongoing U.S. cases brought by artists against generative‑AI companies.
Citation
Andersen v. Stability AI Ltd., No. 3:23-cv-00201-WHO (N.D. Cal.) (ongoing)
Doe v. GitHub (Copilot) (N.D. Cal. 2022– )
1
Issue
Whether anonymous open‑source developers stated claims against GitHub, Microsoft, and OpenAI over Copilot’s training on GitHub repositories and output of code snippets without attribution, under copyright, DMCA, and contract/unfair competition theories.
2
Rule
  • Plaintiffs must allege specific violations of the Copyright Act (17 U.S.C.) and of DMCA § 1202, and must show standing and injury.
  • Breach-of‑license / contract claims require specific identification of license terms allegedly breached.
  • Many statutory and tort claims require particularized allegations of copying or removal of CMI.
3
Application
On the first motion to dismiss, the court dismissed numerous claims—including DMCA § 1202(a) and (b)(2) claims, tortious interference, fraud, Lanham Act false designation, negligence, and some unjust enrichment and CCPA claims—for lack of specific factual allegations or on preemption grounds. But it allowed key claims to proceed: (1) certain DMCA § 1202(b)(3) theories (distribution of works after removal of CMI), and (2) breach of open‑source licenses based on Copilot allegedly outputting licensed code without the required attribution and license terms. The surviving claims reflect the theory that Copilot reproduces code snippets in ways that strip CMI and violate license conditions, even though many other theories failed.
4
Conclusion
The case continues on narrower grounds—focused on DMCA and breach of OSS licenses—rather than as a broad attack on AI code‑assist tools.
Citation
Doe 1 v. GitHub, Inc., No. 4:22-cv-06823-JST (N.D. Cal.) (ongoing)
Thomson Reuters v. Ross Intelligence, Inc. (D. Del. Feb. 11, 2025)
1
Issue
Whether Ross’s use of Westlaw headnotes and editorial materials to train an AI‑powered legal research tool:
  1. Infringed Thomson Reuters’ copyrights; and
  2. Could be justified as fair use because copying occurred at an “intermediate” training step and outputs did not display the headnotes.
2
Rule
  • Copyright protects sufficiently original editorial material like headnotes and the Key Number System.
  • Fair use requires analysis of the four statutory factors, including the purpose and character of the use (whether it is transformative), the nature of the work, the amount used, and market harm.
  • Intermediate copying can be fair use in some software‑interoperability contexts, but that doctrine is limited.
3
Application
The court held that thousands of Westlaw headnotes were copyrightable and had been copied into “bulk memos” used to train Ross’s system. It then rejected Ross’s fair‑use defense as a matter of law: Ross’s use was commercial and aimed at building a competing legal research tool, not a different‑purpose transformative use, and any public‑interest arguments did not override the strong market‑substitution effect on the market for licensing headnotes as AI training data. The “intermediate copying” line of cases (e.g., Google v. Oracle) was distinguished because Ross did not need to copy headnotes to achieve interoperability; it simply used Thomson Reuters’ expressive editorial content to compete.
4
Conclusion
Partial summary judgment for Thomson Reuters: Ross infringed Thomson Reuters’ copyrights in thousands of headnotes, and Ross’s use was not fair use—one of the first merits rulings to reject an AI‑training fair‑use defense.
Citation
Thomson Reuters Enter. Ctr. GmbH v. Ross Intelligence Inc., No. 1:20-cv-00613-CFC (D. Del. Feb. 11, 2025)
State v. Loomis, 881 N.W.2d 749 (Wis. 2016)
1
Issue
Whether a sentencing court’s use of a proprietary risk‑assessment algorithm (COMPAS) violates due process, given its opacity and potential bias.
2
Rule
Due process at sentencing requires that:
  • A defendant not be sentenced based on inaccurate information.
  • The defendant have an opportunity to refute information relied on by the court.
  • Fundamental fairness be preserved, even when actuarial tools are used.
3
Application
Loomis argued that using COMPAS, a proprietary risk‑assessment tool developed by Northpointe (now Equivant), violated due process because its methodology and inputs were secret, so he could not challenge the algorithm’s inner workings, and because the tool allegedly built in gender and racial bias. The Wisconsin Supreme Court rejected a categorical bar on COMPAS but imposed safeguards: COMPAS scores may be considered but must not be determinative; courts must receive a written warning about the tool’s limitations and potential biases; and judges must base sentences on individualized reasoning and traditional factors. The inability to inspect the source code, standing alone, did not make the sentence unconstitutional, so long as COMPAS was used as only one piece of information.
4
Conclusion
Use of a proprietary risk‑assessment algorithm at sentencing is permissible but must be accompanied by cautionary instructions and cannot be the sole or primary basis for a sentence—an early, influential case on algorithmic transparency and fairness in criminal justice.
FRANCE
Conseil constitutionnel, Décision n° 2025‑878 DC (24 Apr. 2025)
1
Issue
Whether Parliament could, in a law on transport security, extend the experimental use of algorithmic analysis of video‑surveillance images (originally authorized for the Paris 2024 Olympics) beyond March 31, 2025, and whether that extension complied with constitutional limits on legislative procedure.
2
Rule
  • Under French constitutional law, legislative “riders” (cavaliers législatifs, i.e., provisions with no link, even an indirect one, to the subject of the bill) are unconstitutional.
  • Provisions must have a sufficient connection to the subject matter of the bill as introduced.
3
Application
The “JO 2024” law had set up a temporary, tightly framed experiment with algorithmic processing of video feeds for major events, ending March 31, 2025. A later law on transport security attempted simply to extend this experiment to March 1, 2027 under the same framework. The Conseil constitutionnel held that this extension had no sufficient link, even an indirect one, to the original provisions of the transport‑security bill, which concerned non‑biometric data tools for SNCF/RATP (e.g., responding to judicial requests), not large‑scale algorithmic surveillance of public space. As a legislative rider (cavalier), the extension was struck down.
4
Conclusion
The attempt to prolong Olympic‑era AI video‑surveillance in public space was struck down on procedural grounds; the experiment ended March 31, 2025, and any revival would require a dedicated law.
Citation
Conseil constitutionnel, Décision n° 2025-878 DC du 24 avril 2025, Loi relative aux transports
Conseil constitutionnel, Décision du 12 juin 2025 – “boîtes noires” and organised crime
1
Issue
Whether expanding algorithmic “black box” surveillance—bulk analysis of internet connection data originally justified by counter‑terrorism—to organised crime respects constitutional principles of privacy, proportionality, and freedom of communication.
2
Rule
  • The French Constitution and the Declaration of the Rights of Man protect the right to privacy and the secrecy of communications.
  • Any intrusive surveillance, especially algorithmic analysis of traffic data at scale, must be strictly necessary, proportionate, and precisely circumscribed.
3
Application
A new “narcotrafic” law extended the existing framework of algorithmic scanning of telecom and internet metadata (the “boîtes noires,” or “black boxes”) beyond terrorism to serious organised crime. The Conseil constitutionnel largely upheld the law but struck down or narrowed provisions where the extension of algorithmic surveillance was not sufficiently justified or proportionate to the new aims. In particular, the Conseil emphasized that bulk algorithmic monitoring of traffic patterns is a serious intrusion requiring strict tailoring, clear limits on the offenses covered, time limits, and robust oversight—constraints that parts of the new extension failed to meet.
4
Conclusion
The law on narcotrafficking was validated in its core, but the attempted broad extension of “black box” algorithmic surveillance to organised crime was partially censured, reinforcing a proportionality‑driven approach to algorithmic monitoring.
Citation
Conseil constitutionnel, Décision n° 2025-XXX DC du 12 juin 2025, Loi relative à la lutte contre le narcotrafic et la criminalité organisée
UNITED KINGDOM
Getty Images (US), Inc. v. Stability AI Ltd [2025] EWHC 2863 (Ch) (4 Nov. 2025)
1
Issue
  1. Whether the Stable Diffusion model (or its weights) is itself an “infringing copy” of Getty’s images, giving rise to secondary copyright liability in the UK.
  2. Whether AI‑generated images reproducing Getty‑style watermarks infringe Getty’s trademarks.
2
Rule
  • Under the Copyright, Designs and Patents Act 1988 (CDPA), secondary infringement requires dealings with an “infringing copy,” and an “article” can be intangible but must embody a copy of the work.
  • UK trademark law (Trade Marks Act 1994, s. 10) prohibits unauthorized use in the course of trade of identical or confusingly similar marks where that use affects origin functions or causes confusion.
3
Application
The High Court accepted that “articles” for secondary infringement can be intangible but held that Stable Diffusion’s model weights do not store or reproduce Getty’s images; they are the product of learning patterns from the training data, not “copies” of the works. Because the model itself contained no stored reproductions, it could not be an “infringing copy,” and providing access to the model did not constitute importation into the UK. Most copyright claims therefore failed (Getty had also abandoned its primary UK‑training theories for lack of evidence of UK‑based training). However, the court found limited trademark infringement where generated images bore identical or confusingly similar “Getty Images” watermarks in the UK; some uses fell under s. 10(1)/(2), though the evidence did not support findings of systemic infringement or dilution.
4
Conclusion
Getty largely lost: the court rejected the theory that the model itself is an infringing copy but found narrow, fact‑specific trademark infringement for watermark outputs—a major UK precedent on AI training and model status.
Thaler v. Comptroller‑General of Patents [2023] UKSC 49 (DABUS)
1
Issue
Whether an AI system (DABUS) can be named as “inventor” on a UK patent application and whether the machine’s owner can claim patent rights via ownership of the machine alone.
2
Rule
  • Under the Patents Act 1977, an “inventor” is the “actual deviser” of the invention and must be a natural person.
  • Section 7 provides a closed code for who may apply for and obtain a patent, all ultimately deriving from a human inventor.
  • Applications that fail to identify a human inventor are to be treated as withdrawn.
3
Application
Dr. Thaler filed applications naming DABUS as inventor, asserting that he owned any resulting inventions by virtue of owning the machine. The UKIPO and the lower courts rejected this. The Supreme Court agreed: reading sections 7 and 13 together, an “inventor” must be a natural person, and there is no rule of law by which intangible inventions created by a machine automatically belong to the machine’s owner (the doctrine of accession applies to tangible property). Because DABUS is not a person, and Thaler did not identify any human inventor or a lawful chain of title from such a person, the applications did not comply with the statutory requirements and were correctly treated as withdrawn.
4
Conclusion
Only humans can be inventors under current UK patent law; naming an AI as inventor and claiming rights merely as its owner is insufficient. The case leaves open how to treat AI‑assisted inventions where a human makes a substantive inventive contribution.
MEXICO
SCJN, Amparo Directo 6/2025 (Segunda Sala, 27 Aug. 2025)
1
Issue
Whether denying copyright registration for a work generated by an AI system—on the ground that Mexican law limits authorship to human beings—violates constitutional rights to equality, non‑discrimination, or Mexico’s international IP obligations.
2
Rule
  • Under the Federal Copyright Law (LFDA), a “work” must be an original, human intellectual creation; authors are natural persons, and moral rights are personal, inalienable, and non‑waivable.
  • International instruments like the Berne Convention and USMCA (T‑MEC) are applied through national law and presume human authors (“life of the author” term, etc.).
3
Application
A petitioner sought to register an AI‑generated digital work; INDAUTOR refused, reasoning that works must be human creations. The litigant claimed that this discrimination against AI‑generated works violated equality and failed to account for technological progress and foreign practice. The Supreme Court held that copyright’s link to human intellect, personality, and moral rights presupposes a human subject; AI systems lack legal personality and human attributes. Articles 3 and 12 of the LFDA, as interpreted to require human authorship, were deemed objective, reasonable, and consistent with treaties that implicitly assume human authors. The court rejected arguments that AI should be treated as an author or rights‑holder, or that excluding AI works discriminates against their owners.
4
Conclusion
The amparo was denied. In Mexico, purely AI‑generated material cannot be registered as a copyrighted work; authorship and moral rights are reserved to natural persons.
Citation
Suprema Corte de Justicia de la Nación, Segunda Sala, Amparo Directo 6/2025, 27 de agosto de 2025
7º Tribunal Colegiado en Materia Civil del Primer Circuito (2024)
1
Issue
Whether courts may rely on AI tools during judicial proceedings (specifically, to help calculate the amount of injunctive bonds/suspensiones), and under what conditions this use is compatible with due process and judicial independence.
2
Rule
  • Judges must personally exercise their decisional function and ensure due process; they cannot abdicate decision‑making to automated systems.
  • Auxiliary tools may assist but cannot replace human reasoning or the duty to give reasons.
3
Application
In a precedential thesis, the civil collegiate court addressed the use of AI systems to estimate the monetary amount of a guarantee (fianza) required to suspend the challenged act. The court recognized that AI can process data and propose figures more quickly, and it allowed such tools as auxiliary means. However, it stressed that the judge must critically assess AI-suggested figures, independently weigh the facts and legal criteria, and ultimately decide and justify the amount. Blind reliance on AI outputs would violate procedural guarantees.
4
Conclusion
Mexican courts may use AI tools to assist with tasks like calculating bond amounts, but only as non‑binding aids. Human judges must retain control, perform independent reasoning, and remain accountable for the final decision.
Citation
7º Tribunal Colegiado en Materia Civil del Primer Circuito, Tesis I.7o.C. [number], 2024
TEPJF, SUP‑REP‑0893/2024 (Sala Superior, 2024)
1
Issue
How electoral law protects the “best interests of the child” when political spots use the image of a child created by AI, and what this implies for INE guidelines on emerging technologies in campaigns.
2
Rule
  • The Mexican Constitution and electoral law prioritize the interés superior de la niñez (best interests of the child) in any use of children’s images in propaganda.
  • Electoral authorities must issue and adapt guidelines ensuring that minors are not exploited or harmed, including in new technological contexts.
3
Application
A complaint challenged TV and radio spots criticizing government health policy that used a depiction of a deceased child; the evidence indicated that the child’s image had been generated with AI rather than filmed from a real minor. The TEPJF confirmed that AI‑generated images representing children raise the same constitutional concerns as images of real minors. It upheld findings that other alleged violations (misuse of official broadcast airtime (pauta), calumny, and use of religious symbols) were unfounded, but agreed that the best‑interests‑of‑the‑child standard applied. The court ordered the electoral authority (INE) to adjust and clarify its guidelines to ensure that future political content, including AI‑created images of minors, respects child‑protection norms, and to incorporate emerging technologies into its regulatory framework.
4
Conclusion
The ruling extends child‑protection standards to AI‑generated depictions of children in electoral propaganda and instructs regulators to update rules to address AI‑enabled political content.
Citation
Tribunal Electoral del Poder Judicial de la Federación, Sala Superior, SUP-REP-0893/2024, 2024
GERMANY
LG Hamburg, 27 Sept. 2024, 308 O 130/23, Kneschke v. LAION
1
Issue
Whether LAION’s downloading and use of a photographer’s online images to build a massive image–text dataset for potential AI training infringed German copyright law, or was permitted under EU/German text‑and‑data‑mining (TDM) exceptions.
2
Rule
  • EU DSM Directive 2019/790 art. 3–4 and German UrhG § 60d permit TDM reproductions for scientific research by certain entities.
  • Reproductions within the scope of § 60d, done for TDM by qualified research organizations, are lawful even if datasets might later be used for AI training, including by commercial actors.
3
Application
LAION, a non‑profit, created an enormous dataset pairing image URLs with descriptive text by downloading images, including Kneschke’s, from the web and filtering them with software. The Hamburg court held that the downloading and analysis of the photographer’s images to build the dataset fell within § 60d’s scientific‑research TDM exception. The creation of a dataset as a basis for later training was itself regarded as “data mining” for scientific purposes, regardless of whether commercial firms could later use the dataset. The court emphasized that the dataset was made freely available for research, and it did not consider the website’s anti‑scraping reservation (stated in natural‑language text) decisive in excluding the statutory exception in this context.
4
Conclusion
The court dismissed the photographer’s claims, confirming that under German/EU law, certain data‑mining reproductions to create AI‑training datasets are lawful when done by qualified research organizations under TDM exceptions.
GEMA v. OpenAI, LG München I, 42 O 14139/24 (judgment 11 Nov. 2025)
1
Issue
  1. Whether the storage (“memorisation”) of protected song lyrics in a large language model’s parameters, and
  2. Whether the near‑verbatim reproduction (“regurgitation”) of those lyrics in ChatGPT outputs,
constitute copyright infringement under German law, and whether TDM or other exceptions justify these uses.
2
Rule
  • German UrhG § 16 (reproduction), § 19a (making available to the public), and § 23 (adaptations) protect lyrics.
  • TDM exception § 44b and related provisions permit certain preparatory reproductions but not uses that conflict with rights‑holders’ exploitation interests.
  • Storage in a system is a reproduction if the work can be made perceptible again.
3
Application
GEMA alleged that OpenAI’s models memorized and reproduced lyrics from multiple songs. The Munich court accepted that training can leave model parameters containing works in a way that allows them to be reconstructed via prompts. It held that memorisation in the model is a reproduction under § 16 because the model can output recognisable lyrics; regurgitation in responses is both a reproduction and a making available to the public under § 19a, and it can qualify as an adaptation under § 23 when the output is modified but still recognisable. The court rejected reliance on the TDM exceptions, reasoning that the models had not merely abstractly analyzed the lyrics but had effectively stored and reproduced them, intruding on exploitation interests; these were not merely preparatory copies. It also rejected the idea that rights‑holders had implicitly consented to such uses.
4
Conclusion
The court found OpenAI liable on core copyright claims (though not on personality‑rights theories) and awarded injunctive relief and damages—marking a major EU‑side decision on memorisation and regurgitation in LLMs, and standing in contrast to Kneschke v. LAION’s more generous view of training‑stage TDM.