COMPARATIVE SYNTHESIS
LIABILITY
Read alongside contemporary debates about artificial intelligence, Frankenstein (Shelley, 1818) functions as an uncannily prescient case study in the law of autonomous systems. The novel offers a template for how legal systems must adapt when new forms of agency and intelligence exceed established categories: Victor’s creature stands for technologies whose actions are unpredictable and lie beyond their makers’ full control.
Core Themes Mirroring AI Challenges:
  • Creation without Governance: The absence of regulatory frameworks for new inventions.
  • Monsters Through Exclusion: How societal rejection and lack of integration lead to negative outcomes.
  • Displacement of Blame: The difficulty in assigning responsibility when autonomous systems cause harm.
These narrative elements directly reflect challenges courts and regulators face with AI today.
REPETITION OF PATTERNS
Three motifs in the novel (Shelley, 1818) reappear in today’s AI discourse:
Creation without Governance
Victor's single-minded pursuit of a technical breakthrough without a plan for oversight mirrors AI development cultures that prioritize capability and scale over safety, testing, and accountability.
Othering and Monstrosity
The creature is labeled a monster before it acts; AI is often anthropomorphized as a “rogue” or “superhuman” agent. Both framings emphasize alterity and unpredictability.
Displacement of Blame
Victor speaks as if he is persecuted by his own work. Modern institutions do something similar when they say “the algorithm decided,” erasing human authorship.
These continuities allow Frankenstein (Shelley, 1818) to serve as a literary lens for evaluating modern jurisprudential approaches to AI.
POSITIVISM
Liability in Shelley's Novel
A positivist approach, grounded in existing legal rules, would hold Victor liable for the harms caused by his creation, likely under negligence or strict-liability theories. The creature would not be recognized as a responsible legal person, but rather as a dangerous product or experimental device.
Implications for AI
Similarly, current AI law places responsibility on developers, deployers, and corporate entities, not the AI system itself. Proposals to grant AI legal personhood risk mirroring Victor's evasion, creating a new "someone" to blame and diluting accountability from the human and institutional actors responsible for its design, training, and deployment.

NATURAL LAW

Inherent Obligations
Natural law emphasizes duties stemming from the inherent nature of an act. For Victor Frankenstein, bringing a powerful, sentient creature to life incurred heightened obligations of guidance and care. Creation, in this view, is never morally neutral; it carries built-in responsibilities that Victor flagrantly ignored.
AI Ethics Parallel
This perspective parallels modern AI development directly. Designing systems that control access to housing, employment, credit, or liberty is not merely an engineering task but a profound moral decision affecting human lives. Natural-law thinking demands that those who wield such power bear a special duty to ensure fairness, transparency, and safety. Shelley's narrative serves as a cautionary tale (Shelley, 1818), illustrating the consequences when creators treat innovation as self-sufficient, detached from ongoing moral responsibility.

ECONOMICS
Law-and-economics analysis assigns liability to the “cheapest cost avoider”: the party best placed to prevent harm at the lowest cost. In Frankenstein (Shelley, 1818), Victor alone has the information, tools, and opportunity to avert catastrophe: he can abstain from the experiment, refine its design, or intervene decisively after the creation.
This principle extends directly to AI systems, where developers and deploying firms typically occupy that role. They possess the capacity to implement safer architectures, robust guardrails, rigorous testing, and judicious limitations on high-risk applications. End users and victims, typically lacking the requisite knowledge or systemic leverage, cannot enforce these safeguards. Thus, the novel reinforces the contemporary view that liability should rest primarily with those upstream in the design and deployment chain.
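The intuition can be stated schematically in law-and-economics terms. The following formalization is an illustrative gloss on Calabresi's cheapest-cost-avoider idea, not a rule drawn from the novel or from any case. Let $\mathcal{P}$ be the set of parties (creator, deployer, user, victim), $c_i$ the cost to party $i$ of an effective precaution, $p$ the probability of harm absent precaution, and $L$ its magnitude. Liability is efficiently placed on

\[
i^{*} \;=\; \arg\min_{i \in \mathcal{P}} c_i, \qquad \text{provided } c_{i^{*}} < p \cdot L,
\]

so that the burden falls on whoever can avert the loss most cheaply, and only when precaution is cost-justified (the familiar Hand-formula intuition, $B < pL$). On the novel's facts, Victor's precaution cost is plainly the lowest: no other character could have declined the experiment or contained the creature.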
SCAPEGOATS
The Mechanism of Othering
Critical and posthuman approaches reveal how law and culture construct "others" to maintain power structures. Shelley's monster serves as a prime example (Shelley, 1818): his appearance immediately marks him as an outsider, denying him protection and credibility. He becomes a convenient explanation for societal anxieties and violence, diverting blame from their true human origins.
AI and Modern Scapegoating
In contemporary AI debates, phrases like "rogue AI" or "monster models" function similarly. This rhetoric simplifies complex sociotechnical systems into singular, threatening entities. It encourages fear of the technology itself, rather than critical examination of the institutions, incentives, developers, corporate structures, and regulatory failures that truly shape AI's risks. As Frankenstein warns (Shelley, 1818), such scapegoating leaves the genuine sources of danger unaddressed.
CO-EXISTENCE
Makers and the Made
Law cannot treat AI and humans as interchangeable agents. Responsibility, in both fiction and reality, hinges on a clear distinction between creators and their creations.
Victor Frankenstein's tragedy began when he viewed his creature as fully independent, imagining its actions as separate from his own choices. This detached perspective let him disclaim accountability.
The Responsibility Gap
In AI, a similar temptation arises: to speak of systems having their own will, intentions, or "decisions," detached from human design. This blurring is jurisprudentially dangerous.
It creates a "responsibility gap": harms are real, but no one is clearly answerable. Frankenstein dramatizes this (Shelley, 1818), showing how blaming the monster distracts from the causal roles of Victor and of the surrounding institutions.
PERSONHOOD
Modern law distinguishes between:
Natural Persons (Humans)
Rights-holders recognized because they can suffer, have interests, and be wronged.
Artificial or Legal Persons (Corporations, Associations)
Entities recognized mainly to facilitate transactions, hold property, and centralize liability.
Corporations have no consciousness or inner life, but they are treated as “persons” for instrumental reasons. Frankenstein’s creature is the opposite (Shelley, 1818): he has a rich inner life and moral claims, yet receives no legal status at all. This inversion reveals a tension in our concept of personhood: law can extend personhood to entities when doing so is convenient, while ignoring beings who may deserve it in a deeper sense.
Advanced AI sits at the crossroads of these models. For now, AI is treated like a tool. But as systems become more autonomous and integrated into decision‑making, pressure builds to decide whether they will remain objects, become corporate‑style artificial persons, or be recognized—if they ever come to resemble the creature—as moral persons in their own right.
Different legal theories offer different ways to synthesize the novel with AI.
Legal Positivism
Emphasizes that personhood is whatever the legal system says it is. From this angle, both the creature and any AI system lack legal status until institutions grant it. The novel then functions as a critique of law’s silence: its refusal to recognize an obviously morally significant being exposes the limits of a purely rule‑based conception of personhood.
Natural-Law and Rights-Based Theories
Focus on moral thresholds. They frame the creature—and any future AI that shares his capacities—as entities that already possess rights by virtue of rationality, autonomy, and the capacity for suffering. Law’s task is to catch up. Here, Frankenstein becomes a warning (Shelley, 1818) about what happens when legal recognition lags behind moral reality.
Utilitarian and Consequentialist Views
Evaluate arrangements by their outcomes. They prompt questions such as: Would granting AI some form of personhood reduce harm by clarifying responsibility? Or would it increase harm by letting human actors hide behind AI entities? Frankenstein offers a vivid scenario of what happens when a powerful being is created without any corresponding framework for responsibilities and constraints: the result is suffering and instability for both creator and creation.
DUTIES
When Frankenstein (Shelley, 1818) is placed alongside contemporary AI debates, the parallels are striking. AI, like Shelley’s creature, is a human creation that strains existing categories of responsibility. Both raise questions about the duties creators owe to their creations and to society, and the novel vividly dramatizes the consequences when those duties are ignored or denied. The comparison suggests that co‑existence within unchanged frameworks is inadequate; what is needed instead is a cohesive relationship that treats humans and AI as separate but coordinated partners.
Key Parallels with Frankenstein
1. Creation without Responsibility
Victor's creation, like powerful AI systems, is often deployed without sufficient testing or accountability. When AI causes harm, it is tempting to blame "the algorithm," but Shelley shows that this is a moral evasion. AI's behavior is traceable to human design choices, data, and institutional incentives. Any governance framework must anchor responsibility clearly in human and institutional actors, preventing creators from offloading liability.
2. Otherness and Reactions
The novel’s treatment of the creature's "otherness" mirrors contemporary reactions to AI. Public discourse swings between alarmist "monster" narratives and utopian ideals, obscuring AI's embeddedness in social contexts. Shelley illustrates how fear of the unfamiliar can lead to unjust measures, a warning against overbroad bans or symbolic regulation of AI that ignores its real-world application.
3. AI Personhood Debates
Debates around AI personhood echo the creature’s ambiguous legal status. While modern law recognizes non‑human entities like corporations, granting AI a similar form of personhood could reproduce Victor's error: allowing creators to shift blame to an "AI entity." Therefore, any legal recognition of AI must strengthen, not weaken, the traceability of responsibility back to human actors and institutions.
A Framework for Partnership
Rejecting simple co‑existence, this framework proposes a cohesive, separate‑but‑equal partnership. Humans retain full moral personhood and fundamental rights, remaining the ultimate duty‑bearers. AI systems are treated as structurally distinct, powerful partners, but always mediated by human norms and subject to defined boundaries. Risk‑based AI regulation, like the EU's AI Act, aligns with this approach, classifying systems by risk and imposing obligations such as transparency and human oversight.
Procedural innovations like algorithmic due process—requiring notification, explanation, and human review—address the pathologies Frankenstein dramatizes (Shelley, 1818), ensuring that AI-mediated decisions remain contestable. The lesson is clear: treating radical new creations within old categories leads to injustice. We must build a structured relationship where humans and AI operate in a coordinated framework, defining duties, capacities, and limits to align innovation with justice.
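To make these mechanisms concrete, the sketch below encodes a risk-tiered obligation scheme and an algorithmic due-process check. The tier names loosely track the EU AI Act's risk categories, but the data structures, field names, and obligation lists are illustrative assumptions for this sketch, not the Act's text.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # Broad tiers in the spirit of the EU AI Act's risk-based approach.
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations before and after deployment"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# Illustrative obligation mapping (assumed for this sketch, not statutory text).
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.HIGH: ["risk management", "logging", "human oversight", "transparency"],
    RiskTier.LIMITED: ["disclose that the user is interacting with AI"],
    RiskTier.MINIMAL: [],
}

@dataclass
class AIDecision:
    """An AI-mediated decision wrapped in algorithmic due process."""
    tier: RiskTier
    subject_notified: bool      # was the affected person told AI was used?
    explanation: str            # intelligible reasons for the outcome
    human_reviewer: str | None  # the answerable human, if any

    def is_contestable(self) -> bool:
        # The Frankenstein lesson: no high-stakes decision without an answerable human.
        if self.tier is RiskTier.UNACCEPTABLE:
            return False  # prohibited systems should never reach a decision at all
        return self.subject_notified and bool(self.explanation) and self.human_reviewer is not None

# Usage: a high-risk credit decision with no human reviewer fails the check.
decision = AIDecision(RiskTier.HIGH, subject_notified=True,
                      explanation="income below threshold", human_reviewer=None)
assert not decision.is_contestable()
print("owed:", OBLIGATIONS[decision.tier])  # obligations attach by tier
```

The design choice mirrors the argument above: personhood questions aside, every decision object carries a pointer back to a human and an institution, so responsibility cannot be offloaded onto "the algorithm."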
LAW CASES
The following case summaries and analyses highlight key legal developments and interpretations surrounding AI across different jurisdictions, revealing both convergences and divergences in approach.
1. Shared Baseline: Human Centrality
Across jurisdictions and subject areas, one strong convergence stands out:
  • Authorship and inventorship are reserved to humans: Thaler in the US (Thaler v. Vidal, 43 F.4th 1207, Fed. Cir. 2022), Thaler in the UK ([2023] UKSC 49), and the SCJN decision in Mexico all hold that under current statutes, AI cannot be an author or inventor.
  • Decision-making is also formally human: Loomis, the Mexican judicial AI decisions, and the French surveillance rulings all insist that judges and public authorities must retain final, accountable control.
Comparative takeaway: These systems treat AI as an instrument that humans use, not as a bearer of rights or final authority.
2. Training Data: Diverging Doctrinal Paths
The sharpest differences emerge around training on copyrighted material.
a. United States – Fair Use Battleground
  • Thomson Reuters v. Ross (D. Del. 2025): using Westlaw headnotes to train a competing legal research tool is not fair use.
  • NYT v. Microsoft/OpenAI (S.D.N.Y., filed 2023): claims over training and “regurgitation” survive early motions, but fair use is not fully resolved.
  • Andersen / Doe v. GitHub (N.D. Cal. 2023): courts prune overbroad claims but leave room for targeted copyright, DMCA, and license theories.
US law thus uses fair use and substantial similarity as the main analytic tools, yielding fact-specific, case-by-case outcomes.
b. EU/Germany – TDM Exceptions and Exploitation Rights
  • Kneschke v. LAION (LG Hamburg, 2024): the creation of a large image dataset by a non-profit research organisation is sheltered by text and data mining exceptions.
  • GEMA v. OpenAI (LG München I, 2025): the same legal system draws a hard line when models memorise and reproduce lyrics, rejecting TDM as a defence at the output stage.
This model separates:
  • training/dataset building for research (more tolerated), from
  • commercial memorisation and reuse (more restricted).
c. United Kingdom – The Model Is Not a Copy
  • Getty v. Stability AI (High Court, 2025): the High Court holds that Stable Diffusion’s weights are not “infringing copies” of Getty’s images, undermining the “model = infringing copy” theory. The ruling leaves open the broader legality of training (especially where training occurred outside the UK), but blocks one aggressive rights-holder argument.
Comparative synthesis:
  • US: fights are framed as fair use vs. infringement, case by case.
  • EU/Germany: fights are about the scope of TDM exceptions vs. exploitation rights.
  • UK: first clears away the “model is a copy” argument and may later address training more directly.
3. Outputs and Memorisation: Emerging Consensus
Despite differences on training, there is more convergence on outputs:
  • Near-verbatim outputs (lyrics, news articles, code snippets, watermarked images) are treated as strong evidence of infringement or trademark violation.
  • Transformative or summarised outputs (factual summaries, stylistic imitations without close copying) are more likely to be tolerated, at least under current cases.
Examples:
  • GEMA (lyrics) and NYT (articles) both treat recognisable regurgitation as problematic.
  • Doe v. GitHub focuses on Copilot outputting OSS code without required notices.
  • Getty focuses on watermarked outputs as trademark problems.
Comparative takeaway: Whatever the theory on training, courts in all systems become much more aligned once a model starts producing outputs that look like direct copies; a crude illustration of such a near-verbatim check appears below.
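To make the near-verbatim intuition concrete, here is a minimal sketch of the kind of overlap heuristic a litigant's expert might run. The n-gram size and word-level tokenisation are illustrative assumptions, not a test endorsed by any of these courts.

```python
def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Word-level n-grams of a text (case-folded)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(source: str, output: str, n: int = 8) -> float:
    """Fraction of the source's n-grams reproduced verbatim in the output.

    High values resemble the regurgitation in the GEMA and NYT fact patterns;
    low values are consistent with summarisation or stylistic imitation.
    """
    src = ngrams(source, n)
    return len(src & ngrams(output, n)) / len(src) if src else 0.0

# Usage: a contiguous lift of the source scores 1.0; a loose paraphrase scores near 0.
source = "the quick brown fox jumps over the lazy dog and runs away fast"
output = "witnesses say the quick brown fox jumps over the lazy dog and runs away fast today"
print(f"overlap: {verbatim_overlap(source, output):.2f}")  # -> overlap: 1.00
```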
4. AI in Governance: Human in the Loop and Proportionality
Public law decisions show a fairly coherent cross-border pattern.
a. Sentencing and Courts
  • Loomis (881 N.W.2d 749, Wis. 2016): COMPAS may be used, but only as a non-determinative input accompanied by clear warnings; sentencing must remain judge-driven (see the sketch after this list).
  • Mexican judicial AI precedents: AI tools can help calculate bond amounts and perform auxiliary tasks, but judges must understand, review, and justify results.
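A minimal sketch of the "advisory input, human decision" pattern that Loomis and the Mexican precedents describe; the class and field names are our own illustrative assumptions, not anything drawn from the opinions.

```python
from dataclasses import dataclass

@dataclass
class AdvisoryScore:
    value: float         # algorithmic risk estimate from a COMPAS-like tool
    warnings: list[str]  # mandated caveats (proprietary method, group-based data, ...)

@dataclass
class SentencingDecision:
    judge: str
    reasons: str                           # the judge's own justification, always required
    advisory: AdvisoryScore | None = None  # may inform, but never determine, the outcome

    def is_valid(self) -> bool:
        # Loomis's rule in miniature: the decision stands only on a judge's
        # independent reasoning, whether or not a score was consulted.
        return bool(self.judge) and bool(self.reasons.strip())

# Usage: a sentence resting on the score alone, with no judicial reasons, fails.
score = AdvisoryScore(value=7.2, warnings=["proprietary methodology", "group-based validation"])
assert not SentencingDecision(judge="(any judge)", reasons="", advisory=score).is_valid()
```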
b. Surveillance and Security
  • French Conseil constitutionnel decisions:
    o Strike down a procedural “rider” that would have extended Olympic AI video surveillance.
    o Partially censor the expansion of bulk algorithmic monitoring to organised crime, citing proportionality and privacy.
c. Elections and Synthetic Imagery
  • Mexican electoral decision (TEPJF): AI-generated images of children in political propaganda are treated under the same child-protection standards as real minors; guidelines must be updated to address AI-enabled political content.
Comparative synthesis:
All of these systems allow some AI use in sensitive domains but insist on:
  • human control and reasoning,
  • proportionality and necessity (especially for surveillance),
  • and the protection of vulnerable groups (children, defendants, citizens).
5. Structural Frames: IP vs. Fundamental Rights
A useful comparative lens is what kind of law is invoked:
  • IP/Copyright/Trademark cases:
    o Frame conflicts mainly as bilateral: creator vs. AI company.
    o Tools: fair use, TDM exceptions, substantial similarity, CMI removal, license terms, trademarks.
    o Focus: economic incentives, markets, and the permissible scope of reuse.
  • Public law/constitutional cases:
    o Frame conflicts as systemic: state vs. citizens, political actors vs. voters.
    o Tools: due process, proportionality, privacy, best interests of the child, human rights standards.
    o Focus: liberty, surveillance, democratic integrity, procedural fairness.
Synthesizing across both yields a two-level picture:
  • micro-level regulation of copying and compensation (IP), and
  • macro-level regulation of power and legitimacy (public law), both being redrawn around AI.
6. Open Questions and Likely Flashpoints
Finally, the synthesis highlights some unresolved issues:
  • Threshold for AI-assisted authorship
    Courts agree that AI alone cannot be an author or inventor, but they leave open how much human contribution is needed to claim rights over AI-assisted content.
  • Cross-border fragmentation
    A model’s lifecycle may be:
    o TDM-protected in the EU,
    o facing fair-use fights in the US,
    o and confronted with “model is not a copy” logic in the UK.
    This invites forum shopping and pressure for international harmonization.
  • Governance creep
    Olympic “experiments” and counter-terrorism tools risk becoming permanent; courts are currently constraining that drift, but the political and technological push to expand AI surveillance is ongoing.
  • Synthetic persons and symbolic harms
    The Mexican decision on AI-generated children suggests that law will have to grapple with realistic synthetic representations of protected groups, in which no real individual is depicted but collective or symbolic harm is still real.
SEPARATE BUT EQUAL
The synthesis that emerges from reading Frankenstein (Shelley, 1818) alongside AI debates is not that humans and AI should be treated as identical legal actors, nor that AI should be demonized as an enemy. Rather, it suggests the need for a cohesive relationship of separate-but-equal partnership:
Separate
  • Humans and AI occupy different legal and moral categories.
  • Humans remain the bearers of rights, duties, and ultimate accountability.
  • AI, however advanced, remains an artifact whose actions are attributable to those who built and deployed it.
Equal
  • Law must take the role of AI seriously—not as a trivial tool, but as an integral partner in decision‑making.
  • AI's design and operation demand robust regulation, oversight, and ethical scrutiny.
In this model, AI is neither granted full personhood nor reduced to a neutral instrument. It is recognized as a powerful collaborator whose existence intensifies human responsibility rather than diluting it. Frankenstein (Shelley, 1818) thus becomes more than a cautionary tale: it is a conceptual bridge between literature and law, urging us to design a legal order in which humans and AI do not simply “co‑exist,” but stand in a structured, accountable partnership that safeguards both innovation and justice.
When applied to the future of AI, Frankenstein suggests that imagining humans and advanced AI as simply “co‑existing” within a single, undifferentiated category of personhood is unstable.
Danger 1: Too Quick Assimilation
If law assimilates AI too quickly to human status, it risks ignoring crucial differences in embodiment, vulnerability, and control. A machine that can be replicated, modified, or shut down at will cannot be treated identically to a mortal, embodied human subject without distorting the meaning of key rights and duties.
Danger 2: Unyielding Denial
If law, on the other hand, insists on treating advanced AI only as property or tools, even when they begin to display traits analogous to the creature’s intelligence and capacity for suffering, it risks repeating Victor’s fundamental error: creating a morally significant being while denying that it can ever make legitimate claims.
The comparative lesson from Frankenstein (Shelley, 1818) is that both extremes are dangerous. Unreflective assimilation erases difference; unyielding denial erases moral status.
The synthesis that emerges is neither simple fusion nor strict exclusion, but what we might call separate but equal partnerships between humans and AI. In this framework:
Distinct Legal Personalities
Humans and advanced AI would be recognized as distinct forms of legal personality, each governed by a tailored set of rights, duties, and constraints.
Tailored Statuses
Human legal status would continue to rest on embodiment, vulnerability, and social dependency; AI status would address issues such as operational limits, replication, and system‑level risks.
Balanced Claims
Neither regime would be subordinated to the other as mere instrument: both would be treated as possessing legitimate claims that law must balance.
This is not “co‑existence” in a flat sense, where all persons are assumed to be the same. Instead, it is a cohesive relationship in which differences are acknowledged and structured rather than denied. Frankenstein (Shelley, 1818) shows what happens when a powerful non‑human being is forced into a world that has no conceptual or institutional room for it. The comparative synthesis with AI suggests that a just legal order must avoid repeating that pattern.
If future AI systems remain mere tools, the traditional framework of human responsibility will suffice. But if they begin to resemble Shelley’s creature in intelligence, autonomy, and vulnerability, law will need to move toward this model of separate but equal partnerships—distinct, coherent regimes of personhood standing in a carefully designed relationship—so that neither humankind nor its creations are left, like the creature, without a place in the moral and legal community.