HISTORICAL PARALLELS
Duty of care: Victor as the archetypal negligent AI developer
(a) The novel’s structure of duty
Victor Frankenstein occupies exactly the position modern law worries about: a highly skilled expert voluntarily creating a novel, high‑risk artefact with unknown externalities.
Key features that map onto modern duty‑of‑care analysis:
  • Creation of risk – Victor deliberately assembles a being of enormous strength and resilience and animates it without any meaningful risk assessment or institutional oversight.
  • Asymmetry of power and knowledge – Only Victor understands what he has built and how dangerous it might be. This creates a “special position” very much like the position of an AI lab that alone knows its model’s capabilities and failure modes.
  • Abandonment after deployment – The most damning part of the story, legally, is not the creation but the withdrawal of stewardship: he runs away the moment the creature opens its eyes. From a negligence perspective, this is an extreme breach of the ongoing duty to monitor, correct, and contain risks created by one’s own technology.
If you translated this into tort language, Victor:
  1. Voluntarily created a serious risk to others,
  2. Was uniquely placed to understand and mitigate it, and
  3. Walked away, leaving an unsupervised, un-socialized super‑human agent loose in the world.
That is the skeleton of a modern negligence claim.
(b) Modern AI duty of care: what Shelley foreshadows
Contemporary AI law is just now catching up to this structure.
  • Under ordinary negligence doctrine, a duty of care arises when you engage in conduct that creates a foreseeable risk of harm; that duty is breached if you fail to act with “reasonable care” in designing, developing, or deploying an AI system, and liability follows where harm results as a foreseeable consequence of the breach.
  • New AI‑specific laws increasingly codify that duty. Colorado’s AI Act (CAIA; Colorado Revised Statutes § 6-1-1701 et seq., 2024), for example, imposes an explicit general duty of “reasonable care” on both developers and deployers of high‑risk AI to protect consumers from “known or reasonably foreseeable risks of algorithmic discrimination,” with rebuttable presumptions tied to documentation, testing, and risk‑management practices.
  • The EU AI Act (Regulation (EU) 2024/1689, Article 9) likewise requires a risk‑management system that continuously identifies “known and reasonably foreseeable risks” and monitors the system after it is placed on the market.
In jurisprudential terms, Shelley is already staging a move from fault as “personal wickedness” to fault as “failure of responsible governance of artefacts you have set in motion.”
Victor is not malicious; he is negligent. That is precisely the concern in modern AI negligence scholarship and in proposals for strict or quasi‑strict liability for AI‑caused harms (Scherer, 2016; Calo, 2017).
(c) Dual duties: to the creation and to third parties
The novel also surfaces a dual‑track duty that modern AI law is just starting to articulate:
  1. Duty to the world – to prevent the system from harming third parties (Victor’s family and bystanders).
  2. Duty to the creation itself – to provide education, support, and a minimally just environment for the being you have brought into existence.
Victor breaches both: he abandons the creature (arguably triggering its later violence) and fails to warn or protect the public.
Modern AI law currently recognizes only the first duty in formal terms – duties toward people who might be affected by AI (e.g. anti‑discrimination, safety, data protection).
But the novel anticipates an emerging ethical (and potentially future legal) notion that developers also owe stewardship obligations toward highly autonomous systems themselves – to train, constrain, and maintain them appropriately. That’s jurisprudentially interesting even if current positive law still treats AI systems as “things”.
Foreseeability: from tragic hindsight to structured risk standards
(a) How Frankenstein stages foreseeability
Negligence hinges on what a reasonable person in the defendant’s position “could have foreseen”. The novel is obsessed with that exact question.
Shelley builds in clues that:
  • Victor knows he is doing something unprecedented and dangerous (“a new species would bless me as its creator” (Shelley, 1818, Ch. 4)).
  • The type of harm is foreseeably grave: he gives immense strength and speed to a being whose psychology and loyalties are completely unknown, then cuts all ties and refuses to observe what he has made.
  • Later, when the creature explicitly threatens, “I shall be with you on your wedding night” (Shelley, 1818, Ch. 20), Victor still fails to warn Elizabeth or take sensible precautions. At that point, the risk to specific individuals is not just foreseeable; it is announced.
From a doctrinal perspective, Victor tries to narrate his responsibility as tragic miscalculation, but Shelley gives the reader enough information to see that the danger to others was:
  • General‑type foreseeable at creation (a powerful, uncontrolled being could harm someone), and
  • Specific‑victim foreseeable later on (the family, the fiancée).
This is exactly how courts reason about foreseeability in negligence: the precise mechanism needn’t be predicted, but the general character of harm must be within the scope of what a reasonable actor should anticipate.
(b) Modern AI law’s foreseeability standards
Contemporary AI regulation is now writing foreseeability into black‑letter obligations:
  • Colorado’s AI Act and similar bills require developers and deployers to use reasonable care to avoid known or reasonably foreseeable algorithmic discrimination, and to disclose such risks once they become apparent via testing or credible reports (Colorado Revised Statutes § 6-1-1704, 2024).
  • The EU AI Act’s Article 9 directs providers of high‑risk AI to identify and analyse “known and reasonably foreseeable risks” during intended use and “reasonably foreseeable misuse,” and to update this analysis based on post‑market data (Regulation (EU) 2024/1689, Articles 9(2)(a) & 72).
Jurisprudentially, this does two important things that Shelley dramatizes narratively:
  1. Foreseeability is an evolving standard
    Victor treats foreseeability as locked to the moment of creation (“I never dreamed…”). Modern AI law treats it as dynamic: what you should foresee changes as evidence accumulates. Post‑deployment monitoring duties in the EU and disclosure duties in Colorado formalise precisely what Victor refuses to do—learn from early incidents and adjust behaviour.
  2. Foreseeability is objective, not subjective
    Victor’s subjective shock becomes legally irrelevant once we adopt a “reasonable expert” standard: what risks should a competent scientist in his position have anticipated? Likewise, AI liability initiatives in Europe explicitly tie a presumption of causality to non‑compliance with defined duties of care (e.g. data quality, transparency, human oversight) and then ask whether it is “reasonably likely” that this fault influenced the harmful AI output (European Commission AI Liability Directive, COM(2022) 496 final, Article 4).
Shelley’s narrative thus previews a key jurisprudential shift: from seeing catastrophic tech harms as unforeseeable “acts of fate” to treating them as failures of institutionalized foresight.
(c) AI “black boxes” and Frankenstein’s convenient ignorance
A recurring argument today is that complex AI models are “black boxes”, so developers cannot foresee specific harmful decisions. Legislators respond by:
  • Emphasising type‑level harms (discrimination, safety, privacy violations) that are already widely documented, and
  • Lowering evidentiary burdens by presuming causality when developers have breached defined duties of care.
Victor is a kind of narrative prototype of the “we couldn’t possibly have known” defence. Shelley invites the reader to reject that excuse. Contemporary AI jurisprudence is starting to do the same.
Moral agency and legal personhood: the creature, AI, and “electronic persons”
(a) The creature as a proto‑agent
By any plausible philosophical test, the creature is a moral agent:
  • He reasons, deliberates, and plans.
  • He learns language and law by observing the De Lacey family and reading canonical texts.
  • He experiences moral emotions—shame, resentment, a desire for justice—and can give reasons for his actions.
Shelley constantly forces us to see him as capable of moral accountability and capable of being wronged. Yet within the story, he has no legal name, no status, no forum, and is treated alternately as an object of horror and as an outlaw, but never as a bearer of rights.
This maps onto the basic jurisprudential distinction between:
  • Moral agency – the capacity to act on reasons and bear moral responsibility.
  • Legal personhood – the status of being a subject of rights and duties in a legal system.
The creature plainly satisfies the first but is denied the second.
(b) AI and the “electronic personhood” debate
The Frankenstein figure has been explicitly taken up in legal scholarship as a symbol for anxieties around autonomous machines and liability.
Within positive law, a 2017 European Parliament report on civil law rules for robotics (European Parliament, 2017, ¶59(f)) floated the idea of recognising certain advanced robots as “electronic persons” with specific rights and obligations, explicitly evoking Frankenstein in its political rhetoric.
However, this “electronic personhood” idea has since been heavily criticised by scholars and advisory bodies as conceptually confused and potentially a way to shield human actors from liability (Open Letter to European Commission, 2018; European Group on Ethics, 2018). It has been dropped from subsequent AI liability proposals.
The current mainstream view in AI law is clear:
  • AI systems are not legal persons; they are objects or tools.
  • Responsibility is allocated to humans and organisations (developers, deployers, users) via doctrines of fault, product liability, and sometimes vicarious or enterprise liability.
Jurisprudentially, Frankenstein anticipates two deep problems in this debate:
  1. The scapegoat problem: The creature becomes a convenient repository of blame. Likewise, talking about “rogue AI” or “machine defendants” risks obscuring human choices in design, deployment, and governance, reducing accountability of powerful institutions.
  2. Recognition and standing: The creature’s tragedy turns on a mismatch: he is morally considerable but legally invisible. For AI, the direction is inverted: current AI systems are legally salient (heavily regulated) but arguably not moral agents.
Should law’s categories of “person” and “thing” evolve to recognize new, intermediate statuses for highly autonomous but non‑sentient systems, solely for the purpose of organising responsibility?
Many “electronic personhood” proposals effectively treat AI as a functional legal person (like a corporation), a fiction used to pool risk and simplify claims, not as a moral equal. Shelley’s creature reminds us that legal personhood is a tool of the legal system, not a mirror of metaphysics—and that we should be very wary of using new legal “persons” to push real human responsibility into the shadows.
(c) Responsibility as relational and distributed
The novel’s ultimate jurisprudential move is to depict responsibility not as a single arrow (from act to actor) but as a dense web involving:
  • Victor’s creative act,
  • His abandonment,
  • Society’s exclusion and violence toward the creature,
  • The creature’s own murderous choices.
Modern AI scholarship on “responsibility for intelligent artifacts” and AI‑enabled harms makes the same move: liability and responsibility are distributed across designers, trainers, data providers, deployers, and institutional users, with debates over whether to supplement negligence with strict liability or new enterprise‑risk regimes (Matthias, 2004; Santoni de Sio & van den Hoven, 2018).
Shelley’s 1818 narrative anticipates that challenge by refusing an easy villain: legally, Victor is negligent; morally, so is almost everyone else.
CONCLUSION
Duty of Care: Stewardship and Accountability
A highly expert actor builds a powerful autonomous system without external oversight, ignores existing moral warnings, and abandons monitoring and control after deployment. Modern AI law responds by articulating explicit duties of reasonable care, documentation, risk management, and post-market monitoring for developers and deployers – precisely the obligations Victor fails to discharge.
Foreseeability: Anticipating Risk
The harms (violence, social destabilisation) are tragically foreseeable at least in type. Shelley shows the catastrophic consequences of treating radical innovation as if its risks are beyond human foresight. Modern regimes like the EU AI Act and Colorado AI Act (Regulation (EU) 2024/1689, Article 9; Colorado Revised Statutes § 6-1-1704, 2024) embed “known and reasonably foreseeable risks” as central triggers of legal responsibility and ongoing governance duties.
Moral Agency & Legal Personhood: Allocating Status
The creature is a full moral agent denied legal standing; modern AI is (for now) the opposite—legally regulated but not morally self-directing. Debates about “electronic personhood” and “machine defendants” replay, in legal-technical form, the novel’s core question: who do we hold to account when artefacts act in ways we find horrifying but have ourselves prepared?
So in jurisprudential terms, Shelley's 1818 story is not just a “don’t build monsters” myth. It is an early, remarkably precise case study in AI governance:
  • It warns against innovation without stewardship.
  • It exposes the dangers of failing to imagine harms that are foreseeable in type.
  • And it forces us to see that the hardest part of regulating powerful artefacts is not building them, but deciding how law allocates responsibility and status once they begin to act in the world.