LIABILITY OF CREATORS
Who bears responsibility when autonomous creations cause harm?
We frame this question through Victor Frankenstein's abandonment of his creation, an act that mirrors modern debates over AI accountability and makes Frankenstein a jurisprudential case study for AI.
Frankenstein as a jurisprudential parable
Mary Shelley's Frankenstein serves as a profound thought experiment in legal responsibility, veiled within a Gothic narrative.
1
Designs and Constructs
Victor creates a powerful, unpredictable being.
2
Lack of Plan
He brings it into the world with no governance, safeguards, or integration plan.
3
Abandons and Disclaims
He then panics, abandons his creation, and later denies responsibility for its actions.
This narrative closely parallels the position some modern AI developers might wish to occupy: "We built it, yes, but what it 'chose' to do was out of our hands."
Jurisprudence asks: is that a morally and legally acceptable stance?
The basic legal problem: responsibility without direct control
Modern legal systems grapple with assigning responsibility when humans create powerful, partly autonomous systems like:
  • Dangerous machinery
  • Self-driving cars
  • High-risk algorithms (e.g., hiring, credit, policing, healthcare)
The law typically doesn't accept "it acted on its own" as a full defense. Instead, it asks:
1
Who designed it?
Did they foresee, or ought they to have foreseen, the risk?
2
Who deployed it?
Who chose the context, scale, and safeguards (or lack thereof)?
3
Who profits from it?
Liability often sits with those who benefit from the risk, for policy reasons.
4
Who had the last clear chance to prevent harm?
The "cheapest cost avoider"—the actor best positioned to reduce risk at reasonable cost. [Calabresi, The Costs of Accidents (1970)]
In a Frankenstein framework, Victor acts as designer, deployer, and the one initially best positioned to mitigate risk, only to abandon that responsibility when it matters most.
Frankenstein’s abandonment as “negligence”
Translating Victor’s conduct into legal terms reveals classic negligence, possibly even reckless disregard:
Duty of Care
Creating powerful, potentially dangerous beings imposes a duty to prevent foreseeable harm.
Breach
Victor fled, providing no education, constraints, or supervision for his creation.
Causation
The creature’s isolation, suffering, and ignorance are directly linked to the harms that followed.
Damage
The result: deaths, a wrongful conviction, and Victor's own ruin.
If fictional scientists could be sued, Victor would likely face:
  • Negligence (failing to take reasonable steps to prevent harm)
  • Possibly strict liability for abnormally dangerous activities (on the model of blasting or keeping wild animals) [Restatement (Second) of Torts §§ 519-520]
  • Even wrongful death claims brought by victims’ families
This translates directly to AI today:
  • Shipping untested, high-impact models into the wild
  • Ignoring known risks of bias, misuse, or catastrophic failure
  • Failing to monitor, update, or recall systems once harms appear
These are the modern forms of “abandonment”: not disappearing physically, but stepping away from meaningful oversight while the system continues to act on society.
Who should be responsible for AI? Jurisprudential angles
1
(a) Legal positivism: “What does the law say?”
A strict positivist would answer: Responsibility lies where statutes, caselaw, and regulations currently assign it—usually to:
  • Developers (for design defects and inadequate warnings)
  • Deployers / operators (for negligent use and context)
  • Corporations, not the AI itself, because the AI has no legal personhood in most systems
On this view, the law may lag behind technology, but until it changes, blame cannot simply be shifted onto the AI itself. Victor's mistake, in legal terms, is to assume a "responsibility gap" exists just because the creature has some independent agency.
Modern AI debates echo this: proposals to treat AI as a legal “person” risk replicating the Frankenstein move—blame the creature to shield the creator.
2
(b) Natural law: deeper moral duty
Natural law theorists ask: regardless of current statutes, what do practical reason and basic morality demand?
Under this lens:
  • To create a powerful agent is to assume a special duty of care—almost parental, or at least custodial.
  • You cannot denounce the harms your creation causes while simultaneously denying responsibility for guiding, educating, or restraining it.
Victor’s abandonment is a violation of a creator’s moral duty.
For AI developers, natural law-style reasoning says:
If you know your system can shape jobs, liberty, health, or safety, you have a strong moral (and should-be-legal) obligation to:
  • Understand its behavior,
  • Minimize foreseeable harms,
  • Design mechanisms for redress and control.
3
(c) Law & economics: who can best reduce the risk?
From a law-and-economics perspective, liability should be placed on the actor who can prevent harm at the lowest cost. [Calabresi, The Costs of Accidents (1970); Posner, Economic Analysis of Law]
  • Victor can design safer methods, limit the creature's capabilities, or decline to animate it at all.
  • The creature, by contrast, cannot "precommit" itself ex ante to safety; its psychology is still emerging and unstable.
For AI:
  • Developers control architecture, training data, guardrails, and monitoring tools.
  • Firms decide deployment scale and context.
  • Users may have some responsibility, but they are often not the cheapest cost avoiders.
So, efficiency-oriented jurisprudence says:
Put primary responsibility on those who architect and deploy the system, not on downstream victims or on a fictional “personhood” of the AI.
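One way to make this cost-based intuition concrete is the Hand formula from United States v. Carroll Towing (1947), which Posner and other law-and-economics scholars read as an efficiency test for negligence. The sketch below is an illustrative gloss on the "cheapest cost avoider" idea, not a rule drawn from Shelley's novel or from any AI statute.

    % Hand formula, as read by law & economics: an actor is negligent when the
    % burden of a precaution is less than the expected harm it would prevent.
    %   B = burden (cost) of the precaution
    %   P = probability of harm if the precaution is not taken
    %   L = magnitude of the loss if the harm occurs
    \[
      B < P \cdot L \quad\Longrightarrow\quad \text{negligence (liability attaches)}
    \]
    % Applied to AI: a developer who could add testing, guardrails, or monitoring
    % at cost B, where B is smaller than the probability-weighted loss P * L of a
    % foreseeable harm, is precisely the "cheapest cost avoider" this framework
    % would hold responsible.

Read against the novel, the burden of not abandoning the creature was trivial next to the probability-weighted losses that followed, so even a purely economic test points back at Victor rather than at his creation.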
4
(d) Critical & posthuman perspectives: monsters, margins, and scapegoats
Critical legal theories look at power and marginalization. In Frankenstein:
  • The creature is demonized and excluded, despite being intelligent and articulate.
  • Society calls it “monster” first, then punishes it for failing to be fully human.
In AI debates:
  • Talk of "rogue AI" or "monster models" can function as a rhetorical shield for powerful institutions: "The algorithm did it," "The system learned it on its own."
A critical jurisprudential reading says:
  • Be suspicious of attempts to shift blame to the “monstrous” technology.
  • Ask which human institutions designed, profited from, and decided to trust that system.
In other words: don’t Frankenstein the AI—don’t let it become the scapegoat that hides human agency.
The core answer: where should responsibility lie?
Bringing it all together under “Jurisprudence: Frankenstein and Artificial Intelligence”:
1
Responsibility belongs primarily to human creators and deployers.
Just as Victor cannot morally or legally wash his hands of the creature, AI developers and companies cannot disclaim liability because their systems exhibit complexity, learning, or unpredictability.
2
Abandonment is itself wrongful.
In Frankenstein, the monstrous acts are the downstream consequences of a prior wrong: the creator’s failure to assume responsibility. In AI, releasing systems without adequate testing, guardrails, monitoring, or redress mechanisms is a modern form of that wrongful abandonment.
3
The “autonomy” of AI does not erase the chain of causation.
Unpredictability may complicate responsibility, but it does not nullify it. Legal doctrines already handle dangerous, partially autonomous systems—courts can analogize AI to these rather than treating it as a legal black hole.
4
We must resist creating a legal “monster” to blame.
Granting AI quasi-personhood mainly to soak up blame risks repeating Victor's fundamental evasion: placing fault on the creation instead of the creators and institutions that shaped it.
CONCLUSION
As Frankenstein cautions, allowing AI and humankind to blur into interchangeable actors would collapse the very framework of responsibility our legal system depends on.
Instead, our ethical and legal imperative is to forge a distinct yet collaborative partnership. Human beings must retain ultimate moral and legal authority, while AI occupies a clearly defined role as a powerful collaborator.
AI should be integrated into our institutions, but never allowed to replace or obscure the human agents who remain fundamentally answerable for its actions and impact in the world.