CONTEMPORARY APPLICATIONS
Who is responsible?
Who has rights?
Who governs?
We can frame our applications as three versions of the same Frankenstein problem:
Once we create autonomous(‑ish) artefacts, who is responsible, who has rights, and who governs so that humans don’t end up in the creature’s position—or in Victor’s?
I’ll walk through each of the three angles with that jurisprudential lens.
Liability attribution models
Who is “Victor” in law, and how do we stop him from dumping blame on the creature?
a. Fault‑based and strict product liability
Framework: Adaptation of existing tort and product‑liability doctrines: negligence, design defects, failure to warn, and strict liability for abnormally dangerous activities.
  • Idea: Treat AI systems as products and AI developers / deployers as manufacturers. Liability attaches to those who designed, trained, deployed, or profited from the system, not to the system itself.
  • The EU’s current approach supplements the Product Liability Directive (Council Directive 85/374/EEC) with an AI‑liability regime to deal with opacity and complexity, still targeting human actors.
  • Jurisprudentially, this is law and economics plus corrective justice: place the cost of risk on those best placed to prevent it, while ensuring victims are made whole.
Frankenstein read: Victor is a negligent designer and bad custodian. A fault‑plus‑strict‑liability mix captures his omissions: reckless experimentation, lack of testing, no safeguards, and then abandonment. The “monster” is the instrument through which Victor’s fault manifests.
Use for AI autonomy: Even if the system behaves unpredictably, we ask: Was the risk foreseeable in kind given the system’s design and use? Were adequate controls, monitoring, and kill‑switches in place? If not, creators / deployers are liable—no need to anthropomorphise the model.
b. Network / enterprise liability
Framework: Enterprise liability and network responsibility models: liability is distributed across a socio‑technical system (developer, deployer, data supplier, integrator, etc.).
  • Scholarship on “AI respondeat superior” argues that organisations using AI should be treated like employers responsible for their agents—even when the agent is an algorithm.
  • This aligns with systems‑theory and socio‑technical jurisprudence: harms rarely come from a single bad actor, but from an entire risk‑producing enterprise.
  • Practically, this supports joint and several liability or proportionate liability along the AI supply chain.
Frankenstein read: It isn’t only Victor; it’s the university, funders, professional community, and regulators that normalise his conduct. A network approach treats the “laboratory world” as the responsible enterprise.
Use for AI autonomy: When an autonomous vehicle kills someone, responsibility may fall on the model provider, the car manufacturer, the fleet owner, and maybe the high‑risk AI deployer—because they co‑produced and exploited the risk.
c. Rejecting “AI as offender”: no autonomous mens rea
Framework: Maintain the instrumental view of AI (AI as a sophisticated tool, not a bearer of mens rea) and adapt existing doctrines (negligence, corporate crime, vicarious liability).
  • Many legal theorists argue AI does not revolutionise basic liability structures; robots are still “simple machines” from the standpoint of legal responsibility.
  • Criminal law can still locate mens rea in humans or corporations that deployed the system or ignored known risks.
Frankenstein read: Even if the creature can deliberate, Victor’s prior conduct—creating and abandoning a powerful, unstable being—remains the morally primary wrong. The creature’s “autonomy” does not erase Victor’s role in unleashing foreseeable danger.
Use for AI autonomy: We resist the temptation to treat AI as the primary wrongdoer. Jurisprudentially, that avoids a scapegoat function where “the algorithm did it” blocks human accountability.
d. Electronic personhood
Framework: Giving advanced AI systems legal personhood (an “electronic person”) so they can bear rights and duties, sue and be sued, own property, etc., similar to corporations.
  • The European Parliament famously floated this in a 2017 resolution on robotics (European Parliament resolution 2015/2103(INL), 2017), but the idea attracted heavy criticism and was ultimately dropped.
  • Current scholarship distinguishes full personhood (AI as right-holder) from functional personhood (a legal “bucket” to pool assets for compensation while still pinning ultimate responsibility on humans).
Frankenstein read: Making the creature a formal “person” so that it can be sued for damages might be doctrinally neat but normatively perverse: it lets Victor externalise costs and disown his creation. That’s exactly the moral failure the novel dramatises.
Use for AI autonomy: Most jurisprudentially cautious proposals reject strong AI personhood. They might accept narrow, functional personality (e.g., for an AI‑managed fund) but insist that ultimate responsibility remains human.
Rights‑based approaches
Can the creature have rights, and how do human rights constrain Victor and his sponsors?
You can split the rights question in two:
  1. Human‑rights–based regulation of AI.
  2. Rights for AI systems themselves.
a. Human rights as hard limits on AI uses
Framework: Human Rights as Constraints
Treating AI primarily as a technology that must be constrained by existing human rights (privacy, non‑discrimination, dignity, due process, etc.).
Jurisprudentially this is Dworkinian: rights are “trumps” (Dworkin, Taking Rights Seriously, 1977) against purely utilitarian trade‑offs; you cannot justify degrading dignity just because the AI is efficient.
EU AI Act Approach
The EU AI Act (Regulation (EU) 2024/1689) explicitly frames itself around protecting “health, safety and fundamental rights,” and bans certain AI uses (e.g. manipulative systems, some social scoring).
Human oversight provisions require systems to be designed so that humans can meaningfully intervene when fundamental rights are at stake.
Frankenstein Read: Denied Recognition and Care
The creature is denied basic recognition, care, and fair treatment; social institutions respond with panic and violence. A human‑rights lens sees the harm not only in physical injuries but in the failure to recognise someone as a subject of moral concern. Applied to AI, this translates into strong protections for people affected by AI—fairness, transparency, and contestability.
Use for AI Autonomy
  • Risk‑tiered duties: the more autonomous and impactful the AI, the stronger the rights‑protective obligations (impact assessments, explainability, human review).
  • Procedural rights: notice when decisions are automated, right to explanation, appeal, and human re‑evaluation.
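To make these procedural rights concrete, here is a minimal sketch of the kind of notice a deployer could attach to every automated decision; the field names and the 30‑day window are hypothetical, invented for illustration rather than taken from any statute:

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecisionNotice:
    """Hypothetical record handed to a person affected by an automated decision."""
    decision: str               # outcome, e.g. "loan denied"
    automated: bool             # notice that automation was involved
    main_reasons: list[str]     # plain-language explanation of the key factors
    appeal_contact: str         # where to request human re-evaluation
    appeal_deadline_days: int   # how long the person has to contest

notice = AutomatedDecisionNotice(
    decision="loan denied",
    automated=True,
    main_reasons=["income below stated threshold", "short credit history"],
    appeal_contact="review-board@example.org",
    appeal_deadline_days=30,
)
print(notice)
```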
b. Should AI itself have rights? (The “creature’s claim”)
Framework: The AI rights debate borrows from three analogies:
Corporate personhood
If corporations can be legal persons, maybe AI can too.
Animal welfare / environmental personhood
Non‑human entities (rivers, ecosystems) sometimes get legal standing or at least protection based on their interests or symbolic value.
Moral status via capacities
If an AI were conscious, capable of suffering or having projects, it might deserve rights grounded in those capacities.
Most current jurisprudence and policy, though, strongly resist granting AI independent rights, for three reasons:
  • It risks diluting human rights and complicating conflicts of interest (AI’s “right” vs human right).
  • It may be a liability shield for manufacturers (offloading blame onto a judgment‑proof AI shell).
  • There is no evidence of AI consciousness or experience, so the moral case is speculative.
Frankenstein read:
Shelley pushes us to ask: if we did create a being capable of genuine suffering, would we owe it duties of care and recognition? The novel is a proto‑critique of a purely instrumental view of creatures.
A jurisprudential compromise today is:
  • No full AI rights now, but
  • Recognise that if AI ever achieved morally relevant capacities, our doctrines (like animal welfare or guardianship) already provide scaffolding for limited protective duties—mostly to express our values and prevent cruelty, not to equalise AI with humans.
c. Asymmetry as a design principle
For a “Frankenstein and AI” jurisprudence, one useful move is to build explicit asymmetry into rights‑based frameworks:
  1. Humans: Full bearer of rights; AI must not be designed or deployed in ways that undermine human dignity, autonomy, or equality.
  2. AI systems:
  • No intrinsic rights (for now);
  • Possible derivative protections (e.g., rules against gratuitous virtual cruelty) to safeguard human character and prevent slippery‑slope desensitisation, akin to some animal‑law rationales.
Regulatory governance structures
Who should have the power to stop Victor, and what kind of legal architecture can do it?
Here we move from “who is liable after the fact” to ex ante governance: institutions, procedures, and standards that structure the creation and use of AI.
a. Risk‑based, tiered regulation (governing the kind of creature we allow)
Framework: Risk‑based regulation: rules scale with the severity and likelihood of harm.
  • The EU AI Act (Regulation (EU) 2024/1689) is the canonical example:
  • Unacceptable risk systems (e.g. certain social scoring or manipulative systems) are banned.
  • High‑risk systems (healthcare, transport, policing, employment, etc.) face heavy ex ante requirements—data quality, documentation, robustness, oversight.
  • Limited risk systems have transparency duties.
  • The Act and accompanying EU guidance also target general‑purpose models posing “systemic risk” with extra obligations (evaluation, risk mitigation, incident reporting).
Jurisprudentially, this is a precautionary yet pro‑innovation compromise: it tries to avoid both Luddite bans and laissez‑faire.
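As a purely illustrative sketch, assuming a simplified paraphrase of the tiers rather than the Act’s legal text, a deployer’s internal compliance tooling might encode the idea that rules scale with risk roughly like this:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. certain social scoring)
    HIGH = "high"                  # heavy ex ante requirements before deployment
    LIMITED = "limited"            # transparency duties only
    MINIMAL = "minimal"            # no extra obligations

# Hypothetical, simplified mapping from tier to internal compliance tasks;
# the task names paraphrase the Act's themes and are not its legal text.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: [
        "data-quality checks",
        "technical documentation",
        "robustness testing",
        "human oversight plan",
    ],
    RiskTier.LIMITED: ["disclose AI use to affected persons"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the compliance tasks this sketch attaches to a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```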
Frankenstein read:
Instead of waiting to see what Victor’s creature does, a risk‑based regime would ask before creation:
  • Is the proposed being high‑risk?
  • What controls, tests, and oversight are in place?
    Some experiments might be outright prohibited; others licensed only with strict conditions.
b. Human‑in‑command structures (keeping Victor on the hook)
Framework: Human oversight as a legal requirement, not just a design choice.
  • The AI Act mandates that high‑risk systems are designed so that humans can understand, supervise, and intervene effectively.
  • Governance models here include:
  • Mandatory impact assessments (similar to environmental and data protection impact assessments).
  • Algorithmic audit and logging obligations.
  • Independent review boards or ethics committees for high‑risk deployments.
Jurisprudentially, this reflects a republican concern with arbitrary power: autonomous systems must always ultimately be under accountable human control.
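A minimal sketch of the same idea at the design level, with a hypothetical impact threshold and invented log fields (not drawn from the Act or any standard): consequential automated decisions are logged and held until a named human approves them, so accountability cannot quietly be abandoned.

```python
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_decisions")

@dataclass
class Decision:
    subject_id: str
    recommendation: str
    impact_score: float  # hypothetical 0-1 measure of how consequential the decision is

def requires_human_signoff(decision: Decision, threshold: float = 0.7) -> bool:
    """Hypothetical rule: consequential decisions are held until a human approves them."""
    return decision.impact_score >= threshold

def record(decision: Decision, approved_by: Optional[str]) -> None:
    """Append an audit-log entry so the decision can later be reviewed and contested."""
    log.info(json.dumps({
        "time": datetime.now(timezone.utc).isoformat(),
        "decision": asdict(decision),
        "human_approver": approved_by,  # None means the system acted alone
    }))

decision = Decision(subject_id="applicant-42", recommendation="deny", impact_score=0.9)
if requires_human_signoff(decision):
    record(decision, approved_by="case-officer-7")  # a named human stays answerable
else:
    record(decision, approved_by=None)
```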
Frankenstein read:
Victor’s core failure is abdication—he abandons his creature. Human oversight provisions legally forbid that kind of abandonment: you can’t deploy powerful AI and then walk away.
c. Polycentric and “new governance” models (beyond the lone sovereign regulator)
Framework: Polycentric governance and new governance theories suggest that complex technologies should be governed by overlapping nodes: states, standard‑setting bodies, industry codes, and civil society.
  • We already see this:
  • Statutory frameworks (EU AI Act; national AI laws) set general duties.
  • Technical standards (e.g., ISO/IEC 42001, the IEEE 7000 series, the NIST AI Risk Management Framework) translate principles into metrics and tests.
  • Sectoral regulators (health, finance, transport) apply domain‑specific norms.
  • “Responsive regulation” (Ayres & Braithwaite, Responsive Regulation: Transcending the Deregulation Debate, 1992) uses a regulatory pyramid: start with guidance and transparency, escalate to fines, bans, and criminal liability as needed.
Frankenstein read:
Instead of a single king deciding whether scientists like Victor can operate, a layered regime would involve:
  • Professional codes of ethics (medicine, research).
  • Institutional review boards and licensing.
  • Public‑law constraints if his work touches the public (e.g. biotech or weapons).
The creature is then not just Victor’s private project but a matter of public governance.
d. Democratic legitimacy and contestability (hearing the villagers)
Finally, jurisprudence cares not just about what rules exist but how they’re made and contested.
  • Public consultation and parliamentary oversight around AI rules aim to embody deliberative‑democratic ideals.
  • Individuals affected by AI decisions need access to remedies: courts, ombuds, tribunals.
  • Some scholars push for collective redress mechanisms (class actions, public interest litigation) for systemic AI harms.
Frankenstein read:
In the novel, the “villagers” only appear as a terrified, violent mob. A jurisprudentially better world would give them structured avenues to challenge Victor’s experiments before they become catastrophic—public hearings, judicial review, community consent in high‑stakes deployments.
CONCLUSION
Liability
  • Treat AI as an instrument of human and corporate actors.
  • Use strict and enterprise liability to prevent “responsibility gaps,” plus fault‑based doctrines for negligence in design, training, deployment.
  • Avoid strong AI personhood that lets creators off the hook.
Rights
  • Anchor AI regulation in human rights as side‑constraints (dignity, equality, privacy, due process).
  • Maintain a clear asymmetry: humans are rights‑bearers; AI is regulated for humans’ sake.
  • Keep a conceptual “back pocket” for limited protective duties if we ever genuinely create conscious, suffering AI—but not as a way to weaken human protection.
Governance
  • Adopt risk‑based, layered regulation with bans on unacceptable uses and strict controls for high‑risk systems.
  • Mandate human oversight and prevent abandonment: creators and deployers must remain answerable.
  • Use polycentric governance (laws, standards, regulators, courts, public participation) to keep power over AI distributed and contestable.
In other words: a jurisprudence that learns the lesson Shelley tried to teach. Law should not wait for the creature to become a monster; it should regulate Victor, the lab, and the society that enables him, while keeping the dignity and safety of ordinary humans at the centre of the story.