DUTY OF SOCIETY
How can communities balance innovation with the protection of rights, both from the viewpoint of ethics and morals and from that of legality?
To frame this, I will work from the idea that “Mary Shelley’s townspeople rejected the creature, just as society grapples with integrating AI into legal frameworks”; in other words, let’s treat Frankenstein as a jurisprudential case study for AI.
Frankenstein as a legal parable, not just a horror story
Mary Shelley’s novel isn’t only about a scientist and his creature; it’s also about the failure of law and community (Shelley, 1818).
  • Courts in the novel condemn innocent people on flimsy or biased evidence, and legal institutions fail to protect the accused.
  • The townspeople respond to the creature with immediate mob justice, driven by fear and appearance, not by fact or due process.
(Shelley, 1818, chapters 8, 16, 21)
In that sense, Frankenstein is a warning about what happens when fear of the “unnatural” replaces principled judgment. The “monster” is, in part, a product of social and legal negligence.
Today, AI plays that “creature” role: powerful, unfamiliar, easy to mythologize.
The question is whether we will behave like Shelley’s townspeople—reactive, punitive, and prejudiced—or like a community capable of measured, rights-respecting governance.
Shelley, M. (1818). Frankenstein; or, The Modern Prometheus.
Ethics and morals: avoiding the “Frankenstein syndrome”
Ethically, Frankenstein gives us three key warnings (Shelley, 1818) that map onto AI:
1. Creation without responsibility
Victor creates the creature and then abandons it (Shelley, 1818, chapters 5-6). Ethically, this is negligent: he takes credit for the innovation and dodges all the consequences.
  • With AI, the parallel is deploying powerful systems (for policing, hiring, credit, or warfare) and then pretending harm is “just what the algorithm did.”
2. Othering and exclusion
The townspeople reject the creature solely on appearance and difference; no one asks who actually caused the harms or whether the creature might be capable of moral action (Shelley, 1818, chapters 11-16).
  • With AI, a similar “Frankenstein syndrome” shows up as blanket fear—the idea that AI is inherently monstrous, rather than a tool embedded in human institutions whose harms or benefits depend on design, oversight, and context.
3. Misplaced blame
The moral failure in Frankenstein is not only the creature’s violence, but the community’s and Victor’s refusal to ask: Who designed this situation? Who had the power and walked away?
  • In AI ethics, this translates into a demand for accountability from creators, deployers, and regulators, not just condemnation of the “system.”
So from an ethical standpoint, communities balance innovation and rights by refusing to either demonize or romanticize AI. Instead, they:
  • Focus on human responsibilities: design choices, training data, deployment contexts.
  • Condemn abandonment (unregulated deployment, no safety testing, no redress mechanisms) rather than the mere fact of innovation.
  • Reject mob-style moral panics that lead to overbroad bans born of fear, not evidence.
For AI ethics frameworks addressing these issues, see: Mittelstadt, B. D., et al. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2).
Legality: how jurisprudence frames the “creature”
Risk-based regulation instead of mob reaction
The EU’s AI Act (European Parliament, 2024) is a good example of a legal attempt not to behave like the townspeople. It assumes neither that all AI is monstrous nor that all AI is benign; instead, it uses a risk-based framework:
  • Unacceptable-risk AI (e.g., certain forms of social scoring or manipulative systems) is banned outright.
  • High-risk AI (e.g., in healthcare, credit scoring, policing, or critical infrastructure) is allowed but subjected to strict obligations around safety, transparency, and fundamental rights.
  • Lower-risk systems face lighter or no specific regulation.
(EU AI Act, 2024, Articles 5-7). At the same time, the Act includes regulatory sandboxes (EU AI Act, 2024, Article 57): spaces where companies can test AI systems under regulatory supervision, precisely to reconcile innovation with rights protection.
This is a legal analogue to saying: Don’t burn down the lab; build supervised experimental spaces with rules.
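To make the tiered logic concrete, here is a minimal illustrative sketch (in Python) of how such a risk-based classification might be expressed. The tier labels, example use cases, and the default rule are assumptions made for this presentation, not the Act’s legal text.

```python
# Illustrative sketch only: hypothetical tier descriptions and example use
# cases loosely inspired by the AI Act's risk-based structure (Articles 5-7).
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "banned outright (e.g. certain social scoring)"
    HIGH = "allowed, but with strict safety, transparency and rights obligations"
    MINIMAL = "lighter or no specific regulation"


# Hypothetical mapping from use cases to tiers (an assumption, not legal advice).
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "predictive_policing": RiskTier.HIGH,
    "spam_filter": RiskTier.MINIMAL,
}


def classify(use_case: str) -> RiskTier:
    """Return the assumed tier for a use case, defaulting to MINIMAL."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)


if __name__ == "__main__":
    for case in ("social_scoring", "credit_scoring", "spam_filter"):
        tier = classify(case)
        print(f"{case}: {tier.name} -> {tier.value}")
```

The point of the sketch is simply that the legal response is graduated by risk rather than by a blanket verdict of “monstrous” or “benign.”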
Of course, it’s contested. Business leaders worry that harsh or complex rules will stifle European AI innovation and advantage less-regulated competitors; some have called for delaying or softening the high‑risk provisions (see the debates documented in Veale & Zuiderveen Borgesius, 2021).
On the other side, digital-rights advocates warn that watering down these rules or delaying enforcement amounts to repeating Victor’s sin of neglect—creating powerful systems and then failing to constrain them to protect fundamental rights.
The jurisprudential lesson: procedure and proportionality, not panic or blind optimism.
Personhood and responsibility: who is the “defendant”?
Another frontier of AI jurisprudence is whether AI systems should ever be granted some form of legal personhood (limited or otherwise), analogous to corporations.
Arguments for limited AI personhood:
  • It could help assign liability and obligations to AI agents that operate autonomously in markets or digital environments.
  • Corporate personhood shows that the law already treats non-human entities as bearers of rights and duties (Solum, 1992).
Arguments against:
  • AI personhood might be used as a liability shield, where companies offload blame onto a “digital scapegoat” that has no assets, conscience, or real accountability.
  • It risks conferring rights without meaningful responsibilities, especially if human beings harmed by AI still struggle for effective remedies.
Viewed through a Frankenstein lens, granting full personhood to AI too early is like legally recognizing the creature while still letting Victor disappear into the background. Jurisprudence, if it wants to learn from Shelley, must:
  • Keep humans and institutions—developers, deployers, regulators—firmly in the frame as primary rights‑bearers and duty‑holders.
  • Consider any future AI “personhood” only under conditions that increase, not reduce, accountability and protection of human rights.
Due process and algorithmic justice
Shelley’s novel contains scenes of undermined justice: innocents condemned, courts swayed by bias, and no meaningful opportunity for the accused to be heard.
Modern AI risks replaying this on a systemic scale when used in:
  • Predictive policing
  • Sentencing and risk assessment
  • Credit and insurance scoring
  • Hiring, welfare, and immigration decisions
(O'Neil, 2016; Eubanks, 2018). From a jurisprudential perspective, balancing innovation and rights here requires:
  • Transparency: individuals must know when AI is used in decisions that affect their rights.
  • Explainability and contestability: people need mechanisms to challenge and review AI-driven decisions (appeal, human review, disclosure duties).
  • Evidence standards: courts must decide how to treat algorithmic outputs—are they expert evidence, hearsay, or something else? Under what reliability criteria?
This is the opposite of the townspeople’s approach, which was essentially: We don’t need explanation; we’ve already decided who’s the monster.
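To illustrate what these safeguards might look like as a concrete checklist, here is a toy sketch in Python. The record fields and the all-safeguards-must-hold rule are assumptions made for this presentation, not requirements drawn from any particular statute or judgment.

```python
# Toy model of "algorithmic due process": each automated decision is audited
# against four illustrative safeguards. Field names are assumptions.
from dataclasses import dataclass, field


@dataclass
class AutomatedDecisionRecord:
    subject_notified: bool          # transparency: the person knows AI was used
    explanation_provided: bool      # explainability: reasons are disclosed
    human_review_available: bool    # contestability: appeal to a human reviewer
    evidence_standard_met: bool     # reliability criteria for the output
    gaps: list[str] = field(default_factory=list)

    def satisfies_due_process(self) -> bool:
        """In this toy rule, every safeguard must hold; missing ones are listed."""
        checks = {
            "transparency": self.subject_notified,
            "explanation": self.explanation_provided,
            "human review": self.human_review_available,
            "evidence standard": self.evidence_standard_met,
        }
        self.gaps = [name for name, ok in checks.items() if not ok]
        return not self.gaps


record = AutomatedDecisionRecord(True, True, False, True)
print(record.satisfies_due_process(), record.gaps)  # False ['human review']
```

The design choice being illustrated is that due process is a property of the decision procedure, not of the model alone.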
References
European Parliament and Council. (2024). Regulation (EU) 2024/1689 on Artificial Intelligence (AI Act).
For debates on AI Act implementation, see: Veale, M., & Zuiderveen Borgesius, F. (2021). Demystifying the Draft EU Artificial Intelligence Act. Computer Law Review International, 22(4).
O'Neil, C. (2016). Weapons of Math Destruction. Crown.
Eubanks, V. (2018). Automating Inequality. St. Martin's Press.
How communities can actually balance innovation and rights
Ethically grounded principles
Responsibility follows power
Those who design, deploy, and profit from AI bear heightened duties to anticipate and mitigate harm, just as Victor should have borne responsibility for his creation (Shelley, 1818).
Avoid ethical scapegoating
Don’t treat AI as magically good or evil. Ask: Who built it? Who chose the training data? Who decided where to deploy it?
Center those most affected
Communities should include marginalized groups—often most impacted by biased data and automated decisions—in the design of AI policy and oversight structures.
Legal mechanisms
Risk-based regulation with rights "hard stops"
Use frameworks like the AI Act’s gradation of risk, but maintain non‑negotiable red lines where systems pose unacceptable threats to human dignity or fundamental rights (EU AI Act, 2024).
Regulatory sandboxes and staged deployment
Allow innovation in supervised environments—sandboxed pilots, limited rollouts, external audits—where regulators, civil-society groups, and affected users can test claims and spot harms early (Floridi et al., 2018).
Strong accountability and liability rules
Ensure there is always a human or organization clearly responsible in law for AI actions that cause harm. Resist legal constructs that turn AI into a shield for its makers.
Procedural safeguards and algorithmic due process
Guarantee transparency, explanation, access to meaningful human review, and effective remedies when AI affects rights in areas like employment, housing, policing, or welfare (Citron & Pasquale, 2014).
Iterative lawmaking
Build in review clauses, sunset provisions, and mandatory impact assessments so that AI law can evolve as technology and social understanding shift—rather than being frozen by either fear or hype.
Framework informed by: Floridi, L., et al. (2018). AI4People—An Ethical Framework for a Good AI Society. Minds and Machines, 28(4), 689-707.
Citron, D. K., & Pasquale, F. (2014). The Scored Society: Due Process for Automated Predictions. Washington Law Review, 89(1), 1-33.
The Frankenstein test for AI jurisprudence
A simple way to tie this together is to imagine a “Frankenstein test” for new AI policies and systems:
1. Responsibility & Creators
Does this framework keep responsibility with the creators and deployers, or does it dump blame onto a faceless “monster”?
2. Dignity & Rights
Does it respect the dignity and rights of those affected, or does it treat them like the villagers treated the creature—suspicious, voiceless, expendable?
3. Balanced Approach
Does it avoid both unthinking rejection of innovation and uncritical embrace of it?
If a community can answer those questions well—ethically and legally—it is doing what Shelley’s world did not: integrating the “creature” into a moral and legal order instead of chasing it with torches (Shelley, 1818).
Note: The 'Frankenstein test' is an original analytical framework developed for this presentation, drawing on Shelley's (1818) narrative structure and contemporary AI governance scholarship (Floridi et al., 2018; European Parliament, 2024).
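As a final illustration, the test can be written down as a simple three-question checklist. The function below is a toy encoding devised for this presentation; its name, parameters, and all-or-nothing scoring rule are assumptions, not an established evaluation standard.

```python
# Toy encoding of the "Frankenstein test": a policy or system passes only if
# all three questions can be answered well. Parameter names are illustrative.
def frankenstein_test(
    keeps_responsibility_with_creators: bool,
    respects_rights_of_affected: bool,
    avoids_both_panic_and_hype: bool,
) -> bool:
    """Return True only if all three conditions hold."""
    return all((
        keeps_responsibility_with_creators,
        respects_rights_of_affected,
        avoids_both_panic_and_hype,
    ))


# Example: a framework that shifts blame onto the "monster" fails the test.
print(frankenstein_test(False, True, True))  # False
```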
CONCLUSION
1. Co-existence is Naïve
Frankenstein reveals that simple co-existence between humankind and its creations is structurally impossible; Victor’s disaster arose from expecting the creature to be absorbed into an order not designed for it (Shelley, 1818, chapters 5, 20-24).
2. AI Needs Distinct Space
AI cannot safely share the same undifferentiated moral and legal space as human beings. Doing so blurs responsibility, dilutes human dignity, and invites the very injustices the novel dramatizes (Shelley, 1818).
3. Crafting a Cohesive Relationship
What is required is a deliberately crafted, cohesive relationship—one that treats humans and AI as separate but equal partners, implying distinct juridical statuses within a shared framework (Solum, 1992; Bryson et al., 2017).
4. Clear Roles & Responsibilities
Human persons remain the ultimate bearers of rights and duties. AI systems are powerful but bounded instruments operating under clearly articulated constraints, liabilities, and oversight.
5. Innovation with Primacy of Rights
Only by preserving this structured separation while insisting on an integrated partnership can jurisprudence avoid both the monster myth and the abdication of responsibility, allowing innovation to proceed without sacrificing the primacy of human rights.
Theoretical framework draws on: Shelley, M. (1818). Frankenstein; and legal personhood debates in:
Solum, L. B. (1992). Legal Personhood for Artificial Intelligences. North Carolina Law Review, 70(4), 1231-1287.
Bryson, J. J., Diamantis, M. E., & Grant, T. D. (2017). Of, for, and by the people: the legal lacuna of synthetic persons. Artificial Intelligence and Law, 25(3), 273-291.