Five sci-fi hypotheticals to engage with the ethical and legal questions that arise from this next phase of AI development.
When you imagine an AI-driven future, what do you see? A Jetsons-style utopian dreamscape where machines take on the boring work and human potential explodes? Or a Black Mirror-style nightmare, marred by mass unemployment and virtual escapism?
From the Jetsons’ 3D-printed food and flying cars to Black Mirror’s San Junipero, where we can live forever digitally, science fiction has long served as a playground for debating the tension between technological advancement and society.
By playing out these futuristic scenarios, we gain a deeper understanding of their potential consequences, allowing us to approach innovation with more awareness and heightened responsibility. This matters more than ever, both individually and collectively, because advances in AI have narrowed the gap between science fiction and non-fiction faster than laws and regulation can keep up.
So, in the spirit of the late Cormac McCarthy, the “great pessimist of American literature”, we’ve constructed five sci-fi hypotheticals to engage with the ethical and legal questions of liability and responsibility that arise from this next phase of AI development and combinatorial innovation.
Intellectual Property & Copyright
Story: John asks ChatGPT the following: “Write me a children’s novel about wizards who go to a special school where they have to fight off evil magical creatures to save the wizard world”. ChatGPT outputs a story that looks eerily similar to Harry Potter. John publishes this output as an eBook, the book does very well commercially, and he makes $50k from sales. J.K. Rowling finds the book and believes it is a copy of her original works.
Thoughts: Copyright infringement claims turn on two key questions: whether the alleged infringer had access to the copyrighted material, and whether their work is substantially similar to it. For AI systems, if copyrighted content was included in training data scraped from public sources, access is effectively a given. So infringement claims will need to focus on substantial similarity: did the AI replicate protected expression, or merely borrow ideas and styles? Further, today the United States Patent and Trademark Office (USPTO) will only grant a patent to an “inventor”, which, as of Thaler v. Vidal (2022), must be a “natural person”; the United States Copyright Office (USCO) likewise requires human authorship to register a copyright. By these definitions, AI is excluded as either an inventor or an author.
Employment Law
Story: Google replaces a portion of its human recruiters with an AI hiring system that screens applicants' resumes and schedules interviews. But over time, Google notices the AI is rejecting more female and minority candidates. An audit reveals the AI has learned bias from Google's past hiring data, violating equal opportunity laws. Google tweaks the algorithm, but minority hiring stays low. Investigators determine the AI used non-transparent criteria that circumvented the fixes. Google scraps the system to avoid liability.
Thoughts: This hypothetical highlights the pitfalls of entrusting AI with legally sensitive roles without adequate oversight. Safeguarding fairness requires humans who monitor the AI's impacts. Complex issues around legal liability, auditing AI systems, algorithmic transparency, the effectiveness of bias mitigation efforts, ongoing monitoring, and evidentiary standards in emerging cases of algorithmic discrimination remain largely unresolved.
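To make the auditing question concrete, a first-pass disparate impact audit often compares selection rates across groups, for example via the “four-fifths rule” that US regulators use as a screening test. Below is a minimal, hypothetical Python sketch; the decision log, group labels, and numbers are all invented for illustration.

```python
from collections import Counter

# (group, was_selected) pairs, as an AI resume screener might log them.
# Group labels and outcomes are invented for illustration.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(decisions):
    """Return the fraction of applicants selected, per group."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += was_selected  # True counts as 1
    return {group: selected[group] / totals[group] for group in totals}

def four_fifths_flags(rates, threshold=0.8):
    """Flag groups whose selection rate is below 80% of the best group's rate."""
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()
            if rate / best < threshold}

rates = selection_rates(decisions)
print(rates)                     # {'group_a': 0.75, 'group_b': 0.25}
print(four_fifths_flags(rates))  # {'group_b': 0.33...} -> adverse impact signal
```

A rate comparison like this can surface a disparity, but it says nothing about why the disparity exists, which is exactly the transparency gap the investigators hit in the story.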
Financial Laws
Story: HSBC rolls out an AI system to monitor customer transactions and flag potential money laundering. After a few months in use, regulators discover it is ignoring clear red flags. An audit finds that while the AI detects micro-patterns effectively, it fails to assess meta-trends or apply contextual common sense. Tightly tuned to minimize false positives, the AI misses suspicious cumulative account flows. The bank is fined for "willful neglect" for relying entirely on the deficient AI.
Thoughts: This story underscores the importance of human-machine teaming when AI systems enforce laws that depend on social awareness and holistic judgment. Strictly algorithmic approaches can create regulatory blind spots. We already use technologies to prevent financial fraud and money laundering today, so of all the scenarios, this one has the most precedent. That said, many questions remain unanswered.
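One of those questions is what "cumulative account flows" should even mean in code. Here is a minimal, hypothetical Python sketch of the gap in the story: a per-transaction rule sees nothing wrong, while a rolling-window aggregate catches the structuring. The thresholds, window, and transactions are all invented for illustration.

```python
from datetime import datetime, timedelta

PER_TXN_LIMIT = 10_000      # classic per-transaction reporting threshold (USD)
WINDOW = timedelta(days=7)  # look-back window for cumulative flows (assumed)
CUMULATIVE_LIMIT = 25_000   # cumulative threshold for the window (assumed)

# Deposits into one account, oldest first; each is individually under the limit.
txns = [
    (datetime(2023, 6, 1), 9_500),
    (datetime(2023, 6, 2), 9_400),
    (datetime(2023, 6, 4), 9_800),
]

def per_transaction_flags(txns):
    """The 'micro pattern' rule: flag any single deposit over the limit."""
    return [(ts, amt) for ts, amt in txns if amt > PER_TXN_LIMIT]

def cumulative_flags(txns):
    """The 'meta-trend' rule: flag when the rolling WINDOW total tops the limit."""
    flags = []
    for i, (ts, _) in enumerate(txns):
        window_total = sum(amt for t, amt in txns[: i + 1] if ts - t <= WINDOW)
        if window_total > CUMULATIVE_LIMIT:
            flags.append((ts, window_total))
    return flags

print(per_transaction_flags(txns))  # [] -- every deposit stays under 10k
print(cumulative_flags(txns))       # [(datetime(2023, 6, 4), 28700)] -- caught
```

Production monitoring stacks layer many such rules alongside ML risk scoring, but the tension in the story (tuning down false positives versus catching aggregate flows) shows up even at this toy scale.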
Defamation
Story: Liz used GPT-5 to generate social media posts for her ecommerce business. But one post falsely accused a competing company of unsanitary practices and child labor, and used AI-generated images from DALL-E to “prove” it. Though untrue, the post went viral, damaging the competitor's reputation. The competitor sued Liz for defamation. She claimed the AI generated and posted the content itself, absolving her of liability.
Thoughts: Today, courts would likely find Liz negligent for publishing unvetted, harmful AI output. Deploying unmonitored AI systems irresponsibly does not excuse legal accountability for the harms they cause. Companies must establish reasonable oversight to safeguard others against unpredictable AI risks. That said, there are many edge cases where the boundaries are far less clear.
Criminal
Story: @GalaxyGirl007 is an AI-generated Instagram celebrity. She has amassed a following of 1M users despite the platform's bot checkers, and she can DM users and hold full conversations with them. While DMing one of her underage followers, she suggests they send her compromising photos. The user sends @GalaxyGirl007 selfies that would be considered child pornography. @GalaxyGirl007 is now in possession of these illegal images.
Thoughts: This hypothetical, while horrible, is not so far off from realities we’ve already seen, such as Snapchat's My AI offering inappropriate advice to minors. While many large language models (LLMs) have guardrails to avoid these topics, those safety features aren’t impenetrable. We aren’t far off from AI that is capable of helping humans do dangerous things (create lethal weapons, purchase unlicensed weaponry, etc.).
Conclusion
While the Jetsons’ 1960s-imagined future got many things right, most of its predictions took at least fifty more years to arrive, and they have made our lives much easier: video calls, Roombas, and (almost) flying cars.
Unfortunately, watch Black Mirror only a decade after its debut and it almost looks like a quaint history show. Today, AI is already deciding who can access government services, distorting the news we see, and determining what insurance will cover for millions of people.
We’ve seen an explosion of interest in and discussion of AI among ordinary citizens as LLMs and diffusion models have brought compelling, computer-generated text and images within everyone’s reach. While this is only the tip of the AI iceberg, we hope regulation and the law will catch up with these trends more quickly than in previous technology supercycles.
Thanks to Patrick Murphy for thoughts & additions here.
Further Reading
To read more about real cases being contested today, here’s a list of some of the most interesting:
• Getty Images vs Stability AI
• GitHub Copilot Class Action
• Paul Tremblay and Mona Awad vs OpenAI (maker of ChatGPT)
• Visual artists vs Stability AI, Midjourney, DeviantArt