Rewriting Copyright in the Age of AI: Finding the Middle Ground Between Creativity and Protection

As we step into an era where technology and creativity collide on an unprecedented scale, it’s clear that our current copyright laws are due for a serious update. With over 8 billion people on Earth, all contributing to an ever-growing flood of content, it’s becoming more challenging to determine where one person’s creativity ends and another’s begins. The rise of AI-generated content has only added to the complexity, raising new questions about originality, ownership, and what truly constitutes plagiarism.

Some claim that AI is "stealing" content, while others argue that creativity has always been a process of building on what’s come before. The reality lies somewhere in the middle. To simply say that any work that differs from another by even 1/8 billionth should be considered original is too extreme—such a standard would allow even the smallest of changes, like altering a single pixel or word, to be deemed “new.” On the other hand, holding onto rigid, outdated copyright laws risks stifling innovation and freedom of expression.

What we need is a new, objective way to measure creativity and originality—a system that allows both human and AI-generated content to flourish, while still offering protection for creators whose work is truly unique.

The Problem with "One Size Fits All" Copyright Laws

Copyright law has always been about striking a balance between protecting creators and allowing the free flow of information and ideas. But in the age of the internet and artificial intelligence, that balance is starting to tip. Our laws were built in a time when creativity was slower, more individualistic, and easier to monitor. Back then, it made sense to give artists, writers, and musicians exclusive rights to their work. But today, when content is shared, remixed, and transformed at lightning speed, these laws feel increasingly restrictive.

AI, in particular, has muddied the waters. Unlike humans, AI generates content based on vast amounts of data, creating new works by identifying patterns and making statistical predictions. This process is not "stealing" in the traditional sense; it’s more like sophisticated recombination. Yet, when an AI produces something that resembles existing content—even if only slightly—people are quick to call it plagiarism.

But the current copyright system doesn’t account for nuance. If two articles report on the same news event, or if two songs use similar chord progressions, does that mean one is copying the other? And if an AI generates content that’s only somewhat similar to a human’s work, how do we draw the line between inspiration and infringement?

This is where the idea of setting a more objective standard comes into play. But the concept of "1/8 billionth" as a threshold, where even the smallest differences would exempt a work from being considered plagiarized, is simply too permissive. At that level of nuance, virtually anything could be claimed as original, even if it barely deviates from the source. A single pixel, a single word, a single note could suddenly render a copycat creation “unique.”
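A quick back-of-the-envelope calculation makes the point concrete. The image and novel sizes below are illustrative assumptions, not figures from any study:

```python
# Back-of-the-envelope check of a "1/8 billionth" originality threshold.
# The artwork and novel sizes below are illustrative assumptions.

THRESHOLD = 1 / 8_000_000_000      # fraction of a work that must differ to count as "new"

pixels_in_artwork = 2_000_000      # a modest 2-megapixel digital image
words_in_novel = 90_000            # a typical novel

one_pixel_change = 1 / pixels_in_artwork   # the smallest possible edit, as a fraction of the work
one_word_change = 1 / words_in_novel

# Each trivial edit exceeds the threshold by orders of magnitude, so either
# would already qualify a near-copy as an "original" work under that standard.
print(one_pixel_change / THRESHOLD)   # 4000.0   -> 4,000x the threshold
print(one_word_change / THRESHOLD)    # ~88889   -> roughly 89,000x the threshold
```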

This approach, while it might seem like a solution to the problem of over-regulation, would make it impossible for creators to protect their work. We can’t go that far—but we also can’t remain stuck in a system where a select few gatekeepers hold all the power over what counts as "too similar."

The Need for a Middle Ground

What we need is a middle ground between extreme nuance and total restriction. A standard that recognizes the value of building on existing ideas—something humans have done for centuries—while also protecting the integrity of truly original works. Instead of relying on subjective judgments from lawyers, courts, or corporations, we need a more measurable, data-driven approach to defining creativity and originality.

So, what would that look like?

  1. A New Scale of Similarity: Rather than relying on vague, subjective judgments about how “similar” two works are, we could develop a measurable scale for determining how much overlap is acceptable. This scale would need to take into account not just word-for-word copying, but thematic, structural, and stylistic similarities. For instance, the percentage of a work that overlaps with a pre-existing piece could be calculated based on objective criteria, like unique word combinations, sentence structures, or visual elements in the case of art or design. (A rough sketch of one such measure, with illustrative thresholds, appears after this list.)

  2. Thresholds for Different Types of Content: Not all content is created equal, and we shouldn’t apply a one-size-fits-all approach to originality. News reporting, for example, often involves shared facts and common language, while creative works like novels or paintings rely more heavily on unique expression. AI-generated works, which are created based on statistical analysis of vast datasets, are also fundamentally different from human-made art or writing. Each type of content needs its own thresholds for what constitutes originality.

  3. Room for Remix Culture: We live in an era of remix culture, where creativity often comes from combining, reshaping, and reinterpreting existing works. We need to embrace this reality and ensure that copyright law allows for reasonable, transformative uses of content. This means giving creators room to remix, parody, and build upon what’s come before, as long as they add something genuinely new to the conversation.

  4. Clear Definitions for AI Content: As AI becomes more integrated into creative fields, we need clearer guidelines for what counts as “original” AI content. If an AI generates a new piece of music that’s 99% identical to an existing song, it’s reasonable to consider that plagiarism. But if the AI’s work only bears a passing resemblance to a human-made piece, it should be treated as an original creation—especially if the differences are measurable in a meaningful way.
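
As a simplified illustration of points 1 and 2, here is a minimal sketch of how textual overlap could be measured objectively and compared against per-content-type thresholds. The word n-gram Jaccard measure, the content categories, and the threshold values are all illustrative assumptions, not figures drawn from any existing law or proposal:

```python
# A minimal sketch of an objective overlap measure with per-content-type
# thresholds. The n-gram size, categories, and threshold values are
# illustrative assumptions, not a proposed legal standard.

def word_ngrams(text, n=3):
    """Return the set of word n-grams in a text (lowercased, whitespace-split)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(a, b, n=3):
    """Jaccard similarity of word n-grams: 0.0 = no shared phrasing, 1.0 = identical."""
    grams_a, grams_b = word_ngrams(a, n), word_ngrams(b, n)
    if not grams_a or not grams_b:
        return 0.0
    return len(grams_a & grams_b) / len(grams_a | grams_b)

# Hypothetical thresholds above which a new work would be flagged for review.
# News tolerates more overlap (shared facts and stock phrasing) than creative
# work, which relies more heavily on unique expression.
REVIEW_THRESHOLDS = {
    "news": 0.60,
    "creative": 0.30,
    "ai_generated": 0.20,
}

def needs_review(new_work, existing_work, content_type):
    """Flag a work for closer inspection if its overlap exceeds its category threshold."""
    return overlap_score(new_work, existing_work) >= REVIEW_THRESHOLDS[content_type]
```

A lexical measure like this captures only word-for-word overlap; a workable standard would also need signals for the thematic, structural, and stylistic similarities described above, with thresholds set through open debate rather than chosen by a handful of gatekeepers.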

Why This Matters for the Future of Creativity

If we fail to adapt copyright law for the age of AI, we run the risk of either stifling creativity or allowing rampant, unchecked copying. Neither outcome is good for society. A system where every tiny difference renders a work "new" would lead to chaos—people could change a single pixel of a digital artwork or a single note in a song and claim ownership. On the flip side, overly restrictive copyright enforcement could lock down ideas and prevent the free exchange of information and inspiration that has always driven human progress.

The solution lies in finding that middle ground—a system that objectively measures originality without stifling creativity. We need a copyright standard that both protects creators and fosters innovation, allowing AI and human creativity to coexist and thrive.

By adopting a new, measurable way to define originality, we can ensure that the creators of truly unique works are protected, while also giving room for the natural evolution of ideas, art, and information. This isn’t just about safeguarding the rights of individuals or corporations—it’s about ensuring that creativity can flourish in a world where humans and machines work side by side to build the future.

The Path Forward

In the rapidly evolving world of AI and digital creativity, we need copyright laws that reflect the realities of the 21st century. The idea of using 1/8 billionth as a standard for originality may be too extreme, but we must still find a way to objectively measure differences in content and offer creators protection without stifling expression.

The key is to find a middle ground—one that recognizes the value of remixing and reinterpreting existing ideas, while also ensuring that genuine originality is preserved. With clear thresholds, objective measurements, and guidelines for different types of content, we can foster an environment where both human and AI creativity can thrive, without the fear of endless copyright disputes.

As we move forward, this kind of balanced approach will be essential to maintaining the freedom of expression that fuels innovation and keeps the creative world alive.