The One Thing AI Can't Truly Fake, And Why Hollywood Is Fighting Over It
Two years after historic AI protections for writers, new questions are reopening old fault lines.
When I published the Hollywood Doomsday Clock last week, I deliberately left out one of the major pillars of the entertainment business: writers. The Writers Guild of America (WGA) — the people who write the movie and TV scripts you know and love — will see its contract with the studios expire on May 1, 2026.
I wanted to devote a separate piece to the writers for two reasons. First, well, because I’m a member of the WGA, so this is personal for me. Second, because I most often frame the issue of AI in Hollywood in terms of how it will change the visual mechanics of how film and TV are made (actors, VFX, editors, etc.). Of the many billions of dollars invested in generative AI startups focused on entertainment, the most cash has been plowed into disrupting the visual component of movies and TV. AI audio and text are major categories, but video has been and still is the holy grail for the tech companies working on AI entertainment tools.
But the truth is, writing isn’t merely as fundamental to the film and TV process as the visuals; nothing even begins until something is written. An idea. A story. A world imagined from nothing, or torn from the life experiences of the writer. That is the framework upon which every other aspect of filmmaking and television production relies.
To that end, in 2022, I called the WGA office here in New York to sound the AI alarm. Long before AI became the topic of nearly every other business story, and while most still dismissed AI images and text as curiosities not worthy of concern, I had already spent much of the year experimenting with open source AI. It took many months for me to wake up, but around the fall of 2022, I had come to the cold realization that AI was about to change everything for most people in the professional creative world. I had to warn someone.
When I finally spoke to someone in WGA’s New York office, I pushed for them to immediately hold meetings within the organization to discuss how WGA contracts could get ahead of the AI wave and bake in whatever protections were deemed necessary. I explained that generative AI wasn’t static, and that the impressive yet rudimentary outputs I was producing would rapidly improve and present a challenge to human creatives. I explained how I had seen this kind of disruption before, when paper magazines and newspapers failed to take the Internet seriously, a shift that prompted me to switch from being a music journalist to a tech reporter many years ago.
The First Alarm
Alas, my long phone call with the very solicitous WGA representative didn’t seem to set off the muted panic I had already worked through myself. Realistically, I’d hoped and assumed someone there was already on the case. In the end, the person I spoke to promised to register my concern and mention it to others at WGA. After that, I hoped the seed had been planted and moved on. In the meantime, I put my head down and decided to become as conversant as possible with generative AI, in order to be equipped for what I knew was coming: wholesale disruption, again.
The following year, in 2023, I slowly began to see startled musings on social media from journalists and scriptwriters realizing that generative AI wanted a piece of their professional pie. I wasn’t surprised. We had just recovered from two years of pandemic lockdowns and job losses, so no one was in the mood for a new hill to climb. But the reality that generative AI couldn’t be ignored was slowly washing over everyone.
I spent nearly a year wondering if my voice, and surely those of other tech-native members, had been heard by WGA’s leadership. I’ll admit that the New York City cynic in me was almost shocked when, in October of 2023, the WGA moved to establish new AI guidelines in its Minimum Basic Agreement (MBA) [PDF] for member writers in film, TV, streaming, and digital projects.
Some of the new guidelines stated:
• No written material produced by generative AI could be considered literary material.
• Companies couldn’t hand a writer an AI-generated screenplay and pay only a small rewrite fee, thus denying the writer credit (and commensurate pay) as first writer of the screenplay.
• Contracting companies couldn’t require a writer to use AI software.
• Companies had to disclose to the writer if any materials (say, an outline, pitch, or treatment) given to the writer had been generated by AI, even partially.
• The WGA reserved the right to prohibit a writer’s work from being used to train AI.
That last one has always seemed to me rather difficult to enforce or audit. But the spirit of the guideline at least gave studios some sense of what protections writers were looking to secure.
Now, in 2026, AI is on every creative’s radar. Generative AI isn’t a mere contractual talking point; it’s seen as an existential risk to entire careers in Hollywood and media.
The question now is: Did the WGA move on AI quickly and broadly enough? Or, is it now too late?
When The Studios Catch Up
Like the actors’ union SAG-AFTRA, the WGA went on strike in 2023 for nearly half a year to secure those historic AI protections. The combined strikes, although separate, crippled Hollywood and changed the course of studios, careers, and film and TV budgets. So far, the prospects of another WGA strike seem relatively low, even though the Alliance of Motion Picture and Television Producers (AMPTP), the organization negotiating on behalf of the studios, seemingly has a stronger hand in 2026 with regard to AI.
So much material has already been used to train AI models, and those generative AI models have improved vastly since 2023. Additionally, several major studios are already engaged in their own intellectual property lawsuits with a few AI companies, so there doesn’t seem to be much new ground for the WGA to explore.
If the idea is to stop scripts from being used to train AI models, the path toward policing that is murky. Distinct characters and even animation styles can often be recognized if they pop up in the output of a generative AI model. But how do you police the writing output of an AI model that’s given the prompt:
“Combine the dialogue style of David Mamet with the narrative structure of Joel Coen & Ethan Coen...”
Sure, there may be clues in the output, but there are no real fingerprints, especially once a human writer modifies the AI-generated text to further obscure potential signs of provenance.
The chief negotiator for this contract renewal will be WGA Executive Director Ellen Stutzman, a 20-year veteran of the organization. Her work on the previous contract showed she is attuned to the AI shifts in play and up to the task of negotiating with an eye toward AI’s future. If Stutzman and her team can keep the previous protections in place, that will be significant, and it will calm a lot of screenwriter nerves in Hollywood amid shrinking writers’ rooms and a surge of AI-powered production looming on the horizon.
If I’m reading the tea leaves correctly, studios may indeed push back on some of the 2023 guidelines. Now that it’s so much clearer how powerful generative AI is, I can see studios pushing to be allowed to develop AI-generated in-house summaries and scriptments and have those works treated as literary source material.
There’s also the issue of AMPTP-member studios like Disney (OpenAI), Netflix (internal via Eyeline), AMC Networks (Runway), and Lionsgate (Runway) that are working with AI companies to either train AI models on their film libraries or as a tool to aid their productions. This presents a kind of loophole that superficially doesn’t violate the 2023 WGA agreement because, technically, that contract doesn’t govern finished audiovisual works, just the written word.
Nevertheless, the studios allowing AI training on their film libraries are still effectively giving these AI models the stories of the human writers. Bottom line: Training AI on films is, in a way, de facto training AI on writing. Or so the argument from the WGA might go during negotiations.
We now have 114 days until we find out whether the WGA will be forced to strike again. If the AMPTP gives the WGA an AI-related reason to strike, it will be a sign not only that the studios are exerting new leverage, but that a SAG-AFTRA AI-related strike might follow in June.


