Column: Your boss wants AI to replace you. The writers’ strike shows how to fight back
So far, the story of the AI boom has been the one that the tech industry has wanted to tell: Silicon Valley companies creating AI services that can mimic human art and words and, according to them, replace millions of jobs and transform the economy.
The next chapter is about humans fighting back. If the robots are rising, then a rebellion is taking shape to stop them — and its vanguard can be seen in the crowds of striking writers assembled across Hollywood.
One of those workers put it to me bluntly on the picket line, where screenwriters were protesting, among other things, the entertainment industry’s openness to using artificial intelligence to churn out scripts: “F— ChatGPT.”
But it’s not just screenwriters — the movement includes illustrators, freelance writers and digital content creators of every stripe. “Every day,” the artist and activist Molly Crabapple tells me, “another place that used to hire human artists has filled the spot with schlock from [AI image generator] Midjourney. If illustrators want to remain illustrators in two years, they have to fight now.”
Each week brings more companies announcing they will replace jobs with AI, Twitter threads about departments that have been laid off, and pseudo-academic reports about how vulnerable millions of livelihoods are to AI. So, from labor organizing to class-action lawsuits to campaigns to assert the immorality of using AI-generated works, there’s an increasingly aggressive effort taking shape to protect jobs from being subsumed or degraded by AI.
Their core strategies include refusing to submit to the idea that AI content generation is “the future,” mobilizing union power against AI exploitation, targeting copyright violations with lawsuits and pushing for industrywide bans against the use of cheap AI material.
They’re just getting started. And for the sake of everyone who is not a corporate executive, a middle manager or an AI startup founder, we’d better hope it works.
A big reason that the AI hype machine has been in overdrive, issuing apocalyptic claims about its vast power, is that the companies selling the tools want to make it all feel inevitable — to feel like the future — and have you believe that resisting it is both futile and stupid. Conveniently, most of these discussions eschew one question: Whose future does AI really serve?
The answer to that is “Big Tech” and, to a lesser degree, “your boss.”
The AI Now Institute, a consortium of AI researchers and policy experts, recently published a report that concluded the AI industry is “foundationally reliant on resources that are owned and controlled by only a handful of big tech firms.” Its power is extremely concentrated in Silicon Valley, among giants such as Google and Meta, and that is where the economic benefits are all but certain to accrue.
OpenAI, which has a $10-billion partnership with Microsoft, is in particular making the case that its tools can replace workers — a study the company conducted with the University of Pennsylvania claimed its AI services could affect 80% of American workers; for 1 in 5, it could do half the tasks that constitute their jobs. OpenAI is marketing its services to consulting firms, ad agencies and studio executives, among many others.
Fortunately, as the AI Now report points out, “there is nothing about artificial intelligence that is inevitable.”
The writers’ strike, in particular, has brought to the forefront questions about how AI will replace or degrade human work, and it’s given workers in other industries that stand to be affected a model response: Draw a line in the sand. Say no to cheap AI that lets executives drive down wages and erode your working conditions. Push back.
In its latest contract proposal, the Writers Guild of America asked that the entertainment industry agree not to use AI to replace writers. The studios declined, agreeing only to "annual meetings to discuss advancements in technology," a response that threw up red flags all over the place. It's one of the issues the studios refused to budge on, along with more routine demands such as pay increases, and so the writers have brought the nation's entertainment industry to a halt, in part to protect the very future of their trade.
I went down to the picket line at 20th Century Studios, where dozens of writers spent the day walking back and forth along Pico Boulevard. I wanted to ask the writers how they felt about AI, so I put the question to the first writer willing to talk.
That was when I heard the profane response quoted above. It came from Matt Nicholas, a 30-year-old writer and WGA member, who was all too aware of exactly how AI was going to be used by the film and television industry: not to replace writers outright, but to undermine them.
“I have heard executives say that this is going to be the future,” Nicholas said. That future being that the studios will use AI text generators to produce a script, however shoddy, and then “hire us to do rewrites of that material, which they’re going to treat as source material.”
Studios pay lower rates for script rewrites, and many writers worry it would actually be more work for them to correct and improve the boilerplate output. In other words, it's simply a way for the industry to slash pay and break worker power. "It's absolutely ridiculous," Nicholas said.
“It feels like the shoe that’s about to drop,” said another writer, Nastassja Kayln, “and they’re hanging it over our heads on a regular basis.”
“The same thing’s going to happen to other industries,” she added, “not just ours.”
Indeed. It's already happening to other industries, including ones where workers have far less organized power or fewer protections. Illustrators and artists have been the most aggressive in standing up to the AI companies, which makes sense, given that their battle is perhaps the most existential.
A trio of illustrators has launched a class-action lawsuit alleging that the AI image generators Midjourney and Stable Diffusion were trained on copyrighted works, and now produce derivative works without the owners' consent. Meanwhile, the Center for Artistic Inquiry and Reporting has published an open letter written by Crabapple and journalist Marisa Mazria Katz, the center's executive director, calling on editorial outlets and newsrooms to "restrict AI illustration from publishing" altogether.
“This is an economic choice for society,” the letter reads. “While illustrators’ careers are set to be decimated by generative-AI art, the companies developing the technology are making fortunes. Silicon Valley is betting against the wages of living, breathing artists through its investment in AI.” At the time of writing, it had more than 2,700 signatories, including MSNBC host Chris Hayes, author Naomi Klein, actor John Cusack and Laszlo Jakab Orsos, vice president of arts and culture at the Brooklyn Public Library.
“I saw my work in the LAION-5B dataset used to train Stable Diffusion,” Crabapple says. “I saw DALL-E’s ability to churn out bastard versions of my work with the prompt ‘drawn by Molly Crabapple.’ I saw how tech corporations, backed by billions of dollars, had gobbled up my work and the work of countless other artists to train products whose goal is to replace us.”
AI generators, she notes, are cheaper and faster than humans, and most corporations won’t care too much about quality — they’ll happily use the synthesized works to replace artists, while the tech giants profit. “It’s the biggest art heist in history.”
A lot of outlets would already hesitate to publish AI-generated art for fear of blowback; the petition, built on the personal experience of many artists who've seen their work exploited, aims to formalize such instincts into policy.
“There is no ethical way to use the major AI image generators,” Crabapple says. “All of them are trained on stolen images, and all of them are built for the purpose of deskilling, disempowering and replacing real, human artists. They have zero place in any newsroom or editorial operation, and they should be shunned.”
While Crabapple and CAIR are focused primarily on artists’ rights, editorial workers in journalism, magazines and beyond are also starting to formulate human-first responses to AI.
Staff at magazines, including small science fiction publications such as Clarkesworld and industry leaders such as Wired, have made it clear that they will not accept AI-generated submissions. Freelance writers and digital content creators, meanwhile, are in the trenches, giving testimony at the U.S. Copyright Office and organizing a defense against the companies and outlets that appear to be seeking to automate content production.
And the Freelance Solidarity Project, a part of the National Writers Union, has begun discussions about how best to organize around the subject. The worry is that the most precarious writers, artists and digital content creators are at risk of being swept away by AI, and that their work, already barely protected, is being unfairly fed into the maw of the for-profit large language models.
“Any creative work that exists online is currently ‘fair game’ to be scraped to train AI engines and build economic value for those companies without regard for either the copyright or consent of the original creators,” Alexis Gunderson, a member of the Freelance Solidarity Project, tells me. “For many independent writers and artists, this reasonably feels like theft; for others, it can feel like an artistic violation.”
Worse, “there is also the very real fear — which the WGA strike is so successfully highlighting — that much of the work that digital media workers currently do, both as freelancers and in staff roles, is likely to be first on the chopping block once these LLMs get robust enough,” Gunderson says. “Which, in too many cases, they already are.”
Freelancers, who don’t have the benefit of union power to protect them from AI, are exploring other options, such as asserting moral rights to their work, and pressing the U.S. Copyright Office to make it easier to register — and protect — their published articles. But anxieties remain high, especially for less established and more vulnerable writers.
Finally, the online voices ringing out against AI have been surprisingly vigorous. Huge communities on Twitter, Reddit and other social media networks have called out the shoddiness and exploitative bent of the AI generation industry, and all this protest is already having an impact — beyond the strike, beyond the editorial policies and right down to the vibes, you could say. The sharing of AI-generated images online, for one thing, has gone from seeming cool and even a little spooky to lamer than an account with a blue check mark.
But there’s a long way to go. Too many executives in too many industries, such as entertainment, tech and journalism, recognize generative AI for what it is: an opportunity to wield leverage over already precarious workforces. There’s going to be a long, hard struggle, but it’s one worth fighting. The result will determine what kind of work we all get to do; who technology ultimately serves, us or the 1%; and whether we all profit from the rise of AI — or just those who own the algorithms.