Consent, credit and compensation.
Those were the terms that SAG-AFTRA national executive director Duncan Crabtree-Ireland and Writers Guild of America West negotiating committee member John August called for in order for guild members’ work, likenesses and brands to be used to train artificial intelligence systems. At a hearing before the Federal Trade Commission on Wednesday, they joined other representatives for various groups — including authors, voice actors and musicians — to warn of the encroachment of generative AI into the media and entertainment industries that they say undercuts their labor and presents heightened risks of fraud.
The rise of AI tools has concerned creators, who have been urging policymakers to institute guardrails surrounding the use of the technology. In the absence of regulation, the WGA secured a deal with studios and streamers that provides members some protection over how AI can be used and how their work is credited. SAG-AFTRA has been pushing for similar terms in its negotiations.
Crabtree-Ireland, who left the hearing early to return to negotiations with studios, said that human-generated content from actors, such as their likenesses, voices and performances, “reflects real and substantial work in its intellectual property” deserving of legal protection. He stressed a “double standard” in the potential use of AI by studios and other companies looking to deploy the technology.
“If an individual decided to infringe on one of these companies’ copyright-protected content, and distributed it without paying for the licensing rights,” Crabtree-Ireland said, “that individual would face a great deal of financial and legal ramifications.”
He added, “So why is the reverse not true? Shouldn’t the individuals whose intellectual property was used to train the AI algorithm at least be equally protected?”
Copyright doesn’t account for actors’ faces or singers’ voices, but there are laws in some states — including California, New York and Florida — that protect against unauthorized commercial use of a person’s name, likeness and persona, among other things. These laws are meant to give people the exclusive right to profit off of their identities. Music publishers are currently pushing for a federal right of publicity law to combat voice mimicry in AI tracks, which would likely also help actors and other creators.
As part of their tentative deal with the Alliance of Motion Picture and Television Producers, writers ensured that the use of generative AI tools won’t affect their credit or compensation and must be disclosed by studios. August, pointing to those provisions of the agreement, likened scribes and other artists to small businesses, “each competing in the marketplace to sell their work.” To succeed, writers develop unique styles and brands that are essentially being stolen by AI companies indiscriminately scraping the Internet for material to train AI systems, he explained.
“This is theft, not fair use,” August said, referring to the legal concept that copyrighted works can be used to make new creations as long as they are transformative. “Our work, protected by copyright and our own contractual rights, are being used entirely without our authorization, without any attribution, or compensation.”
For writers and authors, the issue isn’t just about the copying of their scripts or books that are then fed into so-called large language models that power the human-mimicking AI bots that can produce pitches and loglines in a matter of seconds. It’s also about AI companies profiting off of their work by creating infringing material, which the FTC has signaled could rise to an unfair method of competition. August said that bad actors are “using stolen goods to undercut the prices of a seller,” like in AI-generated knockoffs of popular novels that are being sold on Amazon.
This remains a point of contention for writers, he continued, because the WGA deal only covers their work for studios, while “most of the real work on AI is being done by companies like Google, Facebook and OpenAI,” which have no contractual relationship with the guild. August emphasized, “Public policy will play a crucial role in protecting our members.”
At the hearing, many of the concerns raised by the WGA were echoed by Authors Guild policy director Umair Kazi. He focused most of his remarks on the use of members’ works as training data for AI companies, which is fueling the production of competing derivative works.
“It is inherently unfair to use copyrighted works to create highly profitable tech, which is also able to produce competing derivative works without the creators’ consent, compensation or credit,” Kazi said. “There’s a serious risk of market dilution from machine generated books and other works that can be cheaply mass produced, and shall inevitably lower the economic and artistic value of human created works.”
For example, generative AI is already being used to impersonate popular authors to create low-quality ebooks. Kazi detailed, “Earlier this year, AI-generated books started dominating Amazon’s best-seller list in the young adult romance category.”
Last month, the Authors Guild — led by prominent authors including George R.R. Martin, Jonathan Franzen and John Grisham — stepped into the legal battle against OpenAI. The group, with more than 13,000 members in its ranks, represents what is likely the most formidable opponent suing the company in a case that could lead to hundreds of millions of dollars in damages and an order requiring it to destroy systems trained on copyrighted works.
A potential licensing market is a critical component of whether AI companies will be able to avail themselves of a fair-use defense in lawsuits accusing them of copyright infringement. AI companies will likely run into the Supreme Court’s recent decision in Andy Warhol Foundation for the Visual Arts v. Goldsmith, which effectively reined in the scope of fair use. In that case, the majority stressed that an analysis of whether an allegedly infringing work was sufficiently transformed must be balanced against the “commercial nature of the use.” If authors can establish that OpenAI’s scraping of their novels undercut their economic prospects to profit off of their works — by, for example, interfering with potential licensing deals that the company could have instead pursued — then fair use is unlikely to be found, according to legal experts consulted by THR.
An opt-in system should be mandatory under any potential licensing regime, said August. This means that creators’ works would be excluded from training data by default unless they affirmatively consent, rather than being forced to opt out.
Multiple speakers also alerted the FTC to the rise of fraud using AI tools. Tim Friedlander, president of the National Association of Voice Actors, pointed to deepfake ads featuring Tom Hanks and MrBeast. “Currently it only takes three seconds of source audio to create a realistic voice clone, and synthetic content can be used to deceive consumers into believing that trusted voices are communicating with them,” he said.
Just this week, Hanks and MrBeast took to social media to warn fans that businesses pilfered their likenesses without consent to create AI versions of themselves for commercial purposes.
In one of the more disturbing allegations, Sara Ziff, founder of the Model Alliance, said that modeling agencies are using AI deepfakes rather than hiring real models to meet diversity goals.
“A digital model who was created through AI in 2017 by the world’s first all-digital modeling agency has appeared as the face of high-end brands, such as BMW and Louis Vuitton,” said Ziff, who noted that Levi’s also announced this year that it’s using AI-generated models to increase the appearance of diversity. “Critics have called this a form of digital blackface.”
The scope of potentially fraudulent business practices the FTC is concerned about also includes actions that some companies may be deploying to undermine labor. Commissioner Alvaro Bedoya pointed to studios cornering background actors into scanning their likenesses for future use.