Tech
The Kadrey v. Meta fair use ruling is just the start of a long, complex AI copyright battle

On Wednesday, the judge in the landmark AI copyright case Kadrey, et al. v. Meta Platforms Inc. ruled in Meta’s favor. And U.S. District Judge Vince Chhabria seemed to do so reluctantly, calling his own ruling “in significant tension with reality.”
Thirteen authors, including Sarah Silverman, Ta-Nehisi Coates, and Junot Diaz, sued Meta for its unlicensed use of their books to train its Llama AI models.
The facts of the case seemed particularly egregious. Not only did Meta pirate unlicensed copies of the authors’ works, but internal Meta messages revealed during discovery showed that the company's own employees expressed legal and ethical doubts about pirating those works. Other messages suggest that employees sought to eliminate traces of piracy, looking for words like "stolen" and "pirated" as part of the team's "mitigation" efforts.
Instead of settling the messy copyright battle over AI training, Chhabria's ruling adds another layer of complexity to this legal issue.
Just a day earlier, a judge in a similar AI copyright case ruled in favor of another AI company, Anthropic. In the same Northern District of California, U.S. District Judge William Alsup declared in Bartz v. Anthropic that Anthropic's use of pirated books in shadow libraries Books3 and LibGen (the same datasets in the Meta case) was fair use.
However, Robert Brauneis, an intellectual property law professor at George Washington University Law School, said Judge Alsup and Judge Chhabria used dramatically different reasoning. Both cases hinged on the fair use legal doctrine, particularly the fourth factor in such defenses — potential market harms.
"Judge Alsup has a very narrow view: if a generative AI output does not itself infringe a particular work used to train the model, any loss in sales of the training work caused by people using the AI output instead cannot be taken into account as 'market harm' under the fourth factor," said Brauneis, who was among a group of copyright lawyers that filed an amicus brief in support of plaintiffs in Kadrey v. Meta.
"Judge Chhabria says that's wrong: harm caused by 'diluting' the market for a training work can and should be taken into account, and serious market dilution harm can even outweigh a high level of transformativeness under the first factor."
So while both judges sided with the fair use argument, their opposing rationales lay the groundwork for a complex and fragmented legal landscape.
The Kadrey plaintiffs whiffed on the fair use argument
The plaintiffs tried, and failed, to argue against Meta’s fair use defense. In a blog post written after the May 1 oral arguments, Kevin Madigan, senior VP of policy and government affairs for the Copyright Alliance, wrote that the plaintiffs’ lawyer “shockingly” failed to present potential counterarguments.
Of the four fair use factors, the case mostly hinged on factor one, whether the use is transformative, and factor four, whether the use harms the existing or future market for the copyrighted work. Chhabria favored Meta on factor one. "There is no serious question that Meta’s use of the plaintiffs’ books had a 'further purpose' and 'different character' than the books — that it was highly transformative," said Chhabria in his ruling.
The deliberation then turned to the fourth factor, market harms, where Chhabria had much to say about the plaintiffs’ counsel’s argument: they simply failed to show that Meta’s copying caused market harm.
In discussing market harms during oral arguments, Chhabria brought up a hypothetical — future Taylor Swifts.
"Even if a million songs are produced by [Meta's Llama] model in the style of a Taylor Swift song, it's not going to affect the market for Taylor Swift songs. But what about the next Taylor Swift?" Chhabria asked Meta lawyer Kannon Shanmugam. "What about the up-and-coming, relatively unknown artist who is writing songs… and by feeding copyrighted works like hers into the model, it enables the model to produce a billion pop songs?"
Chhabria seemed to foreshadow his eventual ruling when he questioned plaintiffs’ counsel David Boies about evidence of market harms.
"Whether it's in the summary judgment record or not, it seems like you're asking me to speculate that the market for Sarah Silverman's memoir will be affected by the billions of things that Llama will ultimately be capable of producing," said Chhabria, "and it's just not obvious to me that that's the case."
Chhabria told Boies, "you lose if you can't show that the market for the copyrighted works that are being used to train the models are dramatically impacted."
Ultimately, Chhabria decided that Meta had the stronger argument.
"Meta has defeated the plaintiffs’ half-hearted argument that its copying causes or threatens significant market harm," said Chhabria. "That conclusion may be in significant tension with reality, but it’s dictated by the choice the plaintiffs made… while failing to present meaningful evidence on the effect of training LLMs like Llama with their books on the market for [AI-generated] books."
On the day of the ruling, a Meta spokesperson provided this statement to Mashable: "We appreciate today’s decision from the Court. Open-source AI models are powering transformative innovations, productivity and creativity for individuals and companies, and fair use of copyright material is a vital legal framework for building this transformative technology."
In his decision, the district judge said his ruling was less about the fair use defense of using pirated books to train AI models and more about the shortcomings of the plaintiffs' argument. "The Court had no choice but to grant summary judgment to Meta," said Chhabria, before adding:
"This is not a class action, so the ruling only affects the rights of these thirteen authors—not the countless others whose works Meta used to train its models. And, as should now be clear, this ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful. It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one."
His ruling also leaves the door open for other artists to file similar copyright suits against Meta — and other AI companies. Chhabria even postulated that "it will be illegal to copy copyright-protected works to train generative AI models without permission."
But this ruling also has symbolic meaning for artists.
"If this case comes out and says that training of large language models on pirated datasets from which copyright information has been stripped is fair use, then that is a horrible, horrible outcome for millions of creative professionals around the world," said Justin Hughes, a law professor at Loyola Law School, in an interview with Mashable before the ruling.
AI is already impacting creative livelihoods
Kadrey v. Meta is one of dozens of copyright lawsuits against AI companies. At the time of publication, AI blog ChatGPT Is Eating the World counted 39 ongoing cases.
But while courts deliberate, generative AI is already making a big impact on creative industries.
Generative AI's ability to automate the creation of text, images, video, and audio is already replacing creative jobs. In 2024, researchers from the Imperial College London Business School and the Berlin School of Economics published a paper analyzing how generative AI is affecting the labor market. Since the introduction of ChatGPT, they found "nearly immediate decreases in posts for online gig workers across job types, but particularly for automation-prone jobs." The jobs most impacted were writing gigs, which decreased by 30 percent.
A 2023 report commissioned by the Animation Guild to measure generative AI's impact in entertainment industries stated, "almost two-thirds of the 300 business leaders surveyed expect GenAI to play a role in consolidating or replacing existing job titles in their business division over the next three years." According to the study, which was conducted by CVL Economics, that's 203,800 missing jobs by 2026.
Many artists see the existence of AI tools like Llama as an existential threat. Adding insult to injury, AI models were trained on the very human expression they’re accused of replacing.
In an amicus brief in support of the plaintiffs, the Association of American Publishers argued that this case was much simpler than it seemed. Meta, "a company valued at over a trillion dollars, asks this Court to declare that it is free to appropriate and commercially exploit the content of copyrighted works on a massive scale without permission or payment for that content, a ruling that would have catastrophic consequences for authors and publishers of books, journals and other textual works protected by copyright."
What happens now?
While Meta prevailed on the fair use ruling, Madigan called Chhabria's decision a "mixed bag."
"The things that are not good for copyright owners are Judge Chhabria's treatment of transformative use under the first factor, and also his unwillingness to recognize licensing markets under the fourth." Here, Madigan was referring to the plaintiff's potential loss of licensing deals, an argument that Chhabria said he wouldn't take into account.
"But why that is not necessarily the worst thing in the world, is that it's so cabined to the specifics of this case and the failure to develop a record and raise certain issues," Madigan continued. The plaintiffs will also likely appeal, he added.
A spokesperson for Boies Schiller Flexner, the firm representing the plaintiffs, told Mashable, "The court ruled that AI companies that ‘feed copyright-protected works into their models without getting permission from the copyright holders or paying for them’ are generally violating the law. Yet, despite the undisputed record of Meta’s historically unprecedented pirating of copyrighted works, the court ruled in Meta’s favor. We respectfully disagree with that conclusion." They did not respond to the question of whether they would file an appeal.
Kadrey v. Meta and Bartz v. Anthropic are often lumped together because they both focus on the inputs of pirated books as data to train AI models. By contrast, other high-profile AI copyright cases — the New York Times lawsuit against OpenAI and Microsoft, another case against Anthropic from major record labels (Concord v. Anthropic), and the more recent Disney v. Midjourney — focus on AI models' outputs.
For these cases, "where they've all shown evidence of infringing output, [Kadrey v. Meta] has absolutely no bearing," said Madigan. With cases that focus on output, "you don't have to get into sort of these more abstract doctrinal discussions about transformative use and whether training is transformative in purpose. You just have to show side-by-side verbatim copies," he continued.
You can no longer go live on Instagram unless you have 1,000 followers

It’s hard enough getting into the content creator space without the platform you’re on putting up restrictions. However, Instagram is now the latest social media app to institute such a restriction — forcing people to have at least 1,000 followers before they can go live on the site. Previously, Instagram let anyone go live, regardless of account status.
The news first started circulating after smaller creators posted the notice on other social media channels.
The notice reads, "Your account is no longer eligible for Live. We changed the requirements to use this feature. Only public accounts with 1,000 followers or more will be able to create live videos."

Credit: Chance Townsend / Instagram screenshot
TechCrunch followed up with Instagram and confirmed that the social network giant made this change intentionally. As expected, small creators aren’t fans of the change, which has been widely criticized across social media. Creators with private accounts won’t be able to go live at all, even if the account has over 1,000 followers. Instagram says the change was made to “improve the overall Live consumption experience.”
There are pros and cons to the decision, as TechCrunch notes. On the one hand, small creators will have an even harder time breaking into live streaming than they already do, since accumulating followers organically can be a long and painstaking process. On the other hand, the change likely culls a lot of low-quality streams with only a couple of viewers each, which makes it easier to find better live content while also saving Meta money.
This change brings Instagram more in line with TikTok’s live streaming rules. However, the number of followers you need on TikTok can vary, with plenty of people getting access long before they reach 1,000 followers. As of this writing, Facebook’s Help Center says that going live on Facebook only requires a 60-day-old account and at least 100 followers. YouTube still allows users to go live with just 50 followers, while Twitch remains the easiest to get started on, with no follower requirement at all.
Lovense has finally fixed its account takeover problem

Lovense is well-known for its selection of remote-controlled vibrators. It’s slightly less known for a massive security issue that exposed user emails and allowed accounts to be wholly taken over by a hacker without even needing a password. Fortunately, both issues have been fixed, but it didn’t happen without some drama.
As the story goes, security researcher BobDaHacker (with some help) accidentally found out that you could uncover a user’s email address pretty easily by muting someone in the app. From there, they were able to figure out that you could do this with any user account, effectively exposing every Lovense user’s email without much effort.
With the email in hand, it was then possible to generate a valid gtoken without a password, giving a hacker total access to a person’s Lovense account with no password necessary. The researchers told Lovense of the issue in late March and were told that fixes were incoming.
In June 2025, Lovense told the researchers that a full fix would take 14 months to implement because it did not want to force legacy users to upgrade the app. Partial fixes rolled out over time, but the underlying problems remained. On July 28, the researchers posted an update showing that Lovense was still leaking emails, leaving over 11 million user accounts exposed.
"We could have easily harvested emails from any public username list," BobDaHacker said in a blog post. "This is especially bad for cam models who share their usernames publicly but obviously don't want their personal emails exposed."
It was around then that the story started making the rounds. Other researchers reached out to show that the exploit had actually been known as far back as 2022, and that Lovense had closed the issue without shipping a fix. After two more days of coverage, the sex toy company finally rolled out fixes for both exploits on July 30.
It’s not Lovense’s first roll in the mud. In 2017, the company was caught with its proverbial pants down when its app was found to be recording audio of users during sessions with the toy. Lovense fixed that issue as well, stating that the audio data was never sent to its servers.
Tom Holland teases the new suit for Spider-Man: Brand New Day
Sony and Marvel have revealed a fresh look for Tom Holland’s Spider-Man, and it’s a return to basics. In a brief 22-second teaser, fans got a decent look at Spidey’s new suit, which leans heavily into the classic comic design.
Gone is the ultra-slick Stark Suit, the high-tech armor gifted by Tony Stark, which Holland’s Peter Parker wore in three solo films and multiple Avengers crossovers.
Spoilers for 2021’s No Way Home:
By the film’s end, Peter’s high-tech suit is wrecked — and so is everything else. It's a brutal reset that leaves Peter truly alone and stripped of all the Stark tech that powered his previous adventures. This mirrors the more grounded, scrappy origins many fans felt had been missing from the MCU’s version of the character.
The closing shot in No Way Home is of a homemade suit — vibrant, hand-sewn, and all Peter — and signaled a fresh start. Now, with Brand New Day on the horizon, we’re finally seeing that suit in action. And yeah — it looks great. Here’s hoping the movie lives up to it.
Spider-Man: Brand New Day swings into theaters July 31, 2026, with Shang-Chi director Destin Daniel Cretton at the helm.