All the AI news of the week: Hands-on with Meta's AI app, ChatGPT, and leaderboard drama

Just like AI models, AI news never sleeps.
Every week, we're inundated with new models, products, industry rumors, legal and ethical crises, and viral trends. If that's not enough, the rival AI hype/doom chatter online makes it hard to keep track of what's really important. But we've sifted through it all to recap the most notable AI news of the week from the heavyweights like OpenAI and Google, as well as the AI ecosystem at large. Read our last recap, and check back next week for a new edition.
Another week, another batch of AI news coming your way.
This week, Meta held its inaugural LlamaCon event for AI developers, OpenAI struggled with model behavior, and LM Arena was accused of helping AI companies game the system. Congress also passed new laws protecting victims of deepfakes, and new research examines AI's current and potential harms. Plus, Duolingo and Wikipedia have very different approaches to their new AI strategies.
What happened at Meta's first LlamaCon

At LlamaCon, Meta's first conference for AI developers, the two big announcements were the launch of a standalone Meta AI app to compete more directly with ChatGPT and the Llama API, now in limited preview. Following reports that a standalone Meta AI app was in the works, OpenAI CEO Sam Altman joked that maybe OpenAI should make its own social media app, and now that is reportedly happening for real.
We also went hands-on with the new Llama-powered Meta AI app. For more details about Meta AI's top features, read Mashable's breakdown.
During LlamaCon's closing keynote, Mark Zuckerberg interviewed Microsoft CEO Satya Nadella about a bunch of trends, ranging from agentic AI capabilities to how we should measure AI's advancements. Nadella also revealed that up to 30 percent of Microsoft's code is written by AI. Not to be outdone, Zuckerberg said he wants AI to write half of Meta's code by next year.
ChatGPT has safety issues, goes shopping
Meta AI and ChatGPT both got busted this week for engaging in sexually explicit chats with minors.
OpenAI said this was a bug and that it's working on a fix. ChatGPT had another issue this week: the latest GPT-4o update made the chatbot too much of a suck-up. Altman described the model's behavior as "sycophant-y and annoying," but users were concerned about the dangers of releasing a model like this, highlighting problems with iterative deployment and reinforcement learning.
OpenAI was even accused of intentionally tuning the model to keep users more engaged. Joanne Jang, OpenAI's head of model behavior, jumped on a Reddit AMA to do damage control. "Personally, the most painful part of the latest sycophancy discussions has been people assuming that my colleagues are irresponsibly trying to maximize engagement for the sake of it," wrote Jang.
Earlier in the week, OpenAI announced new features to make products mentioned in ChatGPT responses more shoppable. The company said it isn't earning purchase commissions, but it smells an awful lot like the beginnings of a Google Shopping competitor. Did we mention OpenAI would buy Chrome if Google is forced to divest it? Because they totally would, FYI.
The ChatGPT maker has had a few more problems with its recent models. Last week, we reported that o3 and o4-mini hallucinate more than previous models, by OpenAI's own admission.
Anyone in the U.S. can now sign up for Google AI Mode
Meanwhile, Google is barreling ahead with AI-powered search features. On Thursday, the tech giant announced that it's removing the waitlist to test out AI Mode in Labs, so anyone over 18 in the U.S. can try it out. We spoke with Robby Stein, VP of product for Google Search, about how users have responded to its AI features, the future of search, and Google's responsibility to publishers.
Google also updated Gemini with image editing tools and expanded NotebookLM, its AI podcast generator, to over 50 languages. Meanwhile, Bloomberg reported that Google has been quietly testing ads inside third-party chatbot responses.
We're keeping a close eye on that final development, and we are very curious how Google plans to inject ads into AI chatbots. Would you trust a chatbot that gave you sponsored answers?
Leaderboard drama
Researchers from AI company Cohere, Princeton, Stanford, MIT, and Ai2 published a paper this week calling out Chatbot Arena for essentially helping AI heavyweights rig their benchmarking results. The study said the popular crowdsourced benchmarking tool from UC Berkeley allowed Meta, Google, OpenAI, and Amazon "extensive private testing" and gave them more prompt data, which "significantly" improved their rankings.
In response, LM Arena, the group behind Chatbot Arena, said "there are a number of factual errors and misleading statements in this writeup" and posted a point-by-point rebuttal to the paper's claims on X.
The issue of benchmarking AI models has become increasingly problematic. Benchmark results are largely self-reported by the companies that release them, and the AI community has called for more transparency and accountability by objective third parties. Chatbot Arena seemed to provide a solution by allowing users to choose the best responses in blind tests. But now LM Arena's practices have come into question, further fueling the conversation around objective evaluations.
A few weeks ago, Meta got in trouble for using an unreleased version of its Llama 4 Maverick model on LM Arena, which scored a high ranking. LM Arena updated its leaderboard policies, and the publicly available version of Llama 4 Maverick was added instead, ranking way lower than the unreleased version.
Lastly, LM Arena recently announced plans to form a company of its own.
Regulators and researchers tackle AI's real-world harms
Now that generative AI has been in the wild for a few years, the real-world implications have started to crystallize.
This week, the U.S. Congress passed the "Take It Down" Act, which requires tech companies to remove nonconsensual intimate imagery within 48 hours of a request. The law also outlines strict punishments for deepfake creators. The legislation had bipartisan support and is expected to be signed by President Donald Trump.
The nonpartisan U.S. Government Accountability Office (GAO) published a report on generative AI's impact on humans and the environment. The conclusion is that the potential impacts are huge, but exactly how big they are is unknown because "private developers do not disclose some key technical information."
And in the realm of the frighteningly real and specific harms of AI, a study from Common Sense Media said AI companion apps like Character.AI and Replika are unequivocally unsafe for teens. The researchers say if you're too young to buy cigarettes, you're too young for your own AI companion.
Then there was the report that researchers from the University of Zurich secretly deployed AI bots in the r/changemyview subreddit to try and convince people to change their minds. Some of the bot identities included a statutory rape victim, "a trauma counselor specializing in abuse," and "a black man opposed to Black Lives Matter."
Other AI news…
In other news, Duolingo is taking an "AI-first" approach, which means replacing its contract workers with AI whenever possible. On the flip side, Wikipedia announced it's taking a "human-first" approach to its AI strategy. It won't replace its volunteers and editors with AI, but will instead "use AI to build features that remove technical barriers to allow the humans at the core of Wikipedia."
Yelp deployed a bunch of AI features this week, including an AI-powered answering service that takes calls for restaurants, and Governor Gavin Newsom wants to use genAI to solve California's legendary traffic jams.
Toxic relationship with AI chatbot? ChatGPT now has a fix.

"We don’t always get it right. Earlier this year, an update made the model too agreeable, sometimes saying what sounded nice instead of what was actually helpful. We rolled it back, changed how we use feedback, and are improving how we measure real-world usefulness over the long term, not just whether you liked the answer in the moment," OpenAI wrote in the announcement. "We also know that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress."
Broadly, OpenAI has been updating its models in response to claims that its generative AI products, specifically ChatGPT, are exacerbating unhealthy social relationships and worsening mental illnesses, especially among teenagers. Earlier this year, reports surfaced that many users were forming delusional relationships with the AI assistant, worsening existing psychiatric disorders, including paranoia and derealization. In response, lawmakers have turned their attention to regulating chatbot use more strictly, as well as how chatbots are advertised as emotional partners or replacements for therapy.
OpenAI has recognized this criticism, acknowledging that its previous 4o model "fell short" in addressing concerning behavior from users. The company hopes that these new features and system prompts will succeed where its previous versions did not.
"Our goal isn’t to hold your attention, but to help you use it well," the company writes. "We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured? Getting to an unequivocal 'yes' is our work."
The TikTok artist behind the viral "unknowing bunny" song pits human creativity against AI illusion
Were you tricked by the video of a bunch of bunnies jumping on a trampoline on TikTok? Well, nearly 230 million people were — and plenty of those viewers had no idea that it was actually AI. In response, the creator who brought us the Punxsutawney Phil musical, Oliver Richman (or @olivesongs11), wrote and recorded a 30-second song about the AI video, also for TikTok. He wrote the song on day 576 of an ongoing project, where he writes a new song each day.
"That project has changed my life in so many ways," Richman told Mashable, adding that it brought him "back to the joy of creating." He scrolled across the viral video of the bunnies jumping on the trampoline and said he was "certainly fooled" and "thought they were real."
"So when I learned that they weren't, I was like, 'Oh, I think this is today's song."
The unknowing bunny song on TikTok now has over 3.8 million views, 600,000 likes, and hundreds of comments like, "Bo Burnham! At The Disco" and "Wait until you see the bear on a trampoline. Spoiler: also AI."
The song goes like this:
There were bunnies that were jumping on a trampoline
And I just learned that they weren't real
If a bot can inhabit
An unknowing rabbit
It might manufacture the way you make me feel
How do I know that the sky's really sunny?
Sometimes it feels like your love is as real as
An unknowing bunny
The video has inspired covers and renditions, stop-motion videos, reactions, and a variety of other really cool human-made art. As one creator wrote on a TikTok video using the sound, "The fact that this song written about AI is going viral is incredibly healing. Especially because us as artists and songwriters are being threatened of our livelihoods due to the use of AI. And AI could never create something this unique with this much feeling."
Richman said the response to his video has been "the most surreal thing ever."
"Every piece of art that I've seen, I like get emotional," he said. "It certainly made me feel connected to the beauty of the messiness of being a human. And the imperfections that AI tends to delete or perfect — seeing all of this human art has just been a very emotional and cool experience."
As Mashable's Tim Marcin recently wrote about the influx of faux surveillance footage of animals, it "seems to be a new genre of AI slop." But give the internet slop, and creators might make porridge (is that a saying?).
In the face of all the AI slop we see online, creators like Richman are staying positive. "Art is so cool. Human art is so cool, and that really excites me."
Updated on Aug. 4 at 3:00 p.m. ET — This story has been updated to include an interview with creator Oliver Richman. Some quotes have been lightly edited for clarity and grammar.
Verizon reportedly cuts loyalty discounts after increasing fees

Verizon customers reportedly got double bad news this week: the phone carrier is raising fees and removing loyalty discounts.
According to users on the Verizon subreddit, several customers reported receiving an email from Verizon informing them their account discounts are ending. "We are writing to let you know that a discount on your account will soon end," the email said, according to a redditor. "This discount will be removed no sooner than September 1, 2025." Several other redditors chimed in on the thread, saying they had received the same email about losing loyalty perks offered to longstanding customers. Mashable has reached out to Verizon for comment and will update this story with a response.
A few days earlier, Verizon confirmed to Tom's Guide that the company is increasing fees for activations, phone lines, and tablet plans by Sept. 1.
Verizon customers are understandably unhappy about the changes. Some commented that they might switch carriers to T-Mobile or AT&T as a result. "They just keep finding ways to crap on loyal customers," commented one redditor, underscoring the thread's general sentiment that longtime customers are being penalized for their loyalty.
According to Tom's Guide, Verizon is reportedly trying to persuade customers on older plans to switch to its newer myPlan subscription. "We want to ensure you get the best value and experience from Verizon and encourage you to check out our myPlan options for the plan that works best for you," the email to customers reportedly said.
Cutting loyalty discounts and upping fees is a bold way to do that, since it seems to be alienating customers even more.