Tech
I'm a college professor. My advice to young people who feel hooked on tech

When I was a child, computers were a fixture in my home, from the giant Atari on which I learned my ABCs, to the Commodore Amiga that my dad used for his videography business, to the PC towers that facilitated my first forays onto the internet. But tech was still a niche hobby back then. Even in college in the late 1990s and early 2000s, many of my friends got by just fine without computers.
For people in college now—namely, my students—things are decidedly different. Gadgets are everywhere, and are increasingly designed to insert themselves into every aspect of our consciousness, colonizing every spare moment of our time and attention. Gen Z and Gen Alpha have never known a world without mini-computers within arm’s reach. They learned to relate to the world through gadgets, to turn to them for everything from entertainment to education to escape. And when the COVID-19 pandemic disrupted their lives, it took away even more of their access to the offline world, making tech feel paradoxically both like a lifeline and a prison.
It's easy to call young people “screenagers” and blame them for being glued to their devices. But I know better. My students feel conflicted; they know they’re hooked, and they worry for their younger siblings who seem even more in the grip of all-consuming tech.
Several years ago, it occurred to me that I could do something to help. I began requiring students to put away all devices, including laptops and tablets, in my classes. It was an experiment both for them and for me: What happens when we remove the barrier tech has put between us and other people, between us and our own thoughts? What does that teach us about how to handle the explosion of hype around generative AI?
How I went from gadget geek to tech skeptic
My own journey with tech predates our always-on devices, way back to that old Atari. I had always been a little obsessed with gadgets, and when I bought my first iPhone in 2008, it was almost a religious experience.
My wife and I were living in New York City, and my entire family drove down from Boston to witness my initiation. Like pilgrims, we journeyed together to the flagship Apple Store on Fifth Avenue. We all stood in reverence at the foot of the spiral staircase, beneath the illuminated glass cube, as I was welcomed into the cult of Apple.
From then on, almost without fail, I’ve upgraded my phone annually, a September ritual as cyclical for me as going back to school. And it wasn't just the iPhone; I had the first or second iteration of the iPad, AirPods, and the Apple Watch, too. Back then it felt like Steve Jobs might announce something that would reshape the world every time he stepped on stage.
But in the 2010s, something started to change. Underwhelming new tech releases grew increasingly common, and the constant hype around them began to feel empty and manipulative. As both a college professor and a parent, I began to see the benefits of our always-connected devices becoming overshadowed by the negatives. The young people in my life are obsessed with their gadgets, legitimately afraid they’ll be disconnected from society if they aren’t extremely online, and they hate it. Many worry as much as their parents do about their phone use.
So, even before the hype that greeted the AI revolution of the last few years, I’d begun to look a lot more skeptically at claims that tech was changing our lives, and that more apps, devices, or wearables were automatically better.
What happens when we turn off the tech?
One day, near the end of the spring semester in 2019, I looked out at my class to see rows of students focused intently… on their laptop screens. They presumably had their devices out to take notes, but I wasn’t lecturing. I was trying to lead them into a discussion. This moment for me is trapped in time: It was the moment I decided I had to take drastic measures to recapture my students’ attention.
The following fall, my syllabus included a new section, which has remained in place since. I call it my in-class technology use policy and it begins, “This class is a laptop/mobile phone/tablet/headphone/AirPods-free zone. Bring a notebook and pens to each class.” I explain my reasoning and, like a good academic, cite my sources. I provide exceptions for emergencies, explaining that if a student has to take an urgent call, they can quietly slip out of the classroom to do so without judgment or penalty.
That first fall, I was nervous. Would they go along with it? Would my classes, previously well-loved, suddenly struggle to fill? To my great relief there was no significant pushback, no mass exodus. Going tech-free is still a shock, to be sure. At the start of each semester, an hour and fifteen minutes without a phone seems impossible for many students. But in time, most find it to be a relief. It gives them permission to take a break from the requirement to be always connected, always reachable, always on. Hopefully, it also creates space for deep and sustained thought.
I begin most classes by distributing an article to read—often a recently-published opinion piece—printed on paper. I encourage students to read it with pen in hand, marking it up as they go. As they read quietly, I look around the room at a group of so-called screenagers concentrating, without a device in sight. When they finish reading, they open their notebooks and write a response, by hand. In those first few weeks, I often see students massaging their palms, sore from lack of practice. After they write for five minutes or so, I open a discussion on what we just read and, distraction-free, the students engage.
In those discussions, I love that my students are actually paying attention to one another when they speak. Not everyone of course; some look sleepy and bored, but even that is better than distracted. I call this productive boredom: Without a phone or laptop to divert them, there is little left to do other than sit with their thoughts. What a gift. I ask them, “When was the last time your only task was to think?”
Lessons for the AI invasion
This experiment with a device-free classroom has also shaped my response to the AI revolution (I sometimes think of it more as an AI invasion) that has swept higher education since the debut of ChatGPT in 2022. Like smartphones before them, AI tools are wrapped in revolutionary rhetoric, trying to convince all of us that we’ll be left behind if we don’t drop our old habits overnight and jump on the bandwagon.
I’m not a luddite: I continue to be as curious about new technologies as ever. As soon as it came out, I peppered ChatGPT with questions to see if it could imitate my writing style. (It kind of can!) And I know there’s no going back; whether we like it or not, AI will be a significant presence in our lives, and I see it as my job to teach students how to use it responsibly. In my long journey with tech, I’ve learned that we can incorporate devices into our work without surrendering to marketing hype and manufactured FOMO.
As a writing professor, my job is to convince students that, as William Zinsser wrote, “writing is thinking on paper.” The process of writing — not the final product — is what sharpens our logical reasoning and self-expression. For students who don’t use AI in smart ways, the result is essays that are all product, no process — and no process means no real learning.
In my classes, students glimpse a time before they were born, when fewer distractions inhibited learning, when sitting with one’s thoughts—and, yes, being bored—could be productive and creative. I’m reminded, too, of why I love teaching, for the magic that happens when 20 people sit together in a room attending to one another and talking about ideas.
When we leave the classroom, we’ll go back to our devices, and even to our new AI tools. But hopefully the time away from them reminds us we have the power to keep tech in its place—and gives us a taste of what only human minds can do.
Tech
Hackers found a way around Microsoft Defender to install ransomware on PCs, report says

Windows users may want to reinforce their antivirus protection. Microsoft Defender should provide a line of defense against ransomware, but a new report claims that hackers have found a way to disable the tool and infect PCs with ransomware anyway.
A GuidePoint Security report (via BleepingComputer) found that hackers deploying Akira ransomware are exploiting a legitimate PC driver to load a second, malicious driver that shuts off Microsoft Defender, allowing for all sorts of monkey business.
The good driver being exploited here is called "rwdrv.sys," which is used by tuning software for Intel CPUs. Hackers abuse it to install "hlpdrv.sys," another driver that they then use to get around Defender — and start doing whatever it is they want to do.
GuidePoint reported seeing this type of attack starting in the middle of July. It doesn't seem like the loophole has been patched yet, but the more people know about it, the less likely it is for the exploit to work against them, at least in theory.
In the meantime, allow our colleagues at PCMag to recommend some fine third-party antivirus software to you for your Windows PC. For more information on the latest Akira ransomware attacks — including possible defenses — head to GuidePoint Security.
Tech
ChatGPT fans are shredding GPT-5 on Reddit as Sam Altman responds in AMA (updated)

GPT-5 is out, the early reviews are in, and they're not great.
Many ChatGPT fans have taken to Reddit and other social media platforms to express their frustration and disappointment with OpenAI's newest foundation model, released on Thursday.
A quick glimpse of the ChatGPT subreddit (which is not affiliated with OpenAI) shows scathing reviews of GPT-5. Since the model began rolling out, the subreddit has filled with posts calling GPT-5 a "disaster," "horrible," and the "biggest piece of garbage even as a paid user."
Awkwardly, Altman and other members of the OpenAI team had a preplanned Reddit AMA to answer questions about GPT-5. In the hours ahead of the AMA, questions piled up in anticipation, with many users demanding that OpenAI bring back GPT-4o as an alternative to GPT-5.
What Redditors are saying about GPT-5
Many of the negative first impressions say GPT-5 lacks the "personality" of GPT-4o, citing colder, shorter replies. "GPT-4o had this… warmth. It was witty, creative, and surprisingly personal, like talking to someone who got you. It didn’t just spit out answers; it felt like it listened," said one redditor. "Now? Everything’s so… sterile."
Another said, "GPT-5 lacks the essence and soul that separated Chatgpt (sic) from other AI bots. I sincerely wish they bring back 4o as a legacy model or something like that."
Several redditors also criticized the fact that OpenAI did away with the option to choose different models, prompting some users to say they're canceling their subscriptions. "I woke up this morning to find that OpenAI deleted 8 models overnight. No warning. No choice. No 'legacy option,'" posted one redditor who said they deleted their ChatGPT Plus account. Another user posted that they canceled their account for the same reason.
As Mashable reported yesterday, GPT-5 integrates various OpenAI models into one platform, and ChatGPT will now choose the appropriate model based on the user's prompt. Clearly, some users miss the old system and models.
Ironically, OpenAI had previously drawn criticism for offering too many model options; GPT-5 was supposed to resolve that confusion by streamlining the lineup into a single model.
Sam Altman responds to the criticisms
When Altman and the team logged onto the AMA, they faced a barrage of demands to bring back GPT-4o.
"Ok, we hear you all on 4o," said Altman during the AMA. "Thanks for the time to give us the feedback (and the passion!). We are going to bring it back for Plus users, and will watch usage to determine how long to support it."
Altman also addressed feedback that GPT-5 seemed dumber than it should have been, explaining that the "autoswitcher" that determines which version of GPT-5 to use wasn't working. "GPT-5 will seem smarter starting today," he said. Altman also added that the chatbot will make it clearer which model is answering a user's prompt. OpenAI will double rate limits for ChatGPT Plus users once the rollout is finished.
“As we mentioned, we expected some bumpiness as we roll out so many things at once. But it was a little more bumpy than we hoped for!” Altman said in the AMA.
GPT-5 is an improvement, but not an exponential one
Expectations for GPT-5 could not have been higher — and that may be the model's real problem.
Gary Marcus, a cognitive scientist and author known for his research on neuroscience and artificial intelligence — and a well-known skeptic of the AI hype machine — wrote on his Substack that GPT-5 makes “Good progress on many fronts” but disappoints in others. Marcus noted that even after multi-billion-dollar investments, “GPT-5 is not the huge leap forward people long expected.”
The last time OpenAI released a frontier model was over two years ago with GPT-4. Since then, several competitors like Google Gemini, Anthropic's Claude, xAI's Grok, Meta's Llama, and DeepSeek R1 have caught up to OpenAI on benchmarks, similar agentic features, and user loyalty. For many, GPT-5 had the power to reinforce or topple OpenAI's reign as the AI leader.
With this in mind, it was perhaps inevitable that some users would be disappointed, though plenty of ChatGPT users have shared positive reviews of GPT-5 as well. Time may blunt these criticisms as OpenAI makes improvements and tweaks to GPT-5. The company has also historically been responsive to user feedback, with Altman being very active on X.
"We currently believe the best way to successfully navigate AI deployment challenges is with a tight feedback loop of rapid learning and careful iteration," the company's mission statement avows.
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
UPDATE: Aug. 8, 2025, 3:20 p.m. EDT This story has been updated with Sam Altman's responses from the Reddit AMA.
Tech
YouTube will begin using AI for age verification next week

YouTube is officially rolling out its AI-assisted age verification next week to catch users who lie about their age.
YouTube announced in late July that it would start using artificial intelligence for age verification. And this week, 9to5Google reported that the new system will go into effect on Aug. 13.
The new system will "help provide the best and most age-appropriate experiences and protections," according to YouTube.
"Over the next few weeks, we’ll begin to roll out machine learning to a small set of users in the US to estimate their age, so that teens are treated as teens and adults as adults," wrote James Beser, Director of Product Management with YouTube Youth, in a blog post. "We’ll closely monitor this before we roll it out more widely. This technology will allow us to infer a user’s age and then use that signal, regardless of the birthday in the account, to deliver our age-appropriate product experiences and protections."
"We’ve used this approach in other markets for some time, where it is working well," Beser added.
The AI interprets a "variety of signals" to determine a user's age, including "the types of videos a user is searching for, the categories of videos they have watched, or the longevity of the account." If the system determines that a user is a teen, it will automatically apply age-appropriate experiences and protections. If the system incorrectly determines a user's age, the user will have to verify that they're over 18 with a government ID or credit card.
This comes at a time in which age verification efforts are ramping up across the world — and not without controversy. As Wired reported, when the UK began requiring residents to verify their ages before watching porn as part of the Online Safety Act, users immediately started using VPNs to get around the law.
Some platforms use face scanning or IDs, which can be easily faked. As generative AI gets more sophisticated, so will the ability to work around age verification tools. And, as Mashable previously reported, users are reasonably wary of giving too much of their private information to companies because of security breaches, as in the recent Tea app leak.
In theory, as Wired also reported, "age verification serves to keep kids safer." But, in reality, "the systems being put into place are flawed ones, both from a privacy and protection standpoint."
Samir Jain, vice president of policy at the nonprofit Center for Democracy & Technology, told the Associated Press that age verification requirements "raise serious privacy and free expression concerns," including the "potential to upend access to First Amendment-protected speech on the internet for everyone, children and adults alike."
"If states are to go forward with these burdensome laws, age verification tools must be accurate and limit collection, sharing, and retention of personal information, particularly sensitive information like birthdate and biometric data," Jain told the news outlet.