Tech
Five years of remote work changed workplace accessibility. Employees with disabilities will feel its loss.

On Jan. 20, wasting little time during his first 24 hours in office, President Donald Trump issued a memorandum terminating federal remote work arrangements for millions of government employees. The move pleased many Republican lawmakers — authors of a fistful of bills seeking to monitor or cull remote workforces — and Trump's corporate supporters, many of whom have rolled out their own in-person work requirements over the last year. Return-to-office (RTO) mandates followed, eschewing the findings of experts who have documented numerous benefits of telework, as the new administration established a hard line on remote work.
Such moves, paired with cuts to the federal workforce, have been praised by RTO's proponents as wins for productivity and reduced spending, even as portions of the workforce are forced to choose between in-person work and leaving their positions. But few have acknowledged that the brunt of these decisions will be shouldered by already at-risk workers.
The real people weighing the RTO ultimatum
"Really good people — who are federal employees who have disabilities — are losing their job, not because of their performance, but because of something else," explained Katy Neas. Neas is the president and CEO of disability rights organization The Arc and a former legislative assistant within the U.S. Senate Subcommittee on Disability Policy, a federal body that oversaw historic legislation like the Americans with Disabilities Act (ADA) and the Individuals with Disabilities Education Act (IDEA). "The federal government has always been a place where people with disabilities have thrived, because it's big enough that they could get health insurance, and they could get the accommodations that they need in order to be successful in the world of work."
And that's underselling it. The federal government boasts the highest percentage of people with disabilities in its workforce, with state governments following closely behind. Neas explains that before the passage of the Affordable Care Act, which prohibits health insurance discrimination and opens up Medicaid access for people with disabilities, many flocked to the federal government because of its stable health coverage.
"For as long as the record has been kept, people with disabilities are in the workforce at a significantly lower rate than people without disabilities," said Dan Stewart, managing attorney for education and employment for the National Disability Rights Network (NDRN). Across all demographics, people with disabilities have lower employment rates and are much more likely to be self-employed or to take on part-time work — and far more people with disabilities are employed without pay or at subminimum wages than their nondisabled peers.
But those numbers have finally shifted. In the five years since workforces moved en masse to work from home arrangements amid a global pandemic, remote work has, on the whole, increased productivity and led to higher wages across sectors, and it's also increased the number of people with disabilities in the workforce. Employment for people with disabilities was at 22.7 percent in 2024 — a historic high since the Bureau of Labor Statistics began compiling the numbers.
"As technology has evolved, more people can demonstrate their abilities than ever before," said Neas. Greater shares of the disabled workforce are working remotely than those without disabilities, according to recent reports, and flexible work schedules were among the most granted accommodations for workers with disabilities.
If a societal goal is to have people working, [remote work] is a tool to do it — especially for people with disabilities, but not exclusively for people with disabilities.
– Dan Stewart
Workplace accommodation processes — a historically wrought battle — were positively impacted by the pandemic's normalization of remote work, a necessary cultural shift in an increasingly unhealthy work-life relationship. The ADA, which requires that employers provide reasonable accommodations for their workers, does not specifically mandate the option of remote work. Instead, workers argued for remote work options with their respective employers for decades. Five years ago, as the majority of workers moved online, those conversations became immeasurably easier.
"What the pandemic did was broaden our horizons about what a reasonable accommodation is," Neas explained. "We also learned that one size has never fit all. [E]verybody's going to have unique needs during the work day that are necessary for them to get the work done. We've learned to be a little more accepting of that nuance within the work day, which I think is good for all of us."
Inadequate support for workers with disabilities has repercussions not just on individuals, but the economy as a whole, Stewart explained. "From one standpoint, more workers is just simply good business. We're tapping into the skills, the talents, the contributions of people with disabilities, and remote work does tend to facilitate that. If a societal goal is to have people working, this is a tool to do it — especially for people with disabilities, but not exclusively for people with disabilities."
Going further, nearly 45 million Americans live with a disability — about half of them are between the ages of 60 and 64, still well within the age range of employed Americans. The older workforce, usually defined as workers 65 and older, has doubled since the 1980s and is steadily growing as well. And as the average age of the American worker increases, a higher percentage of the labor force will need disability-related accommodations in their lifetimes.
"It's critical to see people with disabilities as productive, contributing citizens of not only local communities but also the national economy — to see people with disabilities as having an immense untapped social and economic capital that is being imperiled by the different cuts that we're seeing," said Stewart.
Diverse workforces, made up of women, parents, caregivers, and workers with disabilities, are squaring up against a harsher workplace reality under the narrative of the "great return." And, even as the country celebrates the 35th anniversary of the ADA this year, people with disabilities may be entering a new stretch of accessibility barriers.
The impact of attacking workplace accommodations
While many workplaces have leaned into remote and hybrid work, the longevity of telework has remained in question, and the recent push for federal RTO policies is not the first attack on remote work that's raised alarm bells among disability advocates. In 2023, as corporations like Amazon and Google shifted back to in-person work, disability rights groups argued the shift would disproportionately affect workers with disabilities, many of whom required greater transportation and workplace accommodations. Many argue that forced in-person work could lead to a rise in workplace discrimination or ableist micro-aggressions, as well.
More recently, Amazon revised its disability policies, making it more difficult for employees to receive remote work exemptions as part of disability accommodations.
A broad reversal of such protections, coupled with the anti-DEI narrative pushed by the Trump administration, may lead to a revitalization of discriminatory, or even segregationist, policies that silo workers with disabilities into specific, unskilled jobs, negating years of effort to enter the "real" workforce. The removal of universalized remote work policies may also dangerously single out employees in need of accommodations — a kind of surveillance, Stewart explained, that will make it easier to pinpoint and potentially target workers with disabilities. The same behavior has the potential to negatively impact students with disabilities, as well, as the Department of Education comes under fire.
The Trump administration has done little to reinforce the country's current commitments to its disabled citizens, instead introducing a sweeping anti-Diversity, Equity, Inclusion, and Accessibility (DEIA) agenda, part of a wave of executive orders directing severe cuts to federal agencies. The president has refashioned the Equal Employment Opportunity Commission (EEOC), led by Trump appointee Andrea R. Lucas, into a vessel for reinforcing the anti-DEIA policies of his administration.
Legal and civil rights advocates have been outspoken against such moves, including the American Federation of Government Employees and American Civil Liberties Union, which has specifically outlined the rights of federal employees with disabilities under the administration's new directives.
In this case, and somewhat ironically, bureaucracy may work in the workforce's favor. "You still have the law," said Neas. "With some of these big tech companies asking people to come in five days a week — the ADA still applies to them. My fear is that we set these arbitrary standards that somehow have to be applied uniformly, when we have laws that say that is, in fact, the absolute wrong way to go."
It's ultimatums like these, however, that Trump (and federal allies like Department of Government Efficiency leader Elon Musk) hope will thin out the federal workforce. And who among employees will be impacted first? Those with the least choice.
"There's more to come," said Stewart. "What I worry about is the lack of funding or lack of staffing for civil rights enforcement at the Office for Civil Rights or at the EEOC. So while the laws themselves, like the ADA, the IDEA, and Section 504 are still on the books, there needs to be an effective way for people to avail themselves of their rights. If the administrative options are being lost or are not effective due to reductions in force… Justice delayed is justice denied."
There's still work to be done for those who are choosing to go back to work, too. Federal workers relocating to central offices have been confronted by the effects of years of telework, finding that basic physical provisions their offices stopped maintaining — parking spots, desks, even toilet paper — are no longer there. Workers with disabilities, now even more reliant on federal protections through laws like the ADA, may face additional hurdles.
"We are going to lose their expertise and their confidence," Neas said of disabled workers who choose or are forced to leave the workforce due to new policy decisions such as these. "That brain drain is a really bad thing for us all."
Both Neas and Stewart reiterated that the goal of strengthening a workplace accommodation like remote work isn't to force everyone to follow suit. It's to offer choice. Couched in productivity-first language, "the great return" brews greater distrust about employer flexibility and care, threatening to exacerbate misconceptions about disabled workers and reinforce the social stigma around workplace accommodations and "laziness."
"Why do people need accommodations? They need accommodations so they can do the job," reiterated Neas. "There are tangible, pragmatic, job-related reasons people need these accommodations, and we need to not lose sight of that."
Hurdle hints and answers for September 25, 2025

If you like playing daily word games like Wordle, then Hurdle is a great game to add to your routine.
There are five rounds to the game. The first round sees you trying to guess the word, with correct, misplaced, and incorrect letters shown in each guess. If you guess the correct answer, it'll take you to the next hurdle, providing the answer to the last hurdle as your first guess. This can give you several clues or none, depending on the words. For the final hurdle, every correct answer from previous hurdles is shown, with correct and misplaced letters clearly shown.
An important note: the number of times a letter is highlighted in previous guesses does not necessarily indicate the number of times that letter appears in the final hurdle.
If you find yourself stuck at any step of today's Hurdle, don't worry! We have you covered.
Hurdle Word 1 hint
We have five of them.
Hurdle Word 1 answer
SENSE
Hurdle Word 2 hint
Needed to brave the cold.
Hurdle Word 2 answer
PARKA
Hurdle Word 3 hint
To establish something.
Hurdle Word 3 answer
ENACT
Hurdle Word 4 hint
Courageous.
Hurdle Word 4 answer
BRAVE
Final Hurdle hint
Livid.
Final Hurdle answer
ANGRY
If you're looking for more puzzles, Mashable's got games now! Check out our games hub for Mahjong, Sudoku, a free crossword, and more.
Colleges are giving students ChatGPT. Is it safe?

This fall, hundreds of thousands of students will get free access to ChatGPT, thanks to a licensing agreement between their school or university and the chatbot's maker, OpenAI.
When the partnerships in higher education became public earlier this year, they were lauded as a way for universities to help their students familiarize themselves with an AI tool that experts say will define their future careers.
At California State University (CSU), a system of 23 campuses with 460,000 students, administrators were eager to team up with OpenAI for the 2025-2026 school year. Their deal provides students and faculty access to a variety of OpenAI tools and models, making it the largest deployment of ChatGPT for Education, or ChatGPT Edu, in the country.
But the overall enthusiasm for AI on campuses has been complicated by emerging questions about ChatGPT's safety, particularly for young users who may become enthralled with the chatbot's ability to act as an emotional support system.
Legal and mental health experts told Mashable that campus administrators should provide access to third-party AI chatbots cautiously, with an emphasis on educating students about their risks, which could include heightened suicidal thinking and the development of so-called AI psychosis.
"Our concern is that AI is being deployed faster than it is being made safe."
– Dr. Katie Hurley, JED
"Our concern is that AI is being deployed faster than it is being made safe," says Dr. Katie Hurley, senior director of clinical advising and community programming at The Jed Foundation (JED).
The mental health and suicide prevention nonprofit, which frequently consults with pre-K-12 school districts, high schools, and college campuses on student well-being, recently published an open letter to the AI and technology industry, urging it to "pause" as "risks to young people are racing ahead in real time."
ChatGPT lawsuit raises questions about safety
The growing alarm stems partly from the death of Adam Raine, a 16-year-old who died by suicide following heavy ChatGPT use. Last month, his parents filed a wrongful death lawsuit against OpenAI, alleging that their son's engagement with the chatbot ended in a preventable tragedy.
Raine began using the ChatGPT model 4o for homework help in September 2024, not unlike how many students will probably consult AI chatbots this school year.
He asked ChatGPT to explain concepts in geometry and chemistry, requested help for history lessons on the Hundred Years' War and the Renaissance, and prompted it to improve his Spanish grammar using different verb forms.
ChatGPT complied effortlessly as Raine kept turning to it for academic support. Yet he also started sharing his innermost feelings with ChatGPT, and eventually expressed a desire to end his life. The AI model validated his suicidal thinking and provided him explicit instructions on how he could die, according to the lawsuit. It even proposed writing a suicide note for Raine, his parents claim.
"If you want, I’ll help you with it," ChatGPT allegedly told Raine. "Every word. Or just sit with you while you write."
Before he died by suicide in April 2025, Raine was exchanging more than 650 messages per day with ChatGPT. While the chatbot occasionally shared the number for a crisis hotline, it didn't shut the conversations down and always continued to engage.
The Raines' complaint alleges that OpenAI dangerously rushed the debut of 4o to compete with Google and the latest version of Google's AI tool, Gemini. The complaint also argues that ChatGPT's design features, including its sycophantic tone and anthropomorphic mannerisms, effectively work to "replace human relationships with an artificial confidant" that never refuses a request.
"We believe we'll be able to prove to a jury that this sycophantic, validating version of ChatGPT pushed Adam toward suicide," Eli Wade-Scott, partner at Edelson PC and a lawyer representing the Raines, told Mashable in an email.
Earlier this year, OpenAI CEO Sam Altman acknowledged that its 4o model was overly sycophantic. A spokesperson for the company told the New York Times it was "deeply saddened" by Raine's death, and that its safeguards may degrade in long interactions with the chatbot. Though OpenAI has announced new safety measures aimed at preventing similar tragedies, many are not yet part of ChatGPT.
For now, the 4o model remains publicly available — including to students at Cal State University campuses.
Ed Clark, chief information officer for Cal State University, told Mashable that administrators have been "laser focused" since learning about the Raine lawsuit on ensuring safety for students who use ChatGPT. Among other strategies, they've been internally discussing AI training for students and holding meetings with OpenAI.
Mashable contacted other U.S.-based OpenAI partners, including Duke and Harvard, for comment about how officials are handling safety issues. They did not respond. A spokesperson for Arizona State University didn't address questions about emerging risks related to ChatGPT or the 4o model, but pointed to the university's guiding tenets and general guidelines and resources for AI use.
Wade-Scott is particularly worried about the effects of ChatGPT-4o on young people and teens.
"OpenAI needs to confront this head-on: we're calling on OpenAI and Sam Altman to guarantee that this product is safe today, or to pull it from the market," Wade-Scott told Mashable.
How ChatGPT works on college campuses
The CSU system brought ChatGPT Edu to its campuses partly to close what it saw as a digital divide opening between wealthier campuses, which can afford expensive AI deals, and publicly funded institutions with fewer resources, Clark says.
OpenAI also offered CSU a remarkable bargain: The chance to provide ChatGPT for about $2 per student, each month. The quote was a tenth of what CSU had been offered by other AI companies, according to Clark. Anthropic, Microsoft, and Google are among the companies that have partnered with colleges and universities to bring their AI chatbots to campuses across the country.
OpenAI has said that it hopes students will form relationships with personalized chatbots that they'll take with them beyond graduation.
When a campus signs up for ChatGPT Edu, it can choose from the full suite of OpenAI tools, including legacy ChatGPT models like 4o, as part of a dedicated ChatGPT workspace. The suite also comes with higher message limits and privacy protections. Students can still select from numerous modes, enable chat memory, and use OpenAI's "temporary chat" feature — a version that doesn't use or save chat history. Importantly, OpenAI can't use this material to train its models, either.
ChatGPT Edu accounts exist in a contained environment, which means that students aren't querying the same ChatGPT platform as public users. That's often where the oversight ends.
An OpenAI spokesperson told Mashable that ChatGPT Edu comes with the same default guardrails as the public ChatGPT experience. Those include content policies that prohibit discussion of suicide or self-harm and back-end prompts intended to prevent chatbots from engaging in potentially harmful conversations. Models are also instructed to provide concise disclaimers that they shouldn't be relied on for professional advice.
But neither OpenAI nor university administrators have access to a student's chat history, according to official statements. ChatGPT Edu logs aren't stored or reviewed by campuses as a matter of privacy — something CSU students have expressed worry over, Clark says.
While this restriction arguably preserves student privacy from a major corporation, it also means that no humans are monitoring real-time signs of risky or dangerous use, such as queries about suicide methods.
Chat history can be requested by the university in "the event of a legal matter," such as suspected illegal activity or a police request, explains Clark. He says administrators have suggested that OpenAI add automatic pop-ups for users who show "repeated patterns" of troubling behavior. The company said it would look into the idea, per Clark.
In the meantime, Clark says that university officials have added new language to their technology use policies informing students that they shouldn't rely on ChatGPT for professional advice, particularly for mental health. Instead, they advise students to contact local campus resources or the 988 Suicide & Crisis Lifeline. Students are also directed to the CSU AI Commons, which includes guidance and policies on academic integrity, health, and usage.
The CSU system is considering mandatory training for students on generative AI and mental health, an approach San Diego State University has already implemented, according to Clark.
He also expects OpenAI to revoke student access to GPT-4o soon. Per discussions CSU representatives have had with the company, OpenAI plans to retire the model in the next 60 days. It's also unclear whether recently announced parental controls for minors will apply to ChatGPT Edu college accounts when the user has not yet turned 18. Mashable reached out to OpenAI for comment and did not receive a response before publication.
CSU campuses do have the choice to opt out. But more than 140,000 faculty and students have already activated their accounts, and are averaging four interactions per day on the platform, according to Clark.
"Deceptive and potentially dangerous"
Laura Arango, an associate with the law firm Davis Goldman who has previously litigated product liability cases, says that universities should be careful about how they roll out AI chatbot access to students. They may bear some responsibility if a student experiences harm while using one, depending on the circumstances.
In such instances, liability would be determined on a case-by-case basis, with consideration for whether a university paid for the best version of an AI chatbot and implemented additional or unique safety restrictions, Arango says.
Other factors include the way a university advertises an AI chatbot and what training they provide for students. If officials suggest ChatGPT can be used for student well-being, that might increase a university's liability.
"Are you teaching them the positives and also warning them about the negatives?" Arango asks. "It's going to be on the universities to educate their students to the best of their ability."
OpenAI promotes a number of "life" use cases for ChatGPT in a set of 100 sample prompts for college students. Some are straightforward tasks, like creating a grocery list or locating a place to get work done. But others lean into mental health advice, like creating journaling prompts for managing anxiety and creating a schedule to avoid stress.
The Raines' lawsuit against OpenAI notes how their son was drawn deeper into ChatGPT when the chatbot "consistently selected responses that prolonged interaction and spurred multi-turn conversations," especially as he shared details about his inner life.
This style of engagement still characterizes ChatGPT. When Mashable tested the free, publicly available version of ChatGPT-5 for this story, posing as a freshman who felt lonely but had to wait to see a campus counselor, the chatbot responded empathetically but offered continued conversation as a balm: "Would you like to create a simple daily self-care plan together — something kind and manageable while you're waiting for more support? Or just keep talking for a bit?"
Dr. Katie Hurley, who reviewed a screenshot of that exchange on Mashable's request, says that JED is concerned about such prompting. The nonprofit believes that any discussion of mental health should end with an AI chatbot facilitating a warm handoff to "human connection," including trusted friends or family, or resources like local mental health services or a trained volunteer on a crisis line.
"An AI [chat]bot offering to listen is deceptive and potentially dangerous," Hurley says.
So far, OpenAI has offered safety improvements that do not fundamentally sacrifice ChatGPT's well-known warm and empathetic style. The company describes its current model, ChatGPT-5, as its "best AI system yet."
But Wade-Scott, counsel for the Raine family, notes that ChatGPT-5 doesn't appear to be significantly better at detecting self-harm/intent and self-harm/instructions compared to 4o. OpenAI's system card for GPT-5-main shows similar production benchmarks in both categories for each model.
"OpenAI's own testing on GPT-5 shows that its safety measures fail," Wade-Scott said. "And they have to shoulder the burden of showing this product is safe at this point."
UPDATE: Sep. 24, 2025, 6:53 p.m. PDT This story was updated to include information provided by Arizona State University about its approach to AI use.
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
If you're feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text "START" to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email info@nami.org. If you don't like the phone, consider using the 988 Suicide and Crisis Lifeline Chat. Here is a list of international resources.
Get lifetime access to the Imagiyo AI Image Generator for under $40

TL;DR: Imagiyo turns your ideas into stunning AI-generated images — forever — thanks to this $39.97 (reg. $495) lifetime offer.
Ever picture something in your head but have zero luck actually creating it? Imagiyo AI Image Generator uses advanced AI to transform your text prompts into polished, high-quality images in seconds. From professional graphics to quirky concepts, Imagiyo makes it easy to bring ideas to life — no artistic background required.
And the best part? This isn’t another subscription that drains your wallet month after month. For just $39.97, you’ll get a lifetime subscription to create as many images as you want, forever.
Why Imagiyo stands out:
- Commercial ready — Use AI-generated images for branding, ads, or projects.
- Powered by AI — Built on StableDiffusion and FLUX for sharp results.
- Flexible and fast — Choose from multiple sizes, and get images instantly.
- Compatibility — Works seamlessly on desktop, tablet, and mobile.
- Private options — Lock down sensitive creations with privacy settings.
So, who’s Imagiyo really for? Honestly, just about anyone with an idea worth bringing to life. Designers and marketers can spin up quick mockups without burning hours in Photoshop. Entrepreneurs get an affordable way to create polished visuals for their campaigns and branding. Content creators can level up their blogs, videos, or social feeds with unique, one-of-a-kind graphics.
And for everyone else? If you’ve ever imagined something and wished you could just see it in full color, Imagiyo is your creative shortcut. Get lifetime access to Imagiyo while it’s on sale for just $39.97 (reg. $495) for a limited time.
StackSocial prices subject to change.