Tech

FunkyFrogBait left their career as a software engineer for YouTube. It paid off.

Content creation wasn't in the plan for Kali, better known by their online handle FunkyFrogBait.

Growing up a child of YouTube, Kali looked up to creators like Jacksepticeye, dreaming of making videos themselves. But as college and the job market took priority, that dream started to feel more distant, replaced by the pressures of real-world responsibilities.

Then 2020 hit. With more time spent indoors and the rise of TikTok, Kali — affectionately known as "Funky" by their fans — decided to give content creation a shot. The gamble paid off. Today, Kali boasts millions of followers across platforms: 2.8 million on TikTok, 340K on Instagram, and 2.72 million on YouTube.

At VidCon 2025, we sat down with Kali to talk about their growth as a creator and how they pivoted to full-time once they hit it big.

Once a software engineer, FunkyFrogBait has amassed millions of subscribers on YouTube with their commentary.
Credit: Cole Kan/Mashable Composite; FunkyFrogBait; Getty Images

When did you start creating content?

I started in 2020 'cause I was bored, and TikTok was popping off. I was in this theater group in college, and we were doing a performance of The Oregon Trail, which was super funny. In that group, there was somebody who was scrolling TikTok, and I have a distinct memory of them turning to look at me and being like, "Oh, I think you'd do great on TikTok. You should just make some videos."

And did you start on TikTok?

I started on TikTok. A lot of the stuff that was on my feed was sketch comedy, and I was like, "OK, I've done musical theater. I've done improv in college. This is kind of a convergence of a lot of my interests. This could be fun to do, and it's what I'm already watching." I feel like that's how a lot of people decide what to make: it's something they're already naturally gravitating toward.

I moved from random shitposting on TikTok, getting a few thousand views here and there, just using random audios, to writing some skits.

Most of them didn't do well for a while, but then one would pop off, and it was like, OK, what made this one work and not these? I kept following that formula of putting a sketch out and seeing if people liked the characters.

I did this series called "Nursing Homes in 2077." That's actually what I got the most well-known for. It was just a very simple concept: What are we gonna be like when we're in nursing homes one day? What little pieces of brain rot are gonna stay in our brains even after we've forgotten our grandchildren's names? That was the first sketch-comedy thing that I did that really popped off.

When did you migrate to YouTube?

I did that for a while, and it was really fun. Unfortunately, because of the nature of short-form content, and specifically the way that the TikTok algorithm works, it would be so unpredictable.

I would work really hard on a sketch, and it would get a few million views, and I would feel amazing about it. I would feel like this is the direction I need to go. I would do the same thing the next day and get less than 20,000 views. So it was just so up and down in a way that was so unpredictable that it started to get really discouraging. And I found myself posting less and less on there because it was just so much time spent on such a short video that would have very little payoff sometimes.

Then I was like, OK, I did this gaming channel as a little kid, I still like that type of content — let me try that. So I tried migrating my TikTok audience over to the FunkyFrogBait YouTube channel, which was originally a gaming channel where I would record myself playing video games. It was impossible to move a TikTok audience over. TikTok is a very insulated platform.

Anytime you try to push out anything that even hints at presence on other social media, it immediately will lock it down and make sure nobody sees it. And once again, I hit a roadblock of just feeling really discouraged. I had like just this taste of like, "There is interest here, but I can't find it." I can't get this consistent community even though I'm having these little bumps of interest. I can't gather this audience into a single place and get that consistent viewership. And then over time, my personal consumption of the internet changed.

How so?

I spent most of my time watching gaming content. Then, probably around 2022, I started watching a lot of commentary creators—people who get in front of the camera and talk about weird things happening on social media.

I shifted my personal consumption of content, and I was starting to watch a lot more of that. And then one day, I'm scrolling TikTok, and this guy comes across my For You page. It's somebody who has convinced themselves and openly declared that they believe they are the reincarnation of Hitler. It's such an absurd thing that just came across my For You page.

I'd had this idea of making commentary content for a while, but I didn't think that there was anything I specifically had to add. But then this was just one of those circumstances where it's like, "How is no one talking about this?"

People were reacting to it on TikTok, and they were getting like hundreds of thousands of views, but it hadn't migrated over to YouTube yet. I sit down at my desk, I prop my iPhone — I don't even have a tripod — and just sit and talk at my phone for a little over an hour.

I followed a similar formula to other commentary creators I'd seen, but I also was just like, I'm just talking and being weird and being myself. I'm writing dumb jokes. I'm doing little punchlines, you know? That video immediately got hundreds of thousands of views, and hundreds of thousands on a long-form YouTube video is a huge jump up from even millions on TikTok.

A lot of people who don't make content don't realize that views from different sites mean very different things. It was an immediate thing, and it was so unexpected. I actually almost didn't post the video 'cause I was almost done editing it, and I was talking to my partner at the time, and I was like, "Oh, I don't know, this is kind of stupid." And he convinced me to post it. It was such a cool moment of like, Wow, I'm so glad I did because it was an immediate yes from the universe that I'd been looking for — this is something that really works.

So I'm curious: What is your strategy now?

When I started out in commentary, it was more of a drama-focused angle because that's what a lot of commentary was at the time. You're the underdog coming in, you're punching in all directions. You're making fun of people who are way more well-known in the space than you. And you're punching up at them.

But then my platform exploded so quickly that I realized that the dynamic had shifted. I was now, "Oh, here's this asshole with a million subscribers being mean to this person." That was a weird thing for me because in my head, I was still doing the same thing I was before. I had to recognize my position in this space had changed, where I have to be so much more cognizant of the fact that I am a lot more zoomed out now, not putting a magnifying glass on one specific person. Maybe there's a trend that I think is annoying or harmful, and I show you 20 different examples of people doing it rather than one person.

I had a hard time processing that for a while. I was like, "This is unfair. I'm the same person, and I want to be able to approach things the same way that I always have." But it's a two-way street, where it's not just who you are; it's what the platform feeds back into you.

And if the platform says, no, this is where you're at now. You have this level of responsibility, you have this level of influence, and you don't get to say I don't care. You have to recognize the reality of your situation. And personally, I've just felt mentally a lot better with that change. It's been good to be able to have a broader outlook and to feel a lot more proud of the things that I put out, because I do have to now put up things that I've spent a lot more time thinking through and researching, because of that extra responsibility.

Do I miss the days where I could just like punch in all directions and be an asshole? Of course. Because that's fun. That's really fun to do. But, also, I feel like the impact that I get to have now is so much greater, and the amount of good I'm able to do is so much greater. It's ultimately a good trade-off.

You have this great perspective that really gives you empathy when you approach the topic.

It's great to hear. That's what I try to do. I try to have a perspective of tough love. Even if I do have to show a specific example of somebody doing something that they should not be doing, I still try to come from the angle that I have nothing personally against this person.

I try to dig into the reasoning of why they're doing it and add extra context of like, here are the reasons why I think that this has a negative impact. Or maybe this individual person doesn't have that much of an impact, but they're a part of a larger trend that is kind of a problem. I don't wanna talk about somebody just doing something stupid. I wanna talk about a whole movement that I see online that is really concerning. It's a lot harder, but it's more rewarding.

Are you all self-taught on editing?

I've never taken any kind of cinematography or editing class. Everything that I do in that realm is self-taught or involves me begging a friend to say, "Hey, can you explain this to me?" Previously, I edited everything on my own, but this past year and a half, I have had an editor to help.

But my vision for my videos is very specific. Basically, how it works is I will write out my full video and write in the edits exactly how I need them to be. So even if I'm not physically editing, if you see a thing pop up on screen, a gag, or a cutaway, it's probably because I told the editor to do so.

So, I still have a lot of creative control over the editing. And sometimes, I still go back and edit because sometimes my vision is so specific, and for a particular topic, it's impossible to communicate it effectively to another person. I really felt for a long time that incorporating an editor would take away my agency and ownership of the content. But it was just a matter of finding somebody who understood my vision.

Has there been a moment when you realized this was your full-time career now?

That happened shockingly fast after the first commentary video. I had no sense of ad revenue or anything like that. There are a lot of assumptions when you're watching YouTube that every YouTuber is rich. I didn't know what views translated to when it came to income. The analytics take a couple of days to catch up to what you're actually gonna get paid out.

I was starting to do calculations, and I was like, "This is matching my current income at a job that, let's be honest, is significantly harder." If I keep getting this amount of viewership on each of these days, I'm going to start making more than what I'm doing at this job that I went to school for four years to do. I remember looking at the analytics tab and showing my partner, dumbfounded. I had to show another person, because I was like, "Am I crazy?" I've done the math, and this is actually doable.


I was a YouTube kid. I grew up watching all of these YouTubers come into their own, and I idolized that lifestyle so much, but I'd put it aside for college. I'd put it aside for more realistic avenues. In fact, I'd honestly shut off a lot of my creative passions completely to pursue this particular career path. I'd completely deadened myself in many ways to the things that really made me feel like myself, and to be able to look at the raw numbers and realize I could do content creation was amazing.

I worked as a software engineer for a telecommunications company. I got my boss in a meeting, and I was like, "They're gonna hate me. They're gonna be so mad at me." And they were actually so chill with it. They were so encouraging. They were like, "That's amazing, and if it doesn't work out for you, you can always come back and work here again. We love you. We really want this to work out for you." It was another yes from the universe — this is the direction, this is the path.

I feel so creatively fulfilled, and I've had so many amazing opportunities. It's been so good to know that this was the right path to take, even if it didn't feel like it at the time. It's a wonderful thing that I'm privileged to be able to do.

Mashable will be live at the Anaheim Convention Center this week, covering VidCon 2025. Check back in the days ahead at Mashable.com, where we’ll be talking to your favorite creators, covering the latest trends, and sharing how creators are growing their followings, their influence, and making a living online.

Hurdle hints and answers for September 25, 2025

If you like playing daily word games like Wordle, then Hurdle is a great game to add to your routine.

There are five rounds to the game. The first round sees you trying to guess the word, with correct, misplaced, and incorrect letters shown in each guess. If you guess the correct answer, it'll take you to the next hurdle, providing the answer to the last hurdle as your first guess. This can give you several clues or none, depending on the words. For the final hurdle, every correct answer from previous hurdles is shown, with correct and misplaced letters clearly shown.

An important note: the number of times a letter is highlighted in previous guesses does not necessarily indicate the number of times that letter appears in the final hurdle.

If you find yourself stuck at any step of today's Hurdle, don't worry! We have you covered.
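For the curious, the letter feedback described above follows the familiar Wordle-style two-pass scoring. Here's a minimal Python sketch of that idea — the function name and labels are illustrative, not anything official from Hurdle:

```python
def score_guess(guess: str, answer: str) -> list[str]:
    """Wordle-style feedback: 'correct', 'misplaced', or 'incorrect' per letter."""
    result = ["incorrect"] * len(guess)
    leftover = []
    # Pass 1: exact position matches; collect unmatched answer letters.
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            result[i] = "correct"
        else:
            leftover.append(a)
    # Pass 2: right letter, wrong spot -- consume leftovers so a letter
    # isn't flagged more times than it appears in the answer.
    for i, g in enumerate(guess):
        if result[i] != "correct" and g in leftover:
            result[i] = "misplaced"
            leftover.remove(g)
    return result

# Word 3's answer (ENACT) carried into the final hurdle (ANGRY):
# only the N lines up exactly, and the A is in the wrong spot.
print(score_guess("ENACT", "ANGRY"))
```

This is also why a highlighted letter in an earlier hurdle can mislead you on the final word: the scoring only ever compares one guess against one answer.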

Hurdle Word 1 hint

We have five of them.

Hurdle Word 1 answer

SENSE

Hurdle Word 2 hint

Needed to brave the cold.

Hurdle Word 2 answer

PARKA

Hurdle Word 3 hint

To establish something.

Hurdle Word 3 answer

ENACT

Hurdle Word 4 hint

Courageous.

Hurdle Word 4 answer

BRAVE

Final Hurdle hint

Livid.

Hurdle Word 5 answer

ANGRY

If you're looking for more puzzles, Mashable's got games now! Check out our games hub for Mahjong, Sudoku, free crossword, and more.


Colleges are giving students ChatGPT. Is it safe?


This fall, hundreds of thousands of students will get free access to ChatGPT, thanks to a licensing agreement between their school or university and the chatbot's maker, OpenAI.

When the partnerships in higher education became public earlier this year, they were lauded as a way for universities to help their students familiarize themselves with an AI tool that experts say will define their future careers.

At California State University (CSU), a system of 23 campuses with 460,000 students, administrators were eager to team up with OpenAI for the 2025-2026 school year. Their deal provides students and faculty access to a variety of OpenAI tools and models, making it the largest deployment of ChatGPT for Education, or ChatGPT Edu, in the country.

But the overall enthusiasm for AI on campuses has been complicated by emerging questions about ChatGPT's safety, particularly for young users who may become enthralled with the chatbot's ability to act as an emotional support system.

Legal and mental health experts told Mashable that campus administrators should provide access to third-party AI chatbots cautiously, with an emphasis on educating students about their risks, which could include heightened suicidal thinking and the development of so-called AI psychosis.


"Our concern is that AI is being deployed faster than it is being made safe."
– Dr. Katie Hurley, JED

"Our concern is that AI is being deployed faster than it is being made safe," says Dr. Katie Hurley, senior director of clinical advising and community programming at The Jed Foundation (JED).

The mental health and suicide prevention nonprofit, which frequently consults with pre-K-12 school districts, high schools, and college campuses on student well-being, recently published an open letter to the AI and technology industry, urging it to "pause" as "risks to young people are racing ahead in real time."

ChatGPT lawsuit raises questions about safety

The growing alarm stems partly from the death of Adam Raine, a 16-year-old who died by suicide amid heavy ChatGPT use. Last month, his parents filed a wrongful death lawsuit against OpenAI, alleging that their son's engagement with the chatbot ended in a preventable tragedy.

Raine began using the ChatGPT model 4o for homework help in September 2024, not unlike how many students will probably consult AI chatbots this school year.

He asked ChatGPT to explain concepts in geometry and chemistry, requested help for history lessons on the Hundred Years' War and the Renaissance, and prompted it to improve his Spanish grammar using different verb forms.

ChatGPT complied effortlessly as Raine kept turning to it for academic support. Yet he also started sharing his innermost feelings with ChatGPT, and eventually expressed a desire to end his life. The AI model validated his suicidal thinking and provided him explicit instructions on how he could die, according to the lawsuit. It even proposed writing a suicide note for Raine, his parents claim.

"If you want, I’ll help you with it," ChatGPT allegedly told Raine. "Every word. Or just sit with you while you write."

Before he died by suicide in April 2025, Raine was exchanging more than 650 messages per day with ChatGPT. While the chatbot occasionally shared the number for a crisis hotline, it didn't shut the conversations down and always continued to engage.

The Raines' complaint alleges that OpenAI dangerously rushed the debut of 4o to compete with Google and the latest version of that company's AI tool, Gemini. The complaint also argues that ChatGPT's design features, including its sycophantic tone and anthropomorphic mannerisms, effectively work to "replace human relationships with an artificial confidant" that never refuses a request.

"We believe we'll be able to prove to a jury that this sycophantic, validating version of ChatGPT pushed Adam toward suicide," Eli Wade-Scott, partner at Edelson PC and a lawyer representing the Raines, told Mashable in an email.

Earlier this year, OpenAI CEO Sam Altman acknowledged that its 4o model was overly sycophantic. A spokesperson for the company told the New York Times it was "deeply saddened" by Raine's death, and that its safeguards may degrade in long interactions with the chatbot. Though OpenAI has announced new safety measures aimed at preventing similar tragedies, many are not yet part of ChatGPT.

For now, the 4o model remains publicly available — including to students at Cal State University campuses.

Ed Clark, chief information officer for Cal State University, told Mashable that administrators have been "laser focused" since learning about the Raine lawsuit on ensuring safety for students who use ChatGPT. Among other strategies, they've been internally discussing AI training for students and holding meetings with OpenAI.

Mashable contacted other U.S.-based OpenAI partners, including Duke and Harvard, for comment about how officials are handling safety issues. They did not respond. A spokesperson for Arizona State University didn't address questions about emerging risks related to ChatGPT or the 4o model, but pointed to the university's guiding tenets and general guidelines and resources for AI use.

Wade-Scott is particularly worried about the effects of ChatGPT-4o on young people and teens.

"OpenAI needs to confront this head-on: we're calling on OpenAI and Sam Altman to guarantee that this product is safe today, or to pull it from the market," Wade-Scott told Mashable.

How ChatGPT works on college campuses

The CSU system brought ChatGPT Edu to its campuses partly to close what it saw as a digital divide opening between wealthier campuses, which can afford expensive AI deals, and publicly funded institutions with fewer resources, Clark says.

OpenAI also offered CSU a remarkable bargain: The chance to provide ChatGPT for about $2 per student, each month. The quote was a tenth of what CSU had been offered by other AI companies, according to Clark. Anthropic, Microsoft, and Google are among the companies that have partnered with colleges and universities to bring their AI chatbots to campuses across the country.

OpenAI has said that it hopes students will form relationships with personalized chatbots that they'll take with them beyond graduation.

When a campus signs up for ChatGPT Edu, it can choose from the full suite of OpenAI tools, including legacy ChatGPT models like 4o, as part of a dedicated ChatGPT workspace. The suite also comes with higher message limits and privacy protections. Students can still select from numerous modes, enable chat memory, and use OpenAI's "temporary chat" feature — a version that doesn't use or save chat history. Importantly, OpenAI can't use this material to train its models, either.

ChatGPT Edu accounts exist in a contained environment, which means that students aren't querying the same ChatGPT platform as public users. That's often where the oversight ends.

An OpenAI spokesperson told Mashable that ChatGPT Edu comes with the same default guardrails as the public ChatGPT experience. Those include content policies that prohibit discussion of suicide or self-harm and back-end prompts intended to prevent chatbots from engaging in potentially harmful conversations. Models are also instructed to provide concise disclaimers that they shouldn't be relied on for professional advice.

But neither OpenAI nor university administrators have access to a student's chat history, according to official statements. ChatGPT Edu logs aren't stored or reviewed by campuses as a matter of privacy — something CSU students have expressed worry over, Clark says.

While this restriction arguably preserves student privacy from a major corporation, it also means that no humans are monitoring real-time signs of risky or dangerous use, such as queries about suicide methods.

Chat history can be requested by the university in "the event of a legal matter," such as the suspicion of illegal activity or police requests, explains Clark. He says administrators suggested that OpenAI add automatic pop-ups for users who express "repeated patterns" of troubling behavior. The company said it would look into the idea, per Clark.

In the meantime, Clark says that university officials have added new language to their technology use policies informing students that they shouldn't rely on ChatGPT for professional advice, particularly for mental health. Instead, they advise students to contact local campus resources or the 988 Suicide & Crisis Lifeline. Students are also directed to the CSU AI Commons, which includes guidance and policies on academic integrity, health, and usage.

The CSU system is considering mandatory training for students on generative AI and mental health, an approach San Diego State University has already implemented, according to Clark.

He also expects OpenAI to revoke student access to GPT-4o soon. Per discussions CSU representatives have had with the company, OpenAI plans to retire the model in the next 60 days. It's also unclear whether recently announced parental controls for minors will apply to ChatGPT Edu college accounts when the user has not yet turned 18. Mashable reached out to OpenAI for comment and did not receive a response before publication.

CSU campuses do have the choice to opt out. But more than 140,000 faculty and students have already activated their accounts, and are averaging four interactions per day on the platform, according to Clark.

"Deceptive and potentially dangerous"

Laura Arango, an associate with the law firm Davis Goldman who has previously litigated product liability cases, says that universities should be careful about how they roll out AI chatbot access to students. They may bear some responsibility if a student experiences harm while using one, depending on the circumstances.

In such instances, liability would be determined on a case-by-case basis, with consideration for whether a university paid for the best version of an AI chatbot and implemented additional or unique safety restrictions, Arango says.

Other factors include the way a university advertises an AI chatbot and what training they provide for students. If officials suggest ChatGPT can be used for student well-being, that might increase a university's liability.

"Are you teaching them the positives and also warning them about the negatives?" Arango asks. "It's going to be on the universities to educate their students to the best of their ability."

OpenAI promotes a number of "life" use cases for ChatGPT in a set of 100 sample prompts for college students. Some are straightforward tasks, like creating a grocery list or locating a place to get work done. But others lean into mental health advice, like creating journaling prompts for managing anxiety and creating a schedule to avoid stress.

The Raines' lawsuit against OpenAI notes how their son was drawn deeper into ChatGPT when the chatbot "consistently selected responses that prolonged interaction and spurred multi-turn conversations," especially as he shared details about his inner life.

This style of engagement still characterizes ChatGPT. When Mashable tested the free, publicly available version of ChatGPT-5 for this story, posing as a freshman who felt lonely but had to wait to see a campus counselor, the chatbot responded empathetically but offered continued conversation as a balm: "Would you like to create a simple daily self-care plan together — something kind and manageable while you're waiting for more support? Or just keep talking for a bit?"

Dr. Katie Hurley, who reviewed a screenshot of that exchange on Mashable's request, says that JED is concerned about such prompting. The nonprofit believes that any discussion of mental health should end with an AI chatbot facilitating a warm handoff to "human connection," including trusted friends or family, or resources like local mental health services or a trained volunteer on a crisis line.

"An AI [chat]bot offering to listen is deceptive and potentially dangerous," Hurley says.

So far, OpenAI has offered safety improvements that do not fundamentally sacrifice ChatGPT's well-known warm and empathetic style. The company describes its current model, ChatGPT-5, as its "best AI system yet."

But Wade-Scott, counsel for the Raine family, notes that ChatGPT-5 doesn't appear to be significantly better at detecting self-harm/intent and self-harm/instructions compared to 4o. OpenAI's system card for GPT-5-main shows similar production benchmarks in both categories for each model.

"OpenAI's own testing on GPT-5 shows that its safety measures fail," Wade-Scott said. "And they have to shoulder the burden of showing this product is safe at this point."

UPDATE: Sep. 24, 2025, 6:53 p.m. PDT This story was updated to include information provided by Arizona State University about its approach to AI use.

Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

If you're feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text "START" to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email info@nami.org. If you don't like the phone, consider using the 988 Suicide and Crisis Lifeline Chat. Here is a list of international resources.


Get lifetime access to the Imagiyo AI Image Generator for under $40


TL;DR: Imagiyo turns your ideas into stunning AI-generated images — forever — thanks to this $39.97 (reg. $495) lifetime offer.



Imagiyo AI Image Generator: Lifetime Subscription (Standard Plan)

Credit: Imagiyo

Ever picture something in your head but have zero luck actually creating it? Imagiyo AI Image Generator uses advanced AI to transform your text prompts into polished, high-quality images in seconds. From professional graphics to quirky concepts, Imagiyo makes it easy to bring ideas to life — no artistic background required.

And the best part? This isn’t another subscription that drains your wallet month after month. For just $39.97, you’ll get a lifetime subscription to create as many images as you want, forever.

Why Imagiyo stands out:

  • Commercial ready — Use AI-generated images for branding, ads, or projects.

  • Powered by AI — Built on Stable Diffusion and FLUX for sharp results.

  • Flexible and fast — Choose from multiple sizes, and get images instantly.

  • Compatibility — Works seamlessly on desktop, tablet, and mobile.

  • Private options — Lock down sensitive creations with privacy settings.

So, who’s Imagiyo really for? Honestly, just about anyone with an idea worth bringing to life. Designers and marketers can spin up quick mockups without burning hours in Photoshop. Entrepreneurs get an affordable way to create polished visuals for their campaigns and branding. Content creators can level up their blogs, videos, or social feeds with unique, one-of-a-kind graphics.

And for everyone else? If you’ve ever imagined something and wished you could just see it in full color, Imagiyo is your creative shortcut. Get lifetime access to Imagiyo while it’s on sale for just $39.97 (reg. $495) for a limited time.

StackSocial prices subject to change.
