Lesley Mathis knows what her daughter said was wrong. But she never expected the 13-year-old girl would get arrested for it.
The teenage girl made an offensive joke while chatting online with her classmates, triggering the school’s surveillance software.
Before the morning was even over, the Tennessee eighth grader was under arrest. She was interrogated and strip-searched, and she spent the night in a jail cell, her mother says.
Earlier in the day, her friends teased the teen about her tanned complexion and called her “Mexican,” even though she’s not. When a friend asked what she was planning for Thursday, she wrote: “on Thursday we kill all the Mexico’s.”
Mathis said the comments were “wrong” and “stupid,” but that the context showed they were not a threat.
“It made me feel like, is this the America we live in?” Mathis said of her daughter's arrest. “And it was this stupid, stupid technology that is just going through picking up random words and not looking at context.”
Surveillance systems in American schools increasingly monitor everything students write on school accounts and devices. Thousands of school districts across the country use software like Gaggle and Lightspeed Alert to track kids’ online activities, looking for signs they might hurt themselves or others. With the help of artificial intelligence, technology can dip into online conversations and immediately notify both school officials and law enforcement.
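Gaggle and Lightspeed Alert do not publish how their detection works, so the sketch below is only a hypothetical illustration of the context-free keyword matching that critics like Mathis describe; the FLAG_WORDS watchlist and the flag_message helper are invented for the example and are not either company's code.

```python
# Hypothetical illustration only: real monitoring products do not publish
# their detection logic. This sketch flags any message containing a
# watchlist word, with no notion of joking tone or surrounding context.

FLAG_WORDS = {"kill", "shoot", "bomb"}  # invented watchlist for the example

def flag_message(message: str) -> bool:
    """Return True if any watchlist word appears in the message."""
    tokens = {word.strip(".,!?\"'").lower() for word in message.split()}
    return bool(tokens & FLAG_WORDS)

# An offhand figure of speech trips the same rule as a genuine threat,
# because nothing here weighs intent or the rest of the conversation.
print(flag_message("we're gonna kill it at the game on Thursday"))  # True, despite harmless intent
print(flag_message("see you at practice"))                          # False
```

The point of the sketch is only that a flagged word, taken alone, says nothing about intent, which is the gap Mathis describes.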
Educators say the technology has saved lives. But critics warn it can criminalize children for careless words.
"It has routinized law enforcement access and presence in students’ lives, including in their home,” said Elizabeth Laird, a director at the Center for Democracy and Technology.
In a country weary of school shootings, several states have taken a harder line on threats to schools. Among them is Tennessee, which passed a 2023 zero-tolerance law requiring any threat of mass violence against a school to be reported immediately to law enforcement.
The 13-year-old girl arrested in August 2023 was texting with friends on a chat function tied to her school email at Fairview Middle School, which uses Gaggle to monitor students' accounts. (The Associated Press is withholding the girl’s name to protect her privacy. The school district did not respond to a request for comment.)
Taken to jail, the teen was interrogated and strip-searched, and her parents weren't allowed to talk to her until the next day, according to a lawsuit they filed against the school system. She didn’t know why her parents weren’t there.
“She told me afterwards, ‘I thought you hated me.’ That kind of haunts you,” said Mathis, the girl's mother.
A court ordered the girl to serve eight weeks of house arrest, undergo a psychological evaluation and spend 20 days at an alternative school.
Gaggle's CEO, Jeff Patterson, said in an interview that the school system did not use Gaggle the way it is intended. The purpose is to find early warning signs and intervene before problems escalate to law enforcement, he said.
“I wish that was treated as a teachable moment, not a law enforcement moment,” said Patterson.
Students who think they are chatting privately among friends often do not realize they are under constant surveillance, said Shahar Pasch, an education lawyer in Florida.
One teenage girl she represented made a joke about school shootings on a private Snapchat story. Snapchat’s automated detection software picked up the comment, the company alerted the FBI, and the girl was arrested on school grounds within hours.
Alexa Manganiotis, 16, said she was startled by how quickly monitoring software works. West Palm Beach's Dreyfoos School of the Arts, which she attends, last year piloted Lightspeed Alert, a surveillance program. Interviewing a teacher for her school newspaper, Alexa discovered two students once typed something threatening about that teacher on a school computer, then deleted it. Lightspeed picked it up, and “they were taken away like five minutes later,” Alexa said.
Amy Bennett, chief of staff for Lightspeed Systems, said the software helps understaffed schools “be proactive rather than punitive” by identifying early warning signs of bullying, self-harm, violence or abuse.
The technology can also involve law enforcement in responses to mental health crises. In Florida's Polk County Schools, a district of more than 100,000 students, the school safety program received nearly 500 Gaggle alerts over four years, officers said in public Board of Education meetings. This led to 72 involuntary hospitalization cases under the Baker Act, a state law that allows authorities to require mental health evaluations for people against their will if they pose a risk to themselves or others.
“A really high number of children who experience involuntary examination remember it as a really traumatic and damaging experience — not something that helps them with their mental health care,” said Sam Boyd, an attorney with the Southern Poverty Law Center. The Polk and West Palm Beach school districts did not provide comments.

Natasha Torkzaban stands outside Lawrence High School, where she and other students filed a lawsuit against the school district's use of digital surveillance software, Aug. 3 in Lawrence, Kan.
Natasha Torkzaban, who graduated in 2024, said she was flagged for editing a friend’s college essay because it had the words “mental health.”
“I think ideally we wouldn’t stick a new and shiny solution of AI on a deep-rooted issue of teenage mental health and the suicide rates in America, but that’s where we’re at right now,” Torkzaban said. She was among a group of student journalists and artists at Lawrence High School who filed a lawsuit against the school system last week, alleging Gaggle subjected them to unconstitutional surveillance.
School officials say they take concerns about Gaggle seriously but also credit the technology with detecting dozens of imminent threats of suicide or violence.
“Sometimes you have to look at the trade for the greater good,” said Board of Education member Anne Costello in a July 2024 board meeting.
AI chatbots can cushion the high school counselor shortage—but are they bad for students?

During the pandemic, longtime Bay Area college and career counselor Jon Siapno started developing a chatbot that could answer high schoolers' questions about their future education options. He built it with IBM's Watson, a question-answering predecessor to ChatGPT, but when generative artificial intelligence became accessible, he knew it was a game-changer.
"I thought it would take us maybe two years to build out the questions and answers," Siapno said. "Back then you had to prewrite everything."
With generative AI, students at the Making Waves Academy charter school in the East Bay city of Richmond, California, could soon text an AI Copilot, a chatbot trained on college and career information and designed to mimic human speech, to chat about their futures. The idea was that students could get basic questions out of the way, at any hour, before meeting with counselors like Siapno for more targeted conversations, CalMatters reports.
Almost one-quarter of U.S. schools don't have a single counselor, according to the latest federal data, from the 2021-22 school year. California high schools fare better, but the state's student-to-counselor ratio when ChatGPT debuted the following year was still 464-to-1, a far cry from the American School Counselor Association's recommended ratio of 250-to-1.
Siapno wasn't the only one to see generative AI's potential to scale advising. A flood of bots designed to help people navigate their college and career options has surfaced over the past two years, often with human-sounding names like Ava, Kelly, Oli, Ethan and Coco. It's unclear how many California high schools tell students to use any of them, but the power of generative AI and the scale at which young people are already turning to chatbots in their personal lives are giving some people pause.
Julia Freeland Fisher is education director at the Clayton Christensen Institute, a nonprofit research organization that studies innovation. She recently sounded the alarm about the consequences of letting students develop relationships with AI-powered college and career counselors instead of human ones.
"It's so tempting to see these bots as cursory," Freeland Fisher said. "'They're not threatening real relationships.' 'These are just one-off chats.' But we know from sociology that these one-off chats are actually big opportunities."
Sociologists talk about "social capital" as the connections between people that facilitate their success. Among those connections, we have "strong ties" in close friends, family and coworkers who give us routine support, and "weak ties" in acquaintances we see less regularly. For a long time, people thought weak ties were less important, but in 1973 Stanford sociologist Mark Granovetter wrote about "the strength of weak ties" and a flood of studies since then have confirmed how important those more distant acquaintances can be for everything from job searches to emotional support.
As California considers regulating AI companions for young people, policymakers, tech companies and schools must consider how the burgeoning market for AI-driven college and career guidance could inadvertently become the source of a new problem.
"We're creating this army of self-help bots to help students make their way through school and toward jobs," Freeland Fisher said, "but those very same bots may be eroding the kinds of network-building opportunities that help students break into those jobs eventually."
'Like a mentor in your pocket'
The Making Waves Academy ensures all its graduates meet minimum admissions requirements for California's four-year public colleges. Nine out of 10 of them go on to higher education, and once they're there, staff at the Making Waves Education Foundation offer one-on-one coaching, scholarships, budget planning and career planning to help them graduate on time with no debt and a job offer.
Patrick O'Donnell, CEO of Making Waves, said his team has been thinking for years about how to scale the kinds of support it offers, given the scarcity of counselors in schools.
"Even if counselors wanted to make sure they were supporting students to explore their college and career options, it's almost impossible to do and provide really personalized guidance," O'Donnell said.
Early superusers of the Making Waves AI Copilot were ninth and 10th graders hungry for information but boxed out of meetings with school counselors focused on helping seniors plan their next steps.
CareerVillage is another California nonprofit focused on scaling good college and career advice. CareerVillage.org has been aggregating crowd-sourced questions and expert answers since 2011 to help people navigate the path to a good career.
When ChatGPT came out, co-founder and executive director Jared Chung saw the potential immediately. By the summer of 2023, his team had a full version of their AI Career Coach to pilot, thanks to help from 20 other nonprofits and educational institutions. Now "Coach" is available to individuals for free online, and high schools and colleges around the country are starting to embed it into their own advising.
At the University of Florida College of Nursing, a more specialized version of Coach, "Coach for Nurses," gives users round-the-clock career exploration support. Shakira Henderson, dean of the college, said Coach is "a valuable supplement" to the college's other career advising.
Coach for Nurses personalizes its conversation and advice based on a user's career stage, interests and goals. It is loaded with geographically specific, current labor market information so people can ask questions about earnings in a specific job, in a specific county, for example. Coach can also talk people through simulated nursing scenarios, and it's loaded with chat-based activities and quizzes that can help them explore different career paths.
Henderson is clear on the tool's limitations, though: "AI cannot fully replace the nuanced, empathetic guidance provided by human mentors and career advisors," she said. People can assess an aspiring nurse's soft skills, help them think about the type of hospital they'd like most or the work environment in which they'd thrive. "A human advisor working with that student will be able to identify and connect more than an AI tool," she said.
Of course, that requires students to have human advisors available to them. Marcus Strother, executive director of MENTOR California, a nonprofit supporting mentoring programs across the state, said Coach is worlds better than nothing.
"Most of our young people, particularly young people of color in low-income areas," Strother said, "they don't get the opportunities to meet those folks who are going to be able to give them the connection anyway."
By contrast, Coach, he said, is "like having a mentor in your pocket."
'A regulatory desert'
In February, California state Sen. Steve Padilla, a San Diego Democrat, introduced legislation to protect children from chatbots. Senate Bill 243 would, among other things, limit companies' ability to design chatbots that encourage users to engage more often, respond more quickly or chat longer. These design elements use psychological tricks to get users to spend more time on the platform, which research indicates can create an addiction that keeps people from engaging in other healthy activities or lead them to form unhealthy emotional attachments to the bots.
The addictive nature of certain apps has long been a critique of social media, especially for young people. In Freeland Fisher's research for the Clayton Christensen Institute, she included a comment from Vinay Bhaskara, the co-founder of CollegeVine, which released a free AI counselor for high schoolers called Ivy in 2023.
"I've seen chat logs where students say, 'Ivy, thank you so much. You're like my best friend,' which is both heartwarming, but also kind of scary. It's a little bit of both," the report quotes him as saying.
Reached by phone, Bhaskara said his company's tool is designed to be friendly and conversational so students feel comfortable using it. Millions of students have used the chatbot for free on CollegeVine's website and more than 150 colleges in California and around the country have offered the technology to their own students. After seeing how many millions of emails, text messages and online chat sessions have happened outside of working hours, Bhaskara now argues the insight and support students have gotten from the chatbot outweigh the risks.
In announcing Padilla's bill, his office referenced a number of cases in which chatbots directed children who had become attached to them to do dangerous things. At the most extreme, a Florida teen took his own life after a Character.AI chatbot he had become romantically involved with reportedly encouraged him to "come home to me." Padilla said his bill wouldn't keep young people from getting the benefits of college and career advising from chatbots; it would offer reasonable guidelines to address a serious need.
"This is a regulatory desert," Padilla said. "There are no real guardrails around some of this."
Freeland Fisher said the AI companions that young people are turning to for friendship and romantic relationships represent a far greater risk than AI-powered college and career advisors. But she said schools and tech developers still need to be careful when they seek out an AI solution to the counselor shortage.
For now, the danger may only be that chatbots replace conversations with school advisors. Eventually, though, sophisticated tools that capture more of students' time and attention in the quest to fill a greater need could end up replacing conversations with other adults in their lives.
"These other supports matter down the line," Freeland Fisher said. When students spend more time with chatbots and, indeed, learn to prefer interactions with bots over humans, it contributes to social isolation that can limit young people's ability to amass all-important social capital. "That's part of the warning that we're trying to build in this research," Freeland Fisher said. "It's not to say 'Don't use bots.' It's just to have a much fuller picture of the potential costs."
For their part, Making Waves and CareerVillage are taking some responsibility for the risks chatbots represent. Making Waves is actually retiring the AI Copilot this summer as the foundation shifts its mission to finding a way to use technology to help kids build social capital, not just get answers to questions about college and career. And CareerVillage has already put safeguards in place to address some of Padilla's concerns.
While Coach does tell users that the more they interact with the chatbot, the more personalized its recommendations become, Chung, the executive director, said Coach is designed to discuss only career development. "If you try to go on a long conversation about something unrelated, Coach will decline," Chung said. He described a series of guardrails and safety processes the company has put in place to make sure users never become emotionally attached to the chatbot.
"It's work," Chung said, "but I'm going to be honest with you, it's not impossible work."
Data reporter Erica Yee contributed to this reporting.
This story was produced by CalMatters and reviewed and distributed by Stacker.