How Teens Use AI: What the Pew Research Survey Tells Educators

We’ve spent a lot of time talking about what students might be doing with AI. The Pew Research Center just gave us actual numbers. Their February 2026 survey of 1,458 U.S. teens aged 13 to 17 paints a detailed picture of how young learners are actually using these tools.

64% of American teens now use AI chatbots. About three in ten use them daily. McClain et al. (2026) report that the most common uses are searching for information (57%), getting help with schoolwork (54%), and entertainment (47%). About four in ten use chatbots to summarize articles, books, or videos. One in ten say they do all or most of their schoolwork with chatbot assistance, and larger shares say they use them for some (21%) or a little (23%) of their work.

These aren’t fringe users experimenting with a novelty. This is mainstream behavior. When teens use chatbots for school, the top tasks are researching a topic (48%), solving math problems (43%), and editing their own writing (35%). And they find the tools useful: roughly a quarter say chatbots have been extremely or very helpful for schoolwork, another 25% say somewhat helpful, and only 3% say the tools were of little to no help.

So here’s the question educators need to reckon with: if more than half of teens are already using AI for schoolwork, and the vast majority find it helpful, what exactly are we accomplishing with policies that treat AI use as suspicious by default?

The cheating concern is real, and the teens know it. 59% say AI-powered cheating happens at least somewhat often at their school, and about a third say it happens extremely or very often. Teens aren’t oblivious to the problem. As McClain et al. (2026) note, “from what counts as cheating to trouble detecting it, the rise of AI in classrooms has posed a thorny issue for teachers” (p. 9). But the response from most institutions has been to police, detect, and punish, and the survey data suggests that approach misses the bigger picture.

What I find most telling is what teens themselves worry about. Among those who expect AI to have a negative impact on society, the top concern isn’t cheating. It isn’t job loss. It’s overreliance and loss of critical thinking, cited by 34%. That’s a population that has internalized the exact concern that researchers have been documenting for years. I’ve covered studies on cognitive offloading’s effects on critical thinking and on metacognitive laziness in AI-assisted learning, and the worry these teens express maps directly onto those findings. They know the risk. They’re articulating it more clearly than many of the policies written about them.


On the positive side, teens who expect AI to help them personally cite making life easier (30%), supporting learning (20%), and boosting efficiency (19%). And here’s a data point that feels significant: teaching a skill is the only area where teens think AI would outperform humans (34% say AI would do better, versus 26% who say worse). For hiring decisions, medical diagnoses, and creative work like songwriting, teens mostly believe humans would do a better job. That’s a nuanced view of AI capabilities that doesn’t map onto either uncritical enthusiasm or blanket rejection.


The demographic breakdowns in the Pew data add a layer that anyone working on AI policy in schools needs to understand. Black and Hispanic teens are more likely to use chatbots for schoolwork, more likely to find them helpful, and more likely to report doing all or most of their assignments with AI help. About seven in ten Black and Hispanic teens use chatbots, compared to 58% of White teens. Black teens report the highest confidence levels with these tools: 37% say they’re extremely or very confident using chatbots, compared to roughly a quarter of Hispanic and White teens.

The income data is even more pointed. Teens from households earning less than $30,000 a year are nearly three times as likely to say they do all or most of their schoolwork with chatbot help (20%) as teens in households earning $75,000 or more (7%). That’s not a small gap. It suggests that for lower-income students, AI chatbots are filling roles that wealthier students might get from tutors, after-school programs, or parents with more time and education to help with homework.

This should give anyone pushing surveillance-based AI policies serious pause. If the students relying most on AI for academic support are disproportionately Black, Hispanic, and from lower-income households, then punitive detection policies risk hitting these students hardest. I’ve written about how AI tools can support students with learning differences, and the principle applies here too: taking away the tool without providing an alternative doesn’t create equity. It reinforces the gaps that already exist.

The parent data rounds out the picture, and it reveals a notable perception gap. McClain et al. (2026) report that “64% of U.S. teens reported using chatbots. This is 13 percentage points higher than what their parents say” (p. 20). 28% of parents aren’t sure whether their teen uses chatbots at all, and 42% say they’ve never talked with their teen about chatbot use. Parents are generally comfortable with teens using chatbots for information (79%) and entertainment (69%), but a majority (58%) aren’t okay with their teen seeking emotional support or advice from a chatbot. Only 18% support that use. Yet 12% of teens say they’ve already used chatbots for emotional support.

The guidance gap is real, at home and at school. And it creates a vacuum that teens are filling on their own, with varying degrees of skill and judgment.

What the Pew data tells me as an educator is this: the debate about whether students should use AI is already settled by practice. They’re using it. They find it useful. They’re aware of the risks. The remaining question is whether we build the pedagogical infrastructure to guide that use, or keep pretending we can contain it with detection tools and declaration forms. The RAND survey of K-12 AI tool adoption showed similar trends from the teacher side: teachers are using AI too, often without institutional support or clear frameworks.

Teens deserve better than policies written out of fear. They deserve AI literacy instruction that teaches them to evaluate outputs critically, to understand when chatbot answers are wrong, and to use these tools as thinking partners, not thinking replacements. The data is clear. The only thing missing is the pedagogical response.

References
