In any classroom or collaborative project, people think together. And once AI is part of that shared space, it doesn’t just help the group complete a task. It rewires how the group talks, focuses, and bonds. That’s the argument at the center of a new study by Riedl, Savage, and Zvelebilova (2025), and it’s one that anyone interested in AI and collaborative learning should take seriously.
AI’s Cognitive Footprint in Human Teams
Riedl et al. propose what they call a “social forcefield,” a multi-level framework that traces how AI reshapes group cognition from the ground up. At the lowest level, something subtle happens: people start borrowing the AI’s words. The researchers call this linguistic entrainment. Team members unconsciously adopt the terminology the AI uses, and that shared vocabulary becomes the foundation for everything else.
One level up, this common language redirects what the team pays attention to and how they build shared mental models. People begin framing the problem through the same conceptual lens, focusing on the same aspects of the task. At the highest level, these cognitive shifts shape group identity and social cohesion, visible in patterns as simple as how often a team says “we.”
The theoretical foundation here is Distributed Cognition Theory, the idea that intelligence doesn’t live in any single brain but emerges from the interaction between people, tools, and environments. Riedl et al. extend this into the social domain by arguing that AI becomes part of a team’s extended cognitive system. I’ve written before about how AI reshapes individual cognition, especially Shaw and Nave’s (2026) work on cognitive surrender, where AI gradually takes over reasoning processes without the person noticing. What Riedl et al. add is the group dimension. The same kind of invisible takeover happens at the collective level, and it cascades through language, attention, and social bonds.

Two Experiments, One Consistent Finding
The evidence comes from two randomized controlled studies.
In Study 1, Riedl et al. recruited 20 teams of three to four people for a 40-minute collaborative puzzle task over video conferencing. Each team was randomly assigned to one cell of a 2×2 design: a voice-only AI assistant gave either helpful or unhelpful information, delivered in either a human-sounding or a robotic voice. Teams adopted the AI's terminology after its interventions, and the effect was strongest when the AI was both helpful and human-sounding. The AI also redirected collective attention: teams were significantly more likely to discuss aspects of the task immediately after the AI mentioned them, regardless of whether the clues were useful.
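To make the entrainment measure concrete, here is a minimal sketch of how one might score word adoption from a turn-taking transcript. This is not the authors' pipeline: the tokenizer, the five-turn window, and the notion of an "AI-introduced" word are all my simplifications.

```python
# A minimal sketch (not the authors' pipeline) of scoring linguistic
# entrainment from a transcript of (speaker, utterance) turns, where
# speaker "AI" marks the assistant. WINDOW and the tokenizer are assumed.

WINDOW = 5  # turns inspected after each AI utterance (my assumption)

def tokens(utterance):
    """Crude tokenizer; a real analysis would lemmatize and drop stopwords."""
    return {w.strip(".,!?;:").lower() for w in utterance.split()}

def entrainment_rate(transcript):
    """Fraction of AI-introduced words that a human reuses within WINDOW turns."""
    said_by_humans = set()
    introduced, reused = set(), set()
    for i, (speaker, utt) in enumerate(transcript):
        words = tokens(utt)
        if speaker == "AI":
            fresh = words - said_by_humans  # words the team hasn't used yet
            introduced |= fresh
            for s, u in transcript[i + 1 : i + 1 + WINDOW]:
                if s != "AI":
                    reused |= fresh & tokens(u)
        else:
            said_by_humans |= words
    return len(reused) / len(introduced) if introduced else 0.0

demo = [("A", "let's sort the pieces"),
        ("AI", "start with the corner pieces"),
        ("B", "right, corner pieces first")]
print(entrainment_rate(demo))  # 1/3: "corner" is adopted; "start", "with" are not
```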
A telling detail: pronoun use tracked social cohesion in revealing ways. Riedl et al. found that teams in the Human-Helpful condition used "we" most frequently, suggesting they folded the AI into their collective identity. Teams in the Human-Unhelpful condition used "we" the least, as if a human-sounding AI that gave bad advice disrupted natural team bonding. The Robotic-Unhelpful teams showed the highest overall pronoun use, perhaps because the unnatural voice made it easy to exclude the AI and tighten human bonds.
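The pronoun signal is even simpler to approximate. A sketch, assuming plain-text utterances; the word list and per-word normalization are my choices, not the paper's coding scheme:

```python
import re

# Illustrative only: a first-person-plural rate per team. The word list
# and normalization by total words spoken are assumptions on my part.
WE_WORDS = {"we", "us", "our", "ours", "let's"}

def we_rate(utterances):
    """Share of all spoken words that are first-person plural."""
    words = [w for u in utterances for w in re.findall(r"[a-z']+", u.lower())]
    return sum(w in WE_WORDS for w in words) / len(words) if words else 0.0

print(we_rate(["Let's check our corners", "we are close"]))  # 3 of 7 words
```

Comparing this rate across the four conditions is, in spirit, what the pronoun analysis does.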
Study 2 pushed the question further. Ninety-seven individuals used ChatGPT-4o to answer customer service complaints, with the system prompt randomly manipulated to emphasize either warmth and empathy or formal company guidelines. After the task, each participant did a face-to-face debriefing interview with a human experimenter, with no AI present. Riedl et al. found that participants spontaneously reused AI-introduced words in these human-only interviews at rates well above chance: exposure to AI vocabulary increased the likelihood of reusing those words in later speech by 16 percent. The AI's linguistic fingerprint persisted even after the AI was gone.
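For readers who want to picture the manipulation, here is roughly what it looks like against the OpenAI chat API. The prompt wording, the function name draft_reply, and the condition labels are paraphrases and placeholders, not the study's actual materials:

```python
import random
from openai import OpenAI  # assumes the official openai Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Paraphrased conditions, not the study's actual system prompts.
SYSTEM_PROMPTS = {
    "warm": "Answer customer complaints with warmth and empathy.",
    "formal": "Answer customer complaints strictly according to formal company guidelines.",
}

def draft_reply(complaint, condition):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPTS[condition]},
            {"role": "user", "content": complaint},
        ],
    )
    return response.choices[0].message.content

condition = random.choice(sorted(SYSTEM_PROMPTS))  # between-subjects assignment
```

The point is that the task is identical across conditions; a single system-prompt line carries the entire manipulation, and its vocabulary still leaks into later human-only speech.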
I covered similar territory when writing about Fan et al.’s (2025) research on metacognitive laziness, where students using AI skipped critical evaluation steps without realizing it. The pattern is the same: AI changes how people process information, and it happens below the threshold of awareness. What Riedl et al. show is that this isn’t limited to individuals. It spills over into how people communicate with each other, even in conversations where the AI plays no role.
AI Doesn’t Need to Be Good to Be Influential
One of the most provocative findings in this paper is that AI influence doesn’t depend on competence or trust. Even when the AI gave unhelpful, misleading information, it still reshaped team dynamics. Teams in the unhelpful conditions resisted the AI’s core terminology, but they adopted its peripheral language, the descriptive labels for elements that weren’t essential to solving the puzzle. Riedl et al. describe this as a low-stakes channel of influence: the AI’s words for non-essential elements carried little cognitive cost, so people adopted them without thinking.
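One way to see how such a split could be measured: compute adoption separately for solution-critical and decorative vocabularies. Both word lists below are invented for illustration; the paper derives its categories from the puzzle itself.

```python
# Hypothetical illustration of the core/peripheral split. Both word lists
# are invented; the paper derives its categories from the puzzle, and this
# sketch ignores whether humans used a word before the AI did.

def simple_tokens(utterance):
    return {w.strip(".,!?").lower() for w in utterance.split()}

def adoption_for(transcript, vocab):
    """Share of vocab words the AI has used that a human later repeats."""
    ai_used, reused = set(), set()
    for speaker, utt in transcript:
        words = simple_tokens(utt) & vocab
        if speaker == "AI":
            ai_used |= words
        else:
            reused |= words & ai_used
    return len(reused) / len(ai_used) if ai_used else 0.0

CORE = {"cipher", "sequence"}          # invented solution-critical terms
PERIPHERAL = {"lantern", "tapestry"}   # invented descriptive labels
# The paper's finding predicts adoption_for(t, PERIPHERAL) >
# adoption_for(t, CORE) in the unhelpful conditions.
```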
The implication is uncomfortable. If AI shapes collective cognition regardless of quality, then reports of positive AI effects in collaboration might sometimes reflect nothing but linguistic alignment, not actual utility. The team sounds more coordinated because everyone is using the same words, but coordinated language doesn’t guarantee coordinated thinking.
I saw the same logic at work in Kosmyna et al.’s (2025) MIT brain imaging study, where ChatGPT reduced the neural effort students put into writing. The output looked good. The cognitive process behind it was shallower. Riedl et al. bring the same concern to group settings: the team looks aligned, but the alignment might be a surface effect of shared vocabulary, not shared understanding.
The Double Edge of Cognitive Alignment
Riedl et al. are careful to name both sides. The same mechanisms that make teams more efficient (shared language, coordinated attention, aligned mental models) can also erode epistemic diversity. When everyone borrows the AI’s framing, the group converges on a narrower range of ideas. Creativity depends on friction, on people seeing the same problem through different lenses. AI-induced homogeneity smooths out that friction.
This connects directly to something Randazzo et al. (2025) explored in their taxonomy of human-AI collaboration styles. Their “cyborg” model, where the human and AI think in deeply intertwined ways, is exactly the kind of arrangement that produces the strongest linguistic entrainment. The question Riedl et al. raise is whether that intertwining comes at the cost of the team’s ability to generate genuinely diverse perspectives.
The design implications are clear, and the authors spell them out. AI systems embedded in teams need transparency about how they’re shaping group language and attention. Controllability matters too, so teams can modulate the AI’s influence when it starts narrowing the conversation. And the whole discussion needs to shift from individual task performance toward group-level dynamics. The key question is no longer whether AI helps teams perform better in the moment. It’s how AI’s presence reshapes the long-term ecology of group cognition.
I’ll add one note on limitations. Both studies used controlled lab tasks, a puzzle and customer service complaints. Real-world teamwork involves higher stakes, longer timelines, and messier social hierarchies. The authors also note that they only measured linguistic entrainment, not gesture, gaze, or emotional synchrony. These findings will need replication in more naturalistic settings before we can draw confident conclusions about classroom teams or faculty collaborations.
The technology is already in the room. AI assistants are embedded in video calls, project management tools, collaborative documents. Every one of those touchpoints is a potential channel for the kind of cognitive alignment Riedl et al. describe. Teachers and administrators need to start asking not just “Is the AI accurate?” but “How is the AI changing the way this group thinks?” That second question is harder. It’s also the one that matters now.
References
- Fan, Y., Tang, L., Le, H., Shen, K., Tan, S., Zhao, Y., Shen, Y., Li, X., & Gašević, D. (2025). Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance. British Journal of Educational Technology, 56(2), 489–530. https://doi.org/10.1111/bjet.13544
- Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing tasks. MIT Media Lab. https://www.media.mit.edu/publications/your-brain-on-chatgpt/
- Randazzo, E., Lifshitz, A. H., Kellogg, K. C., Dell’Acqua, F., Mollick, E. R., Candelon, F., & Lakhani, K. R. (2025). Cyborgs, centaurs and self-automators: The three modes of human-GenAI knowledge work and their implications for skilling and the future of expertise. Harvard Business School.
- Riedl, C., Savage, S., & Zvelebilova, J. (2025). AI’s social forcefield: Reshaping distributed cognition in human-AI teams. arXiv preprint arXiv:2407.17489v2. https://arxiv.org/abs/2407.17489
