AI in L2 Writing: Why Hyland Says the Problem Is Not the Tool

Hyland has spent four decades building the discourse-oriented side of L2 writing research, and his voice on metadiscourse, stance, and academic literacy has shaped how several generations of teachers think about what writing actually does. His new paper in the Journal of Second Language Writing (Hyland, 2026) brings that long view to the question many writing teachers have been asking: what does generative AI mean for learning to write in a second language?

The short answer, in Hyland’s hands, is that AI is neither the end of L2 writing nor a shortcut to solving its old problems. He refuses both the prohibition reflex and the uncritical embrace. What he offers in place of both is a reframing I find genuinely useful. The central question, Hyland argues, is not what AI can do for writing instruction but what kinds of writers, and what kinds of writing, we want to develop.

AI as a Disciplinary Mirror

Hyland’s most striking move is to call AI “a disciplinary mirror: one that reflects back our priorities, values and compromises” (p. 2). That reframing is a sharp observation about how institutions behave under pressure, not technological determinism dressed up in scare quotes.

Faced with a tool that can produce fluent text in seconds, most universities have shored up the status quo. They have bought detection software, tightened policy statements, and repositioned students as potential suspects. Hyland argues this reveals something uncomfortable about what the institution actually values. The default move is control, and the harder work of rethinking what assessment is for stays undone.

I agree, and I would go further. The pattern Hyland describes is the same one Eaton named when she argued for a postplagiarism orientation to academic integrity, and it is the exhaustion Corbin and colleagues traced when they wrote about the wicked problem of AI and assessment. Hyland’s contribution is to give that pattern a name that lands harder. AI is a stress test for educational values, and most institutions are failing it.

Fluency Is Not Competence

The second argument I want to highlight is Hyland’s insistence that fluent output is a poor proxy for learning. He puts it directly: “when fluency is mistaken for competence then pedagogical priorities are decisively distorted” (p. 3). For L2 writers, the consequence is serious. The whole point of a writing course is the struggle, the revision, the grappling with unfamiliar genres. When AI smooths over that struggle, students can walk out with a polished essay and no deeper understanding of the discipline the essay is supposed to belong to.

Hyland is careful not to claim this outcome is inevitable. He notes that AI can support learning when it prompts revision, comparison, and reflection, which lines up with the process-based case Alvaro built in his recent handbook on academic writing in the AI era. What makes Hyland’s version sharper is the institutional critique underneath it. Universities reward fluency because fluency is measurable. Struggle is not. That bias is what turns AI into a problem.

On Detection, Voice, and Whose English

Hyland is unsparing about detection technology. He calls it unreliable, especially for L2 writers whose texts already deviate from native-speaker norms, and he argues that “surveillance-oriented approaches reposition students as potential offenders and teachers as enforcers, undermining trust and diverting attention from learning” (p. 4). I made a similar argument on the blog when I covered the case Dawson and colleagues built for putting validity, not cheating, at the center of assessment design. Hyland extends that argument with a linguistic precision the generalist integrity literature often lacks.

His point about voice is one I find compelling and a little unsettling at the same time. Hyland argues that AI can reproduce the surface markers of academic interaction (hedges, boosters, engagement features) but without any communicative intent. The resulting text has what he calls expressive emptiness. All the moves are present, but the commitment behind them is missing. For L2 writers learning to project stance in a second language, that is a harder problem than it looks. Students can get fluent-looking text from AI that gives them no practice in the very thing metadiscourse instruction is trying to teach.

The inequality argument is the one that hit me hardest. LLMs, Hyland notes, are trained on standardized Anglophone text, so they reproduce a narrow version of academic English and marginalize alternative rhetorical traditions. AI did not create this problem. Hyland is clear about that. But uncritical adoption will amplify it, and L2 writers, the population his paper is written about, will pay the cost.

Where I Think Hyland Could Push Further

My one quibble is that the paper, by design, stays at the level of values and principles. Hyland acknowledges this himself. He calls the piece a dialogic intervention, not a pedagogical manual, and I respect the choice. But the field needs both, and the distance between “we should value agency and responsibility” and “here is how you grade a portfolio when AI is in the room” is where most teachers actually live. Hyland’s values hold up. The implementation work is the next paper, and I hope someone writes it soon.

Still, this is the clearest articulation I have seen of why AI in L2 writing is a pedagogical problem, not a technological one. Hyland’s conclusion deserves to travel. The future of L2 writing in the AI era will be shaped less by technological capacity than by the values we choose to encourage. The tools will keep changing. The question of what we are teaching writing for will not.

References

  • Corbin, T., Bearman, M., Boud, D., & Dawson, P. (2025). The wicked problem of AI and assessment. Assessment & Evaluation in Higher Education, 1–17. https://doi.org/10.1080/02602938.2025.2553340
  • Dawson, P., Bearman, M., Dollinger, M., & Boud, D. (2024). Validity matters more than cheating. Assessment & Evaluation in Higher Education, 49(7), 1005–1016. https://doi.org/10.1080/02602938.2024.2386662
  • Eaton, S. E. (2023). Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19(23). https://doi.org/10.1007/s40979-023-00144-1
  • Hyland, K. (2026). Writing in the AI era: Rethinking writing, research and teaching. Journal of Second Language Writing, 101302. https://doi.org/10.1016/j.jslw.2026.101302