Teaching Writing in the AI Era: Why Process Beats Policing

If AI can generate polished prose in seconds, what is the point of teaching academic writing? I hear this question from instructors all the time, and it is legitimate. Any writing teacher who has watched a student hand in a flawless essay in a shaky second language already knows the question is overdue. Joseph Alvaro’s new handbook, Academic Writing in the AI Era (University Canada West, 2026), tries to answer it with research, classroom activities, and a refreshingly unpretentious tone.

Alvaro is upfront that the book is a starting point. The research base is still forming, the tools keep shifting, and any handbook written now will need updating by next term. I respect that framing. It is the right disposition for a field moving faster than peer review can keep up with.

AI as a Scaffold

Alvaro’s most useful move is to reframe generative AI as a linguistic scaffold for multilingual writers in English for Academic Purposes contexts. Drawing on Du and Alm (2024), he argues that EAP students often use AI the way earlier generations used grammar guides or writing centre tutors: to experiment with phrasing, reformulate sentences, and explore outlines before committing to a draft. Read that way, AI looks less threatening and a lot more pedagogical.

But Alvaro does not let the tool off the hook. The prose generative systems produce may look academically polished on the surface, but the reasoning behind it tends to be shallow. He points to findings that AI essays follow recognizable rhetorical templates, often carrying weak arguments, generalized claims, and fabricated citations. As Alvaro puts it, “the text generated by such systems should be interpreted as plausible language rather than as verified knowledge” (p. 7).
Process Over Product

The real argument of the handbook is that writing instruction has to move its attention from the final essay to the process that produced it. If instructors only grade the end product, they have no way of telling what the student actually did. When the process itself becomes the assessment, through outlines, staged drafts, peer review, oral defenses, and reflective commentary, AI text becomes “something students analyze, critique, and revise rather than something they submit unchanged” (p. 11).

I agree, and I would go further. This is exactly the argument Perkins and Roe made in their 2025 paper on the end of assessment as we know it, and it is the practical extension of the wicked problem Corbin and colleagues described when they mapped the tangled relationship between AI and assessment. Alvaro’s handbook adds the missing piece those earlier papers gestured toward: a concrete set of classroom activities an instructor could actually run on Monday morning.

Those activities include prompting workshops, hallucination hunts, AI summary checks, paraphrase critiques, human versus AI peer review, and transparency reflections. Each one turns AI into the object of analysis, which is where it belongs in a writing course. Students learn to read AI text the way they learn to read a peer-reviewed article, with suspicion and curiosity at the same time.

The AIAS as a Continuum

Alvaro gives central place to the AI Assessment Scale developed by Perkins and colleagues (2024), which I have covered before on the blog. The AIAS treats AI use as a continuum across five levels of permitted involvement, so instructors can calibrate at the assignment level depending on what the task is actually meant to assess. Roe and colleagues (2024) then adapted the scale into an EAP-AIAS framework that acknowledges EAP students already use grammar checkers and translation software, which makes the line between acceptable support and inappropriate substitution fuzzier in language classrooms than elsewhere.

What I like about Alvaro’s treatment of the AIAS is that he refuses the blanket-ban reflex. Alvaro argues, “what distinguishes appropriate use from problematic reliance is not simply the presence of AI in the writing process but the degree to which the student remains intellectually engaged with the task” (p. 14).

Where I Would Push the Handbook Further

The handbook is clear-eyed about detection, calling it a dead end because detectors produce too many false positives and false negatives, and because generative systems keep evolving. I agree. The answer Alvaro offers instead, oral explanations and transparency statements, is sound. I would add that oral defenses, as Hartmann (2025) argued in his case for oral exams in a GenAI world, work best when they are built into the course from the start, not bolted on as a gotcha at the end.

My other quibble is that the handbook spends less time on the instructor workload question. Alvaro acknowledges this more labor-intensive approach is the cost of serious assessment in the AI era, but he does not offer much on how departments should resource it. That is a real political problem for writing programs already stretched thin, and the research community will need to grapple with it before process-based assessment becomes the default.

Still, this is a handbook I will be recommending widely. Alvaro has done the thing the field needed. He has gathered the emerging research, translated it into usable classroom practice, and trusted instructors to adapt it to their own courses. The technology will keep changing. The pedagogical questions, as this handbook shows, are becoming clearer by the month.

References

  • Alvaro, J. J. (2026). Academic writing in the AI era: Theory and practice. A handbook for instructors. University Canada West.
  • Corbin, T., Bearman, M., Boud, D., & Dawson, P. (2025). The wicked problem of AI and assessment. Assessment & Evaluation in Higher Education, 1–17. https://doi.org/10.1080/02602938.2025.2553340
  • Du, J., & Alm, A. (2024). The impact of ChatGPT on English for Academic Purposes (EAP) students’ language learning experience: A self-determination theory perspective. Education Sciences, 14(7), 726. https://doi.org/10.3390/educsci14070726
  • Perkins, M., Roe, J., & Furze, L. (2024). The AI assessment scale revisited: A framework for educational assessment. arXiv preprint arXiv:2412.09029.
  • Perkins, M., & Roe, J. (2025). The end of assessment as we know it: GenAI, inequality and the future of knowing. In AI and the future of education: Disruptions, dilemmas and directions (pp. 76–80). https://durham-repository.worktribe.com/output/4472558/the-end-of-assessment-as-we-know-it-genai-inequality-and-the-future-of-knowing