Beyond Grammar Checking: Building AI Literacy for Multilingual Academic Writers
Article Series - Part 1
Amara sits at her desk, her dissertation proposal open on one screen and ChatGPT open in her browser. A brilliant doctoral student in applied linguistics, she is fluent in Bangla, Hindi, and English. But right now, she’s frozen. Should she ask the AI to polish her literature review? Will that count as cheating? If she doesn’t use it, is she disadvantaging herself compared to native English-speaking peers who don’t need the same level of language support?
Amara’s dilemma is playing out in graduate schools, research labs, and university writing courses worldwide. Multilingual scholars, teachers, and writers stand at a unique crossroads where AI tools promise unprecedented support, yet also present unprecedented risks.
About This Series
This is the first article in a six-part series exploring how multilingual academic writers can navigate the AI era with agency, integrity, and strategic awareness. Over the coming weeks, we’ll build a complete framework: from understanding AI’s capabilities and limitations, to using it strategically across your writing process, to designing equitable policies and assignments that support rather than police multilingual writers.
This series is inspired by my three weeks of work as an English Language Specialist in Bangladesh last December. Yet the work speaks to multilingual writers, teachers, scholars, and students anywhere in the world.
Let’s begin with the foundation: understanding what AI literacy really means.
The Multilingual Writer’s Unique Position
Multilingual writers have always navigated academic writing differently from their monolingual peers. Whether in Bangladesh, Brazil, South Korea, or Spain, these scholars bring rich linguistic repertoires and cross-cultural perspectives to their research and writing. Yet they also face what scholars call the ‘linguistic injustice’ of academic publishing: the pressure to conform to native-like English while their ideas get filtered through translation and cultural adaptation.
Enter AI. Technologies like ChatGPT, Claude, and specialized academic writing assistants offer what seems like a solution: instant grammar correction, style suggestions, and even help generating discipline-specific jargon. But here’s the challenge: without understanding how AI actually works, multilingual writers risk trading one form of linguistic inequality for another.
Recent research reveals a troubling pattern. Liang et al. (2023) found that AI detectors misclassify over 61% of essays written by non-native English speakers as AI-generated, while accurately classifying native-speaker writing. The problem? These detectors measure ‘perplexity’: roughly, how predictable the word choices are. Multilingual writers, who typically use more constrained linguistic expressions, score lower on perplexity measures and get flagged as suspicious (Liang et al., 2023). The very care that multilingual writers bring to their English becomes evidence against them.
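To make ‘perplexity’ concrete, here is a minimal sketch. Perplexity is the exponential of the average negative log-probability a language model assigns to each word; the per-token probabilities below are hypothetical stand-ins for what a real model would produce. Predictable, conventional wording yields low perplexity, which is exactly what detectors treat as a sign of AI authorship.

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability.
    Lower values mean the text is more predictable to the model."""
    avg_neg_logprob = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_logprob)

# Hypothetical per-token probabilities a model might assign:
# conventional, high-frequency phrasing -> higher probabilities
conventional = [0.6, 0.5, 0.7, 0.55, 0.65]
# more idiosyncratic phrasing -> lower probabilities
idiosyncratic = [0.2, 0.1, 0.3, 0.15, 0.25]

print(perplexity(conventional))   # low perplexity: reads as "AI-like" to a detector
print(perplexity(idiosyncratic))  # higher perplexity: reads as more "human"
```

The careful, conventional phrasing that multilingual writers often favor lands on the low-perplexity side of this divide, which is the mechanism behind the bias Liang et al. (2023) document.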
Four Dimensions of AI Literacy
So how do multilingual writers navigate this landscape? The answer lies in developing what I call comprehensive AI literacy: understanding not just how to use AI, but when, why, and whether it serves your goals. This means building competency across four key dimensions:
Operational Literacy: Understanding how large language models (LLMs) generate text. These systems predict the next most likely word based on patterns in their training data; they don’t “think” or “understand” in any human sense. This explains both their power (they’ve seen millions of examples of academic writing) and their limitations (they can confidently generate completely fabricated citations).
Analytical Literacy: Evaluating AI output through rhetorical and disciplinary lenses. Can ChatGPT write a methods section? Sure. Does it understand the epistemological commitments of your field, the subtle differences between interpretive frameworks, or the nuanced argument you’re building? Absolutely not. As Jacob et al. (2025) demonstrate in their case study of a multilingual writer using ChatGPT, the tool works best when the writer maintains critical evaluation and integrates AI suggestions with their own expertise.
Critical Literacy: Questioning bias, epistemology, and whose voices are centered. AI systems are trained predominantly on English texts from Western contexts. They reproduce dominant academic discourse patterns and can erase the very linguistic and cultural perspectives that make multilingual scholarship valuable. Recent research on translanguaging reveals that multilingual doctoral students use AI not just for language support but as a space for ‘power negotiation,’ actively resisting standardization while asserting their disciplinary voice (Ou et al., 2025).
Ethical Literacy: Understanding responsible use aligned with academic integrity norms. This isn’t just about avoiding plagiarism; it’s about maintaining authorship and agency over your ideas. The line between ‘AI-assisted’ and ‘AI-generated’ isn’t always clear, but one principle remains: you should be able to explain and defend every claim in your writing, whether AI helped you express it or not.
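The next-word prediction at the heart of operational literacy can be illustrated with a toy sketch. This is a drastically simplified bigram model, not a real LLM (which uses neural networks over billions of parameters), but the core move is the same: continue the text with whatever the training data makes statistically likeliest.

```python
from collections import Counter

# A hypothetical scrap of "training data"
corpus = ("the study shows the results the study confirms "
          "the study shows the findings").split()

# Count every adjacent word pair the model has "seen"
bigrams = Counter(zip(corpus, corpus[1:]))

def predict_next(word):
    """Return the most frequent word observed after `word` in the corpus."""
    candidates = {nxt: n for (prev, nxt), n in bigrams.items() if prev == word}
    return max(candidates, key=candidates.get) if candidates else None

print(predict_next("study"))  # the statistically likeliest continuation
```

Notice that the model never decides whether a claim is true; it only reproduces frequent patterns. Scaled up, the same property explains both fluent academic-sounding prose and confidently fabricated citations.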
What AI Does Well (and Poorly)
Let’s be specific about AI’s actual capabilities for multilingual academic writers.
Where AI excels:
Grammar and syntax correction at scale
Generating alternative phrasings and vocabulary suggestions
Explaining disciplinary conventions and genre expectations
Helping brainstorm and organize ideas
Summarizing long texts (with careful verification)
Where AI falls short:
Original analysis and interpretation
Accurate citations (it frequently fabricates sources)
Nuanced disciplinary arguments
Cultural and contextual sensitivity
Maintaining your unique scholarly voice
Research on ChatGPT in EFL writing confirms this pattern. Studies show AI can enhance surface-level accuracy and even boost student motivation, but its effectiveness depends entirely on the user’s existing knowledge and critical engagement (Boudouaia et al., 2024).
The Voice Preservation Challenge
Here’s what always challenges me as a writing teacher and researcher: AI can make anyone sound like a proficient academic writer. But should it?
Multilingual writers often bring fresh perspectives precisely because they think across linguistic and cultural boundaries. When we ask AI to make our writing sound more academic or more native-like, we risk erasing the very distinctiveness that makes the scholarship valuable and genuinely ours.
The goal isn’t to avoid AI entirely; that ship has sailed. The goal is to use it in ways that amplify rather than replace your voice. This means being intentional: using AI for brainstorming but doing the analysis yourself; asking for grammar corrections but making your own rhetorical choices; seeking help with genre conventions while maintaining your argument’s integrity.
Moving Forward
Amara from our opening scenario? She eventually found her approach: she uses AI to check her grammar and suggest alternative phrasings, but she makes every analytical claim, interprets every data point, and constructs every argument herself. She keeps an AI log documenting exactly how she used these tools. And most importantly, she can defend every word in her dissertation because the ideas are genuinely hers.
This is what AI literacy looks like in practice: not rejection, not uncritical adoption, but strategic, transparent, and thoughtful integration.
In the next article, we’ll move from understanding AI to using it strategically. How can multilingual writers harness AI as a ‘genre explorer’ to decode the often-mystifying conventions of academic writing? What specific tools work best at different stages of the writing process?
Understanding AI’s capabilities and limits is crucial. But knowing how to actually use it to support your writing process is where the real power lies.
References
Boudouaia, A., Mouas, S., & Kouider, B. (2024). A study on ChatGPT-4 as an innovative approach to enhancing English as a foreign language writing learning. Journal of Educational Computing Research. https://doi.org/10.1177/07356331241247465
Jacob, S., Tate, T., & Warschauer, M. (2025). Emergent AI-assisted discourse: A case study of a second language writer authoring with ChatGPT. Journal of China Computer-Assisted Language Learning, 5(1), 1–22. https://doi.org/10.1515/jccall-2024-0011
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7). https://doi.org/10.1016/j.patter.2023.100779
Ou, A. W., Tai, K. W. H., & Wang, X. (2025). The emergence of academic writers: Multilingual doctoral students’ translanguaging and transpositioning in AI-mediated academic writing. Journal of English for Academic Purposes. https://doi.org/10.1016/j.jeap.2025.101613
To read more about my work in Bangladesh, see my previous Substack article, Empowering Bangladeshi Teachers with Human-Centered AI.