Argument in Academic Writing: A Systematic Scoping Review
Abstract
Background: Argumentative writing represents a core dimension of academic literacy within higher education; however, research concerning “argument,” “argumentation,” and “argumentative writing” remains dispersed across distinct disciplinary paradigms and commonly draws upon non-equivalent conceptual definitions and analytical methodologies. This fragmentation has practical consequences for teaching and assessment, particularly as technology-enhanced writing environments and AI-mediated support expand the range of tools used to scaffold and evaluate argumentation.
Purpose: To map the conceptual and methodological approaches to studying argument in academic writing in higher education, including definitions of argumentative writing, the argument models employed, the ways argument quality is operationalized, the dominant research directions, and the role of digital and AI-mediated environments in the teaching and assessment of argumentation.
Materials and Methods: The review followed PRISMA-ScR guidance and used the PCC (Population, Concept, Context) framework to define source eligibility. A structured search was conducted in Scopus on 27 September 2025 using predefined Boolean queries covering argumentative writing, academic writing, and higher education; backward reference searching was also applied to identify additional relevant studies. Records were screened against inclusion and exclusion criteria, and data were charted in a standardised extraction form. The synthesis combined (a) bibliometric keyword co-occurrence mapping using VOSviewer to identify major thematic blocks and (b) expert thematic coding to interpret conceptualisations, models, and methodological patterns across the literature under analysis. Inter-coder agreement was established during iterative coding.
Results: Ninety-five sources were included in this review. Publication output increased after 2018, with the largest share of studies appearing around 2019–2020, and the evidence base was geographically concentrated in Asia and the Americas. Bibliometric mapping and expert synthesis converged on several recurring blocks: theoretical and definitional work on argument/argumentation in academic writing; pedagogical studies on teaching argumentative writing and related scaffolding; assessment-oriented research (rubrics, indicators of quality, and technology-supported evaluation); sociolinguistic and interlinguistic perspectives, especially in EFL/L2 contexts; and an emergent strand focused on digital writing environments, automated feedback, and AI-enabled support. Across the corpus, Classical, Toulmin-based, and Rogerian traditions function as influential modelling frameworks, but they are applied inconsistently, and operationalisations of argument quality vary substantially, most commonly privileging detectable structural elements over comparably stable measures of reasoning strength or epistemic integration.
Conclusion: The review shows that the development of research on argumentative writing in higher education is constrained not by a lack of studies, but by the absence of conceptual and methodological coherence. Differences in the definitions of argument, the models used for its analysis, and the approaches to assessing its quality limit the comparability of findings and complicate the translation of research insights into teaching and assessment practice. Under these conditions, the integration of argumentation theories with more robust and substantively oriented approaches to argument assessment becomes particularly important, especially against the backdrop of the growing automation of the structural aspects of writing in digital and AI-mediated environments.
Keywords: Argumentative writing, Academic writing, Higher education, Argumentation, Argument quality, Toulmin model, Rogerian argument, Assessment, Writing pedagogy, EFL writing, Digital writing environments, Artificial intelligence
Introduction
Argumentative writing has become a core component of academic literacy in higher education because it is one of the main textual forms through which students participate in disciplinary discussion and demonstrate reasoned judgement. In institutional settings, it is increasingly treated as a learning outcome precisely because it makes visible how writers justify positions, evaluate competing claims, and work with evidence in a transparent way (Goldman et al., 2016; Ozfidan and Mitchell, 2020; Pei et al., 2017; Sharadgah et al., 2019; Zhang and Zhang, 2021; Kaewneam, 2025). In academic writing, argumentation is realised through adopting a stance on a contestable issue and supporting it with claims, relevant evidence, and explicit reasoning (Rapanta and Macagno, 2019; Yasuda, 2023). At the same time, argumentative writing is not reducible to persuasion, since it also performs an epistemic function when writers use visual (Tikhonova and Mezentseva, 2025) and linguistic sources to qualify, support, and contest claims, thereby shaping stance and contributing to knowledge building (Ferretti and Fan, 2016; van Weijen et al., 2019; Yasuda, 2023).
Despite this shared educational importance, research on argumentative writing operates with noticeably different understandings of what counts as an argument and how argument quality should be described. Studies often address different aspects of the same phenomenon and then treat their results as comparable, even when the underlying construct focus is not aligned. This divergence is visible in how quality is operationalised. Structural features of claims and support, dialogic work with counterpositions, and rhetorical organisation across genres are invoked with different emphases and are sometimes treated as interchangeable indicators of competence (Stapleton and Wu, 2015; Yang, 2021; Lu and Swatevacharkul, 2021; Su et al., 2023). As a result, the field has developed a productive vocabulary, but commensurability across studies remains limited, and identical labels can conceal distinct analytical commitments (Ferretti and Fan, 2016; Allen et al., 2019).
Over the past decade, research on argumentative writing has expanded across institutional settings, including higher education, teacher education, multilingual classrooms, and academic literacy programmes, and it has increasingly moved into digitally mediated environments that shape both writing processes and the forms of instructional support available to writers (Newell et al., 2015; Cheong et al., 2021; De Castro Daza et al., 2022). At the same time, a growing body of work has turned to the interactional and evidential labour that academic writers are expected to perform, including anticipating readers’ objections, engaging alternative viewpoints, and producing rebuttals grounded in credible sources (Kibler and Hardigree, 2017; Sánchez-Peña and Chapetón, 2018; Hutasuhut et al., 2023). These shifts have made differences in constructs and methods more visible and, in practical terms, have raised the stakes of how the field defines what counts as a successful argument in academic writing.
As the number of works on the topic under discussion has increased, conceptual fragmentation has become increasingly apparent. Definitions of argument, argumentation, and argumentative writing diverge across applied domains. Some approaches prioritise reasoning processes and their development (Mulyati and Hadianto, 2023; Hu and Saleem, 2023), others foreground evidence use and rhetorical effectiveness (Shi et al., 2022; Uzun, 2024), and a further strand conceptualises argumentation in dialogic and sociocultural terms (Najjemba and Cronjé, 2020; De Castro Daza et al., 2022). This divergence is also reflected in the use of analytic frameworks. Classical, Toulmin-based, and Rogerian models are frequently cited, yet they are applied in non-equivalent ways, and the boundaries between their functions are not always made explicit (Stapleton and Wu, 2015; Zainuddin and Shammem, 2016). The problem is compounded by locally adapted coding schemes and context-specific instruments which may be well aligned with local educational aims but reduce comparability across studies (Lesterhuis et al., 2018; Paris et al., 2025). In this context, existing reviews have tended to remain restricted in scope and do not fully integrate definitional, methodological, and technology-mediated developments into a single map of the field (Amini Farsani et al., 2025).
Against this background, the task is not simply to compile findings but to reconstruct how the field is organised. This requires clarifying how central concepts are defined, how argument structure and function are modelled, and which recurring patterns shape research trajectories across traditions. The need for such reconstruction has become more acute as automated writing evaluation, argument mining, and AI-assisted writing have expanded and begun to influence both pedagogy and assessment (Wambsganss et al., 2020; Lawrence and Reed, 2020; Guo et al., 2022; Yang et al., 2023; Khampusaen, 2025). Digital environments enable scalable feedback and analytics, which sharpens a fundamental question in the literature: which aspects of argumentation are being supported, and which are being measured, when support is mediated by tools (Benetos and Bétrancourt, 2020; De Castro Daza et al., 2022). In parallel, cognitive, sociocultural, and multilingual strands continue to propose new typologies of reasoning and new approaches to evaluating argument quality, widening the conceptual space that must be mapped rather than assumed to follow a single developmental pathway (Mateos et al., 2018; Kleemola et al., 2022; Wang and Newell, 2025).
An integrative reconstruction is consequential for three groups of readers. For researchers, it clarifies which constructs and measures can be treated as comparable and where disagreements are better explained by non-equivalent operationalisations than by substantive theoretical conflict. For educators and programme designers, it makes visible what different models privilege in instruction and how those priorities translate into tasks and rubrics, whether the emphasis falls on textual organisation, warranting and evidence-based reasoning, or dialogic engagement with alternative positions (Rapanta and Macagno, 2019; Stapleton and Wu, 2015; Lu and Swatevacharkul, 2021). For scholars working in technology-enhanced writing, the map is necessary for aligning tool affordances with explicit theoretical commitments rather than treating digital support as conceptually neutral (Wambsganss et al., 2020; Lawrence and Reed, 2020; Guo et al., 2022).
In response, the present study conducts a systematic scoping review of research on argument in academic writing in higher education. The review treats argumentative writing as a genre-specific realisation of argumentation that brings together instructional, cognitive, and rhetorical dimensions, providing a coherent basis for mapping conceptual and methodological diversity across the field (Ferretti and Fan, 2016; Allen et al., 2019; Yasuda, 2023). Conceptually, the reconstruction is organised around four analytic axes, namely definitional choices, model selection, operationalisations of argument quality, and the mediation of instruction and assessment through digital and AI-enabled environments. The analytic stance is integrative and diagnostic. Rather than adopting a single model as a default, the review reads the literature as a set of partially overlapping traditions and examines where they converge, where they diverge, and which tensions persist, particularly in definitions, the use of argument models, and the operationalisation of argument quality (Najjemba and Cronjé, 2020; Shi et al., 2022; Lesterhuis et al., 2018; Paris et al., 2025). This position follows from the interdisciplinary character of the field and recognises that different traditions may capture different aspects of argumentative writing (Musa, 2019; Hutasuhut et al., 2023; De Castro Daza et al., 2022).
To maintain analytical tractability, the review applies explicit eligibility criteria and is bounded to English-language publications from 2015 to 2025 that address higher-education academic writing contexts. These boundaries function as a principled delimitation for reconstructing how argument models, typologies, and quality dimensions are developed and applied within a defined segment of the literature. On this basis, the review addresses the following research questions:
RQ1. How is “argumentative writing” conceptualised in higher-education academic writing research, and which definitional components are treated as constitutive of the phenomenon?
RQ2. Which argument models and analytic frameworks are used to represent argument structure and relations in student academic writing, and how are these frameworks operationalised in empirical studies?
RQ3. How is argument quality operationalised across the field, and which dimensions and indicators are most commonly used in assessment and analysis?
RQ4. Which thematic and methodological patterns organise the mapped literature, including dominant research foci, study designs, and educational contexts?
RQ5. How do digital and AI-mediated environments shape the teaching, scaffolding, and assessment of argumentative writing, and how are technology-enhanced interventions aligned with the underlying argument constructs?
Method
Transparency Statement and Protocol
We conducted a scoping review to map the existing literature on argumentative writing in higher education and to address the research questions. The review aimed to delineate the scope of the research landscape, identify emerging evidence, and highlight knowledge gaps, thereby contributing to ongoing scholarly discourse and educational policy development. The review adhered strictly to the PRISMA-ScR protocol (the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews). Prior to commencing the study, a detailed protocol was developed and agreed upon. The authors confirm that this manuscript provides an accurate, thorough, and comprehensive account of all key aspects of the research conducted.
Eligibility Criteria
This scoping review followed a structured methodological framework comprising five key stages: (1) defining the research questions; (2) identifying relevant sources; (3) selecting studies for inclusion; (4) charting and extracting pertinent data; and (5) collating, summarizing, and reporting the results. Inclusion criteria were established using the PCC framework as recommended in PRISMA-ScR guidelines. Additional eligibility restrictions were applied based on language of publication, publication date range, geographical focus/affiliation of the first author or study setting, and document type (Table 1). The review encompassed various source types, including peer-reviewed journal articles, books, book chapters, conference proceedings/papers, and one unpublished master’s thesis.
Table 1. Eligibility Criteria
Таблица 1. Критерии отбора

Information Sources and Search Strategy
The literature search was performed in the Scopus database. Adherence to the PRISMA-ScR protocol is depicted in the flow diagram (Figure 1). A preliminary search (n = 375 records identified) informed the selection and refinement of search terms and index terms pertinent to argumentative writing in higher education contexts. The final search strategy, incorporating these optimized terms, was run on September 27, 2025. Search terms, obtained and refined by consulting relevant publications on the topic of this study, were combined using Boolean operators (AND, OR, NOT) and truncation symbols. The search entries were the following:
- TITLE-ABS-KEY ("argumentative writing" OR "argument writing" OR "written argument*" OR "academic argument*");
- TITLE-ABS-KEY ((argument* OR reasoning) AND ("academic writing" OR "university writing" OR "higher education writing"));
- TITLE-ABS-KEY (("argumentative writing" OR "written argument*" OR "academic argument*" OR argument*) AND ("academic writing" OR "higher education" OR university OR EAP) NOT (law OR legal OR court OR medical OR clinical OR debate OR speech OR politic* OR "high school" OR K-12));
- TITLE-ABS-KEY ((argument* AND writing AND (university OR "higher education")) NOT (debate OR speech OR "oral argument*" OR rhetoric*));
- TITLE-ABS-KEY (("argumentative writing" OR argument*) AND ("academic writing" OR university OR "higher education") NOT (primary OR elementary OR "middle school" OR "high school" OR K-12)).
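For illustration, the Boolean logic underlying these queries can be sketched as a simple screening function (a simplified approximation, not the Scopus query engine; the term lists below are abbreviated, and TITLE-ABS-KEY is approximated by matching against a record's combined title, abstract, and keywords):

```python
# Illustrative sketch of the Boolean screening logic behind the queries above.
# NOT the Scopus engine: TITLE-ABS-KEY is approximated by substring matching
# over a record's combined title/abstract/keywords, and truncation (argument*)
# is approximated by including the bare stem in the term list.

ARGUMENT_TERMS = ["argumentative writing", "argument writing",
                  "written argument", "academic argument", "argument"]
CONTEXT_TERMS = ["academic writing", "higher education", "university", "eap"]
EXCLUDE_TERMS = ["law", "legal", "court", "medical", "clinical", "debate",
                 "speech", "politic", "high school", "k-12"]

def matches(record_text: str) -> bool:
    """Approximate the third query: argument-terms AND context NOT exclusions."""
    text = record_text.lower()
    has_argument = any(term in text for term in ARGUMENT_TERMS)
    has_context = any(term in text for term in CONTEXT_TERMS)
    has_exclusion = any(term in text for term in EXCLUDE_TERMS)
    return has_argument and has_context and not has_exclusion

print(matches("Argumentative writing in higher education EFL classrooms"))  # True
print(matches("Oral argument and debate in law school moot courts"))        # False
```

Such a function captures only the combinatorial structure of the queries; the actual retrieval relied on Scopus field codes and index terms.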
Eighty-one eligible studies were ultimately retained from the Scopus database. Furthermore, the reference lists of the selected studies were examined to identify additional relevant research; this backward search retrieved 14 further studies (as shown in Figure 1).
Figure 1. PRISMA-ScR Protocol
Рисунок 1. Протокол PRISMA-ScR

Selection of Evidence Sources
The references retrieved from the search were imported into a Zotero library for organization. Duplicate records were identified and removed using the reference management software's built-in deduplication function. Two independent reviewers screened the deduplicated library in sequential phases: (1) title, keyword, and abstract screening against the predefined inclusion and exclusion criteria; (2) full-text evaluation of potentially eligible sources. Consensus discussions were held after each phase to resolve any discrepancies regarding eligibility. Persistent disagreements were adjudicated by a third reviewer.
All duplicate records were removed prior to screening. During the title, keyword, and abstract screening stage, 49 sources were excluded as they fell outside the scope of argumentative writing. Of the remaining 289 sources, 78 were excluded due to unavailability of full text. Full-text review of the 211 accessible sources resulted in the exclusion of 130 that did not meet the inclusion criteria, leaving 81 sources for initial inclusion. An additional 14 relevant sources were identified through backward citation searching (hand-screening of reference lists from included studies). Ultimately, 95 sources were included in this scoping review (see Appendix 1 for the complete list).
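The selection flow reported above can be verified arithmetically; the following sketch reproduces the counts stated in the text:

```python
# Arithmetic check of the selection flow (all figures taken from the text).
screened = 49 + 289       # records entering title/keyword/abstract screening
excluded_scope = 49       # outside the scope of argumentative writing
remaining = screened - excluded_scope           # 289
no_full_text = 78         # full text unavailable
accessible = remaining - no_full_text           # 211
excluded_full_text = 130  # did not meet inclusion criteria at full-text review
from_scopus = accessible - excluded_full_text   # 81
from_references = 14      # backward citation searching
included = from_scopus + from_references
print(included)  # 95
```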
Data Charting Process
Data charting was conducted in duplicate and independently by two reviewers, with one reviewer performing initial extraction and the second conducting verification of all entries. Any inconsistencies were resolved via consensus discussions. A pre-defined, standardized Excel-based charting template was used to extract and organize pertinent data from included sources. Charted variables encompassed: author names and institutional affiliations; year of publication; document objectives and summary description; definitions of argumentative writing; argument classification schemes; argument structures/models; research trends in argumentative writing; and considerations of digital contexts and assessment. This methodical process supported a rigorous descriptive synthesis of the evidence in relation to the review's aims.
Summarising and Reporting the Results
Following data charting, the reviewers synthesized findings related to key aspects of “argument” identified in the sources. Given the observed terminological inconsistency during screening, a dedicated analysis of definitions of “argument” and “argumentative writing” was performed to identify shared core characteristics, ultimately informing a consensus operational definition. Definitions were systematically compiled into numbered Microsoft Word documents.
Thematic analysis adhered to the reflexive approach described by Braun and Clarke (2006), involving: familiarization with the data; independent and collaborative code generation and refinement; development of candidate themes; independent application of codes/themes by both coders; and iterative review to achieve consensus. Inter-coder reliability reached >93% agreement on themes, codes, and extracts. Disagreements were resolved via detailed discussion, leading to code/theme adjustments and a confirmatory recoding round. The same structured coding procedure was applied to synthesize evidence on the conceptualisations, argument models, and assessment approaches documented in the reviewed literature.
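The reported agreement figure corresponds to simple percent agreement across coded extracts; a minimal sketch of that calculation follows (the code labels below are invented for illustration and are not taken from the actual coding scheme):

```python
# Minimal sketch of percent agreement between two coders over the same extracts.
# The labels are invented for illustration; the review itself reports >93%.
coder_a = ["definition", "model", "quality", "pedagogy", "model", "quality"]
coder_b = ["definition", "model", "quality", "pedagogy", "model", "assessment"]

agreements = sum(a == b for a, b in zip(coder_a, coder_b))
percent_agreement = 100 * agreements / len(coder_a)
print(f"{percent_agreement:.1f}%")  # 83.3%
```

Percent agreement does not correct for chance; chance-corrected coefficients such as Cohen's kappa are a common complement when categories are few.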
Results
Demographic Characteristics of the Selected Sources
Publication Years of the Included Sources
An analysis of 95 selected studies reveals a clear developmental trajectory in academic writing research over the past decade (Figure 2). In the early period (2015-2017), publication output was relatively modest, slowly increasing during these years, indicating the consolidation of foundational research themes. This upward trend continued into 2018, suggesting growing scholarly engagement with academic and argumentative writing.
A marked expansion occurred in 2019 and 2020, when the number of studies peaked at twelve per year. This surge reflects intensified interest in cognitive, pedagogical, and technological dimensions of academic writing, including assessment, instructional interventions, and digital tools. Although output declined slightly in 2021, productivity remained consistently high in 2022 and 2023, demonstrating sustained research momentum and diversification of methodological approaches.
In the most recent years, 2024 recorded six studies and 2025 seven studies. This apparent decrease is best interpreted cautiously, as it may reflect incomplete indexing and publication delays rather than a substantive reduction in scholarly activity. Overall, the longitudinal pattern suggests that academic writing has evolved into a mature and stable research field, characterized by strong growth after 2018, a productivity peak around 2019-2020, and continued high-level output in subsequent years.
Figure 2. Yearwise distribution of the sources
Рисунок 2. Распределение источников по годам

Geographic Affiliation of the Sources’ Authors
The geographic distribution of the selected studies' author affiliations (Figure 3), mapped using MapChart software, is presented as a density map (dark red represents higher output; light orange shows lower output). The clearest pattern is the dominance of China and the United States, which are tied as the most frequent affiliations (18 studies each). A second tier of contribution is led by Indonesia (10), which is notably prominent in Southeast Asia. Several countries form a mid-level group, including Iran and Thailand (6 each) and Canada (4), producing visible but less intense shading. Additional contributions are more dispersed: Malaysia and Spain contribute 3 affiliations each, while a set of countries, including the Netherlands, Switzerland, Turkey, Saudi Arabia, Colombia, and New Zealand, appear at 2 affiliations each. Many other countries occur only once, such as Australia, Japan, India, Germany, Finland, Mexico, Portugal, South Africa, Egypt, the UK, the UAE, Vietnam, Belgium, Croatia, and Hungary, which corresponds to very light shading or minimal visibility on the map.
Figure 3. Geographic distribution of the sources
Рисунок 3. Географическое распределение источников

Regionally, the pattern indicates a strong concentration in Asia (53.7% of affiliations), followed by the Americas (26.3%), with Europe (14.7%) contributing across many countries but in smaller, fragmented amounts. Oceania (3.2%) and Africa (2.1%) are comparatively limited. Research activity is concentrated in a few key countries (China and the USA) supported by a smaller but substantial cluster in Southeast Asia, while much of the rest of the world shows scattered, low-frequency contributions to the topic.
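These regional shares follow directly from the per-country affiliation counts reported above; the following sketch recomputes them (the regional grouping, e.g. counting Turkey and Saudi Arabia under Asia and Egypt under Africa, is our assumption, as the text does not state it explicitly):

```python
# Recomputing the regional shares from the per-country counts reported above.
# Grouping assumption (not stated in the text): Turkey and Saudi Arabia are
# counted under Asia, Egypt under Africa.
regions = {
    # China, Indonesia, Iran, Thailand, Malaysia, Turkey, Saudi Arabia,
    # Japan, India, Vietnam, UAE
    "Asia": 18 + 10 + 6 + 6 + 3 + 2 + 2 + 1 + 1 + 1 + 1,
    # USA, Canada, Colombia, Mexico
    "Americas": 18 + 4 + 2 + 1,
    # Spain, Netherlands, Switzerland, plus seven single-affiliation countries
    "Europe": 3 + 2 + 2 + 1 + 1 + 1 + 1 + 1 + 1 + 1,
    # New Zealand, Australia
    "Oceania": 2 + 1,
    # South Africa, Egypt
    "Africa": 1 + 1,
}
total = sum(regions.values())  # 95
shares = {r: round(100 * n / total, 1) for r, n in regions.items()}
print(shares)
# {'Asia': 53.7, 'Americas': 26.3, 'Europe': 14.7, 'Oceania': 3.2, 'Africa': 2.1}
```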
Types of the Selected Sources
An analysis of the publication types in the selected sources shows a strong dominance of journal articles, which account for approximately 84%. This indicates that research on argumentative writing is primarily disseminated through peer-reviewed journals, reflecting a mature and actively publishing research field. Book chapters are represented by about 12%, suggesting a secondary but still meaningful contribution from edited volumes, typically used for theoretical consolidation, methodological discussion, or pedagogical synthesis. Books are only around 2%, which implies that monographic treatments of argumentative writing are relatively rare compared with article-based dissemination. Only one conference paper (approximately 1%) and one unpublished master’s thesis (approximately 1%) are present, indicating that grey literature and preliminary research outputs play a minimal role in this corpus (Figure 4).
Figure 4. Publication types of the sources
Рисунок 4. Типы выбранных источников

The distribution underscores that research on argumentative writing in academic settings relies primarily on journal-based sources, with books and chapters providing supplementary theoretical or pedagogical contributions. This pattern reflects a mature, empirically grounded research area where peer-reviewed articles serve as the central vehicle for knowledge accumulation and dissemination.
Clusters Identified by VOSviewer
To obtain an initial overview of the research space, we mapped keyword co-occurrences in the selected corpus using VOSviewer (Figure 5). Each colour represents a separate cluster obtained through modular clustering, based on the frequency with which keywords co-occur in the selected publications. The size of each node corresponds to the number of times the keyword appears in the corpus, while the thickness of each line indicates the strength of the connection between two terms. Nodes in the centre of the network are more connected to other terms, while those on the periphery are relatively isolated. The software produced 22 clusters, reflecting the thematic dispersion of the literature and the tendency of keywords to form small, locally dense groupings. Because VOSviewer relies on co-occurrence rather than semantic proximity, the resulting structure is informative as a first-pass map but too granular for analytic synthesis. For this reason, we use the VOSviewer output as an exploratory baseline and then consolidate the 22 clusters into four higher-order thematic clusters guided by semantic relatedness, shared research tasks, and established classifications in argumentation research (Appendix 2).
Figure 5. Distribution of the clusters based on the co-occurrence network of words in the VOSviewer program
Рисунок 5. Распределение кластеров на основе сети совместного появления ключевых слов в программе VOSviewer

Figure 5 therefore serves as a visual point of departure for the subsequent reconstruction, rather than as an analytic classification in its own right. Because keywords are grouped by co-occurrence patterns rather than by semantic proximity or field-specific conceptual relations, the procedure can yield an overly fine-grained structure in which conceptually adjacent topics are distributed across multiple small clusters. Accordingly, we do not interpret the 22 clusters as analytic categories; instead, we consolidate them into four enlarged thematic clusters (Appendix 2). The consolidation is guided by three criteria: semantic proximity of keywords as indicators of a shared research direction, unity of the research task, and consistency with established classifications and typologies in argumentation research. As a result, four conceptually coherent clusters were identified: (1) Theoretical and methodological foundations of argumentation in academic writing; (2) Evaluation and analysis of argumentative writing; (3) Sociolinguistic and interlinguistic aspects; (4) Cognitive-psychological and educational determinants of argumentation.
This approach made it possible to eliminate the excessive fragmentation of automatic clustering and to present the structure of the research field in a more coherent and analytically appropriate form. In the sections that follow, these enlarged blocks are compared with clusters identified manually through expert analysis of the annotated publications.
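The keyword co-occurrence counting that underlies a map like Figure 5 can be sketched minimally as follows (the keyword lists are invented for illustration; VOSviewer additionally normalises link strengths and applies modularity-based clustering, neither of which is reproduced here):

```python
from collections import Counter
from itertools import combinations

# Minimal sketch of keyword co-occurrence counting, the raw input to a
# co-occurrence map. Keyword lists are invented for illustration.
publications = [
    ["academic writing", "argumentation", "toulmin model"],
    ["academic writing", "critical thinking"],
    ["argumentation", "toulmin model", "assessment"],
]

cooccurrence = Counter()
for keywords in publications:
    # one link per unordered pair of distinct keywords in the same record
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

print(cooccurrence[("argumentation", "toulmin model")])  # 2
```

Node size in the map corresponds to a keyword's total frequency, and link thickness to pair counts of this kind.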
Cluster 1. Theoretical and Methodological Foundations of Argumentation in Academic Writing
This cluster covers studies in which argumentation is viewed as a rhetorical-discursive phenomenon embedded in the system of academic writing and described through the prism of established theoretical and methodological approaches. The key concepts include academic writing (Ferretti and Fan, 2016; MacArthur and Graham, 2016; Uzun, 2024), academic integrity (Goldman et al., 2016; van Weijen et al., 2019; Deane et al., 2019), Toulmin argument structure (Ferretti and Fan, 2016; Zainuddin and Shammem, 2016; Rapanta and Macagno, 2019; McCarthy et al., 2022), argument quality (Stapleton and Wu, 2015; Lesterhuis et al., 2018; Allen et al., 2019), argumentation (Newell et al., 2015; Zhang, L., 2016; Lawrence and Reed, 2020; Yasuda, 2023; Kaewneam, 2025), critical thinking (Pei et al., 2017; Sharadgah et al., 2019; Raj et al., 2022; Hu and Saleem, 2023; Wang and Newell, 2025), appraisal theory (Yang, 2016; Hatipoğlu and Algı, 2018; Papangkorn and Phoocharoensil, 2021), and systemic functional linguistics (Zhang, X., 2019; Zhang and Zhang, 2021; Uzun, 2024), which indicates the considerable theoretical depth of the field.
Several sub-areas can be identified within this cluster. The first is related to the adaptation of classical models of argumentation (in particular, Toulmin's structure) to the teaching of written language in an academic context (Newell et al., 2015; Weyand et al., 2018). The second involves the integration of critical thinking and metacognitive strategies into the process of developing argumentative competencies, which is reflected in the active use of the concepts of metacognition and critical thinking (Newell et al., 2015; Mateos et al., 2018; Arroyo González et al., 2020; Lu and Swatevacharkul, 2021). The third is the application of appraisal theory (Martin and White, 2005; Yang, 2016) and systemic functional linguistics (SFL) to analyze the pragmatic and interpersonal functions of argument in academic texts (Stapleton and Wu, 2015).
To characterise the internal dynamics of Cluster 1, we interpret the VOSviewer research timeline and density maps (Figures 6-7). The color scale in Figure 6 shows the average publication year of the works in which a keyword appears: purple and blue shades indicate earlier studies (2016-2019), light blue and greenish shades the middle period (2020-2022), and yellow and light green shades the most recent studies (2023-2025). The timeline shows that academic writing, academic integrity, Toulmin-oriented argument structure, and critical thinking constitute a stable conceptual core across the period, with early consolidation visible from 2016-2018. In 2020-2022, the thematic profile broadens to include argument quality, metacognition, and appraisal theory. In 2023-2025, the timeline shows stronger coupling to digitally mediated and intercultural contexts, suggesting that these contexts have become more tightly connected to the established core.
Figure 6. Distribution of research topics on argumentative writing over time (2016-2025) according to VOSviewer visualization data
Рисунок 6. Распределение тем исследований по аргументативному письму по годам (2016-2025 гг.) согласно данным визуализации VOSviewer

The density map (Figure 7) shows high saturation in the same area, indicating sustained attention and frequent co-occurrence of the corresponding keywords within the mapped corpus. The colored areas reflect the density of keyword co-occurrence: the brighter and warmer the color (yellow, light green), the higher the concentration of publications on the topic, whereas cooler shades (blue, purple) indicate a lower frequency of the term in the sample and lower research saturation. Together, the timeline and density views confirm that foundational constructs such as academic writing, Toulmin-oriented structure, and critical thinking form a stable conceptual core, while later years add new contexts of application and new methodological entry points.
Figure 7. Density of research on argumentative writing topics based on mapping the co-occurrence of keywords in VOSviewer
Рисунок 7. Плотность исследований по темам аргументативного письма на основе картирования совместного появления ключевых слов в VOSviewer

Cluster 2. Assessment and Analysis of Argumentative Writing[3]
This cluster focuses on developing, validating and applying tools for assessing argumentative writing, and on using digital technologies to support and analyse written texts. Key concepts include assessment (Lesterhuis et al., 2018; Rapanta and Macagno, 2019; Deane et al., 2019), rubric development (MacArthur et al., 2019; Ozfidan and Mitchell, 2022), learning outcomes (Hisgen et al., 2020; Luna et al., 2020; Hu and Saleem, 2023), comparative judgement (Stapleton and Wu, 2015; Lesterhuis et al., 2018), text assessment (Allen et al., 2019; McCarthy et al., 2022), validity (Lesterhuis et al., 2018; Ozfidan and Mitchell, 2022), digital writing support (Benetos and Bétrancourt, 2020; Jin et al., 2020; Guo et al., 2022; Shi et al., 2022), technology-enhanced learning (Lam et al., 2018; Latifi et al., 2020; Wambsganss et al., 2020), corpus-based study (Allen et al., 2019; Yang J. et al., 2023), interactional metadiscourse (Papangkorn and Phoocharoensil, 2021; Al-Otaibi and Hussain, 2024).
Research within this cluster addresses two key objectives. First, it focuses on the development of objective and reliable procedures for assessing argumentative writing (Wambsganss et al., 2020; Cheong et al., 2021), including the design of analytic scoring schemes and rubrics and the empirical examination of their validity and interpretability (e.g., Abdollahzadeh et al., 2017; Deane et al., 2019; Ozfidan and Mitchell, 2022; Stapleton and Wu, 2015). These studies analyse how textual features such as claims, evidence, counter-arguments, and engagement markers are operationalised in assessment criteria and how rater judgements align with theoretical models of argument quality.
Second, this body of research investigates the integration of digital and technology-enhanced tools into both assessment and instruction. This includes the use of corpus-based analysis to examine patterns of support and counter-arguments in learner writing (McCarthy et al., 2022), scenario-based and automated assessment approaches (Deane et al., 2019), and digital authoring and feedback systems designed to scaffold argumentative processes (Benetos and Bétrancourt, 2020; Maharani and Santosa, 2021). More recent studies explore AI-mediated and chatbot-supported writing environments, demonstrating their potential to provide real-time scaffolding, formative feedback, and enhanced learner engagement in argumentative writing tasks (Guo et al., 2022; Guo et al., 2023; Su et al., 2023; Khampusaen, 2025).
Across the mapped studies, technology-mediated support is not distributed uniformly across modelling traditions. Work situated in assessment and analytic coding most often draws on Toulmin-oriented componential logic and is commonly implemented in technology-enhanced formats that foreground structured reasoning and feedback, including blended and gamified designs (Lam et al., 2018; Jin et al., 2020; Wambsganss et al., 2020; Shi et al., 2022; Mulyati and Hadianto, 2023). By contrast, research discussing digital authoring and online training tends to treat support at the level of general writing environments rather than framing it around Classical arrangement as an explicit model-specific design principle (Benetos and Bétrancourt, 2020; Luna et al., 2020; Maharani and Santosa, 2021). Studies with a dialogic orientation increasingly report AI-mediated and chatbot-supported scaffolding that prompts writers to articulate alternative positions and revise toward more dialogic formulations (Najjemba and Cronjé, 2020; De Castro Daza et al., 2022; Guo et al., 2022; Guo et al., 2023; Su et al., 2023; Khampusaen, 2025).
In the early stage (2016–2019), research in this cluster centred on assessment, rubric development, learning outcomes, validity, and corpus-based study. From 2020 to 2022 the focus shifted to comparative judgement and interactional metadiscourse, and from 2023 to 2024 digital writing support and technology-enhanced learning developed actively. This area (centre-right) shows one of the highest densities on the heatmap, especially around argumentative writing and assessment, reflecting the intensive research effort and inter-cluster connections that make assessment a central element of the field.
Cluster 3. Sociolinguistic and Interlinguistic Aspects of Argumentation[4]
This section covers research examining argumentation in the context of interlingual interference, cultural and linguistic differences, and the specifics of teaching English as a second or foreign language. The key terms identified in this section are EFL students (Abdollahzadeh et al., 2017; Tankó and Csizér, 2018; Ghanbari and Salari, 2022; Almashour and Davies, 2023; Mallahi, 2024), argumentative essays (Stapleton and Wu, 2015; Setyowati et al., 2017; McCarthy et al., 2022; Geng et al., 2024), second language writing (Kibler and Hardigree, 2017; Rubiaee et al., 2019; Cheong et al., 2021; Cvikić and Trtanj, 2022), bilingual advantage (Hsin and Snow, 2017; van Weijen et al., 2019), language minority (Goldman et al., 2016; Ozfidan and Mitchell, 2020; Thompson, 2021), first language (van Weijen et al., 2019; Cheong et al., 2021; Cvikić and Trtanj, 2022), second language (Hu and Li, 2015; Abdollahzadeh et al., 2017; Hatipoğlu and Algı, 2018; Papangkorn and Phoocharoensil, 2021), source use behavior (Kibler and Hardigree, 2017; Cheong et al., 2021; Shi et al., 2022), dialogic teaching (Weyand et al., 2018; Musa, 2019; Najjemba and Cronjé, 2020; De Castro Daza et al., 2022), language development (Hsin and Snow, 2017; Rubiaee et al., 2019; Cheong et al., 2021; Uzun, 2024).
Research in this field has examined how a student's first language and cultural background can influence the structure of arguments in English academic writing (Hatipoğlu and Algı, 2018; Amini Farsani et al., 2025). Important topics include dialogic teaching (Su, 2023) and mediative practices aimed at developing argumentative skills in bilingual and multilingual students (Mercer and Littleton, 2007). Particular attention is given to the specifics of source use behaviors and academic intertextuality in second language (L2) writing (Hatipoğlu and Algı, 2018).
Early topics (2016–2019) in this cluster include EFL students, argumentative essays, language development, and dialogic teaching; in 2020–2022 the focus moves to source use behavior, bilingual advantage, and language minority; and in 2023–2025 to perspective taking and sociocognitive development. On the density map, this zone (top right) is less concentrated than Clusters 1 and 2, reflecting greater thematic dispersion and a relatively smaller volume of publications, although within the EFL subgroup the density is higher than in other L2 areas.
Cluster 4. Cognitive-psychological and Educational Determinants of Argumentation[5]
The fourth cluster covers studies in which argumentation in academic writing is analysed through the prism of general cognitive, psychological, and educational factors. Key concepts include critical thinking disposition (Zainuddin and Shammem, 2016; Pei et al., 2017; Widyastuti, 2018; Nguyen and Nguyen, 2020; Hu and Saleem, 2023), cognitive processes (Klein et al., 2016; Mateos et al., 2018; Lamb, Hand and Yoon, 2019; Allen et al., 2019; Wang and Said, 2024), self-regulation (Latifi et al., 2020; Hisgen et al., 2020), problem solving (Ferretti and Fan, 2016; Nesbit et al., 2019; Davies, Barnett and van Gelder, 2021), academic performance (Rubiaee et al., 2019; García et al., 2020; Luna et al., 2020; Lu and Swatevacharkul, 2021), motivation (Lam, Hew and Chiu, 2018; Najjemba and Cronjé, 2020; Hutasuhut et al., 2023), learning disabilities (MacArthur et al., 2019; Hisgen et al., 2020), reading comprehension (Goldman et al., 2016; Kibler and Hardigree, 2017; van Weijen et al., 2019), syntax (MacArthur et al., 2019; Cvikić and Trtanj, 2022), grammar (Hu and Li, 2015; Hatipoğlu and Algı, 2018), and writing skills (Ferretti and Lewis, 2019; Newell et al., 2015; Setyowati et al., 2017; Mallahi, 2024; Ilyas and Arifin, 2025).
Many studies go beyond analysing argumentation as a separate skill, considering it in relation to overall academic literacy, language competence, and cognitive development (Ferretti and Fan, 2016; Goldman et al., 2016; Newell et al., 2015). Within this broader perspective, argumentative writing is conceptualised as a cognitively demanding, literacy-embedded activity that draws on disciplinary reading practices, source integration, and reasoning across texts (Mateos et al., 2018; Cheong et al., 2021; van Weijen et al., 2019). Accordingly, this line of research includes both the study of predictors of successful argumentative writing, such as motivation, metacognitive strategies, problem-solving ability, and cognitive flexibility (Nesbit et al., 2019; Allen et al., 2019; Arroyo González et al., 2020; Murtadho, 2021; Almashour and Davies, 2023; Hutasuhut et al., 2023), and the analysis of limitations associated with learners who experience academic or cognitive difficulties, including low-achieving students, basic writers, and those with constrained linguistic or strategic resources (Hisgen et al., 2020; MacArthur et al., 2019; Ozfidan and Mitchell, 2020; Mallahi, 2024; Wang and Said, 2024).
The early stage (2016–2019) is represented by students, writing skills, reading comprehension, grammar, syntax, critical thinking disposition, and cognitive processes. From 2020 to 2022, the focus shifts to self-regulation, problem solving, motivation, pre-service teachers and classrooms. From 2023 to 2024, predictive validity and cognitive load research emerge. On the density map, the lower left sector shows a moderate concentration around students and writing skills, reflecting significant, yet more evenly distributed, interest, without the pronounced peaks that are characteristic of Clusters 1 and 2.
Thematic Clusters by Researchers
The VOSviewer program analysed and grouped all the studies’ metadata into 22 clusters. We consolidated these detailed clusters into four enlarged thematic clusters: (1) Theoretical and methodological foundations of argumentation in academic writing; (2) Evaluation and analysis of argumentative writing; (3) Sociolinguistic and interlinguistic aspects; (4) Cognitive-psychological and educational determinants of argumentation (Appendix 2).
While the VOSviewer mapping provided a valuable macro-level visualization of keyword co-occurrences and identified four broad metadata-based thematic clusters, it could not fully capture how argument is conceptualized, modelled, taught, and assessed across the reviewed literature. A subsequent full-text qualitative synthesis of the included studies (n = 95) therefore enabled a more interpretive and content-sensitive structuring of the field. This analysis produced five analytical clusters: (1) Nature of argumentation; (2) Argumentative writing and its aspects; (3) Models of arguments; (4) Critical thinking as a tool for argumentative writing; and (5) Digitalisation and AI tools for argumentative writing. Together, these clusters provided a more differentiated account of the field than metadata mapping alone.
The qualitative mapping further showed that the studies differ not only in thematic focus, but also in their definitions of argumentative writing, their preferred models of argument, and the criteria used to operationalize argument quality, including in digitally mediated contexts. Viewed together, these dimensions position argument theory, assessment practices, and technology-enhanced writing support as interconnected rather than isolated strands of research.
Cluster 1. Nature of Argumentation
Argumentation refers to a scholarly form of communication through which writers address controversial questions and manage differing viewpoints in order to move toward reasoned resolution (Van Eemeren and Grootendorst, 2004; Musa, 2019). It involves taking a position and working to make that position more acceptable to an academic audience (Leitão, 2001) by advancing clearly stated claims supported through evidence and explicit reasoning (Jin et al., 2020). As a textual genre, argumentation is integral to the construction of scientific articles and functions as a key means by which writers participate in, and are initiated into, disciplinary knowledge and its communicative practices (Arroyo González et al., 2020). In doing so, academic argumentation operates as a rhetorical process that relies on structural logic, i.e. presenting opinions, supporting them with facts, and incorporating counteractive moves, while using appropriate tone, voice, and language to convince readers within scholarly debate (Ferretti and Fan, 2016; Rapanta and Macagno, 2019; Yasuda, 2023).
Argumentation is also understood as an attempt at rational persuasion. In this context, writers provide reasons to strengthen or reject an opinion, position, or idea, and demonstrate the truth or falsity of a statement using rhetorical strategies (Mulyati and Hadianto, 2023; Murtadho, 2021). Developing written argumentation, however, involves additional complexity and control of written conventions; it is not simply a matter of transferring oral arguments into writing (Hsin and Snow, 2017). Because argumentation is fundamental to critical thinking and effective participation in society and academia, it is considered a vital lifelong skill and educational objective, helping learners to formulate reasoned opinions and manage increasingly complex knowledge (Tankó and Csizér, 2018; Musa, 2019; Ekalia et al., 2025). High-quality argumentation depends on supporting reasons and evidence: a claim cannot stand without justification, otherwise it remains mere opinion, and weaknesses in reasons and evidence may also reflect limited background knowledge of the topic (Widyastuti, 2018).
Cluster 2. Argumentative Writing and Its Aspects
Across the reviewed sources, “argumentative writing” is defined through different emphases rather than a single shared formulation. Some authors treat it primarily as persuasion and rhetorical influence on the reader (Yang, 2016; Mulyati and Hadianto, 2023; Murtadho, 2021), while others frame it as a cognitively demanding problem-solving activity that involves inquiry, evidence evaluation, and self-regulation (MacArthur and Graham, 2016; Hu and Saleem, 2023; Lee et al., 2022; Pei et al., 2017). Another group places greater weight on dialogic engagement, including refutation, counterargument, and audience responsiveness (Najjemba and Cronjé, 2020; De Castro Daza et al., 2022; Robillos and Art-in, 2023). Table 2 compiles the definitions extracted from the mapped studies and illustrates this variation in construct emphasis.
Table 2. Definitions of argumentative writing
Таблица 2. Определения аргументативного письма

The definitions share a common core but differ in what is treated as decisive, whether stance and persuasion, evidential justification, or interactional positioning among voices (Allen et al., 2019; De Castro Daza et al., 2022; Su et al., 2023). These differences feed directly into later operational choices, including coding schemes, rubrics, and automated indicators used to represent “argument quality” (Stapleton and Wu, 2015; Lu and Swatevacharkul, 2021; Shi et al., 2022).
On the basis of the definitions summarised in Table 2, we derived a set of recurring characteristics that appear across the corpus. The resulting profile covers rhetorical organisation, evidential and inferential work used to justify claims, and dialogic engagement with alternative positions and audience expectations (Allen et al., 2019; De Castro Daza et al., 2022; Najjemba and Cronjé, 2020). Table 3 presents these characteristics together with the sources that motivate each feature.
Table 3. Characteristics of argumentative writing
Таблица 3. Характеристики аргументативного письма


The profile in Table 3 indicates that argumentative writing is treated as a composite construct. Studies that use similar labels may therefore rely on different combinations of structural, reasoning-based, and dialogic criteria, which reduces comparability unless operational definitions are made explicit (Stapleton and Wu, 2015; Shi et al., 2022; Rapanta and Macagno, 2019).
Culture and Language Related Aspects in Argumentative Writing
Argumentative writing places heavy cognitive and rhetorical demands on writers even in the first language. To produce an effective argumentative paragraph, writers must choose among metadiscourse resources and use them in a controlled way to manage stance, guide the reader, and sustain a coherent line of reasoning (Hatipoğlu and Algı, 2018). In a second or foreign language, these demands intensify. L2 writers often have a narrower repertoire of metadiscourse markers and may be less confident in expressing nuance or calibrating engagement. Under such constraints, novice writers may fall back on familiar routines and transfer preferred patterns from L1 into L2 texts (Hatipoğlu and Algı, 2018).
Argumentative performance is shaped not only by individual linguistic resources but also by cultural and cross-linguistic genre conventions. Genres develop within educational and rhetorical traditions, and their typical expectations can differ across languages and cultures (Graham and Rijlaarsdam, 2016). At the same time, the degree of divergence appears to depend partly on how closely related the languages are (van Weijen et al., 2019). In practical terms, this means that conventions for stating claims, signalling certainty, and incorporating opposing views may not align across contexts, and multilingual writers must learn to adjust their argumentative choices to unfamiliar norms.
These issues become especially visible in source-based argumentative writing. Here, success depends on more than general writing ability and L2 proficiency. Students must acquire practices in at least two domains: argumentation and reasoning, and the use of sources as evidence, including selection, integration, evaluation, and attribution in ways that meet academic conventions (Ferretti and Fan, 2016; van Weijen et al., 2019; Arroyo González et al., 2020). The task therefore requires students to construct justified claims while also building an evidential chain that is transparent to the reader and defensible within scholarly norms.
Cross-language research further suggests that some aspects of writers’ argumentative behaviour may be relatively stable across languages. Van Weijen et al. (2019), for example, report that writers may reproduce similar surface features across tasks in different languages, including the use of titles or reference lists, which points to transfer across writing contexts. At the same time, composing in an L2 can prompt simplification in argumentative design. In L2 settings, writers may rely more heavily on a canonical structure that foregrounds source-based support for a single position while leaving limited space for counterarguments and rebuttals (van Weijen et al., 2019). Such patterns reduce dialogic depth and can weaken the appearance of engagement with alternative perspectives, a dimension often valued in academic argumentation.
For students whose first language is not English, an additional difficulty lies in expressing positions accurately and appropriately within English academic discourse (Amini Farsani et al., 2025). Together with cultural and rhetorical differences, this helps explain why reasoning and argument quality are frequently reported as problematic in L2 contexts. Cheong et al. (2021) emphasise that sustained, reasonable argumentation is demanding even in L1 and becomes more difficult in an L2, where limitations in linguistic resources can constrain reasoning performance. Empirical work likewise indicates that L2 academic writing often shows weaker reasoning than expected for higher education study (Cheong et al., 2021), and difficulties with argumentation are reported not only for L2 writers but also for many L1 writers, with L2 learners often facing persistent challenges across their programmes (Tankó and Csizér, 2018). These findings point to the need for instruction that goes beyond accuracy. It must address metadiscourse control, shifting genre expectations, cross-cultural rhetorical norms, and the reasoning and source-use practices on which academic argumentation depends.
Teaching Argumentative Writing
Argumentative competence is widely treated as a core condition of academic development because it enables students to articulate a position, evaluate claims, detect weak reasoning, and negotiate competing viewpoints through counterarguments and rebuttals (Cheong et al., 2021; Sánchez-Peña and Chapetón, 2018; Kaewneam, 2025). Written argumentation presupposes an intellectual exchange between writer and reader in which alternative perspectives are interpreted, weighed, and addressed through reasoned responses (Sánchez-Peña and Chapetón, 2018). Beyond academic performance, sustained work in this genre is associated with problem solving and independent critical thinking, capacities that support informed participation in social life (Hisgen et al., 2020; Akib et al., 2024). Argumentative writing can also strengthen students’ academic agency by requiring them to locate credible sources and to summarise and synthesise evidence in support of an explicitly justified stance (Thompson, 2021). For second language learners, this form of literacy often functions as foundational, helping them align emerging academic skills with the argumentative practices of academic discourse communities (Amini Farsani et al., 2025).
At the same time, argumentative writing places heavy cognitive demands on learners because it requires coordination of language resources, reasoning, and topic knowledge. These demands become particularly salient in EFL contexts (Liu et al., 2023; Ekalia et al., 2025). When students lack awareness of what constitutes effective writing, the quality of their argumentative texts tends to deteriorate, even when they possess relevant ideas (Rubiaee et al., 2019). Novice writers often face additional constraints. They may enter higher education with limited guidance in argumentation, their prior educational experiences may not match university expectations, and many hesitate to take a clear position or rely on a one-sided understanding of argument that leaves little room for alternative perspectives (Kleemola et al., 2022). In empirical accounts, written argumentation is often described as slow to develop, with student drafts frequently exhibiting weak quality and difficulty in producing strong arguments (Allen et al., 2019). Recurrent shortcomings include weak evaluative standards, limited supporting proof, and underdeveloped engagement with counterarguments, which together point to the need for targeted instructional support (Ferretti and Lewis, 2019). More generally, younger or less experienced writers tend to find argumentative writing more challenging than narrative writing, consistent with the view that argumentation is an intellectually demanding problem (van Weijen et al., 2019; Ferretti and Fan, 2016).
Instructional expectations in academic argumentation centre on position taking supported by credible evidence. In many higher education writing tasks, students are required to adopt a stance and justify it through research drawn from reliable sources (Setyowati et al., 2017). Evidence use involves more than citation; it requires selecting information that can plausibly validate an argument and making its relevance explicit (Kibler and Hardigree, 2017). This is the point at which warranting becomes critical. Students must not only present data but also establish why that data is significant and sufficient for the claim at hand (Weyand et al., 2018). Source-based argumentative writing adds further complexity because writers must build a coherent argument from reading or listening materials while simultaneously analysing and evaluating content knowledge (Cheong et al., 2021). In addition, instruction must incorporate academic integrity and credibility. Learners need guidance in strengthening claims with authoritative sources and in treating citation practices as ethical scholarly behaviour rather than mechanical formatting (Uzun, 2024).
Learners’ difficulties are often visible in a recurrent set of problem areas. Students may struggle to formulate a defensible thesis, organise ideas coherently, provide sufficient and appropriate evidence, and address counterarguments with adequate reasoning. Difficulties also arise at the level of writing process, including planning, revision, time management, and control of mechanics (Almashour and Davies, 2023). In EFL research, cognitive challenges are frequently grouped into limited command of argument structure, weak critical thinking, and insufficient feedback conditions (Wang and Said, 2024). Against this background, strategy instruction is discussed as a practical mechanism for supporting task understanding, idea organisation, and development of coherent argumentation (Almashour and Davies, 2023). Table 4 summarises strategy categories that recur in the literature and illustrates how they translate into concrete classroom practices.
Table 4. Cognitive strategies for argumentative writing tasks[6]
Таблица 4. Когнитивные стратегии заданий по аргументативному письму

The strategy categories in Table 4 clarify why effective teaching cannot be reduced to a generic template. Planning and monitoring support structural control and coherence, elaboration strengthens evidential development, and critical evaluation targets the quality of reasoning. In EFL settings, where cognitive load is typically higher, these supports become central rather than auxiliary (Liu et al., 2023; Wang and Said, 2024).
Teaching practices therefore need to be explicit, strategic, and responsive to students’ academic histories. Higher education students arrive with uneven exposure to argumentation, yet they are expected to construct arguments that take multiple positions into account (Robillos and Art-in, 2023). Research links effective instruction with explicit modelling and scaffolded practice, especially when the aim is to develop reasoning depth rather than formulaic responses. This also requires adaptive teaching expertise, namely flexible knowledge of strategies that can be adjusted to learners with different educational trajectories (Athanases et al., 2015; Newell et al., 2017). Counterargumentation is frequently underdeveloped, and several authors therefore recommend teaching counterargument explicitly so that students do not default to arguing against a source text without analytical engagement (Cheong et al., 2021). Instruction also benefits from direct work with warrants, backings, and coherent sequencing of argument components (Lesterhuis et al., 2018). In addition, familiarising students with relational claim types and practising support appropriate to those claim types may broaden rhetorical repertoires and lead to greater variation in essay structures (Tankó and Csizér, 2018).
Different orientations toward evidence use and warranting have been described as shaping instructional priorities. Weyand et al. (2018) distinguish three argumentative epistemologies. A structural approach treats argument construction as application of rules that yield a predetermined structure (De La Paz et al., 2012, as cited in Weyand et al., 2018). An ideational approach frames evidence-based writing as a means of developing, organising, and presenting ideas (Langer and Applebee, 1987, as cited in Weyand et al., 2018). A social process approach emphasises purpose-driven argumentation situated in social contexts (Ivanič, 2004, as cited in Weyand et al., 2018). This distinction helps explain why teaching cannot be limited to naming “parts” of an argument. It must also address what arguments are for, how evidence functions in context, and how writers anticipate readers’ perspectives.
Tools and assessment practices can support teaching, although they may also narrow what is attended to if they are used as substitutes for instruction rather than as supports. Argument mapping is often presented as a useful aid in source-based tasks, particularly when students must work with diverse materials under time constraints. Mapping can help learners organise information, clarify perspective space, and make inferential relations explicit (Robillos and Art-in, 2023; Robillos and Thongpai, 2022; Davies et al., 2021). Automated writing evaluation is discussed in a similar enabling register. Content focused AWE feedback may support baseline knowledge about evidence use, including what constitutes a complete argument and how sources and reasons can be incorporated. At the same time, more advanced judgements about evidence relevance and persuasiveness appear to depend on collaborative work and guided interpretation rather than automation alone (Shi et al., 2022). Several authors therefore caution against evaluating written argumentation solely by counting argument elements, because such procedures can mask deficits in reasoning quality and responsiveness to alternative positions (Cheong et al., 2021). The complexity of argumentative reasoning and the difficulty of assessing it transparently may partly explain why argumentation remains underrepresented as a teaching tool in many academic writing handbooks and courses (Rapanta and Macagno, 2019).
Teaching argumentative writing therefore involves more than improving language accuracy or enforcing a standard form. It requires sustained support for responsible position taking, credible evidence use, explicit warranting, and engagement with alternative viewpoints through modelling, scaffolded practice, and feedback that targets both structure and reasoning depth (Cheong et al., 2021; Lesterhuis et al., 2018; Wang and Said, 2024). When these conditions are met, argument writing instruction can contribute not only to stronger academic performance but also to broader capacities for critical participation, including source evaluation and evidence-based judgement (Hisgen et al., 2020; Thompson, 2021).
Assessment of Argumentative Writing Skills
Argumentative writing is among the most frequently used genres in classroom writing practice and formal writing assessment. In authentic academic settings, producing an argumentative text typically requires writers to evaluate, select, and use content knowledge from sources, rather than relying on personal opinion alone (Cheong et al., 2021). Persuasiveness therefore depends on whether claims are supported by an adequate evidential basis and on whether the text makes the reasoning that links evidence to claims explicit. For this reason, the assessment of argumentative writing cannot be reduced to general language accuracy. It must address evidence use and justificatory reasoning as core dimensions of performance (Paris et al., 2025).
In accountability settings, assessments of written argument are commonly organised into three broad formats: selected response tests, constructed response tasks, and projects or portfolios (Deane et al., 2018). These formats differ in the kind of evidence they provide about argumentative competence and in how confidently a score can be interpreted as reflecting students’ ability to produce extended argumentation. Table 5 summarises the three formats and the principal interpretive constraints associated with each.
Table 5. Assessments of written argument[7]

As Table 5 indicates, the three formats vary in what they can capture. Selected response tasks can efficiently sample knowledge about argument features, whereas constructed responses and portfolio-based work provide stronger evidence of how students coordinate claims, evidence, counterpositions, and rebuttals in extended writing. This distinction becomes consequential in high stakes assessment contexts, where argumentative performance is used to support decisions about students’ readiness for academic work and progress within programmes (Zhang and Zhang, 2021).
Because argument quality is difficult to evaluate in a transparent and defensible way, particularly in foreign language settings, assessment practice depends on clearly specified criteria. In writing assessment, well designed rubrics are therefore essential, because they articulate the criteria by which performance is judged and help separate language control from the quality of argumentation (Ozfidan and Mitchell, 2022; Paris et al., 2025). At the same time, rubric design makes visible a central difficulty in the field. Studies suggest that students may produce texts containing numerous formal argument elements while still demonstrating weak reasoning. Abdollahzadeh et al. (2017), for example, report that the frequency of argument components can be relatively high even when reasoning quality remains limited. This implies that assessment should not rely on counting elements alone, but should adopt integrative criteria that attend to relevance, justification, and engagement with alternative viewpoints as part of audience-oriented persuasion (Abdollahzadeh et al., 2017; Paris et al., 2025).
One response to this challenge is to combine structural and reasoning focused criteria within analytic scoring. Stapleton and Wu (2015) propose the Analytic Scoring Rubric for Argumentative Writing, which integrates evaluation of structural components with judgement of reasoning quality. Rubrics of this kind are designed to distinguish the superficial presence of argumentative elements from deeper competence in justification, relevance, and responsiveness to counterpositions, thereby strengthening the interpretability of assessment outcomes (Stapleton and Wu, 2015; Abdollahzadeh et al., 2017).
Alongside human scoring, technology is increasingly discussed as a means of expanding feedback capacity. Automated systems can, in principle, support learners by providing argumentation focused feedback that is less constrained by instructor availability and can be integrated into iterative revision cycles (Wambsganss et al., 2020). Such feedback becomes particularly relevant when instructional aims prioritise improvement in evidence use, justificatory reasoning, and engagement with counterarguments, which rubrics and classroom assessments are intended to capture.
Assessment is also closely tied to instruction. Writing programmes and writing across the curriculum initiatives are encouraged to provide sustained opportunities for students to participate in argumentative practices in which they evaluate claims, justify positions, attend to contradictory views, and develop counterarguments and rebuttals (Abdollahzadeh et al., 2017). In this sense, assessment functions not only as measurement, but also as a mechanism that shapes what kinds of evidence-based reasoning students learn to produce in academic contexts.
Cluster 3. Models of Arguments
Argumentation has long been recognised as a central component of academic writing; however, it is not a unitary or universally defined construct. Across disciplines and research traditions, scholars have proposed diverse models of argument that emphasise different structural, rhetorical, cognitive, and social dimensions of argumentative practice (e.g., Toulmin, 1958; Rogers, 1961; Sánchez-Peña and Chapetón, 2018; Mateos et al., 2018; Musa, 2019). These models range from formal and logical representations of claims, evidence, and warrants to dialogic, genre-based, and socio-cognitive approaches that foreground audience engagement, stance construction, and contextual meaning-making (Stapleton and Wu, 2015; Zhang and Zhang, 2021; Geng et al., 2024). As a result, the field is characterised by conceptual plurality rather than theoretical consensus.
Within the mapped corpus, Toulmin based approaches are most frequently operationalised explicitly, particularly in studies focused on assessment, analytic coding of argument structure, and technology mediated writing support (Stapleton and Wu, 2015; Zainuddin and Shammem, 2016; Lu and Swatevacharkul, 2021). Classical rhetorical organisation is less often presented as an explicit model, yet it informs conventional essay structuring and is reflected in rubric based instructional contexts (Zhang, 2016; Ozfidan and Mitchell, 2022; Zhang and Zhang, 2021). Rogerian oriented principles appear mainly in studies adopting dialogic and sociocultural perspectives, especially where collaborative writing, perspective taking, and audience responsiveness are foregrounded (Musa, 2019; Hutasuhut et al., 2023; Najjemba and Cronjé, 2020; De Castro Daza et al., 2022).
Models of argumentation are fundamental to argumentative writing because they offer structured representations of how reasoning is organised, communicated, and evaluated in academic discourse. By making explicit the components of argument and the relationships among claims, evidence, and justification, argument models enable systematic analysis of argumentative texts and provide analytical lenses for examining argument quality across contexts (Stapleton and Wu, 2015; Deane et al., 2019; McCarthy et al., 2022). From a research perspective, these models function as frameworks for describing and comparing argumentative performance across genres, disciplines, proficiency levels, and linguistic backgrounds (Abdollahzadeh et al., 2017; Cheong et al., 2021; van Weijen et al., 2019). In pedagogical contexts, argumentation models operate as cognitive and instructional scaffolds that support writers in externalising complex reasoning processes, anticipating alternative viewpoints, and aligning their texts with disciplinary expectations (Ferretti and Lewis, 2019; Luna et al., 2020; Lu and Swatevacharkul, 2021). Empirical studies further indicate that model-based instruction can enhance learners’ reasoning quality, evidence integration, and critical thinking in argumentative writing (Zainuddin and Shammem, 2016; Mateos et al., 2018; Kaewneam, 2025).
Importantly, different models foreground complementary dimensions of argumentative competence, including logical coherence, rhetorical persuasion, dialogic engagement, and epistemic stance (Rapanta and Macagno, 2019; Yasuda, 2023; Papangkorn and Phoocharoensil, 2021). Argument models also mediate between theory and practice by informing assessment frameworks, rubric development, and feedback practices, thereby promoting transparency and consistency in evaluating academic arguments (Ozfidan and Mitchell, 2022; Paris et al., 2025; Lesterhuis et al., 2018). In digital and AI-mediated learning environments, these models increasingly guide the design of adaptive feedback systems, argument mapping tools, and chatbot-based scaffolding for argumentative writing (Benetos and Bétrancourt, 2020; Guo et al., 2022; Su et al., 2023; Wambsganss et al., 2020). Consequently, the study and integration of argumentation models are not optional but essential for advancing both theoretical understanding and pedagogical practice in argumentative writing within higher education.
The Classical (Aristotelian) Argument Model
The Classical Argument Model originates in classical rhetoric, most notably in the works of Aristotle and Cicero, and represents one of the earliest systematic frameworks for persuasive communication (Lawson-Tancred, 1992). It conceptualises argumentation as a structured, goal-oriented process aimed at persuading an audience through logical coherence, clarity of reasoning, and effective organisation. The classical argument structure is a persuasive writing format that consists of five main parts: introduction, narration, confirmation, refutation, and conclusion (Table 6). Owing to its emphasis on orderly reasoning and persuasive effectiveness, this model has been widely applied in philosophy, rhetoric, law, and academic and persuasive writing.
Table 6. Classical argument model[8]

In the contemporary research corpus on academic argumentative writing, however, the Classical model is not explicitly foregrounded as an analytical or pedagogical framework. Instead, it functions primarily as a theoretical background tradition that informs later models of argumentation (Zhang, 2016; Rapanta and Macagno, 2019). None of the reviewed empirical studies adopts Aristotelian categories such as ethos, pathos, and logos as formal coding schemes or assessment criteria. Rather, classical principles surface indirectly in discussions of claims, evidence, persuasion, audience awareness, and logical coherence.
Several studies implicitly draw on Aristotelian reasoning without naming the model directly. For example, Abdollahzadeh et al. (2017) analyse argumentative writing behaviour through claims, reasons, and evidence. Although their analytical framework is largely Toulmin-inspired, the underlying concern with rational persuasion and coherent reasoning reflects classical rhetorical logic. Similarly, Stapleton and Wu (2015) distinguish between surface textual features and the substantive quality of arguments, foregrounding logos-driven persuasion and argumentative substance (central concerns of Aristotelian rhetoric). Foundational overviews of argumentative writing by Ferretti and Fan (2016) and Ferretti and Lewis (2019) situate modern instructional and cognitive approaches within a broader rhetorical tradition, explicitly acknowledging classical rhetoric as the historical foundation upon which contemporary models are built.
Overall, the Classical Argument Model occupies a historical and conceptual role within the corpus rather than serving as an active methodological choice. Its influence is mediated through later frameworks (most notably Toulmin’s model) which offer greater analytical explicitness and pedagogical operability. This pattern reflects a broader disciplinary shift in academic writing research from rhetorically general models toward analytically precise and empirically tractable approaches, while still retaining classical rhetoric as an enduring intellectual foundation for understanding persuasion, reasoning, and audience engagement in academic argumentation.
Critical Literacy Argument
Sánchez-Peña and Chapetón (2018) do not propose a formal, text-analytic classification of arguments (e.g. Toulminian or Aristotelian). Instead, they advance a critical-dialogic re-conceptualisation of argumentation, where arguments are classified implicitly through their social function, ethical orientation, and dialogic purpose, rather than through structural components (Table 7). Their classification of arguments emerges qualitatively from learners’ writing practices within a critical literacy framework. Thus, argument types are characterised by how participants construct and support claims as well as the discursive means they use to make their arguments persuasive and contextually grounded.
Table 7. Critical-dialogic argument classification[9]

Classification Based on the Level of Argument Integration
To operationalise argumentative writing quality, some studies use classifications that focus on how writers integrate perspectives and information drawn from multiple sources. A widely cited example is an integration-level classification. Mateos et al. (2018) describe argument quality as a continuum of epistemic integration, capturing how writers incorporate, relate, and transform potentially conflicting information from sources. In this model, higher levels correspond to more coordinated perspective management and more advanced knowledge construction, providing an analytic basis for distinguishing superficial juxtaposition from integrative synthesis (Table 8).
Table 8. Argument classification based on its level of integration[10]

Although integration scales make the construct measurable, they also demonstrate why “argument quality” is not necessarily comparable across studies. For example, a text may score highly on integration but remain weak on evidential justification, or vice versa, depending on the criteria adopted.
Functional Classification of Argument
In addition to component-based and integration-based approaches, the corpus includes functional classifications that treat argumentation as a distribution of discourse roles across extended text units. McCarthy et al. (2022), for example, operationalise argument structure by identifying paragraph functions such as support, counter-argument, and related moves (Table 9). This perspective is designed for corpus-scale analysis and foregrounds how argumentative work is staged across a text rather than how individual claims are diagrammed.
This classification differs from Classical and Toulmin-based frameworks in that it does not decompose arguments into components such as claims, warrants, and rebuttals. Instead, it operationalises argumentation at the level of extended discourse units by assigning functional roles to paragraphs or segments. This approach supports systematic comparison of linguistic and rhetorical features across supportive and counter-argumentative discourse within student texts (Table 9).
Table 9. Functional argument classification[11]

Operational Structure of Arguments
Some studies define argument structure through pedagogical enactment rather than formal typology. Musa (2019), for instance, specifies an operational structure that connects argumentative elements to staged instructional activities and expected textual outcomes (Table 10). In this formulation, formal components of argument are coupled with dialogic competencies, including explicit attention to counterargument and rebuttal.
Table 10. Operational argument structure[12]

Table 10 operationalises argument structure in a granular way by specifying the expected configuration of components within a student text. In the illustrated scheme, an argument based on Musa (2019) comprises fourteen elements, including one standpoint, four reasons, six subordinating reasons, one counterargument, one rebuttal, and one rhetorically functional repetition. This operationalisation is hybrid in nature, because formal textual components such as the claim, reasons, and rebuttal are coupled with dialogic work on alternative perspectives, treating argumentation as both structured composition and interactive reasoning practice. Framed this way, the scheme makes explicit how instructional sequencing and assessment criteria are anchored in particular assumptions about what counts as a strong argument.
The Rogerian argument
The Rogerian argument (or Rogerian rhetoric) is a form of argumentative reasoning that aims to establish a middle ground between parties with opposing viewpoints or goals (Table 11). It was developed by the psychotherapist Carl Rogers (1961) and adapted to written argumentation by the rhetoric scholars Young, Becker, and Pike (1970). While Aristotelian styles of argument are often seen as eristic (concerned primarily with winning), the Rogerian argument can be viewed as more dialectic in nature: a conversation between two or more parties with the goal of arriving at some mutually satisfying solution. It is particularly useful for highly controversial topics, conflict resolution, negotiation, counselling, and fields such as psychology or social work where building consensus is key.
Table 11. The Rogerian model of argumentation [13]

The Rogerian model of argumentation, which emphasises empathy, audience alignment, and consensus-building, is not clearly represented in the corpus. No study explicitly frames its analysis or pedagogy around a Rogerian structure. Nevertheless, Rogerian principles appear indirectly in work that foregrounds dialogic interaction, perspective-taking, and collaborative meaning-making. For example, studies on dialogic teaching and collaborative argumentation (e.g., De Castro Daza et al., 2022; Najjemba and Cronjé, 2020; Musa, 2019) reflect Rogerian concerns with mutual understanding and negotiated reasoning. Similarly, research on audience awareness and stance-taking (e.g., Hutasuhut et al., 2023; Papangkorn and Phoocharoensil, 2021) resonates with Rogerian ideas without naming the model explicitly. In these cases, Rogerian argumentation is conceptually diffused rather than structurally instantiated.
The Toulmin Model of Argumentation
The Toulmin model of argumentation (Toulmin, 1958; 2003; Toulmin et al., 1984), which posits that the function of an argument is the justification of a claim through components such as claim, grounds, warrant, backing, qualifier, and rebuttal, has been pivotal in highlighting the need to address alternative viewpoints in claims (Table 12). Extremely common in social sciences, natural sciences, policy arguments, and fields requiring nuanced analysis, it offers a structured, adaptable approach for identifying, tackling, and teaching effective argumentative writing strategies, particularly for students struggling with argumentative essays.
Table 12. Toulmin classification of argument[14]

This framework aligns theory with practice, potentially inspiring comprehensive questionnaires to evaluate students' grasp of these strategies and fostering proficiency in crafting compelling, well-structured essays (Mallahi, 2024). Its pedagogical usability is evident in how it renders argumentative writing in identifiable forms, including thesis identification, supportive evidence, warrant assessment connecting thesis, evidence, and context, and anticipation of counterarguments in diverse societies (Newell et al., 2015; Weyand et al., 2018). Despite its strengths, researchers have noted challenges in applying the model reliably, as student arguments often fit multiple elements, complicating classification (Stapleton and Wu, 2015). Nonetheless, Toulmin remains the most explicitly present and systematically employed framework in empirical research on argumentative writing, especially in EFL and academic contexts, where it bridges theory and practice through analytical transparency and measurable operationalisation of argument quality.
Multiple studies directly adopt or adapt its components for instructional, analytical, and assessment purposes. For instance, Zainuddin and Shammem (2016) developed six scoring criteria: claims, data, and warrants (scored 0–6) and propositions, opposition, and response (scored 0–3), with the heavier weighting reflecting the centrality of the core triad. Lam, Hew, and Chiu (2018) used a 1–4 analytic rubric across five criteria: stance with evidence, anti-thesis support, evaluation of viewpoints and inferences, rebuttals, and thesis-anti-thesis-supported conclusions. Similarly, Deane et al. (2019), Lu and Swatevacharkul (2021), and others ground assessments in Toulmin by classifying texts into components for component-specific scoring.
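As a minimal sketch of how such component-weighted analytic scoring can be operationalised: the criterion names and point caps below follow the 0–6/0–3 weighting described above, but the scoring function itself is hypothetical and not the published instrument.

```python
# Illustrative sketch (not Zainuddin and Shammem's actual instrument):
# the core triad (claims, data, warrants) is capped at 6 points per
# criterion, the secondary elements at 3, so the triad carries more weight.
MAX_POINTS = {
    "claim": 6, "data": 6, "warrant": 6,                # core triad
    "proposition": 3, "opposition": 3, "response": 3,   # secondary elements
}

def toulmin_score(ratings: dict) -> dict:
    """Clamp each rater-assigned value to its cap and total the result."""
    clamped = {c: min(max(ratings.get(c, 0), 0), cap)
               for c, cap in MAX_POINTS.items()}
    return {"per_criterion": clamped,
            "total": sum(clamped.values()),
            "max_total": sum(MAX_POINTS.values())}

result = toulmin_score({"claim": 5, "data": 4, "warrant": 6,
                        "proposition": 2, "opposition": 3, "response": 1})
print(result["total"], "/", result["max_total"])  # 21 / 27
```

The clamping step reflects the design principle at stake in such rubrics: no amount of secondary material (opposition, response) can outweigh weaknesses in the claim–data–warrant triad.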
Corpus and empirical analyses further underscore implicit Toulminian influences: McCarthy et al. (2022) distinguished support and counterarguments via claim–data–rebuttal logic; Stapleton and Wu (2015) linked substance (claims, evidence quality, reasoning) to the claim–data–warrant triad; Ferretti and Fan (2016) and Ferretti and Lewis (2019) framed pedagogy around claims, evidence, reasoning, and counterarguments; Kibler and Hardigree (2017) examined evidence-claim connections as warrants; Weyand, Goff, and Newell (2018) explicitly addressed the social construction of warrants; and Geng, Chen, and Li (2024) mapped argument chains (claim development, justification, alternatives) to extended Toulmin structures. This widespread adoption highlights the model's versatility in evaluating evidence use, counterargumentation, and overall argument quality.
In EFL and academic writing contexts, Toulmin’s model functions as a vital bridge between theoretical constructs and practical application. It helps students move beyond formulaic writing by encouraging them to articulate claims clearly, ground them in relevant evidence, justify connections through warrants, anticipate counterarguments, and respond respectfully to alternative viewpoints: skills essential for participation in heterogeneous societies and complex disciplinary discourse (Newell et al., 2015). It retains this bridging role whether used explicitly in scoring rubrics, implicitly in corpus analyses, or pedagogically to scaffold argumentative forms (Weyand et al., 2018).
Cluster 4. Critical Thinking as a Tool for Argumentative Writing
Critical thinking (CT) is widely regarded as a central objective of higher education and a foundation for developing key learning outcomes (Sharadgah et al., 2019; Ilyas and Arifin, 2025). It requires educators to carefully design instructional materials and create classroom activities that stimulate deep reflection and strengthen reasoning skills (Wang and Newell, 2025), enabling students to reflect on and understand their own viewpoints and to draw on their observations and experience to make sense of information and situations (Raj et al., 2022). Empirical work suggests a reciprocal relationship between CT and writing: the more proficient students are in critical thinking, the better their writing skills tend to be, and strong writing practice can in turn foster more advanced CT (Sharadgah et al., 2019).
Argumentative writing is a particularly rich site for the exercise and development of critical thinking because it requires higher order skills such as analysis, reasoning, and evaluation (Ilyas and Arifin, 2025; Ekalia et al., 2025). These cognitive processes (identifying issues, weighing evidence, judging the relevance and credibility of information, and justifying positions) are typically understood as core CT skills (Lu and Swatevacharkul, 2021; Nguyen and Nguyen, 2020). For ESL/EFL learners in higher education, such skills are crucial for producing sound written texts, especially argumentative essays that must present claims, support them with appropriate evidence, and address alternative viewpoints (Ghaemi and Mirsaeed, 2017; Newell et al., 2015; Weyand et al., 2018). Argumentation enhances critical thinking by prompting writers to compare competing ideas, evaluate reasons and evidence, and reach rational conclusions on the basis of their evaluations (Ferretti and Fan, 2016; Klein et al., 2016).
From a cognitive standpoint, the basis of effective argumentative writing lies in critical and logical thinking: writers must connect premises and evidence to draw coherent, justified conclusions (Mulyati and Hadianto, 2023). Evidence in this sense includes facts, testimony, information, or authoritative sources that can be used to support or challenge a claim, and critical thinking enables students to judge how such evidence should be selected, interpreted, and integrated (Mulyati and Hadianto, 2023). Thus, CT is not only visible in the final written product but also drives the reasoning processes that underlie the construction of arguments and their components, such as claims, warrants, and supporting data (Newell et al., 2015; Weyand et al., 2018).
In argumentative writing, students generate an argument in which questions, claims, and evidence connect cohesively to explain the outcomes of an inquiry (Lamb et al., 2019). Through argumentation, writers assemble facts in a way that shows whether a given opinion or claim holds (Mulyati and Hadianto, 2023). To compose a successful argumentative essay, writers need to consider each of the following elements carefully: Claim, My side, Your side, Evidence, Backing, Rebuttal, and Conclusion (Lu and Swatevacharkul, 2021). Argumentative writing becomes more “critical” when the following reasoning skills are evident: (1) the construction of valid arguments; (2) the construction and integration of valid counter-arguments; and (3) the accurate and relevant use of evidence (Rapanta and Macagno, 2019).
Pedagogically, fostering students’ critical thinking through argumentative writing requires more than teaching text structures; it calls for instructional approaches that explicitly cultivate reflective and evaluative reasoning. Critical pedagogy is frequently identified as a key approach in this regard, as it encourages learners to interrogate assumptions, question sources, and engage with controversial or complex issues rather than merely reproduce given ideas (Qian, 2015). Teachers who implement such curricular goals need to design writing tasks that invite students to think, to scrutinise the originality and relevance of their ideas, and to develop arguments that can withstand critical examination (Ghanbari and Salari, 2022). When students are guided to construct, justify, and revise their positions in this way, argumentative writing becomes both an expression of critical thinking and a powerful means of cultivating it.
Cluster 5. Digitalisation and AI Tools for Argumentative Writing
Digitalisation has expanded how argumentation is taught, learned, and enacted in writing, leading to a growing ecosystem of tools that support learners’ reasoning and text production. Within this landscape, digital tools for argumentation can be described as falling into two broad, non-exclusive types: (1) tools that primarily support idea generation and conceptual relations, and (2) tools that primarily support the linguistic and formal quality of written products (Benetos and Bétrancourt, 2020).
The first type supports idea generation and relationship building between concepts, often through diagrammatic representations. These tools typically rely on predefined argumentation models to help learners map arguments for and against a position. In collaborative learning contexts, such modelling can guide reasoning by drawing attention to cognitive conflict and promoting its resolution. Their overarching aim is to support valid argument construction for “sense-making” and theory-building, enabling learners to test concepts, theories, and hypotheses through structured argumentation (Benetos and Bétrancourt, 2020). Argument visualization tools exemplify this approach: they function as cognitive tools that scaffold learners’ analysis, evaluation, and construction of arguments by guiding the development of argument schemas (Nesbit et al., 2019).
The second type of tool focuses on improving the textual product. Here, digital support targets mechanical aspects of writing quality, including micro-level language concerns such as grammar, language use, and sentence structure, as well as macro-level concerns such as overall organization (Benetos and Bétrancourt, 2020). Despite these advances, a key limitation remains: there is a lack of tools that directly support writing processes at both the macro-level (e.g., strategy development, self-monitoring, argumentative structure, rhetorical moves) and the micro-level (e.g., reasoning, argument validity, and structural coherence) (Benetos and Bétrancourt, 2020). This gap is central for the design of digital authoring tools intended to support argumentative writing.
Accordingly, the design of such tools draws on two major research areas: first, the conditions that engage learners in resolving cognitive conflict, which can lead to deeper learning and conceptual change; and second, the difficulties novices face when producing argumentative texts as a genre (Benetos and Bétrancourt, 2020). This design orientation also aligns with broader work encouraging researchers to examine how technology and AI supports English argumentative writing both inside and outside the classroom (Maharani and Santosa, 2021; Lee et al., 2022; Khampusaen, 2025). In particular, online collaborative writing environments allow students to jointly complete argumentative texts without constraints of time and place, and may offer more opportunity than classroom discussion for learners to focus on the language of their argumentative writing, an aspect they often neglect during in-class interaction (Jin et al., 2020).
Within this digital turn, conversational agents have emerged as a promising mechanism for supporting argumentation and argumentative writing. ChatGPT, for example, has been positioned as a tool that can facilitate communication skills by engaging students in debate-like interactions where they practice and develop argumentative skills (Farrokhnia et al., 2023).
Crucially, the literature published toward the end of the review period (2023–2025) marks a clear paradigm shift from traditional Automated Writing Evaluation (AWE) to Generative AI and Large Language Models (LLMs). While earlier AWE systems (relying on traditional NLP) primarily functioned as diagnostic tools, identifying missing structural components or highlighting grammatical errors, modern Generative AI (e.g., ChatGPT) acts as a dialogic partner. These LLM-based tools shift the pedagogical focus from post-hoc correction to synchronous co-creation, offering dynamic counterarguments, adjusting reading levels for diverse learners, and modeling Rogerian empathy in real-time. This transition requires a fundamental re-evaluation of assessment rubrics, moving from evaluating the mere presence of argument elements to assessing the student's ability to critically prompt, evaluate, and iteratively refine AI-generated reasoning.
More specifically, chatbots may support argumentative writing in several ways: by providing prompt and substantive feedback throughout the writing process to promote continuous reflection; by asking guiding questions that elicit essential argument components such as claims, evidence, and reasoning; by storing interactions as transcripts that document the writing process and reveal learning needs for teacher support; and by integrating into instant messaging platforms to make writing support convenient and familiar (Guo et al., 2022). Khampusaen (2025) observed marked progression in argument construction, evidence integration, and academic voice resulting from the integration of AI, particularly ChatGPT, into academic writing practices. A concrete example is “Argumate”, a chatbot designed to scaffold argumentative writing by prompting key argument elements, suggesting ideas that support a student’s position, and proposing opposing views that learners must rebut (Guo et al., 2022). In practice, learners’ work with chatbots may also be mediated by additional digital supports (online information sources, translation tools, and typing assistants) which shape how collaboration unfolds and how texts are produced (Guo et al., 2023).
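The guiding-question mechanism described above can be sketched as follows. This is a hypothetical illustration in the spirit of systems such as Argumate: the questions, the element-detection cues, and the function names are invented for this sketch and are not taken from the published system.

```python
# Hypothetical guiding-question scaffolder: ask for the first argument
# element that appears to be missing from the student's draft.
# Questions and cue words are illustrative, not from Guo et al. (2022).
GUIDING_QUESTIONS = {
    "claim":     "What is your position on the issue?",
    "evidence":  "What facts or sources support that position?",
    "reasoning": "Why does that evidence justify your claim?",
    "rebuttal":  "How would you answer someone who disagrees?",
}

# Naive keyword detectors for whether a draft already contains each element.
CUES = {
    "claim":     ("i argue", "should"),
    "evidence":  ("according to", "studies"),
    "reasoning": ("because", "therefore"),
    "rebuttal":  ("however", "critics"),
}

def next_prompt(draft: str) -> str:
    """Return the guiding question for the first missing argument element."""
    low = draft.lower()
    for element, cues in CUES.items():
        if not any(cue in low for cue in cues):
            return GUIDING_QUESTIONS[element]
    return "Your draft covers all core elements; consider revising for depth."

draft = "I argue that exams should be open-book because it reduces anxiety."
print(next_prompt(draft))  # the draft lacks evidence cues, so the
                           # evidence question is asked next
```

A production chatbot would replace the keyword detectors with a trained classifier, but the scaffolding logic (elicit the missing component rather than correct the text) is the same pedagogical move the literature describes.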
In parallel, natural language processing approaches provide further possibilities for digitalising argumentation in text. Argument mining seeks to automatically extract arguments and the relationships between them, typically through (1) Argument Component Identification, which locates and labels components such as claims and premises, and (2) Relationship Identification, which determines whether components support, attack, or have no relation to each other (Lawrence and Reed, 2020). Since understanding argumentative structure helps identify not only an author’s stance but also the reasons offered to justify it, argument mining can be seen as a potential foundation for tools that diagnose and scaffold reasoning quality in learners’ writing (Yang et al., 2023). Together with argument visualization tools that externalise argument structure for learners to inspect and revise (Nesbit et al., 2019), these approaches indicate a broader shift: from tools that only capture ideas or polish language, toward systems that can scaffold argumentative writing as a structured, reflective, and cognitively demanding process (Benetos and Bétrancourt, 2020).
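The two argument-mining stages named above can be illustrated with a deliberately simplified sketch. Real systems rely on trained NLP models; the cue-word rules and the nearest-claim linking heuristic here are invented purely to make the pipeline concrete.

```python
# Toy sketch of the two argument-mining stages (Lawrence and Reed, 2020):
# (1) Argument Component Identification, (2) Relationship Identification.
# Cue words and linking heuristic are illustrative only.
CLAIM_CUES = ("should", "must", "i believe", "therefore")
PREMISE_CUES = ("because", "since", "for example", "studies show")

def identify_components(sentences):
    """Stage 1: label each sentence as claim, premise, or non-argumentative."""
    labels = []
    for s in sentences:
        low = s.lower()
        if any(cue in low for cue in PREMISE_CUES):
            labels.append("premise")
        elif any(cue in low for cue in CLAIM_CUES):
            labels.append("claim")
        else:
            labels.append("none")
    return labels

def identify_relations(labels):
    """Stage 2: link each premise to the nearest preceding claim as support."""
    relations, last_claim = [], None
    for i, label in enumerate(labels):
        if label == "claim":
            last_claim = i
        elif label == "premise" and last_claim is not None:
            relations.append((i, "supports", last_claim))
    return relations

text = ["Universities should teach argumentation explicitly.",
        "Because explicit instruction improves reasoning quality.",
        "The weather was pleasant."]
labels = identify_components(text)
print(labels)                      # ['claim', 'premise', 'none']
print(identify_relations(labels))  # [(1, 'supports', 0)]
```

Even this toy version shows why argument mining is attractive for feedback tools: once components and relations are made explicit, a system can flag claims that have no supporting premises attached.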
Discussion
The present systematic scoping review set out to reconstruct how research on argument in academic writing in higher education has been conceptualised and operationalised over the past decade. The analysis focused on definitions of argumentative writing, the models used to represent argument structure and quality, the functions attributed to argumentation, and the dominant thematic directions shaping the field. Such a focus was necessary in view of the well-documented heterogeneity of conceptual and methodological approaches, which complicates the accumulation and comparison of findings across studies.
The mapping of 95 sources indicates that the field is simultaneously consolidating and diversifying. On the one hand, a relatively stable set of modelling traditions continues to structure research, most notably Toulmin-based approaches, alongside Classical and dialogic perspectives. On the other hand, the thematic scope of studies is expanding, particularly through the growing attention to digitally mediated writing environments and AI-supported instruction. The field is organised around four broad clusters: theoretical and methodological work, assessment and analytic approaches, sociolinguistic and cross-linguistic dimensions, and cognitive and educational determinants of argumentative writing. At the same time, publication patterns reveal a strong geographical concentration, with a substantial proportion of studies conducted in Asian educational contexts, which may have implications for the generalisability of reported findings.
The evidence further confirms that research on argumentative writing is inherently interdisciplinary, drawing on applied linguistics, writing studies, educational psychology, and technology-enhanced learning. This interdisciplinarity helps explain the coexistence of heterogeneous conceptual vocabularies and analytic traditions. Empirical studies dominate the corpus and most frequently take the form of corpus-based analyses, classroom interventions, and assessment-oriented research centred on rubric design and rating practices. However, the distribution of designs also suggests that many studies prioritise local explanatory adequacy over cross-study comparability, particularly when argument quality is operationalised through context-specific coding schemes or locally developed assessment instruments (Stapleton and Wu, 2015; Paris et al., 2025).
Several gaps emerge with sufficient consistency to be interpreted as structural features of the field. First, operational definitions of argument quality remain uneven, even within studies that draw on the same modelling traditions. In particular, the relationship between structural completeness and the quality of reasoning is not always made explicit, which complicates the interpretation of findings. This issue has been noted in studies showing that the presence of argument components does not necessarily correspond to the strength of justification or the relevance of evidence (Abdollahzadeh et al., 2017). Second, although digitalisation is a clear trend, the integration of AI-based tools into established models of argumentation remains limited. While chatbots and automated systems are increasingly used to support the generation of claims, evidence, and counterarguments, their implications for construct validity and assessment interpretation are not yet fully articulated (Benetos and Bétrancourt, 2020; Wambsganss et al., 2020; Khampusaen, 2025). Third, sociolinguistic and multilingual perspectives are often examined independently of assessment practices, despite the fact that argument quality is shaped by both linguistic resources and evaluative criteria. Finally, although cognitive and motivational variables are widely discussed, their connection to specific instructional designs and long-term development remains insufficiently explored, with relatively few studies adopting longitudinal perspectives (Kibler and Hardigree, 2017).
Comparison of the Most Popular Argument Models
The mapping indicates that model choice is not merely terminological. It structures what studies treat as argument quality, which instructional targets they prioritise, and which analytic procedures they adopt. To make this relationship explicit, this section compares the three most visible traditions in the corpus, namely Classical rhetorical organisation, Toulmin’s componential model, and Rogerian dialogic reasoning, with attention to their core orientation, organisational logic, and typical pedagogical and assessment uses in higher education EFL contexts (Mallahi, 2024; Wang and Said, 2024).
Across the mapped studies, these traditions align with different dimensions of argumentative competence and, accordingly, with different outcome measures. Classical organisation most directly supports rhetorical sequencing and genre coherence, providing novice and EFL writers with a stable macro structure for controlling genre conventions, textual coherence, and audience orientation (Ferretti and Lewis, 2019; Zhang and Zhang, 2021; Ozfidan and Mitchell, 2022). Yet its formulaic character can constrain rhetorical flexibility and may not readily accommodate culturally diverse discourse practices or rhetorical expectations, particularly for linguistically and culturally diverse learners working within academic conventions shaped by Western traditions (Allen et al., 2019; Sharadgah et al., 2019; Kleemola et al., 2022; Ghanbari and Salari, 2022).
Toulmin based approaches foreground structural adequacy and justificatory relations by treating arguments as interrelated functional components, thereby strengthening learners’ ability to justify claims, integrate evidence, and articulate warrants (Stapleton and Wu, 2015; Zainuddin and Shammem, 2016; Ghaemi and Mirsaeed, 2017; Hu and Saleem, 2023). This analytic clarity also explains their prominence in assessment and coding-oriented studies. At the same time, classroom application can be complicated by classification ambiguity, since student statements may plausibly fit multiple elements, which reduces reliability in both instruction and assessment (Stapleton and Wu, 2015). In addition, a strong emphasis on logos-centred analysis may underrepresent ethos and pathos as resources for persuasive effectiveness, and some accounts caution that pedagogical use can drift toward a formal checklist that obscures argumentation as situated rhetorical practice (Ferretti and Lewis, 2019; Weyand et al., 2018).
Rogerian reasoning highlights dialogic engagement, reframing argumentation as a collaborative practice oriented toward perspective taking, intercultural communicative competence, and negotiated understanding (Musa, 2019; De Castro Daza et al., 2022; Robillos and Art-in, 2023). In EFL settings, this orientation is relevant to instruction that aims to develop respectful handling of alternative viewpoints and audience responsiveness. However, the approach is linguistically and temporally demanding, requiring sophisticated language resources for empathetic restatement and nuanced positioning, which can pose difficulties for lower-proficiency learners (Papangkorn and Phoocharoensil, 2021; Almashour and Davies, 2023). Its consensus orientation may also be less effective in high-stakes contexts that reward direct claims and decisive argumentative positioning (Yasuda, 2023).
These models function as complementary rather than interchangeable pedagogical resources. The Classical tradition offers structural scaffolding for genre control, the Toulmin model supports analytic rigour in evidence-based justification, and the Rogerian tradition foregrounds dialogic sensitivity and audience attunement. The mapped literature therefore points toward principled integration, in which instructional design and assessment criteria make explicit which dimension of argumentative competence is being targeted and how it is weighted, rather than assuming that different models yield commensurable outcomes by default (Mallahi, 2024; Wang and Said, 2024). Within this repertoire, Toulmin’s model remains the most systematically employed framework in the empirical study and teaching of argumentation, offering clarity, structure, and depth to the art of reasoned persuasion.
A Three-dimensional Taxonomy of Argument Quality Operationalisations
A central outcome of the present review concerns the internal differentiation of how argument quality is conceptualised across the literature. The observed fragmentation reflects not only variation in terminology but also deeper differences in what is treated as the core of argumentation. To make these differences analytically explicit, the review identifies three recurrent dimensions along which argument quality is operationalised across studies.
The first dimension concerns structural adequacy, understood as the presence and organisation of key components of an argument. This dimension is most closely associated with Toulmin based approaches and with assessment practices that foreground claims, evidence, and their relations (Stapleton and Wu, 2015; Rapanta and Macagno, 2019; Paris et al., 2025). The second dimension relates to dialogic engagement, which captures how writers address alternative viewpoints, integrate counterarguments, and position their claims in relation to competing perspectives. This dimension is particularly salient in dialogic and interaction-oriented approaches to argumentation and is increasingly foregrounded in studies that emphasise the role of perspective taking and audience awareness (Su et al., 2023; De Castro Daza et al., 2022; Najjemba and Cronjé, 2020). The third dimension concerns rhetorical organisation, including the structuring of arguments in accordance with genre expectations, the management of stance, and the overall persuasive coherence of the text. This aspect is especially visible in research on academic discourse and multilingual writing, where rhetorical conventions and audience expectations play a central role (Yasuda, 2023; Cheong et al., 2021; Papangkorn and Phoocharoensil, 2021). In Appendix 4 we mapped the review sources to these dimensions.
Considered together, these dimensions help explain why studies that appear to address the same construct often produce non-comparable results. Research that focuses on structural features tends to rely on component-based coding or automated analysis, whereas studies that foreground dialogic or rhetorical aspects more often employ rubric-based evaluation requiring interpretive judgement. Unless the relationship between these dimensions is made explicit, differences in findings may reflect differences in operationalisation rather than substantive disagreement.
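The consequence described above, that the same text can receive different ratings depending on how dimensions are weighted, can be made concrete with a minimal sketch of a multidimensional rubric. The dimension scores, scale, and weights below are invented for illustration; no assessment instrument from the reviewed corpus is reproduced here.

```python
# Hypothetical multidimensional rubric: the three dimensions
# (structural adequacy, dialogic engagement, rhetorical organisation)
# are combined under explicitly declared weights. All numbers are
# invented for illustration.
def weighted_score(scores, weights):
    """Combine per-dimension scores (0-4 scale) into one rating using
    weights that must sum to 1, so the weighting is transparent."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[d] * w for d, w in weights.items())

# One essay: structurally complete, weak on counterargument handling.
essay = {"structural": 3.0, "dialogic": 1.0, "rhetorical": 2.5}

# The same essay rated under two operationalisations.
structure_focused = {"structural": 0.6, "dialogic": 0.2, "rhetorical": 0.2}
dialogue_focused = {"structural": 0.2, "dialogic": 0.6, "rhetorical": 0.2}

print(round(weighted_score(essay, structure_focused), 2))  # → 2.5
print(round(weighted_score(essay, dialogue_focused), 2))   # → 1.7
```

The divergent totals for an identical essay illustrate the review's point: unless studies declare such weightings, apparent disagreement in findings may simply reflect different operationalisations of argument quality.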
This issue becomes particularly salient in digitally mediated environments, where support and assessment are increasingly delivered through automated feedback systems, argument-mining procedures, and AI-assisted writing tools (Benetos and Bétrancourt, 2020; Wambsganss et al., 2020; Khampusaen, 2025). Under these conditions, the interpretability of outcomes depends on whether digital support is aligned with the specific construct dimension being targeted, rather than treated as a conceptually neutral intervention.
In contrast to review-level work that narrows the scope to particular subdomains, the present study was designed as a field reconstruction across definitional, methodological, and technology-mediated lines of research. For example, a recent systematic review charts L2 argumentative writing as a domain of second language writing research (Amini Farsani et al., 2025), while a scoping review focused on EFL contexts synthesises cognitive challenges in argumentative writing (Wang and Said, 2024). These contributions consolidate specific dimensions of the literature, yet they do not aim to map how model choice, operational definitions, assessment practices, and digital mediation co-evolve across the broader research space. The present review addresses this gap by integrating evidence across these dimensions and showing how their interaction structures current research trajectories.
The contribution of this study lies in making visible the underlying structure of the field and in clarifying where conceptual alignment is possible and where it remains limited. At the same time, the findings should be interpreted in light of the review’s scope, which is restricted to English language publications indexed in Scopus over the period from 2015 to 2025. As a result, the map does not fully capture research published in other languages or outside major indexing systems.
Technology-enhanced Applications
Technology-enhanced implementations are uneven across the three traditions, and this unevenness mirrors the extent to which each model offers an operational structure that can be translated into digital tasks and feedback. In studies that draw on Classical organisation, technology is more often discussed at the level of general digital authoring environments and online training rather than framed around Classical arrangement as a model-specific design principle (Benetos and Bétrancourt, 2020; Luna et al., 2020; Maharani and Santosa, 2021). This pattern is consistent with the model’s role as a macro-template for text organisation, which can be supported digitally without being named explicitly as Classical.
By contrast, Toulmin-based pedagogy and analysis are more frequently connected to technology-enhanced instruction, including blended and gamified interventions that aim to strengthen engagement and reasoning (Lam et al., 2018; Jin et al., 2020). This linkage is also theoretically congruent with Toulmin’s componential architecture, which lends itself to structured digital tasks and feedback systems that foreground claims, evidence, warrants, and rebuttals as explicit and assessable units (Nesbit et al., 2019; Wambsganss et al., 2020; Davies et al., 2021; Liu et al., 2023). Related designs reinforce this logic-focused pathway through online scripting, peer feedback, and automated feedback that targets evidence use and revision (Latifi et al., 2020; Shi et al., 2022; Mulyati and Hadianto, 2023). The Rogerian tradition shows the most direct linkage to current AI-supported pedagogy, particularly chatbot and ChatGPT-assisted scaffolding that prompts perspective-taking, supports neutral summarisation of opposing views, and facilitates revision toward more dialogic positioning (Guo et al., 2022; Guo et al., 2023; Su et al., 2023). This aligns with broader discussions of generative AI in writing pedagogy and its emerging instructional affordances (Farrokhnia et al., 2023; Khampusaen, 2025), as well as with work emphasising dialogic processes in collaborative digital argumentative writing (Najjemba and Cronjé, 2020; De Castro Daza et al., 2022). Taken together, the mapped literature suggests three technology pathways, namely comparatively under-specified support compatible with Classical organisation, interactive and feedback-rich tool use aligned with Toulmin’s analytic reasoning, and emerging AI-enabled scaffolds that operationalise Rogerian perspective-taking and consensus-oriented negotiation.
Strengths and Pedagogical Contributions
In the mapped literature, the three most frequently invoked argumentation traditions are associated with partially different instructional targets, which helps explain variation in both pedagogical design and outcome measures. Work aligned with the Classical tradition is typically used to provide novice writers with a stable organisational template for academic essays, supporting early development of coherent text structure and audience-oriented framing in EFL contexts (Zhang and Zhang, 2021; Ferretti and Lewis, 2019). By contrast, Toulmin-based instruction is most often framed as a means of strengthening analytic reasoning within argumentative writing, particularly through explicit attention to claim support and to the inferential link between evidence and conclusions (Zainuddin and Shammem, 2016; Ghaemi and Mirsaeed, 2017; Hu and Saleem, 2023). Several studies in the corpus further indicate that Toulmin-oriented pedagogy is compatible with technology-supported designs, including blended and gamified interventions, where the model provides a componential structure for scaffolding and feedback (Lam et al., 2018; Jin et al., 2020). A third strand is visible in work drawing on Rogerian and dialogic orientations, where instruction emphasises perspective taking, respectful acknowledgement of alternative viewpoints, and collaborative negotiation of positions, which is particularly salient in diverse classroom contexts (Musa, 2019; De Castro Daza et al., 2022). Within this strand, collaborative and dialogic writing practices are used to support engagement and self-regulation, and recent studies also report AI-assisted scaffolding that prompts writers to articulate opposing positions and revise toward more balanced formulations (Robillos and Thongpai, 2022; Robillos and Art-in, 2023; Guo et al., 2022; Guo et al., 2023; Su et al., 2023).
Conclusion
This review set out to provide a systematic reconstruction of research on argument in academic writing in higher education. The analysis has shown that the field is characterised by both continuity and diversification. Established modelling traditions remain influential, yet new lines of work, particularly those related to digital and AI-mediated writing, are reshaping both research practices and instructional contexts. At the same time, the synthesis makes clear that shared terminology often conceals differences in how argument quality is defined and measured.
Ultimately, the fragmentation in defining and evaluating argument quality has direct practical implications for curriculum design. For practitioners, this review suggests that assigning “argumentative essays” without explicitly aligning instruction to a specific model (e.g., Toulmin for logical rigour, Rogerian for consensus-building) risks leaving learners confused about expectations. Educators are encouraged to move away from generic rubrics and instead adopt multidimensional assessment frameworks that transparently weight structural adequacy alongside dialogic engagement. Furthermore, as AI tools increasingly automate the mechanical structuring of texts, instructional time must pivot toward higher-order cognitive tasks: evaluating the epistemic validity of evidence, mapping complex argumentative networks, and fostering the ethical integration of alternative viewpoints.
One of the main outcomes of the study is the identification of three recurrent dimensions of argument quality, relating to structural adequacy, dialogic engagement, and rhetorical organisation. Making these dimensions explicit helps to clarify why findings across studies are not always directly comparable and provides a basis for more transparent interpretation of results. It also highlights the need to specify more precisely what is being assessed when argumentation is evaluated in academic writing.
The conclusions of the review should be interpreted in light of several limitations. The analysis is restricted to English language publications indexed in Scopus between 2015 and 2025, which limits the coverage of research produced in other linguistic and regional contexts. In addition, the study follows the logic of a scoping review and does not include a formal quality appraisal of individual sources. Finally, the diversity of research designs and operational definitions within the corpus constrains the extent to which direct comparisons between studies can be drawn.
These limitations point to several directions for further research. There is a need for greater clarity in the definition and measurement of argument quality, including explicit consideration of how different dimensions of argumentation are combined in analytic and assessment frameworks. Further work is also required to align digital and AI-mediated tools with theoretical models of argumentation, particularly with respect to how such tools influence both writing processes and evaluation practices. In addition, more attention should be given to the integration of sociolinguistic and multilingual perspectives with instructional design and assessment. Finally, longitudinal research is needed to trace the development of argumentative competence over time, especially in contexts where digital feedback and automated support are embedded in academic writing instruction.
STATEMENT. The authors used an AI-based language-editing tool to assist with proofreading and improving the clarity and linguistic quality of the manuscript. The tool was used solely for language refinement and did not contribute to the conceptualisation, analysis, or interpretation of the data. All content, arguments, and conclusions remain the responsibility of the authors.
[1] This section is conceptualised and organised according to the PRISMA-ScR guidelines (Tricco, A. C., Lillie, E., Zarin, W., et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann Intern Med. 2018; 169: 467–473. [Epub 4 September 2018]. DOI: 10.7326/M18-0850).
[2] Combines Clusters 6, 7, 8, and 9 as grouped by the VOSviewer program.
[3] Combines Clusters 13, 14, 15, and 16 as grouped by the VOSviewer program.
[4] Combines Clusters 4, 10, and 17 as grouped by the VOSviewer program.
[5] Combines Clusters 1, 2, 3, and 5 as grouped by the VOSviewer program.
[6] Based on the information from Almashour, M. and Davies, A. (2023). Exploring learning strategies used by Jordanian University EFL learners in argumentative writing tasks: The role of gender and proficiency. Frontiers in Education, 8. https://doi.org/10.3389/feduc.2023.1237719
[7] Based on the information from Deane, P., Song, Y., van Rijn, P., O’Reilly, T., Fowles, M., Bennett, R., Sabatini, J. and Zhang, M. (2019). The case for scenario-based assessment of written argumentation. Reading and Writing, 32(6), 1575–1606. https://doi.org/10.1007/s11145-018-9852-7
[8] Based on Lawson-Tancred, H. (1992). The Art of Rhetoric. Penguin Publishing Group.
[9] Based on Sánchez-Peña, D. and Chapetón, C. M. (2018). Fostering the Development of Written Argumentative Competence in ELT from a Critical Literacy Approach. Revista Colombiana de Educación, 75, Article 75. https://doi.org/10.17227/rce.num75-8107
[10] Based on Mateos, M., Martín, E., Cuevas, I., Villalón, R., Martínez, I. and González-Lamas, J. (2018). Improving Written Argumentative Synthesis by Teaching the Integration of Conflicting Information from Multiple Sources. Cognition and Instruction, 36(2), 119–138. https://doi.org/10.1080/07370008.2018.1425300
[11] Based on McCarthy, P. M., Kaddoura, N. W., Al-Harthy, A., Thomas, A. M., Duran, N. D. and Ahmed, K. (2022). Corpus Analysis on Students’ Counter and Support Arguments in Argumentative Writing. Pegem Egitim ve Ogretim Dergisi, 12(1), 256–271. Scopus. https://doi.org/10.47750/pegegog.12.01.27
[12] Based on Musa, H. I. (2019). Dialogic vs. Formalist teaching in developing argumentative writing discourse and reducing speaking apprehension among efl majors. Journal of Language Teaching and Research, 10(5), 895–905. Scopus. https://doi.org/10.17507/jltr.1005.01
[13] Based on Rogers, C. R. (1961). On Becoming a Person: A Therapist's View of Psychotherapy. Houghton Mifflin and Young, R. E., Becker, A. L. and Pike, K. L. (1970). Rhetoric: Discovery and Change. Harcourt, Brace & World. 383 p. https://books.google.ru/books?id=tKktkEUZxWQC
[14] Based on Toulmin, S. (1958). The Uses of Argument. Cambridge University Press and Toulmin, S. E., Rieke, R. D. and Janik, A. (1984). An introduction to reasoning (2nd ed.). New York: Collier Macmillan.
References
Abdollahzadeh, E., Amini Farsani, M. and Beikmohammadi, M. (2017). Argumentative writing behavior of graduate EFL learners, Argumentation, 31 (4), Article 4. DOI: 10.1007/s10503-016-9415-5 (In English)
Akib, E., Muhsin, M. A., Hamid, S. M. and Irawan, N. (2024). Critical thinking in authentic assessment: An exploration into argumentative writing in non-English departments in higher education, International Journal of Language Education, 8 (4), 854–869. DOI: 10.26858/ijole.v8i4.70051 (In English)
Allen, L. K., Likens, A. D. and McNamara, D. S. (2019). Writing flexibility in argumentative essays: A multidimensional analysis, Reading and Writing, 32 (6), 1607–1634. DOI: 10.1007/s11145-018-9921-y (In English)
Amini Farsani, M., Stapleton, P. and Jamali, H. R. (2025). Charting L2 argumentative writing: A systematic review, Journal of Second Language Writing, 68, Article 101208. DOI: 10.1016/j.jslw.2025.101208 (In English)
Almashour, M. and Davies, A. (2023). Exploring learning strategies used by Jordanian university EFL learners in argumentative writing tasks: The role of gender and proficiency, Frontiers in Education, 8. DOI: 10.3389/feduc.2023.1237719 (In English)
Al-Otaibi, G. M. and Hussain, A. A. (2024). The use of interactional metadiscourse markers by Saudi EFL male and female college students: The case of a gender-sensitive topic, Humanities and Social Sciences Communications, 11(1). DOI: 10.1057/s41599-024-03506-3 (In English)
Arroyo González, R., de la Hoz-Ruiz, J. and Montejo Gámez, J. (2020). The 2030 challenge in the quality of higher education: Metacognitive, motivational and structural factors predictive of written argumentation, Sustainability, 12(19), 8266. DOI: 10.3390/su12198266 (In English)
Athanases, S. Z., Bennett, L. H. and Wahleithner, J. M. (2015). Adaptive teaching for English language arts: Following the pathway of classroom data in preservice teacher inquiry, Journal of Literacy Research, 47 (1), 83–114. DOI: 10.1177/1086296X15590915 (In English)
Benetos, K. and Bétrancourt, M. (2020). Digital authoring support for argumentative writing: What does it change?, Journal of Writing Research, 12 (1), 263–290. DOI: 10.17239/jowr-2020.12.01.09 (In English)
Braun, V. and Clarke, V. (2006). Using thematic analysis in psychology, Qualitative Research in Psychology, 3 (2), 77–101. DOI: 10.1191/1478088706qp063oa (In English)
Cheong, C.-M., Zhu, X. and Xu, W. (2021). Source-based argumentation as a form of sustainable academic skill: A comparison of L1 and L2 writing, Sustainability, 13 (22). DOI: 10.3390/su132212869 (In English)
Cvikić, L. and Trtanj, I. (2022). Expressing causality in Croatian L1 and L2 argumentative writing, Rasprave Instituta za Hrvatski Jezik i Jezikoslovlje, 48 (1), 233–244. DOI: 10.31724/rihjj.48.1.10 (In English)
Davies, W. M., Barnett, A. and van Gelder, T. (2021). Using computer-aided argument mapping to teach reasoning, in Blair, J. A. (ed.) Studies in critical thinking. 2nd edn. Windsor: Windsor Studies in Argumentation, 115–152. (In English)
De Castro Daza, D., Argote, Z. L. P. and Roncancio, N. R. (2022). The dialogic nature of regulation in collaborative digital argumentative writing practices, Dialogic Pedagogy, 10, DT1–DT21. DOI: 10.5195/dpj.2022.468 (In English)
Deane, P., Song, Y., van Rijn, P., O’Reilly, T., Fowles, M., Bennett, R., Sabatini, J. and Zhang, M. (2019). The case for scenario-based assessment of written argumentation, Reading and Writing, 32 (6), 1575–1606. DOI: 10.1007/s11145-018-9852-7 (In English)
De La Paz, S., Ferretti, R., Wissinger, D., Yee, L. and MacArthur, C. (2012). Adolescents’ disciplinary use of evidence, argumentative strategies, and organizational structure in writing about historical controversies, Written Communication, 29, 412–454. (In English)
Ekalia, Y., Jemadi, F., Susanto, I. and Artikel, I. (2025). Critical thinking skills and argumentative writing ability: Is there any correlation?, DIAJAR: Jurnal Pendidikan dan Pembelajaran, 4 (3), 471–482. DOI: 10.54259/diajar.v4i3.5108 (In English)
Farrokhnia, M., Banihashem, S. K., Noroozi, O. and Wals, A. (2023). A SWOT analysis of ChatGPT: Implications for educational practice and research, Innovations in Education and Teaching International, 61 (3), 460–474. DOI: 10.1080/14703297.2023.2195846 (In English)
Ferretti, R. P. and Fan, Y. (2016) Argumentative writing, in MacArthur, C. A., Graham, S. and Fitzgerald, J. (eds.) Handbook of writing research. 2nd edn. New York: Guilford Press, 288–300. (In English)
Ferretti, R. P. and Lewis, W. E. (2019). Best practices in teaching argumentative writing, in Graham, S., MacArthur, C. A. and Fitzgerald, J. (eds.) Best practices in writing instruction. 3rd edn. New York: Guilford Press, 135–161. (In English)
García, L., Calle, M., De Castro, A., Soto, J. D., Torres, L., Candelo-Becerra, J. E. and Schettini, N. (2020). A short intervention study of argumentative writing in engineering: Less is more, European Journal of Engineering Education, 45 (2), 273–291. DOI: 10.1080/03043797.2019.1636211 (In English)
Geng, Y., Chen, G. and Li, M. (2024). Analyzing engagement strategies in argument chains: A comparison of high- and low-scoring EFL undergraduate essays, Journal of English for Academic Purposes, 71, 101428. DOI: 10.1016/j.jeap.2024.101428 (In English)
Ghaemi, F. and Mirsaeed, S. (2017). The impact of inquiry-based learning on EFL students’ critical thinking skills, EFL Journal, 2 (2), 89–102. DOI: 10.21462/eflj.v2i2.38 (In English)
Ghanbari, N. and Salari, M. (2022). Problematizing argumentative writing in an Iranian EFL undergraduate context, Frontiers in Psychology, 13, 862400. DOI: 10.3389/fpsyg.2022.862400 (In English)
Goldman, S. R., Britt, M. A., Brown, W., Cribb, G., George, M., Greenleaf, C., Lee, C. D., Shanahan, C. and Project READI (2016). Disciplinary literacies and learning to read for understanding, Educational Psychologist, 51 (2), 219–246. DOI: 10.1080/00461520.2016.1168741 (In English)
Guo, K., Li, Y., Li, Y. and Chu, S. K. W. (2023). Understanding EFL students’ chatbot-assisted argumentative writing: An activity theory perspective, Education and Information Technologies, 29 (1). DOI: 10.1007/s10639-023-12230-5 (In English)
Guo, K., Wang, J. and Chu, S. K. W. (2022). Using chatbots to scaffold EFL students’ argumentative writing, Assessing Writing, 54, 100666. DOI: 10.1016/j.asw.2022.100666 (In English)
Hatipoğlu, Ç. and Algı, S. (2018) Catch a tiger by the toe: Modal hedges in EFL argumentative paragraphs, Educational Sciences: Theory & Practice, 18, 957–982. DOI: 10.12738/estp.2018.4.0373 (In English)
Hisgen, S., Barwasser, A., Wellmann, T. and Grünke, M. (2020). The effects of a multicomponent strategy instruction on the argumentative writing performance of low-achieving secondary students, Learning Disabilities: A Contemporary Journal, 18 (1), 93–110. (In English)
Hsin, L. and Snow, C. (2017). Social perspective taking: A benefit of bilingualism in academic writing, Reading and Writing, 30 (6), 1193–1214. DOI: 10.1007/s11145-016-9718-9 (In English)
Hu, C. and Li, X. (2015). Epistemic modality in the argumentative essays of Chinese EFL learners, English Language Teaching, 8 (6), 20–31. DOI: 10.5539/elt.v8n6p20 (In English)
Hu, Y. and Saleem, A. (2023). Insight from the association between critical thinking and English argumentative writing, PeerJ, 11, e16435. DOI: 10.7717/peerj.16435 (In English)
Hutasuhut, M. L., Chen, H. and Matruglio, E. (2023). Engaged at first sight: Anticipating your audience as a way to think critically in writing an argument, Indonesian Journal of Applied Linguistics, 12 (3), 680–693. DOI: 10.17509/ijal.v12i3.55170 (In English)
Ilyas, H. and Arifin, S. (2025). Critical thinking in EFL students’ argumentative writing: Manifestations and challenges, Voices of English Language Education Society, 9 (2), 358–371. DOI: 10.29408/veles.v9i2.29656 (In English)
Ivanič, R. (2004). Discourses of writing and learning to write, Language and Education, 18, 220–245. (In English)
Jin, T., Su, Y. and Lei, J. (2020). Exploring blended learning design for argumentative writing, Language Learning & Technology, 24 (2), 23–34 [Online], available at: http://hdl.handle.net/10125/44720 (Accessed 30.09.2025). (In English)
Kaewneam, C. (2025). Thai EFL students’ ability to reason as results of training in written argumentation, LEARN Journal: Language Education and Acquisition Research Network, 18 (2), 158–182. DOI: 10.70730/UMKL8104 (In English)
Khampusaen, D. (2025). The impact of ChatGPT on academic writing skills and knowledge: An investigation of its use in argumentative essays, LEARN Journal: Language Education and Acquisition Research Network, 18 (1), 963–988. DOI: 10.70730/PGCQ9242 (In English)
Kibler, A. K. and Hardigree, C. (2017). Using evidence in L2 argumentative writing: A longitudinal case study across high school and university, Language Learning, 67 (1), 75–109. DOI: 10.1111/lang.12198 (In English)
Kleemola, K., Hyytinen, H. and Toom, A. (2022). The challenge of position-taking in novice higher education students’ argumentative writing, Frontiers in Education, 7, 885987. DOI: 10.3389/feduc.2022.885987 (In English)
Klein, P. D., Arcon, N. and Baker, S. (2016). Writing to learn, in MacArthur, C. A., Graham, S. and Fitzgerald, J. (eds.) Handbook of writing research. 2nd edn. New York: Guilford Press, 243–256. (In English)
Lam, Y., Hew, K. F. and Chiu, K. F. (2018). Improving argumentative writing: Effects of a blended learning approach and gamification, Language Learning & Technology, 22 (1), 97–118. DOI: 10.10125/44583 (In English)
Lamb, R., Hand, B. and Yoon, S. Y. (2019). An exploratory neuroimaging study of argumentative and summary writing, in Prain, V. and Hand, B. (eds.) Theorizing the future of science education research. Cham: Springer, 75–94. DOI: 10.1007/978-3-030-24013-4_5 (In English)
Langer, J. A. and Applebee, A. N. (1987). How writing shapes thinking: A study of teaching and learning. Urbana, IL: National Council of Teachers of English. (In English)
Latifi, S., Noroozi, O. and Talaee, E. (2020). Worked example or scripting? Fostering students’ online argumentative peer feedback, essay writing and learning, Interactive Learning Environments, 31 (2), 655–669. DOI: 10.1080/10494820.2020.1799032 (In English)
Lawrence, J. and Reed, C. (2020). Argument mining: A survey, Computational Linguistics, 45 (4), 765–818. DOI: 10.1162/coli_a_00364 (In English)
Lawson-Tancred, H. (1992). The Art of Rhetoric. Penguin Publishing Group [Online], available at: https://books.google.ru/books?id=t4OwipLjr54C (Accessed 30.09.2025). (In English)
Lee, M., Liang, P. and Yang, Q. (2022). CoAuthor: Designing a human–AI collaborative writing dataset for exploring language model capabilities, in Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. New York: Association for Computing Machinery, Article 388, 1–19. DOI: 10.1145/3491102.3502030 (In English)
Leitão, S. (2001). Analyzing changes in view during argumentation: A quest for method, Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 2 (3). (In English)
Lesterhuis, M., Van Daal, T., Van Gasse, R., Coertjens, L., Donche, V. and De Maeyer, S. (2018). When teachers compare argumentative texts: Decisions informed by multiple complex aspects of text quality, L1 Educational Studies in Language and Literature, 18, 1–22. DOI: 10.17239/L1ESLL-2018.18.01.02 (In English)
Liu, Q., Zhong, Z. and Nesbit, J. C. (2023). Argument mapping as a pre-writing activity: Does it promote writing skills of EFL learners?, Education and Information Technologies, 29 (7). DOI: 10.1007/s10639-023-12098-5 (In English)
Lu, C. and Swatevacharkul, R. (2021). A pedagogical framework for teaching Chinese college English learners’ argumentative writing via infusion of critical thinking, World Journal of English Language, 11 (1), 1–8. DOI: 10.5430/wjel.v11n1p1 (In English)
Luna, M., Villalón, R., Mateos, M. and Martín, E. (2020). Improving university argumentative writing through online training, Journal of Writing Research, 12 (1), 233–262. DOI: 10.17239/JOWR-2020.12.01.08 (In English)
MacArthur, C. A. and Graham, S. (2016). Writing research from a cognitive perspective, in MacArthur, C. A., Graham, S. and Fitzgerald, J. (eds.) Handbook of writing research. 2nd edn, Guilford Press, New York, 24–40. (In English)
MacArthur, C. A., Jennings, A. and Philippakos, Z. A. (2019). Which linguistic features predict quality of argumentative writing for college basic writers?, Reading and Writing, 32 (6), 1553–1574. DOI: 10.1007/s11145-018-9853-6 (In English)
Maharani, A. A. P. and Santosa, M. H. (2021). The implementation of process approach combined with Screencast-O-Matic and BookCreator to improve students’ argumentative writing, LLT Journal: Journal on Language and Language Teaching, 24 (1), 12–22. DOI: 10.24071/llt.v24i1.2516 (In English)
Mallahi, O. (2024). Exploring the status of argumentative essay writing strategies and problems of Iranian EFL learners, Asian-Pacific Journal of Second and Foreign Language Education, 9 (1), 19. DOI: 10.1186/s40862-023-00241-1 (In English)
Mateos, M., Martín, E., Cuevas, I., Villalón, R., Martínez, I. and González-Lamas, J. (2018). Improving written argumentative synthesis by teaching integration of conflicting information, Cognition and Instruction, 36 (2), 119–138. DOI: 10.1080/07370008.2018.1425300 (In English)
McCarthy, P. M., Kaddoura, N. W., Al-Harthy, A., Thomas, A. M., Duran, N. D. and Ahmed, K. (2022). Corpus analysis on students’ counter and support arguments in argumentative writing, Pegem Eğitim ve Öğretim Dergisi, 12 (1), 256–271. DOI: 10.47750/pegegog.12.01.27 (In English)
Mulyati, Y. and Hadianto, D. (2023). Enhancing argumentative writing via online peer feedback-based essays, International Journal of Instruction, 16 (2), 195–212. DOI: 10.29333/iji.2023.16212a (In English)
Murtadho, F. (2021). Metacognitive and critical thinking practices in developing EFL students’ argumentative writing skills, Indonesian Journal of Applied Linguistics, 10 (3), 656–666. DOI: 10.17509/ijal.v10i3.31752 (In English)
Musa, H. I. (2019). Dialogic vs. formalist teaching in developing argumentative writing discourse, Journal of Language Teaching and Research, 10 (5), 895–905. DOI: 10.17507/jltr.1005.01 (In English)
Najjemba, J. L. and Cronjé, J. (2020). Engagement with and participation in online role-play collaborative arguments: A sociocultural perspective, Electronic Journal of E-Learning, 18 (5), 436–448. DOI: 10.34190/JEL.18.5.006 (In English)
Nesbit, J., Niu, H. and Liu, Q. (2019). Cognitive tools for scaffolding argumentation, in Adesope, O. and Rud, A. (eds.) Contemporary technologies in education, Palgrave Macmillan, Cham, 97–117. (In English)
Newell, G., Bloome, D. and Hirvela, A. (2015). Teaching and learning argumentative writing in high school English language arts classrooms. New York: Routledge. DOI: 10.4324/9781315780498 (In English)
Newell, G. E., Goff, B., Buescher, E., Weyand, L., Thanos, T. and Kwak, S. B. (2017). Adaptive expertise in the teaching and learning of literary argumentation, in English language arts research and teaching, Routledge, New York, 157–171. DOI: 10.4324/9781315465616 (In English)
Nguyen, T. S. and Nguyen, H. B. (2020). Unravelling Vietnamese students’ critical thinking and its relationship with argumentative writing, Universal Journal of Educational Research, 8 (11B), 5972–5985. DOI: 10.13189/ujer.2020.082233 (In English)
Ozfidan, B. and Mitchell, C. (2020). Detected difficulties in argumentative writing: The case of culturally and linguistically Saudi-background students, Journal of Ethnic and Cultural Studies, 7 (2), 15–29. DOI: 10.29333/ejecs/382 (In English)
Ozfidan, B. and Mitchell, C. (2022). Assessment of students’ argumentative writing: A rubric development, Journal of Ethnic and Cultural Studies, 9 (2), 121–133. DOI: 10.29333/ejecs/1064 (In English)
Papangkorn, P. and Phoocharoensil, S. N. (2021). A comparative study of stance and engagement in English argumentative essays, International Journal of Instruction, 14 (1), 867–888. DOI: 10.29333/iji.2021.14152a (In English)
Paris, A. S., Lustiyantie, N., Murtadho, F., Rosyidi, A. Z. and Suryadi, H. (2025). Developing a rubric for argumentative writing assessment based on a multidimensional approach, Jurnal Ilmiah Global Education, 6 (2), 1226–1231. DOI: 10.55681/jige.v6i2.4125 (In English)
Pei, Z., Zheng, C., Zhang, M. and Liu, F. (2017). Critical thinking and argumentative writing among EFL learners in China, English Language Teaching, 10 (10), 31–42. DOI: 10.5539/elt.v10n10p31 (In English)
Qian, J. (2015). A study of critical thinking’s impact on English majors’ argumentative writing. Unpublished master’s thesis, Central China Normal University. (In English)
Raj, T., Chauhan, P., Mehrotra, R. and Sharma, M. (2022). Importance of critical thinking in education, World Journal of English Language, 12 (3), 126–133. DOI: 10.5430/wjel.v12n3p126 (In English)
Rapanta, C. and Macagno, F. (2019). Evaluation and promotion of argumentative reasoning in academic writing, Revista Lusófona de Educação, 45, 125–142. DOI: 10.24140/issn.1645-7250.rle45.09 (In English)
Robillos, R. J. and Art-in, S. (2023). Argument mapping with translanguaging pedagogy, International Journal of Instruction, 16 (4), 651–672. DOI: 10.29333/iji.2023.16437a (In English)
Robillos, R. J. and Thongpai, J. (2022). Computer-aided argument mapping within a metacognitive approach, LEARN Journal: Language Education and Acquisition Research Network, 15 (2), 160–186. (In English)
Rogers, C. R. (1961). On becoming a person: A therapist's view of psychotherapy, Houghton Mifflin, Boston. (In English)
Rubiaee, A. M., Darus, S. and Abu Bakar, N. (2019). The effect of writing knowledge on EFL students’ argumentative essays, Arab World English Journal, 10 (4), 263–287. DOI: 10.24093/awej/vol10no4.20 (In English)
Sánchez-Peña, D. and Chapetón, C. M. (2018). Fostering written argumentative competence from a critical literacy approach, Revista Colombiana de Educación, 75. DOI: 10.17227/rce.num75-8107 (In English)
Setyowati, L., Sukmawa, S. and Latief, M. A. (2017). Solving students’ problems in writing argumentative essays through planning, CELT: A Journal of Culture, English Language Teaching & Literature, 17 (1), 86–102. DOI: 10.24167/celt.v17i1.1140 (In English)
Sharadgah, T. A., Sa’di, R. A. and Ahmad, H. H. (2019). Promoting and assessing EFL college students’ critical thinking skills through argumentative essay writing, Arab World English Journal, 10 (4), 133–150. DOI: 10.24093/awej/vol10no4.11 (In English)
Shi, Z., Liu, F., Lai, C. and Jin, T. (2022). Enhancing the use of evidence in argumentative writing through collaborative processing of content-based automated writing evaluation feedback, Language Learning & Technology, 26 (2), 106–128. DOI: 10.10125/73481 (In English)
Stapleton, P. and Wu, Y. (2015). Assessing the quality of arguments in students’ persuasive writing, Journal of English for Academic Purposes, 17, 12–23. DOI: 10.1016/j.jeap.2014.11.006 (In English)
Su, Y., Lin, Y. and Lai, C. (2023). Collaborating with ChatGPT in argumentative writing classrooms, Assessing Writing, 57, 100752. DOI: 10.1016/j.asw.2023.100752 (In English)
Tankó, G. and Csizér, K. (2018). Individual differences and micro-argumentative writing skills in EFL, in Multilingual education, vol. 29. Cham: Springer, 149–166. DOI: 10.1007/978-3-319-95198-0_11 (In English)
Thompson, V. E. (2021). Integrating global Englishes into literature and writing units, in Devereaux, M. D. and Palmer, C. C. (eds.) Teaching English language variation in the global classroom, Routledge, New York, 82–91. (In English)
Tikhonova, E. V. and Mezentseva, D. A. (2025). Integrating visualization tools into the text of an original research manuscript: Lexical connectors and textual commentary, Integration of Education, 29 (2), 316–338. DOI: 10.15507/1991-9468.029.202502.316-338 (In Russian)
Toulmin, S. E. (1958). The uses of argument, Cambridge University Press, Cambridge. (In English)
Toulmin, S. E. (2003). The uses of argument. Updated edn, Cambridge University Press, Cambridge. (In English)
Toulmin, S. E., Rieke, R. D. and Janik, A. (1984). An introduction to reasoning. 2nd edn. New York: Collier Macmillan. (In English)
Uzun, K. (2024). Enhancing written communication skills for academic purposes, in Teaching English for academic purposes: Theory into practice. Cham: Springer Nature, 169–190. DOI: 10.1007/978-3-031-72545-6_8 (In English)
Van Eemeren, F. H. and Grootendorst, R. (2004). A systematic theory of argumentation: The pragma-dialectical approach, Cambridge University Press, Cambridge. (In English)
van Weijen, D., Rijlaarsdam, G. and van den Bergh, H. (2019). Source use and argumentation behavior in L1 and L2 writing, Reading and Writing, 32 (6), 1635–1655. DOI: 10.1007/s11145-018-9842-9 (In English)
Wang, Q. and Newell, G. E. (2025). Teaching and learning argumentative writing as critical thinking in an EFL composition classroom, Learning, Culture and Social Interaction, 51, 100891. DOI: 10.1016/j.lcsi.2025.100891 (In English)
Wang, Y. and Said, S. M. (2024). Cognitive challenges in argumentative writing for EFL learners: A scoping review, International Journal of Academic Research in Progressive Education and Development, 13 (4), 3053–3065. (In English)
Wambsganss, T., Niklaus, C., Cetto, M., Söllner, M., Handschuh, S. and Leimeister, J. M. (2020). AL: An adaptive learning support system for argumentation skills, in Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. New York: ACM, 1–14. (In English)
Weyand, L., Goff, B. and Newell, G. (2018). The social construction of warranting evidence in two classrooms, Journal of Literacy Research, 50 (1), 97–122. DOI: 10.1177/1086296X17751173 (In English)
Widyastuti, S. (2018). Fostering critical thinking skills through argumentative writing, Cakrawala Pendidikan, 37 (2), 182–189. DOI: 10.21831/cp.v37i2.20157 (In English)
Yang, J., Zheng, M. and Liu, Y. (2023). Fusion weighted features and BiLSTM-attention model for argument mining of EFL writing, Frontiers in Psychology, 14, 1049266. DOI: 10.3389/fpsyg.2023.1049266 (In English)
Yang, L. (2021). Focus and interaction in writing conferences for EFL writers, SAGE Open, 11 (4). DOI: 10.1177/21582440211058200 (In English)
Yang, Y. (2016). Appraisal resources in Chinese college students’ English argumentative writing, Journal of Language Teaching and Research, 7 (5), 1002–1013. DOI: 10.17507/jltr.0705.23 (In English)
Yasuda, S. (2023). What does it mean to construct an argument in academic writing?, Journal of English for Academic Purposes, 66, 101307. DOI: 10.1016/j.jeap.2023.101307 (In English)
Young, R. E., Becker, A. L. and Pike, K. L. (1970). Rhetoric: Discovery and Change. Harcourt, Brace & World, 383 p. [Online], available at: https://books.google.ru/books?id=tKktkEUZxWQC (Accessed 30.09.2025). (In English)
Zainuddin, Z. and Shammem, R. G. (2016). Effects of training in Toulmin’s model on ESL students’ argumentative writing and critical thinking, Malaysian Journal of Language and Linguistics, 5 (2), 114–133. (In English)
Zhang, L. (2016). Writing critically 3: Argumentative writing. Beijing: Foreign Language Teaching and Research Press. (In English)
Zhang, T. and Zhang, L. J. (2021). Taking stock of a genre-based pedagogy, Sustainability, 13, 11616. DOI: 10.3390/su132111616 (In English)
Zhang, X. (2019). EFL writers’ reconstruction of writing beliefs in a functional linguistics-based curriculum, SAGE Open, 9 (2). DOI: 10.1177/2158244019853915 (In English)