
CORRECTION article

Front. Educ.

Sec. Digital Learning Innovations

Volume 10 - 2025 | doi: 10.3389/feduc.2025.1709300

Charting the Field: A Review of Argument Visualization Research for Writing, Learning, and Reasoning

Provisionally accepted
  • 1 Simon Fraser University, W.A.C. Bennett Library, Burnaby, Canada
  • 2 Simon Fraser University, Burnaby, Canada
  • 3 Mount Saint Vincent University, Halifax, Canada
  • 4 National Taichung University of Education, Taichung, Taiwan


Introduction

Context

The capacity to develop and deliver evidence-based, reasoned arguments remains one of the central aims of post-secondary education (Andrews, 2015; Wingate, 2012). To meet this aim, students are often expected to demonstrate, outline, or pre-write their reasoning and claims before producing an essay, frequently with reference to disciplinary conventions (Bean & Melzer, 2021). Instructors typically design pre-writing activities to scaffold this process, including outlining, brainstorming, freewriting, diagramming, and argument visualization. Among these pedagogical approaches, web-based argument visualization tools in particular have become more common in higher education as supports for learners' development of critical thinking, reasoning, and writing skills (Rousseau & van Gelder, 2024). In online and hybrid learning contexts, these web-based tools are especially valuable because they provide structure and feedback where face-to-face interaction may be limited. Argument visualization tools are pedagogically significant because they externalize complex reasoning processes in visual formats. In higher education contexts, argument visualization serves two main purposes. First, in philosophy courses, students learn to analyze the arguments of others by transforming texts into visual representations that clarify the functions of different components (Davies, 2011; Harrell, 2011). Second, students use argument mapping to construct their own arguments; this may involve supporting a perspective on a contentious issue by creating an argument map either as a stand-alone project or as a pre-writing step for an argumentative essay. This dual application suggests that argument visualization tools could play a significant role in developing students' critical thinking capabilities across disciplines.

Research Gaps

Despite this pedagogical significance and increasing integration in online education, existing studies highlight the potential of argument visualization tools to enhance critical thinking and writing, yet robust empirical evidence remains scarce. As van Gelder (2015) suggested, the effectiveness of argument mapping instruction often depends on the availability of skilled instructor feedback, which is not always possible given instructor workload (Mueller & Schroeder, 2018; Mello et al., 2024). Furthermore, the diversity of tools, instructional aims, and implementation contexts makes it difficult to determine when and how argument visualization is most effective for supporting student learning outcomes. Given the growth of online courses, the need for clear evidence of how these tools function as cognitive scaffolds in digitally mediated environments is especially urgent. Most critically for online education, this lack of consolidated evidence leaves instructors and designers with limited guidance on how to integrate these web-based tools in ways that align with specific disciplinary needs and the demands of online or blended learning contexts. Without systematic evidence about tool effectiveness, implementation strategies, and disciplinary applications, educators cannot make informed decisions about adopting these potentially transformative pedagogical tools.
Objectives

Although many research studies imply that argument mapping can enhance writers' abilities in argumentation and critical thinking (Davies, 2011; Liu et al., 2024; Manalo & Fukuda, 2024), the available evidence must be consolidated as a first step toward understanding the scope and coverage of these tools. To address these critical gaps, this paper adopts a scoping review methodology guided by the PRISMA-ScR framework (Tricco et al., 2018). Through this systematic scoping approach, we aim to map the breadth of research on argument visualization and learning by identifying key themes and fields and highlighting gaps in empirical studies. By synthesizing existing evidence across disciplines and contexts, we hope that the scoping review will provide a foundation for future research directions, particularly in the areas of experimental validation and pedagogical design. This consolidation is especially timely given the rapid expansion of online and hybrid learning environments in which such tools could provide crucial cognitive scaffolding.

Research Questions

Given the growing importance of critical thinking and writing skills in post-secondary education, and recognizing the gaps identified above, there is a need to scope existing evidence on how argument visualization tools support learning outcomes. Our review begins by providing an overview of the history and pedagogical uses of visual argumentation, particularly argument mapping; the potential learning outcomes and benefits of digital tools; and how research to date reveals a scarcity of validation for the effectiveness of argument visualization. We then describe our use of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Scoping Reviews (Tricco et al., 2018) framework to systematically examine the use of argument visualization tools in higher education. To address the identified gaps systematically, our analysis considers three research questions, each selected to address a specific gap aligned with the overall objective of argument visualization research.

1. The empirical gap: What evidence exists regarding the impact of argument visualization tools on students' critical thinking and writing skills in higher education settings?
2. The diversity gap: What types of argument visualization software are currently being used in higher education, and how are they integrated into teaching and learning practices in different fields?
3. The instructional strategy gap: Under different instructional objectives (e.g., pre-writing planning, collaborative debate), how do tool usage strategies (such as individual versus collaborative use) moderate student learning outcomes?

Literature Review

History & Background

Over the past decades, argument visualization has emerged as a pedagogical method for enhancing writers' reasoning and critical thinking skills (Kirschner et al., 2012; Reed et al., 2007). Although its roots can be traced back to the nineteenth century, it has only recently become a prominent educational tool. In particular, recent decades have seen the development of specialized software that supports argument diagramming in diverse learning contexts (Butcher, 2006; Davies, 2011; Manalo & Fukuda, 2024; Reed et al., 2007; van Gelder, 2015). This evolution from historical concept to digital tool reflects a fundamental shift in how educators approach argumentation instruction.
Argument visualization transforms abstract argumentative content into formats such as diagrams or maps designed to show logical connections between propositions, premises, evidence, and conclusions. As Hitchcock (2017) noted, viewing arguments as structured networks of interrelated premises and conclusions makes them well suited for diagrammatic display. Blair (2012) similarly argued that evaluating arguments requires more than judging persuasive impact, as it demands attention to logical norms, coherence, and evidence. By making these abstract logical relationships visible, argument visualization tools can make these dimensions of argument more accessible. This historical trajectory demonstrates the value of visual representations as cognitive tools in educational settings and sets the stage for examining specific argument mapping strategies and their documented benefits.

Argument Mapping Strategy and Its Benefits

Building on this historical foundation, argument mapping is the most prominent of the various argument visualization techniques. It provides structured graphic layouts that represent reasoning processes more explicitly than conventional textual approaches (Royer & Royer, 2004). These techniques help writers, especially novice writers, to recognize the underlying architecture of arguments and to understand complex reasoning structures more clearly (Liu et al., 2023). The visual structure follows a consistent pattern: typically, argument mapping uses box-and-arrow designs in which propositions appear in nodes linked by directional arrows that show inferential relationships (Dwyer et al., 2013; van Gelder, 2015). Many argument visualization tools, regardless of discipline or platform, adopt a logic structure grounded in informal reasoning: claims are supported by reasons and evidence, challenged by counterclaims or rebuttals, and synthesized into a conclusion. Figure 1 shows a sample argument map structure based on Freeman's framework (1991) and reflects common design features found in argument visualization software.

Beyond their structural features, empirical research has documented multiple cognitive and pedagogical benefits. Research has shown that visualization techniques can support students' argumentation by activating and refining their prior knowledge (e.g., cognitive schemas related to argumentation). As a result, these techniques can advance writers' critical thinking and writing skills (Nussbaum & Schraw, 2007; Liu et al., 2023; Nesbit et al., 2018). Furthermore, argument mapping is not an isolated tool; it plays a foundational role in preparing students for activities such as collaborative dialogue, structured debates, and argumentative essay writing (Chen et al., 2022; Harrell, 2022). It also serves as a reliable way to assess students' reasoning and analytical progress (Rapanta & Walton, 2016). Building on these cognitive benefits, digital tools offload tasks like layout adjustments (Agostinho et al., 2011; Hoffmann & Paglieri, 2011), freeing resources for argumentation. Research has further highlighted the cognitive benefits of argument mapping (Dwyer et al., 2010, 2013). For example, Dwyer et al. (2010, 2013) showed that argument maps can aid short-term memory by organizing propositions and reducing cognitive load. Although sustained gains in long-term retention may require repeated practice, students who constructed argument maps or hierarchical outlines performed better in recall tasks than those using text summarization alone.
Taken together, these findings suggest that argument maps function not only as visual aids but also as cognitive tools that help organize, integrate, and retrieve complex information. This cognitive scaffolding becomes even more significant when implemented through digital online platforms.

Figure 1 Sample argument visualization structure (Argument Map)

Digital Visualization Tools and the Existing Evidence

The transition from paper-based to digital argument mapping has fundamentally transformed the pedagogical possibilities of these tools. Advances in online digital technology have led to interactive argument visualization tools that create new pedagogical opportunities for developing students' reasoning skills in online environments. These computer-based online tools (see examples of argument visualization interfaces in Davies, 2009) allow learners to modify, rearrange, and iteratively refine arguments with less cognitive effort, particularly when working with large or complex maps (Chen et al., 2022). This technological shift has yielded measurable learning benefits: research indicates that using digital tools in online settings can help students construct more sophisticated argument structures, engage more deeply with content, and maintain motivation over time (Eftekhari et al., 2016). The cognitive advantages of digital implementation stem from these tools' ability to reduce extraneous processing demands. One important benefit is that digital tools can offload lower-level tasks, such as adjusting layouts or redrawing connections, freeing mental resources for higher-order tasks like analyzing counterarguments or constructing rebuttals (Agostinho et al., 2011; Hoffmann & Paglieri, 2011; Nesbit et al., 2018). Therefore, these tools serve not just as visual aids but as cognitive scaffolds that help learners critically engage with complex material (Belland, 2010; Benetos, 2023). However, a significant disconnect exists between the theoretical promise and the empirical validation of these tools. Despite growing interest in argument visualization tools, robust empirical evidence supporting their effectiveness remains limited (Hornikx & Hahn, 2012; Scheuer et al., 2010). Few large-scale controlled experiments have been conducted, although some quasi-experimental studies document promising outcomes in diverse contexts (Noroozi et al., 2012). The available evidence, while encouraging, comes primarily from small-scale studies. For example, Liu et al. (2023) studied 190 EFL undergraduates in China who used the Dialectical Map tool as a pre-writing activity and found positive effects on students' argumentation. Similarly, Darmawansah et al. (2024) found that argument mapping supported students' dialectical thinking, such as reaching conclusions after weighing multiple perspectives. Other studies use pre- and post-testing to measure effects on critical thinking skills. For instance, van Gelder et al. (2004) employed the California Critical Thinking Skills Test (CCTST) and reported a 20% improvement among students taught with argument mapping. These empirical findings reveal both value and critical gaps that shape our research approach. Taken together, these studies indicate that argument visualization tools have potential for improving critical thinking and writing skills in higher education, yet controlled validation remains limited. This gap informs our first research question, which synthesizes evidence on learning impacts.
Our second question focuses on the range of tools and their use across disciplines. Our third question considers instructional aims and strategies to clarify in which contexts argument visualization tools work best and through which mechanisms.

Method

A scoping review using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Scoping Reviews (Tricco et al., 2018) framework serves to systematically examine the use of argument visualization tools in higher education. This was selected as the most appropriate methodology for addressing our research questions given the relatively recent nature of our topic, with the understanding that the literature might be limited and methodologically heterogeneous, making a full systematic review or meta-analysis impractical at this stage. Scoping reviews enable researchers to map out the current state of knowledge on a particular subject, highlight gaps in the literature, and clarify existing concepts and their characteristics (Tricco et al., 2018; Peters et al., 2020).

Analytical Framework

Drawing on the Technology-Enhanced Learning (TEL) model proposed by Lin and Hwang (2019) for examining studies related to the use of technologies in school settings and professional training, this review took several dimensions into account, including author, year, discipline, study design, sample size, training provided, tools used, purpose of use, and outcome measures. Lin and Hwang's (2019) TEL model was originally applied in flipped learning contexts; its broader analytical strength, however, is a structured, multidimensional framework that our review uses to examine technological affordances, pedagogical strategies, and learner-centred outcomes. In particular, the TEL model provides a strong yet adaptable basis that allows us to systematically analyze the diverse range of argument visualization tools, their instructional integration, and their measured effects on students' critical thinking and writing skills. Our analytical approach aligns with the TEL model's core principles in that it situates educational technologies, such as argument visualization tools, within pedagogical contexts to evaluate their educational impact. At the same time, the TEL model emphasizes enabling affordances, which may risk foregrounding positive impacts and underrepresenting null effects or critical perspectives. To address this, our synthesis explicitly considered methodological variations in argument visualization research in terms of study design, sample size, and assessment methods, and noted limitations such as methodological constraints, small effect sizes, and challenges in cross-study comparisons. By integrating these variations, we sought to avoid presupposing the TEL model's implicit bias toward positive impacts.

Search Strategy

Our team developed our protocol using the guidelines provided by the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (Shamseer et al., 2015). The protocol was collaboratively revised by the research team in a series of meetings to ensure alignment with our research questions. The inclusion and exclusion criteria were refined based on these discussions and the review of the initial sample. Screening, data extraction, critical appraisal, coding, and validation were completed independently by the first two authors to ensure rigour and minimize bias.
Databases

The database search was conducted in February 2024 using EBSCOhost, which included access to several education-focused databases such as ERIC, Education Research Complete, and Academic Search Complete. This ensured broad coverage of peer-reviewed literature in educational technology and instructional design. Web of Science and Google Scholar were also used to capture multidisciplinary and grey literature. Scopus was considered but excluded due to significant overlap with Web of Science and feasibility constraints, including limited research personnel and time for managing deduplication across large, redundant datasets. While Scopus offers robust coverage of the social sciences and international journals, potentially capturing studies from regions such as Europe or Asia, the risk of missing such studies was mitigated by including Google Scholar, which indexes a broad range of grey literature and multidisciplinary sources, ensuring comprehensive coverage of relevant studies (Falagas et al., 2008). No date restrictions were applied to the search because research on argument visualization in post-secondary education could span several decades, given the long-standing use of argument mapping in various forms.

Search Parameters

The search string used was ("argument map*" OR "argument visual*") AND ("post-secondary education" OR "higher education" OR "university" OR "college"), designed to capture a broad range of studies related to argument mapping and its pedagogical applications. Synonyms such as "argument diagram*," "argument chart," and "argument representation" were initially tested but excluded from the final search string due to extremely low or irrelevant retrieval rates. This low retrieval rate likely reflects the field's preference for standardized terms like "argument mapping" and "argument visualization," which are more consistently used in the educational technology and critical thinking literature, possibly due to their association with specific software (e.g., Rationale) and established pedagogical frameworks (Davies, 2011). The primary focus was kept on the dominant terms "argument map*" and "argument visual*," which are consistently used in the empirical literature. The search strategy was applied consistently across databases but adapted as needed for each system's syntax. EBSCO yielded 129 results, Web of Science returned 131 articles, and Google Scholar produced approximately 230 results. Manual searches in Google Scholar were conducted to ensure the inclusion of grey literature and articles not indexed in traditional databases. Table 1 illustrates the coding scheme used to categorize each study and the parameters examined by the coders.

Table 1. Coding Scheme

Basic Information
  • Author: Authoring information. Example: Smith & John
  • Year: Year of publication. Example: 2003
  • Discipline: The subject domain in which the study was situated. Example: Language arts (academic writing)

Research Design
  • Study Design: Whether the methodology is a case study, qualitative, quantitative (quasi-experimental), or mixed methods. Example: an experiment researching the effect of an argument visualization tool.
  • Sample size: The total number of participants in the study. We categorize each study's number of participants following Slavin and Smith's (2009) recommendations on the relationship between sample size and effect size: "small studies" have fewer than 50 participants, "medium studies" have between 51 and 99 participants, and "large studies" have more than 100 participants. Examples: 120 (Large); 98 (Medium); 14 (Small)

Training
  • Whether or not training is provided to students or students are familiarized with the tool. If a study provides a familiarization task for its participants, we categorize the task as either argumentation scheme training or tool familiarization only. Argumentation scheme training refers to studies that focus their familiarization on building learners' theoretical knowledge of argumentation, whereas tool familiarization refers to studies that only train learners in how to use the argument visualization tool upon first use. Examples: Argumentation scheme training – students are familiarized with the meaning of premise, supporting evidence, rebuttal, and conclusion; Tool familiarization – students watched a two-minute video explaining the features of the computer-aided mapping tool.

Tools Used
  • The name of the argument visualization tool used in the study. Example: Dialectical Map is used.

Purpose of Use
  • Following Panasuk and Todd's (2005) and Widodo's (2006) suggestions for effective lesson planning, we identified the goals or intended purposes of using the argument visualization tools in each study. Four categories emerged: Planning and Reading Comprehension, Reasoning and Teaching Argumentation, Knowledge Construction, and Others. Planning and Reading Comprehension refers to studies that use the AM tools for pre-writing activities or comprehension purposes, whereas Reasoning and Teaching Argumentation refers to studies in which the tools are used to enhance critical thinking, debate, or essay writing. Others refers to studies that use the tools for disciplinary purposes, such as medical diagnosis. While these categories are analytically distinct (Planning and Reading Comprehension emphasizes preparatory or comprehension-focused uses, e.g., pre-writing or text retention, whereas Reasoning and Teaching Argumentation focuses on skill-building outcomes, e.g., critical thinking or debate), we acknowledge that some overlap exists, as pre-writing activities inherently support argumentation development. Examples: to improve debate skills in higher education; used as a pre-writing activity.

Outcome Measures
  • The tools or methods used to evaluate the dependent variables or outcomes in the study. These methods can include essays, reproductions of argument maps, achievement tests, surveys, or combinations of any of these. Example: achievement tests.

Perception of the Tool
  • Student perception: In this category, we coded whether a study collected the opinions of students using the argument visualization tool. Following Moore and Benbasat's (1991) approach to measuring perceptions of technological innovation, we categorize student perception into the following aspects: complexity perception, compatibility perception, behavioural perception, or combinations of any of these. Complexity perception refers to the degree to which the functionality of an argument map tool is perceived as difficult or easy to use (Davis, 1986). Compatibility perception refers to the degree to which an argument map tool is perceived as useful or as enhancing the user's learning experience and skill in argumentation (Hurt & Hibbard, 1989). Behavioural perception refers to the actual behaviour whereby learners use an argument mapping tool for their learning in the discipline (Moore & Benbasat, 1991). General attitudes refers to studies that only ask their students open-ended questions. Examples: a survey examining students' evaluation of the perceived ease of use of argument visualization tools for learning. Complexity perception (F): I found [AM tool's name]'s interface easy to work with. Compatibility perception (L): I found [AM tool's name] helps me write better arguments. Behavioural perception (I): I used [AM tool's name] to brainstorm my ideas for my essay. General attitude (G): What can we do to improve [AM tool's name]?

Group Structure
  • Collaboration: Whether learners worked in a group or individually on the argument maps. The coding of this category captures whether students work on the argument map on their own (individual), are allowed to work with peers, such as in discussions, debates, or collaborative writing projects (collaboration), or both. Example: Collaboration or Individual.

Screening Process and Exclusion Criteria

After removing duplicates and applying our exclusion criteria, we identified 45 studies for inclusion in the review. Articles were excluded if they were not directly related to argument mapping strategies or student learning in higher education contexts, or if the argument mapping context was not specified. Additionally, non-English or non-indexed articles were excluded to maintain consistency in data extraction and analysis. This scoping review aims to provide a comprehensive overview of the existing literature, identify knowledge gaps, and propose directions for future research on argument visualization tools in higher education. Figure 2 below illustrates the screening process, which followed the PRISMA framework.

Figure 2 Study screening and coding process – PRISMA flowchart for study inclusion

Results

We reviewed 45 studies (a list of articles is provided in Appendix 1). We found a gradual increase in publications related to argument visualization tools, with a sharp rise in 2022 and 2023, as shown in Figure 3 in Appendix 2. This period had the highest number of studies, with 8 publications in 2022 and 6 in 2023.

Outcome Measures

As seen in Figure 4 (see Appendix 2 for the detailed distribution), among the 45 studies, the most frequently measured outcomes for argument visualization software were achievement tests (n=15), reproductions of argument maps (n=9), and essays (n=7). Studies utilizing achievement tests might use the CCTST or student grades to assess the outcome of the argument mapping intervention. These outcome measures were used to assess the impact of argument visualization tools on students' critical thinking and writing skills. Some studies used diverse outcome measures to document the effectiveness of the argument visualization tools they used, including combinations of surveys, achievement tests, argument maps, presentations, and interviews (n=14). Among the achievement tests, measures varied from standardized instruments like the CCTST (e.g., van Gelder et al., 2004) to course-specific grades or custom assessments, reflecting differences in quantitative or qualitative evaluation focus.

Subject-Area Distribution

The most represented subject area is EFL with 8 studies (n=8), followed by Education with 6 studies (n=6), philosophy with 3 studies (n=3), and business education with 3 studies (n=3). Among the Education studies, some train pre-service and in-service teachers to use argument visualization software, and some use the tools for teacher training in Teaching English to Speakers of Other Languages (TESOL) or Early Childhood Education (ECE).
A range of other subject areas, including food microbiology, medicine, early childhood education, Teaching English to Speakers of Other Languages (TESOL), psychology, nursing, journalism, and various others, have between 1 and 2 studies each. Figure 5 (see Appendix 2 for details) highlights the diversity of fields in which argument visualization tools have been studied, with a clear concentration in disciplines related to language learning, teacher education, or logic training, such as EFL, Education, and philosophy.

Study Design Types

Figure 6 (below) shows the types of study designs used to research the effects of argument visualization software as a teaching practice. We found that the majority of the studies (n=34) employed a quasi-experimental design, making this the most common study type for researching argument visualization. Other study designs included qualitative studies (n=4), mixed methods (n=3), and case studies (n=2). Additionally, 2 studies did not specify a design, making it challenging to infer one from the provided descriptions. We concluded that quasi-experimental approaches are popular in the research on argument visualization tools.

Figure 3 Type of Study Design for Argument Visualization Research

Argument Visualization Tools

Figure 7 (see Appendix 2 for details) illustrates the diverse range of tools used in research on argument visualization. In the 45 studies reviewed here, we found that the most frequently used tool is Rationale, utilized in 10 studies, followed by the CAM/CAAM tool in 3 studies and ZJU YueQue in 2 studies. Several other tools, such as Argunet, Dialectical Map, MindMeister.com, and the Write Reason web app, were each used in 1 study. Additionally, 7 studies did not specify the tool used, marked as "N/A." Furthermore, when researchers used the tools, the majority of studies specified training sessions for their participants (n=29). Among these studies, most trained students by presenting the features of the tool, intending to familiarize students with the tool (n=18), whereas the rest trained students by teaching argumentation schemes (n=11). The remaining 16 studies did not provide training to the participants, nor did they specify whether training was offered (n=16), as shown below in Figure 8.

Figure 4 Training in Argument Visualization Tools

Studies also showed a trend toward individual work with the tools. As seen in Figure 9 below, most of the studies (n=28) had students work with their argument visualization tools individually, whereas 14 studies had their students work in a group setting. Three studies compared both (n=3), and 3 studies did not specify whether students worked individually or with a group (n=3).

Figure 5 Collaboration Type

Study Size

We found that investigation of argument mapping tools often relies on small-scale studies in educational settings. As shown in Figure 10 below, among the 45 studies, the majority were classified as small studies, with fewer than 50 participants (n=26). These small-scale studies often focused on using argument visualization as a pedagogical intervention to pilot the effectiveness of the tool in teaching argumentation.
There were 13 studies classified as large studies, involving more than 100 participants, whereas only 6 studies were classified as medium studies, with sample sizes ranging from 51 to 99 participants.

Figure 6 Sample Size Classification

Perceptions of Argument Visualization Tools

Our analysis of the 45 studies showed diverse levels of attention to perceptions of argument visualization tools among students and instructors. Specifically, we found that the majority of studies (n=23) did not examine perceptions of the tool in any form. In the remaining 22 studies, however, student perceptions were categorized following Moore and Benbasat's (1991) framework, which assesses perceptions of technological innovations through the dimensions of complexity, compatibility, behavioural aspects, or any combination of these. As shown in Figure 11 below, 5 studies addressed both compatibility and behavioural perceptions, evaluating the alignment of the tool with learning goals and students' behavioural engagement in using the tool (n=5). Four studies (n=4) explored compatibility and complexity perceptions, meaning that these studies focused on how well the tool fits learning goals and on the tool's ease of use. Three studies (n=3) examined compatibility only, meaning that these studies assessed whether the tools enhanced student learning experiences or asked students to self-report their improved argumentation skills; another three studies investigated general attitudes, capturing open-ended student feedback on the tools (n=3). Furthermore, only two studies undertook a more thorough examination of student perceptions of tool use, investigating all of the complexity, compatibility, behavioural, and general attitude aspects (n=2); these offer a holistic perspective on student perceptions of the tools. Only one study focused solely on complexity and behavioural aspects (n=1), while another exclusively explored behavioural perceptions (n=1).

Figure 7 Perceptions of Argument Visualization Tools

Our analysis of the 45 studies demonstrated four distinct purposes for using argument visualization tools in higher education settings: Planning and Reading Comprehension, Reasoning and Teaching Argumentation, Knowledge Construction, and Others. The most frequent purpose was Planning and Reading Comprehension, which accounted for 21 studies (n=21). The studies in this category use argument mapping tools to facilitate pre-writing activities before essay writing, while some use these tools to enhance students' understanding of reading materials, retention of textual arguments, or recall of arguments. The second most frequent purpose was Reasoning and Teaching Argumentation, identified in 18 of the 45 studies (n=18). These studies concentrated on using argument visualization tools as pedagogical tools for fostering critical thinking, reasoning skills, and debating skills, or for improving argumentative essay writing. There were 4 studies in the Others category (n=4). These studies mostly use argument visualization tools for specialized disciplinary purposes, such as supporting diagnostic reasoning in medical education or other subject-specific applications. The least represented category, Knowledge Construction, was observed in only 2 studies. These studies explored the use of argument visualization tools to facilitate the collaborative or individual construction of knowledge frameworks (see Figure 12).
Figure 8 Purpose of Use

Discussion

In this study, we conducted a scoping review of research focusing on argument visualization software and closely examined how this technique has been used in teaching and learning contexts in higher education. We closely reviewed 45 empirical articles across various post-secondary fields and found a noticeable increase in publications related to argument visualization tools since 2022. This increase is particularly significant given the rapid expansion of online and distance education following the COVID-19 pandemic, which has created urgent needs for digital tools that can support complex cognitive tasks in remote learning environments. The findings demonstrate both the growing adoption of these tools across diverse disciplines and the need for more rigorous empirical validation of their effectiveness. Our analysis reveals four key areas that characterize the current state of argument visualization research: 1) disciplinary applications, 2) assessment approaches, 3) tool adoption patterns, and 4) instructional strategies that moderate effectiveness.

Disciplinary Applications of Argument Visualization

Specialized Purposes

Argument visualization has been adopted as a pedagogical strategy across a broad range of instructional aims and disciplinary contexts in higher education. Its application and use cases are shaped by the epistemic goals and pedagogical traditions of specific domains. In more specialized disciplinary settings, for instance, argument visualization has been used to scaffold diagnostic reasoning in medical education (Wu et al., 2023, 2024), structure journalistic analysis (Borden, 2007), facilitate philosophical reasoning and logic training (Kaeppel, 2021; Twardy, 2004), and support biological argumentation in teacher education programs (Garcia et al., 2021). In other cases, it has been integrated into EFL and TESOL (Teaching English to Speakers of Other Languages) contexts to develop collaborative debate skills (Chen et al., 2022; Chen et al., 2024a) or oral argumentation and speaking (Mubarok et al., 2023). These studies illustrate the diverse ways in which argument visualization tools can be used to support instruction in individual disciplines, sometimes as standalone reasoning support and other times as mediators of communication or language learning. This disciplinary diversity demonstrates the flexibility of argument visualization, yet it also raises questions about optimal implementation strategies across different fields.

Critical Thinking and Argumentation

Beyond discipline-specific applications, our review identified a substantial focus on general critical thinking development. A significant group of argument visualization studies focuses more on supporting students' critical thinking and argumentation skills, and their instructional outcomes are often grounded in general education and writing-intensive courses. These studies range from philosophical instruction and academic writing to interdisciplinary learning environments (Butchart et al., 2009; Harrell, 2008, 2011; Kunsch et al., 2014; Maftoon et al., 2014; Scheuer et al., 2014; Sönmez et al., 2020; Uren et al., 2006; Yilmaz-Na & Sönmez, 2023a, 2023b).
More importantly, the emphasis here is on developing students' academic literacy abilities, such as generating arguments, structuring evidence-based claims, evaluating reasoning statements, and responding to counterarguments. This focus on transferable academic skills suggests that argument visualization tools may serve as cross-disciplinary cognitive scaffolds.

Comprehension and Retention

Complementing these applications, research has also documented benefits for cognitive processing and memory. Another instructional application of argument visualization focuses on reading comprehension and knowledge retention. Several studies assessed how visualizing arguments during or after reading enhances students' understanding of complex texts, recall of information, and ability to trace logical connections between ideas (Archila et al., 2022; Cullen et al., 2018; Davies, 2009; Dwyer et al., 2010, 2013; Eftekhari & Sotoudehnama, 2018; Eftekhari et al., 2016; Gargouri & Naatus, 2017; Jeong & Kim, 2022; Loll & Pinkwart, 2013; Memis et al., 2022; Ngajie et al., 2020). These studies, although less concerned with argumentative writing, suggest that argument visualization can serve as a cognitive tool to improve information processing (Nesbit et al., 2018), especially when dealing with dense or abstract content. This is consistent with Cognitive Load Theory, which posits that external visual representations can help manage working memory constraints and facilitate deeper learning by reducing extraneous cognitive load (Sweller, 1994). The cognitive benefits documented here provide theoretical grounding for understanding how these tools support learning.

Argument Visualization for EFL Writers

A distinctive body of research focuses on developing EFL learners' writing skills, where argument visualization tools are introduced as pre-writing activities (Alsmari, 2022; Binks et al., 2022; Liu et al., 2023; Robillos, 2021; Robillos & Art-in, 2023). In these contexts, argument visualization tools are not simply aids for organizing content but are instrumental in supporting language learners' development of rhetorical structure in writing argumentative essays. Together, these studies show that argument visualization has been particularly valuable for learners who must navigate conceptual and linguistic challenges simultaneously. This diversity of applications reveals both opportunities and challenges. While argument visualization demonstrates remarkable flexibility across disciplines, this same diversity complicates efforts to assess overall effectiveness, leading us to examine how these tools have been evaluated across contexts.

Assessing the Effectiveness of Argument Visualization

Turning from applications to evidence of effectiveness, our analysis reveals significant methodological patterns and limitations in how these tools have been evaluated. In the studies we reviewed, argument visualization research employs a variety of assessment approaches to investigate effectiveness, with an emphasis on measurable learning outcomes. A clear pattern emerged in these assessment approaches: instructional strategies and collaborative learning approaches play a key role in the efficacy of argument visualization, but variability in training and context shapes student outcomes.
Our findings suggest that argument visualization tools may serve as effective pedagogical scaffolds for developing both analytical and constructive argumentation skills; however, the lack of standardized methods complicates synthesis and generalization.

Critical Thinking and Writing

Our first research question sought to examine the evidence regarding the impact of argument visualization tools on students' critical thinking and writing skills. Our findings indicate a predominant use of achievement tests (Butchart et al., 2009; Chen et al., 2024a; Crudele & Raffaghelli, 2023; Cullen et al., 2018; Dwyer et al., 2010; Dwyer et al., 2013; Eftekhari & Sotoudehnama, 2018; Eftekhari et al., 2016; Harrell, 2008; Harrell, 2011; Kunsch et al., 2014; Loll & Pinkwart, 2013; Memis et al., 2022; Ngajie et al., 2020; Twardy, 2004; Uçar & Demiraslan Çevik, 2020; Wu et al., 2013; Wu et al., 2014), essays (Alsmari, 2022; Binks et al., 2022; Liu et al., 2023; Maftoon et al., 2014; Robillos, 2021; Robillos & Art-in, 2023; Robillos & Thongpai, 2022), or a mixture of both (Cullen et al., 2018; Memis et al., 2022) as methods to assess the effects of argument visualization. Our analysis suggests that these studies focus strongly on measurable learning outcomes and the learning effectiveness of the intervention. However, we also note that the heterogeneity of achievement tests, ranging from validated instruments like the CCTST to subjective grades, poses evaluation challenges for our synthesis, potentially obscuring patterns; for example, there could be stronger effects in standardized than in contextual measures, further limiting cross-study comparability. Thus, the promise of these findings is tempered by significant methodological limitations. Previously, van Gelder et al.'s (2004) findings indicated a 20% improvement in critical thinking skills as a result of an argument mapping intervention. However, when we reviewed these studies, the diversity in assessment methods and the lack of standardization across studies made it challenging to draw definitive conclusions about these tools' effectiveness and this method's validity. Also, according to our results, most of the studies examining the effectiveness of argument visualization use small-scale study designs (e.g., Jeong & Kim, 2022; Uren et al., 2006). Although small studies provide granular insights into the nuanced influences of argument visualization on writing and critical thinking, the relatively small number of large-scale studies raises problems of generalizability, demonstrating a need for broader research to validate findings across diverse and larger populations.

Qualitative Evaluation

While achievement tests provided quantitative measures of improvement and effectiveness, some studies employed qualitative evaluation of argument maps (Chen et al., 2024b; Jeong & Kim, 2022; Kaeppel, 2021; Sönmez et al., 2020), offering insights into students' actual reasoning processes and their structural understanding of arguments.
This dual approach to assessment reflects what Davies (2011) identified as the two primary pedagogical applications of argument visualization in higher education: the analysis of existing arguments (Archila et al., 2022; Borden, 2007; Cullen et al., 2018; Davies, 2009; Dwyer et al., 2010, 2013; Eftekhari & Sotoudehnama, 2018; Eftekhari et al., 2016; Gargouri & Naatus, 2017; Jeong & Kim, 2022; Loll & Pinkwart, 2013; Memis et al., 2022; Ngajie et al., 2020) and the creation of new argumentation (Alsmari, 2022; Binks et al., 2022; Crudele & Raffaghelli, 2023; Liu et al., 2023; Robillos, 2021; Robillos & Art-in, 2023; Robillos & Thongpai, 2022; Uçar & Demiraslan Çevik, 2020). Based on our scoping review, what we can conclude so far is that integrating argument visualization tools into the teaching of writing or reasoning is likely to be effective in limited, controlled pedagogical contexts. In summary, measurable outcomes are frequently reported, but methodological inconsistency and small sample sizes limit the field's capacity for generalization, underscoring the need for more robust research designs.

Types and Uses of Argument Visualization Tools

These assessment findings highlight the need to understand not just whether these tools work, but which specific tools are being adopted and why. Moving from assessment approaches to practical tool adoption, our second research question reveals significant patterns in how different platforms have been integrated across disciplines.

Adoption of Tools

Our second research question focused on the types of argument visualization tools and their disciplinary uses. The adoption of argument visualization tools in higher education is highly heterogeneous, with certain tools dominating particular contexts while others remain underutilized due to access and design factors. Based on our findings, Rationale emerged as the most widely used tool across the reviewed studies, while other platforms such as Dialectical Map (Liu et al., 2023; Nesbit et al., 2024), LASAD (Loll & Pinkwart, 2013; Scheuer et al., 2014), or Argunet (Uçar & Demiraslan Çevik, 2020) appeared in only one or two studies. This uneven distribution raises important questions about the factors driving tool selection. These usage patterns do not necessarily reflect differences in pedagogical effectiveness; they more likely stem from disparities in institutional access, platform usability, researcher familiarity, and visibility within academic communities. For example, Dialectical Map may remain underused not because of underperformance but because of limited dissemination and marketing, minimal integration with widely used Learning Management System platforms, or a lack of peer-reviewed comparative studies. Overall, these patterns align with Lin and Hwang's (2019) model, as TEL emphasizes affordances (e.g., Rationale's usability driving adoption); nevertheless, the challenge lies in the model's bias toward positive impacts. This raises a need for further investigation of technologies underutilized because of access barriers, helping to extend the model to consider contextual factors in technology integration. Comparatively, Rationale's widespread use likely originates from its user-friendly box-and-arrow interface and built-in feedback mechanisms, making it usable and suitable for individual critical thinking tasks in philosophy and EFL (e.g., Butchart et al., 2009).
In contrast, tools like Dialectical Map emphasize structured argumentation features that enhance essay writing for language learning purposes (Liu et al., 2023), while other tools (like Argunet) offer more flexible representations for argumentation but may require more training (Uçar & Demiraslan Çevik, 2020). These differences in affordances, usability for novices, and scalability for collaboration highlight the need for tool selection based on pedagogical goals, though the limited number of comparative studies and quantitative syntheses hinders broader pedagogical and instructional recommendations. The theoretical framework for understanding these adoption patterns comes from technology acceptance research. These adoption patterns are congruent with the Technology Acceptance Model (Davis, 1986), which suggests that perceived usefulness and perceived ease of use are critical predictors of educators' and students' acceptance of new educational technologies. Limited integration or underutilization may reflect lower perceived compatibility or accessibility within institutional environments. Rather than indicating conflicting findings about tool effectiveness, this uneven adoption suggests a need for more systematic evaluations of which design features or instructional alignments make a tool sustainable in specific disciplinary settings. Understanding adoption patterns naturally leads to examining how these tools are integrated into teaching practices.

Integration of Tools

Our analysis reveals meaningful patterns in how these tools are pedagogically integrated. Our second research question also examined the integration of visual argumentation software into post-secondary teaching practices. Our review revealed that Rationale emerged as the dominant tool, though the field shows considerable fragmentation, with multiple platforms being used across different contexts (Butchart et al., 2009; Eftekhari & Sotoudehnama, 2018; Eftekhari et al., 2016; Harrell, 2008; Maftoon et al., 2014; Memis et al., 2022; Ngajie et al., 2020; Rapanta & Walton, 2016; Robillos, 2021; Sönmez et al., 2020). The prevalence of EFL studies and philosophy courses in the adoption of argument visualization technology suggests that these disciplines have been early adopters, likely because of their explicit emphasis on cultivating the language, reasoning, and argumentation skills essential for post-secondary success. One reason for this adoption may be the potential of argument visualization tools to clarify argumentative structure, which is particularly valuable for learners in these fields (Van Den Braak, 2006).

Limitations of Tools

Our review also uncovered significant challenges that temper the enthusiasm for these tools. While the EFL studies show that argument mapping tools help students organize complex arguments, reduce cognitive load, and enhance critical thinking and writing, these tools may also introduce additional cognitive demands, particularly for novice writers (Jeong & Kim, 2022; Shehab & Nussbaum, 2015). Specifically, studies comparing expert and novice use of mapping tools reveal that novice writers often struggle with organizing and linking claims effectively, indicating that instructional support is important to avoid overwhelming learners' working memory (Jeong & Kim, 2022). These limitations manifest differently across disciplinary contexts.
While the structured nature of argument maps provides clarity, their effectiveness depends heavily on user expertise and tool design. Nevertheless, the integration of these tools varies significantly across disciplines, with some fields emphasizing the tools for planning, such as pre-writing activities (e.g., Liu et al., 2023; Robillos, 2021; Robillos & Art-in, 2023), while others focus on teaching argumentation, such as diagnostic reasoning (Harrell, 2008; Kaeppel, 2021; Twardy, 2004; Wu et al., 2013, 2014) or debate preparation (Chen et al., 2024a, 2024b).

Summary of Integration Findings

Despite these challenges, several consistent patterns emerge from our analysis of tool integration. Argument visualization consistently supports students' critical thinking and structured reasoning skills across disciplines, though disciplinary differences shape pedagogical implementation. Given that many argument visualization tools are delivered through web-based platforms (e.g., Rationale, LASAD, Dialectical Map), they have particular instrumental value for learners in online and blended learning environments, where immediate instructor feedback is limited. These tools' interactive visual interfaces provide structure for asynchronous reasoning tasks (e.g., Kaeppel, 2021), peer collaboration (e.g., Harrell, 2008), and formative assessment (e.g., Cullen et al., 2018), making them valuable cognitive scaffolds for supporting critical thinking in digitally mediated learning contexts.

Studies of Students' Perceptions

Understanding tool effectiveness requires examining not just outcomes but also user experiences and perceptions. In our analysis, some of the studies are concerned with student perceptions of using these tools (Alsmari, 2022; Chen et al., 2022; Chen et al., 2024a; Jeong & Kim, 2022; Kaeppel, 2021; Loll & Pinkwart, 2013; Robillos, 2021; Robillos & Art-in, 2023; Robillos & Thongpai, 2022; Scheuer et al., 2014; Sönmez et al., 2020; Uçar & Demiraslan Çevik, 2020; Uren et al., 2006; Wu et al., 2013; Wu et al., 2014), yet nearly half of the studies did not assess student perceptions of the argument visualization tools. Among those that did examine perception, some focused on student users' perceptions of the argument visualization method or of the argument mapping tool's compatibility with their actual learning experience (Alsmari, 2022; Chen, 2024a; Chen et al., 2022; Iandoli et al., 2016; Kaeppel, 2021). This inconsistency in perception research represents a significant gap. There were considerable variations in the aspects examined, with only relatively few studies adopting comprehensive frameworks (e.g., Loll & Pinkwart, 2013). This gap suggests a possible disconnect between technology-focused research and student-centered pedagogical approaches, potentially overlooking barriers to adoption and effectiveness. Our results indicate the need for future research to design a more comprehensive survey that systematically evaluates student perceptions across multiple dimensions, providing a richer understanding of how these argument visualization tools are effectively used by students and instructors in higher education. These varied applications and perceptions naturally lead us to examine the instructional strategies that moderate their effectiveness.
Instructional Aims and Strategies

Understanding which tools are used and how they are integrated provides essential context, but a critical question remains: what instructional strategies make these tools effective? Our third research question examines how the tools are implemented pedagogically, revealing critical factors that influence learning outcomes.

Training

Our third research question examined how instructional aims and tool usage strategies, such as training, collaboration, and pedagogical integration, can moderate student learning outcomes. A key practice-based pedagogical insight from our findings is that most training emphasizes tool familiarization (e.g., Iandoli et al., 2014; Maftoon et al., 2014) over training in argumentation schemes (e.g., Alsmari, 2022; Eftekhari et al., 2016). Emphasizing either alone may limit deeper skill development for students, as integrated pedagogical approaches that prioritize both may yield greater learning benefits. Our analysis of instructional aims and strategies revealed several interesting patterns in how argument visualization methods are used in higher education and their specific impact on student outcomes. The provision of training emerged as a key differentiator. As previously shown, most studies included explicit training sessions for student participants (Alsmari, 2022; Archila et al., 2022; Binks et al., 2022; Borden, 2007; Butchart et al., 2009; Carrington et al., 2011; Chen et al., 2024a; Crudele & Raffaghelli, 2023; Cullen et al., 2018; Dwyer et al., 2013; Eftekhari & Sotoudehnama, 2018; Eftekhari et al., 2016; Harrell, 2008; Iandoli et al., 2016; Jeong & Kim, 2022; Jeong & Seok-Shin, 2023; Kunsch et al., 2014; Liu et al., 2023; Memis et al., 2022; Ngajie et al., 2020; Ouyang et al., 2024; Rapanta & Walton, 2016; Scheuer et al., 2014; Uçar & Demiraslan Çevik, 2020; Wu et al., 2014; Yilmaz-Na & Sönmez, 2023a), while the rest reported no training provision (e.g., Dwyer et al., 2010; Garcia et al., 2021; Iandoli et al., 2014; Kaeppel, 2021; Robillos & Art-in, 2023; Robillos & Thongpai, 2022; Uren et al., 2006; Wu et al., 2013; Yilmaz-Na & Sönmez, 2023b). However, the nature of this training varied: among the studies that provided training, most focused solely on either introducing the features of the tools or teaching the underlying structures and logic of argumentation (i.e., claims, supporting reasons, evidence, etc.).

Instructional Applications

The implementation of these tools reveals distinct disciplinary patterns that align with pedagogical goals. With respect to how argument visualization tools are used in instruction, we found that their applications showed distinct patterns across different educational contexts. In EFL settings, which constitute the largest subject area, the diverse tools were predominantly used as pre-writing support for post-secondary language learners (Alsmari, 2022; Binks et al., 2022; Liu et al., 2023; Robillos, 2021; Robillos & Art-in, 2023). For instance, Liu et al. (2023) demonstrated how the Dialectical Map tool effectively supported 201 undergraduate EFL students in China in their argumentative essay drafting process. Similarly, Robillos' (2021) study employed Rationale software with 28 TESOL students as a pre-writing activity, reporting improved essay quality and positive student perceptions of the tool used in classroom settings.
In contrast, philosophy courses demonstrated a different instructional focus: developing learners' critical thinking and analytical skills within the discipline (Butchart et al., 2009; Cullen et al., 2018; Harrell, 2011; Kaeppel, 2021; Twardy, 2004). Cullen et al. (2018) incorporated MindMup as a tool in philosophy education, using achievement tests with essay assessments to evaluate both immediate comprehension of texts and transfer of critical thinking skills. Similarly, in English education, Harrell (2008) utilized Rationale with 180 students in English translation courses, measuring outcomes through the California Critical Thinking Skills Test (CCTST). These approaches also demonstrated how argument visualization tools can scaffold the development of complex reasoning skills in philosophical discourse. Overall, we also find a potential overlap between planning-focused and reasoning-focused purposes, which has implications for the pedagogical design of argument visualization. For instance, tools used for pre-writing (e.g., in EFL) may inadvertently enhance argumentation skills, suggesting that hybrid implementations could maximize benefits across categories; evidence shows that explicit planning instruction such as argument visualization raises argumentation quality when the planning is tied to argument structure (Graham et al., 2015). However, we acknowledge that further research is needed to disentangle the effects on argumentative structure quality from those on reasoning quality.

Teaching Argumentation

We identified which studies provided training and tool familiarization in order to illustrate the variability in instructional conditions and tool deployment strategies across settings. Our findings suggest that researchers in the field often prioritize the technical usability of tool features while overlooking instruction in argumentation schemes (e.g., Binks et al., 2022; Carrington et al., 2011). This oversight has important pedagogical implications. We argue that future studies intending to use argument visualization as pedagogy should place more emphasis on structured training in both argument visualization software features and argumentation schemes (Visser et al., 2022). This dual approach aligns with van Gelder's (2015) assertion about the importance of quality instruction and feedback in argument-mapping education. These findings suggest that instructional strategies such as collaboration and structured training moderate the effectiveness of argument visualization tools. When learners construct argument maps, instructors should offer not only technical training but also training in argumentation schemes with feedback, as feedback facilitates the learning of argumentation (e.g., Zhu et al., 2017; Zhu et al., 2020).

Collaborative Uses

Equally important to training is the role of collaborative learning environments. Our review revealed that collaborative learning environments were frequently integrated with argument visualization tools (Alsmari, 2022; Archila et al., 2022; Carrington et al., 2011; Chen et al., 2024a, 2024b; Cullen et al., 2018; Harrell, 2008; Maftoon et al., 2014; Memis et al., 2022; Mubarok et al., 2023; Ngajie et al., 2020; Ouyang et al., 2024; Robillos, 2021; Uçar & Demiraslan Çevik, 2020). Chen et al. (2024a) used ZJU YueQue with 17 participants to prepare for debates and construct arguments collaboratively.
Their achievement test results showed positive outcomes when students worked together to construct and analyze arguments. This collaborative approach was also evident in Carrington et al.'s (2011) study of 291 business education students, where group work enhanced the tool's effectiveness for argumentation. While assessment methods varied across these studies, achievement tests emerged as the most common evaluation approach. These were often complemented by argument map analysis and essay assessment, providing a multi-faceted, integrated view of student learning. This triangulation of assessment methods suggests that instructors generally recognized the need to evaluate both immediate comprehension of argumentation principles and their practical application in academic writing and critical thinking.

Summary of Findings

Through this scoping review, we have mapped the prevalent uses of argument visualization tools across post-secondary settings. Three key patterns emerge: the concentration in language learning contexts, the emphasis on developing critical thinking and writing skills, and the critical role of structured training and collaborative approaches. For instance, Eftekhari et al. (2016) conducted a study with 180 EFL students that highlighted structured training sessions as a component of effective argument visualization software use. Similarly, Wu et al. (2013) examined the use of a dual mapping cognitive tool in a diagnostic reasoning context with 29 medical students, illustrating the significance of tool-task alignment. Together, these findings outline the current landscape while highlighting that further research is needed to understand the broader applicability and variations in effectiveness across diverse disciplines and educational contexts.

Implications

These findings have immediate practical relevance for contemporary educational contexts, particularly as institutions increasingly adopt online and hybrid delivery modes. They demonstrate the unique affordances of argument visualization tools for online and blended learning environments, where students often face limited opportunities for immediate instructor feedback. Externalizing reasoning processes into clear, modifiable visual structures provides cognitive scaffolds that support critical thinking (Alsmari, 2022; Eftekhari & Sotoudehnama, 2018; Eftekhari et al., 2016) and collaborative argumentation (Carrington et al., 2011; Chen et al., 2024a, 2024b; Cullen et al., 2018; Harrell, 2008) in digital contexts. Instructors designing online or hybrid courses may therefore consider integrating argument mapping software not only as a pre-writing aid but also as a means of sustaining student engagement and formative assessment when synchronous interaction is constrained.

Limitations and Future Directions

Limitations of Current Studies

Design

Upon careful examination of the 45 studies on argument visualization tools, we noted several limitations in the current research landscape in higher education. First, quasi-experimental designs were predominant in classroom settings. While quasi-experimental designs are valuable for the initial exploration of a tool's pedagogical effects, we believe this predominance also presents a significant methodological constraint in the field.
Although these studies provide important insights into the potential effectiveness of argument visualization tools of various kinds, the lack of randomized controlled trials limits our ability to draw strong causal conclusions about their impact on student learning outcomes. The predominance of quasi-experimental studies introduces potential selection bias, as non-randomized groups may differ in motivation or prior skills, complicating causal inferences about a tool's effectiveness. Furthermore, the heterogeneity of the evidence, which varies in sample sizes, pedagogical goals, assessment methods, and contexts, limits the generalizability of findings, particularly beyond the EFL and philosophy disciplines.

Diverse Assessment Methods

We also note that diverse assessment methods were used across studies, ranging from achievement tests (i.e., course-based grading versus standardized critical thinking tests) to argument map evaluations and essays. Assessment of student perceptions of the tools was likewise limited, with most studies examining only one or two dimensions of perception. Although these varied methods can provide rich data about the student learning process, such diversity makes it challenging to conduct meaningful cross-study comparisons or meta-analyses of intervention effectiveness. This variability may also introduce measurement bias, as subjective instruments such as argument map evaluations could favour certain instructional styles, underscoring the need for standardized protocols to enhance reliability and external validity (Rapanta & Walton, 2016). Establishing common outcome measures and reporting standards would significantly facilitate meta-analytical syntheses, enabling stronger causal inferences about the effects of argument visualization tools on argumentation. In addition, most of the studies were small-scale, with fewer than 50 participants, reflecting the need for larger-scale research to validate findings. The prevalence of small samples may stem from systemic factors in educational technology research, such as limited funding for large-scale interventions, challenges in accessing diverse participant pools in classroom settings, and logistical barriers to multi-site studies (Lortie-Forgues & Inglis, 2019).

Disciplinary Representation

Another significant limitation lies in the uneven distribution of research across disciplines. While EFL, philosophy, and business education are well represented, many other disciplines have minimal representation in our sample. This concentration in specific fields limits our understanding of how argument visualization tools might be effectively implemented across different academic contexts. Additionally, the variation in training approaches, with some studies providing comprehensive instruction and others offering minimal or no training, makes it difficult to determine the most effective implementation strategies for these tools. We therefore argue that instructors who wish to use argument visualization tools in teaching should provide students with both technical training and training in argumentation schemes.

Future Research Directions

Priority Areas

Looking toward future research directions, we identify several priority areas that warrant investigation.
First, as noted above, there is a pressing need for large-scale, randomized controlled trials to provide more robust evidence of the effectiveness of argument visualization tools. Such studies should employ standardized assessment protocols to facilitate cross-study comparisons and meta-analyses. Additionally, the long-term impact of argument visualization tools on critical thinking and writing skills, beyond immediate learning outcomes, represents an important yet under-researched area. Addressing these gaps could contribute to a more comprehensive understanding of the potential benefits and limitations of argument visualization tools in diverse educational contexts. Lastly, scoping reviews typically do not conduct formal quality appraisal (Arksey & O'Malley, 2005; Peters et al., 2020); however, this review applied structured inclusion and exclusion criteria to ensure that only empirical, peer-reviewed studies were analyzed. Tools such as AMSTAR and CASP, which are primarily designed for systematic reviews and randomized controlled trials, were not applied because the goal of this synthesis was to map the research landscape rather than to assess intervention effectiveness or risk of bias.

Additional Research Areas

Future research should also include a more systematic investigation of implementation strategies across disciplinary contexts. Our review revealed various approaches to tool integration in higher education, from pre-writing activities to collaborative learning environments, but more research is needed to understand how these strategies can be tailored to different learning objectives and student populations. Future studies should particularly examine the role of instructor training and support; as van Gelder (2015) emphasized, quality feedback can be an important mediating factor in learning argumentation. Finally, technical development represents another crucial area for future research. While Rationale emerged as the dominant tool in our review, the fragmentation across platforms with varying features suggests a need for more comprehensive investigation of tool design and functionality, and of which features best support argumentation learning. Future studies should also explore how emerging technologies, such as adaptive learning systems and artificial intelligence, might enhance the effectiveness of argument visualization tools in higher education by enabling teachers who implement the tools to provide learners with more personalized learning experiences.

Conclusion

Our scoping review has revealed clear trends in research on argument visualization tools in higher education. The growth of research in 2022 and 2023 demonstrates increasing recognition of these tools' value in developing post-secondary learners' critical thinking and writing skills. However, the field requires more rigorous empirical validation and systematic investigation of implementation strategies. The diversity of applications across disciplines suggests these tools' versatility, while the prevalence of training requirements highlights the importance of structured pedagogical support. As educational technology continues to expand in online and distance education, argument visualization tools may play an increasingly important role in developing students' analytical and argumentative skills, provided that future research addresses the gaps and challenges identified here.
Our scoping review thus serves as groundwork for future research directions, particularly in experimental validation and pedagogical design, while highlighting the need for more standardized and rigorous approaches to investigating the effectiveness of these educational tools.

Appendix 1 – The list of 45 studies included in the scoping review

| # | Authors & Year | Year | Discipline | Study Design | Sample Size | Sample Size Classification | Training | Tools Used | Purpose of Use | Outcome Measures | Type of Perception Studied | Collaboration |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Alsmari (2022) | 2022 | EFL | Quasi-Exp | 40 | Small | Argumentation scheme training | CAAM | Pre-writing activity | Essay | L/I | Collaboration |
| 2 | Archila et al. (2022) | 2022 | Food Microbiology | Quasi-Exp | 44 | Small | Argumentation scheme training | Student Choice | Comprehension | Argument Maps | No | Collaboration |
| 3 | Binks et al. (2022) | 2022 | N/A | Quasi-Exp | 44 | Small | Tool Familiarization | Write Reason web app | Pre-writing activity | Essay | No | Individual |
| 4 | Borden (2007) | 2007 | Journalism | Case Study | 1 | Small | Argumentation scheme training | Decision Explorer 3.0 | Clarify the journalists' arguments | Argument Maps | No | Individual |
| 5 | Butchart et al. (2009) | 2009 | Philosophy | Quasi-Exp | 43 | Small | Argumentation scheme training | Rationale | Argumentation | Achievement Test | No | Individual |
| 6 | Carrington et al. (2011) | 2011 | Business Education | Quasi-Exp | 291 | Large | Tool Familiarization | CAAM | Perception of Argumentation Tools | Survey and Achievement Test | L/F | Collaboration |
| 7 | Chen et al. (2024a) | 2024 | Education | Quasi-Exp | 17 | Small | Tool Familiarization | ZJU YueQue | Debate | Achievement Test | L/I | Collaboration |
| 8 | Chen et al. (2022) | 2022 | Education | Qualitative | 42 | Small | Tool Familiarization | ZJU YueQue | Debate | Argument Maps | L/I | Collaboration |
| 9 | Crudele & Raffaghelli (2022) | 2022 | Education (ECE) | Quasi-Exp | 103 | Large | Argumentation scheme training | N/A | Teaching argumentative writing | Achievement Test | No | Individual |
| 10 | Cullen et al. (2018) | 2018 | Philosophy | Quasi-Exp | 105 | Large | Tool Familiarization | MindMup | Comprehension | Achievement Test and Essay | L | Collaboration |
| 11 | Davies (2009) | 2009 | Australian Economic History | N/A | 42 | Small | Tool Familiarization | CAAM | Reading Comprehension | Argument Maps | No | Individual |
| 12 | Dwyer et al. (2010) | 2010 | Psychology | Quasi-Exp | 400 | Large | N/A | N/A | Comprehension | Achievement Test | No | Individual |
| 13 | Dwyer et al. (2013) | 2013 | Arts | Quasi-Exp | 131 | Large | N/A | N/A | Reading Comprehension | Achievement Test | No | Individual |
| 14 | Eftekhari & Sotoudehnama (2018) | 2018 | EFL | Quasi-Exp | 120 | Large | Tool Familiarization | Rationale | Comprehension, Recall, Retention | Achievement Test | No | Individual |
| 15 | Eftekhari et al. (2016) | 2016 | EFL | Quasi-Exp | 180 | Large | Argumentation scheme training | Rationale | Reading comprehension | Achievement Test | No | Individual |
| 16 | Garcia et al. (2021) | 2021 | Biology Education | Quasi-Exp | 27+51 | Medium | N/A | Debategraph, WISE | Argumentation in Teacher Education | Survey | F/I | Individual |
| 17 | Gargouri & Naatus (2017) | 2017 | Business Education | Quasi-Exp | 14 | Small | N/A | Mindmeister.com | Comprehension | Argument Maps | No | Individual |
| 18 | Harrell (2008) | 2008 | English Translation | Quasi-Exp | 180 | Large | Argumentation scheme training | N/A | Promote critical thinking | Achievement Test | L | Collaboration |
| 19 | Harrell (2011) | 2011 | Philosophy | Quasi-Exp | 130 | Large | Argumentation scheme training | N/A | Argumentation | Achievement Test | No | Individual |
| 20 | Iandoli et al. (2014) | 2014 | Engineering Management | Quasi-Exp | 123 | Large | Tool Familiarization | Debate Dashboard | Collective deliberation using CCSAV platforms | Survey | No | Individual |
| 21 | Iandoli et al. (2016) | 2016 | Economics | Quasi-Exp | 95 | Medium | Argumentation scheme training | Debate Dashboard, Cohere | Construction of shared knowledge | Survey and Achievement Test | L/I | Both |
| 22 | Jeong & Kim (2022) | 2022 | Criminology and Philosophy | Qualitative | 10 | Small | Tool Familiarization | jMap | Reading Comprehension | Argument Maps | I | Individual |
| 23 | Jeong & Seok-Shin (2023) | 2023 | Various | Mixed Methods | 43 | Small | Tool Familiarization | jMap | Identify the processes for creating better maps | Argument Maps | No | Individual |
| 24 | Kaeppel (2021) | 2021 | Philosophy | Qualitative | 16 | Small | N/A | N/A | Subjective Reasoning | Interview and Observation | L/I | Individual |
| 25 | Kunsch et al. (2014) | 2014 | Business Education | Quasi-Exp | 36 | Small | Tool Familiarization | bCisive and Rationale | Critical Thinking in Business | Achievement Test | No | Individual |
| 26 | Liu et al. (2023) | 2023 | EFL | Quasi-Exp | 201 | Large | Argumentation scheme training | Dialectical Map | Pre-writing activity | Essay | F/L/I/G | Individual |
| 27 | Loll & Pinkwart (2013) | 2013 | N/A | Quasi-Exp | 36 | Small | Tool Familiarization | LASAD | Comprehension | Achievement Test | F/L/I/G | Individual |
| 28 | Maftoon et al. (2014) | 2014 | English Translation | Quasi-Exp | 90 | Medium | Tool Familiarization | Rationale | Argumentative Writing | Essay | No | Collaboration |
| 29 | Memis et al. (2022) | 2022 | Education | Quasi-Exp | 84 | Medium | N/A | Rationale | Comprehension | Argument Map and Achievement Test | No | Collaboration |
| 30 | Mubarok et al. (2023) | 2023 | EFL | Quasi-Exp | 45 | Small | Tool Familiarization | argϋman app (en.arguman.org) | Improve oral speaking skills | Presentation | No | Collaboration |
| 31 | Ngajie et al. (2020) | 2020 | Education Technology | Quasi-Exp | 93 | Medium | Tool Familiarization | Rationale | Comprehension | Achievement Test | L/F/I | Collaboration |
| 32 | Ouyang et al. (2024) | 2024 | Academic Writing | Quasi-Exp | 20 | Small | N/A | CAM | Knowledge construction | Discussion Content and Process Mining | No | Collaboration |
| 33 | Rapanta & Walton (2016) | 2016 | EFL | Quasi-Exp | 112 | Large | N/A | Rationale | Assessment | Argument Maps | No | Individual |
| 34 | Robillos (2021) | 2021 | Education (TESOL) | Quasi-Exp | 28 | Small | N/A | Rationale | Pre-writing activity | Essay | L/F/I | Collaboration |
| 35 | Robillos & Art-in (2023) | 2023 | EFL | Mixed Methods | 27 | Small | N/A | N/A | Pre-writing activity | Essay | L/F | Both |
| 36 | Robillos & Thongpai (2022) | 2022 | EFL | Mixed Methods | 21 | Small | Tool Familiarization | CAAM | Teaching argumentative writing | Essay | L/F/I | Individual |
| 37 | Scheuer et al. (2014) | 2014 | Humanities and Social Sciences – Philosophy | Quasi-Exp | 46 | Small | N/A | LASAD | Argumentation Learning | Survey | L/F | Individual |
| 38 | Sönmez et al. (2020) | 2020 | Education | Qualitative | 30 | Small | Tool Familiarization | Rationale | Argument-based inquiry | Interview | G | Both |
| 39 | Twardy (2004) | 2004 | Philosophy and History | N/A | 135 | Large | N/A | Reason!Able software | Reasoning ability | Achievement Test | No | Individual |
| 40 | Uçar & Demiraslan Çevik (2020) | 2020 | Education | Quasi-Exp | 43 | Small | Argumentation scheme training | Argunet | Teaching argumentative writing | Achievement Test and Interview | G | Collaboration |
| 41 | Uren et al. (2006) | 2006 | ScholOnto Discourse Ontology | Case Study | 6 | Small | Tool Familiarization | ClaiMapper | Argumentation | Survey and Achievement Test | G | Individual |
| 42 | Wu et al. (2013) | 2013 | Medicine | Quasi-Exp | 29 | Small | N/A | Dual Mapping Cognitive Tool | Diagnosis | Achievement Test | F/L | Individual |
| 43 | Wu et al. (2014) | 2014 | Medicine | Quasi-Exp | 29 | Small | N/A | N/A | Diagnosis | Achievement Test | L | Individual |
| 44 | Yilmaz-Na & Sönmez (2023a) | 2023 | Education | Quasi-Exp | 38 | Small | N/A | ARTOO – CAAM Tool | Argumentation | Argument Maps | No | Individual |
| 45 | Yilmaz-Na & Sönmez (2023b) | 2023 | Education (ECE) | Quasi-Exp | 61 | Medium | N/A | ARTOO (not available anymore) | Argumentation | Survey | No | Individual |

Appendix 2 – Supplementary Figures for the Results section

Figure 9. Number of Publications over Time.
Figure 10. Outcome Measures Used for Argument Visualization.
Additional supplementary figures report the number of studies by subject area and by type of argument visualization tool. Subject-area abbreviations: AEH = Australian Economic History; C&P = Criminology & Philosophy; HSSP = Humanities & Social Sciences – Philosophy; SDO = ScholOnto Discourse Ontology; P&H = Philosophy and History.

Keywords: argument visualization, critical thinking, reasoning, writing skills, argument mapping, cognitive scaffolds

Received: 20 Sep 2025; Accepted: 22 Sep 2025.

Copyright: © 2025 Chang, Lin and Hwang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Daniel H. Chang, dth7@sfu.ca

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.