
REVIEW article

Front. Educ., 03 November 2025

Sec. Digital Learning Innovations

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1672105

Charting the field: a review of argument visualization research for writing, learning, and reasoning

  • 1Faculty of Education, Simon Fraser University, Burnaby, BC, Canada
  • 2Faculty of Education, Mount Saint Vincent University, Halifax, NS, Canada
  • 3Graduate Institute of Educational Information and Measurement, National Taichung University of Education, Taichung City, Taiwan
  • 4Graduate Institute of Digital Learning and Education, National Taiwan University of Science and Technology, Taipei City, Taiwan
  • 5College of Management, Yuan Ze University, Taoyuan City, Taiwan


Introduction

Context

The capacity to develop and deliver evidence-based and reasoned arguments remains one of the central aims of post-secondary education (Andrews, 2015; Wingate, 2012). To meet this aim, students are often expected to demonstrate, outline, or pre-write their reasoning and claims before producing an essay, frequently with reference to disciplinary conventions (Bean and Melzer, 2021; Manalo and Ohmes, 2022; Mercier and Sperber, 2011). Instructors typically design pre-writing activities to scaffold this process, including outlining, brainstorming, freewriting, diagramming, or using argument visualization. Among these pedagogical approaches, web-based argument visualization tools, in particular, have become more common in higher education to support learners' development of critical thinking, reasoning, and writing skills (Rousseau and van Gelder, 2024). In online and hybrid learning contexts, these web-based tools are particularly valuable because they provide structure and feedback where face-to-face interaction may be limited.

These argument visualization tools have demonstrated important pedagogical significance because they can externalize complex reasoning processes into visual formats. In higher education contexts, argument visualization serves two main purposes. First, in philosophy courses, students learn to analyze the arguments of others by transforming texts into visual representations that clarify the functions of different components (Davies, 2011; Harrell, 2011). Second, students use argument mapping to construct their own arguments. This may involve supporting a perspective on a contentious issue by creating an argument map either as a stand-alone project or as a pre-writing step for an argumentative essay. This dual application suggests that argument visualization tools could play a significant role in developing students' critical thinking capabilities across disciplines.

Research gaps

Despite this pedagogical significance and increasing integration in online education, existing studies highlight the potential of argument visualization tools to enhance critical thinking and writing, yet robust empirical evidence remains scarce. As van Gelder (2015) suggested, the effectiveness of argument mapping instruction often depends on the availability of skilled instructor feedback, which is not always possible given instructor workloads (Mueller and Schroeder, 2018; Mello et al., 2024). Furthermore, the diversity of tools, instructional aims, and implementation contexts makes it difficult to determine when and how argument visualization is most effective for supporting student learning outcomes. Given the growth of online courses, clear evidence of how these tools function as cognitive scaffolds in digitally mediated environments is especially urgent. Most critically for online education, this lack of consolidated evidence leaves instructors and designers with limited guidance on how to integrate these web-based tools in ways that align with specific disciplinary needs and the demands of online or blended learning contexts. Without systematic evidence about tool effectiveness, implementation strategies, and disciplinary applications, educators cannot make informed decisions about adopting these potentially transformative pedagogical tools.

Objectives

Although many research studies imply that argument mapping can enhance writers' abilities in argumentation and critical thinking (Davies, 2011; Harrell and Wetzel, 2015; Liu et al., 2023; Manalo and Fukuda, 2024), addressing these critical gaps requires consolidating the available evidence as a first step toward understanding the scope and coverage of these tools. Therefore, this paper adopts a scoping review methodology guided by the PRISMA-ScR framework (Tricco et al., 2018). Through this systematic scoping approach, we aim to map the breadth of research on argument visualization and learning by identifying key themes and fields and highlighting gaps in empirical studies. By synthesizing existing evidence across disciplines and contexts, we hope that the scoping review will provide a foundation for future research directions, particularly in the areas of experimental validation and pedagogical design. This consolidation is especially timely given the rapid expansion of online and hybrid learning environments, where such tools could provide crucial cognitive scaffolding.

Research questions

Given the growing importance of critical thinking and writing skills in post-secondary education, and recognizing the gaps identified above, there is a need to scope existing evidence on how argument visualization tools support learning outcomes. Our review begins by providing an overview of the history and pedagogical uses of visual argumentation, particularly argument mapping; the potential learning outcomes and benefits of digital tools; and how research to date reveals a scarcity of validation for the effectiveness of argument visualization. We then describe our use of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Scoping Reviews (PRISMA-ScR; Tricco et al., 2018) framework to systematically examine the use of argument visualization tools in higher education. To address the identified gaps systematically, our analysis considers three research questions, each targeting one of the gaps described above.

1. The empirical gap: what evidence exists regarding the impact of argument visualization tools on students' critical thinking and writing skills in higher education settings?

2. The diversity gap: what types of argument visualization software are currently being used in higher education, and how are they integrated into teaching and learning practices in different fields?

3. The instructional strategy gap: under different instructional objectives (e.g., pre-writing planning, collaborative debate), how do tool usage strategies (such as individual vs. collaborative use) moderate student learning outcomes?

Literature review

History and background

Over the past decades, argument visualization has emerged as a pedagogical method for enhancing writers' reasoning and critical thinking skills (Kirschner et al., 2012; Reed et al., 2007). Although its roots can be traced back to the nineteenth century, it has only recently become a prominent educational tool (Buckingham Shum, 2003). In particular, recent decades have seen the development of specialized software that supports argument diagramming in diverse learning contexts (Butcher, 2006; Davies, 2011; Manalo and Fukuda, 2024; Reed et al., 2007; van Gelder, 2015).

This evolution from historical concept to digital tool reflects a fundamental shift in how educators approach argumentation instruction. Argument visualization transforms abstract argumentative content into formats such as diagrams or maps, designed to show logical connections between propositions, premises, evidence, and conclusions. As Hitchcock (2017) noted, viewing arguments as structured networks of interrelated premises and conclusions makes them well suited for diagrammatic display. Blair (2012) similarly argued that evaluating arguments requires more than judging persuasive impact, as it demands attention to logical norms, coherence, and evidence. By making these abstract logical relationships visible, argument visualization tools make such dimensions of argument more accessible. This historical trajectory demonstrates the value of visual representations as cognitive tools in educational settings and sets the stage for examining specific argument mapping strategies and their documented benefits.

Argument mapping strategy and its benefits

Building on this historical foundation, argument mapping stands out as the most prominent of the various argument visualization techniques. It provides structured graphic layouts that represent reasoning processes more explicitly than conventional textual approaches (Royer and Royer, 2004). These techniques help writers, especially novice writers, to recognize the underlying architecture of arguments and to understand complex reasoning structures more clearly (Liu et al., 2023; Klein and Rose, 2010). The visual structure follows a consistent pattern: typically, argument mapping uses box-and-arrow designs in which propositions appear in nodes linked by directional arrows that show inferential relationships (Dwyer et al., 2013; van Gelder, 2015). Many argument visualization tools, regardless of discipline or platform, adopt a logic structure grounded in informal reasoning: claims are supported by reasons and evidence, challenged by counterclaims or rebuttals, and synthesized into a conclusion. Figure 1 shows a sample argument map structure based on Freeman's (1991) framework and reflects common design features found in argument visualization software.

Figure 1
Argument visualization structure diagram based on Freeman's structure. At the top, a box labeled “Claim/Thesis Statement” points to two boxes below: “Reasons and evidence that support the claim” (green) and “Reasons and evidence that counter the claim (rebuttal)” (red). Arrows from both boxes converge to “Final take-away and conclusions.” Blue arrows between green and red boxes ask “Are they related? In what way?”.

Figure 1. Sample argument visualization structure (argument map).
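To make the box-and-arrow structure shown in Figure 1 concrete, the short sketch below models an argument map as a small graph of typed nodes and directed links (claim, supporting reason, rebuttal, conclusion). This is a minimal illustration with hypothetical names and sample propositions; it is not the data model of any tool reviewed here.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class NodeType(Enum):
    CLAIM = "claim"            # thesis statement at the top of the map
    SUPPORT = "support"        # reason or evidence that backs the claim
    REBUTTAL = "rebuttal"      # reason or evidence that counters the claim
    CONCLUSION = "conclusion"  # final take-away after weighing both sides


@dataclass
class Node:
    node_id: str
    node_type: NodeType
    text: str


@dataclass
class Link:
    source: str    # id of the node the arrow starts from
    target: str    # id of the node the arrow points to
    relation: str  # e.g., "supports", "rebuts", "leads-to"


@dataclass
class ArgumentMap:
    nodes: List[Node] = field(default_factory=list)
    links: List[Link] = field(default_factory=list)

    def add(self, node: Node) -> None:
        self.nodes.append(node)

    def connect(self, source: str, target: str, relation: str) -> None:
        self.links.append(Link(source, target, relation))


# Build the structure from Figure 1: a claim, one supporting reason,
# one rebuttal, and a conclusion that weighs both sides.
amap = ArgumentMap()
amap.add(Node("c1", NodeType.CLAIM, "Universities should require a critical thinking course."))
amap.add(Node("s1", NodeType.SUPPORT, "Employers consistently rank reasoning skills highly."))
amap.add(Node("r1", NodeType.REBUTTAL, "A stand-alone course may not transfer to disciplinary writing."))
amap.add(Node("k1", NodeType.CONCLUSION, "Require the course, but embed practice in disciplinary assignments."))
amap.connect("s1", "c1", "supports")
amap.connect("r1", "c1", "rebuts")
amap.connect("s1", "k1", "leads-to")
amap.connect("r1", "k1", "leads-to")
```

Representing the map as typed nodes and relations mirrors how box-and-arrow tools let learners rearrange propositions without rewriting the underlying argument.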

Beyond their structural features, empirical research has documented multiple cognitive and pedagogical benefits. Research has shown that visualization techniques can support students' argumentation by activating and refining their prior knowledge (e.g., cognitive schemas related to argumentation). As a result, these techniques can advance writers' critical thinking and writing skills (Nussbaum and Schraw, 2007; Liu et al., 2023; Nesbit et al., 2018). Furthermore, argument mapping is not an isolated tool; it plays a foundational role in preparing students for activities such as collaborative dialogue, structured debates, and argumentative essay writing (Chen et al., 2022; Harrell, 2022). It also serves as a reliable way to assess students' reasoning and analytical progress (Rapanta and Walton, 2016). Building on these benefits, digital tools offload tasks like layout adjustments (Agostinho et al., 2010; Hoffmann and Paglieri, 2011), freeing resources for argumentation itself. Research has further highlighted the cognitive benefits of mapping: Dwyer et al. (2010, 2013) showed that argument maps can aid short-term memory by organizing propositions and reducing cognitive load. Although sustained gains in long-term retention may require repeated practice, students who constructed argument maps or hierarchical outlines performed better on recall tasks than those using text summarization alone. Taken together, these findings suggest that argument maps function not only as visual aids but also as cognitive tools that help organize, integrate, and retrieve complex information. This cognitive scaffolding becomes even more significant when implemented through digital online platforms.

Digital visualization tools and the existing evidence

The transition from paper-based to digital argument mapping has fundamentally transformed the pedagogical possibilities of these tools. Advancements in online digital technology have led to interactive argument visualization tools that create new pedagogical opportunities for developing students' reasoning skills in online environments. These computer-based online tools (see examples of argument visualization interfaces in Davies, 2009) allow learners to modify, rearrange, and iteratively refine arguments with less cognitive effort, particularly when working with large or complex maps (Chen et al., 2022). This technological shift has yielded measurable learning benefits: research indicates that using digital tools in online settings can help students construct more sophisticated argument structures, engage more deeply with content, and maintain motivation over time (Eftekhari et al., 2016).

The cognitive advantages of digital implementation stem from their ability to reduce extraneous processing demands. One important benefit is that these digital tools can offload lower-level tasks, such as adjusting layouts or redrawing connections, freeing mental resources for higher-order tasks like analyzing counterarguments or constructing rebuttals (Agostinho et al., 2010; Hoffmann and Paglieri, 2011; Nesbit et al., 2018). Therefore, these tools serve not just as visual aids but as cognitive scaffolds that help learners critically engage with complex material (Belland, 2010; Benetos, 2023).

However, a significant disconnect exists between the theoretical promise and empirical validation of these tools. Despite growing interest in argument visualization tools, robust empirical evidence supporting the effectiveness of argument visualization remains limited (Hornikx and Hahn, 2012; Scheuer et al., 2010). Few large-scale controlled experiments have been conducted, although some quasi-experimental studies document promising outcomes in diverse contexts (Noroozi et al., 2012). The available evidence, while encouraging, comes primarily from small-scale studies. For example, Liu et al. (2023) studied 190 EFL undergraduates in China who used the Dialectical Map tool as a pre-writing activity and found positive effects on students' argumentation. Similarly, Darmawansah et al. (2024) found that argument mapping supported students' dialectical thinking, such as reaching conclusions after weighing multiple perspectives. Other studies use pre- and post-testing to measure effects on critical thinking skills. For instance, van Gelder et al. (2004) employed the California Critical Thinking Skills Test (CCTST) and reported a 20% improvement among students taught with argument mapping.

These empirical findings reveal both promise and critical gaps that shape our research approach. Taken together, these studies indicate that argument visualization tools show potential for improving critical thinking and writing skills in higher education. Yet controlled validation remains limited. This gap informs our first research question, which synthesizes evidence on learning impacts. Our second question focuses on the range of tools and their use across disciplines. Our third question considers instructional aims and strategies to clarify in which contexts argument visualization tools work best and through which mechanisms.

Method

A scoping review using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Scoping Reviews (Tricco et al., 2018) framework serves to systematically examine the use of argument visualization tools in higher education. This was selected as the most appropriate methodology to address our research questions due to the relatively recent nature of our topic, with the understanding that the literature might be limited and methodologically heterogeneous, making a full systematic review or meta-analysis impractical at this stage. Scoping reviews enable researchers to map out the current state of knowledge on a particular subject, highlight gaps in the literature, and clarify existing concepts and their characteristics (Tricco et al., 2018; Peters et al., 2020).

Analytical framework

Referring to the Technology-enhanced Learning (TEL) model proposed by Lin and Hwang (2019) for examining studies related to the use of technologies in school settings and professional training, this review study took several dimensions into account, including author, year, discipline, study design, sample size, training provided, tools used, purpose of use, and outcome measures.

Lin and Hwang's (2019) TEL model was originally applied in flipped learning contexts; its broader analytical strength, however, lies in offering a structured, multidimensional framework for examining technological affordances, pedagogical strategies, and learner-centered outcomes. In particular, the TEL model provides a strong yet adaptable basis for our paper, allowing us to systematically analyze the diverse range of argument visualization tools, their instructional integration, and their measured effects on students' critical thinking and writing skills.

Our review's analytical approach aligns with the TEL model's core principle of situating educational technologies, such as argument visualization tools, within pedagogical contexts in order to evaluate their educational impact. At the same time, the TEL model emphasizes enabling affordances, which may risk foregrounding positive impacts and underrepresenting null effects or critical perspectives. To address this, our synthesis explicitly considered methodological variations in argument visualization research in terms of study design, sample size, and assessment methods, and noted limitations such as methodological constraints, small effect sizes, and challenges in cross-study comparisons. By integrating these variations, we sought to avoid presupposing the TEL model's implicit bias toward positive impacts.

Search strategy

Our team developed our protocol using the guidelines provided by the Preferred Reporting Items for Systematic Reviews and Meta-analysis Protocols (Shamseer et al., 2015). The protocol was collaboratively revised by the research team in a series of meetings to ensure alignment with our research questions. The inclusion and exclusion criteria were refined based on these discussions and the review of the initial sample. Screening, data extraction, critical appraisal, coding, and validation were completed independently by the first two authors to ensure rigor and reduce bias.

Databases

The database search was conducted in February 2024 using EBSCOhost, which included access to several education-focused databases such as ERIC, Education Research Complete, and Academic Search Complete. This ensured broad coverage of peer-reviewed literature in educational technology and instructional design. Web of Science and Google Scholar were also used to capture multidisciplinary and gray literature. Scopus was considered but excluded due to significant overlap with Web of Science and feasibility constraints, including limited research personnel and time for managing deduplication across large, redundant datasets. While Scopus offers robust coverage in social sciences and international journals, potentially capturing studies from regions like Europe or Asia, this risk was mitigated by including Google Scholar, which indexes a broad range of gray literature and multidisciplinary sources, ensuring comprehensive coverage of relevant studies (Falagas et al., 2008). No date restrictions were applied to the search because research on argument visualization in post-secondary education could span several decades, given the long-standing use of argument mapping in various forms.

Search parameters

The search string used was (“argument map*” OR “argument visual*”) AND (“post-secondary education” OR “higher education” OR “university” OR “college”), designed to capture a broad range of studies related to argument mapping and its pedagogical applications. Synonyms such as “argument diagram*,” “argument chart,” and “argument representation” were initially tested but excluded from the final search string due to extremely low or irrelevant retrieval rates. This low retrieval rate likely reflects the field's preference for standardized terms like ‘argument mapping' and ‘argument visualization,' which are more consistently used in educational technology and critical thinking literature, possibly due to their association with specific software (e.g., Rationale) and established pedagogical frameworks (Davies, 2011). The primary focus was kept on the dominant terms “argument map*” and “argument visual*,” which are consistently used in the empirical literature. The search strategy was applied consistently across databases but adapted as needed for each system's syntax. EBSCO yielded 129 results, Web of Science returned 131 articles, and Google Scholar produced approximately 230 results. Manual searches in Google Scholar were conducted to ensure the inclusion of gray literature and articles not indexed in traditional databases.
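As a rough illustration of the workflow described above, the sketch below holds the final Boolean search string and deduplicates records merged from multiple database exports by DOI or normalized title. It is a sketch under the assumption of simple dictionary-like records with "doi" and "title" fields; the sample records are hypothetical, and real EBSCOhost, Web of Science, or Google Scholar exports would differ in format.

```python
from typing import Dict, List

# The final Boolean search string reported above, adapted as needed to each database's syntax.
SEARCH_STRING = (
    '("argument map*" OR "argument visual*") AND '
    '("post-secondary education" OR "higher education" OR "university" OR "college")'
)


def normalize_title(title: str) -> str:
    """Lowercase a title and drop non-alphanumeric characters so near-identical titles match."""
    return "".join(ch for ch in title.lower() if ch.isalnum())


def deduplicate(records: List[Dict[str, str]]) -> List[Dict[str, str]]:
    """Keep the first occurrence of each record, matching on DOI or on normalized title."""
    seen_dois, seen_titles = set(), set()
    unique: List[Dict[str, str]] = []
    for rec in records:
        doi = rec.get("doi", "").strip().lower()
        title = normalize_title(rec.get("title", ""))
        if (doi and doi in seen_dois) or (title and title in seen_titles):
            continue  # already captured via another database export
        if doi:
            seen_dois.add(doi)
        if title:
            seen_titles.add(title)
        unique.append(rec)
    return unique


if __name__ == "__main__":
    # Hypothetical merged export, for illustration only.
    merged = [
        {"doi": "10.1000/example1", "title": "Argument mapping and EFL writing"},
        {"doi": "10.1000/example1", "title": "Argument mapping and EFL writing"},
        {"doi": "", "title": "Argument Mapping and EFL Writing!"},
    ]
    print(SEARCH_STRING)
    print(f"{len(merged)} records before deduplication, {len(deduplicate(merged))} after")
```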

Table 1 illustrates the coding scheme used to categorize each study and the parameters examined by the coders.

Table 1. Coding scheme.
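As a minimal sketch of how each reviewed study could be coded along the dimensions listed in the analytical framework (author, year, discipline, study design, sample size, training, tool, purpose, and outcome measure), the structure and tally below are illustrative only; all field values shown are hypothetical and do not reproduce the actual coding of any article.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class CodedStudy:
    """One reviewed article, coded along the dimensions from the analytical framework."""
    author: str
    year: int
    discipline: str       # e.g., "EFL", "Philosophy", "Education"
    study_design: str     # e.g., "Quasi-experimental", "Qualitative", "Case study"
    sample_size: int
    training: str         # e.g., "Tool familiarization", "Argumentation scheme", "N/A"
    tool: str             # e.g., "Rationale", "Dialectical Map"
    purpose: str          # e.g., "Planning and Reading Comprehension"
    outcome_measure: str  # e.g., "Achievement test", "Essay", "Argument map"


# Hypothetical coded entries, for illustration only.
coded = [
    CodedStudy("Author A", 2022, "EFL", "Quasi-experimental", 48,
               "Tool familiarization", "Rationale",
               "Planning and Reading Comprehension", "Essay"),
    CodedStudy("Author B", 2023, "Philosophy", "Qualitative", 35,
               "Argumentation scheme", "MindMeister.com",
               "Reasoning and Teaching Argumentation", "Argument map"),
]

# Any dimension can be tallied the same way the Results sections report frequencies.
print(Counter(s.study_design for s in coded))
print(Counter(s.tool for s in coded))
```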

Screening process and exclusion criteria

After removing duplicates and applying our exclusion criteria, we identified 45 studies for inclusion in the review. Articles were excluded if they were not directly related to argument mapping strategies or student learning in higher education contexts or if the argument mapping context was not specified. Additionally, non-English or non-indexed articles were excluded to maintain data extraction and analysis consistency. This scoping review aims to provide a comprehensive overview of the existing literature, identify knowledge gaps, and propose directions for future research on argument visualization tools in higher education. Figure 2 illustrates the screening process by following the PRISMA framework.

Figure 2
Flowchart depicting the systematic review process. Identification: 490 records found, reduced to 125 after removing duplicates. Screening: 125 records scanned, 47 excluded. Eligibility: 78 full-text articles assessed, 33 excluded. Inclusion: 45 articles synthesized. Initial coding calibration involved five articles. Validation followed with 20 articles each for Rater 1 and Rater 2.

Figure 2. Study screening and coding process—PRISMA flowchart for the study inclusion.

Results

We reviewed 45 studies (a list of articles is provided in Supplementary material). We found a gradual increase in publications related to argument visualization tools, with a sharp rise in 2022 and 2023, as shown in Supplementary material. This period had the highest number of studies, with 8 publications in 2022 and 6 in 2023.

Outcome measures

As seen in Supplementary material, among the 45 studies, the most frequently used outcome measures for argument visualization software were achievement tests (n = 15), reproductions of argument maps (n = 9), and essays (n = 7). Studies utilizing achievement tests used instruments such as the CCTST or student grades to assess the outcome of the argument mapping intervention. These outcome measures were used to assess the impact of argument visualization tools on students' critical thinking and writing skills. Some studies used diverse outcome measures to document the effectiveness of the argument visualization tools, including combinations of surveys, achievement tests, argument maps, presentations, and interviews (n = 14). Among the achievement tests, measures varied from standardized instruments like the CCTST (e.g., van Gelder et al., 2004) to course-specific grades or custom assessments, reflecting quantitative or qualitative differences in evaluation focus.

Subject-area distribution

The most represented subject area is EFL (n = 8), followed by Education (n = 6), philosophy (n = 3), and business education (n = 3). Among the Education studies, some trained pre-service and in-service teachers to use argument visualization software, including teacher training in Teaching English to Speakers of Other Languages (TESOL) or Early Childhood Education (ECE). A range of other subject areas, including food microbiology, medicine, early childhood education, TESOL, psychology, nursing, and journalism, among others, have one or two studies each. The Supplementary material highlights the diversity of fields in which argument visualization tools have been studied, with a clear concentration in disciplines related to language learning, teacher education, or logic training, such as EFL, Education, and philosophy.

Study design types

Figure 3 shows the types of study designs used to research the effects of argument visualization software as a teaching practice. The majority of the studies (n = 34) employed a quasi-experimental design, making this the most common study type for researching argument visualization. Other study designs included qualitative studies (n = 4), mixed methods (n = 3), and case studies (n = 2). Additionally, two studies did not specify a design, making it challenging to infer one from the provided descriptions. We conclude that quasi-experimental approaches dominate research on argument visualization tools.

Figure 3
Bar chart titled “Type of Study Design” showing the number of studies for each design type. Quasi-Experimental has the most studies with thirty-four, followed by Qualitative with four, Mixed Methods with three, Case Study with two, and N/A with two.

Figure 3. Type of study design for argument visualization research.

Argument visualization tools

In the Supplementary material, we illustrate the diverse range of tools used in research on argument visualization. Across the 45 studies reviewed, the most frequently used tool is Rationale (10 studies), followed by the CAM/CAAM tool (3 studies) and ZJU YueQue (2 studies). Several other tools, such as Argunet, Dialectical Map, MindMeister.com, and the Write Reason web app, were each used in one study. Additionally, seven studies did not specify the tool used, marked as "N/A".

Furthermore, when researchers used the tools, the majority of studies specified training sessions for their participants (n = 29). Of these, most trained students by presenting the features of the tool, with the aim of familiarizing students with it (n = 18), whereas the rest (n = 11) trained students by teaching argumentation schemes. The remaining 16 studies either did not provide training or did not specify whether training was offered (n = 16), as shown in Figure 4.

Figure 4
Bar chart titled “Training in Argument Visualization Tools” shows the number of studies: Tool Familiarization has 18 studies, N/A has 16 studies, and Argumentation Scheme Training has 11 studies. The y-axis is labeled “Number of Studies.”

Figure 4. Training in argument visualization tools.

Studies also showed a trend toward individual work with the tools. As seen in Figure 5 below, most studies (n = 28) had students work with their argument visualization tools individually, whereas 14 studies had students work in a group setting. Three studies compared both conditions (n = 3), and three studies did not specify whether students worked individually or in a group (n = 3).

Figure 5
Bar chart titled “Collaborative vs. Individual” showing the number of studies. Individual studies have 28, collaboration has 14, and both have 3 studies. The vertical axis shows the number of studies, and the horizontal axis shows the categories.

Figure 5. Collaboration type.

Study size

We found that investigations of argument mapping tools often rely on small-scale studies in educational settings. As shown in Figure 6, the majority of the 45 studies were classified as small studies with fewer than 50 participants (n = 26). These small-scale studies often used argument visualization as a pedagogical intervention to pilot the tool's effectiveness in teaching argumentation. Thirteen studies were classified as large studies involving more than 100 participants, whereas only 6 studies were classified as medium studies, with sample sizes ranging from 51 to 99 participants.

Figure 6
Bar chart titled “Size Classification” showing the number of studies. Three categories are displayed: Small with 26 studies, Large with 13 studies, and Medium with 6 studies.

Figure 6. Sample size classification.
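For transparency about how the size categories above could be applied, the small helper below reflects the thresholds described in this section (small: fewer than 50; medium: 51 to 99; large: more than 100). The handling of the unstated boundary cases (exactly 50 or 100 participants) is an assumption made for the sketch, not a rule stated in the review.

```python
def classify_sample_size(n: int) -> str:
    """Classify a study's sample size using the thresholds described above.

    The review text leaves n == 50 and n == 100 unspecified; this sketch
    assigns them to the smaller adjacent category as a working assumption.
    """
    if n <= 50:
        return "Small"
    if n <= 100:
        return "Medium"
    return "Large"


assert classify_sample_size(48) == "Small"
assert classify_sample_size(75) == "Medium"
assert classify_sample_size(190) == "Large"
```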

Perceptions of argument visualization tools

Our analysis of the 45 studies showed diverse levels of attention to perceptions of argument visualization tools among students and instructors. Specifically, we found that the majority of studies (n = 23) did not examine perceptions of the tool in any form. In the remaining 22 studies, however, student perceptions were categorized following Moore and Benbasat's (1991) framework, which assesses perceptions of technological innovations through the dimensions of complexity, compatibility, and behavioral aspects, or combinations of these.

As shown in Figure 7, five studies addressed both compatibility and behavioral perceptions, evaluating the alignment of the tool with learning goals and students' behavioral engagement in using the tool (n = 5). Four studies (n = 4) explored compatibility and complexity perceptions, focusing on how well the tool fits learning goals and on its ease of use. Three studies (n = 3) examined compatibility only, assessing whether the tools enhanced student learning experiences or asking students to self-report improved argumentation skills; another three investigated general attitudes, capturing open-ended student feedback on the tools (n = 3). Furthermore, only two studies undertook a more thorough examination of student perceptions of tool use, investigating complexity, compatibility, behavioral aspects, and general attitudes together (n = 2); these offer a holistic perspective on student perceptions of the tools. Only one study focused solely on complexity and behavioral aspects (n = 1), while another exclusively explored behavioral perceptions (n = 1).

Figure 7
Bar chart titled “Perceptions of Argument Visualization Tools” with categories on the x-axis and number of studies on the y-axis. The “No” category has the highest count at twenty-three. Other categories include “Compatibility and Behavioral” with five, “Compatibility and Complexity” with four, and others with lower counts.

Figure 7. Perceptions of argument visualization tools.

Purposes of argument visualization tools

Our analysis of the 45 studies identified four distinct purposes for using argument visualization tools in higher education settings: Planning and Reading Comprehension, Reasoning and Teaching Argumentation, Knowledge Construction, and Others. The most frequent purpose was Planning and Reading Comprehension, which accounted for 21 studies (n = 21). Studies in this category use argument mapping tools to facilitate pre-writing activities before essay writing, while some use these tools to enhance students' understanding of reading materials, retention of textual arguments, or recall of arguments.

The second most frequent purpose was Reasoning and Teaching Argumentation, which accounted for 18 of the 45 studies (n = 18). These studies concentrated on using argument visualization tools as pedagogical tools for fostering critical thinking, reasoning skills, and debating skills, or for improving argumentative essay writing. Four studies fell into the Others category (n = 4). These studies mostly use argument visualization tools for specialized disciplinary purposes, such as supporting diagnostic reasoning in medical education or other subject-specific applications.

The least represented category, Knowledge Construction, was observed in only 2 studies. These studies explored the use of argument visualization tools to facilitate the collaborative or individual construction of knowledge frameworks (see Figure 8).

Figure 8
Bar chart titled “Purpose of Use” depicting the number of studies by category. Planning and Reading Comprehension leads with 21 studies, followed by Reasoning and Teaching Argumentation with 18, Others with 4, and Knowledge Construction with 2.

Figure 8. Purpose of use.

Discussion

In this study, we conducted a scoping review of research studies focusing on argument visualization software and closely examined how this technique has been used in teaching and learning contexts in higher education. We closely reviewed 45 empirical articles across various post-secondary fields and found a noticeable increase in publications related to argument visualization tools since 2022. This increase is particularly significant given the rapid expansion of online and distance education following the COVID-19 pandemic, which has created urgent needs for digital tools that can support complex cognitive tasks in remote learning environments. The findings demonstrate both the growing adoption of these tools across diverse disciplines and the need for more rigorous empirical validation of their effectiveness. Our analysis reveals four key areas that characterize the current state of argument visualization research: (1) disciplinary applications, (2) assessment approaches, (3) tool adoption patterns, and (4) instructional strategies that moderate effectiveness.

Disciplinary applications of argument visualization

Specialized purposes

Argument visualization has been adopted as a pedagogical strategy across a broad range of instructional aims and disciplinary contexts in higher education. Its application and use cases are shaped by the epistemic goals and pedagogical traditions of specific domains. In more specialized disciplinary settings, for instance, argument visualization has been used to scaffold diagnostic reasoning in medical education (Wu et al., 2013, 2014), structure journalistic analysis (Borden, 2007), facilitate philosophical reasoning and logic training (Kaeppel, 2021; Twardy, 2004), and support biological argumentation in teacher education programs (Garcia Romano et al., 2021). In other cases, it has been integrated in EFL and TESOL (Teaching English to Speakers of Other Languages) contexts to develop collaborative debate skills (Chen et al., 2022, 2024) or oral argumentation and speaking (Mubarok et al., 2023). These studies illustrate the diverse ways in which argument visualization tools can be used to support instruction in individual disciplines, sometimes as standalone reasoning support and other times as mediators of communication or language learning. This disciplinary diversity demonstrates the flexibility of argument visualization, yet it also raises questions about optimal implementation strategies across different fields.

Critical thinking and argumentation

Beyond discipline-specific applications, our review identified a substantial focus on general critical thinking development. A significant group of argument visualization studies focuses more on supporting students' critical thinking and argumentation skills, and their instructional outcomes are often grounded in general education and writing-intensive courses. These studies range across philosophical instruction, academic writing, and interdisciplinary learning environments (Butchart et al., 2009; Harrell, 2008, 2011; Kunsch et al., 2014; Maftoon et al., 2014; Scheuer et al., 2014; Sönmez et al., 2020; Uren et al., 2006; Yilmaz-Na and Sönmez, 2023a,b). More importantly, the emphasis here is on developing students' academic literacy abilities, such as generating arguments, structuring evidence-based claims, evaluating reasoning statements, and responding to counterarguments. This focus on transferable academic skills suggests that argument visualization tools may serve as cross-disciplinary cognitive scaffolds.

Comprehension and retention

Complementing these applications, research has also documented cognitive processing and memory enhancement benefits. Another instructional application of argument visualization focuses on reading comprehension and knowledge retention. Several studies assessed how visualizing arguments during or after reading enhances students' understanding of complex texts, recall of information, and ability to trace logical connections between ideas (Archila et al., 2022; Cullen et al., 2018; Davies, 2009; Dwyer et al., 2010, 2013; Eftekhari and Sotoudehnama, 2018; Eftekhari et al., 2016; Gargouri and Naatus, 2017; Jeong and Kim, 2022; Loll and Pinkwart, 2013; Memis et al., 2022; Ngajie et al., 2020). These studies, although less concerned with argumentative writing, suggest that argument visualization can serve as a cognitive tool to improve information processing (Nesbit et al., 2018), especially when dealing with dense or abstract content. This is consistent with Cognitive Load Theory, which posits that external visual representations can help manage working memory constraints and facilitate deeper learning by reducing extraneous cognitive load (Sweller, 1994). The cognitive benefits documented here provide theoretical grounding for understanding how these tools support learning.

Argument visualization for EFL writers

A distinctive body of research focuses on developing EFL learners' writing skills, where argument visualization tools are introduced as pre-writing activities (Alsmari, 2022; Binks et al., 2022; Liu et al., 2023; Robillos, 2021; Robillos and Art-in, 2023). In these contexts, the argument visualization tools are not simply aids for organizing content but are instrumental in supporting language learners' development of rhetorical structure in argumentative essay writing. Together, these studies show that argument visualization has been particularly valuable for learners who must navigate conceptual and linguistic challenges simultaneously. This diversity of applications reveals both opportunities and challenges. While argument visualization demonstrates remarkable flexibility across disciplines, this same diversity complicates efforts to assess overall effectiveness, leading us to examine how these tools have been evaluated across contexts.

Assessing the effectiveness of argument visualization

Turning from applications to evidence of effectiveness, our analysis reveals significant methodological patterns and limitations in how these tools have been evaluated. In the studies we reviewed, argument visualization research employs a variety of assessment approaches to investigate the effectiveness of argument visualization, with an emphasis on measurable learning outcomes. An obvious pattern emerged in the assessment approaches of argument visualization: instructional strategies and collaborative learning approaches play a key role in the efficacy of argument visualization, but variability in training and context shapes student outcomes. Our findings suggest that argument visualization tools may serve as effective pedagogical scaffolds for developing both analytical and constructive argumentation skills; however, the lack of standardized methods complicates synthesis and generalization.

Critical thinking and writing

Our first research question sought to examine the evidence regarding the impact of argument visualization tools on students' critical thinking and writing skills. Our findings indicate a predominant use of achievement tests (Butchart et al., 2009; Chen et al., 2024; Crudele and Raffaghelli, 2022; Cullen et al., 2018; Dwyer et al., 2010, 2013; Eftekhari and Sotoudehnama, 2018; Eftekhari et al., 2016; Harrell, 2008, 2011; Kunsch et al., 2014; Loll and Pinkwart, 2013; Memis et al., 2022; Ngajie et al., 2020; Twardy, 2004; Uçar and Demiraslan Çevik, 2020; Wu et al., 2013, 2014), essays (Alsmari, 2022; Binks et al., 2022; Liu et al., 2023; Maftoon et al., 2014; Robillos, 2021; Robillos and Art-in, 2023; Robillos and Thongpai, 2022), or a mixture of both (Cullen et al., 2018; Memis et al., 2022) as methods to assess the effects of argument visualization. Our analysis suggests that these studies focus strongly on measurable learning outcomes and the learning effectiveness of the intervention. However, we also note that the heterogeneity of achievement tests, ranging from validated instruments like the CCTST to subjective grades, poses evaluation challenges for our synthesis and may obscure patterns; for example, effects could be stronger on standardized than on contextual measures, further limiting cross-study comparability. Thus, the promise of these findings is tempered by significant methodological limitations. Previously, van Gelder et al.'s (2004) findings indicated a 20% improvement in critical thinking skills as a result of an argument mapping intervention. However, the diversity in assessment methods and the lack of standardization across the studies we reviewed make it challenging to draw definitive conclusions about the tools' effectiveness and the validity of the method. Moreover, according to our results, most of the studies examining the effectiveness of argument visualization use small-scale study designs (e.g., Jeong and Kim, 2022; Uren et al., 2006). Although small studies provide granular insights into the nuanced influences of argument visualization on writing and critical thinking, the relatively small number of large-scale studies raises concerns about generalizability and demonstrates a need for broader examination to validate findings across larger and more diverse populations.

Qualitative evaluation

While achievement tests provided quantitative measures of improvement and effectiveness, some studies employed qualitative evaluation of argument maps (Chen et al., 2024; Jeong and Kim, 2022; Kaeppel, 2021; Sönmez et al., 2020), offering insights into students' actual reasoning processes and their structural understanding of arguments. This dual approach to assessment reflects what Davies (2011) identified as the two primary pedagogical applications of argument visualization in higher education: the analysis of existing arguments (Archila et al., 2022; Borden, 2007; Cullen et al., 2018; Davies, 2009; Dwyer et al., 2010, 2013; Eftekhari and Sotoudehnama, 2018; Eftekhari et al., 2016; Gargouri and Naatus, 2017; Jeong and Kim, 2022; Loll and Pinkwart, 2013; Memis et al., 2022; Ngajie et al., 2020) and the creation of new argumentation (Alsmari, 2022; Binks et al., 2022; Crudele and Raffaghelli, 2022; Liu et al., 2023; Robillos, 2021; Robillos and Art-in, 2023; Robillos and Thongpai, 2022; Uçar and Demiraslan Çevik, 2020).

Based on our scoping review, what we can conclude so far is that integrating argument visualization tools in teaching writing or reasoning is likely to be effective within limited, controlled pedagogical contexts. In summary, measurable outcomes are frequently reported, but methodological inconsistency and small sample sizes limit the field's capacity for generalization, underscoring the need for more robust research designs.

Types and uses of argument visualization tools

These assessment findings highlight the need to understand not just whether these tools work, but which specific tools are being adopted and why. Moving from assessment approaches to practical tool adoption, our second research question reveals significant patterns in how different platforms have been integrated across disciplines.

Adoption of tools

Our second research question focused on the types of argument visualization tools and their disciplinary uses. The adoption of argument visualization tools in higher education is highly heterogeneous, with certain tools dominating particular contexts while others remain underutilized due to access and design factors. Based on our findings, Rationale emerged as the most widely used tool across the reviewed studies, while other platforms such as Dialectical Map (Liu et al., 2023; Nesbit et al., 2024), LASAD (Loll and Pinkwart, 2013; Scheuer et al., 2014), or Argunet (Uçar and Demiraslan Çevik, 2020) appeared in only one or two studies. This uneven distribution raises important questions about the factors driving tool selection. These usage patterns do not necessarily reflect differences in pedagogical effectiveness; they more likely stem from disparities in institutional access, platform usability, researcher familiarity, and visibility within academic communities. For example, Dialectical Map may remain underused not because of underperformance but because of limited dissemination and marketing, minimal integration with widely used Learning Management System platforms, or a lack of peer-reviewed comparative studies. Overall, these patterns align theoretically with Lin and Hwang's (2019) model, as the TEL framework emphasizes affordances (e.g., Rationale's usability driving adoption); nevertheless, the challenge lies in the framework's bias toward positive impacts. This points to a need for further investigation of technologies that remain underutilized because of access barriers, which would help extend the model to consider contextual factors in technology integration.

Comparatively, Rationale's widespread use likely originates from its user-friendly box-and-arrow interface and built-in feedback mechanisms, making it suitable for individual critical thinking tasks in philosophy and EFL (e.g., Butchart et al., 2009). In contrast, tools like Dialectical Map emphasize structured argumentation features that enhance essay writing for language learning purposes (Liu et al., 2023), while other tools (like Argunet) offer more flexible representations of argumentation but may require more training (Uçar and Demiraslan Çevik, 2020). These differences in affordances, and in usability for novices vs. scalability for collaboration, highlight the need for tool selection based on pedagogical goals, though the limited number of comparative studies and the absence of quantitative synthesis hinder broader pedagogical and instructional recommendations.

The theoretical framework for understanding these adoption patterns comes from technology acceptance research. These adoption patterns are congruent with the Technology Acceptance Model (Davis, 1986), which suggests that perceived usefulness and perceived ease of use are critical predictors of educators' and students' acceptance of new educational technologies. Limited integration or underutilization may reflect lower perceived compatibility or accessibility within institutional environments. Rather than indicating conflicting findings about tool effectiveness, this uneven adoption suggests a need for more systematic evaluations of which design features or instructional alignments make a tool sustainable in specific disciplinary settings. Understanding adoption patterns naturally leads to examining how these tools are integrated into teaching practices.

Integration of tools

Our analysis reveals meaningful patterns in how these tools are pedagogically integrated. Our second research question also examined the integration of visual argumentation software into post-secondary teaching practices. Our review revealed that Rationale emerged as the dominant tool, though the field shows considerable fragmentation with multiple platforms being used across different contexts (Butchart et al., 2009; Eftekhari and Sotoudehnama, 2018; Eftekhari et al., 2016; Harrell, 2008; Maftoon et al., 2014; Memis et al., 2022; Ngajie et al., 2020; Rapanta and Walton, 2016; Robillos, 2021; Sönmez et al., 2020). The prevalence of EFL studies and philosophy courses in the adoption of argument visualization technology suggests that these disciplines have been early adopters, likely due to their explicit emphasis on cultivating language, reasoning, and argumentation skills essential for post-secondary success. One reason for this adoption may be the potential of argument visualization tools to clarify argumentative structure, which is particularly valuable for learners in these fields (Van den Braak et al., 2006).

Limitations of tools

Our review also uncovered significant challenges that temper the enthusiasm for these tools. While the EFL studies show that argument mapping tools help students organize complex arguments, reduce cognitive load, and enhance critical thinking and writing, these tools may also introduce additional cognitive demands, particularly for novice writers (Jeong and Kim, 2022; Shehab and Nussbaum, 2015). Specifically, studies comparing expert and novice use of mapping tools reveal that novice writers often struggle with organizing and linking claims effectively, indicating that instructional support is important to avoid overwhelming learners' working memory (Jeong and Kim, 2022).

These limitations manifest differently across disciplinary contexts. While the structured nature of argument maps provides clarity, their effectiveness depends heavily on user expertise and tool design. Nevertheless, the integration of these tools varies significantly across disciplines, with some fields emphasizing the tools' use for planning, such as pre-writing activities (e.g., Liu et al., 2023; Robillos, 2021; Robillos and Art-in, 2023), while others focus on teaching argumentation, such as diagnostic reasoning (Harrell, 2008; Kaeppel, 2021; Twardy, 2004; Wu et al., 2013, 2014) or debate preparation (Chen et al., 2024).

Summary of integration findings

Despite these challenges, several consistent patterns emerge from our analysis of tool integration. Argument visualization consistently supports students' critical thinking and structured reasoning skills across disciplines, though disciplinary differences shape pedagogical implementation. Given that many argument visualization tools are delivered through web-based platforms (e.g., Rationale, LASAD, Dialectical Map), they have particular instrumental value for learners in online and blended learning environments, where immediate instructor feedback is limited. These tools' interactive visual interfaces provide structure for asynchronous reasoning tasks (e.g., Kaeppel, 2021), peer collaboration (e.g., Harrell, 2008), and formative assessment (e.g., Cullen et al., 2018), making them valuable cognitive scaffolds for supporting critical thinking in digitally mediated learning contexts.

Studies of students' perception

Understanding tool effectiveness requires examining not just outcomes but also user experiences and perceptions. In our analysis, some studies are concerned with student perceptions of using these tools (Alsmari, 2022; Chen et al., 2022, 2024; Jeong and Kim, 2022; Kaeppel, 2021; Loll and Pinkwart, 2013; Robillos, 2021; Robillos and Art-in, 2023; Robillos and Thongpai, 2022; Scheuer et al., 2014; Sönmez et al., 2020; Uçar and Demiraslan Çevik, 2020; Uren et al., 2006; Wu et al., 2013, 2014), yet nearly half of the studies did not assess student perceptions of the argument visualization tools. Among those that did examine perceptions, some focused on users' perceptions of the argument visualization method or on the argument mapping tool's compatibility with their actual learning experience (Alsmari, 2022; Chen et al., 2022; Iandoli et al., 2016; Kaeppel, 2021). This inconsistency in perception research represents a significant gap: the aspects examined varied considerably across studies, with relatively few adopting comprehensive frameworks (e.g., Loll and Pinkwart, 2013). This gap points to a possible disconnection between technology-focused research and student-centered pedagogical approaches, potentially overlooking barriers to adoption and effectiveness.

Our results indicate the need for future research to design a more comprehensive survey that systematically evaluates student user perceptions by incorporating multiple dimensions, providing a richer understanding of how these argument visualization tools are effectively used by students and instructors in higher education. These varied applications and perceptions naturally lead us to examine the instructional strategies that moderate their effectiveness.

Instructional aims and strategies

Understanding which tools are used and how they are integrated provides essential context, but the critical question remains: what instructional strategies make these tools effective? Our third research question examines how they are implemented pedagogically, revealing critical factors that influence learning outcomes.

Training

Our third research question examined how instructional aims and tool usage strategies, such as training, collaboration, and pedagogical integration, can moderate student learning outcomes. A key practice-based pedagogical insight from our findings is that most training emphasizes tool familiarization (e.g., Iandoli et al., 2014; Maftoon et al., 2014) over training in argumentation schemes (e.g., Alsmari, 2022; Eftekhari et al., 2016). Prioritizing one at the expense of the other may limit deeper skill development, whereas integrated pedagogical approaches that address both may yield greater learning benefits. Our analysis of instructional aims and strategies revealed several interesting patterns in how argument visualization methods are used in higher education and their specific impact on student outcomes. The provision of training emerged as a key differentiator. As previously shown, most studies included explicit training sessions for student participants (Alsmari, 2022; Archila et al., 2022; Binks et al., 2022; Borden, 2007; Butchart et al., 2009; Carrington et al., 2011; Chen et al., 2024; Crudele and Raffaghelli, 2022; Cullen et al., 2018; Dwyer et al., 2013; Eftekhari and Sotoudehnama, 2018; Eftekhari et al., 2016; Harrell, 2008; Iandoli et al., 2016; Jeong and Kim, 2022; Jeong and Seok-Shin, 2023; Kunsch et al., 2014; Liu et al., 2023; Memis et al., 2022; Ngajie et al., 2020; Ouyang et al., 2024; Rapanta and Walton, 2016; Scheuer et al., 2014; Uçar and Demiraslan Çevik, 2020; Wu et al., 2014; Yilmaz-Na and Sönmez, 2023a), while the rest reported no training provision (e.g., Dwyer et al., 2010; Garcia Romano et al., 2021; Iandoli et al., 2014; Kaeppel, 2021; Robillos and Art-in, 2023; Robillos and Thongpai, 2022; Uren et al., 2006; Wu et al., 2013; Yilmaz-Na and Sönmez, 2023b). However, the nature of this training varied: among the studies that provided training, most focused solely on either introducing the features of the tools or teaching the underlying structures and logic of argumentation (i.e., claims, supporting reasons, evidence), rather than both.

Instructional applications

The implementation of these tools reveals distinct disciplinary patterns that align with pedagogical goals. With respect to how argument visualization tools are used in instruction, we found that their instructional applications showed distinct patterns across different educational contexts. In EFL settings, which constitute the largest subject area, the tools were predominantly used as pre-writing support for post-secondary language learners (Alsmari, 2022; Binks et al., 2022; Liu et al., 2023; Robillos, 2021; Robillos and Art-in, 2023). For instance, Liu et al. (2023) demonstrated how the Dialectical Map tool effectively supported 201 undergraduate EFL students in China with their argumentative essay drafting process. Similarly, Robillos (2021) employed Rationale software with 28 TESOL students as a pre-writing activity, reporting improved essay quality and positive student perceptions of the tool in classroom settings.

In contrast, philosophy courses demonstrate a different instructional focus: instruction shifted toward developing learners' critical thinking and analytical skills within the discipline (Butchart et al., 2009; Cullen et al., 2018; Harrell, 2011; Kaeppel, 2021; Twardy, 2004). Cullen et al. (2018) incorporated MindMap as a tool in philosophy education, using achievement tests with essay assessments to evaluate both immediate comprehension of texts and transfer of critical thinking skills. Similarly, in English education, Harrell (2008) utilized Rationale with 180 students in English translation courses, measuring outcomes through the California Critical Thinking Skills Test (CCTST). These studies also demonstrated how argument visualization tools can scaffold the development of complex reasoning skills in philosophical discourse.

Overall, we also find a potential overlap between planning-focused and reasoning-focused purposes, which has implications for the pedagogical design of argument visualization. For instance, tools used for pre-writing (e.g., in EFL) may inadvertently enhance argumentation skills, suggesting that hybrid implementations could maximize benefits across categories. Evidence has shown that explicit planning instruction, such as argument visualization, raises argumentation quality when the planning is tied to argument structure (Graham et al., 2015). However, we acknowledge that further research is needed to disentangle the effects on argumentative structure quality from those on reasoning quality.

Teaching argumentation

We identified which studies provided training and tool familiarization to illustrate the variability in instructional conditions and tool deployment strategies across settings. Our findings suggest that researchers in the field often prioritize the technical usability of tool features while instruction in argumentation schemes is overlooked (e.g., Binks et al., 2022; Carrington et al., 2011). This oversight has important pedagogical implications. We argue that future studies intending to use argument visualization as pedagogy should place more emphasis on structured training in both the features of argument visualization software and argumentation schemes (Visser et al., 2022). This dual approach aligns with van Gelder's (2015) assertion about the importance of quality instruction and feedback in argument-mapping education. These findings suggest that instructional strategies such as collaboration and structured training moderate the effectiveness of argument visualization tools. When learners construct argument maps, instructors should offer not only technical training but also training in argumentation schemes, accompanied by feedback, as feedback facilitates the learning of argumentation (e.g., Zhu et al., 2017, 2020).

Collaborative uses

Equally important to training is the role of collaborative learning environments. Interestingly, our review revealed that collaborative learning environments were frequently integrated with argument visualization tools (Alsmari, 2022; Archila et al., 2022; Carrington et al., 2011; Chen et al., 2024; Cullen et al., 2018; Harrell, 2008; Maftoon et al., 2014; Memis et al., 2022; Mubarok et al., 2023; Ngajie et al., 2020; Ouyang et al., 2024; Robillos, 2021; Uçar and Demiraslan Çevik, 2020). Chen et al. (2024) utilized ZJU YueQue with 17 participants who prepared for debate and collaborative argument construction. Their achievement test results showed positive outcomes when students worked together to construct and analyze arguments. This collaborative approach was also evident in Carrington et al.'s (2011) study of 291 business education students, where group work enhanced the tool's effectiveness for argumentation.

While the assessment methods varied across these studies, achievement tests emerged as the most common evaluation approach. These were often complemented by argument map analysis and essay assessment, providing a multi-faceted, integrated view of student learning. This triangulation of assessment methods suggests that instructors generally recognized the need to evaluate both the immediate comprehension of argumentation principles and their practical application in academic writing and critical thinking.

Summary of findings

Through this scoping review, we have mapped the prevalent use of argument visualization tools across post-secondary settings. Three key patterns emerge: the concentration in language learning contexts, the emphasis on developing critical thinking and writing skills, and the critical role of structured training and collaborative approaches. For instance, Eftekhari et al. (2016) conducted a study with 180 EFL students, highlighting structured training sessions as a component of effective use of argument visualization software. Similarly, Wu et al. (2013) examined the use of a dual-mapping cognitive tool in a diagnostic reasoning context with 29 medical students, illustrating the significance of tool-task alignment. Together, these findings outline the current landscape while highlighting that further research is needed to understand the broader applicability and variations in effectiveness across diverse disciplines and educational contexts.

Implications

These findings have immediate practical relevance for contemporary educational contexts, particularly as institutions increasingly adopt online and hybrid delivery modes. The present findings demonstrate the unique affordances of argument visualization tools for online and blended learning environments, where students often face limited opportunities for immediate instructor feedback. Externalizing reasoning processes into clear, modifiable visual structures provides cognitive scaffolds that support critical thinking (Alsmari, 2022; Eftekhari and Sotoudehnama, 2018; Eftekhari et al., 2016; Nussbaum, 2008) and collaborative argumentation (Carrington et al., 2011; Chen et al., 2024; Cullen et al., 2018; Jonassen and Kim, 2010; Harrell, 2008; Nussbaum and Edwards, 2011) in digital contexts. Instructors designing online or hybrid courses may therefore consider integrating argument mapping software not only as a pre-writing aid but also as a means of sustaining student engagement and formative assessment when synchronous interaction is constrained.

Limitations and future directions

Limitations of current studies

Design

Upon careful examination of the 45 studies focusing on argument visualization tools, we noted several limitations in the current research landscape of argument visualization in higher education. First, quasi-experimental designs predominated in classroom settings. While quasi-experimental designs are valuable for initial exploration of a tool's pedagogical effects, this predominance presents a significant methodological constraint in the field. Although these studies provide important insights into the potential effectiveness of various argument visualization tools, the lack of randomized controlled trials limits our ability to draw strong causal conclusions about their impact on student learning outcomes. The predominance of quasi-experimental studies also introduces potential selection bias, as non-randomized groups may differ in motivation or prior skills, complicating causal inferences about a tool's effectiveness (Gibbert et al., 2008; Yin, 2013). Furthermore, heterogeneous evidence that varies in sample size, pedagogical goals, assessment methods, and contexts limits the generalizability of findings, particularly beyond the EFL and philosophy disciplines.

Diverse assessment methods

We also note that diverse assessment methods were used across studies and settings, ranging from achievement tests (i.e., course-based grading vs. standardized critical thinking tests) to argument map evaluations and essays. Measures of student perception of the tools were likewise limited, typically examining only one or two dimensions of perception. Although these methods can provide rich data about the student learning process, such diversity makes it challenging to conduct meaningful cross-study comparisons or meta-analyses of intervention effectiveness. This variability may also introduce measurement bias, as subjective measures such as argument map evaluations could favor certain instructional styles, underscoring the need for standardized protocols to enhance reliability and external validity (Rapanta and Walton, 2016). Establishing common outcome measures and reporting standards would significantly facilitate meta-analytical syntheses, enabling stronger causal inferences about the effects of argument visualization tools on argumentation. In addition, most of the studies were small-scale, with fewer than 50 participants, reflecting the need for larger-scale research to validate findings. The prevalence of small samples may stem from systemic factors in educational technology research, such as limited funding for large-scale interventions, challenges in accessing diverse participant pools in classroom settings, and logistical barriers to multi-site studies (Lortie-Forgues and Inglis, 2019).

Disciplinary representation

Another significant limitation lies in the uneven distribution of research across disciplines. While EFL, philosophy, and business education are well-represented, many other disciplines have minimal representation in our sample. This concentration in specific fields limits our understanding of how argument visualization tools might be effectively implemented across different academic contexts. Additionally, the variation in training approaches, with some studies providing comprehensive instruction and others offering minimal or no training at all, makes it difficult to determine the most effective implementation strategies for these tools. We thus argue that instructors who wish to use argument visualization tools in teaching should provide students with both technical training and training in argumentation schemes.

Future research directions

Priority areas

Looking toward future research directions, we identify several priority areas that warrant investigation. First, as mentioned before, there is a pressing need for large-scale, randomized controlled trials to provide more robust evidence of the effectiveness of argument visualization tools. Such studies should employ standardized assessment protocols to facilitate cross-study comparisons and meta-analyses. Additionally, exploring the long-term impact of argument visualization tools on critical thinking and writing skills, beyond immediate learning outcomes, represents an important yet under-researched area. Addressing these gaps could contribute to a more comprehensive understanding of the potential benefits and limitations of argument visualization tools in diverse educational contexts. Lastly, scoping reviews typically do not conduct formal quality appraisal (Arksey and O'Malley, 2005; Peters et al., 2020); however, this review applied structured inclusion and exclusion criteria to ensure that only empirical, peer-reviewed studies were analyzed. Tools such as AMSTAR and CASP, which are primarily designed for systematic reviews and randomized controlled trials, were not applied because the goal of this synthesis was to map the research landscape rather than assess intervention effectiveness or risk of bias.

Additional research areas

Future research could include a more systematic investigation of implementation strategies across different disciplinary contexts. Our review revealed various approaches to tool integration in higher education contexts, from pre-writing activities to collaborative learning environments, but more research is needed to understand how these strategies can be implemented for different learning objectives and student populations. Future studies should particularly examine the role of instructor training and support; as van Gelder (2015) emphasized, quality feedback can be an important mediating factor for argumentation.

Lastly, we believe that technical development represents another crucial area for future research. While Rationale emerged as the dominant tool in our review, the fragmentation across platforms and their varying features suggests the need for a more comprehensive investigation of tool design and functionality, including which features best support argumentation learning. Future studies should also explore how emerging technologies, such as adaptive learning systems or artificial intelligence, might enhance the effectiveness of argument visualization tools in higher education so that teachers implementing these tools can provide learners with more personalized learning experiences.

Conclusion

Our scoping review has revealed trends in studies of argument visualization tools in higher education. The growing body of research during 2022 and 2023 demonstrates increasing recognition of these tools' value in developing post-secondary learners' critical thinking and writing skills. However, the field requires more rigorous empirical validation and systematic investigation of implementation strategies. The diversity of applications across disciplines suggests these tools' versatility, while the predominance of training requirements highlights the importance of structured pedagogical support. As educational technology continues to expand in online and distance education, argument visualization tools may play an increasingly important role in developing students' analytical and argumentative skills, provided that future research addresses the identified gaps and challenges in current understanding. Our scoping review thus serves as groundwork for future research directions, particularly in experimental validation and pedagogical design, while highlighting the need for more standardized and rigorous approaches to investigating the effectiveness of these educational tools.

Author contributions

DC: Resources, Funding acquisition, Formal analysis, Writing – original draft, Project administration, Visualization, Investigation, Conceptualization, Writing – review & editing, Methodology. ML: Methodology, Validation, Formal analysis, Conceptualization, Writing – review & editing. G-JH: Resources, Validation, Supervision, Writing – review & editing, Methodology, Conceptualization.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. This research was funded by Social Sciences and Humanities Research Council of Canada (SSHRC), grant number 430-2023-00368. Part of the APC was covered by SFU's Central Open Access Fund.

Acknowledgments

We would like to sincerely thank the Editor and the reviewers for their thoughtful and constructive feedback during the peer review process. We are also grateful for the wider scholarly conversations that inspired and informed our thinking as this work took shape. While this project benefited from those exchanges, its conceptualization, design, and analysis were carried out independently by the authors, and all interpretations are entirely our own.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Gen AI was used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2025.1672105/full#supplementary-material

References

Agostinho, S., Tindall-Ford, S., and Roodenrys, K. (2010). “Using computer based tools to self manage cognitive load,” in Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications, eds. T. Bastiaens & M. Ebner (Chesapeake, VA: AACE), 3374–3378.

Alsmari, N. A. (2022). Integrating computer-aided argument mapping into EFL learners' argumentative writing: evidence from Saudi Arabia. Int. J. Adv. Comp. Sci. Appl. 13, 98–104. doi: 10.14569/IJACSA.2022.0131013

Andrews, R. (2015). “Critical thinking and/or argumentation in higher education,” in The Palgrave Handbook of Critical Thinking in Higher Education, eds. M. Davies and R. Barnett (New York: Palgrave Macmillan), 63–76.

Archila, P. A., Barbosa, V., Gravier, G., Levy, L., Ortiz, B. T., Wilches, L., et al. (2022). Integrating peer feedback and instructor feedback to support the construction of bilingual scientific argument maps. Int. J. Sci. Educ. 44, 2283–2305. doi: 10.1080/09500693.2022.2119899

Arksey, H., and O'Malley, L. (2005). Scoping studies: towards a methodological framework. Int. J. Soc. Res. Methodol. 8, 19–32. doi: 10.1080/1364557032000119616

Bean, J. C., and Melzer, D. (2021). Engaging Ideas: The Professor's Guide to Integrating Writing, Critical Thinking, and Active Learning in the Classroom (3rd ed.). Hoboken, NJ: John Wiley & Sons.

Belland, B. R. (2010). Portraits of middle school students constructing evidence-based arguments during problem-based learning: the impact of computer-based scaffolds. Educ. Technol. Res. Dev. 58, 285–309. doi: 10.1007/s11423-009-9139-4

Benetos, K. (2023). “Digital tools for written argumentation,” in Digital Writing Technologies in Higher Education, eds. O. Kruse (Cham: Springer).

Binks, A., Toniolo, A., and Nacenta, M. A. (2022). Representational transformations: Using maps to write essays. Int. J. Hum. Comput. Stud. 165:102851. doi: 10.1016/j.ijhcs.2022.102851

Blair, J. A. (2012). “Argument and its uses,” in Groundwork in the Theory of Argumentation. Argumentation Library, ed. C. Tindale (Dordrecht: Springer).

Borden, S. L. (2007). Mapping ethical arguments in journalism: an exploratory study. Mass Commun. Soc. 10, 275–297. doi: 10.1080/15205430701407132

Buckingham Shum, S. (2003). “The roots of computer supported argument visualization,” in Visualizing Argumentation: Software Tools for Collaborative and Educational Sense-Making (London: Springer London), 3–24. doi: 10.1007/978-1-4471-0037-9_1

Butchart, S., Forster, D., Gold, I., Bigelow, J., Korb, K., Oppy, G., and Serrenti, A. (2009). Improving critical thinking using web based argument mapping exercises with automated feedback. Aust. J. Educ. Technol. 25, 221–239. doi: 10.14742/ajet.1154

Butcher, K. R. (2006). Learning from text with diagrams: promoting mental model development and inference generation. J. Educ. Psychol. 98, 182–197. doi: 10.1037/0022-0663.98.1.182

Carrington, M., Chen, R., Davies, M., Kaur, J., and Neville, B. (2011). The effectiveness of a single intervention of computer-aided argument mapping in a marketing and a financial accounting subject. Higher Educ. Res. Dev. 30, 387–403. doi: 10.1080/07294360.2011.559197

Chen, X., Wang, L., Zhai, X., and Li, Y. (2022). Exploring the effects of argument Map-supported online group debate activities on college students' critical thinking. Front. Psychol. 13, 856462. doi: 10.3389/fpsyg.2022.856462

Chen, X., Zhao, H., Jin, H., and Li, Y. (2024). Exploring college students' depth and processing patterns of critical thinking skills and their perception in argument map(AM)-supported online group debate activities. Think. Skills Creat. 51:101467. doi: 10.1016/j.tsc.2024.101467

Crudele, F., and Raffaghelli, J. (2022). Promoting critical thinking through argument mapping: A lab for undergraduate students. J. Inform. Technol. Educ.: Res. 22, 497–525. doi: 10.28945/5220

Cullen, S., Fan, J., van der Brugge, E., and Elga, A. (2018). Improving analytical reasoning and argument understanding: a quasi-experimental field study of argument visualization. NPJ Sci. Learn. 3:21. doi: 10.1038/s41539-018-0038-5

Darmawansah, D., Hwang, G. J., and Lin, C. J. (2024). Integrating dialectical constructivist scaffolding-based argumentation mapping to support students' dialectical thinking, oral and dialogical argumentation complexity. Educ. Technol. Res. Dev. 2024, 1–29. doi: 10.1007/s11423-024-10395-5

Davies, M. (2011). Concept mapping, mind mapping and argument mapping: what are the differences and do they matter? Higher Educ. 62, 279–301. doi: 10.1007/s10734-010-9387-6

Davies, W. M. (2009). Computer-assisted argument mapping: a rationale approach. Higher Educ. 58, 799–820. doi: 10.1007/s10734-009-9226-9

Davis, F. D. (1986). A Technology Acceptance Model for Empirically Testing New End-User Information Systems (Unpublished Doctoral Dissertation). Massachusetts Institute of Technology, Cambridge, MA, United States.

Dwyer, C. P., Hogan, M. J., and Stewart, I. (2010). The evaluation of argument mapping as a learning tool: comparing the effects of map reading versus text reading on comprehension and recall of arguments. Think. Skills Creat. 5, 16–22. doi: 10.1016/j.tsc.2009.05.001

Dwyer, C. P., Hogan, M. J., and Stewart, I. (2013). An examination of the effects of argument mapping on students' memory and comprehension performance. Think. Skills Creat. 8, 11–24. doi: 10.1016/j.tsc.2012.12.002

Eftekhari, M., and Sotoudehnama, E. (2018). Effectiveness of computer-assisted argument mapping for comprehension, recall, and retention. ReCALL 30, 337–354. doi: 10.1017/S0958344017000337

Eftekhari, M., Sotoudehnama, E., and Marandi, S. S. (2016). Computer-aided argument mapping in an EFL setting: does technology precede traditional paper and pencil approach in developing critical thinking? Educ. Technol. Res. Dev. 64, 339–357. doi: 10.1007/s11423-016-9431-z

Falagas, M. E., Pitsouni, E. I., Malietzis, G. A., and Pappas, G. (2008). Comparison of PubMed, Scopus, web of science, and Google scholar: strengths and weaknesses. FASEB J. 22, 338–342. doi: 10.1096/fj.07-9492LSF

Freeman, J. B. (1991). Dialectics and the Macrostructure of Argument. New York: Foris.

Garcia Romano, L., Occelli, M., and Adúriz-Bravo, A. (2021). School scientific argumentation enriched by digital technologies: results with pre-and in-service science teachers. Eurasia J. Mathem. Sci. Technol. Educ. 17:7. doi: 10.29333/ejmste/10990

Gargouri, C., and Naatus, M. K. (2017). An experiment in mind-mapping and argument-mapping: Tools for assessing outcomes in the business curriculum. E-J. Busin. Educ. Scholars. Teach. 11, 39–78.

Gibbert, M., Ruigrok, W., and Wicki, B. (2008). What passes as a rigorous case study? Strateg. Manag. J. 29, 1465–1474. doi: 10.1002/smj.722

Graham, S., Harris, K. R., and Santangelo, T. (2015). Research-based writing practices and the Common Core: meta-analysis and meta-synthesis. Element. Sch. J. 115, 498–522. doi: 10.1086/681964

Harrell, M. (2008). No computer program required: even pencil-and-paper argument mapping improves critical thinking skills. Teach. Philos. 31, 351–374. doi: 10.5840/teachphil200831437

Harrell, M. (2011). Argument diagramming and critical thinking in introductory philosophy. Higher Educ. Res. Dev. 30, 371–385. doi: 10.1080/07294360.2010.502559

Harrell, M. (2022). Representing the structure of a debate. Argumentation 36, 595–610. doi: 10.1007/s10503-022-09586-2

Harrell, M., and Wetzel, D. (2015). “Using argument diagramming to teach critical thinking in a first-year writing course,” in The Palgrave Handbook of Critical Thinking in Higher Education, eds. M. Davies and R. Barnett (New York: Palgrave Macmillan US), 213–232. doi: 10.1057/9781137378057_14

Hitchcock, D. (2017). “Informal logic and the concept of argument,” in On Reasoning and Argument. Argumentation Library (Cham: Springer).

Hoffman, M., and Paglieri, F. (2011). “Cognitive effects of argument visualization tools,” in Proceedings of the 9th International Conference of the Ontario Society for the Study of Argumentation (Windsor, ON: OSSA), 1–12.

Hornikx, J., and Hahn, U. (2012). Reasoning and argumentation: towards an integrated psychology of argumentation. Think. Reason. 18, 225–243. doi: 10.1080/13546783.2012.674715

Hurt, H. T., and Hibbard, R. (1989). The systematic measurement of the perceived characteristics of information technologies I: microcomputers as innovations. Commun. Q. 37, 214–222. doi: 10.1080/01463378909385541

Iandoli, L., Quinto, I., De Liddo, A., and Buckingham Shum, S. (2014). Socially augmented argumentation tools: rationale, design and evaluation of a debate dashboard. Int. J. Hum. Comput. Stud. 72, 298–319. doi: 10.1016/j.ijhcs.2013.08.006

Iandoli, L., Quinto, I., De Liddo, A., and Buckingham Shum, S. (2016). On online collaboration and construction of shared knowledge: assessing mediation capability in computer supported argument visualization tools. J. Assoc. Inform. Sci. Technol. 67, 1052–1067. doi: 10.1002/asi.23481

Jeong, A., and Kim, H. Y. (2022). Identifying critical thinking skills used by experts versus novices to construct argument maps in a computer-aided mapping tool. Knowl. Managem. E-Learn. 14, 125–149. doi: 10.34105/j.kmel.2022.14.008

Jeong, A. C., and Shin, H. S. (2023). “Mining, analyzing, and modeling the cognitive strategies students use to construct higher quality causal maps,” in Proceedings of the 20th International Conference on Cognition and Exploratory Learning in the Digital Age, eds. D. Sampson, D. Ifenthaler, and P. Isaias, 233–240.

Jonassen, D. H., and Kim, B. (2010). Arguing to learn and learning to argue: design justifications and guidelines. Educ. Technol. Res. Dev. 58, 439–457. doi: 10.1007/s11423-009-9143-8

Kaeppel, K. (2021). The influence of collaborative argument mapping on college students' critical thinking about contentious arguments. Think. Skills Creat. 40:100809. doi: 10.1016/j.tsc.2021.100809

Kirschner, P. A., Buckingham-Shum, S. J., and Carr, C. S. (2012). Visualizing Argumentation: Software Tools for Collaborative and Educational Sense-Making. Cham: Springer Science & Business Media.

Klein, P. D., and Rose, M. A. (2010). Teaching argument and explanation to prepare junior students for writing to learn. Read. Res. Q. 45, 433–461. doi: 10.1598/RRQ.45.4.4

Kunsch, D. W., Schnarr, K., and van Tyle, R. (2014). The use of argument mapping to enhance critical thinking skills in business education. J. Educ. Busin. 89, 403–410. doi: 10.1080/08832323.2014.925416

Lin, H. C., and Hwang, G. J. (2019). Research trends of flipped classroom studies for medical courses: A review of journal publications from 2008 to 2017 based on the technology-enhanced learning model. Interact. Learn. Environ. 27, 1011–1027. doi: 10.1080/10494820.2018.1467462

Liu, Q., Zhong, Z., and Nesbit, J. C. (2023). Argument mapping as a pre-writing activity: Does it promote writing skills of EFL learners? Educ. Inform. Technol. 29, 7895–7925. doi: 10.1007/s10639-023-12098-5

Loll, F., and Pinkwart, N. (2013). LASAD: Flexible representations for computer-based collaborative argumentation. Int. J. Hum. Comput. Stud. 71, 91–109. doi: 10.1016/j.ijhcs.2012.04.002

Lortie-Forgues, H., and Inglis, M. (2019). Rigorous large-scale educational RCTs are often uninformative: should we be concerned? Educ. Res. 48, 158–166. doi: 10.3102/0013189X19832850

Maftoon, P., Birjandi, P., and Pahlavani, P. (2014). The impact of using computer-aided argument mapping (CAAM) on the improvement of writing achievement of iranian learners of English. Theory Pract. Lang. Stud. 4, 931–938. doi: 10.4304/tpls.4.5.982-988

Manalo, E., and Fukuda, M. (2024). “Integration of learning through the use of self-constructed diagrams: opportunities and challenges,” in Diagrammatic Representation and Inference. Diagrams 2024. Lecture Notes in Computer Science, J. Lemanski, M. W. Johansen, E. Manalo, P. Viana, R. Bhattacharjee, R Burns (Cham: Springer).

Manalo, E., and Ohmes, L. (2022). “The use of diagrams in planning for report writing,” in Diagrammatic Representation and Inference. Diagrams 2022. Lecture Notes in Computer Science, eds. V. Giardino, S. Linker, R. Burns, F. Bellucci, J. M. Boucheix, and P. Viana (Cham: Springer). doi: 10.1007/978-3-031-15146-0_23

Mello, R. F., Alves, G., Harada, E., Pérez-Sanagustín, M., Hilliger, I., Villalobos, E., et al. (2024). “LAFe: learning analytics solutions to support on-time feedback,” in International Conference on Artificial Intelligence in Education (Cham: Springer Nature Switzerland), 478–485.

Memis, E. K., Akkas, B. N. C., and Sonmez, E. (2022). Impact of different types of argument maps on critical thinking: a quantitative study with the pre-service science teachers in Turkey. Psycho-Educ. Res. Rev. 11, 41–57. doi: 10.52963/PERR_Biruni_V11.N1.21

Mercier, H., and Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behav. Brain Sci. 34, 57–74. doi: 10.1017/S0140525X10000968

Moore, G. C., and Benbasat, I. (1991). Development of an instrument to measure the perceptions of adopting an information technology innovation. Inform. Syst. Res. 2, 192–222. doi: 10.1287/isre.2.3.192

Mubarok, H., Lin, C. J., and Hwang, G. J. (2023). A virtual reality-based collaborative argument mapping approach in the EFL classroom. Interact. Learn. Environ. 2023, 1–19. doi: 10.1080/10494820.2023.2207197

Mueller, R., and Schroeder, M. (2018). From seeing to doing: examining the impact of non-evaluative classroom observation on teaching development. Innovat. Higher Educ. 43, 397–410. doi: 10.1007/s10755-018-9436-0

Nesbit, J., Liu, Q., Sharp, J., Cukierman, D., Hendrigan, H., Chang, D., et al. (2024). Argument visualization with dmaps: cases from postsecondary learning. J. Interact. Learn. Res. 35, 223–253. doi: 10.70725/941003zktrdh

Nesbit, J., Niu, H., and Liu, Q. (2018). “Cognitive tools for scaffolding argumentation,” in Contemporary Technologies in Education: Maximizing Student Engagement, Motivation, and Learning (Cham: Springer International Publishing), 97–117.

Ngajie, B. N., Li, Y., Tiruneh, D. T., and Cheng, M. (2020). Investigating the effects of a systematic and model-based design of computer-supported argument visualization on critical thinking. Think. Skills Creat. 38:100742. doi: 10.1016/j.tsc.2020.100742

Noroozi, O., Weinberger, A., Biemans, H. J., Mulder, M., and Chizari, M. (2012). Argumentation-based computer supported collaborative learning (ABCSCL): a synthesis of 15 years of research. Educ. Res. Rev. 7, 79–106. doi: 10.1016/j.edurev.2011.11.006

Nussbaum, E. M. (2008). Using argumentation vee diagrams (AVDs) for promoting argument-counterargument integration in reflective writing. J. Educ. Psychol. 100, 549–565. doi: 10.1037/0022-0663.100.3.549

Nussbaum, E. M., and Edwards, O. V. (2011). Critical questions and argument stratagems: a framework for enhancing and analyzing students' reasoning practices. J. Learn. Sci. 20, 443–488. doi: 10.1080/10508406.2011.564567

Nussbaum, E. M., and Schraw, G. (2007). Promoting argument-counterargument integration in students' writing. J. Exp. Educ. 76, 59–92. doi: 10.3200/JEXE.76.1.59-92

Ouyang, F., Zhang, L., Wu, M., and Jiao, P. (2024). Empowering collaborative knowledge construction through the implementation of a collaborative argument map tool. Intern. Higher Educ. 62:100946. doi: 10.1016/j.iheduc.2024.100946

Panasuk, R. M., and Todd, J. (2005). Effectiveness of lesson planning: factor analysis. J. Instruct. Psychol. 32, 215–232. Available online at: https://proxy.lib.sfu.ca/login?url=https://www.proquest.com/scholarly-journals/effectiveness-lesson-planning-factor-analysis/docview/1416365381/se-2?accountid=13800

Peters, M. D., Marnie, C., Tricco, A. C., Pollock, D., Munn, Z., Alexander, L., et al. (2020). Updated methodological guidance for the conduct of scoping reviews. JBI Evid. Synth. 18, 2119–2126. doi: 10.11124/JBIES-20-00167

Rapanta, C., and Walton, D. (2016). The use of argument maps as an assessment tool in higher education. Int. J. Educ. Res. 79, 211–221. doi: 10.1016/j.ijer.2016.03.002

Reed, C., Walton, D., and Macagno, F. (2007). Argument diagramming in logic, law and artificial intelligence. Knowl. Eng. Rev. 22, 87–109. doi: 10.1017/S0269888907001051

Robillos, R. J. (2021). Learners' writing skill and self-regulation of learning awareness using computer-assisted argument mapping (CAAM). Teach. Engl. Technol. 21, 76–93.

Robillos, R. J., and Art-in, S. (2023). Argument mapping with translanguaging pedagogy: A panacea for EFL students' challenges in writing argumentative essays. Int. J. Instruct. 16, 651–672. doi: 10.29333/iji.2023.16437a

Robillos, R. J., and Thongpai, J. (2022). Computer-aided argument mapping within metacognitive approach: its impact on students' argumentative writing performance and self-regulated learning. LEARN J. 15, 160–186.

Rousseau, D. L., and van Gelder, T. (2024). Teaching critical thinking with argument mapping. J. Polit. Sci. Educ. 2024, 1–17.

Royer, R., and Royer, J. (2004). Comparing hand drawn and computer generated concept mapping. J. Comp. Mathem. Sci. Teach. 23, 67–81. Available online at: https://www-learntechlib-org.proxy.lib.sfu.ca/primary/p/12872/

Scheuer, O., Loll, F., Pinkwart, N., and McLaren, B. M. (2010). Computer-supported argumentation: A review of the state of the art. Int. J. Comp.-Support. Collaborat. Learn. 5, 43–102. doi: 10.1007/s11412-009-9080-x

Scheuer, O., McLaren, B. M., Weinberger, A., and Niebuhr, S. (2014). Promoting critical, elaborative discussions through a collaboration script and argument diagrams. Instruct. Sci. 42, 127–157. doi: 10.1007/s11251-013-9274-5

Shamseer, L., Moher, D., Clarke, M., Ghersi, D., Liberati, A., Petticrew, M., et al. (2015). Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: Elaboration and explanation. BMJ 349:g7647. doi: 10.1136/bmj.g7647

Shehab, H. M., and Nussbaum, E. M. (2015). Cognitive load of critical thinking strategies. Learning and Instruction 35, 51–61. doi: 10.1016/j.learninstruc.2014.09.004

Slavin, R., and Smith, D. (2009). The relationship between sample sizes and effect sizes in systematic reviews in education. Educ. Eval. Policy Anal. 31, 500–506. doi: 10.3102/0162373709352369

Sönmez, E., Akkas, B. N. Ç., and Memis, E. K. (2020). Computer-aided argument mapping for improving critical thinking: think better! Discuss better! write better!. Int. J. Contemp. Educ. Res. 7, 291–306. doi: 10.33200/ijcer.791430

Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional design. Learn. Instruct. 4, 295–312. doi: 10.1016/0959-4752(94)90003-5

Tricco, A. C., Lillie, E., Zarin, W., O'Brien, K. K., Colquhoun, H., Levac, D., et al. (2018). PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann. Intern. Med. 169, 467–473. doi: 10.7326/M18-0850

Twardy, C. (2004). Argument maps improve critical thinking. Teach. Philos. 27, 95–116. doi: 10.5840/teachphil200427213

Uçar, B., and Demiraslan Çevik, Y. (2020). The effect of argument mapping supported with peer feedback on pre-service teachers' argumentation skills. J. Digit. Learn. Teacher Educ. 37, 6–29. doi: 10.1080/21532974.2020.1815107

Uren, V., Shum, S. B., Bachler, M., and Li, G. (2006). Sensemaking tools for understanding research literatures: design, implementation and user evaluation. Int. J. Hum. Comput. Stud. 64, 420–445. doi: 10.1016/j.ijhcs.2005.09.004

Van den Braak, S. W., van Oostendorp, H., Prakken, H., and Vreeswijk, G. A. W. (2006). “A critical review of argument visualization tools: Do users become better reasoners?,” in Workshop Notes of the ECAI-06 Workshop on Computational Models of Natural Argument (CMNA-06), eds. K. R. Reed, C. Grasso (ECCAI), 67–75.

van Gelder, T. (2015). “Using argument mapping to improve critical thinking skills,” in The Palgrave Handbook of Critical Thinking in Higher Education, eds. M. Davies and R. Barnett (New York: Palgrave Macmillan US), 183–192.

van Gelder, T., Bissett, M., and Cumming, G. (2004). Cultivating expertise in informal reasoning. Can. J. Exp. Psychol. 58, 142. doi: 10.1037/h0085794

Visser, J., Lawrence, J., Reed, C., Wagemans, J., and Walton, D. (2022). “Annotating argument schemes,” in Argumentation Through Languages and Cultures, ed. C. Plantin (Cham: Springer).

Widodo, H. (2006). Designing a genre-based lesson plan for an academic writing course. Engl. Teach. 5, 173–199.

Wingate, U. (2012). ‘Argument!' helping students understand what essay writing is about. J. Engl. Acad. Purp. 11, 145–154. doi: 10.1016/j.jeap.2011.11.001

Wu, B., Wang, M., Johnson, J. M., and Grotzer, T. A. (2014). Improving the learning of clinical reasoning through computer-based cognitive representation. Med. Educ. Online 19, 25940. doi: 10.3402/meo.v19.25940

Wu, B., Wang, M., Spector, J. M., and Yang, S. J. (2013). Design of a dual-mapping learning approach for problem solving and knowledge construction in ill-structured domains. J. Educ. Technol. Soc. 16, 71–84. Available online at: https://www.jstor.org/stable/jeductechsoci.16.4.71

Yilmaz-Na, E., and Sönmez, E. (2023a). Having qualified arguments: Promoting Pre-service teachers' critical thinking through deliberate computer-assisted argument mapping practices. Think. Skills Creat. 47:101216. doi: 10.1016/j.tsc.2022.101216

Yilmaz-Na, E., and Sönmez, E. (2023b). Unfolding the potential of computer-assisted argument mapping practices for promoting self-regulation of learning and problem-solving skills of pre-service teachers and their relationship. Comput. Educ. 193:104683. doi: 10.1016/j.compedu.2022.104683

Yin, R. (2013). Validity and generalization in future case study evaluations. Evaluation 19, 321–332. doi: 10.1177/1356389013497081

Zhu, M., Lee, H. S., Wang, T., Liu, O. L., Belur, V., and Pallant, A. (2017). Investigating the impact of automated feedback on students' scientific argumentation. Int. J. Sci. Educ. 39, 1648–1668. doi: 10.1080/09500693.2017.1347303

Zhu, M., Liu, O. L., and Lee, H. S. (2020). The effect of automated feedback on revision behavior and learning gains in formative assessment of scientific argument writing. Comput. Educ. 143:103668. doi: 10.1016/j.compedu.2019.103668

Keywords: argument visualization, critical thinking, reasoning, writing skills, argument mapping, cognitive scaffolds

Citation: Chang D, Lin MP-C and Hwang G-J (2025) Charting the field: a review of argument visualization research for writing, learning, and reasoning. Front. Educ. 10:1672105. doi: 10.3389/feduc.2025.1672105

Received: 23 July 2025; Accepted: 15 September 2025;
Published: 03 November 2025.

Edited by:

Maurizio Sibilio, University of Salerno, Italy

Reviewed by:

Michele Domenico Todino, University of Salerno, Italy
Alessio Di Paolo, University of Salerno, Italy
Randi Proska Sandra, Fakultas Kedokteran Universitas Negeri Padang, Indonesia
Endry Boeriswati, State University of Jakarta, Indonesia

Copyright © 2025 Chang, Lin and Hwang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Daniel Chang, dth7@sfu.ca
