- Institute of Psychology, University of Graz, Graz, Austria
Introduction: When sharing and integrating information, teams can benefit from collaboration tools that make distributed knowledge more accessible and comparable. Comparative knowledge visualizations serve this function by presenting multiple knowledge profiles within a shared display, allowing users to distinguish shared from unshared information. Despite the widespread use of knowledge visualizations in collaborative settings, comparatively little is known about how specific design choices support comparing multiple knowledge profiles. This study examined how comparative knowledge visualizations support users’ understanding of distributed knowledge, focusing on two core design decisions: how to represent conceptual knowledge and how to arrange multiple profiles within a shared display.
Methods: We manipulated two core design decisions in comparative knowledge visualizations, each of which was implemented in two variants: knowledge representation format (concept maps versus proposition lists) and visual comparison strategy (juxtaposition versus superimposition). We also varied task complexity to test whether design advantages increase as comparison demands rise. In a 2 × 2 × 3 mixed-design experiment (N = 133), participants completed a visual comparison task in which they judged whether statements about the distribution of knowledge across three fictional group members were true or false based on the visualization. We assessed accuracy, response time, and perceived cognitive usability.
Results: Comparison strategy showed a robust effect: superimposition yielded faster responses overall and higher accuracy under medium and high complexity. Knowledge format did not affect performance. Usability ratings indicated complementary advantages: superimposition was perceived as more helpful for comparing profiles and accessing group-level knowledge, whereas juxtaposition was rated clearer and more supportive for identifying individual knowledge.
Discussion: The effectiveness of comparative knowledge visualizations depends on how multiple profiles are perceptually aligned and separated to match the epistemic goal (integrative comparison vs. source-specific inspection) and the processing demands of the task. The results provide evidence-based guidance for designing comparative displays that support identification of shared and unshared knowledge in collaborative work.
1 Introduction
Interdisciplinary collaboration is widely regarded as a promising approach for tackling knowledge-intensive tasks and complex problems. By combining diverse disciplinary perspectives, these collaborations enable richer problem representations that integrate different assumptions, viewpoints, and explanatory mental models. Such epistemic diversity can lead to more effective solutions than those developed by individuals or homogeneous groups (Blomkamp, 2018; Dillenbourg and Bétrancourt, 2006; Larson and Christensen, 1993; Maciver et al., 2021).
However, pooling and coordinating distributed knowledge from multiple contributors is cognitively demanding, especially when group members must consider several knowledge sources simultaneously. Under these conditions, the resulting information load can exceed working memory capacity. Group members may then overvalue familiar, overlapping content while neglecting contributions that are not shared (Dillenbourg and Bétrancourt, 2006; Stasser and Abele, 2020). This tendency, known as the shared-information or evaluation bias, is a well-documented barrier to effective group decision-making and problem-solving (Schulz-Hardt et al., 2000; Stasser and Titus, 1985, 2003; Stewart and Stasser, 1998; Stasser and Abele, 2020; Bang and Frith, 2017). As a consequence, groups may fail to leverage their expertise fully when they converge on decisions that reflect a disproportionate focus on a shared subset of information rather than on the full distribution of knowledge. Successful knowledge integration therefore requires group members to recognize relationships between their contributions and to distinguish shared from unshared knowledge (Dehler Zufferey et al., 2010; Engelmann and Hesse, 2010, 2011).
Externalizing knowledge through visual or tangible artifacts is a promising approach to help collaborators share and integrate their knowledge (Engelmann and Hesse, 2010, 2011; Oppl and Stary, 2019). Theories of distributed and external cognition (Hollan et al., 2000; Scaife and Rogers, 1996; Zhang and Norman, 1994) suggest that offloading reasoning from internal memory onto external resources (e.g., technology or knowledge artifacts) can reduce memory and coordination demands by altering a task’s information-processing requirements (Patterson et al., 2014; Risko and Gilbert, 2016; Heersmink, 2021; Skulmowski, 2023; Gilbert et al., 2023). In collaborative settings, external aids and actions, including digital tools and physical objects, can therefore support the team’s coordination and ease integration processes when cognitive resources are limited (Fiore and Wiltshire, 2016). Accordingly, such artifacts can function as scaffolds that structure information processing and regulate working memory demands (van Nooijen et al., 2024). They can also provide shared reference points (i.e., boundary objects) that make knowledge visible, support joint attention, and keep content open to inspection and negotiation (Arias and Fischer, 2000; Tergan and Keller, 2005; Engelmann et al., 2009; Engelmann and Hesse, 2010, 2011; Erkens and Bodemer, 2017, 2019; Oppl and Stary, 2019).
At the same time, externalization does not eliminate the core difficulty of integrating multiple contributors’ knowledge. Processing multiple knowledge sources remains cognitively demanding because users must align information across sources while maintaining intermediate results during verification, especially under constrained working memory (Cowan, 2010; Dillenbourg and Bétrancourt, 2006). Therefore, knowledge visualizations must be deliberately designed to meet the cognitive requirements of specific epistemic tasks (e.g., comparing knowledge within a group) and to support effective processing (Card et al., 1999; Albert and Steiner, 2005a, 2005b; Ghoniem et al., 2005). Meyer’s (2010) typology of knowledge visualizations specifies how design decisions relate to cognitive and epistemic functions across four interdependent dimensions: knowledge type (e.g., declarative vs. procedural), epistemic function (e.g., insight generation vs. group coordination), target audience (e.g., individual vs. group), and presentation format (e.g., sketch vs. map). Complementing this view, Tversky’s (2005) principles of visual cognition emphasize that visual structures should align with users’ mental organization of information (principle of congruence) and be easy and accurate to interpret (principle of apprehension).
1.1 Design dimensions of comparative knowledge visualization
Applied to collaborative settings, these frameworks imply that visualization design should be evaluated in terms of how well it represents group knowledge and supports intended epistemic operations, such as comparing knowledge across contributors to detect shared and unshared information. From this perspective, graph-based formats such as concept maps can represent knowledge in ways that facilitate visual comparison across multiple individuals. These formats often include conceptual and procedural elements and action sequences, and they are commonly used to represent semantically rich knowledge involved in reasoning and problem solving (Novak and Cañas, 2006; Keller and Tergan, 2005; Steiner et al., 2007; Meyer, 2010). For instance, concept maps depict conceptual knowledge as propositions, which are statements that connect two concepts (nodes) through a labeled relationship (Novak, 2004; Novak and Gowin, 1984). This spatial graph arrangement makes concepts and their semantic relations perceptually explicit, supporting efficient retrieval and relational reasoning (Novak and Cañas, 2006; Keller and Tergan, 2005). Visualization tools such as Mental Modeler (Gray et al., 2013) and M-Tool (van den Broek et al., 2021), as well as approaches such as the Knowledge and Information Awareness (KIA) approach (Engelmann et al., 2009; Engelmann and Hesse, 2010) implemented with CmapTools (Cañas et al., 2004; Cañas, 2005), have been developed as visual–spatial knowledge environments for sharing and integrating knowledge. Using such graph-based environments has been shown to enhance communication and support the integration of distributed knowledge in collaborative problem-solving tasks (Engelmann et al., 2009; Engelmann and Hesse, 2010, 2011; Engelmann et al., 2014; Keller et al., 2006).
Despite their benefits, comparatively little is known about how graph-based knowledge visualizations support epistemic operations in groups. In particular, the role of these visualizations in facilitating the visual comparison of information among group members lacks an empirical foundation, even though this process is central to identifying shared and unshared knowledge (Pagendarm and Post, 1995; Engelmann et al., 2014). Importantly, the design of effective visual comparisons may depend not only on how knowledge is represented within individual profiles, but also on how multiple profiles are arranged for comparison within a shared display. Thus, comparative knowledge visualizations can be understood as epistemic interfaces that combine a representation format for externalizing individual conceptual knowledge with a comparison strategy for aligning multiple profiles.
To examine this interface function, we implemented a 2 × 2 comparative visualization design that systematically combined two knowledge representation formats (concept maps vs. proposition lists) and two comparison strategies (superimposition vs. juxtaposition), as shown in Figure 1. We tested how these design dimensions shape users’ understanding of distributed knowledge by measuring participants’ performance in a propositional verification task (see Section 2.2.2).
Figure 1. Simplified example of a full 2 × 2 visualization set displaying three knowledge propositions. The set comprises all combinations of knowledge format (concept maps vs. proposition lists) and comparison strategy (juxtaposition vs. superimposition): (a) Superimposed concept map, (b) Juxtaposed concept maps, (c) Superimposed proposition list, and (d) Juxtaposed proposition lists. To ensure structural comparability, proposition lists mirror the hierarchical layout of the corresponding concept maps. In both formats, each concept is enclosed in a rectangular box. Semantic relations are represented as boxed statements in proposition lists, whereas in concept maps, labeled links are placed above connecting lines to enhance readability. A consistent color-coding scheme is used to differentiate the knowledge contributions of the three fictional individuals: orange = person A (Mia), turquoise = person B (Rita), pink = person C (Paul).
The first design dimension addressed the format of knowledge representation and compared concept maps, a visual–spatial graph format, with proposition lists, a linear-sequential list format. Both formats were designed to make semantic relations explicit and accessible within individual knowledge profiles. Concept maps depict concepts as nodes connected by labeled links (see Figures 1a,b) and can reduce redundancy by reusing concepts across propositions. From a cognitive perspective, it is assumed that the node–link structure can transfer relational reasoning from working memory to the perceptual system (Larkin and Simon, 1987; Sweller et al., 1998; Tversky et al., 2007). Proposition lists present each proposition as a sentence in a line-by-line format (see Figures 1c,d). This format may require more sequential search and impose higher extraneous processing demands in comparison tasks (Maslianko and Sielskyi, 2021). However, proposition lists may support item-level verification by presenting propositions in an explicitly segmented verbal structure, which could reduce spatial search demands when users must check specific statements.
The second design dimension addressed the arrangement of multiple knowledge profiles for comparison and contrasted superimposition with juxtaposition (Gleicher et al., 2011). Superimposed views overlay profiles in a single frame (see Figures 1a,c), enabling direct visual comparison within the “eye span” (Tufte and Graves-Morris, 1983) and facilitating the perception of overlaps and differences (Cleveland and McGill, 1984; Windhager et al., 2020). In contrast, juxtaposed views display individual profiles side by side (see Figures 1b,d). While this preserves source separation, it increases the need for attentional shifts and working memory when users must align corresponding information across displays (Gleicher et al., 2011; Meulemans et al., 2016; Wolfe, 2020; Matlen et al., 2020).
1.2 Task complexity as a boundary condition for visualization effectiveness
Furthermore, design advantages of comparative visualizations should become most apparent as comparison demands increase. From a cognitive load perspective, increasing task complexity primarily raises intrinsic load because more informational elements and relations must be processed and coordinated simultaneously (Sweller et al., 2019; Fiore et al., 2017). Under such conditions, visualization design should matter most when it reduces extraneous processing (e.g., attentional switching, mental alignment, and spatial search) and supports task-relevant processing by making relevant structure easier to perceive and use (Sweller et al., 2019; Tversky, 2005). Task complexity can therefore be manipulated (e.g., by varying how many propositions and sources must be coordinated) to increase comparison demands and serve as a boundary condition for when design advantages are most likely to emerge.
Against this backdrop, we examined the comparative function of visualization design, guided by the principle of application validity. This principle is defined as the extent to which a design supports the successful completion of an intended epistemic task (Ware et al., 2002; Steiner and Albert, 2017a; Steiner and Albert, 2017b). Accordingly, we employed a task-based evaluation approach that assessed visualization effectiveness under systematically varied complexity levels of the propositional verification task. In addition to these objective indicators, we examined users’ perceived cognitive usability, namely how well each design supports key epistemic operations, such as comparing knowledge and accessing group- versus individual-level information (see Section 2.4.3).
The present study addressed the following research questions: (RQ1) How do knowledge representation format (concept maps vs. proposition lists) and visual comparison strategy (superimposition vs. juxtaposition) affect performance in identifying shared and unshared knowledge, as reflected in response time and accuracy? (RQ2) Do these format- and strategy-related performance effects depend on task complexity (low, medium, high), such that differences become more pronounced as comparison demands increase? (RQ3) How do participants rate the cognitive usability of the visualizations for key epistemic operations as a function of knowledge representation format and comparison strategy?
Based on prior literature and the implied design advantages, we derived the following hypotheses. We predicted that concept maps would result in faster response times and greater accuracy than proposition lists (H1a) and that this advantage would increase with task complexity (H1b). We also predicted that superimposed views would outperform juxtaposed views (H2a), with the advantage becoming more pronounced with greater complexity (H2b). Finally, we hypothesized that combining concept maps and superimposed views would produce the best outcomes (H3a), with this advantage increasing with task complexity (H3b). The cognitive usability ratings to evaluate the subjective perception of the visual design were analyzed in an exploratory manner.
2 Methods
2.1 Participants
The final sample consisted of 133 participants, drawn from an initial pool of 137. Four participants were excluded from all analyses due to incomplete data on most rating scales, implausibly short response times (i.e., <6 s) across trials, and error rates exceeding 80%, suggesting insufficient task engagement. Participants were randomly assigned to the concept map condition (n = 63; 21 men, 39 women, 3 nonbinary; age: M = 36.2, SD = 13.3) or the proposition list condition (n = 70; 27 men, 38 women, 5 nonbinary; age: M = 35.5, SD = 11.6). The sample spanned a broad age range (18–65) and was relatively well educated, with comparable educational attainment across conditions (concept map: 70% university degree, 25% high school, 5% compulsory schooling; proposition list: 75% university degree, 23% high school, 2% compulsory schooling). Participants were recruited via social media platforms and university email lists. Eligibility criteria were normal color vision and German language proficiency of at least C1; individuals not meeting these criteria were asked not to participate. No monetary compensation or course credit was provided.
2.2 Material and task
2.2.1 Stimuli: comparative knowledge visualizations
A total of nine visualization sets were created using yEd (yWorks1). Each set contained four visualizations representing all possible combinations of two knowledge formats (concept maps vs. proposition lists) and two comparison strategies (juxtaposition vs. superimposition). Across the three fictional individuals shown in each visualization, the displayed knowledge comprised a total of 16 propositions on a mental health topic, with some propositions shared and others unique to individual profiles. Propositions were formulated as declarative statements linking two concepts via a semantic relation (e.g., meditation reduces mental stress), using plain, non-technical language throughout. All visualizations were presented as static images with a fixed resolution of 1,280 × 1,024 pixels and had no interactive features. Individual profiles were distinguished using a consistent color-coding scheme: orange (#ffdb81) for person A, turquoise (#9fd1cf) for person B, and pink (#ffb7de) for person C (see Figure 1).
2.2.2 Visual comparison task: verifying statements across three profiles
Participants completed an 18-trial visual comparison task to evaluate the effectiveness of the visualization design in facilitating quick and accurate identification of shared and unshared knowledge. In each trial, participants decided whether a written statement was true or false based on the information shown in the visualization. Each statement described how specific knowledge elements (propositions) were distributed among three fictional group members. Participants compared the depicted knowledge profiles to verify each statement (e.g., determining whether a proposition was shared by multiple members or unique to one).
The complexity of the task was manipulated by varying the number of propositions and individuals referenced in the statement. This yielded three predefined levels: low, medium, and high. In line with cognitive load theory, task complexity is defined as the number of informational elements in a statement that must be processed and integrated to complete the task. Task complexity differs from task difficulty, which refers to the subjective mental effort experienced while performing the task (Huang et al., 2009; Liu and Li, 2012; Newton et al., 2023). Low complexity involved one proposition and one individual (e.g., “Paul knows that meditation reduces mental stress”). Medium complexity involved one proposition distributed across three individuals (e.g., “Mia and Rita, but not Paul, know that meditation controls anxiety”). High complexity involved two propositions distributed across three individuals (e.g., “All three know that meditation lengthens attention span, but only Rita knows that physical fitness reduces the risk of certain diseases”). Consequently, higher-complexity statements require the coordination of more elements and therefore impose higher intrinsic processing demands than low-complexity statements, regardless of the visualization design. This manipulation allowed us to test whether visualization strategies support statement verification under increasing processing demands (i.e., higher intrinsic task complexity).
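To make the verification logic concrete, the following minimal Python sketch represents each fictional member’s knowledge as a set of propositions and evaluates one example statement per complexity level. The propositions and their assignment to Mia, Rita, and Paul are invented for this illustration and do not reproduce the actual study materials.

```python
# Illustrative sketch of the statement-verification logic behind the task.
# Knowledge assignments below are invented, not the actual study materials.
knowledge = {
    "Mia":  {"meditation reduces mental stress", "meditation controls anxiety",
             "meditation lengthens attention span"},
    "Rita": {"meditation controls anxiety", "meditation lengthens attention span",
             "physical fitness reduces the risk of certain diseases"},
    "Paul": {"meditation reduces mental stress", "meditation lengthens attention span"},
}

def knows(person: str, proposition: str) -> bool:
    return proposition in knowledge[person]

# Low complexity: one proposition, one individual.
low = knows("Paul", "meditation reduces mental stress")

# Medium complexity: one proposition checked across all three individuals.
medium = (knows("Mia", "meditation controls anxiety")
          and knows("Rita", "meditation controls anxiety")
          and not knows("Paul", "meditation controls anxiety"))

# High complexity: two propositions checked across all three individuals.
high = (all(knows(p, "meditation lengthens attention span") for p in knowledge)
        and knows("Rita", "physical fitness reduces the risk of certain diseases")
        and not any(knows(p, "physical fitness reduces the risk of certain diseases")
                    for p in ("Mia", "Paul")))

print(low, medium, high)  # True True True
```

As the sketch shows, higher-complexity statements require more person–proposition checks, which is the sense in which the manipulation increases the number of elements to be coordinated.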
2.3 Procedure
The study followed a 2 × 2 × 3 mixed factorial design with knowledge format (concept map vs. proposition list) as a between-subjects factor and comparison strategy (juxtaposition vs. superimposition) and task complexity (low, medium, high) as within-subjects factors. Participants were assigned to one knowledge format condition (concept map or proposition list) and completed the visual comparison task under both comparison strategies. Comparison strategy was manipulated within participants in two blocked phases (nine trials each), whereas task complexity varied within each block. To control for sequence effects, the two comparison strategies were administered in two counterbalanced block orders (juxtaposition → superimposition vs. superimposition → juxtaposition), yielding four between-subject conditions (knowledge format × block order) to which participants were randomly assigned.
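As an illustration of the resulting design cells, the following Python sketch enumerates the four between-subject conditions (knowledge format × block order) and randomly assigns a participant to one of them; the concrete assignment and randomization procedure used in the study is not reported in detail, so this implementation is an assumption.

```python
# Illustrative sketch of the 2 x 2 x 3 design structure and random assignment.
import itertools
import random

formats = ["concept map", "proposition list"]           # between-subjects factor
block_orders = [("juxtaposition", "superimposition"),    # counterbalanced order of the
                ("superimposition", "juxtaposition")]    # two within-subject blocks
complexity_levels = ["low", "medium", "high"]            # varied within each block

# The four between-subject cells (knowledge format x block order)
conditions = list(itertools.product(formats, block_orders))

# Simple random assignment; cell sizes may then differ slightly,
# as in the reported sample (n = 63 vs. n = 70 per format).
fmt, order = random.choice(conditions)
print("format:", fmt, "| block order:", order)

# Within each block, three trials per complexity level in randomized order
block_trials = random.sample(complexity_levels * 3, k=9)
print(block_trials)
```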
After providing informed consent, participants completed demographic questions and the familiarity items described in Section 2.4.1. Participants then completed the visual comparison task in two blocks of nine trials. At the beginning of each block, participants were shown a simplified example of the upcoming visualization display and completed two practice trials of moderate complexity with explanatory feedback. The first block used one comparison strategy (juxtaposition or superimposition), and the second block used the alternative strategy. Within each block, comparison strategy was held constant, and participants completed three trials per complexity level (low, medium, high), presented in randomized order. Trial materials were drawn from a larger pool of preconstructed statements and corresponding visualizations, such that participants did not necessarily see identical item sets. After each block, participants rated the cognitive usability of the provided group knowledge visualization using four items (see Section 2.4.3).
2.4 Measures
2.4.1 Participant information
Participants completed a short questionnaire assessing their demographic background and familiarity with different visual information formats. Demographic information included gender (male, female, or other), age (in years), and highest level of completed education (primary, secondary, high school, or college/university).
To assess participants’ familiarity with information visualizations comparable to those used in the study, we asked them to rate their agreement with three items on a 6-point Likert scale (0 = no experience, 5 = high experience); familiarity ratings did not differ between the concept map and proposition list groups (all ps ≥ 0.38; see Supplementary material S1). The items were:
• Data visualization: “I have experience with statistical data visualizations (e.g., bar charts, scatterplots).”
• Network visualization: “I have experience with network visualizations (e.g., node-link diagrams).”
• Table visualization: “I have experience with table visualizations (e.g., spreadsheet-like matrices).”
2.4.2 Task performance measures
The experiment was implemented as a web-based application using the jsPsych framework2. Participants responded via mouse or trackpad, and accuracy and response times were recorded automatically. Accuracy rates (ARs) captured response correctness in the visual comparison task. Each trial was scored as 1 for a correct true/false decision and 0 for an incorrect decision. Within each task block, accuracy was summarized separately for each task complexity level as the number of correct responses across the three trials (range = 0–3). Response times (RTs) captured processing efficiency and were defined as the latency (in milliseconds) between trial onset and the participant’s response.
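The following Python sketch illustrates how trial-level logs of this kind could be aggregated into the reported measures; the column names and values are illustrative assumptions rather than the study’s actual data.

```python
# Aggregating toy trial-level data into the reported measures; column names
# and values are illustrative assumptions.
import pandas as pd

trials = pd.DataFrame({
    "participant": [1] * 6,
    "strategy":    ["superimposition"] * 3 + ["juxtaposition"] * 3,
    "complexity":  ["low", "medium", "high"] * 2,
    "correct":     [1, 1, 0, 1, 0, 1],          # 1 = correct true/false decision
    "rt_ms":       [14800, 21900, 35100, 19300, 27600, 41200],
})

# Accuracy rate: number of correct responses per complexity level within each
# block (in the study, three trials per cell, so the range is 0-3).
accuracy = trials.groupby(["participant", "strategy", "complexity"])["correct"].sum()

# Response time: latency (ms) between trial onset and response, correct trials only.
rt_correct = (trials[trials["correct"] == 1]
              .groupby(["participant", "strategy", "complexity"])["rt_ms"].mean())

print(accuracy)
print(rt_correct)
```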
2.4.3 Cognitive usability ratings
After each task block, participants answered a short, four-item questionnaire assessing the perceived cognitive usability of the provided group visualization. The items addressed conceptually distinct functions and were formulated to ensure high face validity. Participants rated each item on a six-point Likert scale ranging from 0 (not at all) to 5 (completely). The items targeted the following functional dimensions:
• Visual comparison: “The visualization made it easy to compare multiple knowledge profiles.”
• Visual clarity: “The visualization was clear and easy to read.”
• Group knowledge access: “It was easy to see what the group collectively knew.”
• Individual knowledge access: “It was easy to identify what each group member knew.”
3 Results
Prior to statistical analysis, the data were screened for potential outliers. Isolated extreme values in response time data were identified for 11 participants (>80 s), suggesting temporary task disengagement. To balance sensitivity and data retention, we applied Tukey’s interquartile range (IQR) method using a liberal cutoff multiplier of 2.2 (Tukey, 1977; cf. Wilcox, 2012). Single-trial outliers were winsorized, that is, replaced with the nearest non-outlier value within the same distribution (Erceg-Hurn and Mirosevich, 2008).
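As a concrete illustration, the sketch below applies Tukey’s fences with the 2.2 multiplier to a toy set of response times and winsorizes values outside the fences. Whether fences were computed per participant, per condition, or over a pooled trial distribution is not detailed here, so that choice is an assumption.

```python
import numpy as np

def winsorize_iqr(values, k=2.2):
    """Winsorize values outside Tukey's fences (Q1 - k*IQR, Q3 + k*IQR),
    replacing them with the nearest non-outlier value in the distribution."""
    x = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    lower, upper = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
    inside = x[(x >= lower) & (x <= upper)]
    return np.clip(x, inside.min(), inside.max())

# A single extreme trial (> 80 s) is pulled back to the slowest non-outlier
# response time in the same distribution.
rts_s = np.array([18.4, 21.0, 24.7, 19.9, 86.3, 22.5])
print(winsorize_iqr(rts_s))  # 86.3 is replaced by 24.7
```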
All analyses were preceded by assumption checks for normality, homogeneity of variance, and sphericity. Where applicable, Greenhouse–Geisser corrections were applied when sphericity was violated, and Bonferroni-adjusted post hoc tests followed significant omnibus effects.
3.1 Task performance
We analyzed task performance using a mixed-design ANOVA with knowledge format (concept maps vs. proposition lists) as a between-subjects factor and comparison strategy (superimposition vs. juxtaposition) and task complexity (low, medium, high) as within-subjects factors. Two dependent measures were analyzed in separate models and are reported separately: response time (RT; correct trials only) and accuracy rate (AR; 0–3 correct per complexity level within each task block).
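The analysis software is not specified in this section, so the following Python sketch (using pingouin on synthetic data) is only an assumption about how comparable models could be specified. Because pingouin offers no single call for a one-between, two-within design, the sketch decomposes the analysis for illustration: a format × strategy mixed ANOVA on response times collapsed across complexity, and a strategy × complexity repeated-measures ANOVA within each format group.

```python
# Hedged sketch of comparable ANOVA models in pingouin; data are synthetic,
# and the full 2 x 2 x 3 mixed design is decomposed for illustration.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(42)
rows = []
for p in range(1, 41):
    fmt = "concept map" if p % 2 else "proposition list"
    for strat in ("superimposition", "juxtaposition"):
        for comp in ("low", "medium", "high"):
            rt = rng.normal(25 + 5 * (comp == "high") + 3 * (strat == "juxtaposition"), 4)
            rows.append({"participant": p, "format": fmt, "strategy": strat,
                         "complexity": comp, "rt": rt})
df = pd.DataFrame(rows)

# Format (between) x strategy (within), with RT collapsed across complexity
collapsed = df.groupby(["participant", "format", "strategy"], as_index=False)["rt"].mean()
print(pg.mixed_anova(data=collapsed, dv="rt", within="strategy",
                     subject="participant", between="format"))

# Strategy x complexity (both within), run separately per format group
for fmt, sub in df.groupby("format"):
    print(fmt)
    print(pg.rm_anova(data=sub, dv="rt", within=["strategy", "complexity"],
                      subject="participant"))
```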
3.1.1 Response time
The response time increased with task complexity (Table 1). The mixed ANOVA showed a robust main effect of task complexity, F(1.83, 239.72) = 162.52, p < 0.001, ηp2 = 0.554. Collapsed across knowledge format and comparison strategy, estimated marginal means (EMMs) increased from M = 20.58 s (SE = 0.69) at low complexity to M = 23.66 s (SE = 0.86) at medium complexity and M = 33.10 s (SE = 1.03) at high complexity (see also Table 2). Bonferroni-adjusted post hoc comparisons indicated that all three complexity levels differed (all ps < 0.001), confirming that the complexity manipulation increased processing demands.
Table 1. Estimated marginal means for response time (s) by knowledge format (map vs. list), comparison strategy (superimposition vs. juxtaposition), and task complexity (low, medium, high).
Comparison strategy also affected processing efficiency in terms of RT (see Table 3). Participants responded faster with superimposed than with juxtaposed views, F(1, 131) = 33.16, p < 0.001, ηp2 = 0.202; collapsed EMMs were M = 23.19 s (SE = 0.78) for superimposition and M = 28.37 s (SE = 0.98) for juxtaposition (Table 2). However, the strategy × complexity interaction did not reach significance. Bonferroni-adjusted simple-effects comparisons nevertheless showed faster responses under superimposition at each complexity level (all ps ≤ 0.003), with numerically larger differences at higher complexity (see Table 1).
Knowledge format showed no significant effect on response times and did not interact with strategy or complexity (all ps ≥ 0.084; see Table 3). Overall, RT results supported the predicted benefit of superimposition (H2a) but did not provide clear evidence for an increased advantage under higher task complexity (H2b), nor for format-related advantages (H1a/H1b) or the predicted format × strategy benefit (H3a/H3b).
3.1.2 Accuracy rate
The accuracy rate decreased with task complexity (see Table 4). The mixed ANOVA revealed a significant main effect of task complexity, F(2, 262) = 10.69, p < 0.001, ηp2 = 0.075. Collapsed across knowledge format and comparison strategy, EMMs (0–3 correct per complexity level) declined from M = 2.870 (SE = 0.024) at low complexity to M = 2.831 (SE = 0.027) at medium complexity and M = 2.726 (SE = 0.033) at high complexity (see Table 5). Bonferroni-adjusted comparisons indicated no difference between low and medium complexity (p = 0.595), but significant declines from low to high (p < 0.001) and from medium to high (p = 0.007).
Table 4. Estimated marginal means for accuracy rate (0–3) by knowledge format (map vs. list), comparison strategy (superimposition vs. juxtaposition), and task complexity (low, medium, high).
Comparison strategy also affected the accuracy with which participants completed the task (see Table 6). Participants were more accurate with superimposition than juxtaposition, F(1, 131) = 16.14, p < 0.001, ηp2 = 0.110. The collapsed EMMs were M = 2.872 (SE = 0.022) for superimposition and M = 2.746 (SE = 0.031) for juxtaposition (see also Table 5). This benefit depended on task complexity, as indicated by a significant strategy × complexity interaction, F(1.87, 245.41) = 4.76, p = 0.011, ηp2 = 0.035. Simple-effects comparisons showed that the superimposition advantage was not reliable at low complexity (p = 0.961) but was significant at medium (p = 0.008) and high complexity (p < 0.001), with the largest advantage in the high-complexity condition.
No effects involving the knowledge format were significant (Table 6), although the format × complexity interaction was marginal (p = 0.058). In sum, accuracy results supported the predicted advantage of superimposition (H2a) and its amplification under higher task complexity (H2b) but did not support format advantages (H1a/H1b) or the predicted format × strategy benefit (H3a/H3b).
3.2 Cognitive usability ratings
To assess perceived cognitive usability, we analyzed four ratings collected after each task block (visual comparison, visual clarity, access to group knowledge, and access to individual knowledge). For each rating, we conducted a 2 (representation format; between-subjects) × 2 (comparison strategy; within-subjects) mixed ANOVA. Descriptive statistics are reported in Table 7, and inferential statistics are summarized in Table 8. Because multiple usability tests were conducted on related outcomes, Bonferroni/Holm adjustments were applied; all effects reported as significant remained significant after correction (corrected p < 0.007).
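A hedged sketch of this analysis pipeline is shown below, using pingouin and statsmodels on synthetic data (both tooling choices are assumptions): one 2 × 2 mixed ANOVA per rating, followed by a Holm correction of the four comparison-strategy p values.

```python
# Hedged sketch: one 2 (format, between) x 2 (strategy, within) mixed ANOVA
# per usability rating, with Holm correction across the four strategy effects.
# Data and column names are synthetic assumptions.
import numpy as np
import pandas as pd
import pingouin as pg
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(7)
ratings = ["comparison", "clarity", "group_access", "individual_access"]
rows = []
for p in range(1, 41):
    fmt = "concept map" if p % 2 else "proposition list"
    for strat in ("superimposition", "juxtaposition"):
        for item in ratings:
            rows.append({"participant": p, "format": fmt, "strategy": strat,
                         "item": item, "rating": rng.integers(0, 6)})  # 0-5 scale
df = pd.DataFrame(rows)

pvals = []
for item in ratings:
    aov = pg.mixed_anova(data=df[df["item"] == item], dv="rating",
                         within="strategy", subject="participant", between="format")
    pvals.append(aov.loc[aov["Source"] == "strategy", "p-unc"].iloc[0])

reject, p_holm, *_ = multipletests(pvals, method="holm")
print(dict(zip(ratings, p_holm)))
```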
Table 7. Means and standard deviations for each usability dimension by visualization format (concept map vs. proposition list) and comparison strategy (juxtaposed vs. superimposed).
Visual comparison. Superimposed views were rated as more helpful for comparing distributed knowledge than juxtaposed views, reflected in a significant main effect of comparison strategy, F(1, 130) = 10.28, p = 0.002, ηp2 = 0.073. However, no main effect was found for knowledge format, nor was there a knowledge format × comparison strategy interaction (see Table 8).
Table 8. Inferential statistics for cognitive usability ratings from the four separate mixed ANOVAs.
Visual clarity. Juxtaposed views were rated as clearer and more readable than superimposed views, reflected in a significant main effect of comparison strategy, F(1, 129) = 11.05, p = 0.001, ηp2 = 0.079. Once more, the main effect of knowledge format was not significant, nor was there a knowledge format × comparison strategy interaction (see Table 8).
Access to group knowledge. Ratings favored superimposition overall, F(1, 131) = 13.33, p < 0.001, ηp2 = 0.092, and this effect depended on representation format, as shown by a significant format × strategy interaction, F(1, 131) = 6.66, p = 0.011, ηp2 = 0.048 (Table 8). Follow-up comparisons indicated that superimposition increased perceived access to group knowledge in the concept map condition, t(62) = 4.12, p < 0.001, d = 0.52, but not in the proposition list condition, t(69) = 0.81, p = 0.42, d = 0.10.
Access to individual knowledge. Juxtaposed views were rated as more helpful for identifying individual knowledge than superimposed views, reflected in a significant main effect of comparison strategy, F(1, 131) = 11.00, p = 0.001, ηp2 = 0.077 (Table 8). A paired-samples t-test confirmed the comparison effect, t(132) = −3.35, p = 0.001, d = 0.29. However, neither the main effect of format nor the interaction effect was significant (see Table 8).
4 Discussion
This study examined how two fundamental design features of comparative group knowledge visualizations affect users’ ability to identify shared and unshared knowledge within a group. These features are the knowledge format (concept maps versus proposition lists) and the visual comparison strategy (juxtaposition versus superimposition). The results revealed a significant main effect for comparison strategy: Participants completed visual comparison tasks more accurately and efficiently with superimposed views than with juxtaposed ones. In contrast, no main effect was found for knowledge format, suggesting that concept maps and proposition lists were equally usable for the given task. Furthermore, performance differences between the comparison strategies increased with task complexity, indicating that perceptual alignment becomes more beneficial under higher cognitive load. These results will be discussed in more detail in the following sections.
4.1 Knowledge representation format: structural richness is not sufficient
We expected concept maps to outperform proposition lists because of their spatial organization of semantically interconnected information. These node-link structures are thought to promote semantic integration, pattern recognition, and conceptual understanding by grouping related ideas together, which facilitates perceptual grouping and semantic chunking (Novak and Cañas, 2008; Budé et al., 2009; Tergan, 2005; Engelmann and Hesse, 2010; Meyer, 2010). From a cognitive science perspective, concept maps are regarded as reducing relational inference load by offloading integration processes to the visual display, thereby supporting the construction of coherent mental models (Larkin and Simon, 1987; Johnson-Laird, 1983). However, concept maps can also impose additional demands on visual attention and spatial working memory, particularly when precise, item-level comparisons are required (Ware, 2004; Healey and Enns, 2012; Cowan, 2001).
Contrary to our hypothesis, participants performed equally well when using concept maps or proposition lists. This challenges the assumption that structurally richer graph-based representations provide superior cognitive support for tasks requiring the identification of shared and unshared conceptual knowledge across sources (Tergan and Keller, 2005; Engelmann and Hesse, 2010, 2011; Engelmann et al., 2014). Instead, the findings support the view that representational effectiveness depends on how well a format aligns with the cognitive and epistemic demands of the task (Ghoniem et al., 2005; Steiner and Albert, 2017a, 2017b).
In the present study, participants were tasked with identifying which of three individuals possessed a particular piece of knowledge or proposition. This involved verifying discrete conceptual units at the item level, a task that is potentially better supported by a linear, sentence-based list structure. The sequential layout of these lists may have facilitated visual search by reducing spatial disorientation and the demand on spatial working memory (Potelle and Rouet, 2003; van Oostendorp and Goldman, 1999).
These findings align with research in information visualization showing that linear, text-based representations can support efficient search and verification, particularly when tasks require stepwise access to specific content (Larkin and Simon, 1987). In our study, proposition lists presented clearly segmented sentences in a linear, top-down format, likely enabling users to conduct line-by-line comparisons externally without mentally tracking spatial relations or integrating content across a broader visual field (Maslianko and Sielskyi, 2021).
From this perspective, concept maps may offer advantages for tasks involving open-ended reasoning, structural insight, or conceptual exploration. However, in structured comparison tasks focused on factual verification, structural richness alone does not necessarily translate into greater comparative functionality. Future research could employ within-subject designs in which participants work with both concept maps and proposition lists across various task types. Such studies would clarify whether concept maps provide greater support in tasks with higher semantic complexity or open-ended inferential demands, whether proposition lists are more effective for source-specific verification, and how representational benefits depend on cognitive load and task requirements, providing a more informed basis for matching visualization formats to specific cognitive goals.
4.2 Visual comparison strategy: the role of perceptual alignment
The type of visual comparison strategy used had a robust and consistent influence on task performance. Participants responded faster and more accurately when using superimposed views than juxtaposed views to identify how knowledge was distributed among group members.
However, the extent to which this advantage depended on task complexity differed between response time and accuracy. For response times, superimposition yielded faster processing at all complexity levels, but the strategy × complexity interaction did not reach conventional significance, indicating that the speed advantage was stable across levels of task demand. For accuracy, however, the benefit of the strategy increased with task complexity: performance differences were negligible at low complexity but became reliable at medium and high complexity, with the largest advantage observed at high complexity. This pattern suggests that perceptual alignment through superimposition generally supports efficient comparisons and is particularly consequential for accuracy when tasks demand the maintenance and integration of multiple propositions during verification.
In line with previous research, these results support the notion that superimposed views offer cognitive advantages when users must detect structural correspondences across multiple sources, especially under high task complexity or when semantic proximity facilitates grouping (Gleicher et al., 2011; Javed et al., 2010; Windhager et al., 2020).
From a cognitive load perspective, this pattern suggests that perceptual alignment decreases extraneous load by enabling users to compare related knowledge elements within a single, integrated view (Sweller et al., 2019; Keller and Grimm, 2005). This minimizes attentional shifts, mental alignment, and spatial memory operations, especially when users must detect overlapping propositions or conceptual gaps (Ware, 2004; Gleicher et al., 2011).
From a design perspective, these results support comparison strategies that actively promote perceptual integration, especially in tasks involving complex, distributed information. Juxtaposed views preserve source separability and facilitate content attribution or individual perspective tracking (Bodemer and Scholvien, 2008; Engelmann et al., 2009). However, superimposed views better support integrative reasoning by offloading structural alignment onto the visual display itself (Gleicher et al., 2011; Windhager et al., 2020).
The cognitive usability ratings of the participants further support these implications, as discussed in the following section.
4.3 Cognitive usability: functional differences of comparison strategies
To examine how participants experienced the cognitive affordances of the visualizations, we asked them to evaluate each group visualization format across four key dimensions of cognitive usability: visual clarity; ease of comparing knowledge; ease of accessing group-level knowledge; and ease of accessing individual knowledge. These ratings provide a subjective complement to objective performance data, capturing how participants perceived the alignment between the visual design and the epistemic demands of the task.
Overall, the results suggest that the two comparison strategies differ systematically in their epistemic affordances. Superimposed views were rated as more helpful for comparing distributed knowledge and gaining an overview of group-level content. By contrast, juxtaposed views were perceived as more effective for identifying individual contributions and were rated higher in visual clarity. This functional divergence aligns with prior information visualization findings: superimposition integrates multiple sources into a shared display. This reduces the need for internal coordination and enables direct perceptual alignment across representations (Gleicher et al., 2011; Gleicher, 2017; Matlen et al., 2020; Windhager et al., 2020). From a cognitive perspective, this alignment reduces the demands on working memory by offloading the processes of mental integration onto the visual display (Sweller et al., 1998; Cowan, 2010). This supports the efficient detection of overlaps and redundancies across knowledge profiles.
In contrast, juxtaposition spatially separates individual representations and preserves their source-specific structure. This layout is ideal for tasks in which users want to isolate and explore a single contributor’s knowledge without interference from others. By presenting each profile in a separate visual space, juxtaposed layouts reduce perceptual complexity and facilitate selective attention, which may be particularly beneficial when individual accountability or knowledge origin is relevant (Engelmann et al., 2009; Dehler Zufferey et al., 2010; Erkens and Bodemer, 2017, 2019).
Notably, the perceived advantage of superimposition for accessing group-level knowledge was only observed in the concept map condition. When participants worked with proposition lists, both comparison strategies received similar ratings. This suggests that the cognitive benefit of perceptual alignment is particularly relevant when semantic content is distributed across a graph-based, spatial layout. Concept maps represent conceptual propositions as node-link structures that arrange elements spatially across the canvas. Although juxtaposed maps use consistent node placement across individual profiles, superimposition enhances the integrative function by directly overlaying corresponding nodes and links into a single, shared frame. This reduces the need for attentional shifts and mental integration across multiple representations, supporting direct visual detection of shared elements through proximity-based perceptual grouping.
In contrast, proposition lists present knowledge sequentially in a sentence-based format, where conceptually related content is already grouped via verbal structure. Here, visual alignment, such as that enabled by superimposition, may add less perceptual or cognitive benefit, as the linear format already guides users through semantically structured content without requiring spatial integration.
Taken together, these findings suggest that the perceived cognitive benefit of a certain type of comparative visualization does not follow a one-size-fits-all logic. Rather, it emerges from the functional alignment between layout structure, task-specific reasoning demands, and user needs. Based on this understanding, future research could focus on developing cognitively adaptive visualization systems that can dynamically adjust interface layouts according to users’ cognitive style, task complexity, and perceptual load (e.g., Steichen and Fu, 2019; Yelizarov and Gamayunov, 2014).
4.4 Design guidelines for comparative knowledge visualization
Findings from the present task context (static displays and propositional verification across three profiles) and the cognitive usability ratings indicate that no design strategy is universally superior for comparative knowledge visualizations. Rather, the effectiveness of a design strategy depends on its alignment with the primary epistemic goal and the task’s processing demands (cf. Meyer, 2010; Steiner and Albert, 2017a, 2017b). For example, users may need integrative comparison or source-specific inspection. Based on this goal-demand match, we derive four practical recommendations for structuring comparative displays that integrate and differentiate information from multiple sources.
First, superimposed layouts are advantageous when users must rapidly detect overlaps and gaps across profiles, particularly when the demands of comparison increase. In the present study, superimposition supported faster responses at all complexity levels and higher accuracy under medium and high task complexity, which is consistent with the idea that perceptual alignment reduces the need for attentional shifting and mental integration during comparison (Gleicher et al., 2011; Windhager et al., 2020).
Second, juxtaposed layouts are preferable when users must attribute knowledge to specific contributors or inspect individual profiles with minimal interference. Juxtaposition preserves source separability and, in the present study, was rated as clearer and more helpful for identifying individual knowledge. This suggests that its lower perceptual density can facilitate selective attention when source-specific inspection is the primary goal (Engelmann et al., 2009; Dehler Zufferey et al., 2010; Erkens and Bodemer, 2017, 2019).
Third, the absence of a significant difference in performance between concept maps and proposition lists in the present task suggests that representational effectiveness depends on task requirements rather than structural richness alone. For propositional verification tasks requiring the checking of discrete knowledge units, list-based formats may support efficient inspection through sequential, line-by-line access. Conversely, graph-based formats may be advantageous when the amount of conceptual content increases because node-link representations can reuse concepts across propositions, allowing for more compact scaling than sentence lists.
Fourth, when both epistemic goals (integrative overview and source attribution) are relevant within the same workflow, a plausible design response is an adaptive or hybrid interface that allows users to switch between layouts depending on the demands of the task or cognitive constraints (Otjacques and Feltz, 2005). For instance, users could start with a superimposed view to identify shared and unshared patterns at the group level, and then switch to a juxtaposed view to verify specific contributions with greater clarity and reduced interference.
4.5 Future research
This study examined how users performed with, and perceived, different visual layouts when comparing distributed knowledge profiles. However, group knowledge visualizations are primarily used in collaborative contexts where reasoning is socially distributed, cognitively demanding, and highly dynamic. Therefore, future research could examine how the tested design dimensions (comparison strategy and knowledge format) support shared reasoning, joint attention, and effective coordination in real-world group settings (Janssen and Bodemer, 2013; Buder, 2017).
One avenue of research is to test the usability and epistemic effects of these design features in ecologically valid environments. Examples of these environments include interdisciplinary workshops, co-creation sessions, and instructional settings where participants explore and integrate conceptual knowledge collaboratively. Such settings would allow researchers to study cognitive outcomes and socio-epistemic phenomena, including awareness of knowledge asymmetries, negotiation of shared understanding, and attribution of informational responsibility (Bromme and Goldman, 2014; Engelmann et al., 2009).
To better understand the cognitive processes at play, future studies should use methods like eye tracking, interaction logging, and think-aloud protocols (Rayner, 1998; Holmqvist et al., 2011). These methods can reveal how users allocate attention, construct mental models, and resolve representational conflicts, especially in tasks requiring cross-profile integration or source-specific attribution. Cognitive modeling based on these methods could refine theoretical assumptions about perceptual alignment, working memory offloading, and integration effort (Sweller et al., 2019; Johnson-Laird, 1983; Cowan, 2010).
Another critical avenue of exploration is how individual user characteristics moderate the effectiveness of different layout configurations. Cognitive style (Mayer and Massa, 2003), metacognitive regulation (Flavell, 1979), and visualization literacy (Boy et al., 2014) may influence users’ ability to flexibly extract and integrate conceptual content, especially in complex situations. Adaptive visualization systems that respond to user profiles or reasoning phases could optimize layout structure dynamically and reduce unnecessary processing load (de Jong and van Joolingen, 1998; Bodemer and Scholvien, 2008).
In addition to person-based adaptivity, task-phase adaptivity presents another opportunity. Depending on whether users are exploring, comparing, or justifying knowledge, comparative visualizations may serve distinct cognitive functions. Dynamic systems that switch between layouts (e.g., from a superimposed map to a juxtaposed list) could better align with the evolving epistemic demands of distributed reasoning than static designs.
Together, these research directions highlight the importance of investigating comparative visualizations as epistemic tools whose effectiveness depends on their alignment with cognitive, social, and representational constraints, rather than merely as static information displays. Therefore, a cognitively informed design science of group visualizations must integrate insights from working memory theory, mental model theory, and visual attention research while grounding its designs in real-world contexts such as group reasoning and collaborative problem solving.
5 Conclusion
This study provides empirical evidence that the effectiveness of group knowledge visualizations in supporting their comparative function depends more on the chosen visual comparison strategy than on a conceptual-structural knowledge format. While concept maps and proposition lists yielded comparable performance in a propositional comparison task, the comparison strategy significantly influenced both accuracy and efficiency. Participants performed better with superimposed views, particularly as task complexity increased, suggesting that perceptual alignment is critical for managing cognitive load in knowledge comparison tasks.
The findings highlight a key insight for the design of collaborative visualization tools: effectiveness depends not merely on structural expressiveness but on the fit between visualization features and task demands. Superimposed layouts offer perceptual support for integration and synthesis, whereas juxtaposed layouts support separation and attribution. Designers could consider implementing hybrid or adaptive systems that allow users to shift between views depending on their current epistemic focus.
By integrating perspectives from cognitive load theory, visual cognition, and group awareness research, this study contributes to a better understanding of how knowledge representations and comparison strategies interact to support reasoning in distributed knowledge environments. Future research should examine how these findings scale to more complex, interactive, and socially dynamic settings, where visual reasoning, perspective-taking, and shared understanding must be negotiated in real time.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving humans were approved by Hon. Prof. (FH) Univ.-Doz. Dr. Peter Lechner, MAS Chair of the Ethics Committee of the University for Continuing Education Krems (Universität für Weiterbildung Krems). The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.
Author contributions
NH: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Validation, Visualization, Writing – original draft, Writing – review & editing. DA: Conceptualization, Supervision, Writing – review & editing.
Funding
The author(s) declared that financial support was received for this work and/or its publication. The authors acknowledge the financial support by the University of Graz.
Acknowledgments
The authors would like to thank Anja Ischebeck for her support and valuable feedback on this manuscript.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that Generative AI was used in the creation of this manuscript. Generative AI was used exclusively for grammar checking. No AI tools were used for generating or editing scientific content. The authors verify and take full responsibility for all aspects of the manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2026.1684634/full#supplementary-material
Footnotes
References
Albert, D., and Steiner, C. (2005a). Representing domain knowledge by concept maps: how to validate them? in Proceedings of the 2nd Joint Workshop of Cognition and Learning through Media-Communication for Advanced e-Learning (JWCL). eds. T. Okamoto, D. Albert, T. Honda, and F. W. Hesse (Tokyo, Japan: Sophia University), 169–174.
Albert, D., and Steiner, C. M. (2005b). Empirical validation of concept maps: preliminary methodological considerations. In Proceedings of the Fifth IEEE international conference on advanced learning technologies (ICALT'05) (pp. 952–953). Los Alamitos, CA, USA: IEEE Computer Society.
Arias, E. G., and Fischer, G. (2000). Boundary objects: their role in articulating the task at hand and making information relevant to it. In International symposium on Interactive & Collaborative Computing (ICC), Wollongong, Australia (pp. 567–574). Rochester: ICSC Academic Press. https://l3d.colorado.edu/wp-content/uploads/2016/04/icsc2000.pdf
Bang, D., and Frith, C. D. (2017). Making better decisions in groups. R. Soc. Open Sci. 4:170193. doi: 10.1098/rsos.170193
Blomkamp, E. (2018). The promise of co-design for public policy. Aust. J. Public Adm. 77, 729–743. doi: 10.1111/1467-8500.12310
Bodemer, D., and Scholvien, A. (2008). Support for collaborative multimedia learning: considering the individual and the group. ICCE 2008 proceedings, 245–252. https://api.semanticscholar.org/CorpusID:16972460
Boy, J., Rensink, R. A., Bertini, E., and Fekete, J.-D. (2014). A principled way of assessing visualization literacy. IEEE Trans. Vis. Comput. Graph. 20, 1963–1972. doi: 10.1109/TVCG.2014.2346984
Bromme, R., and Goldman, S. R. (2014). The public’s bounded understanding of science. Educ. Psychol. 49, 59–69. doi: 10.1080/00461520.2014.921572
Budé, L., Imbos, T., van de Wiel, M. W. J., Broers, N. J., and Berger, M. P. F. (2009). The effect of directive tutor guidance in problem-based learning of statistics on students’ perceptions and achievement. High. Educ. 57, 23–36. doi: 10.1007/s10734-008-9130-8
Buder, J. (2017). A conceptual framework of knowledge exchange. In The psychology of digital learning: Constructing, exchanging, and acquiring knowledge with digital media (pp. 105–122). Cham: Springer International Publishing. https://files.znu.edu.ua/files/Bibliobooks/Inshi61/0044873.pdf#page=114
Cañas, A. J. (2005). “A concept map-based knowledge model: a tool for conceptual knowledge structuring” in Knowledge and information visualization: Searching for synergies. eds. S. O. Tergan and T. Keller, vol. 3426 (Berlin, Heidelberg: Springer), 145–159.
Cañas, A. J., Hill, G., Carff, R., Suri, N., Lott, J., Gómez, G., et al. (2004). CmapTools: a knowledge modeling and sharing environment in Concept maps: Theory, methodology, technology. Proceedings of the first international conference on concept mapping Vol. I. eds. A. J. Cañas, J. D. Novak, and F. M. González (Pamplona: Editorial Universidad Pública de Navarra), 125–133. Available online at: https://thomaseskridge.com/assets/pdf/Canas-2004.pdf
Card, S. K., Mackinlay, J., and Shneiderman, B. (1999). Readings in information visualization: Using vision to think. San Francisco, CA: Morgan Kaufmann.
Cleveland, W. S., and McGill, R. (1984). Graphical perception: theory, experimentation, and application to the development of graphical methods. J. Am. Stat. Assoc. 79, 531–554. doi: 10.2307/2288400
Cowan, N. (2001). The magical number 4 in short-term memory: a reconsideration of mental storage capacity. Behav. Brain Sci. 24, 87–185. doi: 10.1017/S0140525X01003922
Cowan, N. (2010). The magical mystery four: how is working memory capacity limited, and why? Curr. Dir. Psychol. Sci. 19, 51–57. doi: 10.1177/0963721409359277
Dehler Zufferey, J., Bodemer, D., Buder, J., and Hesse, F. W. (2010). Partner knowledge awareness in knowledge communication: learning by adapting to the partner. J. Exp. Educ. 79, 102–125. doi: 10.1080/00220973.2010.481568
de Jong, T., and van Joolingen, W. R. (1998). Scientific discovery learning with computer simulations of conceptual domains. Rev. Educ. Res. 68, 179–201. doi: 10.2307/1170753
Dillenbourg, P., and Bétrancourt, M. (2006). Collaboration load in Handling complexity in learning environments: Theory and research. eds. J. Elen and R. E. Clark (Amsterdam: Elsevier), 142–163.
Engelmann, T. (2014). Potential and impact factors of the knowledge and information awareness approach for promoting net-based collaborative problem solving: an overview. J. Educ. Comput. Res. 50, 403–430. doi: 10.2190/EC.50.3.f
Engelmann, T., Dehler, J., Bodemer, D., and Buder, J. (2009). Knowledge awareness in CSCL: a psychological perspective. Comput. Hum. Behav. 25, 949–960. doi: 10.1016/j.chb.2009.04.004
Engelmann, T., and Hesse, F. W. (2010). How digital concept maps about the collaborators’ knowledge and information influence computer-supported collaborative problem solving. Int. J. Comput.-Support. Collab. Learn. 5, 299–319. doi: 10.1007/s11412-010-9089-1
Engelmann, T., and Hesse, F. W. (2011). Promoting the sharing of unshared knowledge through access to collaborators' meta-knowledge structures. Comput. Hum. Behav. 27, 2078–2087. doi: 10.1016/j.chb.2011.06.002
Engelmann, T., Kozlov, M. D., Kolodziej, R., and Clariana, R. B. (2014). Fostering group norm development and orientation while creating awareness content to improve net-based collaborative problem solving. Comput. Hum. Behav. 37, 298–306. doi: 10.1016/j.chb.2014.04.052
Erceg-Hurn, D. M., and Mirosevich, V. M. (2008). Modern robust statistical methods: an easy way to maximize the accuracy and power of your research. Am. Psychol. 63, 591–601. doi: 10.1037/0003-066X.63.7.591
Erkens, M., and Bodemer, D. (2017). “Which visualization guides learners best? Impact of available partner- and content-related information on collaborative learning” in Proceedings of the 12th international conference on computer supported collaborative learning (Philadelphia, PA: International Society of the Learning Sciences), 1–8.
Erkens, M., and Bodemer, D. (2019). Improving collaborative learning: guiding knowledge exchange through the provision of information about learning partners and learning contents. Comput. Educ. 128, 452–472. doi: 10.1016/j.compedu.2018.10.009
Fiore, S. M., Warta, S. F., Best, A., Newton, O., and LaViola, J. J. (2017). “Developing a theoretical framework of task complexity for research on visualization in support of decision making under uncertainty” in Proceedings of the human factors and ergonomics society annual meeting, vol. 61 (Los Angeles, CA: SAGE Publications), 1193–1197.
Fiore, S. M., and Wiltshire, T. J. (2016). Technology as teammate: examining the role of external cognition in support of team cognitive processes. Front. Psychol. 7:1531. doi: 10.3389/fpsyg.2016.01531
Flavell, J. H. (1979). Metacognition and cognitive monitoring: a new area of cognitive–developmental inquiry. Am. Psychol. 34, 906–911. doi: 10.1037/0003-066X.34.10.906
Ghoniem, M., Fekete, J. D., and Castagliola, P. (2005). On the readability of graphs using node-link and matrix-based representations: a controlled experiment and statistical analysis. Inf. Vis. 4, 114–135. doi: 10.1057/palgrave.ivs.9500092
Gilbert, S. J., Boldt, A., Sachdeva, C., Scarampi, C., and Tsai, P.-C. (2023). Outsourcing memory to external tools: a review of “intention offloading”. Psychon. Bull. Rev. 30, 60–76. doi: 10.3758/s13423-022-02139-4
Gil-López, T., Christner, C., de León, E., Makhortykh, M., Urman, A., Maier, M., et al. (2023). Do (not!) track me: relationship between willingness to participate and sample composition in online information behavior tracking research. Soc. Sci. Comput. Rev. 41, 2274–2292. doi: 10.1177/08944393231156634
Gleicher, M. (2017). Considerations for visualizing comparison. IEEE Trans. Vis. Comput. Graph. 24, 413–423. doi: 10.1109/TVCG.2017.2744199
Gleicher, M., Albers, D., Walker, R., Jusufi, I., Hansen, C. D., and Roberts, J. C. (2011). Visual comparison for information visualization. Inf. Vis. 10, 289–309. doi: 10.1177/1473871611416549
Gray, S. A., Gray, S., Cox, L. J., and Henly-Shepard, S. (2013). “Mental modeler: a fuzzy-logic cognitive mapping modeling tool for adaptive environmental management” in 2013 46th Hawaii international conference on system sciences (IEEE), 965–973. doi: 10.1109/HICSS.2013.399
Healey, C. G., and Enns, J. T. (2012). Attention and visual memory in visualization and computer graphics. IEEE Trans. Vis. Comput. Graph. 18, 1170–1188. doi: 10.1109/TVCG.2011.127
Heersmink, R. (2021). Varieties of artifacts: embodied, perceptual, cognitive, and affective. Top. Cogn. Sci. 13, 573–596. doi: 10.1111/tops.12549
Hollan, J., Hutchins, E., and Kirsh, D. (2000). Distributed cognition: toward a new foundation for human-computer interaction research. ACM Trans. Comput. Hum. Interact. 7, 174–196. doi: 10.1145/353485.353487
Holmqvist, K., Nyström, M., Andersson, R., Dewhurst, R., Jarodzka, H., and Van de Weijer, J. (2011). Eye tracking: A comprehensive guide to methods and measures. Oxford: Oxford University Press. https://research.ou.nl/en/publications/eye-tracking-a-comprehensive-guide-to-methods-and-measures/
Huang, W., Eades, P., and Hong, S. H. (2009). Measuring effectiveness of graph visualizations: a cognitive load perspective. Inf. Vis. 8, 139–152. doi: 10.1057/ivs.2009.10
Janssen, J., and Bodemer, D. (2013). Coordinated computer-supported collaborative learning: awareness and awareness tools. Educ. Psychol. 48, 40–55. doi: 10.1080/00461520.2012.749153
Javed, F., and Romanos, G. E. (2010). The role of primary stability for successful immediate loading of dental implants: a literature review. J. Dent. 38, 612–620. doi: 10.1016/j.jdent.2010.05.013
Johnson-Laird, P. N. (1983). Mental models: Towards a cognitive science of language, inference, and consciousness. Cambridge, MA: Harvard University Press. doi: 10.2307/414498
Keller, T., Tergan, S. O., and Coffey, J. (2006). Concept maps used as a “knowledge and information awareness” tool for supporting collaborative problem solving in distributed groups. In A. J. Cañas and J. D. Novak (Eds.), Concept maps: theory, methodology, technology. Proceedings of the second international conference on concept mapping (Vol. 1, pp. 1–8). San José, Costa Rica: Universidad de Costa Rica.
Keller, T., and Grimm, M. (2005). “The impact of dimensionality and color coding of information visualizations on knowledge acquisition” in Knowledge and information visualization: Searching for synergies (Berlin, Heidelberg: Springer Berlin Heidelberg), 167–182. doi: 10.1007/11510154_9
Keller, T., and Tergan, S. O. (2005). “Visualizing knowledge and information: an introduction” in Knowledge and information visualization. eds. S. O. Tergan and T. Keller (Springer), 1–23. doi: 10.1007/11510154_1
Larkin, J. H., and Simon, H. A. (1987). Why a diagram is (sometimes) worth ten thousand words. Cogn. Sci. 11, 65–100. doi: 10.1016/S0364-0213(87)80026-5
Larson, J. R., and Christensen, C. (1993). Groups as problem-solving units: toward a new meaning of social cognition. Br. J. Soc. Psychol. 32, 5–30. doi: 10.1111/j.2044-8309.1993.tb00983.x
Liu, P., and Li, Z. (2012). Task complexity: a review and conceptualization framework. Int. J. Ind. Ergon. 42, 553–568. doi: 10.1016/j.ergon.2012.09.001
Maciver, D., Hunter, C., Johnston, L., and Forsyth, K. (2021). Using stakeholder involvement, expert knowledge and naturalistic implementation to co-design a complex intervention to support children’s inclusion and participation in schools: the CIRCLE framework. Children 8:217. doi: 10.3390/children8030217
Maslianko, P., and Sielskyi, Y. (2021). Data science–definition and structural representation. Syst. Res. Inform. Technol. 1, 61–78. doi: 10.20535/SRIT.2308-8893.2021.1.05
Matlen, B. J., Gentner, D., and Franconeri, S. L. (2020). Spatial alignment facilitates visual comparison. J. Exp. Psychol. Hum. Percept. Perform. 46, 443–457. doi: 10.1037/xhp0000726
Mayer, R. E., and Massa, L. J. (2003). Three facets of visual and verbal learners: cognitive ability, cognitive style, and learning preference. J. Educ. Psychol. 95, 833–846. doi: 10.1037/0022-0663.95.4.833
Meulemans, W., Dykes, J., Slingsby, A., Turkay, C., and Wood, J. (2016). Small multiples with gaps. IEEE Trans. Vis. Comput. Graph. 23, 381–390. doi: 10.1109/TVCG.2016.2598542
Meyer, R. (2010). “Knowledge visualization” in Trends in information visualization (Technical Report LMU-MI-2010-1). eds. D. Baur, M. Sedlmair, R. Wimmer, Y.-X. Chen, S. Streng, and S. Boring (Munich, Germany: University of Munich, Department of Computer Science), 23–30.
Newton, O. B., Fiore, S. M., and Song, J. (2023). Validating a task complexity framework for studies of uncertainty visualization. In Proceedings of the human factors and ergonomics society annual meeting (Vol. 67, pp. 21–26). Los Angeles, CA: SAGE Publications.
Novak, J. D., and Cañas, A. J. (2006). The origins of the concept mapping tool and the continuing evolution of the tool. Inf. Vis. 5, 175–184. doi: 10.1057/palgrave.ivs.9500126
Novak, J. D., and Cañas, A. J. (2008). The theory underlying concept maps and how to construct and use them (technical report IHMC CmapTools 2006-01 rev 01-2008). Florida Institute for Human and Machine Cognition. http://cmap.ihmc.us/Publications/ResearchPapers/TheoryUnderlyingConceptMaps.pdf
Novak, J. D., and Gowin, D. B. (1984). Learning how to learn. Cambridge: Cambridge University Press doi: 10.1017/CBO9781139173469.
Oppl, S., and Stary, C. (2019). Designing digital work: Concepts and methods for human-centered digitization. Cham, Switzerland: Springer International Publishing, 435.
Otjacques, B., and Feltz, F. (2005). Representation of graphs on a matrix layout. In Proceedings of the Ninth International Conference on Information Visualisation (IV’05) (pp. 339–344). IEEE. doi: 10.1109/IV.2005.107
Pagendarm, H.G., and Post, F. (1995). Comparative visualization - approaches and examples. In M. Göbel, H. Müller, & B. Urban (Eds.), Visualization in Scientific Computing (pp. 95–108). Wien, Austria: Springer.
Patterson, R. E., Blaha, L. M., Grinstein, G. G., Liggett, K. K., Kaveney, D. E., Sheldon, K. C., et al. (2014). A human cognition framework for information visualization. Comput. Graph. 42, 42–58.
Potelle, H., and Rouet, J.-F. (2003). Effects of content representation and readers’ prior knowledge on the comprehension of hypertext. Int. J. Hum. Comput. Stud. 58, 327–345. doi: 10.1016/S1071-5819(03)00016-8
Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychol. Bull. 124, 372–422. doi: 10.1037/0033-2909.124.3.372
Risko, E. F., and Gilbert, S. J. (2016). Cognitive offloading. Trends Cogn. Sci. 20, 676–688. doi: 10.1016/j.tics.2016.07.002
Scaife, M., and Rogers, Y. (1996). External cognition: how do graphical representations work? Int. J. Hum. Comput. Stud. 45, 185–213. doi: 10.1006/ijhc.1996.0048
Schulz-Hardt, S., Frey, D., Lüthgens, C., and Moscovici, S. (2000). Biased information search in group decision making. J. Pers. Soc. Psychol. 78, 655–669. doi: 10.1037/0022-3514.78.4.655
Skulmowski, A. (2023). The cognitive architecture of digital externalization. Educ. Psychol. Rev. 35:101. doi: 10.1007/s10648-023-09818-1
Stasser, G., and Abele, S. (2020). Collective choice, collaboration, and communication. Annu. Rev. Psychol. 71, 589–612. doi: 10.1146/annurev-psych-010418-103211
Stasser, G., and Titus, W. (1985). Pooling of unshared information in group decision making: biased information sampling during discussion. J. Pers. Soc. Psychol. 48, 1467–1478. doi: 10.1037/0022-3514.48.6.1467
Stasser, G., and Titus, W. (2003). Hidden profiles: a brief history. Psychol. Inq. 14, 304–313. doi: 10.1207/S15327965PLI1403&4_21
Steiner, C. M., and Albert, D. (2017a). Validating domain ontologies: a methodology exemplified for concept maps. Cogent Educ. 4:1263006. doi: 10.1080/2331186X.2016.1263006
Steiner, C. M., and Albert, D. (2017b). “Cognitive and epistemic usability of concept map tools” in Concept mapping: Theory, methodology, technology. eds. A. J. Cañas, J. D. Novak, and J. Vanhear (Cham, Switzerland: Springer), 43–52.
Steiner, C. M., Albert, D., and Heller, J. (2007). “Concept mapping as a means to build e-learning” in Advanced principles of effective e-learning. ed. N. A. Buzzetto-More (Santa Rosa, CA: Informing Science Press), 59–111.
Steichen, B., and Fu, B. (2019). Towards adaptive information visualization - a study of information visualization aids and the role of user cognitive style. Front. Artif. Intell. 2:22. doi: 10.3389/frai.2019.00022
Stewart, D. D., and Stasser, G. (1998). The sampling of critical, unshared information in decision-making groups: the role of an informed minority. Eur. J. Soc. Psychol. 28, 95–113. doi: 10.1002/(SICI)1099-0992(199801/02)28:1<95::AID-EJSP847>3.0.CO;2-0
Sweller, J., van Merrienboer, J. J. G., and Paas, F. G. W. C. (1998). Cognitive architecture and instructional design. Educ. Psychol. Rev. 10, 251–296. doi: 10.1023/A:1022193728205
Sweller, J., Van Merriënboer, J. J., and Paas, F. (2019). Cognitive architecture and instructional design: 20 years later. Educ. Psychol. Rev. 31, 261–292. doi: 10.1007/s10648-019-09465-5
Tergan, S. O. (2005). “Digital concept maps for managing knowledge and information” in Knowledge and information visualization: Searching for synergies (Berlin, Heidelberg: Springer Berlin Heidelberg), 185–204. doi: 10.1007/11510154_10
Tergan, S. O., and Keller, T. (Eds.) (2005). Knowledge and information visualization: Searching for synergies, vol. 3426. Berlin, Heidelberg: Springer.
Tufte, E. R. (1983). The visual display of quantitative information. Cheshire, CT: Graphics Press.
Tversky, B. (2005). “Functional significance of visuospatial representations” in Handbook of higher-level visuospatial thinking. eds. P. Shah and A. Miyake (Cambridge, England: Cambridge University Press), 1–34. doi: 10.1017/CBO9780511610448.002
Tversky, B., Agrawala, M., Heiser, J., Lee, P. U., Hanrahan, P., Phan, D., et al. (2007). “Cognitive design principles for generating visualizations” in Applied spatial cognition: From research to cognitive technology. ed. G. Allen (Mahwah, NJ: Erlbaum), 53–73. doi: 10.4324/9781003064350-3
van den Broek, K. L., Klein, S. A., Luomba, J., and Fischer, H. (2021). Introducing M-tool: A standardised and inclusive mental model mapping tool. Syst. Dyn. Rev. 37, 353–362. doi: 10.1002/sdr.1698
van Nooijen, C. C. A., de Koning, B. B., Bramer, W. M., Isahakyan, A., Asoodar, M., Kok, E., et al. (2024). A cognitive load theory approach to understanding expert scaffolding of visual problem-solving tasks: a scoping review. Educ. Psychol. Rev. 36, 1–42. doi: 10.1007/s10648-024-09848-3
van Oostendorp, H., and Goldman, S. R. (Eds.). (1999). The construction of mental representations during reading. Mahwah, NJ: Lawrence Erlbaum Associates.
Ware, C., Purchase, H., Colpoys, L., and McGill, M. (2002). Cognitive measurements of graph aesthetics. Inf. Vis. 1, 103–110. doi: 10.1057/palgrave.ivs.95000
Ware, C. (2004). Information visualization: Perception for design. San Francisco, CA: Morgan Kaufmann. Available online at: https://scholars.unh.edu/ccom/143
Wilcox, R. R. (2012). Introduction to robust estimation and hypothesis testing. 3rd Edn. San Diego, CA: Academic Press.
Windhager, F., Salvucci, D. D., and Franconeri, S. L. (2020). Cognitive load and visualizations: how understanding dependencies can help optimize design. IEEE Trans. Vis. Comput. Graph. 26, 143–153. doi: 10.1109/TVCG.2019.2934438
Wolfe, J. M. (2020). Visual search: how do we find what we are looking for? Ann. Rev. Vision Sci. 6, 539–562. doi: 10.1146/annurev-vision-091718-015048
Yelizarov, A., and Gamayunov, D. (2014). “Adaptive visualization interface that manages user’s cognitive load based on interaction characteristics” in Proceedings of the 7th international symposium on visual information communication and interaction, 1–8. doi: 10.1145/2636240.2636844
Keywords: cognitive load theory, comparative knowledge visualization, concept map, group knowledge integration, juxtaposition, proposition list, superimposition, visual comparison strategy
Citation: Hynek N and Albert D (2026) Comparing distributed knowledge: the effects of visualization format and comparison strategy on task performance. Front. Psychol. 17:1684634. doi: 10.3389/fpsyg.2026.1684634
Edited by:
Stefan Oppl, University for Continuing Education Krems, Austria
Reviewed by:
Magdalena Mateescu, University of Applied Sciences and Arts Northwestern Switzerland, Switzerland
Florian Krenn, Johannes Kepler University of Linz, Austria
Copyright © 2026 Hynek and Albert. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Nicole Hynek, nicole.hynek@edu.uni-graz.at