ORIGINAL RESEARCH article

Front. Comput. Sci., 17 November 2025

Sec. Human-Media Interaction

Volume 7 - 2025 | https://doi.org/10.3389/fcomp.2025.1650189

This article is part of the Research Topic: Artificial Intelligence for Technology Enhanced Learning.

The impact of adaptive cognitive diversity and attention on discussion effectiveness in an intelligent discussion system

  • 1School of Psychology, Xinxiang Medical University, Xinxiang, China
  • 2School of Computing and Mathematical Sciences, University of Leicester, Leicester, United Kingdom

Introduction: Although group discussion plays a crucial role in collaborative learning, it often falls short of achieving optimal effectiveness. The introduction of conversational agents has the potential to enhance the effectiveness of group discussion; nevertheless, the interaction strategies between conversational agents and human participants remain an issue that requires further investigation. The present study aims to examine how the diverse viewpoints provided by the conversational agent, and participants’ attention to them, affect discussion effectiveness.

Methods: This study involved 129 university students who discussed an open-ended question in an adaptive discussion system. A 2 (adaptive cognitive diversity: homogeneity vs. diversity) × 2 (attention: with vs. without instruction) between-subjects design was employed, with an additional control condition. Participants in the experimental conditions interacted with a conversational agent, while those in the control condition discussed in pairs without it.

Results and discussion: The results indicated that discussions in the diversity condition exhibited greater breadth, whereas those in the homogeneity condition demonstrated significantly greater depth, suggesting that diverse perspectives promote broader idea exploration, while similar perspectives facilitate deeper elaboration. Compared with the control condition, the diversity-with-instruction condition demonstrated greater discussion breadth. Participants in the with-instruction condition perceived the conversational agent’s viewpoints as obstructing their own idea generation; by contrast, those in the without-instruction condition generated a higher proportion of valid ideas and achieved a deeper and better understanding of the discussion topic. These results suggest that attention plays both positive and negative roles in the discussion process. The present study examined the roles of adaptive cognitive diversity and attention in group discussion and explored how manipulating these factors within a human-computer interaction system can shape discussion effectiveness.

1 Introduction

Learning or working in groups is essential in the modern world because many complex problems require or benefit from teams that bring diverse expertise and perspectives (Corrégé and Michinov, 2021; Graesser et al., 2018, 2020; Kenworthy et al., 2023). Group discussion is an important part of collaborative learning and group creative problem solving (Kenworthy et al., 2024). It is a process of interpersonal interaction in which group members share perspectives. These shared perspectives serve as stimulus cues that activate relevant information within each participant’s semantic network, allowing old and new information to interact and thereby fostering knowledge generation and innovation.

However, positive outcomes do not occur automatically simply by assembling a group to engage in discussion (Kuhn et al., 2025). When left to their own devices, groups often perform suboptimally (Kenworthy et al., 2023). The interactive process of group discussion involves both social and informational interactions. Social interaction primarily serves to provide social cues that promote a sense of identity and belonging, while informational interaction mainly functions to elevate the cognitive level of the discussants. Unlike social interaction, however, informational interaction often fails to serve its intended function, especially when it comes to expressing disagreements (Almodiel, 2022). Furthermore, it is often challenging to identify ideal discussion partners who can facilitate optimal discussion outcomes (Memmert and Tavanapour, 2023).

With the development of technology, researchers have applied artificial intelligence (AI) techniques to the collaborative learning process, namely adaptive collaborative learning support (Rummel et al., 2016; Walker et al., 2009). A prominent example of such support is human interaction with virtual conversational agents (CAs) (de Araujo et al., 2024, 2025; Graesser, 2016; Graesser et al., 2017). CAs are computer programs designed to communicate with humans through natural language, either spoken or written (Paschoal et al., 2020). They may assume various roles in the learning process, such as peers, tutors, or even competitors (Graesser et al., 2017; Lehman and D’Mello, 2013; Nguyen, 2023). By fostering both constructive engagement, which involves learners generating new ideas, and interactive engagement, which entails co-constructing understanding through dialogue, CAs can enhance learning experiences (Chi and Wylie, 2014; Nguyen, 2023). Increasingly, CAs are being recognized as active participants in human learning and ideation processes. They can provide adaptive support for collaborative learning and serve as co-ideators in human-AI collaborative problem-solving contexts (de Araujo et al., 2024, 2025; La Scala et al., 2025; Memmert and Tavanapour, 2023; Richter and Schwabe, 2025; Schmidt et al., 2023; Tegos and Demetriadis, 2017).

Nevertheless, technological innovations often fail to directly enhance learning unless thoughtfully integrated. Although CAs have the potential to facilitate ideation and knowledge construction, their success is heavily dependent on carefully designed scaffolds that structure and regulate interactions. The interaction mechanism between human participants and CAs is thus of great importance. Specifically, it is essential to investigate both the factors that influence interaction effectiveness and the means by which CAs can leverage these factors to enhance discussion outcomes.

Prior studies suggest that cognitive diversity is one of the key factors influencing the effectiveness of group ideation (Nijstad et al., 2002; Paulus and Brown, 2007; Paulus and Kenworthy, 2021). Semantic diversity and similarity are important manifestations of cognitive diversity, as they reflect how individuals differ in representing and expressing ideas. Specifically, semantic similarity refers to ideas within the same semantic domain, while semantic diversity involves ideas spanning different domains, indicating broader conceptual variation. The search for ideas in associative memory (SIAM) model proposed by Nijstad et al. (2002) emphasizes that semantically diverse viewpoints can expand the knowledge base, while semantically homogeneous viewpoints can deepen discussion within a specific domain (Baruah and Paulus, 2011; Rietzschel et al., 2007). According to the cognitive-social-motivational (CSM) model (Paulus and Brown, 2007; Paulus and Kenworthy, 2021), group diversity can generate diverse viewpoints, thereby increasing cognitive stimulation and activating less accessible knowledge. However, previous research has often focused on static, non-adaptive cognitive diversity, neglecting the internal dynamic processes of discussion (Moussaïd et al., 2018; Reinert et al., 2025). To address this, we propose the concept of adaptive cognitive diversity, which involves dynamically providing viewpoints based on the current discussion content. This dynamic process offers greater theoretical and practical value than static cognitive diversity, thus providing a new perspective for research on cognitive diversity in group activities.

Besides the diversity of shared perspectives, attention, the selective allocation of cognitive resources to certain stimuli or ideas, is another crucial factor that influences the effectiveness of discussion. According to the CSM model (Paulus and Brown, 2007; Paulus and Kenworthy, 2021), attention functions as a crucial bridging mechanism between social-motivational and cognitive processing. Specifically, attention governs how individuals allocate and focus cognitive resources on socially relevant stimuli, such as cues from others’ ideas, enabling selective engagement with the most motivationally pertinent information. In this way, attention not only filters and prioritizes social inputs but also enables deeper cognitive processing—such as retrieval, integration, and elaboration—which is crucial for generating new ideas. In an electronic human-human brainstorming task, Michinov et al. (2015) demonstrated a significant partial correlation between participants’ attention to their partner’s ideas and the quality of the ideas they generated. Attention guidance can help participants allocate their cognitive resources more effectively in online learning dialogues, and such improvements have been found to enhance dialogue quality (Eryilmaz et al., 2015). This promotes deeper processing of target concepts, thereby improving memory retention and knowledge transfer, as well as fostering greater learning efficiency (De Koning et al., 2007; Eryilmaz et al., 2014, 2018). Therefore, as a mechanism that bridges social-motivational and cognitive processes, attention plays a critical role in determining the effectiveness of online discussion. Enhancing learners’ attention to others’ perspectives may be a key strategy for improving the overall quality of group discussion.

In summary, intelligent CAs hold promise for enhancing the effectiveness of group discussion, yet the mechanisms of human-computer interaction involved remain under-investigated. Specifically, it is necessary to examine which factors enable CAs to influence interaction outcomes and how these factors can be manipulated to optimize performance in human-computer interaction systems. Building on the relevant theoretical foundations, this study focuses on two key factors that may influence the effectiveness of human-CA discussion: adaptive cognitive diversity and attention to others’ viewpoints, with the aim of proposing effective strategies for optimizing group discussion. According to the SIAM model (Nijstad and Stroebe, 2006) and the CSM model (Paulus and Brown, 2007; Paulus and Kenworthy, 2021), semantically different perspectives can stimulate cognitive activation across distant concepts in associative memory, thereby promoting broader idea exploration and enhancing creative potential. In contrast, semantically similar perspectives reinforce existing associative pathways, facilitating focused thinking and deeper elaboration within a familiar conceptual space. Based on these theoretical insights, we hypothesize that (H1) semantically different perspectives provided by the CA enhance the breadth of participants’ discussion, and that (H2) semantically similar perspectives provided by the CA enhance the depth of participants’ discussion. According to the CSM model (Paulus and Brown, 2007; Paulus and Kenworthy, 2021), attention plays a central role in shaping how individuals cognitively engage with external inputs during idea generation. When participants actively focus on the CA’s contributions, they are more likely to process, integrate, and build upon those inputs, thus enhancing the effectiveness of the interaction.
In line with this theoretical framework, we hypothesize that (H3) participants’ attention to the CA’s perspectives positively influences the effectiveness of the discussion. Specifically, attending to the CA’s diverse perspectives increases the breadth of discussion, whereas attending to its homogeneous perspectives enhances the depth.

Building on these insights, the present study proposes a human-computer interaction system in which the computer dynamically provides adaptive viewpoints tailored to the evolving discussion context and guides human participants to further process these viewpoints. Theoretically, this research analyzes factors affecting discussion outcomes through the lenses of adaptive cognitive diversity and attention. Informed by these theoretical insights, the practical dimension explores how to manipulate these factors in the human-computer interaction system to improve discussion quality.

2 Methods

2.1 Participants

A total of 129 university students participated in this experiment. The sample size was determined based on prior research (Dugosh et al., 2000) and practical constraints, particularly the availability of participants within the specified time frame. They were randomly assigned to one of five conditions, including four experimental subconditions and one control condition. In the experimental conditions, each discussion involved one participant interacting with a computer-based CA, whereas in the control condition, discussions were conducted between two participants without the presence of the CA. After excluding data from three groups whose discussions were unrelated to the topic, the final sample consisted of 126 participants (53 males, 73 females), with 21 participants assigned to each of the four experimental conditions and 42 participants assigned to the control condition. All participants provided written informed consent prior to their participation.

2.2 Materials

Corpus: In a previous study, a total of 60 participants engaged in a 30-min online discussion on the topic “What impact will AI have on humanity?” Viewpoint sentences were extracted from the discussion transcripts and refined, resulting in an initial corpus of 600 viewpoints. The viewpoints were independently annotated by five researchers with respect to their types (for example, tagging “I believe AI will cause unemployment” as “unemployment”), and discrepancies in coding were resolved through discussion until consensus was achieved. Similar types were subsequently merged, and those with higher frequencies were summarized. Representative viewpoints were then identified for each type, yielding 13 preliminary types with approximately 10 viewpoints each. To further evaluate type distinctiveness and the correspondence between viewpoints and their assigned types, assessments were conducted by another 26 participants. Based on their evaluations, several types with overlapping or easily confusable meanings were modified or consolidated. As a result, a final viewpoint pool comprising 8 types and their representative viewpoints was established, as presented in Table 1.

Table 1. Selected types and typical views in the corpus.

Discussion effectiveness questionnaire: the questionnaire evaluated the helpfulness of others’ viewpoints (e.g., “The views expressed by others in the discussion have been helpful to me,” “The views expressed by others in the discussion hindered the generation of my own opinions”) and overall discussion effectiveness (e.g., “I think the discussion has been very productive,” “Following the discussion, I have gained a deeper understanding of the issue discussed”). The questionnaire is scored on a 5-point scale (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree).

2.3 Design

The experiment employed a 2 (adaptive cognitive diversity: homogeneity vs. diversity) × 2 (attention: with-instruction vs. without-instruction) between-subjects design, with an added non-factorial control condition. In the control condition, two participants engaged in a discussion without the involvement of the CA. In the experimental conditions, after the participants stated several ideas, the computer automatically identified the semantic type to which the participants’ opinions belonged and then gave an adaptive response. In the diversity condition, the CA introduced ideas from semantically different types based on the current discussion, meaning the CA’s ideas differed from the ongoing conversation. In the homogeneity condition, the CA provided ideas from the same type as the current discussion, meaning the CA’s ideas aligned with the ongoing conversation. In the with-instruction condition, participants were instructed to remember the CA’s ideas and recall what the CA said after the discussion. In the without-instruction condition, no memory-related instructions or recall tasks were given (Dugosh et al., 2000).

Dependent variables included the breadth, depth, and proportion of valid viewpoints that participants demonstrated in the discussions, as well as participants’ self-reported discussion effectiveness. Each sentence spoken by the participants in the chat logs was categorized by the computer. Sentences not containing any keywords from a predefined list were marked as invalid viewpoints, while valid viewpoints were categorized accordingly. Based on this classification, the number and types of viewpoints for each group member were calculated, and the following dependent variables were analysed (Manabe et al., 2024; Nijstad et al., 2002). The breadth of discussion was defined as the number of different types of viewpoints presented by participants. The depth of discussion was operationalized as the average number of viewpoints per type, calculated by dividing the total number of viewpoints by the number of viewpoint types. The proportion of valid ideas was measured as the ratio of valid ideas to the total number of sentences. Finally, self-reported discussion effectiveness was assessed through a post-discussion questionnaire.
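The three behavioral measures defined above can be expressed compactly. The sketch below is illustrative only; the function name and the use of `None` to mark invalid sentences are our own conventions, not the study’s actual implementation:

```python
from collections import Counter

def discussion_metrics(sentence_types):
    """Compute (breadth, depth, valid ratio) from per-sentence type labels.

    sentence_types: one label per sentence; None marks an invalid
    (off-topic) sentence, any other value is a viewpoint type.
    """
    valid = [t for t in sentence_types if t is not None]
    counts = Counter(valid)
    breadth = len(counts)                              # number of distinct types
    depth = len(valid) / breadth if breadth else 0.0   # viewpoints per type
    ratio = len(valid) / len(sentence_types) if sentence_types else 0.0
    return breadth, depth, ratio

# e.g., 5 sentences: two on "unemployment", one on "ethics", two off-topic
print(discussion_metrics(["unemployment", "unemployment", "ethics", None, None]))
# → (2, 1.5, 0.6)
```
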

2.4 Adaptive discussion system

This study employed a self-developed human-CA discussion system, in which the CA used keyword matching to automatically identify the semantic type of participants’ contributions and respond adaptively based on the discussion context.

During the discussion, the computer needs to provide viewpoints of the same type as, or of different types from, the participants’ previous contributions; it also needs to identify valid viewpoints and categorize them when computing the dependent variables. Both functions require the computer to identify the types involved in the participants’ previous contributions. In this study, we used keyword matching. A keyword list was formed by extracting keywords from the typical viewpoints of each type, together with near-synonyms of these keywords. If a message provided by a participant contains any keyword from the list, the corresponding type is considered to have been mentioned. If a message contains keywords from more than one type, it is classified into the type with the largest number of mentioned keywords; if a message contains no keywords from the list, it is labeled as irrelevant.
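The matching rule described above can be sketched as follows. The `keyword_map` contents here are hypothetical examples, not the study’s actual keyword lists:

```python
def classify(message, keyword_map):
    """Label a message with the viewpoint type whose keywords it mentions most.

    keyword_map: type name -> iterable of keywords (including near-synonyms).
    Returns None when no keyword matches (message labeled irrelevant).
    """
    hits = {t: sum(1 for kw in kws if kw in message)
            for t, kws in keyword_map.items()}
    best_type, best_count = max(hits.items(), key=lambda item: item[1])
    return best_type if best_count > 0 else None

# hypothetical keyword lists for two of the corpus types
keyword_map = {
    "unemployment": {"unemployment", "job", "layoff"},
    "education": {"education", "teaching", "tutor"},
}
print(classify("AI will take jobs and cause unemployment", keyword_map))  # unemployment
print(classify("I like cats", keyword_map))                               # None
```

In the live system a matched sentence would both feed the dependent-variable counts and determine which type the CA draws its next response from.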

We evaluated the accuracy of the computerised classification. First, one group of discussion texts was selected, and two researchers identified the valid ideas and marked the type to which each belonged. Then, four groups of discussion texts were selected and the two researchers performed type labeling independently; the agreement between the two labelers (number of agreements ÷ total number) was 0.72. The computer then performed type labeling for each sentence using keyword matching, and the results were compared with the two researchers’ negotiated labels. The agreement between the computer-generated labeling and the manual labeling (number of agreements ÷ total number) was 0.79.
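The agreement metric used here is simple percent agreement, which can be sketched in a few lines (the label sequences below are illustrative data, not the study’s transcripts):

```python
def agreement(labels_a, labels_b):
    """Percent agreement: number of matching labels ÷ total number of labels."""
    if len(labels_a) != len(labels_b):
        raise ValueError("label sequences must be the same length")
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

# e.g., two coders agreeing on 18 of 25 sentences yields 0.72
print(agreement(["x"] * 18 + ["y"] * 7, ["x"] * 18 + ["z"] * 7))  # → 0.72
```
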

2.5 Procedure

The entire experiment was conducted in a laboratory setting, with partitions around each workstation to minimize distractions. Before the discussion, the experimenter set up the discussion group in the management interface, specifying the group type and ID. Upon arrival, participants signed an informed consent form and then sat at a computer. The computer screen displayed a general-purpose instructional interface that outlined the experimental procedures and introduced the principles of confidentiality.

During the online discussion phase, the system introduced the discussion topic and the time allocated. In the control condition, the instructions did not explicitly state whether the other discussion group member was a human participant or a CA. The discussion was initiated after both members entered the group, during which they freely exchanged views on the assigned topic. Participants from the same group were seated in different positions, ensuring they were not placed opposite or adjacent to each other. In the experimental conditions, participants were informed that GX07 was a CA. In the with-instruction condition, participants received the prompt “Please try to remember what GX07 said, you need to recall it after the discussion,” while in the without-instruction condition, participants were not shown this statement.

The discussion topic “What impact will AI have on humanity?” was displayed at the top of the discussion interface. In the experimental conditions, after participants had spoken 2–4 sentences, the CA automatically identified the corresponding types and then provided feedback according to the requirements of each experimental condition. In the diversity condition, after identifying the type of the participant’s viewpoint, the CA selected a response from viewpoints of a different type in the corpus. In the homogeneity condition, the CA followed the same procedure but responded with a viewpoint of the same type as the participant’s most recent sentence (see Figure 1).
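The CA’s response-selection rule can be sketched as below. This is a minimal illustration under stated assumptions: the `corpus` entries are placeholders for the viewpoint pool in Table 1, and `random.choice` stands in for whatever selection rule the actual system applied within the chosen type:

```python
import random

def adaptive_response(last_type, corpus, condition, rng=random):
    """Pick the CA's next viewpoint given the type of the participant's
    most recent classified sentence.

    condition: "homogeneity" -> respond with a viewpoint of the same type;
               "diversity"   -> respond with a viewpoint of a different type.
    """
    if condition == "homogeneity":
        pool = corpus[last_type]
    else:
        other_types = [t for t in corpus if t != last_type]
        pool = corpus[rng.choice(other_types)]
    return rng.choice(pool)

# placeholder corpus with one representative viewpoint per type
corpus = {
    "unemployment": ["AI will displace many routine jobs."],
    "education": ["AI tutors could personalize learning."],
}
print(adaptive_response("unemployment", corpus, "diversity"))
```
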

Flowchart depicting a participant’s interaction with a conversational agent (CA). Arrows indicate different conditions. In the with-instruction condition, participants were instructed to remember the CA’s ideas and recall what the CA said. In the without-instruction condition, no memory-related instructions or recall tasks were given. In the diversity condition, the CA introduced an idea from a semantically different type based on the current discussion. In the homogeneity condition, the CA provided an idea from the same type as the current discussion.

Figure 1. Schematic diagram of human-CA dialogue process.

After 15 min of discussion, the system automatically moved to the discussion effectiveness questionnaire. In the with-instruction condition, participants were required to complete an additional recall task, in which they were asked, “Please recall as many of the main viewpoints expressed by GX07 as possible.” This task was not administered in the other conditions. Upon answering and submitting the questions, participants were directed to a summary interface concluding the session.

3 Results

3.1 Breadth of discussion

A 2 (adaptive cognitive diversity: homogeneity vs. diversity) × 2 (attention: with-instruction vs. without-instruction) between-subjects analysis of variance (ANOVA) was conducted to examine their effects on discussion breadth. The results indicated that the interaction between adaptive cognitive diversity and attention was not significant, F(1, 80) = 3.62, p = 0.061, ηp² = 0.04, 95% CI for ηp² [0.00, 0.16]. Similarly, the main effect of attention was not significant, F(1, 80) = 3.02, p = 0.086, ηp² = 0.04, 95% CI for ηp² [0.00, 0.15]. However, the main effect of adaptive cognitive diversity was significant, F(1, 80) = 5.76, p = 0.019, ηp² = 0.07, 95% CI for ηp² [0.00, 0.19]; the breadth of discussion was higher in the diversity condition (M = 4.90, SD = 1.53) than in the homogeneity condition (M = 4.21, SD = 1.16).

A one-way ANOVA was conducted to test differences in breadth of discussion across the five conditions (four experimental and one control). The analysis revealed a significant main effect of condition, F(4, 100) = 3.96, p = 0.005, ηp² = 0.14, 95% CI for ηp² [0.02, 0.25]. The post hoc multiple comparisons, corrected using the Bonferroni method, indicated that the breadth of discussion was significantly greater in the diversity-with-instruction condition (M = 5.43, SD = 1.33) than in the control condition (M = 4.10, SD = 1.00), p = 0.009. The other experimental conditions were not significantly different from the control condition (see Figure 2).

Bar graph showing the breadth of discussion across five conditions: Control, Homogeneity With Instruction, Diversity With Instruction, Homogeneity Without Instruction, and Diversity Without Instruction. The breadth of discussion was significantly greater in the diversity-with-instruction condition than in the control condition indicated by double asterisks. Each bar includes error bars representing standard error.

Figure 2. Comparison of the differences between the four experimental conditions and control condition on the breadth of discussion. Error bars show standard errors in all figures. ** p < 0.01.

3.2 Depth of discussion

A 2 (adaptive cognitive diversity: homogeneity vs. diversity) × 2 (attention: with-instruction vs. without-instruction) between-subjects ANOVA was used to explore the effect on depth of discussion. Results demonstrated no significant interaction between the two variables, F(1, 80) = 2.29, p = 0.134, ηp² = 0.03, 95% CI for ηp² [0.00, 0.13]. The main effect of attention was not significant, F(1, 80) = 1.26, p = 0.264, ηp² = 0.02, 95% CI for ηp² [0.00, 0.11]. The main effect of adaptive cognitive diversity was significant, F(1, 80) = 7.47, p = 0.008, ηp² = 0.09, 95% CI for ηp² [0.01, 0.22]. Specifically, the depth of discussion in the homogeneity condition (M = 2.73, SD = 0.94) was significantly higher than in the diversity condition (M = 2.24, SD = 0.71).

For the depth of discussion, a one-way ANOVA across the five conditions revealed a significant main effect of condition, F(4, 100) = 2.74, p = 0.033, ηp² = 0.10, 95% CI for ηp² [0.00, 0.20]. However, post-hoc pairwise comparisons (Bonferroni corrected) indicated that none of the experimental conditions significantly differed from the control condition (ps > 0.05).

3.3 Proportion of valid viewpoints

A 2 (adaptive cognitive diversity: homogeneity vs. diversity) × 2 (attention: with-instruction vs. without-instruction) between-subjects ANOVA was also conducted to explore the impact on the proportion of valid viewpoints. No significant interaction was found, F(1, 80) = 0.13, p = 0.718, ηp² = 0.00, 95% CI for ηp² [0.00, 0.06]. The main effect of adaptive cognitive diversity was significant, F(1, 80) = 7.48, p = 0.008, ηp² = 0.09, 95% CI for ηp² [0.01, 0.22]; the proportion of valid discussion in the diversity condition (M = 0.87, SD = 0.12) was significantly higher than in the homogeneity condition (M = 0.80, SD = 0.15). The main effect of attention was also significant, F(1, 80) = 5.69, p = 0.019, ηp² = 0.07, 95% CI for ηp² [0.00, 0.19]; the without-instruction condition (M = 0.87, SD = 0.12) had a significantly higher proportion of valid discussion than the with-instruction condition (M = 0.80, SD = 0.14).

The one-way ANOVA revealed a significant main effect of condition on the proportion of valid views, F(4, 100) = 13.41, p < 0.001, ηp² = 0.35, 95% CI for ηp² [0.19, 0.47]. After applying the Bonferroni correction to the post hoc multiple comparisons, it was found that the proportion of valid views was significantly greater in all experimental conditions—homogeneity-with-instruction (M = 0.77, SD = 0.14, p = 0.003), diversity-with-instruction (M = 0.83, SD = 0.13, p < 0.001), homogeneity-without-instruction (M = 0.82, SD = 0.14, p < 0.001), and diversity-without-instruction (M = 0.91, SD = 0.09, p < 0.001)—relative to the control condition (M = 0.60, SD = 0.20) (see Figure 3).

Bar chart showing the proportion of valid viewpoints across five conditions: Control, Homogeneity with Instruction, Diversity with Instruction, Homogeneity without Instruction, and Diversity without Instruction. Bars range from 0.65 to 0.9, with statistical significance indicated by asterisks.

Figure 3. Comparison of the differences between the four experimental conditions and control condition on the proportion of valid viewpoints. ** p < 0.01. *** p < 0.001.

3.4 Self-reported discussion effectiveness

A 2 (adaptive cognitive diversity: homogeneity vs. diversity) × 2 (attention: with-instruction vs. without-instruction) between-subjects ANOVA was conducted to explore the impact of the independent variables on self-reported discussion effectiveness. The results indicated that the interaction effect was not significant for any indicator, nor was the main effect of adaptive cognitive diversity. The main effect of attention was significant for the hindering effect of others’ viewpoints, F(1, 80) = 6.17, p = 0.015, ηp² = 0.07, 95% CI for ηp² [0.00, 0.20]; participants in the with-instruction condition (M = 2.60, SD = 1.04) were more likely to report that “others’ opinions hindered the generation of my own opinions” than participants in the without-instruction condition (M = 2.07, SD = 0.89). In addition, a significant main effect of attention was found on the depth of problem understanding, F(1, 80) = 4.04, p = 0.048, ηp² = 0.05, 95% CI for ηp² [0.00, 0.17]; participants in the without-instruction condition (M = 4.19, SD = 0.86) felt they had a deeper understanding of the issue discussed than those in the with-instruction condition (M = 3.71, SD = 1.13). Furthermore, a significant main effect of attention was also observed on the overall understanding of the problem, F(1, 80) = 4.75, p = 0.032, ηp² = 0.06, 95% CI for ηp² [0.00, 0.18]; participants in the without-instruction condition (M = 4.29, SD = 0.71) reported a better understanding of the issues under discussion than those in the with-instruction condition (M = 3.93, SD = 0.89). There was no significant difference in self-reported discussion effectiveness between the control condition and any of the four experimental conditions.

4 Discussion

This study extends the CSM model (Paulus and Brown, 2007; Paulus and Kenworthy, 2021) by introducing and empirically testing it in the novel context of human-AI collaborative discussion. It reconceptualizes cognitive diversity from a static group attribute to a dynamic, adaptive process that can be regulated by a conversational agent. This dynamic perspective captures the fluid nature of real-world discussion and clarifies the conditions under which cognitive diversity enhances performance. The findings provide new theoretical insight into how adaptive cognitive diversity and participants’ attention jointly influence discussion outcomes, thereby advancing the understanding of effective human-AI collaboration.

The results showed that the proportion of valid views during discussion was higher in all four experimental conditions (in which a CA was involved) than in the control condition, in which only two human participants discussed. This difference was independent of whether the computer provided differing or similar views, and of whether participants were asked to pay attention to those views. This result indicates that two human participants are more prone to drift off-topic when discussing, whereas the mere presence of the CA encourages participants to focus more on the current dialogue. At the same time, for the proportion of valid viewpoints, the diversity group achieved a significantly higher score than the homogeneity group. This suggests that offering diverse viewpoints is more likely to promote participants’ engagement in the discussion process, implying that adaptive diversity has the potential to address the issue of low student participation in online discussion (Hew et al., 2010).

4.1 Impact of diverse and homogeneous ideas on the breadth and depth of discussion

This study demonstrated that when the CA provided viewpoints differing from the current discussion content, the breadth of discussion was significantly greater than when it provided similar viewpoints. This is consistent with the hypothesis that semantically different perspectives contribute to the breadth of the discussion (H1). In line with previous findings (Baruah and Paulus, 2011; Nijstad et al., 2002), these results support the SIAM and CSM models (Paulus and Brown, 2007; Paulus and Kenworthy, 2021): semantically diverse ideas can serve as retrieval cues, activating a broader range of knowledge in long-term memory and prompting more semantically diverse viewpoints.

When the CA provided similar ideas, the depth of discussion was greater than when it provided different viewpoints. This aligns with the hypothesis that semantically similar perspectives deepen the discussion (H2). The results concur with prior research showing that semantically related or homogeneous cues activate ideas within a narrower domain, fostering deeper exploration and the generation of numerous ideas within that semantic category (Baruah and Paulus, 2011; Nijstad et al., 2002; Rietzschel et al., 2007). Moreover, regardless of whether the computer-provided viewpoints were attended to, the depth of discussion in both the homogeneity and diversity conditions did not differ from that of the control condition in which two human participants discussed. This suggests that computer-human interaction can achieve a depth of engagement comparable to that of two real people interacting.
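In this study, breadth and depth were coded from predefined viewpoint types. As a purely illustrative alternative (an assumption of this sketch, not the study's method), discussion breadth could also be approximated automatically as the mean pairwise dissimilarity between utterances, here using a simple bag-of-words cosine measure:

```python
# Hypothetical proxy for discussion breadth: average pairwise
# dissimilarity (1 - cosine similarity) over bag-of-words vectors.
from collections import Counter
from itertools import combinations
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def breadth(utterances: list[str]) -> float:
    """Mean pairwise dissimilarity across all utterances in a discussion.
    Near 0: everyone says much the same thing; near 1: little overlap."""
    bags = [Counter(u.lower().split()) for u in utterances]
    pairs = list(combinations(bags, 2))
    if not pairs:
        return 0.0
    return sum(1 - cosine(a, b) for a, b in pairs) / len(pairs)
```

A bag-of-words measure is crude compared with the semantic coding used in the study, but it illustrates the underlying idea: semantically diverse contributions drive the pairwise dissimilarity, and hence the breadth score, upward.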

4.2 Dual effects of attention to the ideas of CA

Attention fulfills two primary roles: it enables the selective focus on particular information and the allocation of cognitive resources to that focus. The more resources are devoted to the selected information, the more effectively it can be processed. In human-computer interaction, attention may be oriented either toward others’ viewpoints (e.g., those generated by a CA) or toward one’s own idea-generation process. This theoretical framework provides the basis for understanding the role of attention observed in the present study.

The results of this study suggest that attention exerts a dual influence on the discussion process. On the one hand, directing participants’ attention to computer-generated viewpoints, especially under the diversity condition, expanded the breadth of discussion. This finding partly supports our hypothesis that attention to others’ perspectives contributes to discussion outcomes (H3). On the other hand, allocating more cognitive resources to others’ viewpoints reduced the resources available for generating one’s own ideas. As a result, participants under the with-instruction condition generated a lower proportion of valid viewpoints. Results on self-reported discussion effectiveness further support this interpretation: participants encouraged to focus on the CA’s ideas often felt that “others’ opinions hindered the generation of my own opinions,” whereas those without such instructions were more likely to report a deeper and better understanding. The dual role of attention observed in this study aligns with the CSM model (Paulus and Brown, 2007; Paulus and Kenworthy, 2021), which posits that attention to others’ ideas can stimulate cognition and expand idea generation, whereas excessive focus on others may inhibit the development of one’s own ideas.

Therefore, achieving a balance between attending to others and focusing on oneself is crucial for optimizing the effectiveness of interaction. When the aim is to stimulate idea generation, attending to others’ diverse viewpoints may be more beneficial. Once new information has been acquired, however, participants should redirect their attention inward, weaving external insights into their personal conceptual framework. For human-computer collaboration platforms, these findings highlight the importance of adaptive attention guidance mechanisms. One practical design is to cue users to attend to computer-generated viewpoints when additional input is needed, and then prompt them to reflect on and integrate these viewpoints with their own ideas. Such designs would help balance external and internal attention, supporting both the breadth and the depth of collaborative problem solving.
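The guidance policy just described can be sketched as a tiny state machine. Everything in this sketch is hypothetical: the class name, the stall threshold, and the cue wording are illustrative design choices, not part of the system evaluated in the study.

```python
# Illustrative attention-guidance loop: cue attention outward (to the
# CA's viewpoints) when idea generation stalls, then cue it back inward
# so the user integrates the new input with their own ideas.
from dataclasses import dataclass

@dataclass
class AttentionGuide:
    stall_threshold: int = 2        # turns without a new idea before cueing outward
    turns_since_new_idea: int = 0
    pending_ca_viewpoint: bool = False

    def record_turn(self, contributed_new_idea: bool) -> str:
        """Update the state after each user turn and return a cue."""
        if contributed_new_idea:
            self.turns_since_new_idea = 0
        else:
            self.turns_since_new_idea += 1
        if self.pending_ca_viewpoint:
            # A CA viewpoint was just shown: redirect attention inward.
            self.pending_ca_viewpoint = False
            return "reflect: relate the agent's viewpoint to your own ideas"
        if self.turns_since_new_idea >= self.stall_threshold:
            # Idea generation has stalled: cue attention to external input.
            self.pending_ca_viewpoint = True
            return "attend: consider the agent's alternative viewpoint"
        return "continue"
```

The alternation between the "attend" and "reflect" cues operationalizes the balance argued for above: external input is offered only when the user's own generation stalls, and each external cue is immediately followed by an inward-facing integration prompt.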

4.3 Limitations and future prospects

The manipulation of adaptive cognitive diversity was based on eight predefined viewpoint types, operationalized through same-type versus different-type groupings. While this ensured that, in the diversity condition, computer-generated viewpoints covered multiple aspects and thereby promoted broader discussion, it also constrained the range of viewpoints to the predefined types, limiting flexibility. Future research could introduce generative AI to elicit more accurate, richer, and deeper interactions between computers and humans (Memmert and Tavanapour, 2023; Zhu et al., 2025). Regarding the manipulation of attention, the present study employed a recall instruction to direct participants’ focus, so the resulting effects may reflect memory processes rather than attention per se. Future work could employ methods such as eye-tracking to investigate participants’ attentional allocation (Michinov et al., 2015), thereby avoiding the ambiguity between attention and memory.

In addition, the present study imposed a fixed interaction scheme: participants received either similar or divergent viewpoints and were, or were not, required to attend to them. However, real-world discussion is inherently dynamic, and the roles of diverse viewpoints and attention may shift across different stages. Accordingly, future research should adopt flexible designs that let participants decide in real time whether to solicit CA input and whether to favor similar or divergent viewpoints. Such flexibility would enhance the relevance and necessity of the CA contributions and promote more targeted and effective interactions.

5 Conclusion

Drawing on theories related to group idea generation, this study examined the impact of adaptive cognitive diversity and attention on the effectiveness of discussion in a human-computer interaction system. The main conclusions are as follows: when the CA supplied adaptively differing viewpoints, discussion breadth widened and participants stayed focused on the ongoing topic; conversely, when the CA offered similar viewpoints, discussion depth increased. Attention plays a dual role in the discussion process, exerting both facilitative and inhibitory effects. When the CA provides differing viewpoints and participants attend to them, the breadth of discussion exceeds that of two-human interactions; at the same time, however, requiring participants to attend to computer-provided viewpoints hampers their own idea generation and impairs their understanding of the discussion issues. The presence of a CA enables participants to focus more on the current dialogue, achieving a depth of discussion comparable to that of a dyadic human conversation.

Data availability statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: figshare: https://figshare.com/s/7db9d1f93d5ca9e47208 and DOI: https://doi.org/10.6084/m9.figshare.28255064.

Ethics statement

The studies involving humans were approved by Xinxiang Medical University Ethics Committee. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.

Author contributions

HG: Writing – original draft, Funding acquisition, Conceptualization, Supervision. BZ: Methodology, Data curation, Writing – original draft, Investigation. XH: Methodology, Software, Data curation, Writing – original draft. CL: Writing – original draft, Methodology, Data curation. HC: Writing – original draft, Methodology, Investigation. XJ: Investigation, Writing – original draft, Methodology. HoZ: Funding acquisition, Writing – review & editing, Project administration, Supervision. HuZ: Writing – review & editing, Supervision.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. This research was funded by Science and Technology Tackling Project of Henan Province "The influence of adaptive cognitive diversity and fine elaboration on discussion effect in intelligent discussion system" (232102320156), Research and Practice Project on Higher Education Teaching Reform jointly supported by Henan Province (2024SJGLX0391) and Xinxiang Medical University (2024-XYJG-12) "Reform of psychiatry teaching based on generative AI in the context of new medical science". Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of Henan Provincial Science and Technology Department, Henan Provincial Department of Education.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Gen AI was used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Almodiel, M. C. (2022). Analyzing online learners’ knowledge construction in asynchronous discussion forums using interaction analysis model. Int. J. Inform. Technol. Govern. Educ. Bus. 4, 1–11. doi: 10.32664/ijitgeb.v4i1.93

Baruah, J., and Paulus, P. B. (2011). Category assignment and relatedness in the group ideation process. J. Exp. Soc. Psychol. 47, 1070–1077. doi: 10.1016/j.jesp.2011.04.007

Chi, M. T. H., and Wylie, R. (2014). The ICAP framework: linking cognitive engagement to active learning outcomes. Educ. Psychol. 49, 219–243. doi: 10.1080/00461520.2014.965823

Corrégé, J.-B., and Michinov, N. (2021). Group size and peer learning: peer discussions in different group size influence learning in a biology exercise performed on a tablet with stylus. Front. Educ. 6:733663. doi: 10.3389/feduc.2021.733663

de Araujo, A., Papadopoulos, P. M., McKenney, S., and de Jong, T. (2024). A learning analytics-based collaborative conversational agent to foster productive dialogue in inquiry learning. J. Comput. Assist. Learn. 40, 2700–2714. doi: 10.1111/jcal.13007

de Araujo, A., Papadopoulos, P. M., McKenney, S., and de Jong, T. (2025). Investigating the impact of a collaborative conversational agent on dialogue productivity and knowledge acquisition. Int. J. Artif. Intell. Educ. 35, 1–27. doi: 10.1007/s40593-025-00469-7

De Koning, B. B., Tabbers, H. K., Rikers, R. M. J. P., and Paas, F. (2007). Attention cueing as a means to enhance learning from an animation. Appl. Cogn. Psychol. 21, 731–746. doi: 10.1002/acp.1346

Dugosh, K. L., Paulus, P. B., Roland, E. J., and Yang, H.-C. (2000). Cognitive stimulation in brainstorming. J. Pers. Soc. Psychol. 79, 722–735. doi: 10.1037/0022-3514.79.5.722

Eryilmaz, E., Thoms, B., and Canelon, J. (2018). How design science research helps improve learning efficiency in online conversations. Commun. Assoc. Inf. Syst. 42, 548–580. doi: 10.17705/1CAIS.04221

Eryilmaz, E., Thoms, B., Mary, J., Kim, R., and Van Der Pol, J.. (2014). Attention guidance in online learning conversations. 47th Hawaii International Conference on System Sciences, 22–31.

Eryilmaz, E., Thoms, B., Mary, J., Kim, R., and van der Pol, J. (2015). Instructor versus peer attention guidance in online learning conversations. AIS Trans. Hum. Comput. Interact. 7, 234–268. doi: 10.17705/1thci.00074

Graesser, A. C. (2016). Conversations with AutoTutor help students learn. Int. J. Artif. Intell. Educ. 26, 124–132. doi: 10.1007/s40593-015-0086-4

Graesser, A. C., Cai, Z., Morgan, B., and Wang, L. (2017). Assessment with computer agents that engage in conversational dialogues and trialogues with learners. Comput. Human Behav. 76, 607–616. doi: 10.1016/j.chb.2017.03.041

Graesser, A. C., Fiore, S. M., Greiff, S., Andrews-Todd, J., Foltz, P. W., and Hesse, F. W. (2018). Advancing the science of collaborative problem solving. Psychol. Sci. Public Interest 19, 59–92. doi: 10.1177/1529100618808244

Graesser, A. C., Greiff, S., Stadler, M., and Shubeck, K. T. (2020). Collaboration in the 21st century: the theory, assessment, and teaching of collaborative problem solving. Comput. Human Behav. 104:106134. doi: 10.1016/j.chb.2019.09.010

Hew, K. F., Cheung, W. S., and Ng, C. S. L. (2010). Student contribution in asynchronous online discussion: a review of the research and empirical exploration. Instr. Sci. 38, 571–606. doi: 10.1007/s11251-008-9087-0

Kenworthy, J. B., Coursey, L. E., Dickson, J. J., Paulus, P. B., Rozich, B. C., and Marusich, L. R. (2024). The impact of intergroup idea exposure on group creative problem-solving. Group Process. Intergroup Relat. 27, 1452–1473. doi: 10.1177/13684302231216047

Kenworthy, J. B., Doboli, S., Alsayed, O., Choudhary, R., Jaed, A., Minai, A. A., et al. (2023). Toward the development of a computer-assisted, real-time assessment of ideational dynamics in collaborative creative groups. Creat. Res. J. 35, 396–411. doi: 10.1080/10400419.2022.2157589

Kuhn, D., Fraguada, T., and Halpern, M. (2025). How do new ideas come to be adopted during discourse? Int. J. Comput.-Support. Collab. Learn. 20, 223–248. doi: 10.1007/s11412-024-09441-4

La Scala, J., Bartłomiejczyk, N., Gillet, D., and Holzer, A.. (2025). Fostering innovation with generative AI: a study on human-AI collaborative ideation and user anonymity. Hawaii International Conference on System Sciences.

Lehman, B., and D’Mello, S. (2013). Inducing and tracking confusion with contradictions during complex learning. Int. J. Artif. Intell. Educ. 22, 85–105. doi: 10.3233/JAI-130025

Manabe, M., Fujiwara, K., Ito, K., and Itoh, Y. (2024). The association between synchrony and intellectual productivity in a group discussion: a study using the SenseChair. Humanit. Soc. Sci. Commun. 11. doi: 10.1057/s41599-023-02566-1

Memmert, L., and Tavanapour, N.. (2023). Towards human-AI-collaboration in brainstorming: empirical insights into the perception of working with a generative AI. 31st European Conference on Information Systems (ECIS). Available online at: https://aisel.aisnet.org/ecis2023_rp/429

Michinov, N., Jamet, E., Métayer, N., and Le Hénaff, B. (2015). The eyes of creativity: impact of social comparison and individual creativity on performance and attention to others’ ideas during electronic brainstorming. Comput. Human Behav. 42, 57–67. doi: 10.1016/j.chb.2014.04.037

Moussaïd, M., Noriega Campero, A., and Almaatouq, A. (2018). Dynamical networks of influence in small group discussions. PLoS One 13:e0190541. doi: 10.1371/journal.pone.0190541

Nguyen, H. (2023). Role design considerations of conversational agents to facilitate discussion and systems thinking. Comput. Educ. 192:104661. doi: 10.1016/j.compedu.2022.104661

Nijstad, B. A., and Stroebe, W. (2006). How the group affects the mind: a cognitive model of idea generation in groups. Personal. Soc. Psychol. Rev. 10, 186–213. doi: 10.1207/s15327957pspr1003_1

Nijstad, B. A., Stroebe, W., and Lodewijkx, H. F. M. (2002). Cognitive stimulation and interference in groups: exposure effects in an idea generation task. J. Exp. Soc. Psychol. 38, 535–544. doi: 10.1016/S0022-1031(02)00500-0

Paschoal, L. N., Loureiro Krassmann, A., Nunes, F. B., Morais De Oliveira, M., Bercht, M., Barbosa, E. F., et al. (2020). A systematic identification of pedagogical conversational agents. 2020 IEEE Frontiers in Education Conference (FIE), 1–9.

Paulus, P. B., and Brown, V. R. (2007). Toward more creative and innovative group idea generation: a cognitive-social-motivational perspective of brainstorming. Soc. Personal. Psychol. Compass 1, 248–265. doi: 10.1111/j.1751-9004.2007.00006.x

Paulus, P. B., and Kenworthy, J. B. (2021). “Theoretical models of the cognitive, social, and motivational processes in group idea generation” in Creativity and innovation. eds. S. Doboli, J. B. Kenworthy, A. A. Minai, and P. B. Paulus (Cham, Switzerland: Springer International Publishing), 1–20.

Reinert, C., Buengeler, C., Lehmann-Willenbrock, N., and Homan, A. C. (2025). Reviewing and revisiting the processes and emergent states underlying team diversity effects. Small Group Res. 56, 114–163. doi: 10.1177/10464964241275748

Richter, A., and Schwabe, G. (2025). “There is no ‘AI’ in ‘TEAM’! Or is there?” – towards meaningful human-AI collaboration. Australas. J. Inf. Syst. 29. doi: 10.3127/ajis.v29.5753

Rietzschel, E. F., Nijstad, B. A., and Stroebe, W. (2007). Relative accessibility of domain knowledge and creativity: the effects of knowledge activation on the quantity and originality of generated ideas. J. Exp. Soc. Psychol. 43, 933–946. doi: 10.1016/j.jesp.2006.10.014

Rummel, N., Walker, E., and Aleven, V. (2016). Different futures of adaptive collaborative learning support. Int. J. Artif. Intell. Educ. 26, 784–795. doi: 10.1007/s40593-016-0102-3

Schmidt, L., Piazza, A., and Wiedenhöft, C. (2023). ““Augmented brainstorming with AI” – research approach for identifying design criteria for improved collaborative idea generation between humans and AI” in Frontiers in artificial intelligence and applications. eds. P. Lukowicz, S. Mayer, J. Koch, J. Shawe-Taylor, and I. Tiddi (Amsterdam, The Netherlands: IOS Press).

Tegos, S., and Demetriadis, S. (2017). Conversational agents improve peer learning through building on prior knowledge. Educ. Technol. Soc. 20, 99–111. Available online at: http://www.jstor.org/stable/jeductechsoci.20.1.99

Walker, E., Rummel, N., and Koedinger, K. R. (2009). CTRL: a research framework for providing adaptive collaborative learning support. User Model. User Adapt. Interact. 19, 387–431. doi: 10.1007/s11257-009-9069-1

Zhu, Y., Liu, Q., and Zhao, L. (2025). Exploring the impact of generative artificial intelligence on students’ learning outcomes: a meta-analysis. Educ. Inf. Technol. 30, 16211–16239. doi: 10.1007/s10639-025-13420-z

Keywords: group learning, cognitive-social-motivational model, artificial intelligence in education, intelligent tutoring system, conversational agent

Citation: Gao H, Zhao B, Hu X, Liu C, Chen H, Jiang X, Zhang H and Zhou H (2025) The impact of adaptive cognitive diversity and attention on discussion effectiveness in an intelligent discussion system. Front. Comput. Sci. 7:1650189. doi: 10.3389/fcomp.2025.1650189

Received: 26 June 2025; Accepted: 04 November 2025;
Published: 17 November 2025.

Edited by:

Antonio Sarasa-Cabezuelo, Complutense University of Madrid, Spain

Reviewed by:

Yuki Nishida, Ritsumeikan University, Japan
Cleofé Alvites Huamaní, Cesar Vallejo University, Peru
Gloria Virginia, Duta Wacana Christian University, Indonesia

Copyright © 2025 Gao, Zhao, Hu, Liu, Chen, Jiang, Zhang and Zhou. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Hongxing Zhang, zhx166666@163.com
