
ORIGINAL RESEARCH article

Front. Digit. Health, 10 September 2025

Sec. Human Factors and Digital Health

Volume 7 - 2025 | https://doi.org/10.3389/fdgth.2025.1655860

This article is part of the Research Topic “Emotional Intelligence AI in Mental Health.”

Alter egos alter engagement: perspective-taking can improve disclosure quantity and depth to AI chatbots in promoting mental wellbeing

  • 1Department of Computer & Information Science & Engineering, Virtual Experiences Research Group, University of Florida, Gainesville, FL, United States
  • 2Institute of Food & Agricultural Sciences, University of Florida, Gainesville, FL, United States
  • 3Department of Computer Science, University of Central Florida, Orlando, FL, United States

Introduction: Emotionally intelligent AI chatbots are increasingly used to support college students’ mental wellbeing. Yet, adoption remains limited, as users often hesitate to open up due to emotional barriers and vulnerability. Improving chatbot design may reduce some barriers, but users still bear the emotional burden of opening up and overcoming vulnerability. This study explores whether perspective-taking can support user disclosure by addressing underlying psychological barriers.

Methods: In this between-subjects study, 96 students engaged in a brief reflective conversation with an embodied AI chatbot. Perspective-Taking participants defined and imagined a designated other’s perspective and responded from that viewpoint. Control participants provided self-information and responded from their own perspective. Disclosure was measured by quantity (word count) and depth (information, thoughts, and feelings). Additional immediate measures captured readiness, intentions for mental wellbeing, and attitudes toward the chatbot and intervention.

Results: Perspective-Taking participants disclosed significantly greater quantity, overall depth, thoughts depth, and frequencies of high disclosures of thoughts and information. Both groups showed significant improvements in readiness and intention to address mental wellbeing, with no difference in improvement magnitude. However, Control participants reported significantly lower (better) skepticism towards the intervention and greater increases in willingness to engage with AI chatbots than Perspective-Taking participants.

Discussion: This study highlights how perspective-taking and distancing may facilitate greater disclosure to AI chatbots supporting mental wellbeing. We explore the nature of these disclosures and how perspective-taking may drive readiness and enrich the substance of disclosures. These findings suggest a way for chatbots to evoke deeper reflection and effective support while potentially reducing the need to share sensitive personal self-information directly with generative AI systems.

1 Introduction

Amid the excitement and rigor of college life, many students encounter mental health challenges that can feel overwhelming and isolating. Recent surveys reveal that over 60% of U.S. college students report experiencing at least one mental health-related issue (e.g., stress, anxiety, depressive symptoms) during their education (1). The most recent Healthy Minds dataset (2023–2024) paints an even starker picture: 78% of students currently indicate some level of need related to emotional or wellbeing challenges, yet only 54% had ever reached out to professional counseling, with only 36% doing so in the prior year. Although recent trends may be more positive, available early data reported a median delay of 11 years between the onset of symptoms and initial treatment among a general U.S. population (2). In response to the rising demand and persistent barriers, research has increasingly turned to digital mental wellbeing support through emotionally intelligent AI chatbots. While other telehealth modalities also aim to expand access, such chatbots offer unique advantages by mitigating time, availability, and location barriers through their asynchronous nature. Numerous studies suggest that emotionally intelligent AI chatbots can even provide interim support for depressive symptoms (3–7). At the same time, recent systematic and meta reviews also highlight limited effectiveness and inconsistent results in addressing mental health concerns (8, 9). Hence, another promising application lies in chatbots’ abilities to empower users to proactively manage their wellbeing or seek additional support from professionals (10). This approach is motivated by evidence suggesting that replacing human care with automated systems in therapeutic contexts can leave users feeling discomfort and reluctance in deeper engagements (11–14). Such self-empowering chatbots have been leveraged to support goal-setting (15), adherence to medication goals (16), engagement with online therapy (17), smoking abstinence (18), and efficacy in addressing eating disorders (19, 20). Rapid advancements in large language models further improve emotionally intelligent AI chatbots’ abilities to overcome obstacles to promoting self-wellbeing (21).

Despite promising developments, meaningful engagement with chatbots for wellbeing is far from guaranteed. Recent reports indicate that U.S. college student adoption of chatbot mental health services remains limited, and attitudes are significantly more negative compared to traditional services (22). Lingering reluctance may result in limited self-disclosure, shallow interaction patterns, and reduced ability to provide meaningful support (23–27). Users may also abandon chatbot interactions due to technical issues, a perceived lack of human emotion or empathy, or doubts about the chatbot’s ability to provide meaningful support (13, 28). Achieving meaningful engagement with chatbots often requires users to self-disclose content that might not otherwise be disclosed due to vulnerability or discomfort. If the goal is to help users feel safe enough to share, then we must also address the psychological mechanisms that underlie emotional risk itself. To do so, many approaches aim to enhance chatbots to normalize stigmas (29, 30) or to become more accommodating in their conversations (e.g., more empathetic, human-like, acceptable) (31–37). However, the emotional burden still largely falls on users: users must choose to open up, risk being vulnerable, and overcome deeply personal inhibitions. Additional challenges arise with respect to privacy and ethical concerns in engaging with AI. Uncertainty in data storage, access, and confidentiality may impede disclosures (38). Moreover, ethical concerns around propagating prejudice due to algorithmic bias and limited capabilities for responding to crises raise critical questions about the safety, fairness, and reliability of AI-driven mental health interventions (39, 40). Even if such scenarios are “safe,” deeper consideration for the user may be needed to foster disclosure in such environments.

In an effort to mediate user reluctance in disclosing to chatbots, we propose employing perspective-taking. Taking another’s perspective allows one to “discern the thoughts, feelings, and motivations of [others]” (41) and offers emotional distance to reflect on distressing experiences (41–44). Given that users often adopt altered identities in digital contexts (45, 46), interactions with chatbots may offer a unique opportunity to explore identity through perspective-taking. We conceptualize taken perspectives in relation to the “self” and “other,” where “other” refers to external entities, such as strangers, friends, family, or even hypothetical entities (44, 47). The ability to take perspectives forms in early childhood and is considered critical to empathetic capability (48, 49). Empathy is often attributed to the ability to take perspectives (50): we share in other people’s emotions (51) and reconstruct their mental states for ourselves (52, 53). Though seemingly intuitive, perspective-taking relies on concrete knowledge to ground inferences about others’ actions (54) and is facilitated by greater self–other overlap (55). As a result, less informed perspectives (i.e., distal constructs) can result in abstractions, or the employment of general heuristics and social rules to estimate behavior (44, 56). Pertinent examples include abstract syntax in speech (e.g., more adjectives than descriptive verbs) (44, 57), higher-level terminology in descriptions (58–60), or more polite, indirect language (57, 61).

The primary motivation for perspective-taking in this study is its demonstrated impact on user behavior, attitudes, and outcomes. Perspective-taking can lead to improved prosocial behaviors and intentions of change (62–65). Pahl & Bauer found that briefly adopting the perspective of a young woman affected by environmental changes increased participants’ engagement with environmental materials and enhanced their intentions toward environmental action (66). Perspective-taking with outgroups has been shown to reduce aggression, increase empathy, and diminish stereotyping and bias (67–72). Perspective-taking may also lead to improved outcomes for the self (42, 73–75). Boland et al. found that adopting the perspectives of past selves or others, by receiving or offering compassion, can reduce emotional discomfort and enhance self-compassion (42). Perspective-taking is shaped by factors like altruistic concern and egotistic motivation (76, 77), yet a compelling effect may stem from the merging of self and other (55, 74, 78, 79). Aron and Aron describe this as incorporating others into the self, ultimately viewing them as extensions of oneself (80). Perspective-takers are thought to internalize the insights, thoughts, and emotions of others (81, 82). Perspective-takers may project their own traits onto others (e.g., “I liked this movie; therefore, my friend will too”) (44, 63). They may also adopt traits of others, as seen when taking a professor’s perspective increased self-ratings of intelligence (83). These effects intensify when individuals internalize others’ experiences, emotions, and attributes as part of their self-concept (78, 84, 85).

Similar to prior literature (64, 66, 73), this study investigates perspective-taking as a means to promote behavioral engagement, rather than investigating underlying mechanisms of overlap or attitudes towards the taken other. Specifically, this study examines how perspective-taking can enhance disclosure in conversations with emotionally intelligent AI chatbots. Our utilization of an emotionally intelligent AI chatbot entails an embodied conversational agent (ECA)-guided reflective conversation for addressing ambivalence to change, where theory-driven, AI-generated empathetic expressions of dialogue are adapted and delivered based on individual user disclosures (see Section 2.2.2). Although current approaches have demonstrated that AI systems can detect and express emotion with potential for higher sophistication (33, 86–89), the present study’s integration of an emotionally intelligent AI conversation serves as a platform to empirically evaluate perspective-taking. The findings of this work arise from a between-participants study that recruited primarily STEM students from the University of Florida who were randomized into Perspective-Taking and Control conditions. Perspective-Taking participants took the perspective of a self-defined, known other and engaged in the AI-guided conversation fully from the other’s perspective; Control participants completed identical tasks with a self-framing and engaged fully from their self-perspective. This study investigated perspective-taking’s effects on the following measures: disclosure (word quantity and categorical depth), readiness to address mental wellbeing, and attitudes toward the intervention and chatbot. Our main hypotheses predicted that Perspective-Taking would enhance engagement in the form of greater disclosure quantity and depth compared to the Control:

H1-DisclosureQuantity: Perspective-Taking participants will exhibit greater quantities of disclosure compared to Control participants.

H2-DisclosureDepth: Perspective-Taking participants will exhibit greater depths of disclosure compared to Control participants.

Secondary to disclosure, we investigated how taking an other’s perspective impacted readiness and attitudes. We hypothesized that Perspective-Taking participants’ readiness to address mental wellbeing would significantly improve overall from Pre- to Post-measure.

H3-ReadinessOverall: Perspective-Taking participants will exhibit improved readiness after the reflective conversation on wellbeing.

Though it was not fully expected that perspective-taking would outperform a self-perspective in a conversation driven by disclosure and self-reflection, we deferred the remaining outcome hypotheses in favor of the Perspective-Taking (experimental) condition:

H4-ReadinessComparison: Perspective-Taking participants will exhibit greater improvements on readiness compared to Control participants.

H5-AttitudesIntervention: Perspective-Taking participants will exhibit more positive attitudes towards the present wellbeing intervention compared to Control participants.

H6-AttitudesChatbots: Perspective-Taking participants will exhibit greater improvements on attitudes towards AI wellbeing chatbots compared to Control participants.

2 Materials and methods

Two main conditions were examined to assess the impact of perspective-taking: Perspective-Taking (perspective of an other) and Control (perspective of self). All participants completed two main intervention steps consisting of perspective-taking tasks and a reflective conversation. The primary difference lay in the framing of the tasks themselves (see Table 1).


Table 1. Summary of the study conditions, their perspectives, and task framing. Both conditions completed the same steps; however, the framing of the tasks differed based on whether participants took an other’s perspective (Perspective-Taking) or engaged as the self (Control). A high-level overview of the framing is provided, but see Section 2.1 for specific details.

2.1 Study design

Prior to the study start, participants selected between a male and female ECA for the remainder of the interaction. The ECA introduced itself and provided an overview of the study based on the participant’s assigned condition. The study consisted of two main phases of interaction described in this section: perspective-taking and reflective conversation.

2.1.1 Perspective-taking phase

Participants in the Perspective-Taking condition were instructed to identify, describe, and imagine another’s perspective during this phase. Rather than being given a fictional persona, participants chose a real person in their life, such as a friend or family member, who would benefit from the reflective conversation. The decision to have participants define a real, known other who might benefit from the interaction was based on prior research and considerations specific to this study. First, perspective-taking may fail due to insufficient information or a lack of reason to set aside egocentric bias and take perspectives (90, 91), and greater proximity increases the likelihood of adopting an other’s perspective (76). Allowing participants to define a known other who might benefit from the conversation offers greater familiarity with the other’s mental wellbeing and a meaningful reason to take their perspective. Second, establishing conversational depth was a high priority for the study’s wellbeing aims. An other who is too distant from the participant may be difficult to portray, leading to more abstract, higher-level responses (44). Given strong evidence that people can spontaneously take perspectives or empathize without prompting (92–95), allowing participants to define their own target seemed suitable for supporting perspective-taking in this context.

To support effective perspective-taking, the task extended beyond the typical narrative and imaginative phases commonly used in prior studies (50, 66, 96). This study used a persona-crafting task called empathy mapping, which aligns with established perspective-taking processes (54). Given empathy’s strong connection to perspective-taking (62, 92, 97), empathy mapping was appropriate, as it helps participants understand others by viewing the world through their eyes and evokes empathy through persona design (98). Empathy mapping aimed to personify the other by capturing demographic information, personality traits, values, wellbeing concerns, goals, and brief imaginative descriptions of the other’s life. These fields were further informed by literature on empathy mapping and the design of patient personas in eHealth interventions (99–101).

The AI chatbot guided participants through the empathy mapping tasks, explaining the aims and requirements for each prompted item. After designing the persona, participants reviewed their other’s persona and imagined their other’s perspective and experiences. To deepen perspective-taking and ensure participants responded only from their other’s viewpoint, the AI chatbot provided mock conversation prompts for first-person replies as the other. Afterwards, participants began the reflective conversation by entering the following phrase: “I am ready to play the role of [alias]” (where “alias” refers to the other’s defined name). In the Control condition, the same empathy mapping and mock scenario tasks were delivered by the AI chatbot to maintain structural parity. The difference in conditions arises in the framing of the task as a personal intake in the Control, rather than perspective-taking.
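To make the empathy-mapping output concrete, the sketch below shows one plausible way to represent the resulting persona. The field names and the helper are hypothetical illustrations (the study does not publish its schema); only the quoted readiness phrase comes from the procedure above.

```typescript
// Illustrative record produced by the empathy-mapping task.
// All field names are hypothetical stand-ins for the study's actual schema.
interface EmpathyMapPersona {
  alias: string;                 // name used to enter the reflective conversation
  demographics: { age?: number; occupation?: string };
  personalityTraits: string[];   // e.g., "reserved", "driven"
  values: string[];              // what the other cares about
  wellbeingConcerns: string[];   // concerns the conversation should address
  goals: string[];               // hoped-for changes
  imaginedLife: string;          // brief imaginative description of the other's life
}

// Participants confirmed readiness with the scripted phrase from the text:
const readyPhrase = (p: EmpathyMapPersona): string =>
  `I am ready to play the role of ${p.alias}`;
```

In the Control condition, the same structure would simply be framed as a personal intake, with the participant’s own details filling the fields.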

2.1.2 Reflective conversation phase

The reflective conversation was designed to evoke disclosure from participants conducive to building their readiness to address their wellbeing. Therefore, the conversation was designed using principles from motivational interviewing, a client-centered approach to enhance readiness, resolve ambivalence, and encourage capability and autonomy (102, 103). To enable meaningful comparison of disclosure quantity and depth, a fixed set of conversation items replaced the open-ended format typical of motivational interviews. This design allowed participants to receive the same, verbatim open-ended questions across interactions. Furthermore, the AI chatbot was designed to convey empathy and emotion in response, which can influence disclosure attitudes (104, 105). A structured conversation also afforded consistency in the quantity and depth of empathetic expression from the AI chatbot. The rule-based reflective conversation script was designed by two authors: MV, an expert in health communication trained in motivational interviewing, and CY, who received the standard full-day training in motivational interviewing (106). A formal pilot with (n = 58) participants was conducted while iteratively updating the conversation’s script.

Four classifications of motivational interviewing strategies reviewed by Hardcastle et al. served as sub-phases for the present reflective conversation (107). In line with motivational interviewing’s open nature, the authors describe their classifications as identified themes rather than strict design requirements for conversations. Their classifications include motivational interviewing strategies for engaging, focusing, evoking, and planning. Although the conversation includes both closed- and open-ended items per Hardcastle et al., we focus here on the nine open-ended disclosure items (see Section 2.3.1). These nine disclosure items draw strategies directly from three of the four classifications (see Table 2) (107):

  • Engaging: two “open-ended question” disclosure items.
  • Evoking: five disclosure items on “troubleshooting” barriers to change, “looking forward” to future possibilities, “identifying past successes” in coping strategies, “exploring values” relating to the wellbeing concern or behavior, and “brainstorming” options to change.
  • Planning: two disclosure items, “considering change options” and “developing a change plan” toward one concrete, self-designed next step.

In Focusing, participants were given opportunities to engage with a set of resources (NIMH & CDC) containing techniques for improving mental wellbeing through closed-ended responses.


Table 2. The specific strategies employed for the conversation’s nine disclosure items and empathetic expressions. The disclosure strategies refer to the nine open-input items analyzed, and the empathetic expression strategies illustrate how prompts were designed (107).

Participants completed the reflective conversation by responding to the AI chatbot’s nine disclosure items and additional closed-ended items across the four sub-phases. With each disclosure response from the participant, the AI chatbot provided an empathetic expression before delivering the ensuing prompt (see Section 2.2.2 for implementation details). To ensure proper study completion, static interface messages reminded participants to engage from the defined perspective, and the AI chatbot’s first question asked participants to provide their defined alias for the conversation. Participants who provided an alias differing from their taken perspective were removed from analyses. Perspective-Taking participants responded from the designated other’s perspective, while Control participants responded from their own. Following the conversation, participants engaged in a short transitional phase to return to their own perspective, labeled self-reflection in the Control group. During this transition, participants were told to momentarily pause to reorient to their own perspective and experiences and to self-reflect on the conversation. Perspective-Taking participants were reminded to complete post-surveys from their own perspective, as in the pre-survey.

2.2 AI chatbot and empathetic expression protocol

This section describes the architecture of the AI chatbot and the design of its empathetic expressions. The AI chatbot interaction is built on a Node.js framework, which is commonly used to build and deploy web applications. The study was deployed asynchronously over the web, and participants were required to complete it on a web-enabled desktop or laptop device.

2.2.1 AI chatbot

This section describes the generation of the ECAs, their verbal responses, and their corresponding non-verbal behaviors. ECAs were employed because evidence indicates they can provide a level of human touch and foster greater willingness to disclose (108–110). Each ECA is designed via ReadyPlayerMe, a free-to-use online tool for generating 3D models that can be rigged, rendered, and utilized on the web using the Three.js library. ReadyPlayerMe provides integrated blendshapes for the model, which support the non-verbal behaviors of animation and lip-syncing. Male and female ECA options were generated (see Figure 1) and included in the pilot tests with the (n = 58) participants to broadly check for any negative sentiments in design choices. Upon accessing the study’s webpage, the ReadyPlayerMe-exported ECA model is loaded and rendered on users’ devices.


Figure 1. The intervention interfaces illustrating the male and female ECA. (Left) mock conversation scenario in perspective-taking phase and (Right) sample disclosure item Q2 in reflective conversation phase. Avatar created using https://readyplayer.me/.

When a participant interacts with the ECA, verbal responses containing text and audio are generated statically or dynamically using the rule-based conversation script and LLMs. To generate the verbal response, OpenAI’s Completions (4o-mini) and Text-To-Speech (tts-1) models are employed (male voice: echo; female voice: shimmer). The conversation script indicates how text and audio responses should be statically or dynamically generated. Static verbal responses are pre-generated to control the interactions so that participants receive identical responses when necessary. Static verbal responses from the ECA include the nine disclosure items in the reflective conversation or the empathy mapping items in the perspective-taking phase. Both conditions followed the same rule-based conversation script during the reflective conversation, producing identical static verbal responses. However, the perspective-taking phase used two distinct scripts due to differences in the empathy mapping framing. In contrast, dynamic verbal responses are generated in real-time based on individual participant queries across both conditions. Dynamic verbal responses largely pertain to the empathetic expressions delivered in the reflective conversation. Where the script calls for a dynamic verbal response, an empathetic expression strategy guides the LLM (see Section 2.2.2). The ECA immediately responds with a verbal backchannel (e.g., “Thanks for being open. I’m working on generating something thoughtful based on what you’ve shared.”) to acknowledge input and mask LLM response delays. The system generates dynamic verbal responses and queues them to deliver after the verbal backchannel finishes.
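As a concrete illustration of this turn sequence, the following sketch pairs the immediate verbal backchannel with the dynamically generated empathetic expression. It assumes the official openai Node.js client and the model names reported above; the `speak` delivery hook and the prompt wording are hypothetical stand-ins for the study’s actual implementation.

```typescript
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical delivery hook into the ECA front end (stubbed for this sketch).
async function speak(text: string, audio: Buffer): Promise<void> {
  console.log(`ECA says: ${text} (${audio.length} bytes of audio)`);
}

// Synthesize speech for a verbal response (male ECA voice shown).
async function tts(input: string): Promise<Buffer> {
  const res = await openai.audio.speech.create({
    model: "tts-1",
    voice: "echo", // "shimmer" for the female ECA
    input,
  });
  return Buffer.from(await res.arrayBuffer());
}

// One dynamic turn: acknowledge immediately, generate in parallel,
// and queue the empathetic expression behind the backchannel.
async function dynamicTurn(userDisclosure: string, strategyPrompt: string) {
  const backchannel =
    "Thanks for being open. I'm working on generating something " +
    "thoughtful based on what you've shared.";
  const delivering = tts(backchannel).then((audio) => speak(backchannel, audio));

  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: strategyPrompt }, // theory-driven strategy
      { role: "user", content: userDisclosure },
    ],
  });
  const reply = completion.choices[0].message.content ?? "";
  const replyAudio = await tts(reply);

  await delivering; // the backchannel masks the generation latency
  await speak(reply, replyAudio);
}
```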

The ECAs perform non-verbal behaviors such as animations and lip-syncing that correspond to their verbal responses, implemented using the open-source repository TalkingHead.1 For animations, the ECA employs template behaviors when idling or speaking (e.g., standing straight, leaning to the side, gestures). When the ECA is idle, a sequence of randomized idle poses with a generic breathing animation is rendered. When the ECA is speaking (i.e., when a verbal response is delivered), additional animations were integrated alongside the randomized talking poses (e.g., a wave when the ECA introduces itself). For lip-syncing, transcriptions and timestamps of the spoken audio are derived from the verbal responses. The transcription is processed to extract individual phonemes, which are mapped to corresponding visemes. These visemes are coded in the lip-sync system using Oculus Lipsync and TalkingHead. Timestamps show when to apply visemes to the ECA’s facial blendshapes during playback to simulate natural speech. In sum, a typical conversation turn will entail: receiving user input, delivering verbal backchannels, generating appropriate (dynamic) verbal response for empathetic expression, retrieving subsequent (static) verbal response for disclosure or closed-ended item, delivering entire verbal response, animating non-verbal behaviors, and synchronizing lip movements to verbal response.
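The sketch below illustrates the shape of the phoneme-to-viseme step under simplifying assumptions: the mapping is a small excerpt (Oculus Lipsync defines fifteen visemes), the timestamped phonemes are assumed to come from the transcription step above, and blendshape names follow ReadyPlayerMe’s `viseme_*` convention. The production system delegates this work to TalkingHead.

```typescript
// Excerpt of a phoneme-to-viseme map using Oculus Lipsync viseme names.
const PHONEME_TO_VISEME: Record<string, string> = {
  AA: "aa", AE: "aa", EH: "E", IY: "ih",
  OW: "oh", UW: "ou", P: "PP", B: "PP",
  F: "FF", V: "FF", S: "SS", T: "DD",
};

interface TimedViseme { viseme: string; startMs: number; endMs: number }

// Convert timestamped phonemes into a viseme schedule for playback.
function scheduleVisemes(
  phonemes: { phoneme: string; startMs: number; endMs: number }[],
): TimedViseme[] {
  return phonemes.map((p) => ({
    viseme: PHONEME_TO_VISEME[p.phoneme] ?? "sil", // silence as fallback
    startMs: p.startMs,
    endMs: p.endMs,
  }));
}

// Each frame during audio playback, weight the active viseme's blendshape.
function applyViseme(
  mesh: {
    morphTargetDictionary: Record<string, number>;
    morphTargetInfluences: number[];
  },
  schedule: TimedViseme[],
  nowMs: number,
) {
  mesh.morphTargetInfluences.fill(0);
  const active = schedule.find((v) => nowMs >= v.startMs && nowMs < v.endMs);
  if (active) {
    const idx = mesh.morphTargetDictionary[`viseme_${active.viseme}`];
    if (idx !== undefined) mesh.morphTargetInfluences[idx] = 1.0;
  }
}
```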

2.2.2 Empathetic expressions of dialogue

While numerous articles explore opportunities to detect and convey emotion accordingly (33, 86–89), the present work primarily focuses on how emotionally intelligent systems can be further enhanced by psychological theories of perspective-taking and self-distancing. However, establishing emotional intelligence in the AI chatbot remains critical to the study’s mental wellbeing design and to understanding how perspective-taking can enhance such chatbots. Thus, we designed the AI chatbot to convey empathetic expression strategically in response to individual participant disclosures. In health communication, there are opportunities when empathy must be conveyed (the when) and corresponding expressions or representations of empathy (the what) (111, 112). With AI chatbots, frameworks suggest that similar processes of recognition and communication can be administered (113, 114). Focusing on perspective-taking, this study aims to streamline the process using a rule-based, structured approach with LLMs. The rules determine when to prompt during the nine open-ended disclosure items, while the LLMs use motivational interviewing strategies to generate what to say through empathetic expressions.

The AI chatbot’s empathetic expressions are based on Hardcastle et al.’s motivational interviewing strategy classifications, which also guided the design of the disclosure items (107). The present study’s empathetic expression strategies are primarily derived from relational strategies within the four classifications. The nature of the previous disclosure item primarily guides the empathetic expression used in the reflective conversation script. Open-ended questions like Q1 and Q2 engage participants by asking them to describe a mental wellbeing concern or goal and its impact on them. Hardcastle et al.’s strategies of offering emotional support, summarization/reflective statements, and reframing help participants feel heard and encourage reflection. In questions that help the participant plan, like Q8 and Q9, it is more important to emphasize autonomy in the participant’s choice and support their change and persistence (102). Table 2 illustrates the specific disclosure and empathetic expression strategies utilized in the present study, as listed directly from Hardcastle et al.’s classifications (107). For each disclosure, emotional dialogue expressions in dynamic verbal responses are generated by identifying the relevant theory-driven strategy, adapting to user disclosures and conversation history, and prompting the AI model accordingly. By anchoring each empathetic expression in an established theoretical classification, the present study provides a controlled and interpretable environment to assess the impact of perspective-taking in conversations within AI chatbots whose expressions of emotion respond intelligently to user disclosures.
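A minimal sketch of this strategy-to-prompt step follows. The strategy descriptions and prompt wording are illustrative only, not the study’s actual prompts; the mapping mirrors the idea that each disclosure item is paired with a Hardcastle et al.-derived strategy that conditions the LLM’s empathetic expression.

```typescript
// Hypothetical item-to-strategy mapping (four of the nine items shown),
// loosely paraphrasing Hardcastle et al.-style relational strategies.
const EXPRESSION_STRATEGY: Record<string, string> = {
  Q1: "Offer emotional support and briefly reflect the stated concern.",
  Q2: "Summarize and reframe the disclosed impact so the participant feels heard.",
  Q8: "Emphasize the participant's autonomy over the options they are considering.",
  Q9: "Affirm the change plan and support the participant's persistence.",
};

type Turn = { role: "assistant" | "user"; content: string };

// The rules decide *when* (after each disclosure item); the prompt below
// guides *what* the LLM generates as an empathetic expression.
function buildEmpathyPrompt(itemId: string, history: Turn[]) {
  return [
    {
      role: "system" as const,
      content:
        `You are an empathetic wellbeing companion. ${EXPRESSION_STRATEGY[itemId]} ` +
        "Respond in two to three sentences and do not ask a new question.",
    },
    ...history, // prior disclosures ground the expression in the conversation
  ];
}
```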

2.3 Measures

To address the hypotheses, three primary constructs were investigated: disclosure, readiness, and attitudes.

2.3.1 Disclosure

Based on prior literature (115, 116), disclosure is assessed through measures of quantity and depth. For H1-DisclosureQuantity, LIWC-22 was used to capture word counts across the nine disclosure items to determine whether the disclosed quantity of words was altered by the perspective-taking manipulation (117). To supplement analyses of disclosure quantity, we calculated an abstractness score (1–5) across participants’ nine disclosure responses using the Linguistic Category Model (LCM) (118, 119). Seih et al.’s LIWC-22 dictionary and their described process for the TreeTagger2 tool were used to capture frequencies for the LCM (120, 121).
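To illustrate the two quantity measures, the sketch below computes a word count and an LCM abstractness score from words that have already been tagged. It is a simplified stand-in for the LIWC-22/TreeTagger pipeline and assumes the extended LCM weighting, from descriptive action verbs (1) up to nouns (5), which matches the 1–5 scale reported here.

```typescript
// Extended-LCM categories with their abstractness weights (assumed):
// descriptive action verbs (DAV) = 1 ... nouns (NOUN) = 5.
type LcmCategory = "DAV" | "IAV" | "SV" | "ADJ" | "NOUN";

const LCM_WEIGHT: Record<LcmCategory, number> = {
  DAV: 1, IAV: 2, SV: 3, ADJ: 4, NOUN: 5,
};

// Quantity: simple whitespace word count per disclosure response.
function wordCount(response: string): number {
  return response.trim().split(/\s+/).filter(Boolean).length;
}

// Abstractness: weighted mean over all words falling in an LCM category.
function lcmAbstractness(tags: LcmCategory[]): number {
  if (tags.length === 0) return NaN;
  const total = tags.reduce((sum, tag) => sum + LCM_WEIGHT[tag], 0);
  return total / tags.length; // 1 = concrete, 5 = abstract
}
```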

For H2-DisclosureDepth, qualitative analysis was performed using the process and categories defined by Barak & Gluck-Ofri to code each of the nine participant responses in terms of information, thoughts, and feelings (122). Each response was segmented into distinct statements and categorized as follows: information, when the writer shared personal details, experiences, or factual content; thoughts, when they expressed personal opinions or reflections; and feelings, when they conveyed emotional or affective responses. Within each category, one of three levels of depth was assigned: (1) no disclosure about the user in the category altogether, (2) a disclosure about the user but in general or mild expressions, or (3) a disclosure about the user in personally revealing, intimate, or deep expressions. Each response therefore received a score (1–3) for each of the categories of information, thoughts, and feelings. An overall depth score for the amount of disclosure is obtained by combining the levels of information, thoughts, and feelings for each response (122). Sample responses categorized as depth levels 1, 2, and 3 for each category can be found in Table 3.
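A small sketch of the scoring arithmetic, assuming that “combining” the levels means summing the three category codes per response (a 3–9 range consistent with the overall depth means reported in Section 3.1.2) and that participant-level scores average across the nine items:

```typescript
// Depth codes per Barak & Gluck-Ofri: 1 = none, 2 = general/mild,
// 3 = personally revealing, assigned for every category of every response.
interface DepthCodes {
  information: 1 | 2 | 3;
  thoughts: 1 | 2 | 3;
  feelings: 1 | 2 | 3;
}

// Overall depth for one response: sum of the three category levels (3-9).
function overallDepth(c: DepthCodes): number {
  return c.information + c.thoughts + c.feelings;
}

// Participant-level score: mean overall depth across the nine items.
function meanOverallDepth(items: DepthCodes[]): number {
  return items.reduce((sum, c) => sum + overallDepth(c), 0) / items.length;
}
```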


Table 3. Sample responses illustrating coded depths (1–3) of information, thoughts, and feelings from our study population. Each statement will always receive three codes; therefore, statements shown in this table may have received different scores for their non-represented categories (e.g., [P46] was rated as (depth = 1) no disclosure of feelings and (depth = 3) high disclosure of information).

A total of 96 participants properly completed the entire intervention, but the disclosure analysis includes 55 participants’ responses to the disclosure items: Control (n = 29 participants × 9 items = 261 items) and Perspective-Taking (n = 26 participants × 9 items = 234 items). Technical errors early in data logging prevented the capture of conversation logs for the remaining participants. The resulting disclosure analysis includes a robust set of (n = 493 items × 3 codes = 1479) codes, after validating responses and omitting (n = 2) responses due to invalid input. Authors AM, DT, and XP served as three independent coders with condition- and participant-anonymized, shuffled versions of the conversation transcripts. Each author had prior experience in qualitative methods and received training on the process by Barak & Gluck-Ofri before individually coding the same 30% subset of the data (n = 615 codes). Kendall’s W indicated statistically significant agreement among the three coders, W = 0.856, p < 0.001. Disputes within responses were settled as a group, and each coder individually coded a third of the remaining data. See Table 4 for quantities of depth at each level across participants’ nine disclosure items.
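For reference, a compact tie-corrected implementation of Kendall’s W over the coders’ ordinal (1–3) codes; this is the generic formulation of the statistic, not the study’s SPSS computation.

```typescript
// Tie-corrected Kendall's W for m raters over n items: ratings[rater][item].
function kendallsW(ratings: number[][]): number {
  const m = ratings.length;
  const n = ratings[0].length;
  let tieCorrection = 0; // sum of (t^3 - t) over tie groups, all raters
  const ranks = ratings.map((row) => {
    tieCorrection += tieTerm(row);
    return averageRanks(row);
  });
  // Rank sums per item and squared deviations from the expected mean sum.
  const sums = Array.from({ length: n }, (_, i) =>
    ranks.reduce((s, r) => s + r[i], 0));
  const meanSum = (m * (n + 1)) / 2;
  const S = sums.reduce((s, Ri) => s + (Ri - meanSum) ** 2, 0);
  return (12 * S) / (m * m * (n ** 3 - n) - m * tieCorrection);
}

// Average (mid) ranks, 1-based, with tied values sharing their mean rank.
function averageRanks(values: number[]): number[] {
  const order = values.map((v, i) => [v, i] as const).sort((a, b) => a[0] - b[0]);
  const ranks: number[] = new Array(values.length).fill(0);
  let i = 0;
  while (i < order.length) {
    let j = i;
    while (j + 1 < order.length && order[j + 1][0] === order[i][0]) j++;
    const avg = (i + j + 2) / 2; // mean of 1-based positions i+1 .. j+1
    for (let k = i; k <= j; k++) ranks[order[k][1]] = avg;
    i = j + 1;
  }
  return ranks;
}

// Tie term for one rater: sum of (t^3 - t) across groups of tied values.
function tieTerm(values: number[]): number {
  const counts = new Map<number, number>();
  for (const v of values) counts.set(v, (counts.get(v) ?? 0) + 1);
  let t = 0;
  for (const c of counts.values()) t += c ** 3 - c;
  return t;
}
```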


Table 4. Frequencies (n and %) of depth codes for each condition in terms of the categories of Information, Thoughts, and Feelings.

2.3.2 Readiness

Readiness for wellbeing change is assessed through measures of stage of readiness, a composite readiness score, and intention to address mental wellbeing. For stage and composite, we collected responses to the Readiness-to-Change Questionnaire (123). This questionnaire is grounded in the Transtheoretical Model of Change (TTM) (124), a structured and theoretical framework commonly used in health interventions and digital health (125, 126) to conceptualize behavior change as a progression through distinct stages (127). Computational modeling of TTM has demonstrated its validity in classifying users into these stages (128), and TTM-based digital interventions have shown efficacy in promoting behavioral change (129). When combined with empathetic communication strategies in chatbots, TTM-based assessments can enhance responsiveness to users’ psychological needs (113). We assess an individual’s readiness-to-change stage across three stages: Pre-Contemplation, Contemplation, and Action (130). The stage measure indicates whether the person is not yet considering change (PC), thinking about making a change (C), or actively working toward change (A). The composite readiness measure is produced by the following equation: composite = (C + A) − PC. Additionally, one single item was adapted from prior work to assess participant intention to address their mental wellbeing, Pre and Post (131, 132). H3-ReadinessOverall investigates within-condition changes from Pre to Post for the measures of stage, composite readiness, and intent. H4-ReadinessComparison investigates between-condition changes from Pre to Post for the measures of stage, composite readiness, and intent.
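The composite computation and the Pre-to-Post deltas analyzed under H3-ReadinessOverall and H4-ReadinessComparison reduce to simple arithmetic; the subscale values below are hypothetical and serve only to show the direction of the score.

```typescript
// Composite readiness from the Readiness-to-Change subscale scores:
// Contemplation plus Action minus Pre-Contemplation.
function compositeReadiness(pc: number, c: number, a: number): number {
  return c + a - pc;
}

// Hypothetical Pre/Post subscale scores for one participant.
const pre = compositeReadiness(10, 12, 9);  // (12 + 9) - 10 = 11
const post = compositeReadiness(8, 14, 12); // (14 + 12) - 8 = 18
const delta = post - pre;                   // +7: readiness improved Pre to Post
```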

2.3.3 Attitudes

The attitudinal metrics include a questionnaire on participant attitudes towards the present study’s wellbeing chatbot intervention (skepticism, confidence, technologization threat, anonymity) and a single-item measure of willingness to engage with AI chatbots for mental wellbeing. For H5-AttitudesIntervention, attitudes were measured through an adaptation of the Attitudes towards Psychological Online Interventions (APOI) questionnaire (133). The scale comprises four dimensions: skepticism and perception of risks, confidence in effectiveness, technologization threat, and anonymity benefits. For H6-AttitudesChatbots, willingness to engage with AI chatbots was measured Pre and Post through a single item similar to the prior intention item.

2.4 Procedure

We conducted a between-participant study using the described system with undergraduates at the University of Florida. Participants selected a time to participate in the study through one of the university’s research recruitment platforms, which provides course credit to students as compensation for research studies. After giving informed consent, participants completed the pre-survey measures of readiness listed in Section 2.3. Participants were then randomized into either the Perspective-Taking or Control condition. Each participant completed the intervention steps of perspective-taking and reflective conversation as described in Section 2.1. After concluding the intervention steps and reflecting from their self-perspective, participants completed the post-survey measures of readiness and attitudes described in Section 2.3, as well as demographics. Participants were debriefed on how their anonymized data would be used and were subsequently granted course credit for their participation.

2.5 Participants

An a priori power analysis using G*Power was conducted for a mixed-design ANOVA with 2 groups (between-subjects factor) and 2 time points (within-subjects factor). Assuming α = 0.05, power (1 − β) = 0.95, a medium effect size of f = 0.20, a correlation among repeated measures of 0.5, and sphericity met, the analysis yielded a minimum accepted sample size of N ≥ 84. This study was approved by the University of Florida Institutional Review Board, and all participants provided written informed consent. To account for dropout and errors in completion, a total of 99 participants were recruited via the research recruitment platform and completed the entirety of the procedure in Section 2.4. Three (n = 3) Perspective-Taking participants were excluded from analyses for introducing themselves with an alias deviating from their defined perspective’s alias (see Section 2.1.2). The final analysis included 96 participants, with (n = 48) participants each in the Control and Perspective-Taking conditions.

Participants ranged in age from 18 to 41 years (M = 21.8, SD = 2.98). Gender identities included 67% male, 29% female, and 4% non-binary or unreported. Participants identified as 52% White, 36% Asian or Pacific Islander, 4% Black or African American, 4% mixed, and 4% unreported, with 16% also identifying as Hispanic or Latino. In terms of education, all participants were students at the University of Florida, with 79% attending as undergraduate students and the remainder as graduate students. As the employed recruitment platform provides compensation for computer science-related courses, breakdowns of majors largely pertain to STEM: 71% computer science-related, 22% engineering, 5% mathematics or education-related, and 2% unlisted.

3 Results

After collection and coding, data pre-processing was conducted using Python (3.12.2). Statistical analyses were primarily performed using IBM SPSS Statistics (Version 30). Descriptive statistics were computed to summarize the key variables across conditions. The significance level was set at p < 0.05, and assumptions for each test (e.g., normality tests via Shapiro-Wilk) were evaluated before conducting the analyses. Assumptions for independent samples t-tests and ANCOVAs, using Pre as a covariate, were tested and revealed violations of normality in measures, p < 0.05. Therefore, Mann-Whitney U tests and aligned rank transform (ART) ANOVAs, with Condition (Control and Perspective-Taking) and Time (Pre and Post) as factors, were conducted for each measure. Effect sizes were calculated and listed via rank-biserial correlation (r), partial eta squared (η_p²), and Cramér’s V for the corresponding non-parametric tests and Chi-square tests. Post hoc pairwise comparisons for ART ANOVAs were performed using ART-C with a Holm correction across the six pairwise post hoc comparisons to control the familywise error rate (134, 135). Due to a lack of support for ART ANOVAs in SPSS (see Section 3.2 for analyses), ART ANOVAs were analyzed in R (4.5.0) using ARTool (135).

3.1 Disclosure

3.1.1 Quantity

Word Count. Mann-Whitney U test found a significant difference between conditions in word counts (averaged across the nine disclosure items), U = 523, z = 2.45, p = 0.014, r = 0.33. Word counts were significantly higher in Perspective-Taking (Mdn = 20.2, M = 21.9, SD = 11.7) compared to Control (Mdn = 10.4, M = 14.7, SD = 10.3) (see Figure 2).


Figure 2. Box plots with medians for disclosure for Control and Perspective-Taking in terms of (Left) quantities, (Middle) depth of thoughts, and (Right) depth overall. Quantity, depth of thoughts, and depth overall refer to means for word counts, depth (intimacy) of thoughts, and depth (intimacy) of overall content, respectively, across the nine disclosure items, with significance illustrated (**<0.01, *<0.05).

Abstractness. Mann-Whitney U test revealed no significant difference in abstractness via LCM scores between conditions, U = 380, z = 0.051, p = 0.960. Descriptives for abstractness (1 = concrete, 5 = abstract) are included for reference: Perspective-Taking (Mdn = 3.22, M = 3.27, SD = 0.243) and Control (Mdn = 3.22, M = 3.24, SD = 0.179).

3.1.2 Depth

In addition to Mann-Whitney U tests, Chi-square tests for homogeneity were employed to assess frequencies of depth 1, 2, or 3 for each category of information, thoughts, and feelings. Post hoc pairwise comparisons for Chi-squares were conducted using z-tests with a Bonferroni correction.

Information. Mann-Whitney U test revealed no significant difference in depth of information disclosure between conditions, U = 433, z = 0.952, p = 0.341.

Chi-square and post hoc tests indicated a significantly greater proportion of high disclosures (depth = 3) for Perspective-Taking compared to Control, χ²(2) = 11.0, p < 0.01, V = 0.15. In turn, a significantly lower proportion of low disclosures (depth = 2) was found for Perspective-Taking compared to Control. See Table 4 for coded depth frequencies and differences in Information.

Thoughts. Mann-Whitney U test found a significant difference in depth of thoughts disclosure between conditions, U = 549, z = 2.92, p < 0.01, r = 0.39. Depth of thoughts was significantly higher in Perspective-Taking (Mdn = 2.00, M = 1.97, SD = 0.318) compared to Control (Mdn = 1.67, M = 1.71, SD = 0.320) (see Figure 2).

Chi-square and post hoc tests indicated a significantly greater proportion of high disclosures (depth = 3) for Perspective-Taking compared to Control, χ²(2) = 13.4, p < 0.001, V = 0.16. In turn, a significantly lower proportion of no disclosures (depth = 1) was found for Perspective-Taking compared to Control. See Table 4 for coded depth frequencies and differences in Thoughts.

Feelings. Mann-Whitney U test revealed no significant difference in depth of feelings disclosure between conditions, U = 333, z = 0.938, p = 0.348.

Scores were heavily skewed toward no disclosure of feelings (depth = 1). Fisher’s exact test was conducted due to an inadequate sample size for the chi-square test of homogeneity (136). The distributions of feelings depth scores were not significantly different between conditions, p = 0.224.

Overall Depth. Mann-Whitney U test found a significant difference in overall disclosure depth between conditions, U = 504, z = 2.15, p = 0.032, r = 0.29. Overall depth was significantly higher in Perspective-Taking (Mdn = 4.94, M = 4.90, SD = 0.519) compared to Control (Mdn = 4.56, M = 4.63, SD = 0.530) (see Figure 2).

3.2 Readiness

Stage. The ART ANOVA revealed a significant main effect of Time, F(1, 94) = 17.5, p < 0.001, η_p² = 0.157, indicating an overall improvement in stage of readiness from Pre to Post across conditions. Post hoc analyses revealed a significant improvement in stage from Pre to Post for Control only, t(94) = 3.58, p < 0.01, r = 0.35. No significant main effect of Condition or Condition × Time interaction was found, suggesting that the magnitudes of improvement over time did not differ significantly. A separate analysis on the deltas from Pre-stage to Post-stage also found no significant difference between conditions, p > 0.05 (see Figure 3).


Figure 3. Box plots of Pre- and Post-readiness measures for Perspective-Taking and Control with medians: (Top-Left) stage of readiness for Pre-Contemplation, Contemplation, and Action, (Top-Right) composite readiness scores, and (Bottom) intent to address wellbeing. Significance within conditions from Pre to Post illustrated (***<0.001, **<0.01, *<0.05). No significant effects of Condition × Time interaction.

Composite. Similar to Stage, the ART ANOVA revealed a significant main effect of Time, F(1, 94) = 27.6, p < 0.001, η_p² = 0.227, indicating an overall improvement in composite readiness scores from Pre to Post across conditions. Post hoc analyses revealed significant increases in composite readiness from Pre to Post for each condition: Perspective-Taking (t(94) = 3.45, p < 0.01, r = 0.34) and Control (t(94) = 4.06, p < 0.001, r = 0.39). No significant main effect of Condition or Condition × Time interaction was found, suggesting that the magnitudes of improvement over time did not differ significantly. A separate analysis on the deltas from Pre-composite readiness to Post-composite readiness also found no significant difference between conditions, p > 0.05 (see Figure 3).

Intention. ART ANOVA revealed a significant main effect of Condition, F(1, 93) = 6.43, p = 0.013, η_p² = 0.065. Post hoc comparisons for Condition revealed significantly higher overall intentions in the Control compared to Perspective-Taking, p = 0.013, r = 0.25. A significant main effect of Time was also observed, F(1, 93) = 18.9, p < 0.001, η_p² = 0.169, indicating an overall improvement in intention to address mental wellbeing from Pre to Post across conditions. Post hoc analyses revealed significant increases in intentions from Pre to Post for each condition: Perspective-Taking (t(93) = 2.85, p = 0.022, r = 0.28) and Control (t(93) = 3.48, p < 0.01, r = 0.34). No significant effect of Condition × Time interaction was found, suggesting that the magnitudes of improvement over time did not differ significantly. A separate analysis on the deltas from Pre-intention to Post-intention also found no significant difference between conditions, p > 0.05 (see Figure 3).

3.3 Attitudes

Skepticism and perception of risks. Mann-Whitney U test found a significant difference in skepticism and perception of risks between conditions, U = 1558, z = 3.02, p < 0.01, r = 0.31. Skepticism and perception of risks were significantly higher (worse) in Perspective-Taking (Mdn = 2.75, M = 3.09, SD = 1.33) compared to Control (Mdn = 2.00, M = 2.31, SD = 0.733) (see Figure 4).


Figure 4. Box plots of attitudinal measures for Perspective-Taking and Control with medians: (Left) skepticism and perception of risks, and (Right) Pre- and Post-willingness to engage with AI chatbots for mental wellbeing. Significance in box plot between conditions and Pre to Post differences illustrated (***<0.001, *<0.05). There was a significant interaction effect of Condition × Time in willingness to engage in favor of Control.

Confidence in effectiveness. Mann-Whitney U test revealed no significant difference in confidence in effectiveness between conditions, U = 961, z = 1.42, p = 0.155.

Technologization threat. Mann-Whitney U test revealed no significant difference in technologization threat between conditions, U = 1122, z = 0.221, p = 0.825.

Anonymity benefits. Mann-Whitney U test revealed no significant difference in anonymity benefits between conditions, U = 1161, z = 0.063, p = 0.950.

Willingness to engage. ART ANOVA revealed a significant main effect of Time, F(1, 93) = 45.9, p < 0.001, η_p² = 0.331, indicating an overall improvement in willingness to engage with AI chatbots for mental wellbeing from Pre to Post across conditions. Post hoc analyses revealed significant increases in willingness from Pre to Post for each condition: Perspective-Taking (t(93) = 3.09, p = 0.010, r = 0.31) and Control (t(93) = 5.72, p < 0.001, r = 0.51). A significant Condition × Time interaction was also observed, F(1, 93) = 5.23, p = 0.024, η_p² = 0.053, suggesting that the effect of time differed between conditions. Additionally, the Control reported significantly greater willingness to engage with AI wellbeing chatbots at Post compared to Perspective-Taking, p = 0.032, r = 0.22 (see Figure 4).

4 Discussion

Our results suggest that perspective-taking can significantly alter the ways in which users disclose to chatbots. In line with hypotheses H1-DisclosureQuantity and H2-DisclosureDepth, Perspective-Taking participants disclosed significantly greater word quantities, depth of thoughts, and overall depth than Control participants. Perspective-taking also resulted in more frequent high-depth (level 3) disclosures in both information and thoughts compared to the Control. Results also showed significant improvement in all readiness measures across both conditions, supporting our hypothesis H3-ReadinessOverall. Improvements in readiness did not support our deferred choice of Perspective-Taking for H4-ReadinessComparison; surprisingly, we also found no interaction between Condition and Time, nor any difference in the rate of change (deltas) across readiness measures. The promising effects are tempered by attitudes: Perspective-Taking participants showed significantly greater skepticism and a less pronounced increase in willingness to engage with wellbeing chatbots than Control participants, contrary to H5-AttitudesIntervention and H6-AttitudesChatbots. We interpret the findings observed on disclosure and the implications of this work for wellbeing chatbots accordingly.

4.1 Interpreting effects on disclosure

To contextualize the effects of perspective-taking on disclosure, we provide interpretations of the improved disclosure, consider the nature of the disclosures, and identify limits to our disclosure findings.

Perspective-taking significantly improved the quantity and depth of participants’ disclosures (see Figure 2). Our findings echo prior work showing that perspective-taking can shift engagement behavior in applied contexts, now extended to chatbot-mediated disclosures (66, 137, 138). Such literature suggests perspective-takers often align their behavior with their expectations of the other’s imagined actions, which can override intrinsic behavioral constraints (50, 54, 62, 76, 91). In the present study, we suggest that the change in disclosure behavior stems from similar effects and indicates greater substance within these disclosures, rather than an abstract increase in verbosity. Despite prior claims that distal constructs and psychological distance promote abstraction (44, 139), we observed no such increase in abstract language among perspective-taking disclosures. The observed improvement was also not limited to quantity, as the depth of thoughts and overall disclosures were significantly greater when perspective-taking. This indicates that Perspective-Taking participants disclosed more personal and intimate content (115, 140). While quantity and depth often relate, lower-quantity disclosures can still result in higher depths (122), and their correlations are not necessarily positive (141). The observed findings of improved disclosure depths further suggest that perspective-taking fostered more substantive content from users, rather than a simple inflation of abstract or verbose wording.

While disclosure quantity and depth seem to have meaningfully improved, it is worthwhile to discuss the nature of the disclosures produced in the Perspective-Taking condition. A natural question arises about the self-relevance of these disclosures, as they were uttered wholly from the perspective of the designated other. Even if the disclosures do not directly mirror personal information or thoughts, prior literature notes that individuals often project self-relevant traits onto imagined others (44, 56). Although the present study did not directly measure overlap, the study design of self-designation of an other, imagined in the first person, aimed to promote overlap and afford successful perspective-taking (54, 142). Our earlier discussion of the lack of abstractness differences also aligns with such literature (56, 90, 139). Further, the results demonstrate that Perspective-Taking participants’ readiness improved despite disclosing from an other’s perspective. The readiness gains may reflect previously documented merging effects (55, 79, 80). Together, the study design and findings suggest it is plausible that Perspective-Taking participants projected some properties of the self (albeit likely to a lesser extent than Control) in their estimations of the other, resulting in the improved readiness outcomes. We ultimately still characterize our findings as an improvement in disclosure, rather than self-disclosure, recognizing the limitations in determining the extent to which the Perspective-Taking disclosures directly pertain to participants’ selves.

The claims on disclosure span multiple dimensions: quantity via word counts and abstractions, and depth via information, thoughts, and overall content. However, the disclosure of feelings remains an area for deeper investigation. Within the present study, few disclosures pertained to participant emotions or feelings, regardless of condition. Roughly 96% of the responses across both conditions were assigned (depth = 1) no disclosure of feelings. Since both conditions disclosed little emotional content, this may be explained by the fact that each of the nine disclosure items directly requested disclosure of information (e.g., experiences, background) or thoughts (e.g., opinions, goals, plans). While our reflective conversation did assess participants’ feelings and emotions, such sentiments were primarily captured in the closed-ended items, which could not be included in the analysis of the disclosure items. As a result, the responses to the nine disclosure items almost exclusively pertained to participants’ direct histories, experiences, thoughts, and opinions. However, the lack of emotional expression may also reflect a broader limitation of perspective-taking itself, which may enhance depth and thought but not necessarily encourage the disclosure of affective content. This possibility invites further investigation into whether perspective-taking facilitates cognitive but not emotional forms of disclosure.

4.2 AI chatbots to promote mental wellbeing

In light of findings on readiness, we discuss how perspective-taking led to such effects, implications for wellbeing chatbot interactions, and design considerations based on user attitudes.

While both conditions experienced significantly improved readiness outcomes, our findings suggest that the degree of improvement between conditions was not significantly different (see Figure 3). Though hypotheses were deferred in favor of Perspective-Taking, it would also stand to reason that speaking from one’s own perspective (Control) should naturally afford disclosures more pertinent to the self, as well as more self-tailored expressions of emotion from the chatbot. We provide several possible explanations of how Perspective-Taking disclosures may have led to seemingly comparable user outcomes. First, the improved readiness observed in Perspective-Taking likely reflects previously discussed mechanisms of activated motivations to change and blurred boundaries of helping the self or an other (66, 76, 77, 81, 82). While potential self-other overlap effects may also account for such improvements, another plausible factor is the therapeutic effect of writing about emotional experiences, which has been shown to enhance wellbeing (143, 144). Written emotional disclosures may cover topics such as emotional experiences, future aspirations, or past successes, akin to topics in our reflective conversation. Prior work has found that written forms of emotional disclosure can help relieve anxiety and depressive symptoms (145–147), with improvements in self-esteem (148) and even physical health symptoms (149–151). Interestingly, some studies have investigated the effects of written emotional disclosures from non-self perspectives. Greenberg et al. found that writing emotional disclosures from an unexperienced, imagined perspective led to improvements in health symptoms and lower immediate reports of depression, fatigue, and avoidance (152). King et al. also found that writing from a distanced perspective in the form of a hypothetical (ideal) self could elicit similar health effects in comparison to writing about self-experiences (146). Our findings echo prior work illustrating that writing emotional disclosures, regardless of perspective, may produce positive effects on the self. Finally, it is also worth briefly mentioning the potential role of the observed enhancements in disclosure in Perspective-Taking. The findings suggest that Perspective-Taking participants wrote more, and with greater depth, than the Control, which may have further contributed to overall improvements and the lack of significant differences between conditions. No formal analysis could be conducted on the relationship between disclosures and readiness, but such investigations could further elucidate how chatbots can promote self-outcomes.

Based on our findings and discussion, it would appear that direct self-disclosure may not be a strict requirement for promoting user outcomes with chatbots. This has numerous implications for pathways to promote interactions with emotionally intelligent AI. In chatbot conversations addressing highly stigmatized topics (e.g., severe health issues, sexual health, or mental health), users may limit their disclosures due to shame or fear of judgment (29, 30, 153). If our findings hold, a distanced perspective may be leveraged to overcome such stigmas and draw deeper disclosures for self-benefit. A similar domain that applies such techniques is therapeutic role-play, which has demonstrated that imaginative scenarios can be employed for self-understanding, improvement, and behavior change (154–156). The present study also supports preliminary findings suggesting that such engagements may be suitable for human-computer simulations (157). Another implication of our work concerns the ethical, safe usage of AI chatbots, especially with regard to user data privacy and security. Though chatbots have been shown to be a promising opportunity for health outcomes (3, 5, 6), concerns arise in the employment of generative AI for wellbeing. Generative AI models run the risk of memorizing or reproducing data, which poses further considerations for digital health conversations involving protected or sensitive health information (158). Generative AI also faces broad technology risks associated with compromised data and leaks (159). While cybersecurity safeguards and processing data in a de-identified state can provide a layer of security, even de-identified information can potentially be re-identified with real persons (158). Perspective-taking may offer a mitigation strategy for these risks: by encouraging distanced disclosure, users may still benefit from reflective engagement without exposing identifiable or sensitive information. Such efforts align with recent ethical recommendations that emphasize the need to minimize data exposure in AI-mediated mental health contexts (160, 161). Given the skepticism findings suggesting that the present intervention was perceived as less relevant by perspective-takers, distanced perspectives may also help buffer against negative effects arising from AI hallucinations, since inaccurate or misleading responses may not necessarily be interpreted as personally relevant or diagnostic (162). In this way, our study contributes to ongoing discussions about how to design AI systems that are both effective and ethically responsible in sensitive domains.
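As one illustration of the data-minimization layer described above, the Python sketch below scrubs regex-detectable identifiers from a user message before it would reach a generative model. The patterns, placeholder tags, and redact helper are hypothetical and deliberately simplistic; production systems would need far more robust PII detection (e.g., named-entity recognition), which this sketch does not attempt.

```python
import re

# Hypothetical placeholder tags mapped to simple identifier patterns.
# Deliberately minimal: names, addresses, and free-text identifiers
# would require NER-based detection beyond this sketch.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tags before the
    message is sent onward to a generative model."""
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(tag, text)
    return text

print(redact("I'm Sam, reach me at sam@example.edu or 352-555-0100."))
# -> I'm Sam, reach me at [EMAIL] or [PHONE].
```

On this view, distanced, perspective-taken disclosures would complement, rather than replace, such technical safeguards.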

The promising findings on disclosure and readiness should be interpreted in light of the more complex pattern observed in attitudes. Perspective-taking may promote disclosure behaviors, but it may come at the cost of the perceived personal applicability of the chatbot’s support. Of note, the skepticism measure was employed to gauge perceptions of the intervention’s ability to provide effective, personal support, and the resulting attitudes are consistent with expectations that Perspective-Taking would involve less self-relevant disclosures and/or support compared to Control. The findings of H5 and H6 also align with established literature on self-distancing and construal-level theory, which illustrates that distance reduces egocentric experiences with stimuli and leads to less self-relevant appraisals (44, 163). In other words, when individuals adopt another person’s perspective, they may feel less directly connected to the experience and perceive it as less personally relevant or useful, even if it encourages thoughtful reflection. Given the role that attitudes may play in one’s decision to engage with such digital interventions (164), this divergence in attitudes warrants additional consideration. The confound raises practical concerns for the design of supportive AI systems that leverage psychological distance. A lack of belief that the AI can support the user may lead to diminished future engagement (165), even if the intervention effectively prompts deeper disclosure in the present. Users may also resist advice from the AI chatbot, despite its potential effectiveness, due to distancing effects and reduced relevance (166). If perspective-taking prompts deeper reflection but undermines one’s attitudes towards using such systems, its standalone use may be insufficient. Perspective-taking and distancing theories could enhance wellbeing chatbot engagements, but may need to be complemented with strategies that restore personal resonance to foster congruently positive attitudes. Future work could investigate practices in therapeutic contexts where distancing arises spontaneously rather than through explicit instruction (163), or where patients switch between immersed and distanced perspectives (167).

4.3 Limitations and future work

There are limitations to this study that help contextualize its findings and identify avenues for future research. The focus of this work was on driving disclosure and wellbeing among university-attending populations with AI; as such, recruitment was conducted through a University of Florida student research platform. The resulting population consisted primarily of STEM students, which limits the generalizability of the results to broader student populations. STEM students may have more familiarity with AI than general populations, and differing mental health concerns may alter their usage of such systems (168). Furthermore, the present work involved a large qualitative corpus of 1,479 codes for participant disclosures, but some data were lost; their successful capture would have afforded a greater ability to analyze relationships between disclosures and outcomes. The lack of emotional disclosures in this structured, reflective conversation also limits our ability to understand how such methods can elicit affective engagement. Understanding these relationships could further clarify the role of disclosure as a mediating factor in AI chatbot engagements. A few study design limitations are noted for future work to help validate research with perspective-taking and emotionally intelligent AI chatbots. While the present perspective-taking intervention appeared effective, the absence of a placebo condition limits our ability to isolate the effects of the intervention content from broader engagement effects. The results of this work should be interpreted within the context of single-session mental wellbeing conversations. As a result, the present outcomes are confined to immediate effects on disclosure and readiness to address mental wellbeing; longitudinal effects on disclosure, or actual changes in healthy behavior, remain unexplored areas for future work. The intervention also relied on a carefully structured set of tasks to elicit the perspective within an asynchronous environment, and wellbeing interventions may struggle to incorporate such specific tasks within their contexts. Although prior research indicates that perspective-taking can occur more spontaneously (94, 95), future integrations may need to consider alternative methodologies to integrate distancing practically, especially with respect to the attitudinal findings on skepticism. Several related areas for investigation outside the current scope of this work include perceptions of the chatbot’s expression of emotion, participants’ attitudes toward their designated other, and self–other overlap. Future work should continue to address these limitations, as well as pursue the opportunities suggested by the implications of the present work.

5 Conclusion

AI chatbots continue to act as a medium for reducing barriers to mental health support, especially when supplemented with emotional intelligence. However, the capabilities of such chatbots, and their resulting outcomes, can be limited without meaningful engagement and disclosure from users: a conversation with little-to-no depth may only elicit a surface-level understanding of an individual’s wellbeing concerns and needs. As with support from a counselor, a friend, or a loved one, a chatbot’s capability to appropriately assist and empathize may increase when provided with greater quantity and depth of context. The findings of this study illustrate that perspective-taking may enhance disclosure to AI chatbots for wellbeing. Specifically, our results suggest that perspective-taking led participants to share significantly greater disclosure in terms of word quantity and depth across multiple categories, with limited evidence of abstraction beyond what was seen in our control. Furthermore, the AI chatbot intervention seemingly helped all participants improve their readiness and intentions to address mental wellbeing, and perspective-taking did not appear to diminish these improvements. In light of prior literature, our findings suggest that meaningful disclosure to chatbots for improving mental wellbeing readiness may not necessarily require direct self-disclosure. Accordingly, we describe implications for how perspective-taking and distancing theories may further enhance disclosure to chatbots in sensitive contexts, or in pursuit of minimizing the disclosure of sensitive self-information. Future work should continue to investigate how greater disclosure can be evoked to meaningfully foster user outcomes with emotionally intelligent AI chatbots, based on the limitations and emergent gaps identified in the present study.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving humans were approved by University of Florida Institutional Review Board. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.

Author contributions

CY: Conceptualization, Data curation, Formal analysis, Methodology, Software, Visualization, Writing – original draft, Writing – review & editing. RG: Methodology, Writing – review & editing, Formal analysis. MV: Methodology, Writing – review & editing, Conceptualization, Supervision. RosV: Methodology, Validation, Writing – review & editing. RohV: Methodology, Validation, Writing – review & editing. AM: Formal analysis, Software, Writing – review & editing. XP: Formal analysis, Software, Writing – review & editing. DT: Formal analysis, Software, Writing – review & editing. BL: Project administration, Resources, Supervision, Validation, Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that Generative AI was used in the creation of this manuscript. AI technology was used to improve the language, grammar, and readability of the manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fdgth.2025.1655860/full#supplementary-material

Footnotes

1. ^https://github.com/met4citizen/TalkingHead

2. ^https://www.ims.uni-stuttgart.de/en/research/resources/tools/treetagger/

References

1. Lipson SK, Zhou S, Abelson S, Heinze J, Jirsa M, Morigney J, et al. Trends in college student mental health and help-seeking by race/ethnicity: findings from the national healthy minds study, 2013–2021. J Affect Disord. (2022) 306:138–47. doi: 10.1016/j.jad.2022.03.038

2. Wang PS, Berglund PA, Olfson M, Kessler RC. Delays in initial treatment contact after first onset of a mental disorder. Health Serv Res. (2004) 39:393–416. doi: 10.1111/j.1475-6773.2004.00234.x

3. Abd-Alrazaq AA, Rababeh A, Alajlani M, Bewick BM, Househ M. Effectiveness and safety of using chatbots to improve mental health: systematic review and meta-analysis. J Med Internet Res. (2020) 22:e16021. doi: 10.2196/16021

4. Bickmore TW, Mitchell SE, Jack BW, Paasche-Orlow MK, Pfeifer LM, O’Donnell J. Response to a relational agent by hospital patients with depressive symptoms. Interact Comput. (2010) 22:289–98. doi: 10.1016/j.intcom.2009.12.001

5. Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (woebot): a randomized controlled trial. JMIR Ment Health. (2017) 4:e7785. doi: 10.2196/mental.7785

6. Gardiner PM, McCue KD, Negash LM, Cheng T, White LF, Yinusa-Nyahkoon L, et al. Engaging women with an embodied conversational agent to deliver mindfulness and lifestyle recommendations: a feasibility randomized control trial. Patient Educ Couns. (2017) 100:1720–9. doi: 10.1016/j.pec.2017.04.015

7. Schroeder J, Wilkes C, Rowan K, Toledo A, Paradiso A, Czerwinski M, et al. Pocket skills: a conversational mobile web app to support dialectical behavioral therapy. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. (2018). p. 1–15.

8. Feng Y, Hang Y, Wu W, Song X, Xiao X, Dong F, et al. Effectiveness of AI-driven conversational agents in improving mental health among young people: systematic review and meta-analysis. J Med Internet Res. (2025) 27:e69639. doi: 10.2196/69639

9. Limpanopparat S, Gibson E, Harris A. User engagement, attitudes, and the effectiveness of chatbots as a mental health intervention: a systematic review. Comput Hum Behav Artif Hum. (2024) 2:100081. doi: 10.1016/j.chbah.2024.100081

10. Casu M, Triscari S, Battiato S, Guarnera L, Caponnetto P. AI chatbots for mental health: a scoping review of effectiveness, feasibility, and applications. Appl Sci. (2024) 14:5889. doi: 10.3390/app14135889

11. Brown JE, Halpern J. AI chatbots cannot replace human interactions in the pursuit of more inclusive mental healthcare. SSM-Ment Health. (2021) 1:100017. doi: 10.1016/j.ssmmh.2021.100017

12. Khawaja Z, Bélisle-Pipon J-C. Your robot therapist is not your therapist: understanding the role of AI-powered mental health chatbots. Front Digit Health. (2023) 5:1278186. doi: 10.3389/fdgth.2023.1278186

13. You C, Ghosh R, Maxim A, Stuart J, Cooks E, Lok B. How does a virtual human earn your trust? Guidelines to improve willingness to self-disclose to intelligent virtual agents. In: Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents. (2022). p. 1–8.

14. Zhang Z, Wang J. Can AI replace psychotherapists? Exploring the future of mental health care. Front Psychiatry. (2024) 15:1444382. doi: 10.3389/fpsyt.2024.1444382

15. Aggarwal A, Tam CC, Wu D, Li X, Qiao S. Artificial intelligence–based chatbots for promoting health behavioral changes: systematic review. J Med Internet Res. (2023) 25:e40789. doi: 10.2196/40789

16. Bickmore TW, Puskar K, Schlenk EA, Pfeifer LM, Sereika SM. Maintaining reality: relational agents for antipsychotic medication adherence. Interact Comput. (2010) 22:276–88. doi: 10.1016/j.intcom.2010.02.001

17. Yasukawa S, Tanaka T, Yamane K, Kano R, Sakata M, Noma H, et al. A chatbot to improve adherence to internet-based cognitive–behavioural therapy among workers with subthreshold depression: a randomised controlled trial. BMJ Ment Health. (2024) 27:e300881. doi: 10.1136/bmjment-2023-300881

18. Olano-Espinosa E, Avila-Tomas JF, Minue-Lorenzo C, Matilla-Pardo B, Serrano MES, Martinez-Suberviola FJ, et al. Effectiveness of a conversational chatbot (dejal@bot) for the adult population to quit smoking: pragmatic, multicenter, controlled, randomized clinical trial in primary care. JMIR Mhealth Uhealth. (2022) 10:e34273. doi: 10.2196/34273

19. Fitzsimmons-Craft EE, Chan WW, Smith AC, Firebaugh M-L, Fowler LA, Topooco N, et al. Effectiveness of a chatbot for eating disorders prevention: a randomized clinical trial. Int J Eat Disord. (2022) 55:343–53. doi: 10.1002/eat.23662

20. Shah J, DePietro B, D’Adamo L, Firebaugh M-L, Laing O, Fowler LA, et al. Development and usability testing of a chatbot to promote mental health services use among individuals with eating disorders following screening. Int J Eat Disord. (2022) 55:1229–44. doi: 10.1002/eat.23798

21. Lawrence HR, Schneider RA, Rubin SB, Matarić MJ, McDuff DJ, Bell MJ. The opportunities and risks of large language models in mental health. JMIR Ment Health. (2024) 11:e59479. doi: 10.2196/59479

22. Rackoff GN, Zhang ZZ, Newman MG. Chatbot-delivered mental health support: attitudes and utilization in a sample of us college students. Digit Health. (2025) 11:20552076241313401. doi: 10.1177/20552076241313401

23. Carmichael L, Poirier S-M, Coursaris CK, Léger P-M, Sénécal S. Users’ information disclosure behaviors during interactions with chatbots: the effect of information disclosure nudges. Appl Sci. (2022) 12:12660. doi: 10.3390/app122412660

24. Coghlan S, Leins K, Sheldrick S, Cheong M, Gooding P, D’Alfonso S. To chat or bot to chat: ethical issues with using chatbots in mental health. Digit Health. (2023) 9:20552076231183542. doi: 10.1177/20552076231183542

25. Crutzen R, Bosma H, Havas J, Feron F. What can we learn from a failed trial: insight into non-participation in a chat-based intervention trial for adolescents with psychosocial problems. BMC Res Notes. (2014) 7:824. doi: 10.1186/1756-0500-7-824

26. Hill J, Ford WR, Farreras IG. Real conversations with artificial intelligence: a comparison between human–human online conversations and human–chatbot conversations. Comput Hum Behav. (2015) 49:245–50. doi: 10.1016/j.chb.2015.02.026

27. Nguyen M, Bin YS, Campbell A. Comparing online and offline self-disclosure: a systematic review. Cyberpsychol Behav Soc Netw. (2012) 15:103–11. doi: 10.1089/cyber.2011.0277

28. Saadati SA, Saadati SM. The role of chatbots in mental health interventions: user experiences. AI Tech Behav Soc Sci. (2023) 1:19–25. doi: 10.61838/kman.aitech.1.2.4

29. Branley-Bell D, Brown R, Coventry L, Sillence E. Chatbots for embarrassing and stigmatizing conditions: could chatbots encourage users to seek medical advice? Front Commun. (2023) 8:1275127. doi: 10.3389/fcomm.2023.1275127

30. Cui Y, Lee Y-J, Jamieson J, Yamashita N, Lee Y-C. Exploring effects of chatbot’s interpretation and self-disclosure on mental illness stigma. Proc ACM Hum-Comput Interact. (2024) 8:1–33. doi: 10.1145/3637329

31. Chin H, Song H, Baek G, Shin M, Jung C, Cha M, et al. The potential of chatbots for emotional support and promoting mental well-being in different cultures: mixed methods study. J Med Internet Res. (2023) 25:e51712. doi: 10.2196/51712

32. Denecke K, Abd-Alrazaq A, Househ M. Artificial Intelligence for chatbots in mental health: opportunities and challenges. In: Househ M, Borycki E, Kushniruk A, editors. Multiple Perspectives on Artificial Intelligence in Healthcare. Lecture Notes in Bioengineering. Cham: Springer (2021). p. 115–28. doi: 10.1007/978-3-030-67303-1_10

33. Edalat A, Hu R, Patel Z, Polydorou N, Ryan F, Nicholls D. Self-initiated humour protocol: a pilot study with an AI agent. Front Digit Health. (2025) 7:1530131. doi: 10.3389/fdgth.2025.1530131

34. Ennis E, O’Neill S, Mulvenna M, Bond R. Chatbots supporting mental health and wellbeing of children and young people; applications, acceptability and usability. In: European Conference on Mental Health. (2023).

35. Grové C. Co-developing a mental health and wellbeing chatbot with and for young people. Front Psychiatry. (2021) 11:606041. doi: 10.3389/fpsyt.2020.606041

36. Inkster B, Sarda S, Subramanian V. An empathy-driven, conversational artificial intelligence agent (Wysa) for digital mental well-being: real-world data evaluation mixed-methods study. JMIR Mhealth Uhealth. (2018) 6:e12106. doi: 10.2196/12106

37. Zhai C, Wibowo S. A systematic review on cross-culture, humor and empathy dimensions in conversational chatbots: the case of second language acquisition. Heliyon. (2022) 8:1–13. doi: 10.1016/j.heliyon.2022.e12056

38. Vaidyam AN, Wisniewski H, Halamka JD, Kashavan MS, Torous JB. Chatbots and conversational agents in mental health: a review of the psychiatric landscape. Can J Psychiatry. (2019) 64:456–64. doi: 10.1177/0706743719828977

39. Boucher EM, Harake NR, Ward HE, Stoeckl SE, Vargas J, Minkel J, et al. Artificially intelligent chatbots in digital mental health interventions: a review. Expert Rev Med Devices. (2021) 18:37–49. doi: 10.1080/17434440.2021.2013200

40. Huo B, Boyle A, Marfo N, Tangamornsuksan W, Steen JP, McKechnie T, et al. Large language models for chatbot health advice studies: a systematic review. JAMA Network Open. (2025) 8:e2457879. doi: 10.1001/jamanetworkopen.2024.57879

41. Gehlbach H, Brinkworth ME, Wang M-T. The social perspective taking process: what motivates individuals to take another’s perspective? Teach Coll Rec. (2012) 114:1–29. doi: 10.1177/016146811211400108

42. Boland L, Campbell D, Fazekas M, Kitagawa W, MacIver L, Rzeczkowska K, et al. An experimental investigation of the effects of perspective-taking on emotional discomfort, cognitive fusion and self-compassion. J Contextual Behav Sci. (2021) 20:27–34. doi: 10.1016/j.jcbs.2021.02.004

43. Kross E, Ayduk O. Making meaning out of negative experiences by self-distancing. Curr Dir Psychol Sci. (2011) 20:187–91. doi: 10.1177/0963721411408883

44. Liberman N, Trope Y, Stephan E. Psychological distance. Soc Psychol. (2007) 2:353–83.

45. Bargh JA, McKenna KY, Fitzsimons GM. Can you see the real me? Activation and expression of the “true self” on the internet. J Social Issues. (2002) 58:33–48. doi: 10.1111/1540-4560.00247

46. Bullingham L, Vasconcelos AC. “The presentation of self in the online world”: goffman and the study of online identities. J Inf Sci. (2013) 39:101–12. doi: 10.1177/0165551512470051

47. Liberman N, Trope Y. Traversing psychological distance. Trends Cogn Sci (Regul Ed). (2014) 18:364–9. doi: 10.1016/j.tics.2014.03.001

48. Kohlberg L. Moral stages and moralization: the cognitive-development approach. In: Lickona T, editor. Moral Development and Behavior: Theory Research and Social Issues. New York, NY: Holt, Rienhart, and Winston (1976). p. 31–53.

49. Piaget J. The Moral Judgment of the Child. London: Routledge (2013). Available online at: https://www.taylorfrancis.com/books/mono/10.4324/9781315009681/moral-judgment-child-jean-piaget

50. Batson CD, Early S, Salvarani G. Perspective taking: imagining how another feels versus imaging how you would feel. Pers Soc Psychol Bull. (1997) 23:751–8. doi: 10.1177/0146167297237008

51. Eisenberg N, Spinrad T, Sadovsky A. Empathy-related responding in children. In: Killen M, Smetana JG, editors. Handbook of Moral Development. 2nd ed. New York, NY: Psychology Press (2014). p. 184–207. Available online at: https://psycnet.apa.org/record/2013-21910-009

52. Davis JL, Love TP. Self-in-self, mind-in-mind, heart-in-heart: the future of role-taking, perspective taking, and empathy. In: Thye SR, Lawler EJ, editors. Advances in Group Processes. Bingley: Emerald Publishing Limited (2017). p. 151–74. Available online at: https://www.emerald.com/books/edited-volume/10630/Advances-in-Group-Processes

53. Hogan R. Development of an empathy scale. J Consult Clin Psychol. (1969) 33:307–16. doi: 10.1037/h0027580

54. Davis MH, Soderlund T, Cole J, Gadol E, Kute M, Myers M, et al. Cognitions associated with attempts to empathize: how do we imagine the perspective of another? Pers Soc Psychol Bull. (2004) 30:1625–35. doi: 10.1177/0146167204271183

55. Ames DL, Jenkins AC, Banaji MR, Mitchell JP. Taking another person’s perspective increases self-referential neural processing. Psychol Sci. (2008) 19:642–4. doi: 10.1111/j.1467-9280.2008.02135.x

56. Trope Y, Liberman N. Construal-level theory of psychological distance. Psychol Rev. (2010) 117:440. doi: 10.1037/a0018963

57. Semin GR, Fiedler K. The cognitive functions of linguistic categories in describing persons: social cognition and language. J Pers Soc Psychol. (1988) 54:558–68. doi: 10.1037/0022-3514.54.4.558

58. Hampson SE, John OP, Goldberg LR. Category breadth and hierarchical structure in personality: studies of asymmetries in judgments of trait implications. J Pers Soc Psychol. (1986) 51:37–54. doi: 10.1037/0022-3514.51.1.37

59. Liberman N, Trope Y. The role of feasibility and desirability considerations in near and distant future decisions: a test of temporal construal theory. J Pers Soc Psychol. (1998) 75:5–18. doi: 10.1037/0022-3514.75.1.5

60. Vallacher RR, Wegner DM. Levels of personal agency: individual variation in action identification. J Pers Soc Psychol. (1989) 57:660–71. doi: 10.1037/0022-3514.57.4.660

61. Brown P, Levinson SC. Politeness: Some Universals in Language Usage. Cambridge: Cambridge University Press (1987). Vol. 4. Available online at: https://www.cambridge.org/highereducation/books/politeness/89113EE2FB4A1D254D4A8D2011E542E4#overview

62. Batson CD, Chang J, Orr R, Rowland J. Empathy, attitudes, and action: can feeling for a member of a stigmatized group motivate one to help the group? Pers Soc Psychol Bull. (2002) 28:1656–66. doi: 10.1177/014616702237647

63. Galinsky AD, Moskowitz GB. Perspective-taking: decreasing stereotype expression, stereotype accessibility, and in-group favoritism. J Pers Soc Psychol. (2000) 78:708. doi: 10.1037/0022-3514.78.4.708

64. Uhl-Haedicke I, Klackl J, Muehlberger C, Jonas E. Turning restriction into change: imagine-self perspective taking fosters advocacy of a mandatory proenvironmental initiative. Front Psychol. (2019) 10:2657. doi: 10.3389/fpsyg.2019.02657

65. Van Loon A, Bailenson J, Zaki J, Bostick J, Willer R. Virtual reality perspective-taking increases cognitive empathy for specific others. PLoS One. (2018) 13:e0202442. doi: 10.1371/journal.pone.0202442

66. Pahl S, Bauer J. Overcoming the distance: perspective taking with future humans improves environmental engagement. Environ Behav. (2013) 45:155–69. doi: 10.1177/0013916511417618

67. Galinsky AD, Ku G. The effects of perspective-taking on prejudice: the moderating role of self-evaluation. Pers Soc Psychol Bull. (2004) 30:594–604. doi: 10.1177/0146167203262802

68. Richardson DR, Hammock GS, Smith SM, Gardner W, Signo M. Empathy as a cognitive inhibitor of interpersonal aggression. Aggress Behav. (1994) 20:275–89. doi: 10.1002/1098-2337(1994)20:4%3C275::AID-AB2480200402%3E3.0.CO;2-4

69. Seinfeld S, Arroyo-Palacios J, Iruretagoyena G, Hortensius R, Zapata LE, Borland D, et al. Offenders become the victim in virtual reality: impact of changing perspective in domestic violence. Sci Rep. (2018) 8:2692. doi: 10.1038/s41598-018-19987-7

70. Shaffer VA, Bohanek J, Focella ES, Horstman H, Saffran L. Encouraging perspective taking: using narrative writing to induce empathy for others engaging in negative health behaviors. PLoS One. (2019) 14:e0224046. doi: 10.1371/journal.pone.0224046

71. Shechtman Z, Tanus H. Counseling groups for Arab adolescents in an intergroup conflict in Israel: report of an outcome study. Peace Confl. (2006) 12:119–37. doi: 10.1207/s15327949pac1202_2

72. Vescio TK, Sechrist GB, Paolucci MP. Perspective taking and prejudice reduction: the mediational role of empathy arousal and situational attributions. Eur J Soc Psychol. (2003) 33:455–72. doi: 10.1002/ejsp.163

73. Blatt B, LeLacheur SF, Galinsky AD, Simmens SJ, Greenberg L. Does perspective-taking increase patient satisfaction in medical encounters? Acad Med. (2010) 85:1445–52. doi: 10.1097/ACM.0b013e3181eae5ec

74. Hodges SD, Clark BA, Myers MW. Better living through perspective taking. In: Biswas-Diener R, editor. Positive Psychology as Social Change. Dordrecht: Springer (2011). p. 193–218. doi: 10.1007/978-90-481-9938-9_12

75. Taylor SE, Brown JD. Illusion and well-being: a social psychological perspective on mental health. Psychol Bull. (1988) 103:193. doi: 10.1037/0033-2909.103.2.193

76. Batson CD. The Altruism Question: Toward a Social-Psychological Answer. New York, NY: Psychology Press (2014). Available online at: https://www.taylorfrancis.com/books/mono/10.4324/9781315808048/altruism-question-daniel-batson

77. Cialdini RB, Schaller M, Houlihan D, Arps K, Fultz J, Beaman AL. Empathy-based helping: is it selflessly or selfishly motivated? J Pers Soc Psychol. (1987) 52:749–58. doi: 10.1037/0022-3514.52.4.749

78. Batson CD, Sager K, Garst E, Kang M, Rubchinsky K, Dawson K. Is empathy-induced helping due to self–other merging? J Pers Soc Psychol. (1997) 73:495–509. doi: 10.1037/0022-3514.73.3.495

79. Goldstein NJ, Cialdini RB. The spyglass self: a model of vicarious self-perception. J Pers Soc Psychol. (2007) 92:402. doi: 10.1037/0022-3514.92.3.402

80. Aron A, Aron EN. Love and the Expansion of Self: Understanding Attraction and Satisfaction. New York, NY: Hemisphere Publishing Corp/Harper & Row Publishers (1986). Available online at: https://psycnet.apa.org/record/1986-98255-000

81. Cialdini RB, Brown SL, Lewis BP, Luce C, Neuberg SL. Reinterpreting the empathy–altruism relationship: when one into one equals oneness. J Pers Soc Psychol. (1997) 73:481–94. doi: 10.1037/0022-3514.73.3.481

82. Davis MH, Conklin L, Smith A, Luce C. Effect of perspective taking on the cognitive representation of persons: a merging of self and other. J Pers Soc Psychol. (1996) 70:713–26. doi: 10.1037/0022-3514.70.4.713

83. Galinsky AD, Wang CS, Ku G. Perspective-takers behave more stereotypically. J Pers Soc Psychol. (2008) 95:404. doi: 10.1037/0022-3514.95.2.404

84. De Cremer D. The closer we are, the more we are alike: the effect of self-other merging on depersonalized self-perception. Curr Psychol. (2004) 22:316–24. doi: 10.1007/s12144-004-1037-7

85. Krebs D. Empathy and altruism. J Pers Soc Psychol. (1975) 32:1134–46. doi: 10.1037/0022-3514.32.6.1134

86. Coelho J, Pécune F, Micoulaud-Franchi J-A, Bioulac B, Philip P. Promoting mental health in the age of new digital tools: balancing challenges and opportunities of social media, chatbots, and wearables. Front Digit Health. (2025) 7:1560580. doi: 10.3389/fdgth.2025.1560580

87. Gonzalez-Acosta AM, Vargas-Treviño M, Batres-Mendoza P, Guerra-Hernandez EI, Gutierrez-Gutierrez J, Cano-Perez JL, et al. The first look: a biometric analysis of emotion recognition using key facial features. Front Comput Sci. (2025) 7:1554320. doi: 10.3389/fcomp.2025.1554320

88. Rupp LH, Kumar A, Sadeghi M, Schindler-Gmelch L, Keinert M, Eskofier BM, et al. Stress can be detected during emotion-evoking smartphone use: a pilot study using machine learning. Front Digit Health. (2025) 7:1578917. doi: 10.3389/fdgth.2025.1578917

89. Valderrama CE, Sheoran A. Identifying relevant EEG channels for subject-independent emotion recognition using attention network layers. Front Psychiatry. (2025) 16:1494369. doi: 10.3389/fpsyt.2025.1494369

90. Epley N, Caruso EM. Perspective taking: misstepping into others’ shoes. In: Markman KD, Klein WMP, Suhr JA, editors. Handbook of Imagination and Mental Simulation. New York, NY: Psychology Press (2009). p. 295–309. Available online at: https://psycnet.apa.org/record/2008-07500-020

91. Kavanagh D, Barnes-Holmes Y, Barnes-Holmes D. The study of perspective-taking: contributions from mainstream psychology and behavior analysis. Psychol Rec. (2020) 70:581–604. doi: 10.1007/s40732-019-00356-3

92. Batson CD, Batson JG, Slingsby JK, Harrell KL, Peekna HM, Todd RM. Empathic joy and the empathy-altruism hypothesis. J Pers Soc Psychol. (1991) 61:413–26. doi: 10.1037/0022-3514.61.3.413

93. Davis MH. Empathy: A Social Psychological Approach. New York, NY: Routledge (2018). Available online at: https://www.taylorfrancis.com/books/mono/10.4324/9780429493898/empathy-mark-davis

94. Furlanetto T, Cavallo A, Manera V, Tversky B, Becchio C. Through your eyes: incongruence of gaze and action increases spontaneous perspective taking. Front Hum Neurosci. (2013) 7:455. doi: 10.3389/fnhum.2013.00455

95. Tversky B, Hard BM. Embodied and disembodied cognition: spatial perspective-taking. Cognition. (2009) 110:124–9. doi: 10.1016/j.cognition.2008.10.008

96. Weyant JM. Perspective taking as a means of reducing negative stereotyping of individuals who speak English as a second language. J Appl Soc Psychol. (2007) 37:703–16. doi: 10.1111/j.1559-1816.2007.00181.x

97. Batson CD, Polycarpou MP, Harmon-Jones E, Imhoff HJ, Mitchener EC, Bednar LL, et al. Empathy and attitudes: can feeling for a member of a stigmatized group improve feelings toward the group? J Pers Soc Psychol. (1997) 72:105–18. doi: 10.1037/0022-3514.72.1.105

98. Ferreira B, Silva W, Oliveira E, Conte T. Designing personas with empathy map. In: SEKE. (2015). Vol. 152.

99. Bartels SL, Taygar AS, Johnsson SI, Petersson S, Flink I, Boersma K, et al. Using personas in the development of ehealth interventions for chronic pain: a scoping review and narrative synthesis. Internet Interv. (2023) 32:100619. doi: 10.1016/j.invent.2023.100619

100. Bland D. Agile coaching tip–what is an empathy map (2012). Available online at: http://www.bigvisible.com/2012/06/what-is-an-empathy-map (Accessed June 10, 2025).

101. Ledel Solem IK, Varsi C, Eide H, Kristjansdottir OB, Børøsund E, Schreurs KM, et al. A user-centered approach to an evidence-based electronic health pain management intervention for people with chronic pain: design and development of epio. J Med Internet Res. (2020) 22:e15889. doi: 10.2196/15889

102. Hettema J, Steele J, Miller WR. Motivational interviewing. Annu Rev Clin Psychol. (2005) 1:91–111. doi: 10.1146/annurev.clinpsy.1.102803.143833

103. Rollnick S, Miller WR. What is motivational interviewing? Behav Cogn Psychother. (1995) 23:325–34. doi: 10.1017/S135246580001643X

104. Aron A, Melinat E, Aron EN, Vallone RD, Bator RJ. The experimental generation of interpersonal closeness: a procedure and some preliminary findings. Pers Soc Psychol Bull. (1997) 23:363–77. doi: 10.1177/0146167297234003

105. Ludwig VU, Berry B, Cai JY, Chen NM, Crone DL, Platt ML. The impact of disclosing emotions on ratings of interpersonal closeness, warmth, competence, and leadership ability. Front Psychol. (2022) 13:989826. doi: 10.3389/fpsyg.2022.989826

106. Söderlund LL, Madson MB, Rubak S, Nilsen P. A systematic review of motivational interviewing training for general health care practitioners. Patient Educ Couns. (2011) 84:16–26. doi: 10.1016/j.pec.2010.06.025

107. Hardcastle SJ, Fortier M, Blake N, Hagger MS. Identifying content-based and relational techniques to change behaviour in motivational interviewing. Health Psychol Rev. (2017) 11:1–16. doi: 10.1080/17437199.2016.1190659

108. Loveys K, Hiko C, Sagar M, Zhang X, Broadbent E. “I felt her company”: a qualitative study on factors affecting closeness and emotional support seeking with an embodied conversational agent. Int J Hum Comput Stud. (2022) 160:102771. doi: 10.1016/j.ijhcs.2021.102771

109. Lucas GM, Gratch J, King A, Morency L-P. It’s only a computer: virtual humans increase willingness to disclose. Comput Human Behav. (2014) 37:94–100. doi: 10.1016/j.chb.2014.04.043

110. Pickard MD, Roster CA, Chen Y. Revealing sensitive information in personal interviews: is self-disclosure easier with humans or avatars and under what conditions? Comput Human Behav. (2016) 65:23–30. doi: 10.1016/j.chb.2016.08.004

111. Bylund CL, Makoul G. Empathic communication and gender in the physician–patient encounter. Patient Educ Couns. (2002) 48:207–16. doi: 10.1016/S0738-3991(02)00173-8

112. Bylund CL, Makoul G. Examining empathy in medical encounters: an observational study using the empathic communication coding system. Health Commun. (2005) 18:123–40. doi: 10.1207/s15327027hc1802_2

113. Lin S, Lin L, Hou C, Chen B, Li J, Ni S. Empathy-based communication framework for chatbots: a mental health chatbot application and evaluation. In: Proceedings of the 11th International Conference on Human-Agent Interaction. (2023). p. 264–72.

114. Yalçın ÖN. Empathy framework for embodied conversational agents. Cogn Syst Res. (2020) 59:123–32. doi: 10.1016/j.cogsys.2019.09.016

115. Carpenter A, Greene K. Social penetration theory. In: Berger CR, Roloff ME, Wilson SR, Dillard JP, Caughlin J, Solomon D, editors. The International Encyclopedia of Interpersonal Communication. 1st ed. Chichester; Malden, MA: John Wiley & Sons (2016). p. 1–5. doi: 10.1002/9781118540190.wbeic160

116. Lee Y-C, Yamashita N, Huang Y. Designing a chatbot as a mediator for promoting deep self-disclosure to a real mental health professional. Proc ACM Hum-Comput Interact. (2020) 4:1–27. doi: 10.1145/3392836

117. Boyd RL, Ashokkumar A, Seraj S, Pennebaker JW. The Development and Psychometric Properties of LIWC-22. Austin (TX): University of Texas at Austin. (2022). Vol. 10. p. 1–47.

118. Semin GR, Fiedler K. The linguistic category model, its bases, applications and range. Eur Rev Soc Psychol. (1991) 2:1–30. doi: 10.1080/14792779143000006

119. Semin GR, Görts CA, Nandram S, Semin-Goossens A. Cultural perspectives on the linguistic representation of emotion and emotion events. Cogn Emot. (2002) 16:11–28. doi: 10.1080/02699930143000112

120. Johnson-Grey KM, Boghrati R, Wakslak CJ, Dehghani M. Measuring abstract mind-sets through syntax: automating the linguistic category model. Soc Psychol Personal Sci. (2020) 11:217–25. doi: 10.1177/1948550619848004

121. Seih Y-T, Beier S, Pennebaker JW. Development and examination of the linguistic category model in a computerized text analysis method. J Lang Soc Psychol. (2017) 36:343–55. doi: 10.1177/0261927X16657855

122. Barak A, Gluck-Ofri O. Degree and reciprocity of self-disclosure in online forums. CyberPsychol Behav. (2007) 10:407–17. doi: 10.1089/cpb.2006.9938

123. Heather N, Hönekopp J. A revised edition of the readiness to change questionnaire [treatment version]. Addict Res Theory. (2008) 16:421–33. doi: 10.1080/16066350801900321

124. Prochaska JO, Velicer WF. The transtheoretical model of health behavior change. Am J Health Promot. (1997) 12:38–48. doi: 10.4278/0890-1171-12.1.38

125. Ferron M, Massa P. Transtheoretical model for designing technologies supporting an active lifestyle. In: Proceedings of the Biannual Conference of the Italian Chapter of SIGCHI. (2013). p. 1–8.

126. Zhang B, Kalampakorn S, Powwattana A, Sillabutra J, Liu G. A transtheoretical model-based online intervention to improve medication adherence for chinese adults newly diagnosed with type 2 diabetes: a mixed-method study. J Prim Care Commun Health. (2024) 15:21501319241263657. doi: 10.1177/21501319241263657

127. Smith M. The transtheoretical model, stages of change and motivational interviewing. In: Cavaiola AA, Smith M, editors. A Comprehensive Guide to Addiction Theory and Counseling Techniques. New York, NY: Routledge (2020). p. 148–60. Available online at: https://www.taylorfrancis.com/books/edit/10.4324/9780429286933/comprehensive-guide-addiction-theory-counseling-techniques-alan-cavaiola-margaret-smith?refId=768880ca-fcc8-44d1-a6bf-33d4eb16d0fb&context=ubx

128. Attrey L, Dua S, Kaushik R, Anand S, Agarwal A. Modeling the transtheoretical model for health behavior stage analysis: tool development and testing. In: Kumar A, Dembla D, Tinker S, Khan SB, editors. Handbook of Deep Learning Models for Healthcare Data Processing. Boca Raton: CRC Press (2025). p. 144–57. Available online at: www.taylorfrancis.com/chapters/edit/10.1201/9781003467281-10/modeling-transtheoretical-model-health-behavior-stage-analysis-liza-attrey-sunaina-dua-ruchi-kaushik-sarita-anand-aparna-agarwal

129. Nghaimesh MO, Abd Ali MBH. Efficacy of stage-matched intervention based on the transtheoretical model of behavior change in enhancing high school students’ decisional balance of digital gaming behavior: a randomized controlled trial. Cent Asian J Med Nat Sci. (2024) 5:221–9.

130. Rollnick S, Heather N, Gold R, Hall W. Development of a short “readiness to change” questionnaire for use in brief, opportunistic interventions among excessive drinkers. Br J Addict. (1992) 87:743–54. doi: 10.1111/j.1360-0443.1992.tb02720.x

131. Gayet-Ageron A, Rudaz S, Perneger T. Study design factors influencing patients’ willingness to participate in clinical research: a randomised vignette-based study. BMC Med Res Methodol. (2020) 20:1–8. doi: 10.1186/s12874-020-00979-z

132. Yang ZJ, McComas K, Gay G, Leonard JP, Dannenberg AJ, Dillon H. From information processing to behavioral intentions: exploring cancer patients’ motivations for clinical trial enrollment. Patient Educ Couns. (2010) 79:231–8. doi: 10.1016/j.pec.2009.08.010

133. Schröder J, Sautier L, Kriston L, Berger T, Meyer B, Späth C, et al. Development of a questionnaire measuring attitudes towards psychological online interventions–the apoi. J Affect Disord. (2015) 187:136–41. doi: 10.1016/j.jad.2015.08.044

134. Holm S. A simple sequentially rejective multiple test procedure. Scand J Stat. (1979) 6(2):65–70. Available online at: http://www.jstor.org/stable/4615733

135. Wobbrock JO, Findlater L, Gergle D, Higgins JJ. The aligned rank transform for nonparametric factorial analyses using only anova procedures. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. (2011). p. 143–6.

136. Cochran WG. Some methods for strengthening the common χ2 tests. Biometrics. (1954) 10:417–51. doi: 10.2307/3001616

137. Ahn SJ, Le AMT, Bailenson J. The effect of embodied experiences on self-other merging, attitude, and helping behavior. Media Psychol. (2013) 16:7–38. doi: 10.1080/15213269.2012.755877

138. Williams C, Rauwolf P, Boulter M, Parkinson JA. Closing the gap: how psychological distance influences willingness to engage in risky COVID behavior. Behav Sci. (2024) 14:449. doi: 10.3390/bs14060449

139. Liberman N, Trope Y. The psychology of transcending the here and now. Science. (2008) 322:1201–5. doi: 10.1126/science.1161958

140. Altman I, Taylor DA. Social Penetration: The Development of Interpersonal Relationships. Oxford: Holt, Rinehart & Winston (1973). Available online at: https://psycnet.apa.org/record/1973-28661-000

141. Tolstedt BE, Stokes JP. Self-disclosure, intimacy, and the depenetration process. J Pers Soc Psychol. (1984) 46:84–90. doi: 10.1037/0022-3514.46.1.84

142. Myers MW, Laurent SM, Hodges SD. Perspective taking instructions and self-other overlap: different motives for helping. Motiv Emot. (2014) 38:224–34. doi: 10.1007/s11031-013-9377-y

143. Pennebaker JW. Writing about emotional experiences as a therapeutic process. Psychol Sci. (1997) 8:162–6. doi: 10.1111/j.1467-9280.1997.tb00403.x

144. Pennebaker JW, Francis ME. Cognitive, emotional, and language processes in disclosure. Cogn Emot. (1996) 10:601–26. doi: 10.1080/026999396380079

145. Graf MC, Gaudiano BA, Geller PA. Written emotional disclosure: a controlled study of the benefits of expressive writing homework in outpatient psychotherapy. Psychother Res. (2008) 18:389–99. doi: 10.1080/10503300701691664

146. King LA. The health benefits of writing about life goals. Pers Soc Psychol Bull. (2001) 27:798–807. doi: 10.1177/0146167201277003

147. Konig A, Eonta A, Dyal SR, Vrana SR. Enhancing the benefits of written emotional disclosure through response training. Behav Ther. (2014) 45:344–57. doi: 10.1016/j.beth.2013.12.006

148. O’Connor DB, Hurling R, Hendrickx H, Osborne G, Hall J, Walklet E, et al. Effects of written emotional disclosure on implicit self-esteem and body image. Br J Health Psychol. (2011) 16:488–501. doi: 10.1348/135910710X523210

149. Frisina PG, Borod JC, Lepore SJ. A meta-analysis of the effects of written emotional disclosure on the health outcomes of clinical populations. J Nerv Ment Dis. (2004) 192:629–34. doi: 10.1097/01.nmd.0000138317.30764.63

150. Greenberg MA, Stone AA. Emotional disclosure about traumas and its relation to health: effects of previous disclosure and trauma severity. J Pers Soc Psychol. (1992) 63:75–84. doi: 10.1037/0022-3514.63.1.75

151. Radcliffe AM, Lumley MA, Kendall J, Stevenson JK, Beltran J. Written emotional disclosure: testing whether social disclosure matters. J Soc Clin Psychol. (2007) 26:362–84. doi: 10.1521/jscp.2007.26.3.362

152. Greenberg MA, Wortman CB, Stone AA. Emotional expression and physical health: revising traumatic memories or fostering self-regulation? J Pers Soc Psychol. (1996) 71:588–602. doi: 10.1037/0022-3514.71.3.588

153. Miles O, West R, Nadarzynski T. Health chatbots acceptability moderated by perceived stigma and severity: a cross-sectional survey. Digit Health. (2021) 7:20552076211063012. doi: 10.1177/20552076211063012

154. Corsini R. Role Playing in Psychotherapy. New York, NY: Routledge (2017). Available online at: https://www.taylorfrancis.com/books/mono/10.4324/9781351307208/role-playing-psychotherapy-raymond-corsini

155. Moreno JL. Who shall survive? A new approach to the problem of human interrelations (1934).

156. Moreno JL. The Essential Moreno: Writings on Psychodrama, Group Method, and Spontaneity. New York, NY: Springer Publishing Company (1987). Available online at: https://psycnet.apa.org/record/1988-97034-000

157. Matthews M, Gay G, Doherty G. Taking part: role-play in the design of therapeutic systems. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. (2014). p. 643–52.

158. Feretzakis G, Papaspyridis K, Gkoulalas-Divanis A, Verykios VS. Privacy-preserving techniques in generative AI and large language models: a narrative review. Information. (2024) 15:697. doi: 10.3390/info15110697

159. Chen Y, Esmaeilzadeh P. Generative AI in medical practice: in-depth exploration of privacy and security challenges. J Med Internet Res. (2024) 26:e53008. doi: 10.2196/53008

160. Cabrera J, Loyola MS, Magaña I, Rojas R. Ethical dilemmas, mental health, artificial intelligence, and LLM-based chatbots. In: International Work-Conference on Bioinformatics and Biomedical Engineering. Springer (2023). p. 313–26.

161. Luxton DD, Hudlicka E. Intelligent virtual agents in behavioral and mental healthcare: ethics and application considerations. In: Jotterand F, Ienca M, editors. Artificial Intelligence in Brain and Mental Health: Philosophical, Ethical & Policy Issues Advances in Neuroethics. Cham: Springer (2021). p. 41–55. doi: 10.1007/978-3-030-74188-4_4

162. Blease C, Rodman A. Generative artificial intelligence in mental healthcare: an ethical evaluation. Curr Treat Options Psychiatry. (2025) 12:5. doi: 10.1007/s40501-024-00340-x

163. Ayduk Ö, Kross E. From a distance: implications of spontaneous self-distancing for adaptive self-reflection. J Pers Soc Psychol. (2010) 98:809–29. doi: 10.1037/a0019205

164. Marangunić N, Granić A. Technology acceptance model: a literature review from 1986 to 2013. Universal Access Inf Soc. (2015) 14:81–95. doi: 10.1007/s10209-014-0348-1

165. Li L, Peng W, Rheu MM. Factors predicting intentions of adoption and continued use of artificial intelligence chatbots for mental health: examining the role of UTAUT model, stigma, privacy concerns, and artificial intelligence hesitancy. Telemed e-Health. (2024) 30:722–30. doi: 10.1089/tmj.2023.0313

166. Park G, Chung J, Lee S. Human vs. machine-like representation in chatbot mental health counseling: the serial mediation of psychological distance and trust on compliance intention. Curr Psychol. (2024) 43:4352–63. doi: 10.1007/s12144-023-04653-7

167. Barbosa E, Amendoeira M, Ferreira T, Teixeira AS, Pinto-Gouveia J, Salgado J. Immersion and distancing across the therapeutic process: relationship to symptoms and emotional arousal. Res Psychother Psychopathol Process Outcome. (2017) 20:258. doi: 10.4081/ripppo.2017.258

168. Lillywhite B, Wolbring G. Auditing the impact of artificial intelligence on the ability to have a good life: using well-being measures as a tool to investigate the views of undergraduate stem students. AI Soc. (2024) 39:1427–42. doi: 10.1007/s00146-022-01618-5

Keywords: artificial intelligence, chatbot, mental wellbeing, perspective-taking, disclosure, emotional expression, embodied conversational agents

Citation: You C, Ghosh R, Vilaro M, Venkatakrishnan R, Venkatakrishnan R, Maxim A, Peng X, Tamboli D and Lok B (2025) Alter egos alter engagement: perspective-taking can improve disclosure quantity and depth to AI chatbots in promoting mental wellbeing. Front. Digit. Health 7:1655860. doi: 10.3389/fdgth.2025.1655860

Received: 28 June 2025; Accepted: 15 August 2025;
Published: 10 September 2025.

Edited by:

Björn Wolfgang Schuller, Imperial College London, United Kingdom

Reviewed by:

Marcin Rza̧deczka, Marie Curie-Sklodowska University, Poland
Sonal Sharma, Central University of Gujarat, India

Copyright: © 2025 You, Ghosh, Vilaro, Venkatakrishnan, Venkatakrishnan, Maxim, Peng, Tamboli and Lok. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Christopher You, christopheryou@ufl.edu
