
ORIGINAL RESEARCH article

Front. Educ., 14 July 2025

Sec. Higher Education

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1617067

Investigating dimensions of instructor trust using the words of undergraduate STEM students


Kathy Zhang1, Julia C. Gill1, Tong Zhang1, Lia Crowley1, Juliette Bennie1, Henry Wagner1, Melanie Bauer1, David Hanauer2, Xinnian Chen3 and Mark J. Graham1*
  • 1STEM Program Evaluation and Research Lab (STEM-PERL), Department of Ecology and Evolutionary Biology, Yale University, New Haven, CT, United States
  • 2Department of English/Applied Linguistics, Indiana University of Pennsylvania, Indiana, PA, United States
  • 3Department of Physiology and Neurobiology, University of Connecticut, Storrs, CT, United States

Introduction: Recent work has shown that student trust in their instructor is a key moderator of STEM student buy-in to evidence-based teaching practices (EBTs), enhancing positive student outcomes such as performance, engagement, and persistence. Although trust in instructor has been previously operationalized in related settings, a systematic classification of how undergraduate STEM students perceive trustworthiness in their instructors remains to be developed. Moreover, previous operationalizations impose a structure that often includes distinct domains, such as cognitive and affective trust, that have yet to be empirically tested in the undergraduate STEM context.

Methods: To address this gap, we engage in a multi-step qualitative approach to unify existing definitions of trust from the literature and analyze structured interviews with 57 students enrolled in undergraduate STEM classes who were asked to describe a trusted instructor. Through thematic analysis, we propose that characteristics of a trustworthy instructor can be classified into three domains. We then assess the validity of the three-domain model both qualitatively and quantitatively. First, we examine student responses to determine how traits from different domains are mentioned together. Second, we use a process-model approach to instrument design that leverages our qualitative interview codebook to develop a survey that measures student trust. Finally, we perform an exploratory factor analysis on survey responses to quantitatively test the construct validity of our proposed three-domain trust model.

Results and discussion: We identified 28 instructor traits that students perceived as trustworthy, categorized into cognitive, affective, and relational domains. Within student responses, we found that there was a high degree of interconnectedness between traits in the cognitive and relational domains. When we assessed the construct validity of the three-factor model using survey responses, we found that a three-factor model did not adequately capture the underlying latent structure. Our findings align with recent calls to both closely examine long-held assumptions of trust dimensionality and to develop context-specific trust measurements. The work presented here can inform the development of a reliable measure of student trust within undergraduate STEM student environments and ultimately improve our understanding of how instructors can best leverage the effectiveness of EBTs for positive student learning outcomes.

1 Introduction

Two reports published 8 years apart, one in 2012 by the President's Council of Advisors on Science and Technology (PCAST) and one in 2020 by the National Science Board, both call for a modernization of STEM education to better retain students and strengthen the domestic science and technology workforce. The 2012 PCAST report found that only 40% of students who matriculate into higher education with the intent of pursuing a STEM degree persist to the end of their degree. The learning environment of introductory courses in the first 2 years of the STEM major is a critical factor in retaining these students (President's Council of Advisors on Science and Technology, 2012). Since then, national assessments have shown stagnant or declining STEM competencies among students and the general public (National Science Board, National Science Foundation, 2020; US Department of Education, 2018, 2024).

Evidence-based teaching practices (EBTs) such as student-centered active learning or discovery-based learning improve student achievement and persistence in STEM fields (Chasteen and Pollock, 2008; Gross et al., 2015; Freeman et al., 2014; Handelsman et al., 2007; Hanauer et al., 2017; Henderson and Dancy, 2009; Jensen et al., 2015; Reeves et al., 2023; Wieman, 2014). Yet, widespread adoption remains limited due to institutional barriers and student resistance (Brazeal et al., 2016; Brownell and Tanner, 2012; Finelli et al., 2018; Minhas et al., 2012; Nguyen et al., 2016; Patrick, 2020; Seidel and Tanner, 2013; Stains et al., 2018; Walker et al., 2008). Critically, instructors' experience of student resistance, which can manifest as lack of engagement or disruptive behavior, may contribute to high rates of instructors who revert to traditional lecturing after trying EBTs (Lake, 2001; Henderson et al., 2012; Seidel and Tanner, 2013; Nguyen et al., 2021). Thus, a better understanding of the social and cognitive factors underlying students' buy-in, or commitment, to the use of EBTs may improve adoption rates (Cavanagh et al., 2016; Corwin et al., 2015; Dolan, 2015; Wang et al., 2021). One factor that has emerged as an empirically significant moderator of student buy-in is trust in their instructor (Cavanagh et al., 2018; Wang et al., 2021).

Indeed, empirical studies have shown that strong personal connections between faculty and students can positively affect a variety of student outcomes (Mayhew et al., 2016), such as persistence in college (Guzzardo et al., 2021; Milem and Berger, 1997; Nora et al., 1996; Pascarella and Terenzini, 1979; Robinson et al., 2019; Schudde, 2019; Pike et al., 1997; Wilcox et al., 2005), attitudes toward learning (Christophel, 1990; McLure et al., 2022), motivation (Komarraju et al., 2010; Wentzel, 2016; Zhou et al., 2023), academic self-concept (Kim and Sax, 2014; Trinidad et al., 2024), self-efficacy (Ballen et al., 2017; Ferguson, 2021), engagement (Umbach and Wawrzynski, 2005; Snijders et al., 2020), performance (Roorda et al., 2011; Zhao and You, 2023), self-worth (Alt et al., 2022; Kuh, 1995; Trinidad et al., 2024), and interest and effort put toward a course (Fedesco et al., 2019). Students themselves report that closer relationships with faculty based on trust are critical for success in college STEM classrooms (Pedersen et al., 2022). Among these relational elements, trust has emerged as a key construct that not only underpins the quality of student–instructor relationships but also directly moderates student buy-in to evidence-based teaching practices (Cavanagh et al., 2018; Wang et al., 2021). Because buy-in has been identified as a critical mechanism for improving student engagement and persistence, especially in STEM, we focus our investigation on trust as a theoretically grounded and empirically supported factor within the broader construct of student–instructor connection. Despite the importance of positive student-teacher relationships for student success, how students develop a sense of trust in their instructor remains empirically understudied and may be undervalued by college STEM instructors (Beltrano et al., 2021; Christe, 2013; Felten et al., 2023; Hagenauer and Volet, 2014; Niedlich et al., 2021; Payne et al., 2022; Tierney, 2006).

The construct of trust has been widely studied across disciplines both from theoretical and empirical perspectives. For example, in an empirical study of romantic partnerships, Rempel et al. (1985) consider the development of trust as beginning with demonstrations of consistency and evolving based on shared values and goodwill. Revisiting this work, Camanto and Campbell (2025) found three key dimensions of trust in romantic relationships that reiterate Rempel et al.'s (1985) framework: predictability, dependability, and faith. Lewicki and Bunker (1996) offer an expanded theoretical framework to describe the development of trust in professional relationships. Initial calculus-based trust informed by self-interest grows into knowledge-based trust through familiarity. When two individuals identify with each other's shared values and goals, they progress to the deepest form of identification-based trust. While Lewicki and Bunker's (1996) framework describes the development of trust through different domains over time, McAllister's (1995) empirical study of workplace relationships suggests that different domains of trust, specifically cognitive and affective, develop simultaneously and independently from each other. The cognitive domain depends on a rational assessment of professional competence while the affective domain is rooted in an emotional bond. Indeed, Massey et al. (2019) argue that interpersonal trust is bidimensional in nature and consists of both affective and cognitive components, highlighting in their empirical study that affective and cognitive trust domains explain significant variance in one's perception of the quality of an interpersonal relationship. Lewis and Weigert (1985) present a theoretical description of trust as a collective social force that also considers the distinction between cognitive and emotional processes but treats trust as a generalized attitude toward an institution rather than in the context of a specific relationship. In another framing of organizational trust, Mayer et al. (1995) consider trust as unidimensional and provide a theoretical model that defines trust as an internal state of willingness of a trustor to be vulnerable to a trustee in the face of uncertainty, distinguishing this internal state from the trustee's trustworthiness. The decision to trust is based on the trustor's sense of the other party's trustworthiness, determined by the trustee's demonstration of ability, benevolence, and integrity. Conducting an empirical study on how trust is built in both hybrid and in-person work settings, Fischer et al. (2023) interestingly highlight the value of behavioral or relational trust, deeming authenticity and communication as trustworthy professional behaviors.

There is no one unified definition of trust, though there is some consensus in the literature that trust has at least two distinct dimensions: cognitive and affective. Despite this consensus, the literature still lacks a consistent, empirically grounded distinction between the cognitive and affective domains (Legood et al., 2023). In the context of higher education, there is even less consensus on the definition of trust between students and instructors (Beltrano et al., 2021; Christe, 2013; Felten et al., 2023; Hagenauer and Volet, 2014; Niedlich et al., 2021; Payne et al., 2022; Tierney, 2006). Working in the K-12 context, Bryk and Schneider (2002) put forth a relational trust framework based on empirical research in Chicago public schools to describe the role of trust as a collective property of the school environment in improving student outcomes and organizational effectiveness. In this framework, trust is built through the quality of social exchanges (measured by benevolence, competence, integrity, and respect) between teachers, students, administrators, and parents. Building upon this framework, Tschannen-Moran and Hoy (2000) broaden the scope of Bryk and Schneider's work, adding more focus on school leadership, policies, and climate. Additionally, Tschannen-Moran and Hoy take an empirical approach to their synthesis of literature by focusing on measurable characteristics that could be used to develop a quantitative tool. Their resulting Omnibus Trust Scale measures five dimensions of trust: benevolence, reliability, competence, honesty, and openness. Using Tschannen-Moran and Hoy's framework to theoretically ground their study, Holzer and Daumiller (2025) use analyses of qualitative interviews with students and teachers in ninth-grade classes to suggest that teachers' willingness to be vulnerable and to confide personal information in their students is also a critical component of trust. Although developed in the context of K-12 education, Tschannen-Moran and Hoy's framework of trust has been used as a reference point for investigating trust in higher education.

Models of trust in higher education marketing have examined the relationship between students' trust and their loyalty toward their institution. Surveys of students and alumni revealed that trust in the institution included five dimensions parallel to those identified in Tschannen-Moran and Hoy's framework: expertise, integrity, congeniality, sincerity, and openness (Ghosh et al., 2001). Sampaio et al. (2012) take Ghosh et al.'s model a step further through a quantitative survey with business students, distinguishing student trust in faculty as a critical component of trust in their institution. Indeed, conceptual models of retention suggest that trust depends on the success of relational exchanges between students and faculty (Dzimińska et al., 2018; Schertzer and Schertzer, 2004). These are primarily conceptual or theoretical papers, offering models rather than new empirical data. The pedagogical impact of student-faculty trust as an important form of social capital is illustrated by Ream et al. (2014), who conducted an empirical mixed-methods study of STEM students in a research program. Using survey data and qualitative interviews, they found that STEM students who had greater trust in their mentor during a summer research program reported greater motivation and had higher career expectations. Building upon Mayer et al.'s (1995) and Tschannen-Moran and Hoy's (2000) frameworks of trust, Ream et al. (2014) estimated students' perceptions of trustworthiness through surveys measuring competence, benevolence, and integrity. Importantly, research students in this study interacted with their faculty mentor outside of a formal classroom setting. Similarly, past empirical studies of student-faculty relationships demonstrate the importance of informal interactions with faculty outside of class for student satisfaction, engagement, and retention (Mattanah et al., 2024; Wong and Chapman, 2023; Pascarella and Terenzini, 2005; Tinto, 2015; Wilcox et al., 2005).

Whether students choose to interact with faculty outside of class is based on perceptions of approachability and support, informed by behavioral cues during class (Lamport, 1993; Wilson et al., 1974). Based on surveys and classroom observation, Lamport (1993) found that students' willingness to engage in informal interactions with faculty depends less on instructors' age, gender, academic rank, or research accolades than on their interpersonal sociopsychological characteristics, such as friendliness, understanding, and authenticity. Similarly, student surveys collected by Schussler et al. (2021) found that student ratings of instructor support were influenced by student perceptions of care and approachability as well as the instructor's personality. Moreover, an empirical survey study (Denzine and Pulos, 2000) found that in-class behaviors demonstrating care and concern for the student (such as asking personal questions) explained significantly more variance in measures of approachability compared to behaviors that demonstrated conscientiousness (such as starting class on time). Empirical evidence gathered from surveys shows that students who report greater trust in the instructor are more likely to engage in out-of-classroom contact with their instructors (Faranda, 2015; Jaasma and Koper, 1999); thus, faculty approachability based on demonstrations of care may be a significant factor contributing to student trust. Relatedly, other empirical studies have found that when instructors bring personal elements into their instruction, such as showing vulnerability through acts of self-disclosure (Johnson and LaBelle, 2017; LaBelle et al., 2023) or teacher immediacy (Andersen, 1979; Liu, 2021), students report higher relational satisfaction (Johnson and LaBelle, 2017; LaBelle et al., 2023), increased motivation, and more positive attitudes toward learning (Christophel, 1990; Frymier, 1994; Frymier et al., 2019). While these studies do not explicitly reference the development of trust, the broader literature suggests that trust develops over the course of repeated interactions between individuals. Thus, the factors that lead students to have a positive view of their interactions with faculty both inside and out-of-class likely play a significant role in the development of trust. In a reflective, qualitative study that did explicitly explore the development of trust, Meinking and Hall (2024) describe how students emphasized the importance of relational trust and the willingness of both students and instructors to be vulnerable with one another as key factors for building a trusting learning environment.

The treatment of the teacher-student relationship as an interpersonal one with significant relational and emotional components has been widely adopted in the instructional communication literature (Hess and Mazer, 2017). In 2018, Cavanagh et al. adapted and validated the use of Clark and Lemay's (2010) close interpersonal relationship framework to define student trust in their STEM instructor. Clark and Lemay's work highlights the positive impact of mutual responsiveness and communal norms, where individuals act for each other's benefit without contingency, for long-term, intimate relationships. Cavanagh et al. (2018) adapt this theory to model students' responsiveness to their instructor's use of EBTs, arising from trust that their instructor is acting for their benefit. The decision to trust is based on the extent to which students believe their instructor understands, accepts, and cares about them. This operationalization of trust was further validated in Wang et al.'s (2021) study of the relationship between student trust and buy-in in 14 large-enrollment STEM courses. However, a critique of the instructional communication literature has been that too much focus has been placed on the interpersonal aspects of the student-teacher relationship without consideration for cognitive factors (Hess and Mazer, 2017). Indeed, when students in an online learning environment were surveyed about instructor trustworthiness, high-trusting students cited the instructor's professional credibility and expertise in addition to interpersonal traits related to care, acceptance, and understanding (Hai-Jew, 2007). Similarly, a conceptual model for student trust developed through interviews with college faculty included a domain related to cognitive factors, such as instructors' knowledge, skill, and competence, in addition to affective, identity, and value-based domains (Felten et al., 2023). These studies suggest that both students and faculty believe that trust in instructor encompasses both affective and cognitive domains and that the conceptualization of student-instructor trust solely through the lens of a close personal relationship is insufficient.

However, the distinction between different dimensions of trustworthiness has also been debated. McEvily and Tortoriello's (2011) and Whipple et al.'s (2013) reviews of the measurement of trustworthiness argue there is weak evidence to support the construct validity of separate dimensions. A literature review conducted by Niedlich et al. (2021) similarly highlights the lack of conceptual clarity and inconsistent application of existing theoretical frameworks to define trust and its dimensionality across studies specifically within education contexts. Moreover, Niedlich et al. (2021) note that while existing research often depends on the use of multidimensional trust scales, the relationships between dimensions is rarely examined. Concerns about the construct validity of trust dimensions have also been raised in other domains. For instance, Bradford et al. (2022), in a mixed-methods study of trust in police among immigrant communities in Australia, emphasize the contextual and interpretive variability in how trustworthiness is perceived and measured—raising similar questions about the transferability of pre-defined trust constructs. Likewise, Nielsen and Nielsen (2023), working from an ethnomethodological and micro-sociological perspective, argue that trustworthiness emerges in the details of social interaction, challenging the assumption that it can be cleanly isolated and captured through conventional self-report measures. Together, these studies align with our argument that trust, as perceived by undergraduate STEM students, may not be fully captured by dimensions derived from other top-down theoretical models.

To empirically test the construct validity of trust in higher education, Di Battista et al. (2020, 2021) sought to determine if students could themselves consistently differentiate between instructor characteristics related to two dimensions of trust often used in education contexts: competence and benevolence. In a quantitative study, Di Battista et al. (2021) found that manipulating students' perceptions of an instructor's competence significantly affected their subsequent judgment of benevolence, and vice versa. In a qualitative study, they further found that when students were asked to list characteristics associated with a benevolent or competent instructor, students frequently used the same words to describe both dimensions and used words that were not aligned with theoretical definitions (Di Battista et al., 2020). These findings affirm the argument that theorized sub-constructs of trust and the relationships between them may be highly dependent on institutional context or overlap entirely when empirically tested (PytlikZillig and Kimbrough, 2016). The lack of empirical studies of trust-dimensionality in higher STEM education calls for a more thorough examination of how well theorized trust dimensions drawn from organizational, social, and educational psychology frameworks or from K-12 contexts represent student perceptions in this specific context.

In the current study, we therefore seek to address the following research question: are college STEM students' perceptions of instructor trustworthiness accurately captured by previously theorized sub-constructs of trust? Based on research evidence discussed above, we hypothesize that a simple two- or three-domain model may not capture the rich dimensionality of student descriptions of trustworthiness. To test this hypothesis, we first employ a multi-step qualitative approach that gives students the opportunity to describe trusted instructors in their own words. To the best of our knowledge, such a “bottom-up” approach has yet to be applied in empirical studies of American college STEM students' trust in their instructors (Di Battista et al., 2020). Di Battista et al.'s qualitative study (2020) was conducted with a group of 125 psychology students in a single course at an Italian institution. Previous studies of American STEM undergraduate student trust have been limited to research faculty mentorship (Ream et al., 2014), faculty perceptions (Felten et al., 2023; Bayraktar et al., 2025), small classroom settings (Meinking and Hall, 2024) or to a close personal relationship framing of the student-instructor relationship (Cavanagh et al., 2018; Wang et al., 2021). Therefore, deepening our understanding of trust from the perspective of students themselves is a key step toward advancing student experiences in STEM classrooms.

To prioritize empirical model testing, we chose to follow a defined “process model” approach that leverages qualitative data for instrument design (Chatterji, 2003). First, we reviewed literature across education, psychology, and management to identify existing trust constructs. We then conducted structured interviews with 57 STEM undergraduates, asking them to describe a trusted instructor. Using a priori codes from the literature and inductively generating new ones, we developed a codebook that categorized traits into conceptual groupings. These categories were then used to draft survey items and test dimensionality. The purpose of the qualitative work was therefore twofold: (1) to propose a preliminary model of instructor trustworthiness grounded in student descriptions and (2) to draft an instrument for empirical testing.

In this manuscript, we distinguish between trust and trustworthiness. Drawing on Mayer et al. (1995), trust refers to the psychological state of the trustor based on a decision to be vulnerable to the actions of the trustee. Trustworthiness, by contrast, refers to the characteristics or behaviors of the trustee, such as competence, care, or fairness, that lead the trustor to view them as deserving of trust. Our study centers on students' descriptions of instructor trustworthiness and uses these perceptions as a window into how trust develops. Although we use the term “trust” at times for brevity, our analyses focus on the observable antecedents to trust as experienced and articulated by students.

Our qualitative analysis revealed that trustworthy instructor traits clustered into cognitive, affective, and relational domains, with notable overlap between cognitive and relational elements. We piloted a survey based on the codebook in a large-enrollment STEM course. A forced three-factor exploratory factor analysis (EFA) yielded poor-to-acceptable model fit while higher-order models performed better. Moreover, items did not load cleanly into the predefined domains, indicating that student conceptions of trustworthiness may not align neatly with previously theorized models. These findings suggest a more nuanced understanding of trust is needed to improve student buy-in to evidence-based practices and, ultimately, support retention in STEM fields (Cavanagh et al., 2018; Graham et al., 2013; Wang et al., 2021).

2 Materials and methods

2.1 Process model approach to instrument design

In this study, we apply an iterative process model for instrument design to develop a codebook of trustworthy instructor characteristics (Chatterji, 2003; Chatterji et al., 2002; Graham et al., 2009). The process has four phases, depicted in Figure 1. In phase 1, we began by defining the assessment context, including the constructs and population that will be targeted for measurement. In this case, the domain of interest was defined as the constructs underlying trust for undergraduate STEM students. In phase 2, we specified the domain in terms of action-oriented and observable indicators to facilitate instrument construction in phase 3. To do so, we conducted a literature review and held structured interviews with current undergraduate STEM students. The work of phase 3 then focused on converting the specified behaviors and characteristics into rating items for a survey. Finally, in phase 4, we conducted iterative rounds of validation and revision, including content validation of the items and a pilot test of the instrument. The process model approach used here is based on recommendations for test development grounded in psychometric modeling described in the Standards for Educational and Psychological Testing (American Educational Research Association, 1999). Instruments developed using this model typically achieve desired reliability and a concise factor structure within fewer rounds of empirical testing (Chatterji et al., 2002; Graham et al., 2009). We apply all four phases of the process model in this work, focusing primarily on domain specification to clarify and validate the constructs underlying student trust in their instructor. Importantly, while we present findings from a pilot study, this paper does not address phase 4 in full. In the process model, phase 4 typically includes exploratory and confirmatory studies with large independent data sets. Overall, this study followed a sequential exploratory mixed methods design, in which qualitative data collection and analysis preceded and directly informed the quantitative phase. The qualitative phase involved structured interviews to identify traits students associate with instructor trustworthiness. These traits were used to develop a survey instrument, which was then pilot tested using exploratory factor analysis to examine the dimensional structure of trust.

Figure 1
Flowchart depicting a multi-phase process for developing an assessment tool. Phase 1 involves defining the domain of interest, focusing on undergraduate STEM students and measuring trust in instructors. Phase 2 specifies the domain as a codebook via literature review, structured interviews, and coding. Phase 3 drafts the assessment tool based on the codebook, with revisions. Phase 4 involves content validation and pilot testing the draft instrument, followed by empirical validation through exploratory studies on two datasets. Arrows indicate process flow and revision points across the phases.

Figure 1. Phases in the development of a codebook of instructor behaviors and characteristics that contribute to the development of student trust and validation of trust measurement instrument derived from the codebook. Dashed lines depict revisions made to the instrument following rounds of validation. Boxes in gray (“Empirical Validation I” and “Empirical Validation II”) represent steps of the process model approach not addressed in this paper.

2.2 Process model phase 1

We defined the domain of interest as the constructs underlying the latent variable "trust" and defined the assessment context as undergraduate STEM classrooms in the United States.

2.3 Process model phase 2

2.3.1 Literature review

To identify actions, characteristics, and other related variables underlying descriptions of trusted individuals, the research team conducted an exploratory literature search to identify key dimensions previously used to operationalize the latent variable, “trust,” across multiple disciplinary contexts. We began our search by first expanding upon theoretical frameworks used to measure trust in schools. These included the close personal relationship framework adapted by Cavanagh et al. (2018) and the five dimensions of trust highlighted by Tschannen-Moran and Hoy (2000). The research team identified potential new sources via keyword searches of the following online databases: JSTOR, ProQuest, EBSCOHost, and Google Scholar. Example keywords used in the search included: “trust in schools,” “trust in organizations,” “trust between superior and subordinates,” “trust among colleagues,” “trust in leaders,” and “trust in teachers.” Searches were conducted between Spring 2021 and 2022 and yielded about 100 research articles published between 1967 and 2022, spanning psychology, education, and organizational studies. Articles were included based on whether the authors provided a clear operational definition or framework of trust with specific domains, or clear descriptions of behaviors and characteristics associated with trustworthy individuals. Articles that did not involve the study of human behavior or that did not provide a working definition of trust were excluded.

In reviewing each article, research team members recorded the dimensions and individual characteristics used to operationalize and describe trustworthiness. Each dimension or characteristic was recorded within a preliminary codebook and accompanied by any available definitions, behaviors, or sample items included in the original article. Definitions and examples of dimensions or characteristics not explicitly tied to education settings were contextualized to the student-instructor relationship based on definitions provided in the original source material. Our focus was to clarify existing trust dimensions with descriptive language that placed actions and characteristics into the specified domain context. The literature review concluded once the research team agreed that saturation had been achieved, that is, when few new terms appeared in each successive article. The full literature review codebook is presented in Supplementary Table 1.

2.3.2 Interview participants and procedures

Next, we sought to obtain the perspectives of students as key stakeholders in the study of instructor trust. To do so, we recruited undergraduate students enrolled in STEM classes (N = 57) from one large public research university, one mid-size private research university, and one mid-size public teaching college to participate in a study about their relationships with college instructors. Students were first recruited via convenience sampling by undergraduate assistants on the research team. Recruitment was then formalized through posters, web posts, and emails to students with the incentive of $10 compensation.

Thirty participants identified as students of color (52.6%). Of those 30, 12 identified as Black (21.1%), 10 identified as Hispanic or Latino (17.5%), three as more than one group (5.3%), and five did not identify their race/ethnicity (8.8%). Thirty-eight participants identified as women (66.7%) and 19 identified as men (33.3%). Most students were in their first and second years (56.2%) while the remainder were in their third and fourth year (43.8%) of college. Most students majored in STEM fields (68.4%) with some from the social sciences (21.1%), humanities (7%), and other fields (3.5%). Seventeen students (29.8%) were first-generation college students. See Table 1 for all interview participant demographic characteristics.

Table 1

Table 1. Demographic characteristics of student interview participants (N = 57).

Upon volunteering to participate in the study, students were invited to a 15-min Zoom interview with a member of the research team. The interview followed a two-part structure. In the first part, a priming task, participants were asked to reflect on a past college instructor they trusted and to generate words describing the instructor's traits. Using Google Slides, participants were asked to place each trait on a bullseye graphic based on how important they believed each trait was to their perception of trustworthiness (e.g., if a student believed that an instructor's "punctuality" was the foremost reason for trust, the student would place "punctuality" in the center of the bullseye; see Figure 2 for an example). In the second part of the task, participants were asked to explain why the characteristics they chose were perceived as being trustworthy and how those characteristics were demonstrated by the instructor. Participants were encouraged to provide anecdotal examples of how their instructor had displayed the traits they had chosen. The purpose of the priming task was to prime the participant to reflect on a past trusted instructor in preparation for discussing their experiences in depth during the "free response task" (see Figure 2 for a schematic overview of the interview procedure). In the current study, we focus our analysis on responses during the free response task. A second manuscript in progress (Chen et al., in review) provides an analysis of the responses to the priming task. Interviewers (consisting of two full-time researchers and three undergraduate research assistants) only asked follow-up questions if participants' responses were unclear. Each interview was securely recorded on Zoom. This project was granted exempt status from each institution's Institutional Review Board Human Subjects Committee, as it examined standard educational practices.

Figure 2
Diagram depicting two tasks. Panel A shows a priming task with instructions to identify traits of a trusted college instructor and assign their importance on a bullseye chart, followed by a free response task to describe how traits are demonstrated. Panel B displays a completed bullseye with traits such as “organized,” “knowledgeable,” “understanding,” and “empathetic” distributed across circles based on importance.

Figure 2. (A) Schematic overview of the two-part student interview procedure. In the first priming task, students were prompted to recall a past trusted instructor, identify characteristics of that instructor they perceived as being trustworthy and put them in boxes. Students then placed traits in order of perceived importance by moving boxes onto a “bullseye” image. In the second free response task, students were asked to expand upon the characteristics they listed in the priming task. (B) Example of a bullseye completed by a student participant in the priming task.

2.3.3 Thematic coding of interview free responses

We adhered to the established qualitative methodology of thematic analysis to identify emergent themes in students' free-response interview data (Braun and Clarke, 2006). First, three members of the research team (one full-time researcher and two undergraduate research assistants) familiarized themselves with the data by reading through all student responses and generating an initial list of themes. Next, we engaged in a directed content analysis approach (Hsieh and Shannon, 2005), integrating both deductive and inductive coding, to systematically code individual student responses. All interview responses were uploaded to NVivo, which enabled us to electronically code and manage data and ideas (Hilal and Alabri, 2013). Research team members systematically identified all traits associated with the trusted instructor by considering student responses sentence by sentence, contextualizing each sentence within the greater paragraph. Using deductive coding, constructs from the literature review and initial list of themes were used as an a priori list of codes to label students' responses. When student responses did not readily align with existing codes, we used inductive coding to generate new codes that more accurately captured emerging constructs. This iterative process ensured that our coding scheme remained grounded in prior research while allowing flexibility to accommodate novel insights from the data.

While our methodological approach incorporated both deductive and inductive elements, we did not adopt a fully inductive Grounded Theory method (Glaser and Strauss, 2017). Instead, we followed an abductive approach informed by Chatterji's (2003) process model, using existing frameworks to guide initial coding while allowing new codes to emerge from student responses. We recognize that this hybrid strategy limits the possibility of generating an entirely novel theory of instructor trust. However, it was intentionally chosen to balance theoretical grounding with openness to context-specific constructs, as our goal was to develop a preliminary instrument aligned with both empirical data and existing conceptualizations of trust.

Once an initial codebook was generated, two members of the research team used the codebook to code all student responses independently. After the two raters coded all responses, they met to discuss their independent analyses. To ensure consistency and validity, three rounds of intercoder reliability checks were conducted. When coding disagreements arose between the two coders, the coders and a senior investigator would discuss them and assign a final code after consensus was reached. The kappa value of the first intercoder reliability check was 0.64. The kappa value increased to 0.80 and finally to 0.85 after disagreements were resolved during the second and third rounds of intercoder reliability checks. The final kappa value indicates strong agreement between coders based on the codebook. Once agreement was reached on the codebook structure, two coders continued to complete the coding of all student interview free responses. At the completion of coding, NVivo's Coding Query functionality was used to calculate code frequencies. Even if a construct was mentioned more than once by a participant, we coded it a maximum of one time per response.
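For readers who wish to reproduce this style of reliability check, the sketch below shows how a single round of intercoder agreement could be computed with Cohen's kappa. The toy labels and variable names are ours for illustration only; the study's coding and reliability checks were conducted in NVivo.

```python
# Illustrative sketch of an intercoder reliability check using Cohen's kappa.
# The two label lists are hypothetical; in practice each entry would be the
# code a rater assigned to the same interview segment.
from sklearn.metrics import cohen_kappa_score

coder_1 = ["caring", "supportive", "flexible", "caring", "knowledgeable"]
coder_2 = ["caring", "supportive", "supportive", "caring", "knowledgeable"]

kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Cohen's kappa: {kappa:.2f}")  # values of 0.80 or above indicate strong agreement
```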

Finally, we searched for themes in the data by grouping codes into categories based on existing theoretical frameworks of trust identified during the literature review. Specifically, we had identified a consensus in the literature that trust broadly encompasses at least two domains: affective and cognitive. Additionally, previous research on undergraduate STEM students defined trust using a close personal relationship framework. Thus, we included a third, relational trust domain. Members of the research team first grouped codes into these broad domains. These themes were then iteratively refined through discussion among the research team. Where necessary, major themes were divided into sub-categories to ensure that the richness of our interview data was accurately represented. The final codebook contains 28 individual codes grouped into three major themes: affective, cognitive, and relational trust. The cognitive domain comprises six sub-categories, the relational domain comprises five sub-categories, and it was not necessary to split the affective domain into smaller sub-categories.

2.3.4 Network analysis of code frequencies

To examine patterns in how students associated different instructor traits with trust, we constructed a co-occurrence network diagram based on qualitative interview responses. Each node in the network represents a unique interview code (trait) mentioned by students, and edges represent instances where two traits were co-mentioned in the same interview. The weight of each edge reflects the frequency of co-occurrence across all participants. To further analyze network structure, for each node, we calculated degree centrality (the number of connections a trait had) and betweenness centrality (how often a trait acted as a bridge between others). At the domain level, we computed the average node degree and average betweenness centrality, as well as the intra-domain edge density, defined as the ratio of actual to possible connections among traits within the same domain. Finally, we quantified inter-domain edge frequencies to assess the extent to which traits from different trust domains co-occurred. These analyses allowed us to identify not only which traits were most central to students' conceptualizations of trust, but also how traits within and across trust domains were structurally interconnected.
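The sketch below illustrates one way these network statistics could be computed with the networkx library. The interview data and domain assignments are hypothetical stand-ins, and this is not the authors' analysis code; it simply demonstrates the co-occurrence, centrality, and density calculations described above.

```python
# Illustrative sketch: trait co-occurrence network from coded interviews.
from itertools import combinations
import networkx as nx

# Hypothetical data: the set of codes mentioned in each interview.
interviews = [
    {"caring", "supportive", "knowledgeable"},
    {"caring", "flexible"},
    {"supportive", "knowledgeable", "flexible", "caring"},
]
domain = {"caring": "relational", "flexible": "cognitive",
          "supportive": "cognitive", "knowledgeable": "cognitive"}

# Each code counts at most once per interview (sets deduplicate mentions);
# edge weights count how many interviews co-mention a pair of traits.
G = nx.Graph()
for codes in interviews:
    for u, v in combinations(sorted(codes), 2):
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1
        else:
            G.add_edge(u, v, weight=1)

degree = dict(G.degree())                   # degree centrality per trait
betweenness = nx.betweenness_centrality(G)  # bridging role of each trait

# Intra-domain edge density: actual vs. possible edges within one domain.
cognitive_traits = [n for n in G if domain[n] == "cognitive"]
intra_density = nx.density(G.subgraph(cognitive_traits))

# Inter-domain edge frequency: total weight of edges spanning two domains.
inter_weight = sum(d["weight"] for u, v, d in G.edges(data=True)
                   if domain[u] != domain[v])
print(degree, betweenness, intra_density, inter_weight)
```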

2.4 Process model phase 3

2.4.1 Survey item writing and content validation

According to the process model approach to instrument design, the interview codebook was used to draft items for an assessment tool measuring students' trust in their instructor. At least one survey item was drafted for each unique code. To draft survey items, three senior members of the research team independently wrote items for all codes. In some cases, multiple items were written for the same code to ensure an accurate representation of the behaviors or traits encompassed within the code. Once completed, the researchers convened to discuss the drafted items. When items written for the same code differed from each other, the researchers reviewed the contextual definitions of the code, consulted with senior investigators, and edited the items until consensus was reached. To content validate survey items, three currently enrolled undergraduate STEM students were asked to provide feedback on each item. Each student provided an interpretation of what each item was asking for, highlighted items that were unclear or ambiguous, and provided feedback on how well the items aligned with their experiences as STEM students. Items were revised according to their feedback. The final survey contained 38 items and is provided on the first page of the Supplementary material.

2.5 Process model phase 4

2.5.1 Survey participants and procedures

As a pilot study, the survey was distributed in one STEM classroom at a large public research university. Students received an e-mail from their instructor inviting them to participate in an online survey administered with Qualtrics survey software. Of the 252 students who received the survey, 210, or 83%, completed the survey in its entirety. Of the participants, 58.6% identified as female, 14.8% identified as male, 0.5% identified as non-binary and 26.2% declined to provide their gender identity. Most students were in their second year (54.3%), with 41.4% of students in their third and fourth years and only 0.9% of students in their first year. Most participants self-identified as White (58.6%), followed by Asian or Pacific Islander (15.7%), Hispanic or Latino (8.1%), multiple ethnicities (6.7%), and Black or African American (6.2%). Another 4.3% of participants declined to provide information regarding their race or ethnicity. Overall, 28.1% of participants were first-generation college students. Almost all students (96.2%) majored in STEM fields; 2.4% majored in social sciences and 1.4% were undeclared. See Table 2 for all survey participant demographic characteristics. This project was granted exempt status from each institution's Institutional Review Board Human Subjects Committee, as it examined standard educational practices.

Table 2

Table 2. Demographic characteristics of survey participants (N = 210).

2.5.2 Psychometric analysis of survey

Statistical analyses for the pilot study were conducted to investigate the psychometric properties of the survey derived from the interview codebook. Based on thematic analysis of the codebook, we hypothesized that a three-factor solution would define the dimensions of the survey (affective, cognitive, and relational trust). To evaluate this hypothesis, we conducted a maximum-likelihood factor analysis with promax rotation. Sampling adequacy was evaluated using a Kaiser-Meyer-Olkin analysis and suitability for factor analysis was evaluated using Bartlett's test. We evaluated model fit with the chi-square test of model fit, comparative fit index (CFI; Bentler, 1990), Tucker-Lewis Index (TLI; Tucker and Lewis, 1973), normed-fit index (NFI; Bentler and Bonett, 1980) and root mean square error of approximation (RMSEA). Finally, we computed a factor correlation matrix and examined internal consistency using Cronbach's α for all survey items and for each component factor.
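As a rough sketch of this analysis pipeline, the code below shows how the KMO statistic, Bartlett's test, a forced three-factor maximum-likelihood EFA with promax rotation, and Cronbach's α could be computed in Python with the factor_analyzer package. The file name and DataFrame layout (one column per survey item) are hypothetical, and fit indices such as CFI, TLI, NFI, and RMSEA would typically be obtained from dedicated modeling software rather than this package.

```python
# Illustrative sketch (not the authors' analysis script) of the pilot
# psychometric analysis: sampling adequacy, suitability testing, a forced
# three-factor EFA, and internal consistency.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo)

# Hypothetical data file: one row per respondent, one column per item.
df = pd.read_csv("trust_survey_responses.csv")

kmo_per_item, kmo_overall = calculate_kmo(df)             # sampling adequacy
chi_square, p_value = calculate_bartlett_sphericity(df)   # suitability

# Forced three-factor maximum-likelihood EFA with promax rotation.
efa = FactorAnalyzer(n_factors=3, method="ml", rotation="promax")
efa.fit(df)
loadings = pd.DataFrame(efa.loadings_, index=df.columns)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

print(kmo_overall, p_value, cronbach_alpha(df))
print(loadings.round(2))
```

In practice, α would also be computed separately for the items assigned to each factor, mirroring the per-factor internal consistency checks described above.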

3 Results

To understand how students conceptualize trust in instructors in their own words, we conducted structured interviews with 57 currently enrolled undergraduate STEM students at three institutions. They were asked to describe characteristics of a past trusted instructor, including examples of how the instructor had demonstrated these characteristics (see Section 2 for details). We first performed qualitative analyses to determine the dimensional organization of traits students used to describe trusted instructors, resulting in an interview codebook that proposes a three-dimensional structure to the development of trust. Next, we used qualitative and quantitative methods to understand the relationships between proposed dimensions of trust. We first sought to determine whether the proposed dimensions of trust were highly interrelated or if they remained distinct by examining co-occurrences of trait mentions in student responses. Finally, we developed a survey based on the qualitative interview codebook and used it to empirically test the construct validity of the proposed dimensions of trust.

3.1 Emergent dimensions of trust based on student interviews

Through content analysis of open-ended interview responses, our findings reveal 28 individual codes representing instructor characteristics perceived by students as demonstrating trustworthiness (see Section 2 for details). Because students were asked to provide examples of how instructors demonstrated these characteristics, we extracted observable contextual definitions for each code and included them in our codebook. Through thematic analysis, we organized individual codes into three major dimensions: affective, cognitive, and relational trust (Figure 3; see Section 2 for details). The major coding categories were then further divided into subcategories that grouped together similar individual codes as needed. In our qualitative codebook of trust, readers can find the three major coding categories, subcategories, individual code definitions, and accompanying student examples for each code (Table 3).

Figure 3
Diagram showing three branches of trust: Relational Trust, Cognitive Trust, and Affective Trust. Relational Trust focuses on acceptance, caring, interpersonal bonds, open communication, and understanding. Cognitive Trust emphasizes academic engagement, accommodating students, professional communication, responsiveness, subject matter competence, and supportiveness. Affective Trust highlights being friendly, funny, kind, and having a positive attitude.

Figure 3. Schematic of major code categories and their subcategories, representing the three dimensions of trust captured by qualitative analysis of student interview responses.

Table 3

Table 3. Qualitative codebook emerging from student interviews.

As described in “Section 2,” our coding and thematic analysis were informed by a broad literature review, which yielded 50 distinct characteristics that have been previously used to operationalize the latent variable “trust” or have been found to be statistically strongly associated with trust. Our search included more than 100 review articles, experimental studies, and qualitative analyses across a wide array of fields (see Supplementary Table 1 for the full literature review codebook). While our review was comparatively limited in scope relative to the entire body of literature on “trust,” we found that many operational definitions of trust included two sub-constructs: cognitive and affective trust. Thus, we sought to categorize interview codes into these two domains, using the existing literature to guide our categorization of traits related either to an instructor's professional capabilities or an instructor's ability to elicit positive emotions from their students, respectively. Additionally, in previous studies of undergraduate STEM student trust, trust was operationally defined using a close personal relationship framework encompassing care, acceptance, and understanding as sub-constructs (Clark and Lemay, 2010; Cavanagh et al., 2018; Wang et al., 2021). We therefore opted to include a third, relational trust domain to capture instructor traits related to these and other constructs that could be associated with developing and maintaining a close personal relationship with students. In the following, we describe each of the three domains—cognitive, relational, and affective trust—in more detail, providing contextual information about each domain and a rationale for the inclusion of individual codes within specific domains.

3.1.1 Cognitive trust

In our literature review, we found that cognitive trust is causally driven and based on a knowledge-based evaluation of a trustee's ability to fulfill an obligation (Dowell et al., 2015; Johnson and Grayson, 2005; Lewis and Weigert, 1985; Rempel et al., 1985). In this framing of trust, the trustor holds certain expectations of the trustee, based on a promise the trustee made to the trustor. Traits that were often found to be associated with the cognitive domain in a review of the literature included “competence,” “reliability,” “consistency,” “fairness,” “professionalism,” “responsiveness,” “flexibility,” and “timeliness” (Butler and Cantrell, 1984; Cook and Wall, 1980; Friedland, 1990; Ghosh et al., 2001; Lindskold and Bennett, 1973; McAllister, 1995; Moorman et al., 1993; Rousseau et al., 1998; Tschannen-Moran and Hoy, 2000; others, see Supplementary Table 1), among others.

In the higher education context, the trustor is the student making a cognitive decision about whether the trustee, their instructor, can meet their expectations of what an instructor should do in the classroom. This decision may be driven by evaluations of the instructor's competence and reliability, or other behaviors that seek to facilitate an effective working relationship between the student and instructor. Thus, interview codes related to the professional responsibilities typical of an undergraduate STEM instructor, such as demonstrating subject matter competence and providing adequate support for students' academic success, were subsequently grouped into the cognitive dimension (Table 3). In previous work, student trust in the higher education context was assessed using Tschannen-Moran and Hoy's framework of trust developed in the K-12 setting, which included “reliability” and “competence” as key domains of trust (McClain and Cokley, 2017). These elements were also captured in our analysis, represented in the cognitive domain of trust.

Overall, we found that cognitive trust was an important dimension for student perceptions of trustworthiness. Cognitive trust was the second most coded theme with 54 out of 57 students (94.7%) referencing at least one instructor characteristic associated with building cognitive trust (Table 4). The cognitive trust domain contained 14 codes, which the project team divided into six subcategories: academically engaging, accommodating students, professional communication, responsive to students, competent in subject matter, and supportive. The most cited singular codes within the cognitive domain were "supportive," referenced by 33 out of 57 students (57.9%) and "flexible," referenced by 19 out of 57 students (33.3%; Table 4).

Table 4

Table 4. Frequency of unique code mentions by students (out of 57 students) and codes most frequently co-occurring with it.

In interviews, students described instructor behaviors and traits that demonstrated the instructor's ability to fulfill their professional obligations in creating an effective learning environment. For example, instructor flexibility was described as an instructor's willingness to accommodate extenuating circumstances, such as illness or family emergencies, that prevented students from turning in assignments on time:

“[My instructor] was willing to work with me when it came to catching up on class notes. He set up informal office hours with me so I could catch up on materials and this was an action that not many of my professors were willing to do when I was sick. His ability to be flexible, understanding, and attentive to my needs as a student truly meant a lot to me.”

While this action could be interpreted as kindness, the context in which students described this trait had strong implications for the student's course performance. Similarly, instructor support was not described as being emotionally supportive but rather, operationalized as the instructor's role as a resource for academic success. For example, behaviors described as supportive included providing the necessary resources for students to complete assignments and motivating students to engage deeply with the course material. One student explained the importance of instructor support as: “I considered this period in my academic career my most productive because I knew there was always someone who supported my learning and would answer any of my questions, no matter what.”

3.1.2 Relational trust

In contrast to cognitive trust, which captures instructor characteristics aimed at building an effective working relationship, relational trust captures characteristics that reflect the cultivation of a strong personal relationship. This is not meant to connote an inappropriate relationship but rather refers to the ways in which an instructor may get to know a student and treat a student as a whole person. These actions may not have direct implications for students' classroom performance or academic achievement, but may have indirect effects through impact on students' self-efficacy, engagement, academic self-concept, motivation, and persistence (Ballen et al., 2017; Eimers, 2001; Komarraju et al., 2010; Kuh and Hu, 2001; Micari and Pazos, 2012; Umbach and Wawrzynski, 2005; Vogt et al., 2007).

Previous studies of trust in the higher education STEM context defined trust through elements of care, understanding, and acceptance (Cavanagh et al., 2018; Wang et al., 2021; Supplementary Table 1). Thus, we included emergent interview codes that reiterated these three elements in the relational trust domain (Table 3). Further, we considered some of the key elements that have been associated with positive personal student-teacher relationships in other studies, including openness, benevolence, care, connectedness, vulnerability, and respect (Anderson and Carta-Falsa, 2002; Jacklin and Le Riche, 2009; Komarraju et al., 2010; Meinking and Hall, 2024; McClain and Cokley, 2017; Micari and Pazos, 2012; Tschannen-Moran and Hoy, 2000; Umbach and Wawrzynski, 2005). Willing vulnerability and the disclosure of personal information by teachers have also been emphasized as key components of a trusting student-teacher relationship in the K-12 context (Holzer and Daumiller, 2025). Interview codes showing instructor behaviors intended to build and maintain strong interpersonal relationships were therefore also grouped into the relational trust dimension (Table 3).

The relational trust category contained 10 unique codes, divided into five subcategories: accepting of students, caring, interpersonal bond, open communication, and understanding. Relational trust was the most frequently coded theme in our analysis, with 55 out of 57 students (96.5%) referencing at least one instructor characteristic associated with building relational trust (Table 4). Indeed, the single most often cited instructor characteristic perceived by students as indicating trustworthiness was “caring,” mentioned by 40 out of 57 students (70.2%; Table 4). After “caring,” traits associated with an instructor's understanding were the most mentioned during student interviews, referenced by 34 out of 57 students (59.6%; Table 4). Across all student interviews, instructor characteristics from the relational domain were among the most cited traits informing students' overall perception of their instructor's trustworthiness.

In student interviews, traits associated with building relational trust were often described within the context of the instructor's efforts to recognize students' identities beyond their role as students and to share aspects of their own identity beyond that of an instructor. For example, one student explained that the trust they had in their instructor came from how:

“[t]he instructor [] really wanted to get to know me and my background. Usually, instructors just see you as just another person in a class, but this instructor really wanted to understand what [was] going on in my other classes and would check in to make sure that I wasn't being too hard on myself. They established this relationship by getting personal and sharing information about themself so I could then open up. This instructor's willingness and desire to get to know me past the identity of a student is why I trust them.”

Across disciplines, trust is often described as a “willingness to be vulnerable” to the actions of a trustee (Mayer et al., 1995). In the context of the student experiences described here, instructors who were also willing to be vulnerable through acts of self-disclosure or who made strides to accept and understand their students' vulnerability appeared to succeed in building not only relational trust, but overall trust with their students.

3.1.3 Affective trust

Affective trust is the emotional component of trust based upon an initial interpersonal connection between two individuals that can lead to feelings of closeness, care, concern, or friendship. In turn, these positive emotions can deepen the development of trust, even in the absence of other causal attributes (Dowell et al., 2015; Johnson and Grayson, 2005; Lewis and Weigert, 1985; Rempel et al., 1985). It is important to distinguish between the relational and affective domains of trust. For example, “benevolence,” or acting out of kindness, is often cited as a component of trust in the broader literature as part of an affective dimension (Erdem and Ozen, 2003; Hoy and Tschannen-Moran, 1999; Jarvenpaa and Leidner, 1998; Kramer and Cook, 2004; Lindskold and Bennett, 1973; Mayer et al., 1995; McAllister, 1995; Morgan and Hunt, 1994; Renn and Levine, 1991; Rousseau et al., 1998; others, see Supplementary Table 1). However, taking students' contextualized meaning into account in our qualitative analysis, we deemed that certain interview codes that could be related to “benevolence” went beyond simple acts of professional courtesy or kindness and instead represented truly individualized acts of care. Such codes were subsequently categorized within the relational trust domain.

Interview codes included in the affective domain were instead related to students developing positive feelings toward their instructor that initially built trust or encouraged students' openness to the possibility of pursuing a personal relationship with their instructor (Table 3). In other words, codes included within the affective domain relate to students' first impressions of their instructor's affect and approachability, which then informed their decision to interact further with their instructor. Indeed, our review of the literature found that instructor approachability and the frequency of positive interactions with instructors were important affective components of trustworthiness (Boyas and Sharpe, 2010; Denzine and Pulos, 2000; Edmondson et al., 2004; Jaasma and Koper, 1999; Kramer and Cook, 2004; Lamport, 1993; Robinson, 1996; Tschannen-Moran and Hoy, 1998; others, see Supplementary Table 1). Of the three dimensions of trust that emerged from student interviews, affective trust was the least commonly coded theme, with only 19 out of 57 students (33.3%) referencing at least one of the associated characteristics in their interviews (Table 4). The affective trust category comprised four individual codes (friendly/personable, funny, kind, and positive attitude), with no further subcategorization needed (Table 3).

When students referenced the affective domain of trust during interviews, they referred to instructor characteristics that made them feel more positive about the classroom environment and attending class or office hours. For example, one student discussed the importance of instructor kindness in building trust and motivating attendance: “One of the biggest things a professor can do that can increase my likelihood of trusting them is to simply be kind and empathetic. I am much less likely to go to a professor's office hours if they are cold and callous during class but am much more likely to approach a professor when they are kind.”

3.2 Relationships between dimensions of trust in students' words

Once we qualitatively categorized interview codes into three dimensions, we next sought to understand how different dimensions of trust interacted within students' open-ended responses. In doing so, we aimed to assess whether students tended to systematically distinguish between different dimensions of trust in their descriptions or whether there was a pattern in how traits were mentioned in relation to each other. First, we tabulated the number of codes mentioned by each student in their free responses (Table 4). Of the 57 students we interviewed, 37 mentioned traits from exactly two dimensions, 17 mentioned traits from all three dimensions, and three mentioned traits from only one dimension; those three students all used traits belonging to the relational domain. On average, students used between five and six traits to describe an instructor, with two to three of those traits falling within each of the cognitive and relational domains.
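
The per-student tally described above is straightforward to reproduce. Below is a minimal sketch in Python of how such a tabulation could work, assuming each interview has already been reduced to a set of codes; the CODE_TO_DOMAIN mapping and student_codes data are hypothetical stand-ins for the study's codebook (Table 3) and interview data, not the actual materials.

```python
# Hypothetical stand-ins: a fragment of a code-to-domain mapping (cf. Table 3)
# and two students' interview responses reduced to sets of codes.
CODE_TO_DOMAIN = {
    "caring": "relational",
    "understanding": "relational",
    "open communication": "relational",
    "supportive": "cognitive",
    "flexible": "cognitive",
    "friendly": "affective",
}

student_codes = {
    "S01": {"caring", "supportive", "flexible", "friendly"},
    "S02": {"caring", "understanding", "supportive"},
}

# Tally how many traits each student mentioned and how many trust
# dimensions those traits span.
for student, codes in student_codes.items():
    domains = {CODE_TO_DOMAIN[c] for c in codes}
    print(f"{student}: {len(codes)} traits spanning {len(domains)} dimension(s)")
```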

Next, we determined the frequency with which codes co-occurred within a student's response. Across all interviews, we found that “supportive” from the cognitive domain and “caring” from the relational domain were most frequently mentioned together, co-occurring 26 times. “Caring” and “understanding,” both from the relational domain, co-occurred 19 times, while “flexible” from the cognitive domain and “caring” from the relational domain were mentioned together 13 times. “Supportive” from the cognitive domain also frequently co-occurred with “open communication” from the relational domain (12 co-occurrences) and “understanding” from the relational domain (11 co-occurrences). The rightmost column of Table 4 lists the traits that were most frequently mentioned in conjunction with each individual code.
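
Counting within-student co-occurrences reduces to enumerating unordered pairs of codes per response. The sketch below illustrates this, reusing the hypothetical student_codes from the previous snippet; sorting each pair makes the count order-independent.

```python
from itertools import combinations
from collections import Counter

# Count how often each unordered pair of codes appears within the same
# student's response (reusing the hypothetical student_codes above).
pair_counts = Counter()
for codes in student_codes.values():
    for pair in combinations(sorted(codes), 2):
        pair_counts[pair] += 1

# Most frequently co-mentioned trait pairs, analogous to the counts reported.
for (trait_a, trait_b), n in pair_counts.most_common(3):
    print(f"{trait_a} + {trait_b}: {n}")
```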

The complete network diagram is shown in Figure 4, where each individual code is represented by a node and edges connect nodes that co-occurred in student responses. The size of the node reflects the number of times a code was mentioned across all students, and thickness of the edges represents how frequently two codes were mentioned together. Nodes are additionally color-coded by trust domain. As seen in the co-occurrence network, traits associated with the relational and cognitive domains occupy central positions in the network and exhibit a high degree of co-occurrence. These nodes are not only frequently mentioned, based on their size, but also highly interconnected. The high degree of interdependence suggests that students may perceive traits falling within these domains as closely linked and reinforcing when evaluating trust in their instructors. On the other hand, traits falling within the affective domain are not as consistently interconnected with other traits in the network, suggesting they may play a more peripheral role in student evaluations of trust.

Figure 4
Network diagram illustrating trust dimensions with nodes labeled by traits, connected by lines representing relationships. Nodes are colored by category: relational trust (purple), cognitive trust (green), and affective trust (blue). Central nodes include “Caring” and “Open Communication,” while peripheral nodes include traits like “Friendly,” “Knowledgeable,” and “Provides Feedback.”

Figure 4. Co-occurrence network graph of interview codes. Each code is represented by a node, whose size reflects the number of times it was mentioned across all student interviews and whose color denotes the thematic domain, or trust dimension, to which it belongs. Edges connect codes that were mentioned by the same student and thickness of the edges represents how frequently pairs co-occurred.

To further characterize the network structure, we calculated node-level and domain-level descriptive statistics. Traits in the relational and cognitive domains exhibited higher average degrees (22.8 and 21.21, respectively) than those in the affective domain (18.25), indicating that they co-occurred more frequently with other traits. Similarly, betweenness centrality scores were higher for relational (3.4) and cognitive (2.77) traits compared to affective traits (1.56), suggesting that nodes within these domains function as more central bridges within the network. Intra-domain edge density was also higher among relational (0.93) and cognitive traits (0.82), compared to affective traits (0.67), reinforcing that these domains are more densely interconnected. Lastly, we found that the number of inter-domain edges was highest between relational and cognitive traits (113 edges between relational and cognitive domains compared to 31 edges between relational and affective domains and 34 edges between cognitive and affective domains), further supporting their overlapping nature in students' descriptions of trustworthy instructors.
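
The node- and domain-level statistics reported here follow directly from a weighted co-occurrence graph. The following sketch, using the networkx library and reusing pair_counts and CODE_TO_DOMAIN from the snippets above, illustrates one way such statistics could be computed; on the toy data the numbers will of course differ from those reported.

```python
import networkx as nx

# Build the co-occurrence network: one node per code, with edge weights
# equal to how often two codes were mentioned together (cf. Figure 4).
G = nx.Graph()
for (a, b), n in pair_counts.items():
    G.add_edge(a, b, weight=n)

degree = dict(G.degree())                   # number of co-occurrence partners
betweenness = nx.betweenness_centrality(G)  # how often a node bridges others

# Domain-level summaries: average degree, average betweenness centrality,
# and intra-domain edge density.
for domain in ("relational", "cognitive", "affective"):
    nodes = [v for v in G if CODE_TO_DOMAIN[v] == domain]
    if not nodes:
        continue
    avg_deg = sum(degree[v] for v in nodes) / len(nodes)
    avg_btw = sum(betweenness[v] for v in nodes) / len(nodes)
    density = nx.density(G.subgraph(nodes))
    print(f"{domain}: degree={avg_deg:.2f}, "
          f"betweenness={avg_btw:.2f}, density={density:.2f}")
```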

Finally, we examined students' open-ended responses to qualitatively understand how students, in their own words, used traits from different domains in relation to each other when describing trusted instructors. In a notable example of the relationship between relational and cognitive dimensions, one student describes their instructor:

“He viewed the task of building students' understanding as his responsibility as a professor rather than the student's responsibility. When students went to see him for office hours, he always asked about them even though it was technically extraneous information—what a student's major was, what their interests were, etc. And he would not only remember this information, but he would also use it to help explain ideas better. He would introduce students to each other if they were in office hours at the same time. Essentially, he humanized and dignified the students who would come to see him, which was particularly helpful during times of struggling with the material.”

From their response, it is apparent that the instructor first took the time to get to know their students personally and build a bond based on acceptance and understanding. Once the student began to build relational trust and felt humanized, they felt more comfortable asking for help with course material. Moreover, because the instructor had taken the time to get to know students personally, they were able to provide personalized examples and analogies when explaining difficult course concepts. By making the content more individually meaningful, the instructor was better able to provide academic support for their students. Thus, the instructor built cognitive trust by leveraging the personal information they learned about the student in the process of building relational trust. Based on students' descriptions of their instructors, traits within the relational and cognitive dimensions appear to be highly interrelated, frequently overlapping and interacting within students' perceptions rather than functioning as distinct or independent categories.

3.3 Construct validity of trust dimensionality

Following our assessment of trust dimensionality using students' open-ended responses, we then sought to quantitatively test the construct validity of the three dimensions. Because we had constructed the interview codebook as part of a process model approach to instrument design (see Section 2 for a detailed description), we could readily derive survey items from individual interview codes. The resulting survey could then be used to determine whether the dimensionality we had proposed in the interview codebook similarly emerged from a factor analysis of student responses.

Student descriptions of trusted instructors gave observable, contextual operationalizations of instructor traits, which were incorporated into the interview codebook. Based on these descriptions, we wrote example items that could be used in an instrument to assess the extent to which instructors demonstrated these traits and thus, ultimately, students' perceptions of trust in their instructor. For example, based on students' use of the trait “humanizes students” in their open-ended responses, the contextual definition of the code was determined to be: “the instructor comprehends and shows consideration for the fact that students are human beings, not just students. Therefore, the instructor may show that they really know the student (e.g., knowing students' names). They may also respect the student and ‘display politeness'” (Table 3). Example items to assess this trait could therefore be: “My instructor treats students with respect” or “My instructor makes me feel like more than a student” (Table 5).

Three senior members of the research team independently wrote draft items matched to each interview code and description. Once consensus was reached on all drafted items, three currently enrolled undergraduate STEM students were asked to provide feedback on the items for the purpose of content validation (see Section 2 for details). After items were revised according to their feedback, the final survey contained 38 items. Table 5 presents all finalized items that were included in the survey, matched to interview codes. In addition to newly drafted items, we also chose to include previously validated items from the trust survey used by Cavanagh et al. (2018) due to the similarity of constructs that emerged in our qualitative data and that were used to operationalize trust in their study. The full survey is provided in Supplementary material. We distributed the survey to one STEM classroom at a large public research university and received responses from 210 students (see Section 2 for details).

Table 5

Table 5. Survey items derived from unique interview codes.

Based on thematic analysis of the codebook from which the survey items were derived, we hypothesized that a three-factor solution would define the dimensions of survey responses (affective, cognitive, and relational trust). To evaluate this hypothesis, we conducted a maximum-likelihood factor analysis with promax rotation (to accommodate nonorthogonal relationships) and a forced three-factor solution. Sampling adequacy was evaluated using the Kaiser–Meyer–Olkin (KMO) statistic; the KMO value of 0.953 indicated that the data were well suited for factor analysis. Bartlett's test of sphericity additionally indicated that the data were suitable for a factor analysis [χ2(703) = 7,621.74, p < 0.0001].
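
For readers interested in reproducing this style of analysis, the sketch below shows how the reported checks and extraction could be run in Python with the factor_analyzer package. The file name "trust_survey.csv" and its layout (one row per respondent, one column per Likert-scale item) are assumptions for illustration, not the study's actual data.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

# Load survey responses: one row per respondent, one column per item
# (hypothetical file name and layout).
items = pd.read_csv("trust_survey.csv").dropna()

# Sampling adequacy (KMO) and Bartlett's test of sphericity.
chi2, p = calculate_bartlett_sphericity(items)
_, kmo_overall = calculate_kmo(items)
print(f"Bartlett: chi2={chi2:.2f}, p={p:.2e}; KMO={kmo_overall:.3f}")

# Maximum-likelihood extraction with an oblique (promax) rotation,
# forcing the hypothesized three-factor solution.
fa = FactorAnalyzer(n_factors=3, rotation="promax", method="ml")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)  # pattern matrix
ss_loadings, prop_var, cum_var = fa.get_factor_variance()
factor_corr = fa.phi_  # factor correlation matrix under oblique rotation
```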

The three extracted factors accounted for 54.4% of the total variance. Table 6 presents the factor pattern matrix and cross-loadings with other factors. The first factor accounted for 21.1% of the variance, the second for 18.6%, and the third for 14.7%. Using the factor correlation matrix, we found that the three factors were sufficiently distinct from each other, as all factor correlations fell below the recommended 0.84 threshold (Brown, 2015; Kline, 2016) (Supplementary Table 2). When all survey items were included, we observed a high degree of internal consistency (Cronbach's α = 0.971). Internal consistency was also evaluated for each of the factors independently and was high for each factor (Cronbach's α = 0.957, 0.950, and 0.934, respectively). To evaluate model fit for the three-factor solution, we utilized the following fit indices: TLI, CFI, NFI, RMSEA, and chi-square goodness of fit. We found that the close-fit indices for the three-factor solution approximate, but do not all reach, recommended levels (RMSEA ≤ 0.08; NFI, TLI, and CFI ≈ 0.95) for an appropriate outcome [χ2(592) = 1,593.55, p < 0.001; TLI = 0.84, NFI = 0.80, CFI = 0.866, RMSEA = 0.09] (Bentler and Bonett, 1980; Hu and Bentler, 1999; Tucker and Lewis, 1973). Specifically, while the RMSEA and chi-square statistics for the three-factor solution are in the acceptable fit range, NFI, CFI, and TLI are below acceptable fit levels. When we evaluated model fit indices for other factor solutions for the survey, we found that four- and five-factor solutions more closely approached recommended levels (Supplementary Table 3). Taken together, the psychometric properties of the survey suggest that while the instrument has discriminant validity and high internal consistency, our hypothesized three-factor solution may not adequately capture the underlying structure.
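
Two of the quantities reported here are simple enough to compute directly from standard formulas. The sketch below implements Cronbach's alpha and the RMSEA point estimate in plain numpy; plugging the reported chi-square (1,593.55), degrees of freedom (592), and sample size (210) into the RMSEA formula recovers the reported value of 0.09.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def rmsea(chi2: float, df: int, n: int) -> float:
    """RMSEA point estimate from a model chi-square, its degrees of
    freedom, and the sample size (standard formula)."""
    return float(np.sqrt(max((chi2 - df) / (df * (n - 1)), 0.0)))

# Reproduces the reported RMSEA for the three-factor solution.
print(round(rmsea(1593.55, 592, 210), 2))  # -> 0.09
```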

Table 6

Table 6. Factor pattern matrix and cross-loadings for individual items in the survey.

Moreover, as can be seen in Table 6 and presented schematically in Figure 5, items derived from codes that were qualitatively categorized into cognitive, relational, and affective dimensions did not load onto correspondingly distinct factors. Factor 1 included 18 items, primarily from the cognitive and relational dimensions, and one from the affective domain. Factor 2 included 10 items spanning all three trust dimensions, while Factor 3 included 11 items from the cognitive and relational dimensions. Across all factors, several items exhibited moderate or strong cross-loadings above the 0.32 threshold (Table 6; Tabachnick and Fidell, 2001). Given the high degree of cross-loading and the distribution of items across the three factors, our analysis suggests that the hypothesized three-dimensional structure of trust may not be representative of the complex interactions between domains that inform student perceptions of trust. Further, items from the relational and cognitive domains tended to factor together consistently. This affirms findings from our co-occurrence network analysis, suggesting that traits from these two dimensions are highly interrelated.
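
Flagging cross-loading items reduces to a simple filter over the pattern matrix. The sketch below assumes the loadings DataFrame from the earlier factor-analysis snippet and applies the 0.32 threshold cited above.

```python
# Flag items whose absolute loading meets the 0.32 threshold
# (Tabachnick and Fidell, 2001) on more than one factor, reusing the
# `loadings` DataFrame from the factor-analysis sketch above.
THRESHOLD = 0.32

salient = loadings.abs() >= THRESHOLD
cross_loaded = loadings[salient.sum(axis=1) > 1]
print(f"{len(cross_loaded)} items cross-load above {THRESHOLD}:")
print(cross_loaded.round(2))
```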

Figure 5
Diagram illustrating the relationship between theorized trust dimensions and latent factors determined by factor analysis. The dimensions are Affective Trust, Cognitive Trust, and Relational Trust, each connecting to various survey items about instructor behavior. These items are mapped to three latent factors, which are represented by blue triangles on the right. Affective Trust items include friendliness and humor, Cognitive Trust items focus on communication and understanding, and Relational Trust items cover empathy and respect. The connections to latent factors are indicated by colored lines.

Figure 5. Schematic representing survey items (boxes) and their corresponding trust domain from the qualitative interview codebook (circles) and extracted factors from a forced three-factor solution (triangles).

4 Discussion

With this study we sought to broaden our understanding of STEM students' perceptions of trust in their instructor by performing a qualitative analysis of structured interviews where students were asked to share characteristics of a trusted college instructor. Further, we sought to test whether the dimensionality of the latent trust construct proposed in previous research could be recapitulated in the higher education STEM context. Our findings build upon previous work that defined student trust in their instructor using a close-personal relationship framework that highlighted the instructor's care, understanding, and acceptance (Cavanagh et al., 2018; Clark and Lemay, 2010; Wang et al., 2021) and previous work in related fields defining trust between relevant organizational stakeholders using different domains, such as cognitive or affective trust (Ghosh et al., 2001; Lewicki and Bunker, 1996; Lewis and Weigert, 1985; Mayer et al., 1995; McAllister, 1995; Tschannen-Moran and Hoy, 2000). Our qualitative analysis suggested trust dimensions similar to existing frameworks, but student responses revealed substantial overlap between traits theoretically categorized as distinct, particularly between relational and cognitive domains. This interdependence was reinforced by a factor analysis, which failed to empirically validate the proposed dimensional structure derived from student-identified traits.

4.1 Proposed three-domain model of instructor trust

We found that the characteristics students used to describe trusted instructors fell into three broad domains: “relational trust” included characteristics related to how an instructor intentionally cultivated a personal relationship with their students, “cognitive trust” encompassed characteristics related to students' evaluation of their instructors' professional competence, and “affective trust” contained characteristics that led to a positive first impression of the instructor (Figure 3). The relational trust domain was most comparable to subconstructs previously used to define student trust through the close-personal relationship framework. In previous work, the subconstructs of “care,” “acceptance,” and “understanding” were empirically validated to underlie trust and found to be strongly positively associated with other positive student learning outcomes (Cavanagh et al., 2018; Wang et al., 2021). Another study similarly found that students in small classes using EBTs such as co-creation and un-grading reported that relational trust with their peers and instructors, centered on reciprocated vulnerability, was critical for their engagement (Meinking and Hall, 2024). The importance of reciprocated vulnerability for building trust has been echoed in K-12 education settings as well (Holzer and Daumiller, 2025). Our findings support the validity of using a relational framework in the college STEM context, given that more than 96% of interviewed students referenced instructor characteristics related to relationship-building (Table 4). Indeed, all three subconstructs of the close-personal relationship framework (care, understanding, and acceptance) were among both the most cited traits and the most highly interconnected traits in our co-occurrence network analysis (Figure 4).

The cognitive trust domain of our codebook parallels the use of competence to operationalize trust in instructors in the K-12 context and in the broader literature; many of the traits identified by students in our study have previously been associated with cognitive trust in other contexts (Butler and Cantrell, 1984; Cook and Wall, 1980; Friedland, 1990; Ghosh et al., 2001; Lindskold and Bennett, 1973; McAllister, 1995; Moorman et al., 1993; Rousseau et al., 1998; Tschannen-Moran and Hoy, 2000; others, see Supplementary Table 1). Characteristics such as “organized,” “knowledgeable,” and “professional communication” were salient among college STEM students, with more than 94% of students mentioning instructor traits that demonstrated their ability to perform their professional duties (Table 4). Additionally, we found that an important aspect of cognitive trust for students was that their instructor be “academically engaging.” While this trait is not widely cited in the broader literature, it bears similarity to descriptions that emerged in Di Battista et al.'s (2020) qualitative study investigating Italian students' perceptions of instructor trustworthiness. In their study, students referenced the instructor's ability to manage the classroom, engage student participation, and demonstrate passion for the subject as important aspects of trustworthiness. Here, we similarly found that “academically engaging” instructors were skilled in cultivating an active classroom environment where students were compelled to pay close attention and engage with course material. In the broader literature, this definition is most closely paralleled by Gabarro's (1978) description of “interpersonal competence” in the context of organizational management and trust, which references managers' ability to build effective social relationships and competently engage in social interactions.

The affective domain is perhaps most closely related to the subconstruct of benevolence previously used to define trust in the K-12 setting and in broader literature; the term is used to describe trustees who act with the best interest of the trustor in mind (Baier, 1986; Butler and Cantrell, 1984; Cummings and Bromiley, 1996; Mayer et al., 1995; Morgan and Hunt, 1994; Renn and Levine, 1991; Schindler and Thomas, 1993; Tschannen-Moran and Hoy, 2000; Zand, 1972; others, see Supplementary Table 1). We found that characteristics in this domain were least often mentioned by participants, with only a third of the students we interviewed referencing “friendly,” “funny,” “kind,” and “positive attitude” (Table 4). These traits may form the basis for students' first impressions, which previous research suggests can be formed in < 6 s (Ambady and Rosenthal, 1993; Tom et al., 2010; Begrich et al., 2020). A favorable impression of approachability can lead to increased interactions with the instructor both formally within the classroom or informally out-of-class (Cox et al., 2010; Denzine and Pulos, 2000; Lamport, 1993; Schussler et al., 2021; Valenzuela, 2025; Wilson et al., 1974). These interactions have been shown to be critical for students' social integration and subsequent trust in institutions of higher education (Milem and Berger, 1997; Nora et al., 1996; Pascarella and Terenzini, 1979; Tinto, 2015; Pike et al., 1997; Wilcox et al., 2005; Kim et al., 2023; Reindl et al., 2022; Paquin et al., 2025). The fact that these traits were not among the most frequently cited by interviewed students may suggest that, over time, the importance of traits informing their first impression was superseded by the strength of the personal relationship that developed afterward.

Our codebook shares a great deal of overlap with frameworks in the existing literature on trust across many contexts. However, no one existing framework of trust sufficiently captures the complexity of college STEM students' perceptions of trust in their instructor that was uncovered through our qualitative interviews. We acknowledge that many of the characteristics students associated with trust, such as “good listener,” “flexibility,” or “support,” may also be interpreted as dimensions of related constructs such as helpfulness or general teaching effectiveness. This overlap reflects a broader challenge in trust research: trust often co-occurs with other relational or affective constructs, making clean conceptual boundaries difficult to maintain (McEvily and Tortoriello, 2011). Rather than asserting that these traits are unique to trust, our approach sought to identify which traits students themselves associated with trustworthiness. In doing so, we recognize that students' definitions of trust are likely embedded in broader relational judgments and shaped by contextual cues. This underscores the importance of bottom-up approaches for operationalizing trust in context-specific ways. Based on our literature review, existing work on trust in higher education is relatively limited in its inclusion of direct student responses (Di Battista et al., 2020). Our codebook centers students' perspective on what makes an instructor trustworthy, including specific examples of actions instructors took to gain their trust. Future work may use the codebook as a tool to generate actionable strategies for building trust in college STEM classrooms.

4.2 Validity of three-domain model

As a latent variable, trust has long been made observable through operationalization using sub-constructs or domains. The construct validity of trust dimensionality itself, though, has been challenged. When existing conceptualizations and measures of trust were applied across disciplines, Whipple et al. (2013) found that content validity and replicability fell significantly below adequate standards. A lack of replicability for existing trust measures was similarly critiqued by McEvily and Tortoriello (2011): their review of 171 publications identified 129 distinct measures of trust, of which only 24 had been successfully replicated, and of those, only 13 by an independent research group. Finally, in a series of confirmatory factor analyses exploring the construct validity of models of trust across several institutional contexts, PytlikZillig and Kimbrough (2016) found highly variable discriminant validity depending on the test sample and context. In the education context specifically, Niedlich et al. (2021) systematically identify the inconsistency with which trust has been operationalized in existing research and note that theorized trust dimensions often overlap or are conflated. The conclusions of these reviews are also supported by recent empirical studies (Di Battista et al., 2020, 2021).

Our work recapitulates these findings. When we qualitatively examined student responses, we found that traits from different domains frequently co-occurred and that students described traits from different dimensions as reinforcing rather than distinct (Figure 4). Using an instrument derived from our qualitative interview codebook, we found that items from different theorized dimensions factored into the same latent sub-constructs and that the model fit of a three-factor solution was inadequate (Figure 5). We found that a four- or five-factor solution may exhibit better model fit (Supplementary Table 3), aligning with previous findings that higher-order factors may provide a better fit than attempting to collapse sub-constructs into fewer factors (PytlikZillig and Kimbrough, 2016). Although our quantitative evidence suggests that there are indeed distinct factors underlying trust, the actual dimensions do not necessarily align with those that we and others have previously proposed. For example, the five-factor solution distinguishes between an instructor's treatment of the class as a whole (e.g., “My instructor is accepting of students' differences” and “My instructor cares about students' wellbeing”) and the development of a personal relationship with individual students (e.g., “My instructor listens very carefully to me” and “My instructor accepts me for who I am”). Moreover, two of the factors also exemplified a distinction between competent instructional communication (e.g., “My instructor communicates course concepts clearly” and “My instructor makes class activities interesting”) and interpersonal communication (e.g., “My instructor expresses interests in students' lives” and “My instructor consistently communicates with students outside of class”). These findings reinforce previous calls for careful consideration of the influence of context when attempting to define and measure trust (Di Battista et al., 2020, 2021; McEvily and Tortoriello, 2011; Niedlich et al., 2021; PytlikZillig and Kimbrough, 2016; Whipple et al., 2013). Our codebook and resulting instrument can therefore form the basis for a contextualized re-examination of student trust specifically within STEM higher education. Future work, however, is needed to empirically test the instrument with multiple different samples to assess whether a higher-order factor structure consistently emerges.

4.3 Implications and future directions

The strength of the student-instructor relationship has long been shown to benefit students' social and learning outcomes. Students who interact more frequently with their instructors gain increased social and cultural capital in academic research environments (Ahmad et al., 2017; Cooper et al., 2018, 2021; Gillespie, 2005; Ream et al., 2014; Thompson et al., 2016; Wilson and Davis, 2020). As student-centered teaching transforms the college STEM education landscape, interactions between students and their instructors are steadily increasing (Esparza et al., 2020; Freeman et al., 2014; Handelsman et al., 2007; Henderson and Dancy, 2009). Given the effectiveness of EBTs and early undergraduate research experiences for increasing student performance and retention, there is a need to better understand how students form perceptions of trust in their instructors and consequently develop strong relationships with them (Freeman et al., 2014; Graham et al., 2013; Hanauer et al., 2017; Theobald et al., 2020; Wang and Degol, 2013).

Indeed, previous work has shown that trust in instructor is strongly positively associated with student buy-in to an instructor's use of EBTs, student engagement, intent to persist, and course performance (Wang et al., 2021). In that study, researchers also tested the relationship between students' growth mindset and the same outcomes and found that trust's association with those outcomes was more than twice as strong as that of growth mindset (Wang et al., 2021). This finding is particularly striking because growth mindset is an internal view of intelligence, while trust is a perception of someone external to the student. Given that changing students' internal beliefs about their intelligence can be difficult (Dweck, 2008), it is an encouraging possibility that instructors might be able to improve learning outcomes by investing time toward gaining trust.

We followed a “process model” approach to develop the interview codebook in order to facilitate the construction of an instrument that could be used for empirical testing (Chatterji, 2003). By drafting items from a codebook supported by student interview data (Table 5), this approach essentially allows students themselves to write the instrument items.

After an empirical pilot study of the instrument constructed from our codebook, we found that the instrument demonstrated both discriminant validity and high internal consistency, but its internal structure was not adequately modeled by a hypothesized three-factor solution (Table 6; Supplementary Table 2). Our study was limited to a relatively small sample: a single high-enrollment college STEM classroom. Therefore, future work is needed to empirically validate the drafted instrument items with a larger and more diverse sample to determine the underlying factor structure. Such an instrument can be used not only to inform future lines of research, but also as a tool for practitioner use in the classroom and in instructor training and evaluation.

While our study specifically focused on undergraduate STEM students given previous research demonstrating the importance of trust for student buy-in to EBTs, it is possible that our findings may be relevant for students and instructors in other disciplines. Large-enrollment classrooms are common among introductory courses for many disciplines, such as the arts, humanities, and social sciences, and the characteristics that help STEM instructors build trust with their students in large courses may be generalizable to other large courses. Future empirical testing of the drafted instrument may be done in a variety of contexts, including other disciplines, to test this possibility.

5 Limitations

There are several important limitations to consider when interpreting the results of our study. First, due to the qualitative nature of the data, all analyses are inherently subject to researcher biases. Our primary research team was composed of people who belong to majority groups in STEM and higher education, situated at an affluent private research university. Data were analyzed individually by members of the research team, and our prior knowledge and experiences naturally color our interpretation of student responses. We chose a qualitative approach to capture more detailed insights than a survey might.

Additionally, each student we interviewed may have interpreted the word “trust” differently based on their prior experiences and assumptions. This limitation may be addressed in future work by providing participants with a definition of “trust.” We opted not to do so in the current study because our work was exploratory in nature, and we wanted to capture students' most unbiased interpretation of the concept of trust. Due to the in-depth nature of the interview process, we were limited in the number of students and contexts we could sample from. Specifically, student interviews were conducted during the COVID-19 pandemic, which severely impacted college learning experiences. Thus, without an investigation of a larger and more diverse sample, we do not intend to generalize the results to students in other contexts. Students were also specifically asked to recall a single past instructor. The characteristics that emerged therefore may not fully capture the developmental process of building trust over time or may be influenced by the amount of time that had elapsed and the changing perceptions of students at different points in their college careers. Future work may take a longitudinal approach to better understand how trusting relationships are built and sustained.

6 Conclusion

In this study, we engaged in a “bottom-up” qualitative approach that allowed students to define trustworthy instructors in their own words. We found that students used many traits spanning previously theorized sub-constructs of trust (including cognitive, affective, and relational domains) in interrelated ways, and latent factor analysis challenged the construct validity of a simple three-domain model. This work informs future investigation of the impact of student trust in instructors on desired long-term student outcomes, such as persistence in STEM education and STEM-related careers, by providing a contextualized and broadened framework of trust and an accompanying assessment tool for the undergraduate STEM student population.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving humans were approved by Yale University Institutional Review Board. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.

Author contributions

KZ: Visualization, Formal analysis, Validation, Conceptualization, Methodology, Writing – review & editing, Writing – original draft, Software. JG: Methodology, Investigation, Project administration, Conceptualization, Formal analysis, Writing – review & editing. TZ: Writing – review & editing, Investigation, Methodology, Formal analysis, Conceptualization. LC: Writing – review & editing, Formal analysis, Investigation. JB: Project administration, Writing – review & editing. HW: Investigation, Writing – review & editing. MB: Writing – review & editing, Project administration. DH: Methodology, Conceptualization, Writing – review & editing, Supervision. XC: Writing – review & editing, Conceptualization, Methodology, Supervision. MG: Methodology, Conceptualization, Writing – review & editing, Funding acquisition, Supervision.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. This work was funded by NSF grant #2000417.

Acknowledgments

We thank all the individuals who participated in the qualitative interviews. We also thank the many contributors to this project including members of our advisory board, Michelle Withers, Phil Reeves, Viknesh Sivanathan, Todd Campbell, and Oriana Aragon who provided valuable feedback over the course of this study. Finally, we thank the undergraduate research assistants who assisted with data collection and analysis, Phoebe Yeh, Demi Lee, Steven Kao, Lazaros Efthymiou, and Claire Sullivan. We also thank Phoebe Yeh and Mia Morgan for their help with manuscript preparation.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Gen AI was used in the creation of this manuscript.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2025.1617067/full#supplementary-material

References

Ahmad, C. N. C., Shaharim, S. A., and Abdullah, M. F. N. L. (2017). Teacher-student interactions, learning commitment, learning environment and their relationship with student learning comfort. J. Turk. Sci. Educ. 14, 57–72. Available online at: https://www.tused.org/index.php/tused/article/view/137

Alt, D., Itzkovich, Y., and Naamati-Schneider, L. (2022). Students' emotional well-being, and perceived faculty incivility and just behavior before and during COVID-19. Front. Psychol. 13:849489. doi: 10.3389/fpsyg.2022.849489

Ambady, N., and Rosenthal, R. (1993). Half a minute: predicting teacher evaluations from thin slices of nonverbal behavior and physical attractiveness. J. Pers. Soc. Psychol. 64, 431–441. doi: 10.1037/0022-3514.64.3.431

American Educational Research Association (1999). The Standards for Educational and Psychological Testing. Washington, DC: American Psychological Association.

Andersen, J. F. (1979). Teacher immediacy as a predictor of teaching effectiveness. Ann. Int. Commun. Assoc. 3, 543–559. doi: 10.1080/23808985.1979.11923782

Anderson, L. E., and Carta-Falsa, J. (2002). Factors that make faculty and student relationships effective. Coll. Teach. 30, 134–138. doi: 10.1080/87567550209595894

Baier, A. (1986). Trust and antitrust. Ethics 96, 231–260. doi: 10.1086/292745

Ballen, C. J., Wieman, C., Salehi, S., Searle, J. B., and Zamudio, K. R. (2017). Enhancing diversity in undergraduate science: self-efficacy drives performance gains with active learning. CBE—Life Sci. Educ. 16:ar56. doi: 10.1187/cbe.16-12-0344

Bayraktar, B., Ragupathi, K., and Troyer, K. A. (2025). Building trust through feedback: a conceptual framework for educators. Teach. Learn. Inq. 13, 1–19. doi: 10.20343/teachlearninqu.13.7

Begrich, L., Fauth, B., and Kunter, M. (2020). Who sees the most? Differences in students' and educational research experts' first impressions of classroom instruction. Soc. Psychol. Educ. 23, 673–699. doi: 10.1007/s11218-020-09554-2

Beltrano, N. R., Archer-Kuhn, B., and MacKinnon, S. (2021). Mining for gold and finding only nuggets: attempting a rapid systematic review, on trust in higher education IBL classrooms. Teach. Teach. 27, 300–315. doi: 10.1080/13540602.2021.1955672

Bentler, P. (1990). Comparative fit indexes in structural models. Psychol. Bull. 107, 238–246. doi: 10.1037/0033-2909.107.2.238

Bentler, P. M., and Bonett, D. G. (1980). Significance tests and goodness of fit in the analysis of covariance structures. Psychol. Bull. 88, 588–606. doi: 10.1037/0033-2909.88.3.588

Boyas, J., and Sharpe, T. L. (2010). Racial and ethnic determinants of interracial and ethnic trust. J. Hum. Behav. Soc. Environ. 20, 618–636. doi: 10.1080/10911351003673682

Bradford, B., Jackson, J., Murphy, K., and Sargeant, E. (2022). The space between: trustworthiness and trust in the police among three immigrant groups in Australia. J. Trust Res. 12, 125–152. doi: 10.1080/21515581.2022.2155659

Braun, V., and Clarke, V. (2006). Using thematic analysis in psychology. Qual. Res. Psychol. 3, 77–101. doi: 10.1191/1478088706qp063oa

Brazeal, K. R., Brown, T. L., and Couch, B. A. (2016). Characterizing student perceptions and buy-in toward common formative assessment techniques. CBE—Life Sci. Educ. 15:ar73. doi: 10.1187/cbe.16-03-0133

Brown, T. A. (2015). Confirmatory Factor Analysis for Applied Research. Guilford Publications.

Brownell, S. E., and Tanner, K. D. (2012). Barriers to faculty pedagogical change: lack of training, time, incentives, and… tensions with professional identity? CBE—Life Sci. Educ. 11, 339–346. doi: 10.1187/cbe.12-09-0163

Bryk, A., and Schneider, B. (2002). Trust in Schools: A Core Resource for Improvement. New York, NY: Russell Sage Foundation.

Butler, J. K., and Cantrell, R. S. (1984). A behavioral decision theory approach to modeling dyadic trust in superiors and subordinates. Psychol. Rep. 55, 19–28. doi: 10.2466/pr0.1984.55.1.19

Camanto, O. J., and Campbell, L. (2025). Trust in close relationships revisited. J. Soc. Pers. Relat. 60–70. doi: 10.1177/02654075251346105

Cavanagh, A. J., Aragón, O. R., Chen, X., Couch, B., Durham, M., Bobrownicki, A., et al. (2016). Student buy-in to active learning in a college science course. CBE—Life Sci. Educ. 15:ar76. doi: 10.1187/cbe.16-07-0212

Cavanagh, A. J., Chen, X., Bathgate, M., Frederick, J., Hanauer, D. I., Graham, M. J., et al. (2018). Trust, growth mindset, and student commitment to active learning in a college science course. CBE—Life Sci. Educ. 17:ar10. doi: 10.1187/cbe.17-06-0107

Chasteen, S. V., and Pollock, S. J. (2008). Transforming upper-division electricity and magnetism. AIP Conf. Proc. 1064, 91–94. doi: 10.1063/1.3021282

Chatterji, M. (2003). Designing and Using Tools for Educational Assessment. Boston, MA: Allyn and Bacon.

Chatterji, M., Sentovich, C., Ferron, J., and Rendina-Gobioff, G. (2002). Using an iterative model to conceptualize, pilot test, and validate scores from an instrument measuring teacher readiness for educational reforms. Educ. Psychol. Meas. 62, 444–465. doi: 10.1177/00164402062003004

Christe, B. L. (2013). The importance of faculty-student connections in stem disciplines. J. STEM Educ.: Innov. Res. 14, 22–26. Available online at: https://www.jstem.org/jstem/index.php/JSTEM/article/view/1797

Christophel, D. M. (1990). The relationships among teacher immediacy behaviors, student motivation, and learning. Commun. Educ. 39, 323–340. doi: 10.1080/03634529009378813

Clark, M. S., and Lemay, E. P. (2010). “Close relationships,” in Handbook of Social Psychology, eds. S. T. Fiske, D. T. Gilbert, and G. Lindzey (Hoboken, NJ: John Wiley & Sons), 898–940. doi: 10.1002/9780470561119.socpsy002025

Cook, J., and Wall, T. (1980). New work attitude measures of trust, organizational commitment and personal need non-fulfilment. J. Occup. Psychol. 53, 39–52. doi: 10.1111/j.2044-8325.1980.tb00005.x

Cooper, K. M., Ashley, M., and Brownell, S. E. (2018). Breaking down barriers: a bridge program helps first-year biology students connect with faculty. J. Coll. Sci. Teach. 47. doi: 10.2505/4/jcst18_047_04_60

Cooper, K. M., Cala, J. M., and Brownell, S. E. (2021). Cultural capital in undergraduate research: An exploration of how biology students operationalize knowledge to access research experiences at a large, public research-intensive institution. Int. J. STEM Educ. 8, 1–17. doi: 10.1186/s40594-020-00265-w

Corwin, L. A., Graham, M. J., and Dolan, E. L. (2015). Modeling course-based undergraduate research experiences: an agenda for future research and evaluation. CBE—Life Sci. Educ. 14:es1. doi: 10.1187/cbe.14-10-0167

Cox, B. E., McIntosh, K. L., Terenzini, P. T., Reason, R. D., and Lutovsky Quaye, B. R. (2010). Pedagogical signals of faculty approachability: factors shaping faculty-student interaction outside the classroom. Res. High. Educ. 51, 767–788. doi: 10.1007/s11162-010-9178-z

Cummings, L. L., and Bromiley, P. (1996). The organizational trust inventory (OTI). Trust Organ. Front. Theory Res. 302, 39–52. doi: 10.4135/9781452243610.n15

Denzine, G. M., and Pulos, S. (2000). College students' perceptions of faculty approachability. Educ. Res. Q. 24, 56–66. Available online at: https://www.proquest.com/scholarly-journals/college-students-perceptions-faculty/docview/216181530/se-2

Di Battista, S., Pivetti, M., and Berti, C. (2020). Competence and benevolence as dimensions of trust: lecturers' trustworthiness in the words of Italian students. Behav. Sci. 10:143. doi: 10.3390/bs10090143

Di Battista, S., Smith, H. J., Berti, C., and Pivetti, M. (2021). Trustworthiness in higher education: the role of professor benevolence and competence. Soc. Sci. 10:18. doi: 10.3390/socsci10010018

Dolan, E. L. (2015). Biology education research 2.0. CBE—Life Sci. Educ. 14:ed1. doi: 10.1187/cbe.15-11-0229

Dowell, D., Morrison, M., and Heffernan, T. (2015). The changing importance of affective trust and cognitive trust across the relationship lifecycle: a study of business-to-business relationships. Ind. Mark. Manage. 44, 119–130. doi: 10.1016/j.indmarman.2014.10.016

Dweck, C. S. (2008). Can personality be changed? The role of beliefs in personality and change. Curr. Dir. Psychol. Sci. 17, 391–394. doi: 10.1111/j.1467-8721.2008.00612.x

Dzimińska, M., Fijałkowska, J., and Sułkowski, Ł. (2018). Trust-based quality culture conceptual model for higher education institutions. Sustainability 10:2599. doi: 10.3390/su10082599

Edmondson, A. C., Kramer, R. M., and Cook, K. S. (2004). “Psychological safety, trust, and learning in organizations: a group-level lens,” in Trust and Distrust in Organizations: Dilemmas and Approaches, Vol. 12, eds. R. M. Kramer, and K. S. Cook (New York, NY: Russell Sage Foundation), 239–272.

Eimers, M. (2001). The impact of student experiences on progress in college: an examination of minority and nonminority differences. NASPA J. 38, 386–409. doi: 10.2202/1949-6605.1148

Erdem, F., and Ozen, J. (2003). Cognitive and affective dimensions of trust in developing team performance. Team Perform. Manage. Int. J. 9, 131–135. doi: 10.1108/13527590310493846

Esparza, D., Wagler, A. E., and Olimpo, J. T. (2020). Characterization of instructor and student behaviors in CURE and non-CURE learning environments: impacts on student motivation, science identity development, and perceptions of the laboratory experience. CBE—Life Sci. Educ. 19:ar10. doi: 10.1187/cbe.19-04-0082

Faranda, W. T. (2015). The effects of instructor service performance, immediacy, and trust on student-faculty out-of-class communication. Mark. Educ. Rev. 25, 83–97. doi: 10.1080/10528008.2015.1029853

Fedesco, H. N., Bonem, E. M., Wang, C., and Henares, R. (2019). Connections in the classroom: separating the effects of instructor and peer relatedness in the basic needs satisfaction scale. Motiv. Emot. 43, 758–770. doi: 10.1007/s11031-019-09765-x

Felten, P., Forsyth, R., and Sutherland, K. A. (2023). Building trust in the classroom: a conceptual model for teachers, scholars, and academic developers in higher education. Teach. Learn. Inq. 11. doi: 10.20343/teachlearninqu.11.20

Ferguson, S. N. (2021). Effects of faculty and staff connectedness on student self-efficacy. J. Scholarsh. Teach. Learn. 21:28597. doi: 10.14434/josotl.v21i2.28597

Finelli, C. J., Nguyen, K., DeMonbrun, M., Borrego, M., Prince, M., Husman, J., et al. (2018). Reducing student resistance to active learning: strategies for instructors. J. Coll. Sci. Teach. 47, 80–91. doi: 10.2505/4/jcst18_047_05_80

Fischer, S., Walker, A., and Hyder, S. (2023). The development and validation of a multidimensional organisational trust measure. Front. Psychol. 14:1189946. doi: 10.3389/fpsyg.2023.1189946

Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., et al. (2014). Active learning increases student performance in science, engineering, and mathematics. Proc. Nat. Acad. Sci. 111, 8410–8415. doi: 10.1073/pnas.1319030111

Friedland, N. (1990). Attribution of control as a determinant of cooperation in exchange interactions. J. Appl. Soc. Psychol. 20, 303–320. doi: 10.1111/j.1559-1816.1990.tb00413.x

Frymier, A. B. (1994). A model of immediacy in the classroom. Commun. Q. 42, 133–144. doi: 10.1080/01463379409369922

Frymier, A. B., Goldman, Z. W., and Claus, C. J. (2019). Why nonverbal immediacy matters: a motivation explanation. Commun. Q. 67, 526–539. doi: 10.1080/01463373.2019.1668442

Gabarro, J. J. (1978). “The development of trust, influence, and expectations,” in Interpersonal Behavior: Communication and Understanding in Relationship, eds. A. G. Athos, and J. J. Gabarro (Englewood Cliffs, NJ: Prentice-Hall), 290–303.

Ghosh, A. K., Whipple, T. W., and Bryan, G. A. (2001). Student trust and its antecedents in higher education. J. Higher Educ. 72, 322–340. doi: 10.1080/00221546.2001.11777097

Gillespie, M. (2005). Student–teacher connection: a place of possibility. J. Adv. Nurs. 52, 211–219. doi: 10.1111/j.1365-2648.2005.03581.x

Glaser, B., and Strauss, A. (2017). Discovery of Grounded Theory: Strategies for Qualitative Research. London: Routledge. doi: 10.4324/9780203793206

Graham, M. J., Frederick, J., Byars-Winston, A., Hunter, A. B., and Handelsman, J. (2013). Increasing persistence of college students in STEM. Science 341, 1455–1456. doi: 10.1126/science.1240487

Graham, M. J., Naqvi, Z., Encandela, J., Harding, K. J., and Chatterji, M. (2009). Systems-based practice defined: taxonomy development and role identification for competency assessment of residents. J. Grad. Med. Educ. 1, 49–60. doi: 10.4300/01.01.0009

Gross, D., Pietri, E. S., Anderson, G., Moyano-Camihort, K., and Graham, M. J. (2015). Increased preclass preparation underlies student outcome improvement in the flipped classroom. CBE—Life Sci. Educ. 14:ar36. doi: 10.1187/cbe.15-02-0040

Guzzardo, M. T., Khosla, N., Adams, A. L., Bussmann, J. D., Engelman, A., Ingraham, N., et al. (2021). “The ones that care make all the difference”: perspectives on student-faculty relationships. Innov. High. Educ. 46, 41–58. doi: 10.1007/s10755-020-09522-w

Hagenauer, G., and Volet, S. E. (2014). Teacher–student relationship at university: an important yet under-researched field. Oxford Rev. Educ. 40, 370–388. doi: 10.1080/03054985.2014.921613

Hai-Jew, S. (2007). The trust factor in online instructor-led college courses. J. Interact. Instr. Dev. 19, 11–25.

Hanauer, D. I., Graham, M. J., SEA-PHAGES, Betancur, L., Bobrownicki, A., Cresawn, S. G., et al. (2017). An inclusive Research Education Community (iREC): impact of the SEA-PHAGES program on research outcomes and student learning. Proc. Natl. Acad. Sci. U.S.A. 114, 13531–13536. doi: 10.1073/pnas.1718188115

Handelsman, J., Miller, S., and Pfund, C. (2007). Scientific Teaching. New York, NY: Macmillan.

Henderson, C., Dancy, M., and Niewiadomska-Bugaj, M. (2012). Use of research-based instructional strategies in introductory physics: where do faculty leave the innovation-decision process? Phys. Rev. Spec. Top—Phys. Educ. Res. 8:020104. doi: 10.1103/PhysRevSTPER.8.020104

Henderson, C., and Dancy, M. H. (2009). Impact of physics education research on the teaching of introductory quantitative physics in the United States. Phys. Rev. Spec. Top. Phys. Educ. Res. 5:020107. doi: 10.1103/PhysRevSTPER.5.020107

Hess, J. A., and Mazer, J. P. (2017). Looking forward: envisioning the immediate future of instructional communication research. Commun. Educ. 66, 474–475. doi: 10.1080/03634523.2017.1359640

Hilal, A. H., and Alabri, S. S. (2013). Using NVivo for data analysis in qualitative research. Int. Interdiscip. J. Educ. 2, 181–186. doi: 10.12816/0002914

Holzer, A., and Daumiller, M. (2025). Building trust in the classroom: perspectives from students and teachers. Eur. J. Psychol. Educ. 40, 1–24. doi: 10.1007/s10212-025-00961-7

Hoy, W. K., and Tschannen-Moran, M. (1999). Five faces of trust: an empirical confirmation in urban elementary schools. J. Sch. Leadersh. 9, 184–208. doi: 10.1177/105268469900900301

Hsieh, H.-F., and Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qual. Health Res. 15, 1277–1288. doi: 10.1177/1049732305276687

Hu, L., and Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct. Equ. Modeling 6, 1–55. doi: 10.1080/10705519909540118

Jaasma, M. A., and Koper, R. J. (1999). The relationship of student-faculty out-of-class communication to instructor immediacy and trust and to student motivation. Commun. Educ. 48, 41–47. doi: 10.1080/03634529909379151

Jacklin, A., and Le Riche, P. (2009). Reconceptualising student support: from ‘support’ to ‘supportive’. Stud. High. Educ. 34, 735–749. doi: 10.1080/03075070802666807

Jarvenpaa, S. L., and Leidner, D. E. (1998). Communication and trust in global virtual teams. J. Comput.-Mediat. Commun. 3:JCMC346. doi: 10.1111/j.1083-6101.1998.tb00080.x

Jensen, J. L., Kummer, T. A., and Godoy, P. D. D. M. (2015). Improvements from a flipped classroom may simply be the fruits of active learning. CBE—Life Sci. Educ. 14:ar5. doi: 10.1187/cbe.14-08-0129

Johnson, D., and Grayson, K. (2005). Cognitive and affective trust in service relationships. J. Bus. Res. 58, 500–507. doi: 10.1016/S0148-2963(03)00140-1

Johnson, Z. D., and LaBelle, S. (2017). An examination of teacher authenticity in the college classroom. Commun. Educ. 66, 423–439. doi: 10.1080/03634523.2017.1324167

Kim, D., Woo, Y., Song, J., and Son, S. (2023). The relationship between faculty interactions, sense of belonging, and academic stress: a comparative study of the post-COVID-19 college life of Korean and international graduate students in South Korea. Front. Psychiatry 14:1169826. doi: 10.3389/fpsyt.2023.1169826

Kim, Y. K., and Sax, L. J. (2014). The effects of student–faculty interaction on academic self-concept: does academic major matter? Res. High. Educ. 55, 780–809. doi: 10.1007/s11162-014-9335-x

Kline, R. B. (2016). Principles and Practice of Structural Equation Modeling, 4th Edn. New York, NY: The Guilford Press.

Komarraju, M., Musulkin, S., and Bhattacharya, G. (2010). Role of student–faculty interactions in developing college students' academic self-concept, motivation, and achievement. J. Coll. Stud. Dev. 51, 332–342. doi: 10.1353/csd.0.0137

Kramer, R. M., and Cook, K. S. (Eds.). (2004). Trust and Distrust in Organizations: Dilemmas and Approaches. New York, NY: Russell Sage Foundation.

Kuh, G., and Hu, S. (2001). The effects of student-faculty interaction in the 1990s. Rev. High. Educ. 24, 309–332. doi: 10.1353/rhe.2001.0005

Kuh, G. D. (1995). The other curriculum: out-of-class experiences associated with student learning and personal development. J. Higher Educ. 66, 123–155. doi: 10.1080/00221546.1995.11774770

LaBelle, S., Johnson, Z. D., and Journeay, J. (2023). Teacher authenticity in the college classroom: communicative and behavioral expressions of authentic instruction. Commun. Educ. 72, 61–80. doi: 10.1080/03634523.2022.2142624

Lake, D. A. (2001). Student performance and perceptions of a lecture-based course compared with the same course utilizing group discussion. Phys. Ther. 81, 896–902. doi: 10.1093/ptj/81.3.896

Lamport, M. A. (1993). Student-faculty informal interaction and the effect on college outcomes: a review of the literature. Adolescence 28, 971–990.

Legood, A., van der Werff, L., Lee, A., den Hartog, D., and van Knippenberg, D. (2023). A critical review of the conceptualization, operationalization, and empirical literature on cognition-based and affect-based trust. J. Manage. Stud. 60, 495–537. doi: 10.1111/joms.12811

Lewicki, R. J., and Bunker, B. B. (1996). "Developing and maintaining trust in work relationships," in Trust in Organizations: Frontiers of Theory and Research, eds. R. M. Kramer, and T. R. Tyler (Thousand Oaks, CA: Sage). doi: 10.4135/9781452243610.n7

Lewis, J. D., and Weigert, A. (1985). Trust as a social reality. Soc. Forces 63, 967–985. doi: 10.2307/2578601

Lindskold, S., and Bennett, R. (1973). Attributing trust and conciliatory intent from coercive power capability. J. Pers. Soc. Psychol. 28:180. doi: 10.1037/h0035734

Liu, W. (2021). Does teacher immediacy affect students? A systematic review of the association between teacher verbal and non-verbal immediacy and student motivation. Front. Psychol. 12:713978. doi: 10.3389/fpsyg.2021.713978

Massey, G. R., Wang, P. Z., and Kyngdon, A. S. (2019). Conceptualizing and modeling interpersonal trust in exchange relationships: The effects of incomplete model specification. Ind. Mark. Manag. 76, 60–71. doi: 10.1016/j.indmarman.2018.06.012

Mattanah, J., Holt, L., Feinn, R., Bowley, O., Marszalek, K., Albert, E., et al. (2024). Faculty-student rapport, student engagement, and approaches to collegiate learning: exploring a mediational model. Curr. Psychol. 43, 23505–23516. doi: 10.1007/s12144-024-06096-0

Mayer, R. C., Davis, J. H., and Schoorman, F. D. (1995). An integrative model of organizational trust. Acad. Manag. Rev. 20, 709–734. doi: 10.2307/258792

Mayhew, M. J., Rockenbach, A. N., Bowman, N. A., Seifert, T. A., and Wolniak, G. C. (2016). How College Affects Students: 21st Century Evidence that Higher Education Works, Volume 3. San Francisco, CA: John Wiley and Sons.

McAllister, D. J. (1995). Affect- and cognition-based trust as foundations for interpersonal cooperation in organizations. Acad. Manag. J. 38, 24–59. doi: 10.2307/256727

McClain, S., and Cokley, K. (2017). Academic disidentification in Black college students: the role of teacher trust and gender. Cult. Divers. Ethnic Minor. Psychol. 23, 125–133. doi: 10.1037/cdp0000094

McEvily, B., and Tortoriello, M. (2011). Measuring trust in organisational research: review and recommendations. J. Trust Res. 1, 23–63. doi: 10.1080/21515581.2011.552424

McLure, F. I., Fraser, B. J., and Koul, R. B. (2022). Structural relationships between classroom emotional climate, teacher–student interpersonal relationships and students' attitudes to STEM. Soc. Psychol. Educ. 25, 625–648. doi: 10.1007/s11218-022-09694-7

Meinking, K., and Hall, E. E. (2024). Enhancing trust and embracing vulnerability in the college classroom: a reflection on ungrading and co-creation in teaching and learning. Teach. Learn. Inq. 12, 1–15. doi: 10.20343/teachlearninqu.12.29

Micari, M., and Pazos, P. (2012). Connecting to the professor: impact of the student–faculty relationship in a highly challenging course. Coll. Teach. 60, 41–47. doi: 10.1080/87567555.2011.627576

Milem, J. F., and Berger, J. B. (1997). A modified model of college student persistence: exploring the relationship between Astin's theory of involvement and Tinto's theory of student departure. J. Coll. Stud. Dev. 38, 387–400.

Minhas, P. S., Ghosh, A., and Swanzy, L. (2012). The effects of passive and active learning on student preference and performance in an undergraduate basic science course. Anat. Sci. Educ. 5, 200–207. doi: 10.1002/ase.1274

Moorman, C., Deshpande, R., and Zaltman, G. (1993). Factors affecting trust in market research relationships. J. Mark. 57, 81–101. doi: 10.1177/002224299305700106

Morgan, R. M., and Hunt, S. D. (1994). The commitment-trust theory of relationship marketing. J. Mark. 58, 20–38. doi: 10.1177/002224299405800302

National Science Board, National Science Foundation (2020). Science and Engineering Indicators 2020: The State of U.S. Science and Engineering. NSB-2020-1. Alexandria, VA.

Nguyen, K. A., Borrego, M., Finelli, C. J., DeMonbrun, M., Crockett, C., Tharayil, S., et al. (2021). Instructor strategies to aid implementation of active learning: a systematic literature review. Int. J. STEM Educ. 8:9. doi: 10.1186/s40594-021-00270-7

Nguyen, K. A., Borrego, M., Finelli, C. J., Shekhar, P., DeMonbrun, M., Henderson, C., et al. (2016). "Measuring Student Response to Instructional Practices (StRIP) in Traditional and Active Classrooms," in 2016 ASEE Annual Conference and Exposition (New Orleans, LA).

Niedlich, S., Kallfaß, A., Pohle, S., and Bormann, I. (2021). A comprehensive view of trust in education: Conclusions from a systematic literature review. Rev. Educ. 9, 124–158. doi: 10.1002/rev3.3239

Nielsen, M. F., and Nielsen, A. M. R. (2023). Revisiting Trustworthiness in Social Interaction. London: Routledge. doi: 10.4324/9781003280903

Nora, A., Cabrera, A., Serra Hagedorn, L., and Pascarella, E. (1996). Differential impacts of academic and social experiences on college-related behavioral outcomes across different ethnic and gender groups at four-year institutions. Res. High. Educ. 37, 427–451. doi: 10.1007/BF01730109

Paquin, V., Miconi, D., Aversa, S., Johnson-Lafleur, J., Côté, S., Geoffroy, M.-C., et al. (2025). Social and mental health pathways to institutional trust: a cohort study. Soc. Sci. Med. 379:118199. doi: 10.1016/j.socscimed.2025.118199

Pascarella, E. T., and Terenzini, P. T. (2005). How College Affects Students, Vol. 2: A Third Decade of Research. San Francisco, CA: Jossey-Bass.

Pascarella, E. T., and Terenzini, P. T. (1979). Student-faculty informal contact and college persistence: a further investigation. J. Educ. Res. 72, 214–218. doi: 10.1080/00220671.1979.10885157

Patrick, L. E. (2020). “Faculty and student perceptions of active learning,” in Active Learning in College Science, eds. J. J. Mintzes, and E. M. Walter (Cham: Springer), 889–907. doi: 10.1007/978-3-030-33600-4_55

Payne, A. L., Stone, C., and Bennett, R. (2022). Conceptualising and building trust to enhance the engagement and achievement of under-served students. J. Contin. High. Educ. 71, 1–18. doi: 10.32388/Q6VAOK

Pedersen, D. E., Kubátová, A., and Simmons, R. B. (2022). Authenticity and psychological safety: building and encouraging talent among underrepresented students in STEM. Teach. Learn. Inq. 10. doi: 10.20343/teachlearninqu.10.31

Pike, G. R., Schroeder, C. S., and Berry, T. R. (1997). Enhancing the educational impact of residence halls: the relationship between residential learning communities and first-year college experiences and persistence. J. Coll. Stud. Dev. 38:609.

President's Council of Advisors on Science and Technology (2012). Engage to Excel: Producing One Million Additional College Graduates with Degrees in Science, Technology, Engineering, and Mathematics. Washington, DC: Executive Office of the President.

PytlikZillig, L. M., and Kimbrough, C. D. (2016). “Consensus on conceptualizations and definitions of trust: are we there yet?” in Interdisciplinary Perspectives on Trust, eds. E. Shockley, T. Neal, L. PytlikZillig, and B. Bornstein (Cham: Springer International Publishing), 17–47. doi: 10.1007/978-3-319-22261-5_2

Ream, R. K., Lewis, J. L., Echeverria, B., and Page, R. N. (2014). Trust matters: distinction and diversity in undergraduate science education. Teach. Coll. Rec. 116, 1–50. doi: 10.1177/016146811411600408

Reeves, P. M., Cavanagh, A. J., Bauer, M., Wang, C., and Graham, M. J. (2023). Cumulative cross course exposure to evidence-based teaching is related to increases in STEM student buy-in and intent to persist. Coll. Teach. 71, 66–74. doi: 10.1080/87567555.2021.1991261

Reindl, M., Auer, T., and Gniewosz, B. (2022). Social integration in higher education and development of intrinsic motivation: a latent transition analysis. Front. Psychol. 13:877072. doi: 10.3389/fpsyg.2022.877072

Rempel, J. K., Holmes, J. G., and Zanna, M. P. (1985). Trust in close relationships. J. Pers. Soc. Psychol. 49:95. doi: 10.1037/0022-3514.49.1.95

Renn, O., and Levine, D. (1991). “Credibility and trust in risk communication,” in Communicating Risks to the Public. Technology, Risk, and Society, Vol 4, eds. R. E. Kasperson, and P. J. M. Stallen (Dordrecht: Springer), 175–217. doi: 10.1007/978-94-009-1952-5_10

Robinson, C. D., Scott, W., and Gottfried, M. A. (2019). Taking it to the next level: a field experiment to improve instructor-student relationships in college. AERA Open 5. doi: 10.1177/2332858419839707

Robinson, S. L. (1996). Trust and breach of the psychological contract. Adm. Sci. Q. 41, 574–599. doi: 10.2307/2393868

Roorda, D. L., Koomen, H. M., Spilt, J. L., and Oort, F. J. (2011). The influence of affective teacher–student relationships on students' school engagement and achievement: a meta-analytic approach. Rev. Educ. Res. 81, 493–529. doi: 10.3102/0034654311421793

Rousseau, D. M., Sitkin, S. B., Burt, R. S., and Camerer, C. (1998). Introduction to special topic forum: not so different after all: a cross-discipline view of trust. Acad. Manag. Rev. 23, 393–404. doi: 10.5465/amr.1998.926617

Sampaio, C. H., Perin, M. G., Simões, C., and Kleinowski, H. (2012). Students' trust, value and loyalty: evidence from higher education in Brazil. J. Mark. High. Educ. 22, 83–100. doi: 10.1080/08841241.2012.705796

Schertzer, C. B., and Schertzer, S. M. (2004). Student satisfaction and retention: a conceptual model. J. Mark. High. Educ. 14, 79–91. doi: 10.1300/J050v14n01_05

Schindler, P. L., and Thomas, C. C. (1993). The structure of interpersonal trust in the workplace. Psychol. Rep. 73, 563–573. doi: 10.2466/pr0.1993.73.2.563

Schudde, L. (2019). Short- and long-term impacts of engagement experiences with faculty and peers at community colleges. Rev. High. Educ. 42, 385–426. doi: 10.1353/rhe.2019.0001

Schussler, E. E., Weatherton, M., Chen Musgrove, M. M., Brigati, J. R., and England, B. J. (2021). Student perceptions of instructor supportiveness: what characteristics make a difference? CBE—Life Sci. Educ. 20:ar29. doi: 10.1187/cbe.20-10-0238

Seidel, S. B., and Tanner, K. D. (2013). “What if students revolt?”—considering student resistance: origins, options, and opportunities for investigation. CBE—Life Sci. Educ. 12, 586–595. doi: 10.1187/cbe-13-09-0190

Snijders, I., Wijnia, L., Rikers, R. M., and Loyens, S. M. (2020). Building bridges in higher education: student-faculty relationship quality, student engagement, and student loyalty. Int. J. Educ. Res. 100:101538. doi: 10.1016/j.ijer.2020.101538

Stains, M., Harshman, J., Barker, M. K., Chasteen, S. V., Cole, R., DeChenne-Peters, S. E., et al. (2018). Anatomy of STEM teaching in North American universities. Science 359, 1468–1470. doi: 10.1126/science.aap8892

Tabachnick, B. G., and Fidell, L. S. (2001). Using Multivariate Statistics. Boston, MA: Allyn and Bacon.

Theobald, E. J., Hill, M. J., Tran, E., Agrawal, S., Arroyo, E. N., Behling, S., et al. (2020). Active learning narrows achievement gaps for underrepresented students in undergraduate science, technology, engineering, and math. Proc. Nat. Acad. Sci. 117, 6476–6483. doi: 10.1073/pnas.1916903117

Thompson, J. J., Conaway, E., and Dolan, E. L. (2016). Undergraduate students' development of social, cultural, and human capital in a networked research experience. Cult. Stud. Sci. Educ. 11, 959–990. doi: 10.1007/s11422-014-9628-6

Tierney, W. G. (2006). The grammar of trust. Counterpoints 308, 41–57.

Tinto, V. (2015). Through the eyes of students. J. Coll. Stud. Retent. Res. Theory Pract. 19, 254–269. doi: 10.1177/1521025115621917

Tom, G., Tong, S. T., and Hesse, C. (2010). Thick slice and thin slice teaching evaluations. Soc. Psychol. Educ. 13, 129–136. doi: 10.1007/s11218-009-9101-7

Trinidad, J. R., Kurisu, S.-L. S., and Moore, B. J. (2024). Student–faculty interaction and academic self-concept: gender as a moderator. Teach. Coll. Rec. 126, 96–106. doi: 10.1177/01614681241292133

Tschannen-Moran, M., and Hoy, W. K. (1998). A conceptual and empirical analysis of trust in schools. J. Educ. Adm. 36, 334–352. doi: 10.1108/09578239810211518

Tschannen-Moran, M., and Hoy, W. K. (2000). A multidisciplinary analysis of the nature, meaning, and measurement of trust. Rev. Educ. Res. 70, 547–593. doi: 10.3102/00346543070004547

Tucker, L. R., and Lewis, C. (1973). A reliability coefficient for maximum likelihood factor analysis. Psychometrika 38, 1–10. doi: 10.1007/BF02291170

Umbach, P. D., and Wawrzynski, M. R. (2005). Faculty do matter: the role of college faculty in student learning and engagement. Res. High. Educ. 46, 153–184. doi: 10.1007/s11162-004-1598-1

US Department of Education, Institute of Education Sciences, National Center for Education Statistics (2018). National Assessment of Educational Progress (NAEP): 2018 Technology and Engineering Literacy Assessment. Washington, DC.

US Department of Education, National Center for Education Statistics (2024). Highlights of the 2023 U.S. PIAAC Results Web Report (NCES 2024-202). Washington, DC.

Valenzuela, J. H. (2025). Dynamics of Faculty-Student Interaction Beyond the Classroom: Insights From a Hispanic-Serving Community College (Doctoral dissertation). The University of Texas at El Paso, El Paso, TX.

Vogt, C., Hocevar, D., and Hagedorn, L. (2007). A social cognitive construct validation: determining women's and men's success in engineering programs. J. High. Educ. 78, 337–364. doi: 10.1080/00221546.2007.11772319

Walker, J. D., Cotner, S. H., Baepler, P. M., and Decker, M. D. (2008). A delicate balance: Integrating active learning into a large lecture course. CBE—Life Sci. Educ. 7, 361–367. doi: 10.1187/cbe.08-02-0004

Wang, C., Cavanagh, A. J., Bauer, M., Reeves, P. M., Gill, J. C., Chen, X., et al. (2021). A framework of college student buy-in to evidence-based teaching practices in STEM: the roles of trust and growth mindset. CBE—Life Sci. Educ. 20:ar54. doi: 10.1187/cbe.20-08-0185

Wang, M. T., and Degol, J. (2013). Motivational pathways to STEM career choices: using expectancy–value perspective to understand individual and gender differences in STEM fields. Dev. Rev. 33, 304–340. doi: 10.1016/j.dr.2013.08.001

Wentzel, K. R. (2016). “Teacher–student relationships,” in Handbook of Motivation at School, eds. K. R. Wentzel, and D. B. Miele (New York, NY: Routledge), 211–230. doi: 10.4324/9781315773384

Whipple, J. M., Griffis, S. E., and Daugherty, P. J. (2013). Conceptualizations of trust: can we trust them? J. Bus. Logist. 34, 117–130. doi: 10.1111/jbl.12014

Wieman, C. E. (2014). Large-scale comparison of science teaching methods sends clear message. Proc. Natl. Acad. Sci. 111, 8319–8320. doi: 10.1073/pnas.1407304111

Wilcox, P., Winn, S., and Fyvie-Gauld, M. (2005). ‘It was nothing to do with the university, it was just the people’: the role of social support in the first-year experience of higher education. Stud. High. Educ. 30, 707–722. doi: 10.1080/03075070500340036

Wilson, C. E., and Davis, M. (2020). Transforming the student-professor relationship: a multiphase research partnership. Int. J. Stud. Partners 4, 155–161. doi: 10.15173/ijsap.v4i1.3913

Wilson, R. C., Woods, L., and Gaff, J. G. (1974). Social-psychological accessibility and faculty-student interaction beyond the classroom. Sociol. Educ. 47, 74–92. doi: 10.2307/2112167

Wong, W. H., and Chapman, E. (2023). Student satisfaction and interaction in higher education. High. Educ. 85, 957–978. doi: 10.1007/s10734-022-00874-0

Zand, D. E. (1972). Trust and managerial problem solving. Adm. Sci. Q. 17, 229–239. doi: 10.2307/2393957

Zhao, S., and You, L. (2023). Exploring the impact of student-faculty partnership on engagement, performance, belongingness, and satisfaction in higher education. Educ. Adm. Theory Pract. 30:980. doi: 10.52152/kuey.v30i2.980

Zhou, D., Liu, S., Zhou, H., Liu, J., and Ma, Y. (2023). The association among teacher-student relationship, subjective well-being, and academic achievement: evidence from Chinese fourth graders and eighth graders. Front. Psychol. 14:1097094. doi: 10.3389/fpsyg.2023.1097094

Keywords: instructor trust, undergraduate STEM education, student-instructor relationship, trust dimensions, cognitive trust, affective trust, relational trust

Citation: Zhang K, Gill JC, Zhang T, Crowley L, Bennie J, Wagner H, Bauer M, Hanauer D, Chen X and Graham MJ (2025) Investigating dimensions of instructor trust using the words of undergraduate STEM students. Front. Educ. 10:1617067. doi: 10.3389/feduc.2025.1617067

Received: 23 April 2025; Accepted: 23 June 2025;
Published: 14 July 2025.

Edited by:

Ozden Sengul, Boğaziçi University, Türkiye

Reviewed by:

Isabel Pinho, University of Aveiro, Portugal
Eric Hall, Elon University, United States

Copyright © 2025 Zhang, Gill, Zhang, Crowley, Bennie, Wagner, Bauer, Hanauer, Chen and Graham. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Mark J. Graham, mark.graham@yale.edu

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.