
ORIGINAL RESEARCH article

Front. Psychol., 25 November 2021
Sec. Quantitative Psychology and Measurement
This article is part of the Research Topic Measurement in Health Psychology.

Review of the Internal Structure, Psychometric Properties, and Measurement Invariance of the Work-Related Rumination Scale – Spanish Version

  • 1Clinical Psychology Programs, School of Behavioral and Brain Sciences, Ponce Health Sciences University, Ponce, PR, United States
  • 2Ponce Research Institute, Ponce Health Sciences University, Ponce, PR, United States
  • 3Psychology Program, Department of Social Sciences, University of Puerto Rico, Cayey, PR, United States
  • 4Psychology Research Institute, School of Psychology, University of San Martín de Porres, Lima, Peru

Background: The aim of the current study was to examine the internal structure and assess the psychometric properties of the Work-Related Rumination Scale (WRRS) – Spanish version in a Puerto Rican sample of workers. The instrument is a 15-item questionnaire with three factors: affective rumination, problem-solving pondering, and detachment. The measure is used in occupational health psychology; however, there is little evidence on its psychometric properties.

Materials and Methods: A total of 4,100 participants, pooled from five different study samples, were analyzed in this cross-sectional design in which the WRRS had been administered. We conducted confirmatory factor analysis (CFA) and exploratory structural equation modeling (ESEM) to examine the internal structure of the WRRS, and measurement invariance across sex and age was examined.

Results: The three-factor model was supported; however, four items were eliminated due to their cross-loadings and factorial complexity. The resulting 11-item Spanish version of the WRRS was invariant across sex and age. Reliability coefficients for the three WRRS factors ranged from 0.74 to 0.87 using Cronbach’s alpha and McDonald’s omega. Correlations among the three factors, and with other established measures, were in the expected directions.

Conclusion: The results suggest that the WRRS – Spanish version is a reliable and valid instrument to measure work-related rumination through its three factors. Comparisons across sex and age appear justified in occupational health psychology research settings, since the results suggest that the WRRS is invariant across these variables.

Introduction

The link between exposure to work demands and the possible deterioration of employees’ health is an area of interest for occupational stress research (Pereira and Elfering, 2014). Work demands have been associated with a series of health complications such as cardiovascular diseases (Karasek et al., 1981; Rosario-Hernández et al., 2014), burnout (Brotheridge and Grandey, 2002), depression (Dormann and Zapf, 1999; Blackmore et al., 2007; Magnavita and Fileni, 2014; Rosario-Hernández et al., 2014), and psychosomatic symptoms (Pisanti et al., 2003; van der Doef et al., 2012; Rosario-Hernández et al., 2013).

On the other hand, impediments to recovering from work demands can impair employees’ health (Meijman and Mulder, 1998; Schwartz et al., 2003; Kivimäki et al., 2006; Zijlstra and Sonnentag, 2006; Fritz et al., 2010). Thus, the recovery process appears to be influenced by the extent to which people can disconnect from their work demands and from the thoughts related to them (Cropley et al., 2006; Rook and Zijlstra, 2006; Sonnentag and Zijlstra, 2006; Sonnentag et al., 2008). Recovery from work is therefore necessary for workers to avoid chronic stress (Safstrom and Harting, 2013), and rumination has been suggested as a mechanism that can compromise successful disconnection and recovery from work (Roger and Jamieson, 1988; Cropley et al., 2006). Cropley and Zijlstra (2011) describe work-related rumination as a set of repetitive thoughts directed at issues that revolve around work; in itself, it does not really matter whether people ruminate or think about work issues when not at work, and many people in fact do so because they find it rewarding and stimulating. However, Cropley and Zijlstra (2011) argue that rumination becomes a problem when it affects health and well-being. Thus, Cropley and Zijlstra suggest that people do not always worry or think negatively about work during their time off. Still, thinking about work is not compatible with switching off and therefore makes it difficult to recover from work. On the other hand, thinking and reflecting about work issues can also have beneficial effects and can be associated with positive outcomes.

Furthermore, Cropley and Zijlstra (2011) conceptualize work-related rumination as a construct with three factors, which they call affective rumination (AR), problem-solving pondering (PSP), and detachment (Det). AR is a cognitive state characterized by the appearance of intrusive, pervasive, and recurrent thoughts about work. These thoughts are negative in affective terms (Pravettoni et al., 2007) and, if not controlled, can become cognitively and emotionally intrusive during time off work. Cropley and Zijlstra point out that most studies of rumination at work have focused on its negative aspect, which implies that if people continue to think about their work when off, they remain with the “power button on,” which prevents them from recuperating during their off time. It is clear that this type of rumination negatively affects recovery when not at work; however, thinking about work during off time does not necessarily have negative implications, since it may also have a positive side. For example, some studies suggest that thinking about work when off might have a positive impact on innovation and creativity (e.g., Baas et al., 2008). The results obtained by Baas et al. suggest that people tend to be in a positive mood when the task at hand is experienced as pleasant and intrinsically rewarding. Similarly, PSP, according to Cropley and Zijlstra (2011), is a mode of thinking characterized by prolonged mental examination or appraisal of a past difficulty at work in order to find a solution. Finally, detachment, the third factor of work-related rumination, can be defined as a sense of being away from the work situation (e.g., Etzion et al., 1998). Cropley and Zijlstra (2011) note that some people manage to press the “off button” and can disconnect and forget about work.

Based on this conceptualization, Cropley et al. (2012) developed the Work-Related Rumination Scale (WRRS), which has been used in occupational health psychology research and has been translated into different languages to measure rumination about work in different studies. These translations include German (Syrek et al., 2017), Persian (Firoozabadi et al., 2018a), Turkish (Sulak Akyüz and Sulak, 2019), and, in Puerto Rico, Spanish (Rosario-Hernández et al., 2013). The confirmatory factor analysis (CFA) results obtained for these translations of the WRRS are similar to those reported by Cropley et al. (2012) and Querstret and Cropley (2012), as they also yielded a three-factor internal structure: AR, PSP, and detachment.

Brief Systematic Literature Review of the Work-Related Rumination Scale

A brief systematic review was conducted to establish the pattern of findings and methodological procedures used in studies of the psychometric properties in general, and the internal structure in particular, of the WRRS, as recommended by some authors in the literature (e.g., Grant and Booth, 2009). The following keywords were used: WRRS AND internal structure OR psychometric properties AND validity AND reliability OR measurement invariance. The review was done through the search engines of the EBSCO, ScienceDirect, Scopus, PubMed, and Google Scholar databases, using Boolean connectors, between November 2020 and May 2021. Our initial intention was to include only studies about the psychometric properties of the WRRS, but given that we found only one with at least some variety of validity evidence, it was decided to include studies that tested at least some psychometric property as part of the study, such as those that used structural equation modeling (SEM) as an analytical tool to test the measurement model and those that at least reported the reliability of the WRRS (see Table 1). Thus, we found only one study whose main research objective was to examine the psychometric properties of the WRRS (Sulak Akyüz and Sulak, 2019). That study examined the Turkish version of the WRRS, and its CFA results supported the three-factor model proposed by the WRRS’s authors using maximum likelihood estimation. The authors also reported reliability coefficients ranging from 0.73 to 0.79; measurement invariance apparently was not examined, as it was not reported.

TABLE 1

Table 1. Brief literature review of the work-related rumination scale.

Interestingly, of the 25 studies reviewed, only seven used the complete WRRS, including the original study in which the WRRS was developed (Cropley et al., 2012; Querstret and Cropley, 2012; Vandevala et al., 2017; Dunn and Sensky, 2018; Sulak Akyüz and Sulak, 2019; Weigelt et al., 2019a; Mullen et al., 2020); 11 used the affective rumination and problem-solving pondering subscales (Bisht, 2017; Kinnunen et al., 2017, 2019; Querstret et al., 2017; Syrek et al., 2017; Vahle-Hinz et al., 2017; Firoozabadi et al., 2018a,b; Junker et al., 2020; Zhang et al., 2020; Pauli and Lang, 2021); two studies used the problem-solving pondering and detachment subscales (Zoupanou et al., 2013; Mehmood and Hamstra, 2021); one used only the detachment subscale (Svetieva et al., 2017); and four studies used the affective rumination subscale (Querstret et al., 2016; Van Laethem et al., 2019; Weigelt et al., 2019b; Cropley and Collis, 2020; Smyth et al., 2020). Thus, the use of the WRRS subscales varies according to researchers’ needs and purposes, with affective rumination and problem-solving pondering being the most widely used subscales.

Regarding the factor-analytic methods used, one study employed exploratory factor analysis (EFA; Cropley et al., 2012), seven studies used CFA (Bisht, 2017; Syrek et al., 2017; Vahle-Hinz et al., 2017; Firoozabadi et al., 2018a; Kinnunen et al., 2019; Sulak Akyüz and Sulak, 2019; Weigelt et al., 2019a,b), and two of the studies did not report any such method (Querstret et al., 2016; Cropley and Collis, 2020). Of the seven studies that relied on CFA, two used the maximum likelihood (ML) estimator, two used robust maximum likelihood (MLR), one used diagonally weighted least squares (DWLS), and two did not report the estimator. Moreover, none of the studies examined the internal structure using exploratory structural equation modeling (ESEM), and none examined the measurement invariance of the WRRS. In addition, regarding the examination of internal consistency, all of the studies used Cronbach’s alpha, and only one (Junker et al., 2020) used McDonald’s omega, which is a better estimate of internal consistency (Crutzen and Peters, 2017; Flora, 2020).

Another point that stands out from the brief systematic review of the WRRS is that the studies that did not use SEM presumed that the WRRS was a valid instrument without examining this with their own sample. This is a questionable practice that is nonetheless widespread in psychological studies, as pointed out by some authors in the literature (Merino-Soto and Calderón-De la Cruz, 2018; Merino-Soto and Angulo-Ramos, 2020, 2021), who note that researchers are thereby inducing the validity of the instrument, a practice known as measurement validity induction.

Therefore, an attempt was made to push forward research on the internal structure of the WRRS by implementing the ESEM approach (Asparouhov and Muthén, 2009), a model not incorporated in previous studies of the dimensionality of the WRRS. ESEM is a reformulation of the modeling of item-construct relationships intended to solve CFA modeling problems, and it provides more information for deciding on the multidimensionality of a measure created to represent multidimensional constructs (Morin et al., 2015). ESEM was developed to subsume the exploratory approach within SEM and characteristically estimates the cross-loadings on all factors analyzed, not only on the factor hypothesized as the main causal influence on the items (Asparouhov and Muthén, 2009). The implementation of a traditional exploratory approach, as occurs in some studies with the WRRS, does not seem different from ESEM, because in exploratory models factor loadings are also estimated on all factors. However, the advantage of nesting exploratory modeling within SEM is that it yields fit measures and allows the examination of correlated residuals and other parameters not usually estimated in the exploratory approach (Asparouhov and Muthén, 2009; Mansolf and Reise, 2016). Estimates obtained through ESEM tend to show lower factor loadings and interfactor correlations (Asparouhov and Muthén, 2009; Mansolf and Reise, 2016), and the factorial solutions obtained by the ESEM approach are therefore considered more realistic (Asparouhov and Muthén, 2009). Given the consistent demonstration of the efficacy of ESEM in representing multidimensional constructs, the validation results for the internal structure of the WRRS in previous studies may present important biases in their parameters (i.e., factor loadings and latent correlations).

This assessment of WRRS dimensionality is necessary even when only a reliability coefficient (specifically, internal consistency) is being estimated for non-psychometric purposes, because proper estimation of reliability requires factor modeling (Crutzen and Peters, 2017; Flora, 2020). Studies that do not estimate reliability coefficients with their own data generally induce reliability from other studies (Vassar et al., 2008), but there is no guarantee that the induced value equals the one that would be obtained from their own data. On the other hand, measurement equivalence between groups is required to ensure that comparisons between groups with respect to statistics of interest, such as means, variances, and covariances between scores, are meaningful.

In the same way, other aspects are useful for examining the quality of the instrument, such as the consistency of responses to the individual items, especially when items must be selected for the construction or adaptation of measures (Zijlmans et al., 2019); these aspects are estimated within a reliability framework at the item level. Reliability is commonly estimated for the composite scores of the dimension formed by the items; however, item-level reliability is relevant for knowing the degree of reproducibility of the responses and has recently been valued as a quality measure for the choice of items (Zijlmans et al., 2019).

Research Purpose

The WRRS was translated into Spanish and has been used in several studies in occupational health psychology in Puerto Rico (Rosario-Hernández et al., 2013, 2015, 2018a,b, 2019, 2020); however, the psychometric properties of this Spanish version have not been examined systematically. Therefore, the purpose of the current study was to examine the internal structure, psychometric properties, and measurement invariance of the WRRS – Spanish version across sex and age.

Materials and Methods

A total of 4,100 protocols came from five different studies conducted by the authors (Rosario-Hernández et al., 2013, 2015, 2018a,b, 2019, 2020) in Puerto Rico, each recruited through non-probabilistic sampling and distributed into five groups: sample 1 (n = 518, 12.6%), sample 2 (n = 1,046, 25.5%), sample 3 (n = 1,107, 27.0%), sample 4 (n = 626, 15.3%), and sample 5 (n = 803, 19.6%). The distributional differences among the five samples in sex (χ2[5] = 13.29, p = 0.02, Cramer’s V = 0.053), level of education (Kruskal–Wallis H[5] = 52.56, p < 0.01, η2 = 0.01), and age (Kruskal–Wallis H[5] = 74.97, p < 0.01, η2 = 0.01), although statistically significant, had trivial effect sizes (η2 ≤ 0.02). Regarding job characteristics, type of employment (χ2[5] = 28.14, p < 0.01, Cramer’s V = 0.07), type of position (χ2[5] = 19.43, p < 0.01, Cramer’s V = 0.06), type of company (χ2[10] = 17.02, p < 0.01, Cramer’s V = 0.13), and years of work in the company (Kruskal–Wallis H[5] = 369.14, p < 0.01, η2 = 0.08) were not substantially different across the five samples.

The characteristics of the whole sample, such as gender and age, among others, are shown in Table 2. The sample was composed of 56.6% females, and the average level of education was 16.73 ± 2.04 years, which is equivalent to between a bachelor’s degree and one year of graduate studies.

TABLE 2

Table 2. Sociodemographic results of sample.

Measures

Work-Related Rumination Scale

The WRRS was developed by Cropley et al. (2012) and has 15 items rated on a 5-point Likert scale (1 = very seldom or never, 2 = seldom, 3 = sometimes, 4 = often, and 5 = very often or always). According to Cropley et al. (2012), factor-analytic results support a three-factor internal structure of the WRRS, comprising affective rumination, problem-solving pondering, and detachment; the authors reported Cronbach’s alpha reliabilities of 0.90, 0.81, and 0.88, respectively. An example item is: “Do you become tense when you think about work-related issues during your free time?”

Depression

To measure depression, we used the PHQ-9 developed by Kroenke et al. (2001). The PHQ-9 is a nine-item questionnaire used for the assessment of depressive symptoms in primary care settings. It evaluates the presence of depressive symptoms over the 2 weeks prior to completing the questionnaire. Each item is scored from 0 (not at all) to 3 (nearly every day). Its validity and reliability as a diagnostic measure, as well as its utility in assessing depression severity and monitoring treatment response, are well established (Kroenke et al., 2001; Löwe et al., 2004a,b, 2006). In the current study, the unidimensionality of the PHQ-9 was supported by a CFA using the robust maximum likelihood method, χ2 = 401.44 (20), CFI = 0.904, SRMR = 0.047, RMSEA = 0.093 [0.085; 0.101]; the reliability of the PHQ-9 using omega (ω) was 0.899 (95% CI = 0.889; 0.908). An example item is: “Little interest or pleasure in doing things?”

Anxiety

To measure anxiety, we used the GAD-7 (Spitzer et al., 2006). The GAD-7 is a seven-item questionnaire that measures general anxiety symptomatology and asks respondents how often, during the last 2 weeks, they were bothered by each symptom. Response options are “not at all,” “several days,” “more than half the days,” and “nearly every day,” scored as 0, 1, 2, and 3, respectively. In addition, an item to assess the duration of anxiety symptoms was included. The authors of the scale reported a Cronbach’s alpha coefficient of 0.93. In terms of construct validity, its internal structure was supported by factor analysis, and convergent validity was supported by its association with similar measures such as the Beck Anxiety Inventory and the anxiety subscale of the Symptom Checklist-90. In the current study, the unidimensionality of the GAD-7 was supported by a CFA using the robust maximum likelihood estimator, χ2 = 154.69 (14), CFI = 0.982, SRMR = 0.021, RMSEA = 0.058 [0.050; 0.066]; its reliability was calculated using omega (ω), which was 0.930 (95% CI = 0.925; 0.935). An example item is: “Feeling nervous, anxious, or on edge.”

Sleep Well-Being

We used the Sleep Well-Being Indicator developed by Rovira Millán and Rosario-Hernández (2018) to measure sleep well-being. This indicator is a twelve-item instrument with a Likert frequency response format ranging from 1 (Never) to 6 (Always). It has three subscales: sleep quantity (duration), sleep quality, and consequences related to sleep. The authors reported reliability through Cronbach’s alpha, ranging from 0.79 to 0.86, and factor analysis results supporting the three-dimensional internal structure. In the current study, we used only two subscales, sleep quantity/duration and sleep quality, so we examined a two-factor structure of the Sleep Well-Being Indicator using the robust maximum likelihood method, χ2 = 0.847 (1), CFI = 0.999, SRMR = 0.004, RMSEA = 0.028 [0.000; 0.090]; reliability using omega (ω) was 0.776 (95% CI = 0.749; 0.800) and 0.723 (95% CI = 0.687; 0.754) for the sleep quantity and sleep quality subscales, respectively. An example item is: “I had trouble falling asleep.”

Burnout

We used the Maslach Burnout Inventory – General Survey (MBI-GS; Maslach et al., 1996) to measure burnout. The MBI-GS uses a 7-point frequency scale (ranging from 0 = never to 6 = daily) on which respondents indicate the extent to which they experience each item. The emotional exhaustion and cynicism subscales have five items each, and the professional efficacy subscale has six items. In this study, we used the emotional exhaustion and cynicism subscales; therefore, we tested a two-dimensional model using the robust maximum likelihood method, χ2 = 454.43 (5), CFI = 0.921, SRMR = 0.042, RMSEA = 0.153 [0.141; 0.165]; reliability was estimated using omega (ω), which was 0.908 (95% CI = 0.902; 0.912) and 0.791 (95% CI = 0.779; 0.802) for the emotional exhaustion and cynicism subscales, respectively. An example item is: “I feel tired when I get up in the morning and have to face another day on the job.”

Workaholism

To measure workaholism, we used the Dutch Workaholism Scale (DUWAS; Schaufeli et al., 2009), translated into Spanish by del Líbano et al. (2010). The DUWAS is a 10-item scale with two dimensions of five items each: work excessively (e.g., “I seem to be in a hurry and racing against the clock”) and work compulsively (e.g., “It’s important for me to work hard even when I don’t enjoy what I’m doing”). The CFA results of del Líbano et al. (2010) support the two-dimensional internal structure. In the present study, a two-factor model was supported using the robust maximum likelihood method, χ2 = 1,736 (34), CFI = 0.917, SRMR = 0.063, RMSEA = 0.114 [0.109; 0.119]. Reliability was estimated using omega (ω) with its 95% confidence interval, which was 0.776 (95% CI = 0.749; 0.800) and 0.723 (95% CI = 0.687; 0.754) for the work excessively and work compulsively subscales, respectively. An example item is: “It’s important for me to work hard even when I don’t enjoy what I’m doing.”

Social Desirability

We used the Social Desirability Scale developed by Rosario-Hernández and Rovira Millán (2002). This is an 11-item instrument with a Likert agreement response format ranging from 1 (Totally Disagree) to 6 (Totally Agree), intended to measure a response bias in which people answer a test according to what is socially acceptable. The authors reported its internal consistency through Cronbach’s alpha as 0.86, an excellent reliability coefficient. Factor analysis results suggest that the Social Desirability Scale has a single-factor internal structure. As part of the current study, we examined the internal structure of the Social Desirability Scale using the robust maximum likelihood method, and the results support the one-factor structure reported by its authors, χ2 = 2,608.64 (44), CFI = 0.907, SRMR = 0.057, RMSEA = 0.115 [0.112; 0.119]; the ω reliability was 0.944 (95% CI = 0.941; 0.947). An example item is: “Most people have cheated on an exam, even if it was once in their lives.”

Procedures

This study was approved by the Institutional Review Board of Ponce Health Sciences University (Protocol #2006040219) on June 17, 2020. Participants in all samples were selected through a non-probabilistic convenience sampling method, and the inclusion criteria were being 21 years of age or older and working at least 20 h per week. Participants were excluded ante hoc if they did not agree to participate voluntarily, and post hoc, after data collection, if their scores on the WRRS were identified as outliers.

Cross-Validation Strategy

Instead of analyzing the entire sample in a single analysis, a cross-validation strategy was applied to assess the stability of the validity parameters in the sample. This strategy rested on several considerations. First, although the total sample would guarantee high statistical power and lower sampling error in the estimation of the parameters, the stability of the WRRS measurement model across the study samples could not then be tested empirically. Second, cross-validation indices based on a single sample, which quantify the expected degree of cross-validation, combine the information obtained from the estimation method or fit function with the sample size and the number of parameters (for example, AIC, BIC, ECVI; Browne and Cudeck, 1989; Whittaker and Stapleton, 2006), but they lack a direct contrast against another sample in which the model can be fitted and its replicability evaluated. Third, in evaluating the stability of the model using k samples drawn from the total sample, cross-validation indices summarily report the discrepancy between the restricted variance-covariance matrix of the calibration sample and the unconstrained variance-covariance matrix of the validation sample (Cudeck and Browne, 1983), but they do not indicate the specific sources of the discrepancies, for example, differences between the factor loadings of the compared samples (Byrne, 2012, p. 261). Therefore, the approach of Byrne (2012, p. 261) was followed, in which the naturally independent samples of the present study were compared within the framework of measurement invariance and, on this basis, the degree of replicability of the WRRS measurement model was assessed. Accordingly, the measurement model of three oblique factors was evaluated in each subsample with respect to its dimensionality and its measurement invariance. With these two criteria met, the analysis continued toward modeling in the total sample.

Detection of Response Biases

Multivariate outliers were detected in the responses to all WRRS items using the squared Mahalanobis distance (D2), an efficient and sensitive measure for outliers derived from random responding (Zijlstra et al., 2011). The cut-off point for D2 was 3.57 (df = 15). The procedure was complemented with a search for the longest strings of identical responses (long-string; Curran, 2016), using the cut-off suggested by Curran (2016): a number of consecutive repeated responses greater than or equal to half the number of items (nRR ≥ k/2). The R package careless was used (Yentes and Wilhelm, 2018).
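
As an illustration, the screening described above could be run roughly as follows with the careless package. This is a hedged sketch, not the authors’ script: wrrs_items is a hypothetical data frame containing only the 15 WRRS item responses, and the cut-offs simply restate the ones given in the text.

```r
library(careless)

# Mahalanobis distance (D2) per respondent and longest run of identical answers
d2 <- mahad(wrrs_items, plot = FALSE, flag = FALSE)
ls <- longstring(wrrs_items)

k <- ncol(wrrs_items)              # 15 items
flag_d2 <- d2 > 3.57               # multivariate outliers (cut-off from the text)
flag_ls <- ls >= k / 2             # long-string rule: nRR >= k/2 (Curran, 2016)

# Effective sample after removing flagged respondents
wrrs_clean <- wrrs_items[!(flag_d2 | flag_ls), ]
table(flag_d2, flag_ls)
```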

Item Analysis

Descriptive statistics (central tendency, dispersion, and distribution) and associations with sex (Glass rank biserial correlation coefficient; Mangiafico, 2021) and age (ordinal eta squared; Mangiafico, 2021) were reported at the item level. The R packages MVN (Korkmaz et al., 2014) and rcompanion (Mangiafico, 2021) were used.
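
A minimal base-R sketch of the two item-level effect sizes named above is given below. It assumes item is a single WRRS item, sex a two-level factor, and age_group a grouping factor (all placeholder names), and it uses textbook formulas rather than the rcompanion functions actually employed.

```r
# Glass rank biserial correlation for a two-group comparison:
# rg = 2 * (mean rank of group 1 - mean rank of group 2) / n
glass_rg <- function(item, group) {
  r <- rank(item)
  g <- levels(factor(group))
  2 * (mean(r[group == g[1]]) - mean(r[group == g[2]])) / length(item)
}

# Rank-based (ordinal) eta squared from a Kruskal-Wallis test:
# eta2_H = (H - k + 1) / (n - k), with k groups and n observations
ordinal_eta2 <- function(item, group) {
  grp <- factor(group)
  H <- unname(kruskal.test(item, grp)$statistic)
  (H - nlevels(grp) + 1) / (length(item) - nlevels(grp))
}

# Hypothetical usage:
# glass_rg(dat$wrrs_item1, dat$sex)
# ordinal_eta2(dat$wrrs_item1, dat$age_group)
```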

Internal Structure

The internal structure was evaluated through confirmatory factor analysis (CFA-SEM) and exploratory structural equation modeling (ESEM) in order to evaluate several measurement models of the WRRS. First, the model established by the instrument’s authors, consisting of three related dimensions (3F), was tested. The second model was unidimensional, representing the use of the total score and the complete absence of discriminant validity between the dimensions, and a third, two-factor model was also tested. This third model was justified because some studies refer to a unified score for two dimensions, AR and PSP (e.g., Cropley et al., 2016, 2017; Weigelt et al., 2019a,b; Cropley and Collis, 2020). ESEM was implemented with oblique geomin target rotation (Mansolf and Reise, 2016).

In all WRRS modeling, the estimator used was WLSMV (Muthén et al., 1997), due to its effectiveness (Li, 2016), with inter-item polychoric correlations. Model fit was evaluated with approximate fit indices (AFI): CFI (≥0.95), RMSEA (≤0.05), SRMR (≤0.05), and WRMR (≤0.90; Yu, 2002). Misspecifications in the models were detected with the approach of Saris et al. (2009), considering statistical power and the size of the misspecification. Additionally, because the ESEM method estimates cross-loadings, the degree of factorial complexity can be observed. For this purpose, the Hofmann coefficient (Choff; Hofmann, 1977, 1978) was used; Choff values at, or near, 1.0 indicate that an item essentially loads on a single factor, whereas higher values indicate that it loads appreciably on more than one factor (i.e., factorial complexity; Pettersson and Turkheimer, 2010). The modeling was carried out with the lavaan (Rosseel, 2012), semTools (Jorgensen et al., 2021), and EFA.dimensions (O’Connor, 2021) R packages.
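
The following sketch illustrates how the three-factor CFA and its ESEM counterpart could be specified in lavaan with the WLSMV estimator. It is an illustration under assumptions, not the authors’ code: it requires a recent lavaan (0.6-13 or later, which supports efa() blocks and the rotation argument); dat is a hypothetical data frame containing the items (plus, for the invariance step below, a sex variable); and the item names ar1–ar5, psp1–psp5, det1–det5 are placeholders rather than the real WRRS item-to-factor assignment.

```r
library(lavaan)

items_ar   <- paste0("ar",  1:5)
items_psp  <- paste0("psp", 1:5)
items_det  <- paste0("det", 1:5)
item_names <- c(items_ar, items_psp, items_det)

# Three-factor CFA: cross-loadings fixed to zero
cfa_model <- sprintf("AR =~ %s\nPSP =~ %s\nDET =~ %s",
                     paste(items_ar,  collapse = " + "),
                     paste(items_psp, collapse = " + "),
                     paste(items_det, collapse = " + "))
fit_cfa <- cfa(cfa_model, data = dat, estimator = "WLSMV",
               ordered = item_names)   # polychoric correlations for ordinal items

# Three-factor ESEM: all items load on all factors within one efa() block
esem_model <- sprintf(
  'efa("block1")*AR + efa("block1")*PSP + efa("block1")*DET =~ %s',
  paste(item_names, collapse = " + "))
fit_esem <- sem(esem_model, data = dat, estimator = "WLSMV",
                ordered = item_names, rotation = "geomin")

fitMeasures(fit_cfa,  c("cfi.scaled", "rmsea.scaled", "srmr", "wrmr"))
fitMeasures(fit_esem, c("cfi.scaled", "rmsea.scaled", "srmr", "wrmr"))
```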

Measurement invariance was assessed with a bottom-up approach, from an unrestricted model to a model with strong restrictions (Stark et al., 2006). Thus, we tested an unrestricted model of equality (configural invariance) and continued with successive restrictions applied to factor loadings and thresholds (metric invariance) and to intercepts (scalar invariance). Taking into account the sample size (>300; Chen, 2007), the invariance criteria were ΔCFI < 0.010, ΔSRMR < 0.030, and ΔRMSEA < 0.015 (Chen, 2007).
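
A hedged sketch of this bottom-up sequence for the sex comparison is shown below, reusing the hypothetical cfa_model, dat, and item_names objects from the previous block. With ordinal indicators, a more rigorous parameterization can be generated with semTools::measEq.syntax(), so this is only an outline of the idea.

```r
# Configural, metric (loadings + thresholds), and scalar (+ intercepts) models
fit_config <- cfa(cfa_model, data = dat, group = "sex",
                  estimator = "WLSMV", ordered = item_names)
fit_metric <- cfa(cfa_model, data = dat, group = "sex",
                  estimator = "WLSMV", ordered = item_names,
                  group.equal = c("loadings", "thresholds"))
fit_scalar <- cfa(cfa_model, data = dat, group = "sex",
                  estimator = "WLSMV", ordered = item_names,
                  group.equal = c("loadings", "thresholds", "intercepts"))

idx  <- c("cfi.scaled", "rmsea.scaled", "srmr")
fits <- sapply(list(configural = fit_config, metric = fit_metric,
                    scalar = fit_scalar), fitMeasures, fit.measures = idx)
round(fits[, -1] - fits[, -ncol(fits)], 3)   # deltas checked against Chen (2007)
```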

Reliability Analysis

Reliability was estimated with the coefficient ω (Green and Yang, 2009), using the method for categorical variables (Yang and Green, 2015); since the α coefficient was usually reported in previous studies, this coefficient was also estimated for comparison purposes. The 95% confidence intervals were generated with bootstrap simulation (500 simulated samples). The precision in the raw-score metric was estimated using the standard error of measurement (SEMrxx), which should optimally be less than 0.5 SD for the measurement error around the observed scores to remain tolerable (Wyrwich et al., 1999; Wyrwich, 2004). SEMrxx was calculated with the R package psychometric (Fletcher, 2010).
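
As an illustration, α and categorical ω can be obtained from the fitted CFA with semTools, and SEMrxx follows directly from the classical formula SEM = SD·√(1 − rxx). This is a sketch under assumptions: it reuses the hypothetical fit_cfa object above, the bootstrap confidence intervals are omitted, and the SD value is borrowed from the Results section purely for illustration.

```r
library(semTools)

rel <- reliability(fit_cfa)   # matrix with rows such as "alpha" and "omega", one column per factor
rel

# Standard error of measurement on the raw-score metric: SEM_rxx = SD * sqrt(1 - rxx),
# which should ideally stay below 0.5 * SD (Wyrwich et al., 1999)
sem_rxx <- function(sd_score, rxx) sd_score * sqrt(1 - rxx)
sem_rxx(sd_score = 4.147, rxx = rel["omega", "AR"])   # AR standard deviation reported in the Results
```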

At the item level, reliability (rii) was estimated, conceptualized as the degree of response replicability in two independent administrations of the item to the same participants (Zijlmans et al., 2018b, p. 999). Due to its efficacy, the classical test theory approach was used, based on the alpha coefficient as a lower bound of reliability and on the square of the item-test relationship (Zijlmans et al., 2018b). Based on analyses of empirical data (Zijlmans et al., 2018a), a heuristic value of rii ≥ 0.30 is recommended as the acceptable minimum. An ad hoc program was used (Zijlmans et al., 2018b).
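
The rough base-R sketch below shows one possible reading of this classical-test-theory item-reliability idea: squaring the item-rest correlation and correcting it with a lower-bound (alpha) estimate of the rest score’s reliability. It is an interpretation for illustration only, not the ad hoc program of Zijlmans et al. (2018b), and the helper name is hypothetical.

```r
# Approximate item-score reliability: r_ii ~ r(item, rest)^2 / alpha(rest)
item_reliability <- function(items) {
  k <- ncol(items)
  sapply(seq_len(k), function(i) {
    rest   <- rowSums(items[, -i, drop = FALSE])
    r_ir   <- cor(items[, i], rest)
    cv     <- cov(items[, -i, drop = FALSE])
    a_rest <- ((k - 1) / (k - 2)) * (1 - sum(diag(cv)) / sum(cv))  # Cronbach's alpha of the rest items
    r_ir^2 / a_rest
  })
}
# item_reliability(dat[, items_ar])   # hypothetical AR item block
```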

Convergent and Divergent Validity

To establish the convergent and divergent validity of the WRRS, we conducted a multiple correlation analysis using observed scores via the Pearson product-moment correlation coefficient. The criterion had two steps: first, statistical significance set at p < 0.01; and second, the direction of the correlations obtained (i.e., positive or negative). We hypothesized that AR and PSP would correlate significantly and positively with depression, anxiety, emotional exhaustion, cynicism, and workaholism, and significantly and negatively with sleep duration and sleep quality. In terms of the relationship with social desirability, we expected negative and small correlation coefficients. Meanwhile, we expected detachment to correlate significantly and negatively with depression, anxiety, emotional exhaustion, cynicism, and workaholism, and significantly and positively with sleep duration and sleep quality. Regarding social desirability, we again expected a negative and small correlation coefficient.
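
The correlational step itself is straightforward; a minimal sketch is shown below, assuming a hypothetical data frame scores holding composite scores for the three WRRS factors and the other measures (all variable names are placeholders).

```r
vars <- c("AR", "PSP", "DET", "phq9", "gad7", "exhaustion", "cynicism",
          "workaholism", "sleep_duration", "sleep_quality", "social_desirability")
r <- cor(scores[, vars], use = "pairwise.complete.obs", method = "pearson")
round(r, 2)   # signs inspected against the hypothesized directions at p < 0.01
```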

Descriptive Statistics and Normative Data of the Work-Related Rumination Scale

Descriptive statistics were estimated for the WRRS, including the mean, standard deviation, standard error of measurement, the possible range of scores for each factor, and 95% confidence intervals. Normative data were produced to help interpret scores on the three factors of the WRRS.
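
A short sketch of this descriptive step is given below; it assumes the same hypothetical scores data frame as above, and the percentile bands are only one generic way to build interpretive guidelines, not the cut-offs actually reported in Table 9.

```r
factors <- c("AR", "PSP", "DET")
desc <- t(sapply(scores[factors], function(x) {
  n <- sum(!is.na(x))
  m <- mean(x, na.rm = TRUE); s <- sd(x, na.rm = TRUE)
  c(mean = m, sd = s,
    ci_low = m - 1.96 * s / sqrt(n), ci_high = m + 1.96 * s / sqrt(n))
}))
round(desc, 2)

# Percentile-based bands as a generic basis for interpretive guidelines
sapply(scores[factors], quantile, probs = c(0.25, 0.50, 0.75, 0.90), na.rm = TRUE)
```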

Results

Detection of Response Biases

In each of the independent samples, no more than 100 multivariate outliers were found, with a median of roughly 50 participants per sample (sample 1 = 27, sample 2 = 76, sample 3 = 72, sample 4 = 52, sample 5 = 48) exceeding D2 = 3.57; altogether, 291 outliers were identified. Regarding the longest sequences of identical responses across the 15 items, according to Curran’s (2016) rule, a median of 42 participants with such responses was found across the five samples (sample 1 = 15, sample 2 = 50, sample 3 = 80, sample 4 = 81, sample 5 = 30). After applying both criteria, the final effective sample for the subsequent analyses was 3,576: sample 1 (n = 476, 13.3%), sample 2 (n = 921, 25.8%), sample 3 (n = 956, 26.7%), sample 4 (n = 496, 13.9%), and sample 5 (n = 727, 20.3%).

Item Analysis

Distribution

Multivariate normality (Henze-Zirkler’s test; HZ) was rejected in the total sample (HZ = 2.33, p < 0.01), as well as in the five subsamples (HZ between 1.26 and 2.52, p < 0.001; see Supplementary Tables 1–5). There was also consistent evidence of the absence of univariate normality in the items (Shapiro–Wilk test, SW) of the three subscales, in the cleaned total sample (Table 3), and in each of the subsamples (see Supplementary Tables 1–5). This was linked to the skewness and excess kurtosis of the item distributions; in particular, subscale 3 showed a trend toward higher kurtosis. The similarity of the skewness pattern across items was moderately high (one-way absolute-agreement ICC = 0.746, 95% CI = 0.566, 0.889), but the similarity of the kurtosis pattern was low (one-way absolute-agreement ICC = 0.227, 95% CI = 0.053, 0.506); the latter suggests varied response dispersions.

TABLE 3

Table 3. Univariate descriptive for items in total sample (n = 3,576).

Central Tendency of Responses

Statistically significant differences were detected in the mean item responses in the total sample (Table 3): AR factor (Friedman χ2 = 1136.4, df = 3, p < 0.001), PSP factor (Friedman χ2 = 1049.7, df = 4, p < 0.001), and detachment factor (Friedman χ2 = 27.58, df = 2, p < 0.001). The size of these inter-item differences can be considered large (≥0.30: large; Mangiafico, 2021) for the AR factor (Kendall’s W = 0.705, 95% CI: 0.519, 0.742), the PSP factor (Kendall’s W = 0.579, 95% CI: 0.310, 0.632), and the detachment factor (Kendall’s W = 0.666, 95% CI: 0.378, 0.736). For each subsample, the mean responses for each subscale are shown in Supplementary Tables 1–5, where the trend repeats what was found in the total sample.

Internal Structure Validity Evidence

The measurement model of the three oblique factors was evaluated in each subsample regarding its dimensionality, and jointly regarding its measurement invariance. With these two criteria fulfilled, the model was evaluated in the total sample. Three iterations of the modeling were carried out, corresponding to the evaluation of the initial dimensional structure, the modification of the model, and the definition of the final model, respectively.

First Iteration

Table 4 shows the fit of the 15-item models in each subsample with both the CFA and ESEM approaches. In each sample, the fit obtained with the CFA (CFI > 0.94, RMSEA < 0.16, SRMR < 0.11, WRMR > 1.90) was predominantly unfavorable, since most indices deviated from the a priori fit criteria. In contrast, with the ESEM approach the values obtained (CFI > 0.98, RMSEA < 0.04, SRMR < 0.040, WRMR < 1.11) showed a robust trend, and the fit can be considered excellent. Additionally, the unidimensional and two-dimensional models had poor fit in each of the samples and in the total sample, so these models were not interpreted (see Supplementary Table 6).

TABLE 4

Table 4. Fit indices of the CFA and ESEM models of the WRRS in the five samples.

The factor loadings and correlations from the ESEM and CFA approaches are shown in Supplementary Tables 7 and 8, respectively. Factor loadings were frequently high (≥0.60) and similar within their dimensions, with few exceptions. The factorial complexity of the full ESEM solution (Supplementary Table 7) across the samples involved between 56 and 76% of the items; that is, more than half of the items showed factorial complexity (Hofmann coefficients approximately greater than 1.5). Specifically, several items showed a consistently high degree of factorial complexity in the five subsamples; in the metric of the Hofmann coefficient, this complexity was expressed as cross-loadings on two factors. These items were 5, 6, and 13, which also showed consistently low loadings or loadings at the minimum limit (≥0.50). The cross-loadings of these items were around 0.30 or more.

Regarding the inter-factor correlations, the pattern of associations was theoretically consistent, with positive covariation between AR and PSP and negative covariation between detachment and AR and between detachment and PSP. The magnitude of this covariation, however, depended on the analysis approach: the ESEM-based estimates were all attenuated (i.e., smaller in size). Taking the correlations obtained with the CFA as reference (Supplementary Table 8; 100(θCFA − θESEM)/θCFA), the average percentage of attenuation of the interfactor correlations with ESEM varied between 24.8 and 35.7%.

Since item reliability was one of the quality criteria for the instrument (Supplementary Table 7, column rii), this parameter is also reported in this section. Response reproducibility, assessed through item-level reliability, was generally satisfactory, and most coefficients were > 0.40. Some items with low reliability in one sample (<0.40) showed adequate reliability in the other samples, which can be attributed to sampling error.

Second Iteration

Since the models evaluated with ESEM presented unsatisfactory specific parameters (frequent factorial complexity and low factor loadings), together with overestimated interfactor correlations, exclusion criteria based on statistical and conceptual decisions were applied. The statistical criteria were (a) the degree of factorial complexity and (b) item-level reliability, which should be as high as possible, with a minimum of 0.30 but with an emphasis on values > 0.40. Conceptually, the exclusion criterion was apparent redundancy of content, or the possibility that the item in question would be interpreted similarly to another item of the construct. Considering these three criteria, item 5 of AR, item 13 of PSP, and item 6 of the detachment factor were eliminated. After removing these items, ESEM was run again, but not the CFA, because decision-making was based exclusively on the ESEM results. Supplementary Table 9 shows the fit of the second iteration, in which an excellent fit is observed, with all indices successfully satisfied. Among the parameters obtained (factor loadings and interfactor correlations), the percentage of factorial complexity in the solution decreased compared to the first iteration, and in each subsample the median Choff was substantially low (respectively: 1.03, 1.13, 1.08, 1.06, and 1.03); the factor loadings, in turn, were high and moderately similar. However, item 14 was identified as potentially problematic due to its moderate complexity in all samples and its comparatively lower factor loading with respect to the other items of its dimension. This consistency, together with the decision to obtain a measure with the least complexity possible, led to the removal of this item, whose content represents the behavior of the detachment factor. The item read: Do you find it easy to unwind after work?

Third Iteration

After removing item 14, the model with the remaining 11 items was again fitted to the data. The ESEM fit was excellent compared with the CFA fit (Table 4, third-iteration heading), which, although satisfactory, was not as good. Supplementary Table 10 shows that all factor loadings were > 0.50 and predominantly > 0.60; the complexity coefficients were close to 1.0 (except for item 11, but inconsistently across the subsamples), and the item reliability coefficients were frequently > 0.40. In contrast, the estimates produced by the CFA again showed an overestimation of factor loadings and factor correlations (Supplementary Table 11). On the other hand, the factorial complexity (Table 5) was substantially lower (M = 1.04, min = 1.00, max = 1.11) compared to the previous iterations, indicating that the cross-loadings are predominantly trivial and that the items essentially represent a single dimension each. Regarding item reliability in the final model, all items exceeded the chosen criterion (>0.30), with wide variation but predominantly high values (M = 0.47, min = 0.31, max = 0.74).

TABLE 5

Table 5. Factor loadings in the three factors (ntotal = 3,576).

Measurement Invariance

Within Samples

Measurement invariance in each group analyzed (i.e., sex and age) was good, holding up to the intercept (scalar) level. In Supplementary Table 13, the differences between fit indices (ΔCFI, ΔRMSEA, and ΔSRMR) remained predominantly below the cut-off values. Across the three age groups (Supplementary Table 13), measurement invariance was also moderately satisfactory, with some changes in the successive models assessed, particularly in the equal-intercepts model. The unbalanced sample sizes among the age groups in each subsample (e.g., in sample 5 one of the groups had n = 80) could probably have generated Type I error.

Between Samples

The number of dimensions (i.e., configural invariance), the factor loadings and thresholds (i.e., metric invariance), and the latent response intercepts (i.e., scalar invariance) were satisfactory across the five samples analyzed.

Total Sample Fit

Due to the invariance achieved across the five independent samples, the fit of the final model (three factors, 11 items) was estimated in the total sample (n = 3,576), in which differences conditioned by the analysis approach were again observed (Table 5). With the CFA approach, the fit was partially satisfactory: while some indices were adequate (CFI = 0.989, SRMR = 0.051), others showed a decrease (RMSEA = 0.072, 90% CI = 0.068, 0.077; WRMR = 2.895), and the inferential statistic was statistically significant (WLSMV χ2 = 809.02, p < 0.01). The ESEM approach, in contrast, produced very satisfactory results: WLSMV χ2 = 114.34 (p < 0.01), CFI = 0.999, RMSEA = 0.019 (90% CI = 0.015, 0.024), SRMR = 0.019, and WRMR = 1.074. The measurement invariance results across the five samples are shown in Table 6.

TABLE 6

Table 6. Measurement invariance in WRRS: five samples (ntotal = 3,576).

Reliability – Internal Consistency

Table 7 shows the results of the reliability estimation with the alpha and omega coefficients. Using the standard deviations of AR (SD = 4.147, SE = 0.042), PSP (SD = 3.64, SE = 0.037), and detachment (SD = 3.224, SE = 0.031), the standard error of measurement (SEMrxx) was computed for the three WRRS scores (see Table 7, SEMrxx heading). Following the suggestion of Wyrwich et al. (1999), the SEMrxx of each score (2.07, 1.82, and 1.61, respectively) was less than half the standard deviation of the score for AR and PSP, but not for detachment.

TABLE 7

Table 7. Internal consistency reliability.

Evidence of Convergent and Divergent Validity

To establish the convergent and divergent validity of the WRRS – Spanish version, we correlated the scores of its three factors with each other and with the scores of the other measurement instruments. Table 8 shows that AR and PSP have a positive correlation (r = 0.478, p < 0.01) and that detachment correlated negatively with AR and PSP (r = –0.329, p < 0.01, and r = –0.261, p < 0.01, respectively). AR (F1) and PSP (F2) correlated positively with depression, anxiety, emotional exhaustion, cynicism, and workaholism; detachment (F3), on the contrary, correlated negatively with those variables, as expected. On the other hand, AR and PSP correlated negatively with sleep duration, sleep quality, and social desirability, whereas detachment correlated positively with those variables, also as expected (see Table 8).

TABLE 8

Table 8. Correlation between the subscales of Work-Related Rumination Scale – Spanish version and other measures to establish convergent and divergent validity.

Finally, we estimated the mean, standard deviation, range, and 95% confidence interval of the WRRS – Spanish version scores to describe them. We also provide some guidelines for understanding and interpreting WRRS scores (see Table 9).

TABLE 9

Table 9. Descriptive statistics of the Work-Related Rumination Scale -Spanish version and guidelines for the interpretation of scores.

Discussion

The essential strategy of the present study was to analyze different sets of samples, obtained in different study contexts; this enhanced the inspection of the stability of the results by evaluating measurement invariance and, from a broader perspective, replicability (de Rooij and Weeda, 2020). The correlations between the latent variables estimated with the CFA approach were consistently different from those estimated with the ESEM approach, to a degree that produced changes in the qualitative classification of the correlations. For example, practically all the latent correlations obtained with the CFA can be classified as high, according to the suggestions of Cohen (1992; 0.10, small; 0.30, medium; 0.50, large) or to empirically based classifications (≥0.32: 75th percentile, Bosco et al., 2015; ≥0.30: large, Gignac and Szodorai, 2016; ≥0.40, Lovakov and Agadullina, 2021). Indeed, the correlational estimates with CFA may appear to be not only high but very high. With ESEM, the classification of the correlational magnitude did not change, but the quantitative estimates were closer to the points that separate a high magnitude from a moderate one, with the consequent impression that these correlations are high, but not very high.

According to the mathematical theory behind ESEM, the attenuation is produced by the estimation mechanism underlying the cross-loadings, in which part of the variance of the correlations moves toward the cross-loadings. These cross-loadings of the WRRS items are realistic representations of how the items are associated with their own dimension and with the rest of the dimensions, and the ESEM method allows them to be estimated. In contrast, the CFA imposes that these cross-loadings are zero and therefore represents the internal structure of measurements in general, and of the WRRS in particular, unrealistically. Because ESEM unites the exploratory and confirmatory approaches, the results within the exploratory framework generally carry information that leads to the analysis of factorial complexity (Fleming and Merino Soto, 2005). This result has two implications: first, the dimensions of the WRRS maintain high correlations with each other, but not so high as to suggest a meaningful global dimension; and second, the correlations estimated in previous studies may be overestimated.

Because the incorporation of ESEM to study the internal structure estimates the cross-loadings of the items on factors other than the expected one, one of the quality parameters of the internal structure was factorial complexity, operationally defined as the degree to which the cross-loadings differ from zero. As a quality parameter, this complexity was moderately high in the first iteration of the analysis, with the full instrument as it is usually used. This highlights the consequent problem of the interpretability of the items, because some of them add invalid variance to their dimensions, given that they can represent more than one dimension. The practical implication is that, in research or professional applications, part of the content of each dimension possibly also incorporates other constructs of the WRRS model, to an extent that is questionable from measurement theory; that is, a construct needs to be essentially unidimensional to be interpreted.

In the practice of measure construction and validation, the factorial simplicity of the items is usually presumed, that is, the items are assumed to purely represent the factors they are intended to measure. Under this conceptualization of measurement, the CFA applied to the WRRS is perfectly justified, because the cross-loadings "do not exist"; they are specified a priori with a value of zero.

In the three iterations of the ESEM analysis, the factorial complexity decreased due to the decisions made about the complex items; that is, they were removed on statistical and substantive grounds. One of the removed items was item 6 (detachment), whose responses need to be recoded to be combined with the other responses of its factor. Together with the strong magnitude of its factorial complexity, its factor loading on its expected dimension was very low, and both problems were reproducible in all five samples. It is known that items requiring recoding usually produce method variance associated with their phrasing (DiStefano and Motl, 2009; Kam, 2018), a problem commonly associated with the emergence of additional but spurious factors and low factor loadings. Therefore, the removal of this item, together with the rest of the removed items, increased the degree of fit of the WRRS model. A practical implication of this result for the user is that, as a first option to obtain more valid scores, this item can be removed from the calculation of the detachment factor score; a second option is to evaluate the validity of this item, to corroborate its questionable functioning, for which the user can implement some dimensionality evaluation approach (e.g., CFA, ESEM, etc.).

Within the evaluation of the internal structure, measurement invariance was satisfactory at the three levels evaluated (configural, metric and thresholds, and intercepts), which supports comparisons by sex and by the age groups used in this study: early career (21–30 years old), prime career (31–50 years old), and past-peak career (≥51 years old). However, with respect to the age groups assessed for invariance, it is unclear whether the absence of intercept (i.e., scalar) invariance could have been produced by real differences or by the imbalance in the sample sizes compared in each of the five subgroups. An evaluation with a different age-grouping scheme may be necessary to explore this with more certainty. Other models of equivalence assessment, including effect sizes, will also be needed.

Our strategy for investigating measurement invariance was applied to each of the independent subsamples (k = 5), which provided an opportunity to observe the replicability of the measurement properties of the WRRS. In this respect, it is noteworthy that the structural properties remained similar (at least across sex and age groups) and the estimated parameters remained similar, given the natural variations in administration conditions and in individual disposition. Given that the data cleaning preceded the main analyses, targeting two manifestations of probable careless/insufficient-effort responding (C/IE), it is plausible that there is a link between the removal of participants showing C/IE responding and the measurement invariance achieved. We also observed that the difference between the five groups in the assessment of intercept (i.e., scalar) invariance was larger than the cut-off points suggested by Chen (2007). This apparent lack of scalar invariance may be influenced by the chosen criteria of Chen (2007), which may not be exactly appropriate for this assessment, because these criteria were developed for the comparison of two groups (in our study, there were five) and for the maximum likelihood estimator for normally distributed continuous variables. To conclude that invariance was not met at this level, corroboration of the effect size of the non-invariance may be required (Nye et al., 2019).

Regarding reliability, the α and ω coefficients obtained can be considered moderately high from a general perspective, considering the interaction between the small number of items in each subscale, the sample size, and the values obtained (Ponterotto and Ruckdeschel, 2007). These levels do not support using the WRRS for all purposes, but predominantly for group-level applications where decisions about individual respondents are not needed; because the coefficients are not high (i.e., 0.85 or more), the possibility of measurement error can still be considered high (Ponterotto and Ruckdeschel, 2007). Previous studies with the WRRS, whose interpretations are oriented toward group responses, do not conflict with this indication. On the other hand, given the similarity of the α and ω coefficients, it can be assumed that differences between the factor loadings were trivial (Hayes and Coutts, 2020) and did not have a substantial effect on the distance between one coefficient and the other. This distance is usually associated with the degree of equality of the factor loadings of the items, a requirement known as tau-equivalence that validates the α coefficient (Green and Yang, 2009; Hayes and Coutts, 2020). An implication of this similarity is that internal consistency can be satisfactorily estimated with the α coefficient, without requiring SEM modeling approaches to estimate ω. If the application conditions in future uses are similar and the data cleaning is effective, this implication can be extended to other contexts. Finally, given that the standard error of measurement was greater than half the standard deviation of the detachment score, it may be necessary to incorporate revision strategies for this subscale to improve the precision of its score (Wyrwich, 2004). These strategies may involve adding an item, refining the administration of the instrument, or presenting the items grouped in an orderly manner within each content subset.

In terms of the relationships between the three factors, the zero-order correlations of the WRRS tend to be high and positive between AR and PSP, and these two factors tend to correlate negatively, with a roughly medium effect size, with detachment; an exception is one longitudinal study in which the relationships between AR and PSP were low, fluctuating between r = 0.07 and r = 0.19 (Vahle-Hinz et al., 2017). Probably one of the main concerns regarding the WRRS is whether the AR and PSP subscales can be distinguished and thus measure different constructs. The results of this brief systematic literature review suggest that they appear to measure two related but different constructs. For Cropley and Zijlstra (2011), emotional arousal is one of the fundamental contrasts between the AR and PSP states. Psychophysiological arousal is strong in the AR state, which is detrimental to recovery, whereas the PSP state is thought to occur without psychological or physiological arousal, making it less harmful to recovery. According to Cropley and Zijlstra, AR has a negative valence, whereas problem-solving rumination has a positive valence, especially if the PSP process results in a solution; this is supported by research suggesting that thinking about successfully completed tasks increases positive affect, self-efficacy, and well-being (Stajkovic and Luthans, 1998; Seo et al., 2004). As a result, it is likely that ruminating with a problem-solving emphasis can help with recovery, or at least is not as detrimental to health as AR. Moreover, Weigelt et al. (2019a) tested different models, including one with the three dimensions of the WRRS proposed by Cropley et al. (2012) plus two other constructs also related to thinking about work, positive and negative work reflection, and their CFA results supported the conclusion that these are, in fact, five different constructs.

As a final note, the analysis detected dispersion in the responses (inferred from the different kurtosis values and the low ICC), which suggests not only little redundancy among the responses, but also that the items are sensitive to individual differences; the items may therefore be interesting content units to explore.

Regarding the limitations of the study, first, population representativeness is not guaranteed, because the non-random selection of the samples did not allow population similarity to be corroborated. Second, measurement invariance was evaluated with a single procedure; since different methods can produce different rates of Type I and Type II error, equivalence may need to be explored with other methods (for example, a differential item functioning approach). Third, the bifactor model was not implemented, and an assessment of multidimensionality in contrast to the dimensionality of a general factor may be required (Reise, 2012; Gignac, 2016; Rodriguez et al., 2016a,b); a possible specification is sketched below. Finally, the stability of the scores was not evaluated; to complete this aspect, the reproducibility of the scores at different points in time should be examined using a test–retest approach.
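
For the bifactor comparison suggested as future work, a minimal lavaan sketch such as the following could serve as a starting point; the model syntax and item names (ar1 to det3) are hypothetical placeholders rather than the authors' specification.

```r
library(lavaan)

# General work-related rumination factor plus three orthogonal specific factors.
bifactor_model <- '
  G   =~ ar1 + ar2 + ar3 + ar4 + psp1 + psp2 + psp3 + psp4 + det1 + det2 + det3
  AR  =~ ar1 + ar2 + ar3 + ar4
  PSP =~ psp1 + psp2 + psp3 + psp4
  DET =~ det1 + det2 + det3
'

# orthogonal = TRUE fixes all latent covariances to zero, as a bifactor model requires;
# ordered items are handled with the WLSMV estimator.
fit_bifactor <- cfa(bifactor_model, data = wrrs, ordered = TRUE,
                    estimator = "WLSMV", orthogonal = TRUE)

# Fit and standardized loadings; bifactor indices such as omega-hierarchical and
# explained common variance (Rodriguez et al., 2016a,b) could then be obtained,
# for example via semTools::reliability(fit_bifactor).
summary(fit_bifactor, fit.measures = TRUE, standardized = TRUE)
```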

Conclusion

The final version of the instrument consists of three moderately to highly related factors, items with increased factorial simplicity, satisfactory reproducibility of the item responses, satisfactory internal consistency reliability of its scores, and strong measurement invariance across sex and age.

Data Availability Statement

Data are available upon reasonable request to the corresponding author.

Ethics Statement

The studies involving human participants were reviewed and approved by the Institutional Review Board of Ponce Health Sciences University, Ponce, Puerto Rico (Chair: Simón Carlo). The patients/participants provided their written informed consent to participate in this study.

Author Contributions

ER-H, LR-M, and CM-S: conceptualization, methodology, writing–original draft preparation, and writing–review and editing. CM-S and ER-H: formal analysis. ER-H and LR-M: investigation. ER-H: supervision and funding acquisition. All authors contributed to the article and approved the submitted version.

Funding

The project described was supported by the RCMI Program Award Number U54MD007579 from the National Institute on Minority Health and Health Disparities. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2021.774472/full#supplementary-material

References

Asparouhov, T., and Muthén, B. (2009). Exploratory structural equation modeling. Struct. Equ. Modeling 16, 397–438. doi: 10.1080/10705510903008204

Baas, M., De Dreu, C. K. W., and Nijstad, B. A. (2008). A meta-analysis of 25 years of mood-creativity research: hedonic tone, activation, or regulatory focus? Psychol. Bull. 134, 779–806. doi: 10.1037/a0012815

Bisht, N. S. (2017). “Job stressors and burnout in field officers of microfinance institutions in India: role of work-related rumination,” in Changing business environment: gamechangers, opportunities and risks, eds N. Delener and C. Schweikert (Vienna, Austria: Global Business and Technology Association).

Blackmore, E. R., Stansfeld, S. A., Weller, I., Munce, S., Zagorski, B. M., and Stewart, D. E. (2007). Major depressive episodes and work stress: results from a national population survey. Am. J. Public Health 97, 2088–2093.

Bosco, F. A., Aguinis, H., Singh, K., Field, J. G., and Pierce, C. A. (2015). Correlational effect size benchmarks. J. Appl. Psychol. 100, 431–449.

Brotheridge, C. M., and Grandey, A. A. (2002). Emotional labor and burnout: comparing two perspectives of “people work”. J. Vocat. Behav. 60, 17–39. doi: 10.1006/jvbe.2001.1815

Browne, M. W., and Cudeck, R. (1989). Single sample cross-validation indices for covariance structures. Multivariate Behav. Res. 24, 445–455. doi: 10.1207/s15327906mbr2404_4

Byrne, B. M. (2012). Structural equation modeling with Mplus: basic concepts, applications, and programming. Milton Park: Routledge/Taylor & Francis Group.

Chen, F. F. (2007). Sensitivity of Goodness of Fit Indexes to Lack of Measurement Invariance. Struct. Equ. Modeling 14, 464–504. doi: 10.1080/10705510701301834

Cohen, J. (1992). A power primer. Psychol. Bull. 112, 155–159. doi: 10.1037/0033-2909.112.1.155

Cropley, M., and Collis, H. (2020). The Association Between Work-Related Rumination and Executive Function Using the Behavior Rating Inventory of Executive Function. Front. Psychol. 11:821. doi: 10.3389/fpsyg.2020.00821

Cropley, M., Dijk, D.-J., and Stanley, N. (2006). Job strain, work rumination, and sleep in school teachers. Eur. J. Work Organ. Psychol. 15, 181–196. doi: 10.1080/13594320500513913

Cropley, M., Michalianou, G., Pravettoni, G., and Millward, L. J. (2012). The relation of post-work ruminative thinking with eating behaviour. Stress Health 28, 23–30. doi: 10.1002/smi.1397

Cropley, M., Plans, D., Morelli, D., Sütterlin, S., Inceoglu, I., Thomas, G., et al. (2017). The Association between Work-Related Rumination and Heart Rate Variability: a Field Study. Front. Hum. Neurosci. 11:27. doi: 10.3389/fnhum.2017.00027

Cropley, M., Zijlstra, F. R., Querstret, D., and Beck, S. (2016). Is Work-Related Rumination Associated with Deficits in Executive Functioning? Front. Psychol. 7:1524. doi: 10.3389/fpsyg.2016.01524

Cropley, M., and Zijlstra, F. R. H. (2011). “Work and rumination,” in Handbook of stress in the occupations, eds J. Langan-Fox and C. L. Cooper (Cheltenham: Edward Elgar Publishing), 487–501. doi: 10.4337/9780857931153.00061

Crutzen, R., and Peters, G.-J. Y. (2017). Scale quality: alpha is an inadequate estimate and factor-analytic evidence is needed first of all. Health Psychol. Rev. 11, 242–247. doi: 10.1080/17437199.2015.1124240

Cudeck, R., and Browne, M. W. (1983). Cross-validation of covariance structures. Multivariate Behav. Res. 18, 147–167. doi: 10.1207/s15327906mbr1802_2

Curran, P. G. (2016). Methods for the detection of carelessly invalid responses in survey data. J. Exp. Soc. Psychol. 66, 4–19.

de Rooij, M., and Weeda, W. (2020). Cross-Validation: a method every psychologist should know. Adv. Methods Pract. Psychol. Sci. 3, 248–263.

del Líbano, M., Llorens, S., Salanova, M., and Schaufeli, W. (2010). Validity of a brief workaholism scale. Psicothema 22, 143–150.

DiStefano, C., and Motl, R. W. (2009). Self-esteem and method effects associated with negatively worded items: investigating factorial invariance by sex. Struct. Equ. Modeling 16, 134–146. doi: 10.1080/10705510802565403

Dormann, C., and Zapf, D. (1999). Social support, social stressors at work, and depressive symptoms: testing for main and moderating effects with structural equations in a three-wave longitudinal study. J. Appl. Psychol. 84, 874–884. doi: 10.1037/0021-9010.84.6.874

Dunn, J. M., and Sensky, T. (2018). Psychological processes in chronic embitterment: the potential contribution of rumination. Psychol. Trauma 10, 7–13. doi: 10.1037/tra0000291

Etzion, D., Eden, D., and Lapidot, Y. (1998). Relief from job stressors and burnout: reserve service as a respite. J. Appl. Psychol. 83, 577–585. doi: 10.1037/0021-9010.83.4.577

Firoozabadi, A., Uitdewilligen, S., and Zijlstra, F. R. H. (2018a). Should you switch off or stay engaged? The consequences of thinking about work on the trajectory of psychological well-being over time. J. Occup. Health Psychol. 23, 278–288. doi: 10.1037/ocp0000068

Firoozabadi, A., Uitdewilligen, S., and Zijlstra, F. R. H. (2018b). Solving problems or seeing troubles? A day-level study on the consequences of thinking about work on recovery and well-being, and the moderating role of self-regulation. Eur. J. Work Organ. Psychol. 27, 629–641. doi: 10.1080/1359432X.2018.1505720

Fleming, J. S., and Merino Soto, C. (2005). Medidas de simplicidad y de ajuste factorial: un enfoque para la evaluación de escalas construidas factorialmente. Rev. Psicol. 23, 250–266.

Fletcher, T. D. (2010). Psychometric: applied Psychometric Theory. R package version 2.2. Available Online at: https://CRAN.R-project.org/package=psychometric. (accessed February 15, 2021).

Flora, D. B. (2020). Your coefficient alpha is probably wrong, but which coefficient omega is right? A tutorial on using R to obtain better reliability estimates. Adv. Methods Pract. Psychol. Sci. 3, 484–501. doi: 10.1177/2515245920951747

Fritz, C., Sonnentag, S., Spector, P. E., and McInroe, J. A. (2010). The weekend matters: relationships between stress recovery and affective experiences. J. Organ. Behav. 31, 1137–1162. doi: 10.1002/job.672

Gignac, G. E. (2016). The higher-order model imposes a proportionality constraint: that is why the bifactor model tends to fit better. Intelligence 55, 57–68. doi: 10.1016/j.intell.2016.01.006

Gignac, G. E., and Szodorai, E. T. (2016). Effect size guidelines for individual differences researchers. Pers. Individ. Dif. 102, 74–78.

Grant, M. J., and Booth, A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info. Libr. J. 26, 91–108. doi: 10.1111/j.1471-1842.2009.00848.x

Green, S. B., and Yang, Y. (2009). Reliability of summed item scores using structural equation modeling: an alternative to coefficient alpha. Psychometrika 74, 155–167. doi: 10.1007/s11336-008-9099-3

Hayes, A. F., and Coutts, J. J. (2020). Use omega rather than Cronbach’s alpha for estimating reliability. But… Commun. Methods Meas. 14, 1–24. doi: 10.1080/19312458.2020.1718629

Hofmann, R. J. (1977). Indices Descriptive of Factor Complexity. J. Gen. Psychol. 96, 103–110. doi: 10.1080/00221309.1977.9920803

Hofmann, R. J. (1978). Complexity and simplicity as objective indices descriptive of factor solutions. Multivariate Behav. Res. 13, 247–225. doi: 10.1207/s15327906mbr1302_9

Jorgensen, T. D., Pornprasertmanit, S., Schoemann, A. M., and Rosseel, Y. (2021). semTools: useful tools for structural equation modeling. R package version 0.5-3. Available Online at: https://CRAN.R-project.org/package=semTools (accessed February 15, 2021).

Junker, N. M., Baumeister, R. F., Straub, K., and Greenhaus, J. H. (2020). When forgetting what happened at work matters: the role of affective rumination, problem-solving pondering, and self-control in work-family conflict and enrichment. J. Appl. Psychol. [Epub ahead of print]. doi: 10.1037/apl0000847

Kam, C. C. S. (2018). Why do we still have an impoverished understanding of the item wording effect? An empirical examination. Sociol. Methods Res. 47, 574–597. doi: 10.1177/0049124115626177

Karasek, R., Baker, D., Marxer, F., Ahlbom, A., and Theorell, T. (1981). Job decision latitude, job demands, and cardiovascular disease: a prospective study of Swedish men. Am. J. Public Health 71, 694–705. doi: 10.2105/ajph.71.7.694

Kinnunen, U., Feldt, T., and de Bloom, J. (2019). Testing cross-lagged relationships between work-related rumination and well-being at work in a three-wave longitudinal study across 1 and 2 years. J. Occup. Organ. Psychol. 92, 645–670. doi: 10.1111/joop.12256

Kinnunen, U., Feldt, T., de Bloom, J., Sianoja, M., Korpela, K., and Geurts, S. (2017). Linking boundary crossing from work to nonwork to work-related rumination across time: a variable- and person-oriented approach. J. Occup. Health Psychol. 22, 467–480. doi: 10.1037/ocp0000037

Kivimäki, M., Virtanen, M., Elovainio, M., Kouvonen, A., Väänänen, A., and Vahtera, J. (2006). Work stress in the etiology of coronary heart disease: a meta-analysis. Scand. J. Work Environ. Health 32, 431–442. doi: 10.5271/sjweh.1049

Korkmaz, S., Goksuluk, D., and Zararsiz, G. (2014). MVN: an R Package for Assessing Multivariate Normality. R J. 6, 151–162.

Kroenke, K., Spitzer, R. L., and Williams, J. B. W. (2001). The PHQ-9: Validity of a brief depression severity measure. J. Gen. Intern. Med. 16, 606–613. doi: 10.1046/j.1525-1497.2001.016009606.x

Li, C. H. (2016). Confirmatory factor analysis with ordinal data: comparing robust maximum likelihood and diagonally weighted least squares. Behav. Res. Methods 48, 936–949.

Lovakov, A., and Agadullina, E. R. (2021). Empirically derived guidelines for effect size interpretation in social psychology. Eur. J. Soc. Psychol. 51, 485–504.

Löwe, B., Kroenke, K., Herzog, W., and Gräfe, K. (2004a). Measuring depression outcome with a brief self-report instrument: sensitivity to change of the Patient Health Questionnaire (PHQ-9). J. Affect. Disord. 81, 61–66. doi: 10.1016/S0165-0327(03)00198-8

Löwe, B., Unützer, J., Callahan, C. M., Perkins, A. J., and Kroenke, K. (2004b). Monitoring depression treatment outcomes with the patient health questionnaire-9. Med. Care 42, 1194–1201.

Löwe, B., Schenkel, I., Carney-Doebbeling, C., and Göbel, C. (2006). Responsiveness of the PHQ-9 to Psychopharmacological Depression Treatment. Psychosomatics 47, 62–67. doi: 10.1176/appi.psy.47.1.62

Magnavita, N., and Fileni, A. (2014). Work stress and metabolic syndrome in radiologists: first evidence. Radiol. Med. 119, 142–148. doi: 10.1007/s11547-013-0329-0

Mangiafico, S. (2021). rcompanion: functions to Support Extension Education Program Evaluation. R package version 2.4.1. Available Online at: https://CRAN.R-project.org/package=rcompanion. (accessed February 15, 2021).

Mansolf, M., and Reise, S. P. (2016). Exploratory bifactor analysis: the Schmid-Leiman orthogonalization and Jennrich-Bentler analytic rotations. Multivariate Behav. Res. 51, 698–717.

Maslach, C., Jackson, S. E., and Leiter, M. P. (1996). Maslach Burnout Inventory manual, 3rd Edn. Palo Alto: Consulting Psychologists Press.

Mehmood, Q., and Hamstra, M. R. W. (2021). Panacea or mixed blessing? Learning goal orientation reduces psychological detachment via problem-solving rumination. Appl. Psychol. 70, 1841–1855. doi: 10.1111/apps.12294

Meijman, T. F., and Mulder, G. (1998). “Psychological aspects of workload,” in Handbook of work and organizational psychology: work psychology, eds P. J. D. Drenth, H. Thierry, and C. J. de Wolff (United Kingdom: Psychology Press/Erlbaum, Taylor & Francis), 5–33.

Merino-Soto, C., and Angulo-Ramos, M. (2020). Validity induction: comments on the study of Compliance Questionnaire for Rheumatology. Rev. Colombiana de Reumatol. 28, 312–313. doi: 10.1016/j.rcreu.2020.05.005

Merino-Soto, C., and Angulo-Ramos, M. (2021). Metric studies of the Compliance Questionnaire for Rheumatology (CQR): a case of validity induction? Reumatol. Clín. doi: 10.1016/j.reuma.2021.03.004

Merino-Soto, C., and Calderón-De la Cruz, G. A. (2018). Validez de estudios peruanos sobre estrés y burnout [Validity of Peruvian studies on stress and burnout]. Rev. Per. Med. Exp. Salud Pública 35, 353–354. doi: 10.17843/rpmesp.2018.353.3521

Morin, A. J. S., Arens, A. K., and Marsh, H. W. (2015). A bifactor exploratory structural equation modeling framework for the identification of distinct sources of construct-relevant psychometric multidimensionality. Struct. Equ. Modeling 23, 116–139.

Mullen, P. R., Backer, A., Chae, N., and Li, H. (2020). School counselors’ work-related rumination as predictor of burnout, turnover intention, job satisfaction, and work engagement. Prof Sch. Couns. 24, 1–10.

Muthén, B. O., du Toit, S., and Spisic, D. (1997). Robust inference using weighted least squares and quadratic estimating equations in latent variable modeling with categorical and continuous outcomes. Unpublished manuscript. Available Online at: https://www.statmodel.com/download/Article_075.pdf (accessed February 15, 2021).

Nye, C. D., Bradburn, J., Olenick, J., Bialko, C., and Drasgow, F. (2019). How big are my effects? Examining the magnitude of effect sizes in studies of measurement equivalence. Organ. Res. Methods 22, 678–709. doi: 10.1177/1094428118761122

O’Connor, B. P. (2021). EFA.dimensions: exploratory Factor Analysis Functions for Assessing Dimensionality. R package version .1.7.2. Available Online at: https://CRAN.R-project.org/package=EFA.dimensions. (accessed February 15, 2021).

Pauli, R., and Lang, J. (2021). Collective resources for individual recovery: the moderating role of social climate on the relationship between job stressors and work-related rumination: a multilevel approach. Ger. J. Hum. Resour. Manage. 35, 152–175.

Pereira, D., and Elfering, A. (2014). Social stressors at work and sleep during weekends: the mediating role of psychological detachment. J. Occup. Health Psychol. 19, 85–95. doi: 10.1037/a0034928

Pettersson, E., and Turkheimer, E. (2010). Item selection, evaluation, and simple structure in personality data. J. Res. Pers. 44, 407–442.

Pisanti, R., Gagliardi, M. P., Razzino, S., and Bertini, M. (2003). Occupational stress and wellness among Italian secondary school teachers. Psychol. Health 18, 523–536. doi: 10.1080/0887044031000147247

Ponterotto, J. G., and Ruckdeschel, D. E. (2007). An overview of coefficient alpha and a reliability matrix for estimating adequacy of internal consistency coefficients with psychological research measures. Percept. Mot. Skills 105, 997–1014. doi: 10.2466/pms.105.3.997-1014

Pravettoni, G., Cropley, M., Leotta, S. N., and Bagnara, S. (2007). The differential role of mental rumination among industrial and knowledge workers. Ergonomics 50, 1931–1940. doi: 10.1080/00140130701676088

Querstret, D., and Cropley, M. (2012). Exploring the relationship between work-related rumination, sleep quality, and work-related fatigue. J. Occup. Health Psychol. 17, 341–353. doi: 10.1037/a0028552

Querstret, D., Cropley, M., and Fife-Schaw, C. (2017). Internet-based instructor-led mindfulness for work-related rumination, fatigue, and sleep: assessing facets of mindfulness as mechanisms of change. A randomized waitlist control trial. J. Occup. Health Psychol. 22, 153–169. doi: 10.1037/ocp0000028

Querstret, D., Cropley, M., Kruger, P., and Heron, R. (2016). Assessing the effect of a Cognitive Behaviour Therapy (CBT)-based workshop on work-related rumination, fatigue, and sleep. Eur. J. Work Organ. Psychol. 25, 50–67. doi: 10.1080/1359432X.2015.1015516

Reise, S. P. (2012). Invited paper: the rediscovery of bifactor measurement models. Multivariate Behav. Res. 47, 667–696.

Rodriguez, A., Reise, S. P., and Haviland, M. G. (2016a). Applying bifactor statistical indices in the evaluation of psychological measures. J. Pers. Assess. 98, 223–237.

Rodriguez, A., Reise, S. P., and Haviland, M. G. (2016b). Evaluating bifactor models: calculating and interpreting statistical indices. Psychol. Methods 21, 137–150.

Roger, D., and Jamieson, J. (1988). Individual differences in delayed heart-rate recovery following stress: the role of extraversion, neuroticism and emotional control. Pers. Individ. Dif. 9, 721–726. doi: 10.1016/0191-8869(88)90061-X

Rook, J. W., and Zijlstra, F. R. H. (2006). The contribution of various types of activities to recovery. Eur. J. Work Organ. Psychol. 15, 218–240. doi: 10.1080/13594320500513962

Rosario-Hernández, E., and Rovira Millán, L. V. (2002). Desarrollo y validación de una escala para medir las actitudes hacia el retiro. Rev. Puertorriqueña Psicol. 13, 45–60.

Rosario-Hernández, E., Rovira Millán, L. V., Comas Nazario, A. R., Medina Hernández, A., Colón Jiménez, R., Feliciano Rivera, Y., et al. (2018a). Workplace bullying and its effect on sleep well-being: the mediating role of rumination. Rev. Puertorriqueña Psicol. 29, 164–186.

Rosario-Hernández, E., Rovira Millán, L. V., Vélez Ramos, J., Cruz, M., Vélez, E., Torres, G., et al. (2018b). Effect of the exposure to workplace bullying on turnover intention and the mediating role of job satisfaction, work engagement, and burnout. Rev. Interamericana Psicol. Ocupacional 37, 26–51. doi: 10.21772/ripo.v37n1a03

Rosario-Hernández, E., Rovira Millán, L. V., Díaz Pla, L., Segarra Colondres, C., Soto Franceschii, J. A., Rodríguez Irizarry, A., et al. (2013). Las demandas laborales y su relación con el bienestar psicológico y físico: el papel mediador de la rumiación relacionada con el trabajo. Rev. Interamericana Psicol. Ocupacional 32, 69–95.

Rosario-Hernández, E., Rovira Millán, L. V., Díaz Pla, L., Segarra Colondres, C., Soto Franceschini, J. A., Rodríguez Irizarry, A., et al. (2015). Las demandas laborales y sus efectos en el bienestar del sueño: el papel mediador de la rumiación relacionada con el trabajo. Rev. Puertorriqueña Psicol. 26, 150–169.

Rosario-Hernández, E., Rovira Millán, L. V., Rodríguez Irizarry, A., Rivera Alicea, B. E., Fernández López, L. N., López Miranda, R. S., et al. (2014). Salud cardiovascular y su relación con los factores de riesgo psicosociales en una muestra de empleados puertorriqueños. Rev. Puertorriqueña Psicol. 25, 98–116.

Rosario-Hernández, E., Rovira Millán, L. V., Sánchez-García, N. C., Padovani Rivera, C. M., Velázquez Lugo, A., Maldonado Fonseca, I. M., et al. (2020). A boring story about work: do bored employees ruminate? Rev. Puertorriqueña Psicol. 31, 92–108.

Rosario-Hernández, E., Rovira Millán, L. V., Vega Vélez, S., Zeno-Santi, R., Farinacci García, P., Centeno Quintana, L., et al. (2019). Exposure to workplace bullying and suicidal ideation: an exploratory study. J. Appl. Struct. Equ. Modeling 3, 55–75.

Rosseel, Y. (2012). lavaan: an R Package for Structural Equation Modeling. J. Stat. Softw. 48, 1–36.

Rovira Millán, L. V., and Rosario-Hernández, E. (2018). Desarrollo y validación del Indicador de Bienestar del Sueño. Rev. Puertorriqueña Psicol. 29, 348–362.

Safstrom, M., and Harting, T. (2013). Psychological detachment in the relationship between job stressors and strain. Behav. Sci. 3, 418–433. doi: 10.3390/bs3030418

Saris, W. E., Satorra, A., and van der Veld, W. M. (2009). Testing structural equation models or detection of misspecifications? Struct. Equ. Modeling 16, 561–582.

Schaufeli, W. B., Shimazu, A., and Taris, T. W. (2009). Being driven to work excessively hard: the evaluation of a two-factor measure of workaholism in the Netherlands and Japan. Cross Cult. Res. 43, 320–348. doi: 10.1177/1069397109337239

Schwartz, A. R., Gerin, W., Davidson, K. W., Pickering, T. G., Brosschot, J. F., Thayer, J. F., et al. (2003). Toward a causal model of cardiovascular responses to stress and the development of cardiovascular disease. Psychosom. Med. 65, 22–35. doi: 10.1097/01.psy.0000046075.79922.61

Seo, M., Barrett, L., and Bartunek, J. M. (2004). The role of affective experience in work motivation. Acad. Manag. Rev. 29, 423–439. doi: 10.5465/amr.2004.13670972

Smyth, A., de Bloom, J., Syrek, C., Domin, M., Janneck, M., Reins, J. A., et al. (2020). Efficacy of a smartphone-based intervention – “Holidaily” – promoting recovery behaviour in workers after a vacation: study protocol for randomized controlled trial. BMC Pub. Health 20:1286. doi: 10.1186/s12889-020-09354-5

Sonnentag, S., Mojza, E. J., Binnewies, C., and Scholl, A. (2008). Being engaged at work and detached at home: a week-level study on work engagement, psychological detachment, and affect. Work Stress 22, 257–276. doi: 10.1080/02678370802379440

Sonnentag, S., and Zijlstra, F. R. H. (2006). Job characteristics and off-job activities as predictors of need for recovery, well-being, and fatigue. J. Appl. Psychol. 91, 330–350. doi: 10.1037/0021-9010.91.2.330

Spitzer, R. L., Kroenke, K., Williams, J. B. W., and Löwe, B. (2006). A brief measure for assessing generalized anxiety disorder: the GAD-7. Arch. Intern. Med. 166, 1092–1097.

Stajkovic, A. D., and Luthans, F. (1998). Self-efficacy and work-related performance: A meta-analysis. Psychol. Bull. 124, 240–261. doi: 10.1037/0033-2909.124.2.240

Stark, S., Chernyshenko, O. S., and Drasgow, F. (2006). Detecting differential item functioning with confirmatory factor analysis and item response theory: toward a unified strategy. J. Appl. Psychol. 91, 1292–1306.

Sulak Akyüz, B., and Sulak, S. (2019). Adaptation of Work-Related Rumination Scale into Turkish. J. Meas. Eval. Educ. Psychol. 10, 422–434.

Svetieva, E., Clerkin, C., and Ruderman, M. N. (2017). Can’t sleep, won’t sleep: exploring leaders’ sleep patterns, problems, and attitudes. Consult. Psychol. J. Pract. Res. 69, 80–97. doi: 10.1037/cpb0000092

Syrek, C. J., Weigelt, O., Peifer, C., and Antoni, C. H. (2017). Zeigarnik’s sleepless nights: how unfinished tasks at the end of the week impair employee sleep on the weekend through rumination. J. Occup. Health Psychol. 22, 225–238. doi: 10.1037/ocp0000031

Vahle-Hinz, T., Mauno, S., de Bloom, J., and Kinnunen, U. (2017). Rumination for innovation? Analysing the longitudinal effects of work-related rumination on creativity at work and off-job recovery. Work Stress 31, 315–337. doi: 10.1080/02678373.2017.1303761

van der Doef, M., Mbazzi, F. B., and Verhoeven, C. (2012). Job conditions, job satisfaction, somatic complaints and burnout among East African nurses. J. Clin. Nurs. 21, 1763–1775. doi: 10.1111/j.1365-2702.2011.03995.x

Van Laethem, M., Beckers, D. G. J., de Bloom, J., Sianoja, M., and Kinnunen, U. (2019). Challenge and hindrance demands in relation to self-reported job performance and the role of restoration, sleep quality, and affective rumination. J. Occup. Organ. Psychol. 92, 225–254. doi: 10.1111/joop.12239

Vandevala, T., Pavey, L., Chelidoni, O., Chang, N. F., Creagh-Brown, B., and Cox, A. (2017). Psychological rumination and recovery from work in intensive care professionals: associations with stress, burnout, depression, and health. J. Intensive Care 5:16. doi: 10.1186/s40560-017-0209-0

Vassar, M., Ridge, J., and Hill, A. (2008). Inducing Score Reliability from Previous Reports: an Examination of Life Satisfaction Studies. Soc. Indic. Res. 87, 27–45. doi: 10.1007/s11205-007-9157-8

Weigelt, O., Gierer, P., and Syrek, C. J. (2019a). My Mind is Working Overtime—Towards an Integrative Perspective of Psychological Detachment, Work-Related Rumination, and Work Reflection. Int. J. Environ. Res. Public Health 16:2987. doi: 10.3390/ijerph16162987

Weigelt, O., Syrek, C. J., Schmitt, A., and Urbach, T. (2019b). Finding peace of mind when there still is so much left undone: a diary study on how job stress, competence need satisfaction, and proactive work behavior contribute to work-related rumination during the week. J. Occup. Health Psychol. 24, 373–386.

Whittaker, T. A., and Stapleton, L. M. (2006). The Performance of Cross-Validation Indices Used to Select Among Competing Covariance Structure Models Under Multivariate Nonnormality Conditions. Multivariate Behav. Res. 41, 295–335.

Wyrwich, K. W. (2004). Minimal important difference thresholds and the standard error of measurement: is there a connection? J. Biopharm. Stat. 14, 97–110. doi: 10.1081/bip-120028508

Wyrwich, K. W., Tierney, W. M., and Wolinsky, F. D. (1999). Further evidence supporting an SEM-based criterion for identifying meaningful intra-individual changes in health-related quality of life. J. Clin. Epidemiol. 52, 861–873. doi: 10.1016/S0895-4356(99)00071-2

Yang, Y., and Green, S. B. (2015). Evaluation of structural equation modeling estimates of reliability for scales with ordered categorical items. Methodology 11, 23–34. doi: 10.1027/1614-2241/a000087

Yentes, R. D., and Wilhelm, F. (2018). careless: procedures for computing indices of careless responding. R package version 1.1.3. Available Online at: https://cran.r-project.org/web/packages/careless/index.html (accessed February 15, 2021).

Yu, C. (2002). Evaluating cutoff criteria of model fit indices for latent variable models with binary and continuous outcomes. Ph.D. thesis. Los Angeles, CA: University of California.

Zhang, J., Li, W., Ma, H., and Smith, A. P. (2020). Switch Off Totally or Switch Off Strategically? The Consequences of Thinking About Work on Job Performance. Psychol. Rep. [Epub ahead of print]. doi: 10.1177/0033294120968080

Zijlmans, E. A. O., Tijmstra, J., van der Ark, L. A., and Sijtsma, K. (2019). Item-Score Reliability as a Selection Tool in Test Construction. Front. Psychol. 9:2298. doi: 10.3389/fpsyg.2018.02298

Zijlmans, E. A. O., Van der Ark, L. A., Tijmstra, J., and Sijtsma, K. (2018b). Methods for estimating item-score reliability. Appl. Psychol. Meas. 42, 553–557.

Zijlmans, E. A. O., Tijmstra, J., van der Ark, L. A., and Sijtsma, K. (2018a). Item-score reliability in empirical-data sets and its relationship with other item indices. Educ. Psychol. Meas. 78, 998–1020.

Zijlstra, F. R. H., and Sonnentag, S. (2006). After work is done: psychological perspectives on recovery from work. Eur. J. Work Organ. Psychol. 15, 129–138. doi: 10.1080/13594320500513855

Zijlstra, W. P., van der Ark, L. A., and Sijtsma, K. (2011). Outliers in test and questionnaire data: can they be detected and should they be removed? J. Educ. Behav. Stat. 36, 186–212.

Zoupanou, Z., Cropley, M., and Rydstedt, L. W. (2013). Recovery after work: the role of work beliefs in the unwinding process. PLoS One 8:e81381. doi: 10.1371/journal.pone.0081381

Keywords: rumination, ESEM, CFA, invariance, detachment, problem-solving pondering

Citation: Rosario-Hernández E, Rovira-Millán LV and Merino-Soto C (2021) Review of the Internal Structure, Psychometric Properties, and Measurement Invariance of the Work-Related Rumination Scale – Spanish Version. Front. Psychol. 12:774472. doi: 10.3389/fpsyg.2021.774472

Received: 12 September 2021; Accepted: 19 October 2021;
Published: 25 November 2021.

Edited by:

Victor Zaia, Faculdade de Medicina do ABC, Brazil

Reviewed by:

Hamid Sharif Nia, Mazandaran University of Medical Sciences, Iran
Maria do Carmo Fernandes Martins, Universidade Metodista de São Paulo (UMESP), Brazil

Copyright © 2021 Rosario-Hernández, Rovira-Millán and Merino-Soto. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Ernesto Rosario-Hernández, erosario@psm.edu
