ORIGINAL RESEARCH article

Front. Psychol., 12 March 2021
Sec. Personality and Social Psychology

The Digital Stressors Scale: Development and Validation of a New Survey Instrument to Measure Digital Stress Perceptions in the Workplace Context

  • 1Institute of Digital Business, Johannes Kepler University Linz, Linz, Austria
  • 2Department of Psychology, University of Bonn, Bonn, Germany
  • 3Digital Business, School of Business and Management, University of Applied Sciences Upper Austria, Steyr, Austria
  • 4Institute of Business Informatics – Information Engineering, Johannes Kepler University Linz, Linz, Austria

This article reports on the development of an instrument to measure the perceived stress that results from the use and ubiquity of digital technology in the workplace. Based upon a contemporary understanding of stress and a set of stressors that is a substantial update to existing scales, the Digital Stressors Scale (DSS) advances the measurement of digital stress. Initially, 138 items were constructed for the instrument and grouped into a set of 15 digital stressors. Based on a sample of N = 1,998 online questionnaires completed by individuals representative of the US employed population, the scale was refined using exploratory factor analyses (EFA) and PLS-SEM. The resulting and final scale consists of ten stressor categories reflective of one higher-order construct and a total of 50 items. Through a nomological network that includes important outcome variables of digital stress (emotional exhaustion, innovation climate, job satisfaction, user satisfaction), it was then demonstrated that the DSS provides substantial explanatory power, particularly related to emotional exhaustion and user satisfaction. Thus, the DSS constitutes a state-of-the-art self-report instrument to measure the extent of distress appraisal related to digital technologies in the workplace and helps to explain further how and why information and communication technologies can lead to adverse outcomes in individuals, thereby providing the starting point for job-related organizational interventions.

Introduction

We are experiencing an unprecedented prevalence of information and communication technologies (ICT) in our daily lives. Currently, almost 60% of the global population have access to the internet (Internetworldstats, 2020) and an estimated 1.4 billion smartphones are shipped every year (IDC, 2019). However, ICT are not just a tool for the individual but also an important asset for many organizations, with global spending on enterprise software alone reaching an estimated 3.8 trillion dollars worldwide in 2019 (Gartner, 2019). The introduction of ICT in the organizational context has also led to an extensive line of research on the impact of these technologies. Brynjolfsson (1996), for example, highlighted that investments into ICT can yield returns in customer benefits about three times as high as their cost. Similarly, most research found that investments into information technology will reap great benefits for organizations (e.g., for automation purposes, Mukhopadhyay et al., 1997; or by enabling new sourcing strategies, Schneider and Sunyaev, 2016), but also for individuals (e.g., in the form of health information technology, Buntin et al., 2011, or smart home technologies, Wilson et al., 2017).

Despite such benefits, the use of ICT also has a “dark side,” including digital stress. For example, several studies found that unexpected ICT behavior can strain individual physiological wellbeing (e.g., computer breakdowns lead to elevated levels of adrenaline excretion and mental fatigue, Riedl, 2013). In recent years, it was also found that digital stress may negatively affect outcome variables directly related to information systems success (e.g., usage intention or user satisfaction, Fuglseth and Sørebø, 2014), individual performance at work (e.g., technology-supported performance, Ragu-Nathan et al., 2008), or emotional exhaustion (Ayyagari et al., 2011; Tams et al., 2014).

A growing stream of research, therefore, now also focuses on digital stress as a side effect of the increasing economic and societal prevalence of ICT (see, for example, reviews by Fischer and Riedl, 2017; Agogo et al., 2018; La Torre et al., 2019; Benzari et al., 2020), which is in line with earlier calls for further investigations into the intangible benefits and costs of ICT (e.g., Brynjolfsson and Hitt, 2000). Within this research stream, the main focus is on the use of ICT at work (Agogo et al., 2018; La Torre et al., 2019) and the main data collection methods are self-report questionnaires (Fischer and Riedl, 2017). This fact can be explained by the dominant role of situational appraisal in the stress process (e.g., Lazarus and Folkman, 1984; Cummings and Cooper, 1998), which necessitates the use of introspective measures. In particular, in the context of the current wave of digitalization, calls have been made for further inquiries into how people perceive the new digital environment and its impact on the individual, organizations, and society (Legner et al., 2017; Parviainen et al., 2017).

Here, we report on a new self-report measure for the assessment of perceived digital stress, as the speed at which our technological environment changes also demands a regular update of related measurement techniques. In particular, we seek to provide an update to an established measurement scale, the Technostress Creators (TSC) by Ragu-Nathan et al. (2008), by answering the following main research question: “Which stressors should be part of a contemporary scale that measures digital stress and how can they be operationalized?”

Materials and Methods

The development and validation of the survey instrument is based on established frameworks (Moore and Benbasat, 1991; Netemeyer et al., 2003; MacKenzie et al., 2011), which include the following steps: (1) conceptualization of the focal construct of digital stress, (2) the development of the survey measure including the generation of items and card sorting to test initially the dimensionality of the construct, (3) specification of the nomological network for the construct, and (4) validation of the instrument including data collection procedures, assessment of psychometric properties, and comparison with an existing instrument.

Conceptualization of “Digital Stress”

Changes in the technological environment have also changed the research on digital stress and in particular the conceptualization of this phenomenon. The understanding of digital stress that is used as a basis for the development of the new questionnaire has two main components: (1) stress and (2) digital technologies (also briefly referred to as ICT). These are clarified first, before definitions of digital stress as a construct are compared.

Stress

Originally understood as a bodily reaction to taxing stimuli (Selye, 1956), the understanding of this phenomenon has changed significantly and the modern approach to the conceptualization of stress entails a transaction between the individual and the environment (i.e., stress as a process, Lazarus and Folkman, 1984). Importantly, while the original understanding of stress did not consider perception to be of importance to the occurrence of adverse outcomes on the individual level, in the process-based understanding, perception (situational appraisal) plays a dominant role. To emphasize the role of perception further, we also consider the related concept of a “stressor.” Stressors are demands that force a variable outside of its range of stability (Cummings and Cooper, 1998). For example, unusual task demands might force an individual to handle an uncomfortably high amount of work, or system malfunctions might create interruptions in an individual’s usual workflow. To be stressors (i.e., a source of individual distress), these demands must first be perceived by the individual and then be appraised as detrimental to the individual’s well-being (e.g., a higher workload could also be perceived as beneficial if the individual is in need of higher levels of stimulation).

Technology

Rather than referring to all types of man-made inventions at this point (e.g., technologies such as wheels or written language), the focus is on digital technologies for the purpose of information management in a wider sense (e.g., capture, storage, retrieval, and analysis purposes). Such a conceptualization was, for example, the basis for the seminal study on digital stress by Ayyagari et al. (2011), who introduced a clear distinction between information and communication technologies and technologies found on the shop-floor (e.g., technologies for manufacturing automation). Digital technologies include, amongst others, mobile technologies (e.g., cell phones), network technologies (e.g., the Internet), communication technologies (e.g., e-mail), and generic application technologies (e.g., for word processing).

Digital Stress

The more widespread term used for digital stress in previous research is so-called “technostress,” which was coined by Brod (1982, p. 754) and refers to “… a condition resulting from the inability of an individual or organization to adapt to the introduction and operation of new technology.” Proposing a definition that is compatible with the transactional paradigm of stress, Ragu-Nathan et al. (2008, pp. 417-418) opted for a more general definition of the phenomenon, which is still widely used today and describes digital stress as “[a] phenomenon of stress experienced by end users in organizations as a result of their use of [ICT].” Riedl (2013, p. 18) more recently added that not only direct interaction, but also “… perceptions, emotions, and thoughts regarding the implementation of ICT in organizations and its pervasiveness in society in general” should be considered when assessing the stress potential of ICT. This addendum is also adopted here, as it helps to explain why potential future developments (e.g., the threat of job loss due to automation) could also lead to distress appraisal.

Dimensionality

Previous conceptualizations of digital stress indicate that it is a latent construct, usually composed of a multitude of stressors (Agogo et al., 2018). For example, Ayyagari et al. (2011) included six technology characteristics (i.e., usefulness, complexity, reliability, presenteeism, anonymity, pace of change) and Ragu-Nathan et al. (2008) used a set of five stressors that are reflective of digital stress (i.e., overload, invasion, complexity, insecurity, uncertainty). In both of these cases, there is a strong link to previous research in the wider context of organizational stress, with Ragu-Nathan et al. (2008) adapting popular work stressors (e.g., work overload becoming “techno-overload”) for their measurement scale and Ayyagari et al. (2011) linking technology characteristics (e.g., unreliability) to work stressors such as work overload. Hence, previous conceptualizations of work stress are an important basis for the conceptualization of digital stress at this point (e.g., Ivancevich and Matteson, 1980; Kahn and Byosiere, 1992; Williams and Cooper, 1998). In addition, there are stressors that are specific to digital technology (i.e., Privacy, Security, Unreliability, and Usefulness in the case of this study), which were consequently added to form a preliminary list of 15 ICT-related stressor categories (please refer to Section 1 in the Supplementary Material for further details):

1. Boredom. ICT can lead to boredom if more and more parts of an individual’s job are machine-paced and tasks that may be of importance to the employee are pushed towards automation (e.g., Stock, 2015).

2. Complexity. If ICT are not easily understood by individuals (e.g., software being hard to use, Al-Fudail and Mellar, 2008) this may be an important deterrent from work.

3. Conflicts. In some instances ICT can contribute to the blurring of boundaries between important life domains (e.g., work and home), referred to as the invasive property of technology (e.g., Ragu-Nathan et al., 2008).

4. Control (lack of). ICT can also limit the job autonomy of individuals and therefore reduce the degree of control that individuals have over their workday (e.g., Jones, 1999; Poole and Denny, 2001).

5. Costs. The use of ICT in the work context often involves a significant level of costs (e.g., Sahin and Coklar, 2009), though from an employee’s point of view costs are mostly reflected in time and cognitive effort.

6. Insecurity. ICT can cause a fear of unemployment (e.g., Sahin and Coklar, 2009; Frey and Osborne, 2017) as it is not certain which tasks and skills will be subject to automation in the future.

7. Involvement (lack of). Earlier research into the success of ICT (e.g., in terms of user satisfaction) found that the involvement of individuals in decision processes related to technological change (e.g., system design choices or purchase decisions) can be critical (e.g., McKeen et al., 1994).

8. Overload. External demands exceeding a desired level of stimulation (overload) in the form of work overload or information and communication overload are intensified through the use of ICT (e.g., Ayyagari et al., 2011; Barley et al., 2011; Galluch et al., 2015).

9. Privacy Invasion. The prospect of interactions with ICT being tracked is of major concern for many individuals and has also sparked an extensive stream of research (e.g., Bélanger and Crossler, 2011; Smith et al., 2011).

10. Role Stress. ICT can also contribute to higher levels of job-related ambiguity, as individuals are faced with a variety of demands that often compete for attention (e.g., Ayyagari et al., 2011; Schellhammer and Haines, 2013; Galluch et al., 2015).

11. Safety. There are many outside threats (i.e., outside of an organization) to the safety of work-related ICT, which can lead to stressful effects for the individual. In particular, many knowledge workers have to deal with potentially harmful programs (e.g., downloads that could include malicious code) that demand additional attention and not only threaten the individual, but also the organization (e.g., loss of company secrets) (e.g., Burke, 2009; D’Arcy et al., 2014; Hwang and Cha, 2018).

12. Social Environment. The characteristics of ICT and in particular communication technologies (e.g., e-mail) can also create unwanted norms and expectations that individuals have to deal with and may deviate from the actual desires of an individual (e.g., not wanting to communicate constantly) (e.g., Sahin and Coklar, 2009; Maier et al., 2015; Cao and Sun, 2018).

13. Technical Support (lack of). We consider not only stressful demands caused or mediated by ICT, but also the lack of resources to deal with such demands (e.g., inadequate technical support being in itself a source of distress, Ogan and Chung, 2002; Voakes et al., 2003; Al-Fudail and Mellar, 2008).

14. Unreliability. It can be highly stressful for individuals if ICT do not behave in an expected fashion, such as when response times are long or when a system breakdown occurs (e.g., Boucsein, 2009; Riedl et al., 2012).

15. Usefulness (lack of). Next to low levels of ease-of-use (i.e., high technology complexity), a lack of usefulness (Davis, 1989) is also considered to be a substantial digital stressor.

Measure Development

Item Generation

Item statements for each of the 15 initial stressor categories were formulated independently by the first and third authors of this article and checked by the second author (e.g., phrasing and cognitive effort involved; items were reformulated where necessary), which led to an initial pool of 138 items (please refer to Section 2 in the Supplementary Material for a list of all items). In addition, as none of the authors is an English native speaker, the items were translated into their native tongue (i.e., German) based on their intended meaning and then translated back into English by a professional translator who was involved in the research project only for this purpose. The original English version was then compared with the translated version by an English native speaker and rated for content similarity, which was then the basis for corrections (see, for example, Eremenco et al., 2005 for a comparable approach).

Card Sorting

In line with the recommendations by MacKenzie et al. (2011), the dimensionality of the 138 items and 15 initial stressor categories as representations of digital stressors in the organizational context was initially assessed before the collection of survey data. More specifically, through two rounds of card sorting (five individuals in each round, a blend of professionals and students), it was assessed whether these 15 stressor categories adequately represent the dimensionality of digital stress. In particular, we ascertained (i) whether they are in themselves crucial to the assessment of digital stressors and (ii) whether they are sufficiently distinct from each other. Based on a methodology applied by Moore and Benbasat (1991), separate rounds of open sorting (i.e., participants defined stressor categories themselves) and closed sorting (i.e., participants assigned statements to predefined stressor categories) were conducted. The closed sorting round revealed particular problems related to the internal consistency of the initial stressor category Costs, and based on the two open categories (“Not clear” and “Does not fit into any group”), items were flagged as potential candidates for removal during the measurement model evaluation stage (please refer to Section 3 in the Supplementary Material for further details on the card sorting procedure).
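To illustrate how such closed sorting results can be summarized quantitatively, the following is a minimal sketch of the item placement ("hit") ratio often used with the Moore and Benbasat (1991) procedure. The table layout, column names, and example values are hypothetical and not taken from the original card sorting data.

```python
# Hypothetical sketch: item placement ("hit") ratios for a closed
# sorting round, in the spirit of Moore and Benbasat (1991). The
# long-format table, column names, and values are illustrative only.
import pandas as pd

sorts = pd.DataFrame({
    "judge":    [1, 1, 2, 2, 3, 3],
    "item":     ["OV1", "CX1", "OV1", "CX1", "OV1", "CX1"],
    "intended": ["Overload", "Complexity"] * 3,
    "assigned": ["Overload", "Complexity", "Overload",
                 "Not clear", "Overload", "Complexity"],
})

# Per-item hit ratio: the share of judges who placed the item in its
# intended stressor category; low ratios flag candidates for removal.
hits = (sorts["assigned"] == sorts["intended"]).groupby(sorts["item"]).mean()
print(hits)
```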

Nomological Network

To assess the construct validity of the proposed instrument (Cronbach and Meehl, 1955), a nomological network with constructs known to have a relationship with digital stressors was established (MacKenzie et al., 2011). If digital stressors are actually measured, comparable patterns to those found in previous (technostress) research should emerge (e.g., Coltman et al., 2008, also refer to criterion variables in this context). The structure of the resulting nomological network is based on frameworks used frequently in research on digital stress (e.g., Ayyagari et al., 2011; Adam et al., 2017; Agogo et al., 2018). Common to these frameworks is a set of stimuli appraised as stressors, which then leads to detrimental outcomes (i.e., strains).

Based on evidence by Sarabadani et al. (2018), who analyzed ten years of applications of a technostress measurement instrument published in 2008, we identified important antecedents and outcomes of digital stress. This set includes (1) emotional exhaustion due to work as an outcome that is reflective of individual well-being at work and potentially indicative of long-term consequences (e.g., health-related absences, Bakker et al., 2003), (2) the organizational climate for innovation, which is reflective of the perception that innovative behavior is supported within the organization, with innovation being crucial to organizational success (e.g., Wang and Wang, 2012), (3) job satisfaction as an outcome that is reflective of the work-related well-being of the individual, and (4) user satisfaction as an outcome that is reflective of the success of ICT employed at work.

Emotional Exhaustion

Next to cynicism and professional efficacy, emotional exhaustion is a common component of scales that measure symptoms of burnout and has been referred to as the stress dimension of burnout (Maslach et al., 2001, p. 403). More specifically, Maslach and Jackson (1981, p. 101) define it as “feelings of being emotionally overextended and exhausted by one’s work.” In line with previous research on digital stress (e.g., Ayyagari et al., 2011; Tams et al., 2014), it is expected that digital stressors will be positively related to emotional exhaustion.

Innovation Climate

Thus far, “…[a climate that] provide[s] support for innovation, encourage[s] communication, encourage[s] new ideas, and promote[s] supportive relationships among employees…” (Tarafdar et al., 2010, p. 315) has mainly been regarded as an inhibitor of digital stress (e.g., Ragu-Nathan et al., 2008; Tarafdar et al., 2015). It is argued here, though, that the presence of substantial stressors can reduce the perception of an organizational environment being conducive to innovative behavior (e.g., Clercq et al., 2014). Hence, it is expected that digital stressors will be negatively related to innovation climate.

Job Satisfaction

Previous research on digital stress has also found that job satisfaction, which can be defined as “a pleasurable or positive emotional state resulting from the appraisal of one’s job or job experiences” (Locke, 1976, p. 1300) can be negatively affected by digital stressors (e.g., Ragu-Nathan et al., 2008; Califf et al., 2015). In addition, reduced job satisfaction can be indicative of further long-term consequences of stress, such as individual turnover intention (e.g., Van Dick et al., 2004).

User Satisfaction

Both user satisfaction and job satisfaction are among the most important outcome variables in information systems research (e.g., Petter et al., 2008; Morris and Venkatesh, 2010), organization science (e.g., Bailey and Pearson, 1983; Wright and Bonett, 2007), and organizational psychology (e.g., Judge et al., 2001). Bhattacherjee (2001, p. 359) defines user satisfaction as “users’ affect with (feelings about) prior [digital technology] use” and it has been established in prior studies that ICT-related stressors can negatively impact this outcome variable (e.g., Tarafdar et al., 2010; Fuglseth and Sørebø, 2014).

In addition to these criterion variables, a set of control variables is also included in the nomological network that have frequently been part of digital stress investigations. More specifically, individual characteristics including age, gender, highest level of education, and computer self-efficacy were measured. In line with previous studies, it is expected that age will be negatively related to digital stress such that younger individuals will experience higher levels of digital stress (Ragu-Nathan et al., 2008; Tarafdar et al., 2011; Hauk et al., 2019). Note that some studies also report a positive relationship between age and digital stress. However, these studies typically focus on a narrow facet of digital stress, and not on a more global construct (e.g., Tams et al., 2014, 2018 focus on interruption-based stress during computer work). For gender, it is expected that men will experience higher levels of digital stress than women (e.g., Tarafdar et al., 2011; Riedl et al., 2013). For education, a negative relationship with digital stress is expected, such that individuals with a higher level of education will experience lower levels of digital stress than individuals with a lower level of education (e.g., Tarafdar et al., 2011). Finally, computer self-efficacy is included as a control variable, which refers to the “…judgment of one’s capability to use a computer” (Compeau and Higgins, 1995, p. 192). In line with existing research (e.g., Shu et al., 2011), it is expected that individuals with high levels of computer self-efficacy will experience lower levels of digital stress as compared with individuals with lower levels of computer self-efficacy. The research model that is the basis for scale validation is summarized in Figure 1, with relationships for control purposes only being indicated by a dashed line and control variables being indicated by a dashed border. It is also highlighted that the Digital Stressors Scale (hereafter DSS) will be estimated as a higher-order construct, with the scores of the lower-order constructs (i.e., stressor categories) being used as indicators, following the disjointed two-stage approach as outlined by Sarstedt et al. (2019).

Figure 1. Nomological network for scale validation.

Data Collection

Measures

Aside from the new measurement instrument, only established scales were used to collect data on the outcome and control constructs in the research model (see Figure 1). For emotional exhaustion, the corresponding five-item sub-scale in the Burnout Inventory by Maslach and Jackson (1981) was used (e.g., “I feel emotionally drained from my work”). For innovation climate, the five-item scale by Tarafdar et al. (2010) was applied (e.g., “We have a very open communications environment”). For job satisfaction, the three-item scale by Ragu-Nathan et al. (2008) was applied (e.g., “I like doing the things I do at work”). For user satisfaction, the four-item scale by Bhattacherjee (2001) was applied with a 7-point Likert scale with adjective pairs (e.g., “How do you feel about your overall experience of utilizing ICT in connection with your work tasks?” with answers ranging from 1 = very dissatisfied to 7 = very satisfied). For all other constructs, a 7-point Likert scale was consistently used ranging from 1 – “strongly disagree” to 7 – “strongly agree.”

Of the controls, only computer self-efficacy was a latent variable, which was measured using the 10-item instrument by Compeau and Higgins (1995) (e.g., “I could complete my tasks using new ICTs if there was no one around to tell me what to do as I go”) and a 7-point Likert scale (from 1 – “not at all imaginable” to 7 – “completely imaginable”). There were three options for gender (1 = male, 2 = female, and “prefer not to say,” with the latter treated as missing data); for age, participants were asked to indicate their year of birth; and for educational attainment, all of the single-choice options were based on a classification system used by the US Bureau of Labor Statistics (10 categories, plus “other,” plus “prefer not to say”; see Table 1 below for the specific categories, Brundage, 2017).

Table 1. Overview of sample characteristics.

To assess the convergent validity of the DSS, an existing instrument to measure digital stress was also included, the TSC scale by Ragu-Nathan et al. (2008). This scale’s 23 items were measured using a 7-point Likert scale ranging from 1 – “strongly disagree” to 7 – “strongly agree.”

Online Survey

Data were collected through a market research company from October 26 to November 8, 2018. The target population of the survey were employed individuals from the United States. All individuals who did not fulfill this criterion were excluded from participation. In addition to the survey items, two engagement checks that instructed participants to choose one specific option on the provided scale were included.

Sample Characteristics

The initial sample amounted to N = 3,358 completed questionnaires, which were then subjected to a rigorous screening procedure to ensure the quality of the data (this was necessary due to the length of the questionnaire and the repetitiveness of the items for the new instrument) (Meade and Craig, 2012; DeSimone et al., 2015). Speeders were excluded (i.e., individuals with completion times of less than 10 min; the average for all N = 3,358 was about 25 min; N = 1,048 were excluded based on this criterion), as were individuals who missed at least one of the engagement checks (N = 520). To further ensure the quality of the data, questionnaires containing a large share of missing data (i.e., more than 10% of items missing, N = 886) and/or showing low levels of engagement (i.e., a standard deviation of less than 0.50 on all continuous scales, N = 103) were also excluded. The final sample comprises N = 1,998 completed questionnaires for further analyses (please note that the listed exclusion criteria are not mutually exclusive and hence overlap, for example in the case of speeders and missing data).
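As an illustration of this screening logic, the following is a minimal pandas sketch. The DataFrame and column names (duration_min, check1, check2, likert_cols) and the required check answers are assumptions for illustration, and the row-level standard deviation is a simplification of the per-scale criterion used in the study.

```python
# A minimal sketch of the screening steps described above, assuming a
# pandas DataFrame `raw` with a completion-time column `duration_min`,
# engagement-check columns `check1`/`check2`, and Likert items listed
# in `likert_cols`. All names and the required check answers (4 and 2)
# are assumptions for illustration.
import pandas as pd

def screen(raw: pd.DataFrame, likert_cols: list) -> pd.DataFrame:
    df = raw[raw["duration_min"] >= 10]                    # exclude speeders
    df = df[(df["check1"] == 4) & (df["check2"] == 2)]     # engagement checks
    df = df[df[likert_cols].isna().mean(axis=1) <= 0.10]   # <= 10% missing
    # Simplification: one overall row-level standard deviation stands in
    # for the per-scale criterion (SD < 0.50 on all continuous scales).
    df = df[df[likert_cols].std(axis=1) >= 0.50]
    return df
```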

The data were then split randomly into two sub-samples, one for the evaluation of the measurement model and one for the evaluation of the structural model (MacKenzie et al., 2011). The characteristics of these samples are displayed in Table 1 and are also compared to the US census, where data were available (Brundage, 2017; United States Department of Labor - Bureau of Labor Statistics, 2018). It can be observed that overall the samples are slightly younger, contain more men, and show a higher educational attainment (e.g., more individuals with a bachelor’s degree and fewer individuals with only a high school diploma) than the US average, which has to be kept in mind when interpreting the results of the analyses.

Data Analysis

For step four in the scale development process (i.e., validation), a number of data analysis procedures are necessary, which are mainly used to establish the reliability and validity of the new instrument. In line with psychological and social science practices, data from the 7-point Likert scales were treated as interval-scaled (e.g., Norman, 2010; Wu and Leung, 2017).

The analyses were performed in several phases, as it was likely that the indicators that are part of the instrument would form a higher-order construct. For each level in this higher-order construct (i.e., from indicators to lower-order constructs, from lower-order to higher-order constructs), reliability and validity metrics were first assessed to guarantee internal consistency (initially without any relationships to external variables). Second, the relationships with criterion variables were tested (for further details, please refer to the Confirmatory Composite Analysis proposed by Hair et al., 2020).

In each phase, the directionality of the relationship between indicators and the higher-order construct had to be defined first (i.e., reflective if indicators are manifestations of a common construct, or formative if they form the construct; Jarvis et al., 2003). For the first level (i.e., indicators to lower-order constructs), we followed Ragu-Nathan et al. (2008) and hence used a reflective specification. We then initially conducted a series of exploratory factor analyses (EFA) as well as parallel analyses and Velicer’s MAP test (O'Connor, 2000) to develop further insight into the dimensionality of the DSS. For the resulting factors, we then followed the steps recommended by Hair et al. (2019) to ensure the quality of the measurement model involving the resulting 1st order constructs:

• In line with recommendations by MacKenzie et al. (2011), the validity of the new construct (construct validity) can be indicated by its (i) content validity (initially established based on the literature review that was used to create the items and initial factors), (ii) convergent validity (indicators load on their respective construct; the average variance extracted (AVE) is used as the main indicator, along with the magnitude and significance of each indicator’s loadings), (iii) discriminant validity (smallest possible overlap with other constructs; the Fornell-Larcker criterion (Fornell and Larcker, 1981) and the heterotrait-monotrait ratio of correlations (HTMT, Henseler et al., 2015) were used as indicators), and its (iv) nomological validity (based on existing knowledge, the construct is expected to show relationships with other constructs). Content, convergent, and discriminant validity were tested in both phases, while nomological validity was tested using the highest-level construct (i.e., the 2nd order construct in our case).

• For the reliability of the constructs, three indicators are used, namely Cronbach’s Alpha (α, the most conservative measure and therefore the lower bound), composite reliability (ρc, the upper bound), and rho A (ρA) (Hair et al., 2019). A minimal computational sketch of these reliability and validity metrics follows this list.
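The following is a minimal numpy sketch of the reliability and validity metrics named above, using textbook formulas consistent with Hair et al. (2019) and Henseler et al. (2015); it is illustrative only and not the SmartPLS implementation actually used in the study.

```python
# A minimal numpy sketch of the reliability and validity metrics named
# above, using textbook formulas; this is illustrative and not the
# SmartPLS implementation actually used in the study.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k) matrix of raw item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """rho_c from the standardized outer loadings of one construct."""
    lam = np.asarray(loadings)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean squared standardized loading."""
    lam = np.asarray(loadings)
    return float((lam ** 2).mean())

def htmt(items_a: np.ndarray, items_b: np.ndarray) -> float:
    """Heterotrait-monotrait ratio for two constructs' item blocks."""
    ka, kb = items_a.shape[1], items_b.shape[1]
    r = np.abs(np.corrcoef(np.hstack([items_a, items_b]), rowvar=False))
    hetero = r[:ka, ka:].mean()                      # between-block mean
    mono_a = r[:ka, :ka][np.triu_indices(ka, k=1)].mean()
    mono_b = r[ka:, ka:][np.triu_indices(kb, k=1)].mean()
    return hetero / np.sqrt(mono_a * mono_b)
```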

These indicators were used to create a set of 1st order constructs that showed sufficient reliability and convergent validity (constructs that did not fulfill these minimum requirements were removed). The indicators were then used to form a 2nd order construct, and the model specification (reflective vs. formative) was investigated using the criteria proposed by Coltman et al. (2008). For the 2nd order construct, reliability and validity were assessed again, including discriminant validity in relation to the four criterion variables.

These steps concluded the evaluation of the measurement model. Thus, the new instrument as well as other constructs included in this investigation showed sufficient internal consistency and were also sufficiently conceptually different from each other.

The nomological validity of the new instrument was then tested during the structural model evaluation, when its relationships with the four criterion variables and the control variables were tested. For this purpose, a number of regression models were estimated. In addition, the same procedures were implemented using the existing TSC instrument to make possible a direct comparison with our DSS. Moreover, we confirmed that the relationships with other variables found with TSC could also be found with the new instrument.

The psychometric properties of the DSS were predominantly assessed using PLS-SEM (using SmartPLS 3 v. 3.2.8) due to some of the benefits of this analytical approach as compared with covariance-based SEM (CB-SEM). According to recent evidence presented by Hair et al. (2019), PLS-SEM is more robust against non-normality of data and is particularly suited for formative models (formative models are also feasible in CB-SEM using MIMIC models, Diamantopoulos, 2011, though such models can lead to results that are not theoretically sound, Hair et al., 2019). Although CB-SEM is the prime method to investigate higher order constructs, it has also been shown recently that PLS-SEM supports models with higher order constructs (Sarstedt et al., 2019).

Results

Measurement Model Evaluation

To evaluate the measurement model, the factor structure for the DSS first had to be checked for its internal consistency, convergent validity, and discriminant validity (Hair et al., 2019; Sarstedt et al., 2019). For the related analyses, the first sub-sample was used and if not reported otherwise, 5,000 iterations were applied in each run.

Exploratory Factor Analysis (EFA)

As the open sorting task led to further stressor categories that could be considered, the factor structure was further checked employing an EFA in SPSS v. 26 (extraction: principal axis; rotation: promax) with all 138 items as input. With no factor restrictions, this approach resulted in 17 factors with an Eigenvalue above 1 (KMO = 0.985, Bartlett’s test: p < 0.001, explained variance: 54.70%) (please refer to Section 4.1 in the Supplementary Material for the full pattern matrix). When restricting the factor extraction to 15 and 20 factors respectively (the 15 original stressor categories and five categories considered from the open sorting), the results only changed marginally (15 factors: KMO = 0.985, Bartlett’s test: p < 0.001, explained variance: 53.86%; 20 factors: KMO = 0.985, Bartlett’s test: p < 0.001, explained variance: 55.87%). Hence, there is potential for factor reduction, which is further indicated by the first extracted factor explaining 36.92% of indicator variance and eight factors being sufficient to explain a majority of indicator variance (i.e., the cumulative explained variance of the first eight factors with the largest share of explained variance is 50.25%). In line with recommendations by O'Connor (2000), we also ran two additional analyses to gauge the number of factors in the final solution: a parallel analysis and Velicer’s MAP test (MAP), using the SPSS syntax provided by O'Connor (2000). For the parallel analysis, we compared the randomly generated Eigenvalues with the Eigenvalues produced by a principal component analysis (PCA) without rotation. The PCA resulted in 17 factors with an Eigenvalue above 1, but only 7 of these factors had Eigenvalues larger than the respective factors randomly generated during parallel analysis, which indicates that this number of factors should be retained. The MAP test, in turn, recommended that 12 factors be retained. Hence, both of these methods further substantiated the conclusion that a solution with 15 factors would not be realistic, and we expected the final factor solution to lie within the range of 7 to 12 factors.
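For illustration, the following is a minimal Python sketch of the logic of Horn's parallel analysis as described above; it mirrors the general procedure, not the exact O'Connor (2000) SPSS syntax, and all names are illustrative.

```python
# A minimal sketch of the logic of parallel analysis as described
# above; it mirrors the general procedure, not the exact O'Connor
# (2000) SPSS syntax. `data` is an (n, p) matrix of item scores.
import numpy as np

def parallel_analysis(data: np.ndarray, n_reps: int = 100,
                      seed: int = 0) -> int:
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Observed eigenvalues from the correlation matrix (PCA, no rotation).
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    # Eigenvalues from random normal data of the same shape.
    rand = np.empty((n_reps, p))
    for i in range(n_reps):
        noise = rng.standard_normal((n, p))
        rand[i] = np.sort(
            np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False)))[::-1]
    # Retain the leading components whose observed eigenvalue exceeds
    # the mean of the corresponding randomly generated eigenvalues.
    keep = obs > rand.mean(axis=0)
    return int(np.argmin(keep)) if not keep.all() else p
```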

Set of 1st Order Constructs

The original 15 stressor categories were used initially for measurement model evaluation in SmartPLS with the goal of creating a set of lower order constructs that is internally consistent as indicated by reliability metrics, has sufficient convergent validity as indicated by the average variance extracted (AVE) and, if possible at this stage (i.e., without the use of a higher-order construct), has sufficient discriminant validity as indicated by the Fornell-Larcker criterion and the HTMT (Hair et al., 2019; Sarstedt et al., 2019). This process involved the refinement of each stressor category (e.g., removal of indicators with low factor loadings and high cross-loadings), which was necessary as the 15 categories in their initial form did not meet the reliability and validity thresholds (see Tables 10 and 11 in Section 4.2 of the Supplementary Material).

First, the items removed from these categories were used to form alternative categories (i.e., the five categories identified during the card sorting exercise), with the goal of building internally consistent categories while also retaining as many items and categories as possible. This procedure was chosen because these additional categories overlapped with existing stressor categories. However, none of the additional categories emerged as a viable alternative (in terms of reliability and convergent validity) to the 15 existing categories without introducing additional ambiguity. As an example, based on the results of the open sorting procedure, “Distraction through ICT” would include between 5 and 18 items, most of which are part of the original category “Role stress,” yet it would also include items from “Social Environment” and “Safety.” Hence, creating this larger category “Distraction through ICT” would have threatened the internal consistency of other categories and would therefore have led to an overall less distinctive factor structure.

Second, the process was then repeated with the initial 15 categories, with the priority being the retention of categories rather than items (i.e., the number of items per category was reduced before the elimination of a whole category was considered). This was an iterative and laborious process, with a back and forth between the elimination of items and categories and a reintroduction of items and categories (e.g., when issues due to substantial cross-loadings were resolved, which then warranted an attempt to reintroduce a previously eliminated category). These two goals (i.e., trying to retain as many items and categories as possible, while also trying to create internally consistent categories) ultimately led to the elimination of five stressor categories (due to internal consistency issues). Hence, due to the iterative nature of this process and the challenges involved, further investigations into the factor structure of digital stressor categories and the validation of our final 10-factor structure are warranted.

The final factor structure, which fulfills all necessary criteria, is presented in Table 2 (i.e., reliability metrics > 0.700, Nunnally and Bernstein, 1994; AVE > 0.500, MacKenzie et al., 2011) (please refer to Section 4.2 in the Supplementary Material for further details). For discriminant validity, the following criteria were applied: fulfillment of the Fornell-Larcker criterion (Fornell and Larcker, 1981) and HTMT < 0.900 (fulfilled in most cases) (Henseler et al., 2015). In the resulting factor structure, five of the original stressor categories had to be removed due to issues related to reliability and/or convergent validity. The resulting set of 1st order constructs was then used as the basis to test a model including a higher order construct for the DSS, as digital stress has previously mainly been measured as a 2nd order construct (e.g., Ragu-Nathan et al., 2008; Sarabadani et al., 2018).

Table 2. Reliability and validity statistics for 1st order DSS constructs.

The items included in the final ten stressor categories are listed below:

I. Complexity

1. I often find it too complicated to accomplish a task using the ICT that are available to me at work.

2. I often need more time than expected to accomplish a task using the ICT that are available to me at work.

3. I feel that the ICT that are available to me at work are too confusing.

4. I often do not find enough time to keep up with new functionalities of ICT at work.

5. It would take me too long to completely figure out how to use the ICT that are available to me at work.

II. Conflicts

1. I feel that my private life suffers due to ICT enabling work-related problems to reach me everywhere.

2. It is too hard for me to keep my private life and work life separated due to ICT.

3. ICT make it harder to create clear boundaries between my private life and work life.

4. My work-life balance suffers due to ICT.

5. The ubiquity of ICT disturbs my work-life balance.

III. Insecurity

1. I feel that my job position is threatened due to ICT.

2. I fear that I could be replaced at work due to the increasing standardization of work processes, which is enabled by ICT.

3. I cannot be optimistic about my long-term job security because of the threat of ICT automatization.

4. I fear that I could be replaced by machines.

5. I fear that digitalization will cost me my job.

IV. Invasion (of Privacy)

1. I fear that my use of ICT is less confidential than I would like it to be.

2. I fear that the information that I exchange using ICT is not as protected as I would like it to be.

3. I fear that malevolent outsiders (e.g., hackers) can easily copy my identity due to ICT.

4. My personal information is too easily accessible due to ICT.

5. I fear that my personal data can easily be stolen by others online.

V. Overload

1. Due to ICT I have too much to do.

2. Due to ICT I have a too large variety of different things to do at work.

3. ICT make it too easy for other individuals to send me additional work.

4. I never have any spare time, because my schedule is too tightly organized by ICT.

5. There is a constant surge of work-related information coming in through ICT that I just cannot keep up with.

VI. Safety

1. I have to worry too often whether I might download malicious programs.

2. I have to worry too often whether I might receive malicious e-mails.

3. I fear that hackers might get access to company secrets through a mistake of mine.

4. I feel anxious when I get an e-mail from somebody that I do not know as it could be a malevolent attack.

5. E-Mails whose sender I do not know make me nervous.

VII. Social Environment

1. Due to ICT I have too much to do with the problems of others.

2. I think that ICT generate too much of an expectation that I have to be reachable everywhere and at any time.

3. Too much time gets lost at work because of irrelevant communication with other people on social media.

4. I feel that ICT create unwanted social norms (e.g., the expectation that e-mails should be answered right away).

5. It is too hard to take a break from social interactions at work due to the communication possibilities of ICT.

VIII. Technical Support

1. I have to worry about ICT-related problems as our organization does not offer enough support for their removal.

2. In the case of ICT-related problems, it happens too often that there is not enough support available at work.

3. I think that it happens too often that technical support is not available when I need it.

4. I often have to wait for a long time because technical problems cannot be adequately solved in our organization.

5. I fear that a technical problem I have at work could not be solved by anyone else at work.

IX. Usefulness

1. I think that the demands of my work and the functions provided by the ICT I use do not fit sufficiently.

2. I think that I do not gain enough benefits from using the ICT that I am provided with at work for my tasks.

3. The ICT I use at work are full of too many functionalities that I never need.

4. It requires too many different systems to fulfill the tasks that I have to do during an average day at work.

5. I think that most of the ICT I am supplied with at work is not useful enough and I could work without it.

X. Unreliability

1. I think that I am too often confronted with unexpected behavior of the ICT I use at work (e.g., breakdowns or long response times).

2. I think that I lose too much time due to technical malfunctions.

3. I think that I spend too much time trying to fix technical malfunctions.

4. There is just too much of my time at work wasted coping with the unreliability of ICT.

5. The daily hassles with ICT (e.g., slow programs or unexpected behavior) are really bothering me.

Model Specification

In line with previous conceptualizations of digital stress as a higher order construct (e.g., Ragu-Nathan et al., 2008), such a conceptualization was also tested for the new measurement scale. Support for a potential higher order construct can also be found in the correlation patterns of the 1st order constructs in the DSS, which range from 0.414 (Insecurity and Invasion) to 0.809 (Complexity and Unreliability). This is comparable with the correlations of the five 1st order constructs in an existing instrument (i.e., the TSC), which range from 0.357 (Invasion and Uncertainty) to 0.727 (Invasion and Overload). Regarding the relationships with outcome variables, in most cases, correlations with emotional exhaustion are positive (0.405 to 0.613). Further, correlations with innovation climate (-0.004 to -0.135; one exception with a correlation of 0.001), job satisfaction (-0.128 to -0.258), and user satisfaction (-0.199 to -0.409) are negative. Although a reflective specification was chosen for the 1st order constructs, it was further assessed whether the higher-order construct should be specified as a reflective or as a formative construct (Jarvis et al., 2003; MacKenzie et al., 2011). For this purpose, the six theoretical and empirical considerations proposed by Coltman et al. (2008) were applied to argue for a reflective or formative specification. Regarding the distinction between reflective and formative models, we refer to Kenny (2016), who distinguished them as follows: “A formative construct or composite refers to an index of a weighted sum of variables. In a formative construct, the indicators cause the construct, whereas in a more conventional latent variables, sometimes called reflective constructs, the indicators are caused by the latent variable.” This distinction is also in line with Jarvis et al. (2003) and MacKenzie et al. (2011), and we illustrate the reflective specification in Figure 2 and the formative specification in Figure 3 below (please refer to Section 4.3 in the Supplementary Material for further details).

Figure 2. Reflective model specification.

Figure 3. Formative model specification.

In order to estimate the 2nd order construct, we followed the disjointed two-stage approach as outlined by Sarstedt et al. (2019), which involved first calculating a model in which all 1st order constructs are connected to the outcome variables. The resulting factor scores for the 1st order constructs were then used for a second model in which these factor scores served as indicators for the 2nd order construct.
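To make the two-stage logic concrete, the following is an illustrative Python sketch in which PCA component scores stand in for the PLS-SEM construct scores actually produced by SmartPLS; the function and block names are hypothetical.

```python
# An illustrative sketch of the disjointed two-stage logic described
# above. PCA component scores stand in for the PLS-SEM construct
# scores produced by SmartPLS; function and block names are
# hypothetical and the data are assumed complete (or imputed).
import pandas as pd
from sklearn.decomposition import PCA

def first_stage_scores(df: pd.DataFrame, blocks: dict) -> pd.DataFrame:
    """Stage one: score each 1st order stressor category.

    `blocks` maps a category name (e.g., "Overload") to its item columns.
    """
    scores = {}
    for name, cols in blocks.items():
        scores[name] = PCA(n_components=1).fit_transform(df[cols]).ravel()
    return pd.DataFrame(scores, index=df.index)

# Stage two: the ten score columns then serve as indicators of one
# reflective 2nd order DSS construct, which is estimated against the
# criterion variables in a second model run.
```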

Following Coltman et al.’s (2008) considerations yielded mixed results, and therefore a more practical approach was chosen: a reflective specification was directly compared with a formative specification. This step involved the estimation of both types of models and the comparison of the resulting path coefficients and the explained variance for the endogenous variables (see also Coltman et al., 2008 for a comparable approach). The results of this comparison can be found in Table 3. The patterns for the path coefficients (i.e., sign and significance of paths from the 2nd order construct to the criterion variables) are comparable, though the loadings (weights) for the 1st order constructs differ, as some of the weights in the formative specification are not significant. In addition, the difference in explained variance only ranges from 0.004 to 0.023, which is considered marginal at this point, as it is generally expected that formative specifications explain a larger share of variance (Coltman et al., 2008). Therefore, as there is no clear indication for a formative specification, a reflective specification was chosen instead, in line with Ragu-Nathan et al. (2008). Nonetheless, as the results were mostly ambiguous, alternative specifications (e.g., 1st order reflective and 2nd order formative) should be further investigated in the future (see, for example, Sarstedt et al., 2019, p. 198, for an overview of all four main types of model specifications combining reflective and formative specifications).

Table 3. Comparison of 2nd order DSS construct with reflective and formative specification.

2nd Order Construct Model Assessment

Based on a reflective (1st order)/reflective (2nd order) specification (also referred to as a superordinate construct by Edwards (2001), or a Type I construct by Jarvis et al. (2003); as shown in Figure 2), reliability and validity metrics were then estimated again (Sarstedt et al., 2019). The results are displayed in Tables 4 and 5, which indicate sufficient reliability (Cronbach’s α, ρA, and ρc > 0.700), sufficient convergent validity (AVE > 0.500), and sufficient discriminant validity (based on the Fornell-Larcker criterion displayed in Table 4 and HTMT < 0.900, or < 0.850, as displayed in Table 5).

Table 4. Reliability and validity statistics for 2nd order DSS construct (reflective/reflective).

Table 5. Discriminant validity for 2nd order DSS construct based on HTMT.

Further details of the resulting model specification can be found in Section 4.5 of the Supplementary Material. In addition, two alternative model specifications were also tested and the results are presented in Section 4.4 of the Supplementary Material (one 1st order construct including all items in Section 4.4.1 and the possibility of several 2nd order constructs in Section 4.4.2), though none of them emerged as a better alternative to the current model specification.

Structural Model Evaluation

To evaluate the structural model, which also includes four control variables (i.e., age, gender, highest level of education, and computer self-efficacy), the second sub-sample was used and calculations in SmartPLS involved 5,000 iterations unless stated otherwise. To check initially whether the two sub-samples were comparable and that the random selection of samples would not coincidentally lead to different results, the scores for each included latent variable were statistically compared using Mann-Whitney tests. As none of these tests approached statistical significance, it can be assumed that model estimations with both samples will lead to comparable results. The model that was assessed at this stage is illustrated in Figure 4 below. Note that dashed variables and dashed lines indicate control variables and their relationships with outcome variables.

Figure 4. Structural model.
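A minimal sketch of the sub-sample comparability check described above, assuming the latent variable scores of the two random halves are available as DataFrames with matching columns (all names illustrative):

```python
# A minimal sketch of the sub-sample comparability check described
# above, assuming the latent variable scores of the two random halves
# are available as DataFrames with matching columns (names illustrative).
import pandas as pd
from scipy.stats import mannwhitneyu

def compare_subsamples(scores_a: pd.DataFrame,
                       scores_b: pd.DataFrame) -> pd.Series:
    pvals = {col: mannwhitneyu(scores_a[col].dropna(),
                               scores_b[col].dropna(),
                               alternative="two-sided").pvalue
             for col in scores_a.columns}
    return pd.Series(pvals, name="p_value")  # all p > 0.05 is expected here
```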

Explanatory Power

To assess construct validity, the relationships of the new instrument with the four criterion variables and a selection of control variables were estimated. In addition, the same models were also estimated with an alternative measure that is already established in research on digital stress (i.e., the TSC scale, Ragu-Nathan et al., 2008). An indicator of their comparable scope, in addition to their items and dimensions, is the high correlation of their latent variable scores of 0.923 (i.e., correlation of DSS and TSC, p < 0.001, based on Spearman correlation). Four separate regression models were estimated at this point (DSS without controls and with controls, and TSC without controls and with controls; for further details please refer to Section 5 in the Supplementary Material). The main results related to nomological validity are presented in Table 6, which includes an assessment of the support for previously expected relationships between constructs based on significance (p values) and path coefficients (β values). The main results related to explanatory power are presented in Table 7, which includes the path coefficients and significance for each criterion variable as well as the explained variance (R2 adjusted) and effect size (f2) in the case of models with controls.

Table 6. Nomological validity assessment for DSS and TSC.

Table 7. Path coefficients and effect sizes for DSS and TSC.
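For reference, the effect size f2 reported in Table 7 is conventionally computed from the explained variance of the structural model with and without the focal predictor (this is the standard formula, not one specific to this study):

```latex
f^2 = \frac{R^2_{\text{included}} - R^2_{\text{excluded}}}{1 - R^2_{\text{included}}}
```

Values above 0.02, 0.15, and 0.35 are read as small, medium, and large effects, respectively (Cohen, 1992).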

As can be seen in Table 6, there is support for most of the expected relationships, with two exceptions. First, while there is a significant influence of gender on the TSC in the expected direction (i.e., men experienced higher levels of digital stress as measured by the TSC), this relationship is not significant for the DSS. It has to be noted though that while this relationship is clearly not significant for the DSS (t = 1.613, p = 0.107), it is also not highly significant for the TSC (t = 2.121, p = 0.034). Hence, a substantial difference related to the influence of gender on the results should be subject to further investigations in the future. Second, the highest level of education did not have a significant impact on either the DSS or the TSC. As these results are again comparable across measures (as for age), this does not pose a substantial threat to the results in terms of nomological validity.

For both measures (i.e., DSS and TSC), all relationships with criterion variables are significant and remain significant if control variables are included in the structural model (Table 7). Based on the values for f2, it can also be observed that the new instrument shows a large effect size for emotional exhaustion, a medium effect size for user satisfaction, and a small effect size for innovation climate and job satisfaction (small: > 0.02, medium: > 0.15, large: > 0.35, based on Cohen, 1992). In addition, these effect sizes are consistently larger than those of the TSC. Based on the review results of Sarabadani et al. (2018), it can also be assessed whether the path coefficients found here are comparable to the findings of other studies or constitute potential outliers. For user satisfaction, Sarabadani et al. (2018) found that the TSC in previous studies showed path coefficients between -0.17 and -0.42, to which the value of this study of -0.35 is comparable. For job satisfaction, they found that the TSC in previous studies showed path coefficients between -0.13 and -0.41, to which the value of this study of -0.18 is also comparable. Hence, we can assume that the effect sizes found in this study are within an expected range. Finally, the included control variables combined only explain 3.8% of the variance in the DSS and 4.9% of the variance in the TSC, which further indicates that the observed effects are mainly due to the measures for digital stress. Figures 5 and 6 below summarize the estimates for the main relationships in the nomological network for the DSS and the TSC respectively. Please note that the numbers in brackets for the criterion variables indicate the total variance explained by the respective stress measure and the control variables.

Figure 5. Results of the nomological validity assessment for the DSS.

Figure 6. Results of the nomological validity assessment for the TSC.

Discussion

The DSS is a state-of-the-art instrument to measure the perception of digital stressors in the workplace. It comprises 50 items in ten stressor categories that can be consolidated into one 2nd order construct to measure digital stress, as was done in this study. Each indicator is measured on a 7-point Likert scale, with higher values indicating higher levels of stress. It has to be noted, though, that each stressor category is also a reliable and internally consistent scale in itself and could therefore be applied on its own, although further research is needed to establish the value of these separate scales.

As the DSS is not the first measurement instrument in the area of digital stress, it was tested against the widely used TSC by Ragu-Nathan et al. (2008). As an initial proof of its construct validity, the DSS correlates strongly with the existing measure (rs = 0.923, p < 0.001), though it provides additional benefits. First, the ten stressor categories involved (1st order constructs) cover aspects that are not included in the existing measure, such as perceptions of distress related to information security or technology unreliability. It follows that the new instrument better captures the richness of the phenomenon and also considers more recent forms of potential stressors. More specifically, the new scale additionally covers stress perceptions caused by data privacy issues (Invasion; e.g., a lack of confidentiality of data), the threat of malignant aspects of technology (Safety; e.g., malware or malicious e-mails), pressure from the social environment (Social Environment; e.g., pressure to respond to e-mails quickly), a lack of usefulness of technology (Usefulness; e.g., too many functionalities of ICT with little value to the work of a user), a lack of technical support (Technical Support; e.g., help not being available when technical malfunctions occur), and technology that does not behave as expected (Unreliability; e.g., long response times or system breakdowns).

Second, it was demonstrated that the DSS can explain more variance in a number of criterion variables, including emotional exhaustion (f2 DSS: 0.699, f2 TSC: 0.568, Δf2 = 0.131), innovation climate (f2 DSS: 0.043, f2 TSC: 0.024, Δf2 = 0.019), job satisfaction (f2 DSS: 0.061, f2 TSC: 0.036, Δf2 = 0.025), and user satisfaction (f2 DSS: 0.221, f2 TSC: 0.139, Δf2 = 0.081). This is further substantiated by a set of additionally calculated hierarchical regressions, which show that the DSS explains variance in each of our four criterion variables over and above the TSC (ΔR2 for emotional exhaustion of 0.042; ΔR2 for innovation climate of 0.025; ΔR2 for job satisfaction of 0.023; and ΔR2 for user satisfaction of 0.059; see Section 5.3 in the Supplementary Material for further details).
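For illustration, a minimal sketch of such a hierarchical regression is given below, assuming composite scores in a data frame with hypothetical column names (tsc, dss, and one column per criterion); this is not the authors' code.

```python
# Hedged sketch of a hierarchical regression: does the DSS explain criterion
# variance over and above the TSC? Column names are hypothetical placeholders.
import statsmodels.formula.api as smf

def delta_r2(df, criterion: str) -> float:
    """R2 gain from adding the DSS to a model that already contains the TSC."""
    base = smf.ols(f"{criterion} ~ tsc", data=df).fit()
    full = smf.ols(f"{criterion} ~ tsc + dss", data=df).fit()
    return full.rsquared - base.rsquared

# Usage (hypothetical data frame of composite scores):
# delta_r2(scores, "emotional_exhaustion")
```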

Third, the items of the DSS were formulated based on the concept of a discrepancy between situational circumstances and internal standards (e.g., desires) that forms distress perceptions. The TSC does not fully reflect this focus, as some of its items do not conform to the most established conceptualization of stress in psychology, namely the Lazarus model, which defines stress as a discrepancy between a desired and an actual state. Consider, for example, the item “There are constant changes in computer software in our organization” (part of “Techno-Uncertainty” in the TSC). Such constant changes can be regarded as stressful, but at the same time they could be perceived as beneficial, because constantly maintained technologies are less likely to show bugs or other errors. Hence, the TSC is limited in its potential to capture distress.

Limitations

This study's limitations are mainly caused by practical constraints inherent in the development of a new measurement instrument, as not every step in the development process can feasibly be executed in an ideal fashion (MacKenzie et al., 2011). First, as data were collected through a single cross-sectional survey, the threat of common-method bias must be considered (Podsakoff et al., 2003). Therefore, several remedies were implemented to reduce the likelihood that the results of this study were affected by this potential issue. These included engagement checks in the survey (i.e., two separate questions that instructed the participant to choose a specific option), splitting the overall sample into sub-samples that served as an initial means to cross-validate the results (e.g., MacKenzie et al., 2011), and a statistical assessment of the extent of common-method bias (i.e., the full-collinearity test, Kock, 2015), which did not indicate any significant bias. Nonetheless, further investigations should be conducted to cross-validate the results of this study.
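As a rough illustration of that statistical check, the sketch below computes full-collinearity VIFs in the spirit of Kock (2015) by regressing each construct score on all others; in the original procedure this is applied to latent variable scores within the PLS model, and VIFs above approximately 3.3 are commonly read as a warning sign for common-method bias.

```python
# Illustrative full-collinearity check: regress each construct score on all
# other construct scores and report the resulting VIFs. Input is assumed to
# be a DataFrame with one column per construct score (hypothetical layout).
import pandas as pd
import statsmodels.api as sm

def full_collinearity_vifs(scores: pd.DataFrame) -> pd.Series:
    vifs = {}
    for target in scores.columns:
        y = scores[target]
        X = sm.add_constant(scores.drop(columns=target))
        r2 = sm.OLS(y, X).fit().rsquared
        vifs[target] = 1.0 / (1.0 - r2)  # VIF = 1 / (1 - R2)
    return pd.Series(vifs)
```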

In addition, while cross-validation of new measurement instruments is critical, in line with the recommendations by MacKenzie et al. (2011) this study first and foremost ensured that the conceptual definition of the instrument, the development of its indicators, and the specification of the measurement model are sound. Hence, although a large sample of N = 1,998 individuals (mostly representative of the US employed population) formed the basis for this study, there remains a need for cross-validation, in particular an extension to other countries and languages.

Directions for Future Research

Through a cross-sectional online survey, this study showed that the perception of distress caused by ICT is positively related to emotional exhaustion and negatively related to satisfaction with ICT at work. The investigation showed that a linear relationship between the DSS and these constructs already explains a substantial share of their variance (i.e., adjusted R2 without controls of 0.409 for emotional exhaustion and 0.192 for user satisfaction). In further investigations into the types of relationships that digital stressors show with outcome variables, it should be kept in mind that the relevance of the stressors included in the DSS may change. In fact, the changing nature of our technological environment was one of the main motivations for the development of the DSS and the investigation of its dimensionality (i.e., stressor categories). Regular updates of instruments such as the DSS are crucial to ensure that stressors that become more relevant in the work context over time are not overlooked in research and organizational practice. Likewise, it has to be acknowledged that stressors may become obsolete or prove less relevant than others over time and may therefore have to be removed from the set of stressor categories included in the DSS. This is particularly true when the goal is to investigate digital stress in the context of more specific participant groups (e.g., less formally educated people, Marchiori et al., 2018) or technologies (e.g., social media, Maier et al., 2015). Hence, studies that intend to apply the DSS should always reflect on the composition of its stressors and justify their relevance for the specific research question.

While this study investigated the role of digital stressors within a nomological network of important outcome variables that have previously been found to be related to digital stress (i.e., emotional exhaustion, innovation climate, job satisfaction, user satisfaction), as well as a set of control variables that have been found to influence digital stress appraisal (i.e., age, gender, education, computer self-efficacy), further variables should be added to this nomological network in future studies to further bolster the validity of the proposed instrument. Candidates include individual characteristics such as personality traits like negative affectivity and extraversion (Ayyagari et al., 2011) and organizational characteristics such as social norms related to technology use (Barley et al., 2011). In addition, relationships between these variables that are conceptualized in seminal theories (e.g., Person-Environment Fit Theory in the organizational stress domain, Edwards et al., 1998; or the Technology Acceptance Model in the technology use domain, Davis, 1989) should be considered as model extensions in future research.

While this study demonstrated the convergent validity of the DSS with another measure related to digital stress (i.e., the TSC), future research should also establish further convergent and/or discriminant validity with other potentially related measures, particularly in the area of occupational stress. Established measures of stress perceptions at work are natural candidates for such investigations (e.g., Hackman and Oldham, 1975; Motowidlo et al., 1986; Williams and Cooper, 1998; Siegrist et al., 2009).

Overall, our newly developed instrument complements and updates the existing set of measures in the field of occupational stress research, and particularly in research on digital stress. It is hoped that the instrument's usefulness, demonstrated in this study, will be further validated and extended through application in future research.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author Contributions

TF, MR, and RR conceptualized the development approach. TF collected the data. TF and MR processed the data and performed the analyses. TF drafted the manuscript. All authors discussed the results and commented on the manuscript.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

This research was funded by the Austrian Science Fund as part of the project ‘Technostress in Organizations’ (grant no. P 30865) at the University of Applied Sciences Upper Austria.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2021.607598/full#supplementary-material

Footnotes

1. https://www.respondi.com/

References

Adam, M. T. P., Gimpel, H., Maedche, A., and Riedl, R. (2017). Design blueprint for stress-sensitive adaptive enterprise systems. Bus. Inform. Syst. Eng. 59, 277–291. doi: 10.1007/s12599-016-0451-3

Agogo, D., Hess, T. J., Te’eni, D., and McCoy, S. (2018). “How does tech make you feel?”: a review and examination of negative affective responses to technology use. Eur. J. Inform. Syst. 27, 570–599. doi: 10.1080/0960085X.2018.1435230

Al-Fudail, M., and Mellar, H. (2008). Investigating teacher stress when using technology. Comput. Educ. 51, 1103–1110.

Ayyagari, R., Grover, V., and Purvis, R. (2011). Technostress: technological antecedents and implications. MIS Q. 35, 831–858.

Bailey, J. E., and Pearson, S. W. (1983). Development of a tool for measuring and analyzing computer user satisfaction. Manage. Sci. 29, 530–545. doi: 10.1287/mnsc.29.5.530

Bakker, A. B., Demerouti, E., de Boer, E., and Schaufeli, W. B. (2003). Job demands and job resources as predictors of absence duration and frequency. J. Vocat. Behav. 62, 341–356. doi: 10.1016/S0001-8791(02)00030-1

Barley, S. R., Meyerson, D. E., and Grodal, S. (2011). E-mail as a source and symbol of stress. Organ. Sci. 22, 887–906. doi: 10.1287/orsc.1100.0573

Bélanger, F., and Crossler, R. E. (2011). Privacy in the digital age: a review of information privacy research in information systems. MIS Q. 35, 1017–1041. doi: 10.2307/41409971

Benzari, A., Khedhaouria, A., and Torrès, O. (2020). “The rise of technostress: a literature review from 1984 until 2018,” in Proceedings of the European Conference on Information Systems (ECIS), Marrakech.

Bhattacherjee, A. (2001). Understanding information systems continuance: an expectation-confirmation model. MIS Q. 25, 351–370. doi: 10.2307/3250921

Boucsein, W. (2009). “Forty years of research on system response times – what did we learn from it?,” in Industrial Engineering and Ergonomics, ed. C. M. Schlick (Berlin: Springer), 575–593.

Brod, C. (1982). Managing technostress: optimizing the use of computer technology. Pers. J. 61, 753–757.

Brundage, V. Jr. (2017). Spotlight on Statistics: Profile of the Labor Force by Educational Attainment. Available online at: https://www.bls.gov/spotlight/2017/educational-attainment-of-the-labor-force/home.htm (accessed February 27, 2021).

Brynjolfsson, E. (1996). The contribution of information technology to consumer welfare. Inform. Syst. Res. 7, 281–300. doi: 10.1287/isre.7.3.281

Brynjolfsson, E., and Hitt, L. M. (2000). Beyond computation: information technology, organizational transformation and business performance. J. Econ. Perspect. 14, 23–48. doi: 10.1257/jep.14.4.23

Buntin, M. B., Burke, M. F., Hoaglin, M. C., and Blumenthal, D. (2011). The benefits of health information technology: a review of the recent literature shows predominantly positive results. Health Aff. 30, 464–471. doi: 10.1377/hlthaff.2011.0178

Burke, M. S. (2009). The incidence of technological stress among baccalaureate nurse educators using technology during course preparation and delivery. Nurse Educ. Today 29, 57–64.

Califf, C., Sarker, S., Sarker, S., and Fitzgerald, C. (2015). “The bright and dark sides of technostress: an empirical study of healthcare workers,” in Proceedings of ICIS 2015, Fort Worth, TX.

Cao, X., and Sun, J. (2018). Exploring the effect of overload on the discontinuous intention of social media users: an S-O-R perspective. Comput. Hum. Behav. 81, 10–18. doi: 10.1016/j.chb.2017.11.035

De Clercq, D., Dimov, D., and Belausteguigoitia, I. (2014). Perceptions of adverse work conditions and innovative behavior: the buffering roles of relational resources. Entrep. Theory Pract. 40, 515–542. doi: 10.1111/etap.12121

Cohen, J. (1992). A power primer. Psychol. Bull. 112, 155–159.

Coltman, T., Devinney, T. M., Midgley, D. F., and Venaik, S. (2008). Formative versus reflective measurement models: two applications of formative measurement. J. Bus. Res. 61, 1250–1262. doi: 10.1016/j.jbusres.2008.01.013

Compeau, D. R., and Higgins, C. A. (1995). Computer self-efficacy: development of a measure and initial test. MIS Q. 19, 189–211. doi: 10.2307/249688

Cronbach, L. J., and Meehl, P. E. (1955). Construct validity in psychological tests. Psychol. Bull. 52, 281–302. doi: 10.1037/h0040957

Cummings, T. G., and Cooper, C. L. (1998). “A cybernetic theory of organizational stress,” in Theories of Organizational Stress, ed. C. L. Cooper (Oxford: Oxford University Press), 101–121.

D’Arcy, J., Herath, T., and Shoss, M. K. (2014). Understanding employee responses to stressful information security requirements: a coping perspective. J. Manag. Inform. Syst. 31, 285–318. doi: 10.2753/MIS0742-1222310210

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13, 319–340.

DeSimone, J. A., Harms, P. D., and DeSimone, A. J. (2015). Best practice recommendations for data screening. J. Organ. Behav. 36, 171–181. doi: 10.1002/job.1962

Diamantopoulos, A. (2011). Incorporating formative measures into covariance-based structural equation models. MIS Q. 35, 335–358. doi: 10.2307/23044046

Edwards, J. R. (2001). Multidimensional constructs in organizational behavior research: an integrative analytical framework. Organ. Res. Methods 4, 144–192. doi: 10.1177/109442810142004

Edwards, J. R., Caplan, R. D., and van Harrison, R. (1998). “Person-environment fit theory: conceptual foundations, empirical evidence, and directions for future research,” in Theories of Organizational Stress, ed. C. L. Cooper (Oxford: Oxford University Press), 28–67.

Eremenco, S. L., Cella, D., and Arnold, B. J. (2005). A comprehensive method for the translation and cross-cultural validation of health status questionnaires. Eval. Health Profess. 28, 212–232. doi: 10.1177/0163278705275342

Fischer, T., and Riedl, R. (2017). Technostress research: a nurturing ground for measurement pluralism? Commun. Assoc. Inform. Syst. 40, 375–401.

Fornell, C., and Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 18, 39–50. doi: 10.2307/3151312

Frey, C. B., and Osborne, M. A. (2017). The future of employment: how susceptible are jobs to computerisation? Technol. Forecast. Soc. Change 114, 254–280. doi: 10.1016/j.techfore.2016.08.019

Fuglseth, A. M., and Sørebø, Ø. (2014). The effects of technostress within the context of employee use of ICT. Comput. Hum. Behav. 40, 161–170.

Galluch, P., Grover, V., and Thatcher, J. B. (2015). Interrupting the workplace: examining stressors in an information technology context. J. Assoc. Inform. Syst. 16, 1–47.

Gartner (2019). Gartner Says Global IT Spending to Reach $3.8 Trillion in 2019. Available online at: https://www.gartner.com/en/newsroom/press-releases/2019-01-28-gartner-says-global-it-spending-to-reach–3-8-trillio (accessed February 27, 2021).

Hackman, J. R., and Oldham, G. R. (1975). Development of the job diagnostic survey. J. Appl. Psychol. 60, 159–170. doi: 10.1037/h0076546

Hair, J. F., Howard, M. C., and Nitzl, C. (2020). Assessing measurement model quality in PLS-SEM using confirmatory composite analysis. J. Bus. Res. 109, 101–110. doi: 10.1016/j.jbusres.2019.11.069

Hair, J. F., Risher, J. J., Sarstedt, M., and Ringle, C. M. (2019). When to use and how to report the results of PLS-SEM. Euro. Bus. Rev. 31, 2–24. doi: 10.1108/EBR-11-2018-0203

Hauk, N., Göritz, A. S., and Krumm, S. (2019). The mediating role of coping behavior on the age-technostress relationship: a longitudinal multilevel mediation model. PLoS One 14:e0213349. doi: 10.1371/journal.pone.0213349

Henseler, J., Ringle, C. M., and Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 43, 115–135. doi: 10.1007/s11747-014-0403-8

Hwang, I., and Cha, O. (2018). Examining technostress creators and role stress as potential threats to employees’ information security compliance. Comput. Hum. Behav. 81, 282–293. doi: 10.1016/j.chb.2017.12.022

IDC (2019). IDC Forecasts Worldwide Smartphone Market Will Face Another Challenging Year in 2019 with a Return to Growth on the Horizon. Framingham, MA: IDC.

Internetworldstats (2020). World Internet Usage and Population Statistics. Available online at: https://www.internetworldstats.com/stats.htm (accessed February 27, 2021).

Ivancevich, J. M., and Matteson, M. T. (1980). Stress and Work: A Managerial Perspective. Management applications series. Glenview, IL: Scott, Foresman.

Jarvis, C. B., MacKenzie, S. B., and Podsakoff, P. M. (2003). A critical review of construct indicators and measurement model misspecification in marketing and consumer research. J. Consum. Res. 30, 199–218. doi: 10.1086/376806

Jones, D. E. (1999). Ten years later: support staff perceptions and opinions on technology in the workplace. Library Trends 47, 711–745.

Judge, T. A., Thoresen, C. J., Bono, J. E., and Patton, G. K. (2001). The job satisfaction-job performance relationship: a qualitative and quantitative review. Psychol. Bull. 127, 376–407. doi: 10.1037//0033-2909.127.3.376

Kahn, R. L., and Byosiere, P. (1992). “Stress in organizations,” in Handbook of Industrial and Organizational Psychology, 2nd Edn, eds M. D. Dunnette and L. M. Hough (Palo Alto, CA: Consulting Psychologists Press), 571–650.

Kenny, D. A. (2016). Miscellaneous Variables: Formative Variables and Second-Order Factors. Available online at: http://davidakenny.net/cm/mvar.htm (accessed February 27, 2021).

Kock, N. (2015). Common method bias in PLS-SEM. Int. J. e-Collab. 11, 1–10. doi: 10.4018/ijec.2015100101

La Torre, G., Esposito, A., Sciarra, I., and Chiappetta, M. (2019). Definition, symptoms and risk of techno-stress: a systematic review. Int. Arch. Occup. Environ. Health 92, 13–35. doi: 10.1007/s00420-018-1352-1

Lazarus, R. S., and Folkman, S. (1984). Stress, Appraisal, and Coping. New York, NY: Springer Pub. Co.

Legner, C., Eymann, T., Hess, T., Matt, C., Böhmann, T., Drews, P., et al. (2017). Digitalization: opportunity and challenge for the business and information systems engineering community. Bus. Inform. Syst. Eng. 59, 301–308. doi: 10.1007/s12599-017-0484-2

Locke, E. A. (1976). “The nature and causes of job satisfaction,” in Handbook of Industrial and Organizational Psychology, ed. M. D. Dunnette (Chicago: Rand McNally College Pub. Co.), 1297–1343.

MacKenzie, S. B., Podsakoff, P. M., and Podsakoff, N. P. (2011). Construct measurement and validation procedures in MIS and behavioral research: integrating new and existing techniques. MIS Q. 35, 293–334. doi: 10.2307/23044045

Maier, C., Laumer, S., Weinert, C., and Weitzel, T. (2015). The effects of technostress and switching stress on discontinued use of social networking services: a study of facebook use. Inform. Syst. J. 25, 275–308.

Marchiori, D. M., Mainardes, E. W., and Rodrigues, R. G. (2018). Do individual characteristics influence the types of technostress reported by workers? Int. J. Hum. Comp. Inter. 35, 218–230. doi: 10.1080/10447318.2018.1449713

Maslach, C., and Jackson, S. E. (1981). The measurement of experienced burnout. J. Organ. Behav. 2, 99–113. doi: 10.1002/job.4030020205

Maslach, C., Schaufeli, W. B., and Leiter, M. P. (2001). Job burnout. Annu. Rev. Psychol. 52, 397–422. doi: 10.1146/annurev.psych.52.1.397

McKeen, J. D., Guimaraes, T., and Wetherbe, J. C. (1994). The relationship between user participation and user satisfaction: an investigation of four contingency factors. MIS Q. 18, 427–451. doi: 10.2307/249523

Meade, A. W., and Craig, S. B. (2012). Identifying careless responses in survey data. Psychol. Methods 17, 437–455. doi: 10.1037/a0028085

Moore, G. C., and Benbasat, I. (1991). Development of an instrument to measure the perceptions of adopting an information technology innovation. Inform. Syst. Res. 2, 192–222.

Morris, M. G., and Venkatesh, V. (2010). Job characteristics and job satisfaction: understanding the role of enterprise resource planning system implementation. MIS Q. 34, 143–161. doi: 10.2307/20721418

Motowidlo, S. J., Manning, M. R., and Packard, J. S. (1986). Occupational stress: its causes and consequences for job performance. J. Appl. Psychol. 71, 618–629.

Mukhopadhyay, T., Rajiv, S., and Srinivasan, K. (1997). Information technology impact on process output and quality. Manag. Sci. 43, 1645–1659. doi: 10.1287/mnsc.43.12.1645

Netemeyer, R. G., Bearden, W. O., and Sharma, S. C. (2003). Scaling procedures: Issues and applications. London: Sage.

Norman, G. (2010). Likert scales, levels of measurement and the “laws” of statistics. Adv. Health Sci. Educ. Theory Pract. 15, 625–632. doi: 10.1007/s10459-010-9222-y

Nunnally, J. C., and Bernstein, I. H. (1994). Psychometric theory, 3rd Edn. New York, NY: McGraw-Hill.

Ogan, C., and Chung, D. (2002). Stressed out! A national study of women and men journalism and mass communication faculty, their uses of technology, and levels of professional and personal stress. J. Mass Commun. Educ. 57, 352–369. doi: 10.1177/107769580205700405

O’Connor, B. P. (2000). SPSS and SAS programs for determining the number of components using parallel analysis and Velicer’s MAP test. Behav. Res. Methods Instru. Comp. 32, 396–402.

Parviainen, P., Tihinen, M., Kääriäinen, J., and Teppola, S. (2017). Tackling the digitalization challenge: how to benefit from digitalization in practice. Int. J. Inform. Syst. Project Manag. 5, 63–77.

Petter, S., DeLone, W., and McLean, E. (2008). Measuring information systems success: models, dimensions, measures, and interrelationships. Eur. J. Inf. Syst. 17, 236–263. doi: 10.1057/ejis.2008.15

Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., and Podsakoff, N. P. (2003). Common method biases in behavioral research: a critical review of the literature and recommended remedies. J. Appl. Psychol. 88, 879–903. doi: 10.1037/0021-9010.88.5.879

Poole, C. E., and Denny, E. (2001). Technological change in the workplace: a statewide survey of community college library and learning resources personnel. Coll. Res. Librar. 62, 503–515. doi: 10.5860/crl.62.6.503

Ragu-Nathan, T. S., Tarafdar, M., Ragu-Nathan, B. S., and Tu, Q. (2008). The consequences of technostress for end users in organizations: conceptual development and empirical validation. Inform. Syst. Res. 19, 417–433.

Riedl, R. (2013). On the biology of technostress: literature review and research agenda. Data Base Adv. Inform. Syst. 44, 18–55.

Riedl, R., Kindermann, H., Auinger, A., and Javor, A. (2012). Technostress from a neurobiological perspective - system breakdown increases the stress hormone cortisol in computer users. Bus. Inform. Syst. Eng. 4, 61–69.

Riedl, R., Kindermann, H., Auinger, A., and Javor, A. (2013). Computer breakdown as a stress factor during task completion under time pressure: identifying gender differences based on skin conductance. Adv. Hum. Comp. Inter. 1, 1–8.

Sahin, Y. L., and Coklar, A. N. (2009). Social networking users’ views on technology and the determination of technostress levels. Proc. Soc. Behav. Sci. 1, 1437–1442.

Sarabadani, J., Carter, M., and Compeau, D. R. (2018). “10 years of research on technostress creators and inhibitors: synthesis and critique,” in Proceedings of AMCIS 2018, New Orleans, LA.

Sarstedt, M., Hair, J. F., Cheah, J.-H., Becker, J.-M., and Ringle, C. M. (2019). How to specify, estimate, and validate higher-order constructs in PLS-SEM. Austr. Mark. J. 27, 197–211. doi: 10.1016/j.ausmj.2019.05.003

Schellhammer, S., and Haines, R. (2013). “Towards contextualizing stressors in technostress research,” in Proceedings of the International Conference on Information Systems (ICIS), Milan.

Schneider, S., and Sunyaev, A. (2016). Determinant factors of cloud-sourcing decisions: reflecting on the IT outsourcing literature in the era of cloud computing. J. Inform. Technol. 31, 1–31. doi: 10.1057/jit.2014.25

Selye, H. (1956). The Stress of Life. New York, NY: McGraw Hill.

Shu, Q., Tu, Q., and Wang, K. (2011). The impact of computer self-efficacy and technology dependence on computer-related technostress: a social cognitive theory perspective. Int. J. Hum. Comp. Inter. 27, 923–939.

Siegrist, J., Wege, N., Pühlhofer, F., and Wahrendorf, M. (2009). A short generic measure of work stress in the era of globalization: effort-reward imbalance. Int. Arch. Occup. Environ. Health 82, 1005–1013.

Smith, H. J., Dinev, T., and Xu, H. (2011). Information privacy research: an interdisciplinary review. MIS Q. 35, 989–1015. doi: 10.2307/41409970

Stock, R. M. (2015). Is boreout a threat to frontline employees’ innovative work behavior? J. Prod. Innov. Manag. 32, 574–592. doi: 10.1111/jpim.12239

Tams, S., Hill, K., Ortiz de Guinea, A., Thatcher, J., and Grover, V. (2014). NeuroIS - alternative or complement to existing methods? Illustrating the holistic effects of neuroscience and self-reported data in the context of technostress research. J. Assoc. Inform. Syst. 15, 723–753.

Tams, S., Thatcher, J. B., and Grover, V. (2018). Concentration, competence, confidence, and capture: an experimental study of age, interruption-based technostress, and task performance. J. Assoc. Inform. Syst. 19, 857–908. doi: 10.17705/1jais.00511

Tarafdar, M., Pullins, E. B., and Ragu-Nathan, T. S. (2015). Technostress: negative effect on performance and possible mitigations. Inform. Syst. J. 25, 103–132.

Tarafdar, M., Tu, Q., and Ragu-Nathan, T. S. (2010). Impact of technostress on end-user satisfaction and performance. J. Manag. Inform. Syst. 27, 303–334.

Tarafdar, M., Tu, Q., Ragu-Nathan, T. S., and Ragu-Nathan, B. S. (2011). Crossing to the dark side: examining creators, outcomes, and inhibitors of technostress. Commun. ACM 54, 113–120.

United States Department of Labor - Bureau of Labor Statistics. (2018). Household Data - Annual Averages: 11. Employed Persons by Detailed Occupation, Sex, Race, and Hispanic or Latino Ethnicity. Washington, DC: United States Department of Labor - Bureau of Labor Statistics.

Van Dick, R., Christ, O., Stellmacher, J., Wagner, U., Ahlswede, O., Grubba, C., et al. (2004). Should I stay or should I go? Explaining turnover intentions with organizational identification and job satisfaction. Br. J. Manag. 15, 351–360. doi: 10.1111/j.1467-8551.2004.00424.x

Voakes, P. S., Beam, R. A., and Ogan, C. (2003). The impact of technological change on journalism education: a survey of faculty and administrators. J. Mass Commun. Educ. 57, 318–334.

Wang, Z., and Wang, N. (2012). Knowledge sharing, innovation and firm performance. Expert Syst. Appl. 39, 8899–8908. doi: 10.1016/j.eswa.2012.02.017

Williams, S., and Cooper, C. L. (1998). Measuring occupational stress: development of the pressure management indicator. J. Occup. Health Psychol. 3, 306–321.

Wilson, C., Hargreaves, T., and Hauxwell-Baldwin, R. (2017). Benefits and risks of smart home technologies. Energy Policy 103, 72–83. doi: 10.1016/j.enpol.2016.12.047

Wright, T. A., and Bonett, D. G. (2007). Job satisfaction and psychological well-being as nonadditive predictors of workplace turnover. J. Manage. 33, 141–160. doi: 10.1177/0149206306297582

Wu, H., and Leung, S.-O. (2017). Can Likert scales be treated as interval scales? — A simulation study. J. Soc. Serv. Res. 43, 527–532. doi: 10.1080/01488376.2017.1329775

Keywords: digital stress, stressors, questionnaire, measurement scale, validation, technostress, work stress, digitalization

Citation: Fischer T, Reuter M and Riedl R (2021) The Digital Stressors Scale: Development and Validation of a New Survey Instrument to Measure Digital Stress Perceptions in the Workplace Context. Front. Psychol. 12:607598. doi: 10.3389/fpsyg.2021.607598

Received: 17 September 2020; Accepted: 15 February 2021;
Published: 12 March 2021.

Edited by:

Monika Fleischhauer, MSB Medical School Berlin, Germany

Reviewed by:

Karl-Heinz Renner, Munich University of the Federal Armed Forces, Germany
Rajnish Kumar Misra, Jaypee Institute of Information Technology, India
Emina Hadzibajramovic, University of Gothenburg, Sweden

Copyright © 2021 Fischer, Reuter and Riedl. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Thomas Fischer, thomas.fischer@jku.at; Martin Reuter, martin.reuter@uni-bonn-diff.de; René Riedl, rene.riedl@fh-steyr.at

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.