ORIGINAL RESEARCH article

Front. Psychol., 12 January 2021
Sec. Organizational Psychology

A Comparison of Conventional and Technology-Mediated Selection Interviews With Regard to Interviewees’ Performance, Perceptions, Strain, and Anxiety

  • 1 Institut für Psychologie und Pädagogik, Universität Ulm, Ulm, Germany
  • 2 Migros-Genossenschafts-Bund, Zurich, Switzerland
  • 3 Department of Psychology, University of Fribourg, Fribourg, Switzerland

Organizations increasingly use technology-mediated interviews. However, only limited research is available concerning the comparability of different interview media, and most of the available studies stem from a time when technology-mediated interviews were less common than they are today. In an experiment using simulated selection interviews, we compared traditional face-to-face (FTF) interviews with telephone and videoconference interviews to determine whether ratings of interviewees’ performance, their perceptions of the interview, or their strain and anxiety are affected by the type of interview. Before participating in the actual interview, participants had a more positive view of FTF interviews compared to technology-mediated interviews. However, fairness perceptions no longer differed after the interview. Furthermore, there were no differences between the three interview media concerning psychological and physiological indicators of strain or interview anxiety. Nevertheless, ratings of interviewees’ performance were lower in the technology-mediated interviews than in FTF interviews. Thus, differences between interview media can still be found today, even though most applicants are now much more familiar with technology-mediated communication than in the past. The results show that organizations should take this into account and therefore avoid using different interview media when they interview different applicants for the same job opening.

Introduction

Over the past decades, technological progress has considerably changed how organizations recruit and select applicants (Tippins and Adler, 2011; Ryan et al., 2015; Ployhart et al., 2017). The computer and telecommunication technology now available allows organizations to use web-based and computer-administered selection tools at all stages of the selection process. Furthermore, given the COVID-19 pandemic, many organizations had to change their selection processes to web-based or technology-mediated procedures to be able to evaluate candidates even during times of physical distancing. To this end, rather diverse tools have been introduced, such as multimedia simulation tests that are administered via the internet (Oostrom et al., 2010), internet-based testing that can be completed on a computer (Tippins, 2009) or even on a smartphone (e.g., Arthur et al., 2018), and many other procedures (cf. Tippins and Adler, 2011; Ryan et al., 2015; Woods et al., 2019).

The increasing use of technology-based selection has also been accompanied by significant changes in how selection interviews are administered, with organizations increasingly making use of technology-mediated interviews. There has not only been an increase in the number of interviews administered via the internet; even before the COVID-19 pandemic, the number of interviews conducted via telephone had risen compared to levels from the last millennium (Amoneit et al., 2020). This increased use of technology-mediated selection interviews in addition to, or instead of, traditional face-to-face (FTF) interviews raises important questions concerning the comparability of the different ways in which interviews can be conducted. However, although these questions have been examined before, most of the relevant studies were conducted when telephone-based interviews were less common than they are nowadays and when easy-to-use videoconference systems like Skype, Google Hangouts, or Zoom were not commonplace. Hence, participants in previous studies may not have been as familiar and comfortable with these systems as applicants are now. Consequently, it is unclear whether these earlier results still generalize to current generations of applicants.

The purpose of the present study is to respond to repeated calls (e.g., Huffcutt and Culbertson, 2011; Levashina et al., 2014; Ryan and Ployhart, 2014; Blacksmith et al., 2016) for more research on technology-mediated interviews and to compare FTF interviews with telephone and videoconference interviews on a broad range of variables. Ratings of interviewees’ performance (i.e., of their answers to specific interview questions and/or of the interview as a whole) are central in this regard. However, interviews do not only have a diagnostic function (i.e., to identify the best applicant for a given job) but also serve an important recruiting function because organizations aim to present themselves to applicants in an attractive way (Dipboye et al., 2012; Wilhelmy et al., 2016). Therefore, it is also important to examine interviewee perceptions of the different kinds of interviews.

Background

Selection Interviews

Meta-analytic evidence has shown that selection interviews can be highly valid selection tools that may even reach similar levels of criterion-related validity as tests of general mental ability (e.g., Huffcutt and Arthur, 1994; Huffcutt et al., 2014). However, such high levels of validity can only be obtained when structured interviews are used. In a seminal article, Campion et al. (1997) discussed several factors that affect the degree of interview structure and reviewed the existing related literature. These factors can be categorized as being related to consistency (e.g., asking the same questions to all applicants, all interviews are conducted by the same interviewer), question sophistication (e.g., asking questions that minimize probing and prompting of the interviewee), and evaluation standardization (e.g., using descriptively-anchored scales to rate the interviewees’ responses to each question or interviewers receiving training about how to rate applicant performance, cf. Chapman and Zweig, 2005).

Traditionally, interviews have been conducted face-to-face. However, as a consequence of the advancement of telecommunication technology – and recently also of the COVID-19 pandemic – a growing number of organizations do not only rely on traditional FTF interviews but also make use of technology-mediated interviews. These interviews can be administered via telephone or videoconference systems (Chapman et al., 2003) or they might even be conducted without an actual interviewer when organizations use asynchronous video interviewing technology. In these asynchronous video interviews, interviewees are shown the interview questions on the computer screen, record their answers with their webcam, and submit them via an online platform, so that the videotaped answers can be evaluated later (e.g., Brenner et al., 2016; Langer et al., 2017).

Using different administration media for different applicants who apply for the same job reduces the consistency with which the interview is administered and thus influences the degree of standardization. However, this factor was not yet considered by Campion et al. (1997). Furthermore, as pointed out by Huffcutt et al. (2011) in their model of interviewee performance, technology-mediated interviews might increase interviewees’ apprehensiveness when they are not familiar with a given interview medium (also cf. Lukacik et al., 2020). In addition, Huffcutt et al. (2011) raised the question of whether using technology-mediated interviews impairs the identification of social cues sent by the interviewer. It is therefore important to determine whether the administration medium influences ratings of interviewees’ performance in the interview, interviewees’ perceptions of the interview, or their reactions to the interview procedure.

Relevant Theories in the Context of Technology-Mediated Interviews

With regard to technology-mediated selection interviews, several general theoretical approaches that were developed to describe preferences for, and the suitability of, different media for communication with others are relevant. The first of these theories is media richness theory (e.g., Daft and Lengel, 1986), which is concerned with communication media (e.g., email, telephone, or videoconference) and the extent to which these media allow or limit the transmission of information. The theory assumes that the use of different channels of information transmission (verbal, non-verbal, and para-verbal) reduces the ambiguity of a message and thus also the uncertainty of the communication partners. There have been concerns that the conveyance of (subtle) social cues is impaired during technology-mediated communication (e.g., Straus and McGrath, 1994; Chapman and Webster, 2001; Huffcutt et al., 2011). However, the extent to which such social cues can be perceived depends on the kind of technology used. Videoconferencing represents the upper end of communication bandwidth in technology-mediated communication (since it conveys auditory and visual information, including non- and para-verbal cues), telephone conversations may be positioned in the middle of the continuum, and email communication lies at the lower end. Based on media richness theory, one would expect high-bandwidth communication to be more advantageous in selection interviews (even though it should be acknowledged that low-bandwidth communication might be beneficial in some other situations, e.g., Sauer et al., 2000). Accordingly, FTF interviews would be more beneficial than videoconference interviews because of their more comprehensive use of different channels of information transmission, and videoconference interviews would be more beneficial than telephone interviews, which allow for the most limited information transmission.

Similar to media richness theory, social presence theory (Short et al., 1976) can also be used to explain the effects associated with computer-mediated communication. It assumes that communication media differ in the level to which communication partners experience the presence of each other, including the perception of the conversation partner’s gestures, gaze, and facial expressions. Furthermore, higher social presence is associated with more positive communication outcomes such as mutual attraction, trust, and enjoyment (e.g., Lee et al., 2006). In line with this, a study by Croes et al. (2016), for example, found that face-to-face communication led to more social presence, which in turn influenced the level of interpersonal attraction.

Social presence theory makes predictions similar to those of media richness theory but explains the phenomena in a different way: it stresses the importance of the degree of psychological awareness of another person rather than the level of communication bandwidth. With regard to selection interviews, FTF interviews should be the most beneficial medium in terms of perceived social presence. Although the interviewee is in a different location than the interviewer during both videoconference and telephone interviews, the conversation partners see each other in videoconference interviews and are therefore probably able to perceive more social presence, whereas social presence in telephone interviews is limited to the voice.

A third theoretical approach that helps to differentiate between interview media is Potosky’s (2008) framework of media attributes. According to Potosky, communication media can vary in four general attributes: social bandwidth, interactivity, transparency, and surveillance. Social bandwidth means that communication is generally easier when more communication channels are used; this attribute is therefore relatively similar to what is assumed in media richness theory. Interactivity defines the extent of interaction that is possible during an interview. Transparency refers to the extent to which the medium itself goes unnoticed, so that the communication partners are hardly aware of the technology mediation during the interview. And surveillance describes the feeling that an interview could be monitored or recorded by a third party. The first three attributes are beneficial in the context of technology-mediated interviews, whereas the fourth is detrimental.

Taken together, all three theoretical approaches agree that FTF interviews have several advantages over both kinds of technology-mediated interviews. This is because of the most complete transmission of subtle cues (according to media richness theory), because of the actual presence of the conversation partners (according to social presence theory), or because of the highest level of transparency and the lowest risk of surveillance (according to Potosky’s media attributes). Furthermore, according to all of these approaches, videoconference interviews should be more advantageous than telephone interviews because the lack of visual information in the latter might make it more difficult to correctly interpret ambiguous social cues and might also impair the social nature of the interview. In addition, the lack of non-verbal behavior should result in impairments of interaction (because non-verbal signals cannot be used as additional sources of conversational information) and of transparency (see also Morelli et al., 2017).

In addition to theories that were developed to describe the suitability of different media for communication with others, it has also been suggested to consider an evolutionary perspective to better understand differences between FTF and technology-mediated interaction (e.g., Kock, 2004; Piazza and Bering, 2009; Abraham et al., 2013). According to evolutionary psychology (e.g., Buss, 2005), human behavior can be understood in terms of evolutionary adaptations that occurred over hundreds of thousands of years in response to only slowly changing environmental conditions. With regard to technology-mediated interaction, the evolutionary perspective assumes that humans acquired competencies in FTF communication over many millennia, whereas the acquisition of competencies in technology-mediated communication is only very recent in the development of humankind. Furthermore, positive social interactions with others, which occurred FTF throughout the vast majority of human evolution, also serve to satisfy the underlying need for belongingness (Baumeister and Leary, 1995). With the advent of technology-mediated communication, however, interactions no longer always take place FTF, which can impair the satisfaction of the social belongingness needs of communication partners who have been used to FTF interaction throughout evolution. In line with this, Sacco and Ismail (2014), for example, found that FTF interactions satisfied participants’ need for social belongingness better than virtual interactions or no interaction at all (a control group). With regard to selection interviews, it is therefore conceivable that the interview medium negatively affects the social character of these interviews because it impairs the satisfaction of belongingness needs. Thus, for technology-mediated interviews, evolutionary approaches converge with the predictions of the other theoretical approaches.

Previous Research on Technology-Mediated Interviews

Most of the earlier research on technology-mediated interviews has focused on the effects of the different interview media on ratings of interviewees’ performance and on interviewees’ perceptions of the different interviews. Furthermore, most of this research focused on telephone and videoconference interviews in which an interviewer and an interviewee interacted directly. These two kinds of technology-mediated interviews are also the main focus of our study.

Concerning telephone and videoconference interviews, an obvious issue is that videoconferencing has seen considerable technological progress over the past decades, whereas fewer changes were observed for telephone interviews. This implies that research on telephone interviews that was conducted some time ago (e.g., Silvester et al., 2000; Straus et al., 2001) may still be relevant to this day. However, despite the absence of huge technological change affecting communication bandwidth, the prevalence of telephone interviews has also increased in recent years (e.g., Amoneit et al., 2020). This may be of importance as experience with telephone interviews has been identified as a moderating variable that may affect performance in these interviews (Silvester et al., 2000).

The relevance of previous research on videoconference interviews might be affected not only by increased familiarity with videoconferencing systems (e.g., many students and employees are now familiar with systems like Skype, Google Hangouts, or Zoom) but also by the considerable technological progress over recent years. Thus, some of the previous findings may be less relevant today because they were obtained with technology that is now considered obsolete (Ployhart et al., 2017). Due to rapid progress in computing power and the availability of high-speed internet and high-definition cameras, today’s communication bandwidth is considerably more extensive than it was 10–15 years ago. Therefore, performance impairments in videoconference interviews reported in studies published more than a decade ago should be considerably reduced when contemporary technology is used. Accordingly, potential differences between FTF and videoconference interviews that were attributed to lower communication bandwidth may have become smaller by now. Nevertheless, in a series of three qualitative studies, McColl and Michelotti (2019) still found that interviewers reported several limitations of videoconference interviews in comparison to FTF interviews. In addition to some remaining technical issues, these limitations included impairments of non-verbal communication (e.g., eye contact, perception of hand gestures) and problems of the setting (e.g., lighting and noise).

Interview Performance Ratings in Technology-Mediated vs. FTF Interviews

Concerning the effects of the interview medium on interview performance ratings, a recent meta-analysis of previous studies, all of which were published at least 9 years before the meta-analysis itself, found that interviewees generally receive better ratings in FTF interviews than in technology-mediated interviews (Blacksmith et al., 2016, but see Chapman and Rowe, 2001, or Straus et al., 2001, for exceptions). The meta-analytic effects were of intermediate size and were comparable for telephone and videoconference interviews. In addition, interviewees in previous studies also had higher outcome expectations in FTF interviews than in telephone or videoconference interviews, which means that they considered their performance to be better in FTF interviews (Chapman et al., 2003), and they assumed they would be able to use more impression management in FTF interviews than in technology-mediated interviews (Basch et al., 2020a). Furthermore, despite using modern videoconference technology, two recent studies (Sears et al., 2013; Basch et al., 2020b) still found higher interview performance ratings in FTF interviews than in videoconference interviews, with mean differences of intermediate size.

A limitation of the meta-analysis by Blacksmith et al. (2016), beyond its reliance on relatively old primary studies, is that the empirical basis for effects concerning specific types of interviews is rather sparse (e.g., the meta-analytic estimate for the comparison of videoconference vs. FTF interviews is based on an N of only 103 individuals). In addition, the effects from the corresponding primary studies vary considerably, so that specific aspects of the few primary studies and the respective interviews have more impact on the meta-analytic estimate than is usually the case in other meta-analyses. Furthermore, earlier primary studies usually compared FTF interviews with only one kind of technology-mediated interview (e.g., Silvester et al., 2000; Sears et al., 2013; Basch et al., 2020b). Thus, it is unclear whether the differences between FTF and telephone or videoconference interviews are indeed comparable when the content and other aspects of the interviews beyond the interview medium are held constant. This is a relevant question because, according to the different theoretical approaches, one would expect larger performance differences relative to FTF interviews for telephone interviews than for videoconference interviews, given that telephone interviews should go hand in hand with lower media richness, lower social presence, and so on.

Another limitation of previous research is that it remains unclear whether the lower performance ratings were in fact related to lower interviewee performance or whether they were due to effects on the side of the interviewers. This issue is important in light of a study by Van Iddekinge et al. (2006) who found higher performance ratings when interviewees were rated on the basis of FTF interviews than when the same interviewees were rated on the basis of videotaped interviews. Thus, it is possible that the interview medium does not lead to lower performance by interviewees but to lower evaluations of this performance by raters.

Interviewee Perceptions of Technology-Mediated Interviews

Similar to the evidence regarding interview performance ratings, the majority of the previous studies also found that interviewees had a preference for FTF interviews in comparison to telephone or videoconference interviews and perceived them as fairer (e.g., Kroeck and Magnusen, 1997; Chapman et al., 2003; Sears et al., 2013). Furthermore, interviewees in Straus et al.’s (2001) study felt more comfortable in FTF interviews than in videoconference interviews. In line with this, Blacksmith et al.’s (2016) meta-analysis found that, overall, interviewees react negatively to technology-mediated interviews in comparison to FTF interviews. Furthermore, evidence from a recent study by Basch et al. (2020b) that compared perceptions of FTF and videoconference interviews confirmed that social presence is a mediator of the effects of the interview medium on interviewees’ fairness perceptions.

The negative perceptions of technology-mediated interviews by interviewees are especially relevant for organizations because of evidence that lower fairness perceptions are accompanied by lower perceptions of organizational attractiveness (Bauer et al., 2004; Langer et al., 2020) and lower intentions to accept a job offer (Chapman et al., 2003). Thus, using technology-mediated interviews may have negative effects for the recruitment function of employment interviews (e.g., Wilhelmy et al., 2016).

Unfortunately, there are several limitations concerning previous research on interviewee perceptions of the different kinds of interviews. First, in contrast to previous research concerning interview performance, the available database related to interviewee perceptions of technology-mediated interviews is considerably smaller so that the meta-analytic results by Blacksmith et al. (2016) for the different interview media are based on an even more limited empirical basis with only two or three primary studies for each comparison. Furthermore, the primary studies that considered telephone as well as videoconference interviews either could not ensure that the interviews per se were comparable concerning the different interview media (Chapman et al., 2003) or the available videoconference technology that was used in the primary studies was much more plagued by impaired conversation flow in comparison to present technology (Straus et al., 2001). Finally, in some more recent studies, participants did not take part in actual interviews but only had to answer survey questions after a description of the different interviews (e.g., Basch et al., 2020a), they only observed videos of technology-mediated interviews (e.g., Langer et al., 2020), or the study did not consider telephone interviews (Basch et al., 2020b).

Strain and Anxiety as Reactions to the Interview

Even though fairness perceptions are the most commonly investigated aspect of applicant reactions, other reactions to interviews are also relevant. For example, Straus et al.’s (2001) finding that interviewees felt more comfortable in FTF interviews than in videoconference interviews already indicated that emotional reactions might play a role when different interview media are compared.

In this context, it is important to realize that selection interviews can be a strong psychosocial stressor. This stressor may lead to interviewee strain that manifests itself not only at the subjective level (e.g., in increased anxiety) but also at the psychophysiological level (e.g., in increased perspiration and heart rate). In line with this, stress researchers have even used the analogy of an employment interview when designing stressful situations for their research (Kirschbaum et al., 1993). Furthermore, people with higher interview anxiety show lower interview performance (e.g., McCarthy and Goffin, 2004; Powell et al., 2018), which is consistent with general evidence that applicants who experience higher test anxiety achieve lower test scores (Hausknecht et al., 2004). The latter is of particular concern in the present context because there are considerable differences between interviewees regarding interview anxiety. Given that applicants are affected by interview anxiety to different degrees, this may influence the selection decision in the form of reduced chances to receive a job offer for applicants who suffer from strong interview anxiety. Beyond the obvious negative consequences for applicants, there may also be negative effects for organizations because they may reject suitable applicants who are unable to show their true performance level during the interview due to excessive anxiety (cf. McCarthy and Goffin, 2004).

There is little work on test anxiety that has examined strain beyond self-report measures. While it has not been uncommon to use psychophysiological indicators of strain in many research areas within work and organizational psychology (e.g., Åkerstedt, 1990; Zeier et al., 1996), such measures have rarely been employed in personnel selection research and none of the previous studies on technology-mediated interviews has used such indicators. Their use would allow us to obtain a broader picture of the multiple effects of stressors because it would not be necessary to rely on self-report measures and performance indicators alone. Of the many psychophysiological indicators available, heart rate variability (HRV) may be particularly suitable for the present study, as it is sensitive to changes in mental strain and negative affect (Kettunen and Keltikangas-Järvinen, 2001).

The Present Study

The literature reviewed above raises the central question of the extent to which technology-mediated interviews can influence interviewees’ performance and their perceptions of and reactions to the interview procedure. Until now, research related to this question has mainly found that interviewees received lower performance ratings in technology-mediated interviews than in FTF interviews and that applicant perceptions of technology-mediated interviews were less positive (Blacksmith et al., 2016). However, most of the previous work was conducted at a time when a different generation of technology-mediated communication tools was in use and the fidelity of the technology was much lower. At that time, interviewees were also less familiar with these communication tools. Therefore, it is unclear to what degree previous evidence still generalizes to current applicants. Furthermore, there is a lack of research concerning strain and anxiety in the different types of interviews.

To address these issues, we set up an experiment to compare three different ways in which interviews can be conducted: In the traditional FTF manner, via telephone, or by using a videoconference system. In the latter two conditions, which used technology-mediated interviews, the interviewer and the interviewee did not meet in person. We compared the three conditions with regard to ratings of interviewees’ performance and to their perceptions of the interviews, but also with regard to the anxiety and the subjective as well as the physiological strain that they experienced.

Methods

Participants

A total of 95 German-speaking final-year students and recent graduates from a Swiss university took part in the study. The data of 7 participants had to be discarded because they did not consent to the use of videotapes of their interview performance (see below). Thus, the final sample consisted of 88 participants (36 males and 52 females; age: M = 25.09 years, SD = 4.05). Post hoc power analyses using G*Power 3.1 (Faul et al., 2009) on the basis of our sample size revealed that we had a power of 0.53 to detect a medium-sized effect (using an alpha level of 0.05) in a one-way analysis of variance and of 0.59 for t-tests of mean differences between two of the three groups.
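For readers who want to reproduce this kind of calculation without G*Power, the following is a minimal sketch in Python using statsmodels (an assumption; the authors used G*Power). The medium-effect conventions (Cohen’s f = 0.25, d = 0.5) and the one-sided t-test setting are assumptions insofar as the text does not fully specify them.

    # Minimal sketch: post hoc power analyses corresponding to those above.
    # statsmodels is used here instead of G*Power (an assumption).
    from statsmodels.stats.power import FTestAnovaPower, TTestIndPower

    # One-way ANOVA: medium effect (Cohen's f = 0.25), N = 88, 3 groups
    anova_power = FTestAnovaPower().power(effect_size=0.25, nobs=88,
                                          alpha=0.05, k_groups=3)
    print(f"ANOVA power: {anova_power:.2f}")  # ~0.53, matching the reported value

    # t-test between two of the three groups: medium effect (d = 0.5),
    # roughly 29 participants per group; whether the reported 0.59 assumed a
    # one- or two-sided test is not stated, so 'larger' here is an assumption.
    t_power = TTestIndPower().power(effect_size=0.5, nobs1=29, ratio=1.0,
                                    alpha=0.05, alternative='larger')
    print(f"t-test power: {t_power:.2f}")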

The participants were from a broad range of subjects, with larger groups majoring in communication sciences (23.9%), psychology (15.9%), and economics (12.5%). On average, participants had been enrolled for 3.82 years (SD = 2.09). Participants were recruited via an email that was sent to all final year students and that invited them to take part in a simulated selection interview, allowing them to gain experience regarding selection procedures and to receive feedback on their performance. All the data were collected before the advent of the COVID-19 pandemic.

Procedure

To register for the study, participants had to complete an online registration form. After registration, they were asked to complete a questionnaire that contained questions concerning demographic and personal data. This was followed by a questionnaire measuring participants’ fairness expectations and favorability ratings for the three interview media that were examined (FTF, telephone, and videoconference). Finally, participants could make an appointment with the experimenter to come to the laboratory for the interview.

The interviews in the three experimental conditions were always conducted in the same room. This room was equipped for telephone as well as for videoconference interviews. When participants arrived in the room, they were told that the present study was investigating subjective and physiological reactions to selection procedures. Then, the heart rate monitoring system was attached to the participants’ chest and wrist. Participants were asked to sit quietly for 5 min while watching a relaxing video. The reason for this was that previous studies indicated that body movements may influence HRV measurement by introducing artificial variability (Jorna, 1992; Bernardi et al., 2000). The physiological data were collected during the last 2 min of this 5-min period and were used as a baseline for the experimental HRV measure. After the resting stage, participants completed the KAB, a short questionnaire to measure their current subjective strain levels.

Next, participants completed a short general mental ability (GMA) test. During this test, HRV data were again collected during the first 2 min of the test administration. Directly after the GMA test, participants completed the strain questionnaire for a second time.

Participants were then randomly assigned to one of the three interview conditions: FTF (n = 30), telephone (n = 26), and videoconference (n = 32). In all three conditions, the experimenter instructed participants to remain seated at all times (which also meant they should not get up in the FTF condition when the interviewer entered the room) to prevent contamination of HRV by movement artifacts. Furthermore, the participants were asked to imagine that they had applied for an attractive leadership position in their field of study and that they would have direct reports in this position.

After the experimenter had left the room, the actual interview began. In the FTF condition, the interviewer entered the room. To make the interview situation more authentic, the interviewer wore a suit and a tie in all conditions. In the telephone condition, a typical office telephone was used, and the interviewer began the interview by calling the participant on the telephone. In the videoconference condition, a program (Skype) was used that allows voice calls over the internet with an additional videoconference function. The interviewer began the interview by calling the participant via computer, upon which a window opened on the participant’s screen, with the interviewer shown in full-screen mode on a stand-alone 22-inch computer screen. The interviewer and the interviewee used the built-in microphones to talk to each other.

To ensure that participants could have eye contact with the interviewer in the videoconference condition, the webcam that was used to record the interviewer was fixed in the middle of his screen. In contrast to this, the webcam used to record the participants was fixed on top of the screen.

At the beginning of the interview, the interviewer introduced himself. Then, the participant was asked to prepare a short self-introduction in which he/she should introduce himself/herself to the interviewer. Participants were told that this self-introduction should not last for more than 3 min and they were given 4 min to prepare their answer, which allowed us to measure HRV during the last 2 min of the preparation phase. After the self-introduction, the interviewer asked a set of standardized questions. This set consisted of seven past-behavior questions and four future-oriented questions (see below). All questions were asked in the same order and no probing or follow-up questions were used. The interviewer took notes and evaluated the participants’ answers before asking the next question.

After the interview, participants were asked for a self-evaluation of their overall interview performance. Then, they were asked to complete a questionnaire to rate their current level of interview anxiety and the perceived fairness of the interview they had just experienced, and to complete the strain questionnaire again.

During the interview, all participants were videotaped with a hidden video camera so that their performance could be evaluated by a second rater after the completion of the study. This hidden camera was necessary because, in the telephone condition in particular, the feeling of being observed might have changed interviewees’ perception of the situation. After the last strain questionnaire, participants were debriefed and informed that a video recording had been made. They were asked to grant permission for these video recordings to be used in later analyses. As noted above, seven participants did not grant permission, so their video recordings were deleted by the experimenter in the presence of the participant. Thus, no data from these participants were used in later analyses.

Measures

Interview Performance Ratings

As noted above, the interview consisted of two parts, a short self-introduction and a set of standardized questions containing seven past-behavior questions and four future-oriented questions. The past-behavior questions asked interviewees to recall situations they had previously experienced and to describe what exactly they had done in those situations. The future-oriented questions asked interviewees to describe what they would do in hypothetical situations presented to them. Two of the questions targeted Cooperation, three targeted Information Management, five targeted Leadership, and one targeted Systematic Planning. All interview questions were taken from previous studies whose participants stemmed from similar populations (Melchers et al., 2009, 2012). The questions were suitable for university graduates applying for management trainee positions. An example question can be found in the Appendix.

Several ratings of interviewees’ performance were collected. First, the interviewer and a second rater (two Master’s students) rated interviewees’ performance in the two main parts of the interview (i.e., the self-introduction and the 11 structured interview questions). For both parts of the interview, these ratings were made on descriptively-anchored 5-point rating scales, ranging from 1 = poor through 3 = average to 5 = good. To ensure that both raters had a common frame of reference for their evaluations, each rating scale had descriptive anchors. These anchors were similar to BARS (Smith and Kendall, 1963) and described what behavior of an interviewee would be considered poor, average, or good (see the example in the Appendix). The rating scales had been employed in previous studies (Melchers et al., 2009, 2012) after undergoing extensive pretesting. After the interview, the interviewer and the second rater evaluated the participants’ overall interview performance on another rating scale, again ranging from 1 = poor to 5 = good. Finally, participants also provided self-ratings of their overall interview performance on a 6-point rating scale ranging from 1 = very poor to 6 = very good.

Prior to the first interview, both raters were trained over a period of 2 days. During this training, they were introduced to the self-introduction and the different interview questions as well as to definitions and behavioral examples for answers to the different interview questions. In order to develop a consistent frame of reference for rating the interviewees’ answers, raters received specific training on the purpose of each interview question, typical errors committed by raters, and the idea behind descriptively-anchored rating scales (Melchers et al., 2011; Roch et al., 2012). Furthermore, it was emphasized that it was crucial to conduct all interviews in the same prescribed manner, that is, to read the interview questions as printed on the interview forms and not to rephrase them or give additional cues. After completing the training, one of the students was assigned the role of the interviewer during the experiment while the other one served as the second rater and also as the experimenter, who welcomed participants and looked after them.

For our later analyses, we averaged the ratings from both raters for the interviewees’ overall interview performance and for the self-introduction, respectively, and also the means from both raters across all the structured interview questions. To determine the inter-rater reliability of the ratings from the interviewer and the second rater, we calculated intraclass correlations (ICC 2,1). Mean inter-rater reliabilities (i.e., the reliability of each rater) were 0.81 (overall interview performance), 0.71 (self-introduction), and 0.96 (overall mean across the structured questions).
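To illustrate how such an ICC(2,1) can be obtained, here is a minimal sketch using the pingouin package; the data layout, column names, and values are hypothetical illustrations, not the study’s data.

    # Sketch: ICC(2,1) for two raters rating the same interviewees.
    # The data frame and its values are hypothetical.
    import pandas as pd
    import pingouin as pg

    ratings = pd.DataFrame({
        "interviewee": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
        "rater":       ["interviewer", "second"] * 5,
        "score":       [4.0, 3.5, 2.5, 3.0, 5.0, 4.5, 3.0, 3.0, 4.0, 4.5],
    })

    icc = pg.intraclass_corr(data=ratings, targets="interviewee",
                             raters="rater", ratings="score")
    # ICC(2,1) = two-way random effects, absolute agreement, single rater;
    # in pingouin's output this is the row labeled "ICC2".
    print(icc.loc[icc["Type"] == "ICC2", ["Type", "ICC", "CI95%"]])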

Interviewee Perceptions

We used two interviewee perception measures. First, participants rated the favorability of each of the three interview media on one item (“If you applied for a job, how much would you like to go through each of the following selection procedures?”) on a scale ranging from 1 = not at all to 6 = a lot. Second, participants were asked twice to provide ratings of interview fairness. Prior to the interview, they rated the expected fairness of each interview medium (FTF, telephone, and videoconference) on a one-item 6-point rating scale (“Please rate the fairness of the following selection procedures,” ranging from 1 = very unfair to 6 = very fair). After the interview, participants rated fairness again, but this time only for the interview medium they had experienced during the study (“How fair did you feel that the experienced interview was as a selection procedure?”), using the same 6-point rating scale.

Strain and Anxiety

We used two ways to measure participants’ strain. First, we used the KAB (Müller and Basler, 1993), a short German-language questionnaire, to measure participants’ current subjective strain. In this questionnaire, participants had to indicate how they were feeling “right now” on six bipolar adjective pairs (e.g., tense – relaxed) using a 6-point scale. In our study, internal consistencies ranged between 0.80 and 0.86 for the different measurements.

Second, we measured heart rate variability as a psychophysiological indicator of participants’ strain. HRV describes the variation of the interval between two successive heart beats. Using spectral analysis, the main components of HRV can be separated and analyzed individually. The high frequency (HF) band represents parasympathetic activation, whereas the low frequency (LF) band mainly reflects sympathetic activation (Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology, 1996). The relation of these two components (i.e., the LF/HF ratio) is an indicator of psychosocial strain, with a high ratio indicating elevated levels of strain (Bosch et al., 2003).
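To make this computation concrete, the following is a minimal sketch of the standard spectral procedure using the Task Force (1996) frequency bands. The study itself used the Kubios software for this step (see below), so this re-implementation, including the 4 Hz resampling rate, is an assumption for illustration only.

    # Sketch of the standard LF/HF computation from RR intervals (in seconds),
    # following the Task Force (1996) bands. The study used Kubios; this
    # re-implementation and the 4 Hz resampling rate are assumptions.
    import numpy as np
    from scipy.interpolate import interp1d
    from scipy.signal import welch
    from scipy.integrate import trapezoid

    def lf_hf_ratio(rr_intervals, fs=4.0):
        """LF/HF ratio from a series of RR intervals (in seconds)."""
        t = np.cumsum(rr_intervals)                   # beat times
        grid = np.arange(t[0], t[-1], 1.0 / fs)       # even time grid
        tachogram = interp1d(t, rr_intervals, kind="cubic")(grid)
        f, psd = welch(tachogram - tachogram.mean(), fs=fs, nperseg=256)
        lf_band = (f >= 0.04) & (f < 0.15)            # low frequency
        hf_band = (f >= 0.15) & (f < 0.40)            # high frequency
        lf = trapezoid(psd[lf_band], f[lf_band])
        hf = trapezoid(psd[hf_band], f[hf_band])
        return lf / hf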

The Polar S810iTM heart rate monitor (Polar S810iTM, Kempele, Finland) was used to measure HRV in a non-intrusive manner. Specifically, each participant was equipped with a belt, containing a pulse monitor, that was worn around the chest. The heart rate data were transferred to a watch (worn in addition to the belt) via an infra-red connection. The data were subsequently analyzed using the Kubios software (Niskanen et al., 2004).

In addition to strain, we also measured participants’ state anxiety during the interview. To do so, we used the MASI (Measure of Anxiety in Selection Interviews, McCarthy and Goffin, 2004) and converted the items into a state measure, which allowed us to assess the impact of the interview medium on current levels of interview anxiety. The conversion involved translation into German as well as the removal of 4 of the original 30 items because they did not apply to our study (e.g., “When meeting a job interviewer, I worry that my handshake will not be correct,” because in two of the three conditions the interviewee did not meet the interviewer in person). Furthermore, changes in wording were made so that all items referred to the specific interview situation rather than to the experience of being interviewed in general.

The MASI items covered aspects such as communication anxiety (e.g., “During this interview, I often couldn’t think of a thing to say”), appearance anxiety (e.g., “I often felt uneasy about my appearance during this interview”), social anxiety (e.g., “I was worrying about whether the interviewer was liking me as a person”), performance anxiety (e.g., “During this interview, I got very nervous about whether my performance was good enough”), and behavioral anxiety (e.g., “I felt sick to my stomach during the interview”). Participants rated all items on a 5-point Likert scale ranging from 1 = strongly disagree to 5 = strongly agree. In line with McCarthy and Goffin (2004), the mean across all items was calculated as an overall measure of participants’ current interview anxiety. Coefficient alpha for the MASI was 0.88.
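As an illustration of the scoring, the overall MASI score (the mean across items) and coefficient alpha can be computed as in the following sketch; the item responses and the use of the pingouin package are hypothetical assumptions.

    # Sketch: overall MASI state-anxiety score (mean across items) and
    # coefficient alpha. Item responses below are hypothetical.
    import pandas as pd
    import pingouin as pg

    items = pd.DataFrame({          # rows = participants, columns = MASI items
        "item01": [2, 3, 4, 2, 1],
        "item02": [3, 3, 5, 2, 2],
        "item03": [2, 4, 4, 3, 1],
    })

    masi_score = items.mean(axis=1)           # per-participant anxiety score
    alpha, ci = pg.cronbach_alpha(data=items)
    print(masi_score.tolist(), round(alpha, 2))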

Additional Variables

We used a short but extensively validated test from a consultancy to measure GMA. This test represents a commonly used measure of cognitive ability and contained 50 items to assess verbal, arithmetic, and spatial reasoning. Participants had to complete as many items as possible within 12 min.

In addition, participants provided demographic information and answered questions concerning their previous experience with face-to-face selection interviews as well as with telephone and videoconference interviews. For each interview medium, they were asked to indicate the number of such interviews that they had experienced in the past. Furthermore, participants also had to indicate their height and weight, which were used to determine their body mass index, and to answer questions concerning their activity levels, because these variables might influence HRV.

Results

Preliminary Analysis

Correlations and descriptive data for all study variables are shown in Table 1. Inspection of this table shows that participants’ performance in the interview (except for the self-introduction) was significantly related to interview state anxiety and to perceived psychological strain during the interview, all |r|s > 0.21, all ps < 0.05. However, HRV was not related to interviewees’ performance.

Table 1. Descriptive information and intercorrelations for study variables.

Concerning the comparability of the experimental conditions, the preliminary analyses revealed that participants in the three experimental groups did not differ with regard to age, sex, experience with technology-mediated interviews, height, or weight, but that they differed with regard to their previous experience with selection interviews in general, F(2,85) = 4.60, p < 0.05, η2 = 0.10 (M = 3.20, SD = 1.88 for the FTF condition; M = 4.42, SD = 1.94 for the telephone condition; and M = 3.19, SD = 1.38 for the videoconference condition). Therefore, we used interview experience as a covariate in all later analyses that compared the three experimental groups.

Interview Performance Ratings

Means and SDs for ratings of participants’ performance in the different interview conditions as well as effect sizes for the mean differences are shown in Table 2. Descriptively, the means in the FTF condition were higher than the means in the two other conditions for all ratings of participants’ interview performance. Furthermore, one-factorial ANCOVAs with prior interview experience as a covariate revealed a significant main effect of the interview condition on interview performance ratings for the standardized questions, F(2,84) = 5.15, p < 0.01, η2 = 0.11, and for the overall interview ratings, F(2,84) = 3.36, p < 0.05, η2 = 0.07. Pairwise comparisons confirmed that the significant main effects were driven by higher performance ratings in the FTF condition in comparison to the other two conditions. Most of the differences were in the medium to large range (cf. Table 2). The ANCOVA for interviewees’ self-rated overall performance was not significant, F(2,84) = 2.70, p > 0.05, η2 = 0.06, even though the mean differences were still in the medium range (cf. Table 2). Finally, given the small mean differences, the main effect for performance ratings in the self-introduction failed to reach significance, F < 1.
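As an illustration, such a one-factorial ANCOVA could be run as in the following sketch; the data frame, its column names, and the use of the pingouin package are hypothetical assumptions rather than the authors’ actual analysis pipeline.

    # Sketch: one-factorial ANCOVA on performance ratings with prior
    # interview experience as the covariate. All values are hypothetical.
    import pandas as pd
    import pingouin as pg

    df = pd.DataFrame({
        "condition":   ["FTF"] * 3 + ["telephone"] * 3 + ["video"] * 3,
        "experience":  [3, 4, 2, 5, 4, 6, 3, 2, 4],     # covariate
        "performance": [4.0, 3.5, 4.5, 3.0, 2.5, 3.5, 3.0, 3.5, 2.5],
    })

    # Reports F, p, and (partial) eta-squared for the condition effect
    print(pg.ancova(data=df, dv="performance", covar="experience",
                    between="condition"))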

Table 2. Means, standard deviations, and Cohen’s ds for the different interview conditions.

In addition to these analyses, we also conducted separate ANCOVAs for the ratings from the interviewer and those from the second rater. This was done because of findings by Van Iddekinge et al. (2006) that ratings from an FTF interview were significantly higher than ratings of the same interviews on the basis of videotapes. The additional ANCOVAs generally paralleled the previous results, with the exception that the analysis with the overall interview rating was no longer significant for the second rater, F(2,84) = 2.87, p > 0.05, η2 = 0.06. All other ANCOVAs remained significant. Furthermore, we also directly compared ratings from the interviewer with those from the second rater for the FTF condition because this is the same kind of comparison as in Basch et al.’s (2020b) and Van Iddekinge et al.’s (2006) studies. None of these comparisons reached significance, all ts < 1.48, all ps > 0.05. Thus, the overall pattern of results suggested that the differences between the interview conditions were related to performance differences between interviewees in the different conditions and not to differences in rating levels or rater biases between the interviewer and the second rater.

Interviewee Perceptions

The perceptions of the different interview media are also shown in Table 2. The means showed a clear preference for FTF interviews over both types of technology-mediated interviews. In line with this, a one-factorial repeated-measures ANOVA for participants’ interview favorability ratings revealed a significant difference, F(2,174) = 104.88, p < 0.01, η2 = 0.55. The same pattern was also observed for expected fairness prior to the interview experience, with higher ratings for the FTF interview over the two other interviews, F(2,174) = 47.63, p < 0.01, η2 = 0.35. As for interview performance, these main effects were again driven by the more positive attitudes toward FTF interviews and the corresponding mean differences represent large effects (cf. Table 2).

Interestingly, after participants had actually experienced their respective interview, the difference in fairness ratings was no longer significant, as revealed by a one-factorial ANCOVA, F(2,85) = 1.88, p > 0.05. The corresponding effect sizes were in the low to medium range (cf. Table 2). The reason for this was that perceived fairness scores after the interview were considerably higher than pre-interview fairness expectations in the telephone and videoconference conditions, t(25) = 3.55 and t(31) = 6.47, both ps < 0.01, Cohen’s ds = 0.70 and 1.14, respectively. Thus, actually experiencing these technology-mediated interviews seems to have alleviated some of the interviewees’ previous concerns. In contrast, only a minor and non-significant difference was found for the comparison of pre- vs. post-interview fairness perceptions in the FTF condition, t < 1, d = 0.15.
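The pre- vs. post-interview comparisons above are paired t-tests with Cohen’s d for dependent measures; a minimal sketch (with hypothetical rating vectors, and using the SD of the difference scores as one common standardizer) would be:

    # Sketch: paired t-test and Cohen's d for pre- vs. post-interview
    # fairness ratings. The rating vectors below are hypothetical.
    import numpy as np
    from scipy import stats

    pre  = np.array([3, 4, 3, 2, 4, 3, 3, 4])   # expected fairness (before)
    post = np.array([4, 5, 4, 4, 5, 4, 4, 5])   # perceived fairness (after)

    t, p = stats.ttest_rel(post, pre)
    diff = post - pre
    d = diff.mean() / diff.std(ddof=1)           # d based on difference scores
    print(f"t({len(pre) - 1}) = {t:.2f}, p = {p:.3f}, d = {d:.2f}")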

Strain and Anxiety

In a first step, we tested whether the two strain variables (i.e., the self-report measure as an indicator of interviewees’ subjective strain and the HRV as an indicator of their physiological strain) would reflect expected differences between the resting stage on the one hand and the GMA test and the interview, respectively, on the other hand. Descriptive data for the different experimental stages can be found in Table 3.

Table 3. Subjective and physiological strain during different stages of the selection simulation.

For both variables, interviewees’ subjectively experienced strain and their HRV, we found the lowest means during the resting stage (cf. Table 3). In line with this, one-factorial repeated-measures ANOVAs revealed a significant main effect of the experimental stage, F(2,174) = 16.08, p < 0.01, η2 = 0.16, for subjective strain and F(2,174) = 28.46, p < 0.01, η2 = 0.25, for HRV.
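For illustration, such a one-factorial repeated-measures ANOVA across the three stages could be computed as in the following sketch; the long-format layout, column names, and values are hypothetical assumptions.

    # Sketch: repeated-measures ANOVA for strain across the three
    # experimental stages. All values below are hypothetical.
    import pandas as pd
    import pingouin as pg

    long = pd.DataFrame({
        "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
        "stage":   ["rest", "gma_test", "interview"] * 4,
        "strain":  [1.8, 2.6, 3.1, 2.0, 2.4, 3.4,
                    1.5, 2.2, 2.8, 1.9, 2.5, 3.0],
    })

    print(pg.rm_anova(data=long, dv="strain", within="stage",
                      subject="subject"))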

In the next step, we compared interviewees’ subjective strain between the different interview conditions. As can be seen in Table 2, there were only rather small descriptive differences in participants’ subjective strain. In line with this, the effect of the interview condition was significant neither in the one-factorial ANCOVA for subjective strain, F(2,84) = 2.26, p > 0.05, η2 = 0.05, nor in that for interview state anxiety, F < 1, η2 = 0.02. Furthermore, even though HRV was lowest in the FTF condition, this difference also failed to reach significance, F(2,84) = 1.20, p > 0.05, η2 = 0.03.

Discussion

The aim of the present study was to evaluate whether the interview medium affects ratings of interviewees’ performance in a simulated selection interview, their perceptions of the interview, and the strain and anxiety that they experience during the interview. The study makes several contributions to the literature. First, it confirms results from studies that were conducted in the previous decade, when technology-mediated interviews were less common and videoconference systems were less powerful than today (Blacksmith et al., 2016). In line with these earlier findings, with the different communication-based theories, and with the evolutionary perspective, we found that interviewees obtained lower performance ratings in technology-mediated interviews and had less positive views of these interviews before the interview. Thus, the interview medium still makes a difference for current interviewees, who are more used to communication technologies.

Furthermore, the differences in performance ratings were particularly strong for answers to the standardized questions, which included past-behavior as well as future-oriented questions, but were negligible for the self-introduction. This may be because asking standardized questions resulted in an ongoing interaction, which required continuous adjustments by the interviewee. No such adjustments were required during the self-introduction because it involved a monolog. Alternatively, the preparation time granted to interviewees for the self-introduction may have been long enough for them to prepare their answer in sufficient detail that potential effects of the interview medium no longer impacted the quality of their answers.

As a second contribution, we did not find any significant differences between telephone and videoconference interviews, and descriptively, the differences between them and FTF interviews were sometimes larger for telephone interviews and sometimes for videoconference interviews. Based on media richness theory (Daft and Lengel, 1986), social presence theory (Short et al., 1976), and Potosky’s (2008) framework of media attributes, we would have expected larger differences for telephone interviews than for videoconference interviews. This suggests that the two technology-mediated interview media were more alike concerning the effects they produced than the different theoretical approaches assume.

As a third contribution to the existing literature, we found large effects concerning initial differences between interviewees’ perceptions of the different interviews, but also that these differences were no longer significant once interviewees had completed their respective interview. This is an important finding given that participants in some previous studies did not experience actual interviews (e.g., Basch et al., 2020a) whereas they did in others (e.g., Chapman et al., 2003). Concerning the pre-interview results, we found that interviewees expressed more positive opinions about FTF interviews than about both technology-mediated interviews prior to the interview. Qualitatively, this replicates the overall pattern of results from previous studies (e.g., Kroeck and Magnusen, 1997; Chapman et al., 2003). Furthermore, our pre-interview results suggest that stronger familiarity with new technologies among current participants does not lead to more favorable initial views of these interviews. Given that fairness perceptions of a selection procedure can influence important outcomes like applicants’ perceptions of organizational attractiveness (Hausknecht et al., 2004), reapplication behavior (Gilliland et al., 2001), or job offer acceptance (Harold et al., 2016), they can affect the recruitment function of the interview. Thus, when organizations use an interview medium for which applicants hold lower pre-interview perceptions, this might impair perceptions of organizational attractiveness, so that applicants are less likely to accept an invitation for such an interview. However, after participants had experienced the interview, no significant differences remained. This suggests that actual experience can alleviate interviewees’ concerns with regard to telephone or videoconference interviews. Thus, the recruitment function of the interview is mainly at risk during the pre-interview stage.

As a final contribution, we extended the usual set of dependent variables and also investigated potential differences between the interview media concerning strain and anxiety. However, we found no significant differences between the interview media for these variables. Thus, in contrast to the performance ratings and the perception variables, interviewees did not experience differences in strain levels as a function of the interview medium, and this was observed for the psychophysiological data as well as for subjective strain. Similarly, the data for state anxiety did not provide any evidence of an influence of the interview medium. This converging evidence from three measures suggests that neither the technology itself nor the setting associated with the two technologies had a stronger negative effect on participant strain than FTF interviews did. Thus, it seems unlikely that differences in participants’ strain or anxiety led to the observed differences in the performance ratings. While there were no differences between the three interview conditions, there was clear evidence from the physiological as well as the self-report data that the interviews caused strain among participants compared to pre-interview baseline levels. Furthermore, the general finding that state anxiety as well as subjective strain was negatively related to interview performance ratings indicates that participants differed with regard to anxiety and strain and that these differences were related to their performance.

In addition to these contributions, the present study also avoided several limitations of earlier research. First, we evaluated effects of the interview medium while holding the content of the interview constant. Thus, any differences can be attributed to the interview medium rather than to differences in interview content. Second, as noted above, Van Iddekinge et al. (2006) found that interviewees received higher performance ratings when they were rated on the basis of FTF interviews than when they were rated on the basis of videotaped interviews. Previous studies on technology-mediated interviews could not rule out that higher ratings for FTF interviews reflected biases on the side of the interviewer rather than better performance by interviewees. The present results, however, suggest that interviewees indeed performed worse in technology-mediated interviews.

Limitations and Lines for Future Research

A potential limitation of the present study concerns the generalizability of our results. As the current study was a lab-based interview simulation, one might question whether the present results also generalize to high-stakes selection situations. Although our data suggest that participants experienced the interviews as stressful, given that HRV was higher during the interview than during the baseline condition, this increase does not mean that the situation in our study was equivalent to a high-stakes field setting. However, as interviewees volunteered to take part in the study with the aim of gaining first-hand experience of job interviews, there is reason to believe that they considered the procedure a serious dry run for a real job interview. This is also in line with evidence from other studies that used a similar approach and in which the vast majority of participants indicated that they had acted and felt as in a real selection interview (e.g., Van Iddekinge et al., 2005; Jansen et al., 2013; Oostrom et al., 2016). Nevertheless, using a student sample may underestimate the differences between interview conditions because familiarity with modern communication technologies such as videoconferencing may have lessened concerns related to these technologies. Therefore, differences in performance and perceived fairness may be more pronounced for older or non-academic job applicants. Furthermore, applicant reaction research has often revealed that real applicants show more pronounced responses than participants in simulated settings (e.g., Hausknecht et al., 2004; Truxillo et al., 2009). Thus, future research with real applicants is crucial to evaluate whether effects in high-stakes selection contexts are larger than in our study. However, it is also possible that the need to use videoconference technologies in many jobs during the COVID-19 pandemic – and the additional experience with technology-mediated interviews gained by people who were on the job market during the pandemic – has changed perceptions of technology-mediated interviews. Therefore, it would be worthwhile to repeat the present study once the COVID-19 pandemic is over.

A second limitation concerns the size of our sample. As explained above, the current sample size provided only moderate levels of statistical power. Even though many of our analyses revealed significant differences between FTF interviews on the one hand and technology-mediated interviews on the other, we have to acknowledge that the sample was probably too small to detect potential differences between telephone and videoconference interviews if the true differences are small in magnitude.
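
To put this limitation into perspective, the following minimal sketch illustrates how large a sample would be needed to detect a small difference between interview media. It is not a re-analysis of our data: the effect size (Cohen's f = 0.10, the conventional "small" benchmark), the alpha level, and the target power are assumed values, and the snippet simply solves for the required total N in a three-group design using Python's statsmodels package.

# Minimal power-analysis sketch with assumed (not study-based) values.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
n_total = analysis.solve_power(
    effect_size=0.10,  # Cohen's f for a "small" effect (assumption)
    alpha=0.05,        # conventional significance level
    power=0.80,        # conventional target power
    k_groups=3,        # e.g., FTF, telephone, videoconference
)
print(f"Required total N: {n_total:.0f}")  # on the order of 1,000 participants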

As a third limitation, we want to mention the missing convergence between interview anxiety and strain as self-report variables and HRV as a physiological indicator. Even though it is not uncommon that self-ratings of variables like anxiety or strain do not correspond closely to other indicators of the same constructs (e.g., Feiler and Powell, 2013), future research should consider whether other physiological markers of strain (e.g., cortisol levels) match self-report measures more closely.
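
For readers unfamiliar with HRV metrics, the following minimal sketch shows how two standard time-domain indices are computed from inter-beat (RR) intervals (cf. Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology, 1996). The RR values are made-up example values for illustration and do not stem from our study.

import numpy as np

# Illustrative RR intervals in milliseconds (example values only).
rr_ms = np.array([812, 790, 845, 830, 801, 778, 820, 835, 808, 795])

# SDNN: standard deviation of RR intervals (overall variability).
sdnn = np.std(rr_ms, ddof=1)

# RMSSD: root mean square of successive differences, a short-term,
# parasympathetically driven index of variability.
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))

print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms")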

Fourth, we only used single-item measures for some of the interviewee perception variables. The reason for this was to reduce the effort required of participants to complete our surveys. Nevertheless, multi-item measures would have improved the reliability of the corresponding measures.
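
The reliability gain from additional items can be made explicit with the Spearman-Brown prophecy formula. The sketch below uses a hypothetical single-item reliability of .50; the numbers are assumptions for illustration, not estimates from our measures.

# Spearman-Brown prophecy formula: predicted reliability of a measure
# lengthened by factor k, given single-item reliability r.
def spearman_brown(r_single: float, k: int) -> float:
    return k * r_single / (1 + (k - 1) * r_single)

r_single = 0.50  # hypothetical reliability of a one-item measure
for k in (1, 3, 5):
    print(f"{k} item(s): predicted reliability = {spearman_brown(r_single, k):.2f}")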

A fifth limitation is that the present research provides only limited insight into the reasons for the performance differences between the interview conditions. As noted above, our results are at variance with the different communication-based theories. A direct test that measures central variables assumed by these theories, such as social presence, might therefore provide better insights.

In addition to these limitations, another line for future research might be to consider interviewer perceptions of the different interview media. Specifically, it would be interesting to evaluate whether interviewers' preferences for the different interview media converge with interviewees' preferences. As noted above, the interview medium might impair additional interview functions beyond selection and assessment, and some of these functions might be more salient to interviewers. Furthermore, especially in unstructured interviews, the assessment of fit between the organization's or the interviewer's values and applicants' personal values is relevant for many interviewers (Adams et al., 1994), and interviewers might well consider technology-mediated interviews less suitable for this purpose. In contrast, the potential flexibility-related advantages of technology-mediated interviews might be more salient for interviewers, especially for those who interview large numbers of applicants. However, information on interviewers' views of technology-mediated interviews is still largely missing. Therefore, future research is needed that considers interviewers' perceptions of the different media and also examines conditions that might influence these perceptions, such as the use or non-use of structured interviews or the number of interviews that an interviewer conducts per year.

Practical Implications

Considering the different performance ratings across the interview conditions, we recommend that the same interview medium be used for all applicants. Although using different interview media in the same selection process might facilitate organizational selection procedures (e.g., interviewing applicants who live nearby face-to-face and those living further away by videoconference or telephone), our results indicate that interviewee performance ratings were affected by the interview medium, so that mixing media introduces variance in interviewer ratings that is unrelated to interviewees' actual performance. The interview medium might therefore be added to the list of factors (cf. Campion et al., 1997; Levashina et al., 2014) that should be standardized across applicants.

Conversely, if applicants are offered the choice between an FTF interview and a telephone or videoconference interview, it seems advisable for them to choose the former. Taking part in an FTF interview seems to benefit applicants' interview performance and might thus increase their chances of a job offer.

Furthermore, organizations and recruiters should be aware of applicants' concerns about technology-mediated interviews. Even though these concerns were alleviated in the present study after participants had experienced the actual interview, their initial preferences and fairness expectations clearly favored FTF interviews. Since evidence has shown that such expectations can affect several other important perceptions and intentions (Bell et al., 2006), organizations that ignore applicants' preferences for FTF interviews might pay a price in the form of an impaired recruitment function of their interviews. Fortunately, however, previous research on the effects of explanations (Truxillo et al., 2009) suggests that explanations that justify the use of a selection procedure or stress its advantages are often a suitable way to address applicants' concerns. In line with this research, recent evidence has shown that such explanations can also improve perceptions of technology-mediated interviews (Basch and Melchers, 2019).

As organizations will probably make increasing use of technology-mediated interviews (and especially of videoconference interviews) in the future, given ongoing technological progress and the experience of the COVID-19 pandemic, an additional general issue for applied settings is whether the interview medium affects the psychometric properties of the interview. Even though our study does not allow conclusions in this regard, there is evidence that structured technology-mediated interviews can be valid predictors of interviewees' job performance (e.g., Schmidt and Rader, 1999; Gorman et al., 2018). Thus, it seems likely that the same factors that contribute to the validity of FTF interviews also contribute to the validity of technology-mediated interviews. The usual aspects of structure seem especially relevant in this regard, such as standardized questions for all applicants, the use of job-related questions, and a common frame of reference for evaluating applicants' answers.

Data Availability Statement

The data supporting the conclusions of this article can be obtained from the first author.

Ethics Statement

Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.

Author Contributions

AP designed the study together with KM and JS. AP conducted the study. KM analyzed the data and drafted the manuscript. JB and JS edited and revised previous versions of the manuscript. JS provided feedback during all stages of the study. All authors contributed to the article and approved the submitted version.

Conflict of Interest

AP is employed by Migros-Genossenschafts-Bund, Zurich, Switzerland.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors would like to thank Laura Hein for her help with setting up the study and collecting data and Franziska Gaßner for helpful suggestions concerning an earlier version of this manuscript.

References

Abraham, C., Boudreau, M.-C., Junglas, I., and Watson, R. (2013). Enriching our theoretical repertoire: The role of evolutionary psychology in technology acceptance. Eur. J. Inf. Syst. 22, 56–75. doi: 10.1057/ejis.2011.25

Adams, G. A., Elacqua, T. C., and Colarelli, S. M. (1994). The employment interview as a sociometric selection technique. J. Group Psychother. Psychodrama Sociom. 47, 99–113.

Åkerstedt, T. (1990). Psychological and psychophysiological effects of shift work. Scand. J. Work Environ. Health 16(Suppl. 1), 67–73. doi: 10.5271/sjweh.1819

Amoneit, C., Schuler, H., and Hell, B. (2020). Nutzung, validität, praktikabilität und akzeptanz psychologischer personalauswahlverfahren in deutschland 1985, 1993, 2007, 2020: fortführung einer trendstudie [Use, validity, practicality, and acceptance of personnel selection methods in Germany 1985, 1993, 2007, 2020: the continuation of a trend study]. Z. Arbeits Organisationspsychol. 64, 67–82. doi: 10.1026/0932-4089/a000311

Arthur, W. Jr., Keiser, N. L., and Doverspike, D. (2018). An information-processing-based conceptual framework of the effects of unproctored internet-based testing devices on scores on employment-related assessments and tests. Hum. Perform. 31, 1–32. doi: 10.1080/08959285.2017.1403441

Basch, J. M., and Melchers, K. G. (2019). Fair and flexible?! Explanations can improve applicant reactions towards asynchronous video interviews. Pers. Assess. Decis. 5:2. doi: 10.25035/pad.2019.03.002

Basch, J. M., Melchers, K. G., Kegelmann, J., and Lieb, L. (2020a). Smile for the camera! The role of social presence and impression management in perceptions of technology-mediated interviews. J. Manag. Psychol. 35, 285–299. doi: 10.1108/JMP-09-2018-0398

Basch, J. M., Melchers, K. G., Kurz, A., Krieger, M., and Miller, L. (2020b). It takes more than a good camera: which factors contribute to differences between face-to-face interviews and videoconference interviews regarding performance ratings and interviewee perceptions? J. Bus. Psychol. doi: 10.1007/s10869-020-09714-3 [Epub ahead of print].

Bauer, T. N., Truxillo, D. M., Paronto, M. E., Weekley, J. A., and Campion, M. A. (2004). Applicant reactions to different selection technology: Face-to-face, interactive voice response, and computer-assisted telephone screening interviews. Int. J. Sel. Assess. 12, 135–148. doi: 10.1111/j.0965-075X.2004.00269.x

Baumeister, R. F., and Leary, M. R. (1995). The need to belong: Desire for interpersonal attachments as a fundamental human motivation. Psychol. Bull. 117, 497–529. doi: 10.1037/0033-2909.117.3.497

Bell, B. S., Wiechmann, D., and Ryan, A. M. (2006). Consequences of organizational justice expectations in a selection system. J. Appl. Psychol. 91, 455–466. doi: 10.1037/0021-9010.91.2.455

Bernardi, L., Wdowczyk-Szulc, J., Valenti, C., Castoldi, S., Passino, C., Spadacini, G., et al. (2000). Effects of controlled breathing, mental activity and mental stress with or without verbalization on heart rate variability. J. Am. Coll. Cardiol. 35, 1462–1469. doi: 10.1016/s0735-1097(00)00595-7

Blacksmith, N., Willford, J. C., and Behrend, T. S. (2016). Technology in the employment interview: A meta-analysis and future research agenda. Pers. Assess. Decis. 2, 12–20. doi: 10.25035/pad.2016.002

Bosch, J. A., de Geus, E. J. C., Veerman, E. C. I., Hoogstraten, J., and Nieuw Amerongen, A. V. (2003). Innate secretory immunity in response to laboratory stressors that evoke distinct patterns of cardiac autonomic activity. Psychosom. Med. 65, 245–258. doi: 10.1097/01.PSY.0000058376.50240.2D

Brenner, F. S., Ortner, T. M., and Fay, D. (2016). Asynchronous video interviewing as a new technology in personnel selection: The applicant’s point of view. Front. Psychol. 7:863. doi: 10.3389/fpsyg.2016.00863

Buss, D. (2005). The Handbook of Evolutionary Psychology. Hoboken, NJ: Wiley.

Campion, M. A., Palmer, D. K., and Campion, J. E. (1997). A review of structure in the selection interview. Pers. Psychol. 50, 655–702. doi: 10.1111/j.1744-6570.1997.tb00709.x

Chapman, D. S., and Rowe, P. M. (2001). The impact of videoconference technology, interview structure, and interviewer gender on interviewer evaluations in the employment interview: A field experiment. J. Occup. Organ. Psychol. 74, 279–298. doi: 10.1348/096317901167361

Chapman, D. S., Uggerslev, K. L., and Webster, J. (2003). Applicant reactions to face-to-face and technology-mediated interviews: A field investigation. J. Appl. Psychol. 88, 944–953. doi: 10.1037/0021-9010.88.5.944

Chapman, D. S., and Webster, J. (2001). Rater correction processes in applicant selection using videoconference technology: The role of attributions. J. Appl. Soc. Psychol. 31, 2518–2537. doi: 10.1111/j.1559-1816.2001.tb00188.x

Chapman, D. S., and Zweig, D. I. (2005). Developing a nomological network for interview structure: Antecedents and consequences of the structured selection interview. Pers. Psychol. 58, 673–702. doi: 10.1111/j.1744-6570.2005.00516.x

Croes, E. A. J., Antheunis, M. L., Schouten, A. P., and Krahmer, E. J. (2016). Teasing apart the effect of visibility and physical co-presence to examine the effect of CMC on interpersonal attraction. Comput. Hum. Behav. 55, 468–476. doi: 10.1016/j.chb.2015.09.037

Daft, R. L., and Lengel, R. H. (1986). Organizational information requirements, media richness and structural design. Manag. Sci. 32, 554–571. doi: 10.1287/mnsc.32.5.554

Dipboye, R. L., Macan, T., and Shahani-Denning, C. (2012). “The selection interview from the interviewer and applicant perspectives: Can’t have one without the other,” in The Oxford Handbook of Personnel Assessment and Selection, ed. N. Schmitt (New York, NY: Oxford University Press), 323–352.

Faul, F., Erdfelder, E., Buchner, A., and Lang, A.-G. (2009). Statistical power analyses using GPower 3.1: Tests for correlation and regression analyses. Behav. Res. Methods 41, 1149–1160. doi: 10.3758/BRM.41.4.1149

Feiler, A. R., and Powell, D. M. (2013). Interview anxiety across the sexes: Support for the sex-linked anxiety coping theory. Pers. Individ. Dif. 54, 12–17. doi: 10.1016/j.paid.2012.07.030

Gilliland, S. W., Groth, M., Baker, R. C. IV., Dew, A. F., Polly, L. M., and Langdon, J. C. (2001). Improving applicants’ reactions to rejection letters: An application of fairness theory. Pers. Psychol. 54, 669–703. doi: 10.1111/j.1744-6570.2001.tb00227.x

Gorman, C. A., Robinson, J., and Gamble, J. S. (2018). An investigation into the validity of asynchronous web-based video employment-interview ratings. Consult. Psychol. J. Pract. Res. 70, 129–146. doi: 10.1037/cpb0000102

Harold, C. M., Holtz, B. C., Griepentrog, B. K., Brewer, L. M., and Marsh, S. M. (2016). Investigating the effects of applicant justice perceptions on job offer acceptance. Pers. Psychol. 69, 199–227. doi: 10.1111/peps.12101

Hausknecht, J. P., Day, D. V., and Thomas, S. C. (2004). Applicant reactions to selection procedures: An updated model and meta-analysis. Pers. Psychol. 57, 639–683. doi: 10.1111/j.1744-6570.2004.00003.x

Huffcutt, A. I., and Arthur, W. (1994). Hunter and Hunter (1984) revisited: Interview validity for entry-level jobs. J. Appl. Psychol. 79, 184–190. doi: 10.1037/0021-9010.79.2.184

Huffcutt, A. I., and Culbertson, S. S. (2011). “Interviews,” in APA Handbook of Industrial and Organizational Psychology, Vol. 2, ed. S. Zedeck (Washington, DC: American Psychological Association), 185–203.

Huffcutt, A. I., Culbertson, S. S., and Weyhrauch, W. S. (2014). Moving forward indirectly: Reanalyzing the validity of employment interviews with indirect range restriction methodology. Int. J. Sel. Assess. 22, 297–309. doi: 10.1111/ijsa.12078

Huffcutt, A. I., Van Iddekinge, C. H., and Roth, P. L. (2011). Understanding applicant behavior in employment interviews: A theoretical model of interviewee performance. Hum. Resour. Manag. Rev. 21, 353–367. doi: 10.1016/j.hrmr.2011.05.003

Jansen, A., Melchers, K. G., Lievens, F., Kleinmann, M., Brändli, M., Fraefel, L., et al. (2013). Situation assessment as an ignored factor in the behavioral consistency paradigm underlying the validity of personnel selection procedures. J. Appl. Psychol. 98, 326–341. doi: 10.1037/a0031257

Jorna, P. G. A. M. (1992). Spectral analysis of heart rate and psychological state: A review of its validity as a workload index. Biol. Psychol. 34, 237–257. doi: 10.1016/0301-0511(92)90017-O

Kettunen, J., and Keltikangas-Järvinen, L. (2001). Intraindividual analysis of instantaneous heart rate variability. Psychophysiology 38, 659–668. doi: 10.1111/1469-8986.3840659

Kirschbaum, C., Pirke, K.-M., and Hellhammer, D. H. (1993). The “Trier Social Stress Test”: A tool for investigating psychobiological stress responses in a laboratory setting. Neuropsychobiology 28, 76–81. doi: 10.1159/000119004

Kock, N. (2004). The psychobiological model: Towards a new theory of computer-mediated communication based on Darwinian evolution. Organ. Sci. 15, 327–348. doi: 10.1287/orsc.1040.0071

Kroeck, K. G., and Magnusen, K. O. (1997). Employer and job candidate reactions to videoconference job interviewing. Int. J. Sel. Assess. 5, 137–142. doi: 10.1111/1468-2389.00053

Langer, M., König, C. J., and Krause, K. (2017). Examining digital interviews for personnel selection: Applicant reactions and interviewer ratings. Int. J. Sel. Assess. 25, 371–382. doi: 10.1111/ijsa.12191

Langer, M., König, C. J., Sanchez, D. R.-P., and Samadi, S. (2020). Highly automated interviews: Applicant reactions and the organizational context. J. Manag. Psychol. 35, 301–314. doi: 10.1108/JMP-09-2018-0402

Lee, K. M., Jung, Y., Kim, J., and Kim, S. R. (2006). Are physically embodied social agents better than disembodied social agents? The effects of physical embodiment, tactile interaction, and people’s loneliness in human–robot interaction. Int. J. Hum. Comput. Stud. 64, 962–973. doi: 10.1016/j.ijhcs.2006.05.002

Levashina, J., Hartwell, C. J., Morgeson, F. P., and Campion, M. A. (2014). The structured employment interview: Narrative and quantitative review of the research literature. Pers. Psychol. 67, 241–293. doi: 10.1111/peps.12052

Lukacik, E.-R., Bourdage, J. S., and Roulin, N. (2020). Into the void: A conceptual model and research agenda for the design and use of asynchronous video interviews. Hum. Resour. Manag. Rev. doi: 10.1016/j.hrmr.2020.100789 [Epub ahead of print].

McCarthy, J. M., and Goffin, R. (2004). Measuring job interview anxiety: Beyond weak knees and sweaty palms. Pers. Psychol. 57, 607–637. doi: 10.1111/j.1744-6570.2004.00002.x

McColl, R., and Michelotti, M. (2019). Sorry, could you repeat the question? Exploring video-interview recruitment practice in HRM. Hum. Resour. Manag. J. 29, 637–656. doi: 10.1111/1748-8583.12249

Melchers, K. G., Bösser, D., Hartstein, T., and Kleinmann, M. (2012). Assessment of situational demands in a selection interview: Reflective style or sensitivity? Int. J. Sel. Assess. 20, 475–485. doi: 10.1111/ijsa.12010

Melchers, K. G., Klehe, U.-C., Richter, G. M., Kleinmann, M., König, C. J., and Lievens, F. (2009). “I know what you want to know”: The impact of interviewees’ ability to identify criteria on interview performance and construct-related validity. Hum. Perform. 22, 355–374. doi: 10.1080/08959280903120295

Melchers, K. G., Lienhardt, N., von Aarburg, M., and Kleinmann, M. (2011). Is more structure really better? A comparison of frame-of-reference training and descriptively anchored rating scales to improve interviewers’ rating quality. Pers. Psychol. 64, 53–87. doi: 10.1111/j.1744-6570.2010.01202.x

Morelli, N., Potosky, D., Arthur, W. Jr., and Tippins, N. (2017). A call for conceptual models of technology in I-O psychology: An example from technology-based talent assessment. Ind. Organ. Psychol. Perspect. Sci. Pract. 10, 634–653. doi: 10.1017/iop.2017.70

Müller, B., and Basler, H.-D. (1993). Kurzfragebogen zur Aktuellen Beanspruchung (KAB) [Short Questionnaire for Current Strain]. Weinheim: Beltz.

Niskanen, J. P., Tarvainen, M. P., Ranta-Aho, P. O., and Karjalainen, P. A. (2004). Software for advanced HRV analysis. Comput. Methods Programs Biomed. 76, 73–81. doi: 10.1016/j.cmpb.2004.03.004

Oostrom, J. K., Born, M. P., Serlie, A. W., and van der Molen, H. T. (2010). Webcam testing: Validation of an innovative open-ended multimedia test. Eur. J. Work Organ. Psychol. 19, 532–550. doi: 10.1080/13594320903000005

Oostrom, J. K., Melchers, K. G., Ingold, P. V., and Kleinmann, M. (2016). Why do situational interviews predict performance? Is it saying how you would behave or knowing how you should behave? J. Bus. Psychol. 31, 279–291. doi: 10.1007/s10869-015-9410-0

Piazza, J., and Bering, J. M. (2009). Evolutionary cyber-psychology: Applying an evolutionary framework to Internet behavior. Comput. Hum. Behav. 25, 1258–1269. doi: 10.1016/j.chb.2009.07.002

Ployhart, R. E., Schmitt, N., and Tippins, N. T. (2017). Solving the supreme problem: 100 years of selection and recruitment at the Journal of Applied Psychology. J. Appl. Psychol. 102, 291–304. doi: 10.1037/apl0000081

Potosky, D. (2008). A conceptual framework for the role of the administration medium in the personnel assessment process. Acad. Manag. Rev. 33, 629–648. doi: 10.2307/20159428

Powell, D. M., Stanley, D. J., and Brown, K. N. (2018). Meta-analysis of the relation between interview anxiety and interview performance. Can. J. Behav. Sci. 50, 195–207. doi: 10.1037/cbs0000108

Roch, S. G., Woehr, D. J., Mishra, V., and Kieszczynska, U. (2012). Rater training revisited: An updated meta-analytic review of frame-of-reference training. J. Occup. Organ. Psychol. 85, 370–395. doi: 10.1111/j.2044-8325.2011.02045.x

Ryan, A. M., Inceoglu, I., Bartram, D., Golubovich, J., Grand, J., Reeder, M., et al. (2015). “Trends in testing: Highlights of a global survey,” in Employee Recruitment, Selection, and Assessment: Contemporary Issues for theory and Practice, eds I. Nikolaou and J. K. Oostrom (Hove: Psychology Press), 136–153.

Ryan, A. M., and Ployhart, R. E. (2014). A century of selection. Annu. Rev. Psychol. 65, 693–717. doi: 10.1146/annurev-psych-010213-115134

Sacco, D. F., and Ismail, M. M. (2014). Social belongingness satisfaction as a function of interaction medium: Face-to-face interactions facilitate greater social belonging and interaction enjoyment compared to instant messaging. Comput. Hum. Behav. 36, 359–364. doi: 10.1016/j.chb.2014.04.004

Sauer, J., Schramme, S., and Rüttinger, B. (2000). Knowledge acquisition in ecological product design: The effects of computer-mediated communication and elicitation method. Behav. Inf. Technol. 19, 315–327. doi: 10.1080/014492900750000027

Schmidt, F. L., and Rader, M. (1999). Exploring the boundary conditions for interview validity: Meta-analytic validity findings for a new interview type. Pers. Psychol. 52, 445–464. doi: 10.1111/j.1744-6570.1999.tb00169.x

Sears, G. J., Zhang, H., Wiesner, W. H., Hackett, R. D., and Yuan, Y. (2013). A comparative assessment of videoconference and face-to-face employment interviews. Manag. Decis. 51, 1733–1752. doi: 10.1108/MD-09-2012-0642

Short, J., Williams, E., and Christie, B. (1976). The Social Psychology of Telecommunications. London: Wiley.

Silvester, J., Anderson, N., Haddleton, E., Cunningham-Snell, N., and Gibb, A. (2000). A cross-modal comparison of telephone and face-to-face selection interviews in graduate recruitment. Int. J. Sel. Assess. 8, 16–21. doi: 10.1111/1468-2389.00127

Smith, P. C., and Kendall, L. M. (1963). Retranslation of expectations: An approach to the construction of unambiguous anchors for rating scales. J. Appl. Psychol. 47, 149–155. doi: 10.1037/h0047060

Straus, S. G., and McGrath, J. E. (1994). Does the medium matter? The interaction of task type and technology on group performance and member reactions. J. Appl. Psychol. 79, 87–97. doi: 10.1037/0021-9010.79.1.87

Straus, S. G., Miles, J. A., and Levesque, L. L. (2001). The effects of videoconference, telephone, and face-to-face media on interviewer and applicant judgments in employment interviews. J. Manag. 27, 363–381. doi: 10.1016/S0149-2063(01)00096-4

Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology (1996). Heart rate variability: Standards of measurement, physiological interpretation, and clinical use. Eur. Heart J. 17, 354–381. doi: 10.1093/oxfordjournals.eurheartj.a014868

Tippins, N. T. (2009). Where is the unproctored internet testing train headed now? Ind. Organ. Psychol. Perspect. Sci. Pract. 2, 69–76. doi: 10.1111/j.1754-9434.2008.01111.x

Tippins, N. T., and Adler, S. (2011). Technology-Enhanced Assessment of Talent. San Francisco, CA: Jossey-Bass.

Truxillo, D. M., Bodner, T. E., Bertolino, M., Bauer, T. N., and Yonce, C. A. (2009). Effects of explanations on applicant reactions: A meta-analytic review. Int. J. Sel. Assess. 17, 346–361. doi: 10.1111/j.1468-2389.2009.00478.x

Van Iddekinge, C. H., Raymark, P. H., and Roth, P. L. (2005). Assessing personality with a structured employment interview: Construct-related validity and susceptibility to response inflation. J. Appl. Psychol. 90, 536–552. doi: 10.1037/0021-9010.90.3.536

Van Iddekinge, C. H., Raymark, P. H., Roth, P. L., and Payne, H. S. (2006). Comparing the psychometric characteristics of ratings of face-to-face and videotaped structured interviews. Int. J. Sel. Assess. 14, 347–359. doi: 10.1111/j.1468-2389.2006.00356.x

Wilhelmy, A., Kleinmann, M., König, C. J., Melchers, K. G., and Truxillo, D. M. (2016). How and why do interviewers try to make impressions on applicants? A qualitative study. J. Appl. Psychol. 101, 313–332. doi: 10.1037/apl0000046

Woods, S. A., Ahmed, S., Nikolaou, I., Costa, A. C., and Anderson, N. R. (2019). Personnel selection in the digital age: A review of validity and applicant reactions, and future research challenges. Eur. J. Work Organ. Psychol. 29, 64–77. doi: 10.1080/1359432X.2019.1681401

Zeier, H., Brauchli, P., and Joller-Jemelka, H. I. (1996). Effects of work demands on immunoglobulin A and cortisol in air traffic controllers. Biol. Psychol. 42, 413–442. doi: 10.1016/0301-0511(95)05170-8

APPENDIX

Appendix

Example of a past-oriented interview question used in the interview and of the corresponding descriptive anchors:

“You probably remember days like this: Your day was packed with appointments, for example school, sports, doctors’ appointments, and you have invited friends over for a dinner at your place but still needed to do the grocery shopping. Furthermore, there were several important phone-calls to be made that couldn’t be delayed. How did you deal with that situation?”

Descriptive anchors for 5 = good through 3 = average to 1 = poor answers:

5 (good): set priorities and made a written daily plan. Thought about how tasks could be summarized in a meaningful way (e.g., doctor’s office is next to the gym).

3 (average): thought about an order in which he/she wanted to do things and kept to it to a large extent.

1 (poor): simply worked through the tasks in a more or less spontaneous manner.

Keywords: selection interviews, technology-mediated interviews, interview performance, personnel selection, applicant perceptions, interview anxiety

Citation: Melchers KG, Petrig A, Basch JM and Sauer J (2021) A Comparison of Conventional and Technology-Mediated Selection Interviews With Regard to Interviewees’ Performance, Perceptions, Strain, and Anxiety. Front. Psychol. 11:603632. doi: 10.3389/fpsyg.2020.603632

Received: 07 September 2020; Accepted: 11 December 2020;
Published: 12 January 2021.

Edited by:

Carlos María Alcover, Rey Juan Carlos University, Spain

Reviewed by:

Deborah M. J. Powell, University of Guelph, Canada
Stephen M. Colarelli, Central Michigan University, United States

Copyright © 2021 Melchers, Petrig, Basch and Sauer. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Klaus G. Melchers, klaus.melchers@uni-ulm.de

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.