

Front. Psychol., 16 July 2021

The Multivariable Multiaxial Suggestibility Inventory-2 (MMSI-2): A Psychometric Alternative to Measure and Explain Supernatural Experiences

  • 1School of Psychology, Education and Sport Sciences, Blanquerna, Ramon Llull University, Barcelona, Spain
  • 2Faculty of Health, Psychology and Social Care, Manchester Metropolitan University, Manchester, United Kingdom

This paper presents the English adaptation of the Multivariable Multiaxial Suggestibility Inventory-2 (MMSI-2), a questionnaire developed specifically for the psychological assessment and prediction of anomalous phenomena. The sample consisted of 613 respondents from England (47.6% women and 52.4% men). All were adults (mean age = 34.5; standard deviation = 8.15). An exploratory factor analysis was applied, and three confirmatory factor models were fitted. Omega coefficients and test-retest designs were used for the reliability analysis. The MMSI-2 has a valid internal structure consisting of five macrofactors: Clinical Personality Tendencies (CPT), Anomalous Perceived Phenomena (APP), Incoherent Manipulations (IMA), Altered States of Consciousness (ASC), and Openness (OP). Omega coefficients for the CPT and OP factors were low but acceptable. Furthermore, test-retest results were excellent for all scales and factors. The psychological factors CPT, IMA, and ASC predicted 18.3% of the variance of anomalous experiences (APP). The authors concluded that the English MMSI-2 is a valid and reliable test for the evaluation of anomalous phenomena but recommend that subsequent research review the predictive quality of the underlying model.


Anomalous phenomena represent behaviors and perceptions that conflict with the ontological bases of current science (e.g., Gallagher et al., 1994; Kuhn et al., 2016). Examples are paranormal beliefs and experiences, such as feeling the physical presence of deceased beings or hearing unexplained noises or blows (e.g., Jinks, 2019). Other parapsychological perceptions include the anticipation of unpredictable stimuli (called precognition), mind-to-mind communication (telepathy), and mind-matter interaction (e.g., Wiseman and Watt, 2017; Cardeña, 2018). These experiences constitute rationally impossible phenomena in scientific terms (e.g., Tobacyk, 2004; Musella, 2005). Accordingly, psychology and psychiatry generally explain these behaviors and phenomena through three theoretical approaches, spanning both clinical and subclinical levels (e.g., Escolà-Gascón, 2021).

The first relates to the continuum model of psychosis (e.g., Johns and van Os, 2001; van Os et al., 2009; American Psychiatric Association, 2013). From this perspective, anomalous phenomena are explained as hallucinatory symptoms, which manifest at different levels (e.g., Stefanis et al., 2002; Shapiro et al., 2019). Less intense or attenuated hallucinations represent subclinical symptoms that lack psychopathological value within the framework of psychosis (e.g., Nordgaard et al., 2019; Fekih-Romdhane et al., 2020). The most frequent and invasive hallucinations are the most dysfunctional and define acute hallucinatory presentations in clinical terms (e.g., Kelly et al., 2020). The fact that anomalous phenomena are classified as hallucinations means that they are not real and lack ontological value (e.g., Reber and Alcock, 2020).

The second approach relates to perceptual distortion and deception (e.g., Ey et al., 1980). Although both perceptual alterations present within the spectrum of psychoses, they differ from hallucinations in that they require a sensory triggering object (e.g., Jaspers, 1993; El-Mallakh and Walker, 2010). However, perceptual deceptions do not usually have a psychopathological origin (e.g., Parker, 2006), which is why they are also called “illusions” or perceptual biases (e.g., Barberia et al., 2013, 2018). Examples include pareidolia and the Barnum effect (e.g., Belloch et al., 1995; Shermer, 2011). Numerous studies have found that these distortions of perception are common in subjects who believe in the existence of the paranormal (e.g., Matute et al., 2011; Griffiths et al., 2018; Torres et al., 2020). Likewise, in some cases they represent causal attributions or illusions that try to reduce levels of uncertainty in the face of specific problems, so that their psychological function responds to the need to seek control (e.g., Groth-Marnat and Pegden, 1998; Matute et al., 2015).

The third model is called phenomenological and cognitive because it focuses on the belief systems and meanings of the individual (e.g., Irwin, 1993, 2009, 2017; Font, 2016; Irwin and Marks, 2018; Lange et al., 2019). According to this model, human beings interact with environmental inputs through neuropsychological processes that define the sensation and perception of stimuli (e.g., Wain and Spinella, 2007). These processes conclude with the cognitive representation of perceived objects (e.g., Lee et al., 2018). The mental representation of a given content implies the conscious attribution of a category or meaning (previously learned and recorded), which allows the individual to develop a logical and relational interpretation of the phenomena that occur in objective reality (e.g., Fishbein and Ajzen, 1975).

Interpretations configure the belief system and allow for a conscious sense of experience (e.g., Irwin et al., 2013). Note that from this perspective, the concept of “belief” does not mean accepting the existence or non-existence of an object; it refers to the way of understanding environmental inputs (e.g., Drinkwater et al., 2017). Representations are variable, and each individual constructs their own comprehensive schemes about the functioning of reality (e.g., Pennycook et al., 2012). The crystallization of learned schemes forms the belief systems (e.g., Schriever, 2000; Irwin, 2009). The “paranormal” experience is then resolved psychologically by explaining it as a cognitive representation whose categories or meanings are based on contents incompatible with scientific rationalism (e.g., Simmonds-Moore, 2016).

However, psychology faces the problem that certain scientific investigations have tried to statistically contrast the occurrence of some apparently impossible experiences and obtained significant results. This is the case for precognition (e.g., Tressoldi et al., 2009; Bem, 2011; Mossbridge et al., 2012; McCraty and Atkinson, 2014; Bem et al., 2016; Mossbridge and Radin, 2018), telepathy (e.g., Moss and Gengerelli, 1967; Krippner and Ullman, 1970; Honorton, 1985; Sheldrake and Avraamides, 2009), the anomalous reception of information or mediumship (e.g., Beischel and Schwartz, 2007; Kelly and Arcangel, 2011; Sudduth, 2013; Beischel et al., 2015), and mind-matter interaction (e.g., Radin, 2006; Tressoldi et al., 2014). Studies of core “psi” experiences such as these (see Cardeña, 2018; Jinks, 2019) facilitate discussion regarding the possibility of the existence of alternative phenomena that transgress the bases of human perception (e.g., Utts, 2018; Cardeña, 2019). For this reason, these experiences are also called anomalous, since results are observed in favor of phenomena that supposedly challenge the foundations of science (see French and Stone, 2014). This is problematic because, until now, paranormal experiences had been discussed and examined only as hallucinations, perceptual deformations, and representations of meanings.

These studies were highly controversial, and currently the scientific value of the respective results is contentious (e.g., Reber and Alcock, 2020). It seems that the scientific community is divided into two factions (see Carter, 2012). Model 1 starts from the apriorism that “psi” phenomena exist and represent phenomena with ontological-scientific validity (see Bem, 2011), whereas model 2 contends that such phenomena do not exist (see also Álvarez, 2007). This last position produces research that systematically and rationally denies the existence of “psi” (e.g., Carter, 2012). Since both lines have published research, and even meta-analyses (e.g., Storm et al., 2010, 2013; Utts, 2018), supporting their perspectives, no unanimous conclusion has been reached (e.g., French and Stone, 2014). It is common for model 1 scientists not to recognize the research of model 2 scientists, and vice versa (e.g., Carter, 2012). Moreover, this controversy is so competitive that some scientists overlook formal scientific research, discussions, and databases published on this topic (see Moreira-Almeida et al., 2005; Parker, 2006). This is a serious error, since science can be harmed by erroneous research decisions. Explicitly, researchers cannot and should not ignore the controversy associated with the complexity of knowledge (e.g., Bunge, 2013). The complexity of knowledge must be resolved empirically and rationally through the application of the scientific method, not via arguments based on opinion that arise from academic beliefs and conceptions (e.g., León and Montero, 2002).

This discussion on how to interpret “psi” or anomalous phenomena directly impacts psychological evaluation (within and outside the psychopathological field) (e.g., Escolà-Gascón and Gallifa, 2020), since numerous clinical cases describe behaviors similar to “psi” phenomena (e.g., Bobrow, 2003). How does a hallucination differ from a “parapsychological” experience (related to “psi” phenomena)? The answer to this question depends on the extent to which mental health professionals believe in the existence of “psi” phenomena. Professional disinformation is also a fact, and consequently individuals rely heavily on their own opinion or belief (e.g., Pasricha, 2011). Outside the experimental context, there are no psychiatric and psychological assessment tools that address this conflict. Specifically, one can find psychometric questionnaires that measure anomalous phenomena as hallucinations (e.g., Stefanis et al., 2002; Mason and Claridge, 2006; Fonseca-Pedrero et al., 2011), perceptual deformations (e.g., Chapman et al., 1978; Bell et al., 2006), or illusions (e.g., Peters et al., 2004), or that evaluate these behaviors as if the phenomena exist (e.g., Wahbeh et al., 2019). In fact, neither the apriorism of model 1 nor that of model 2 is determinable, because in either case the Aristotelian fallacy of affirming the consequent, applied to statistical methodology, is incurred (see Pardo and Román, 2013): one would be accepting the veracity of a hypothesis (null or alternative) from causes that have been contrasted but remain uncertain because the results are contradictory.

What consequences would diagnosing a hallucination as a “psi” phenomenon, or vice versa, have for the patient? This question confronts the scientific beliefs of each professional (some offer a discourse close to model 1, others defend model 2), but in any case it also reflects a need: that of an a priori model that neither denies nor affirms the existence of “psi” phenomena. It would be useful to propose an integrative model (not an eclectic one) that allows examination of anomalous phenomena from a utilitarian, pragmatic, and empirical-statistical perspective.

This perspective could be based on the following idea: it is not the job of the psychiatrist or psychologist to contrast the empirical-experimental value of what the patient reports (e.g., Groth-Marnat, 2009). However, it is important to examine whether the anomalous phenomena perceived by the patient could be explained by other psychological indicators usually observed in these cases (e.g., Irwin, 2009; Pasricha, 2011). Ruling out the greatest possible number of psychological explanations (or variables) for what is observed in anomalous phenomena does not resolve the controversy between model 1 and model 2, but it does allow greater objectivity.

This study examined the validity of the internal structure of the empirical-statistical model of the Multivariable Multiaxial Suggestibility Inventory-2 (hereafter MMSI-2). This is a broad-spectrum questionnaire specialized for the evaluation of the psychological foundations of anomalous phenomena, which gathers up to 16 psychological variables predictive of this class of phenomena (e.g., Escolà-Gascón, 2020a).

The MMSI-2 starts from four logical-rational assumptions: (1) a perceptual alteration is not, in itself, a hallucination, a perceptual deformation, a cognitive bias, or a fraudulent invention. (2) The measurement and quantification of other psychological variables (such as structured sources of information) is required to contrast the hallucinatory, perceptive, cognitive, and fraudulent value of a perceptual alteration. (3) Even if certain psychological variables do not statistically explain certain anomalous phenomena, this does not mean that these phenomena have a “parapsychological” or “supernatural” origin. Finally, (4) anomalous phenomena can be statistically explained or unexplained, but that does not imply that they are “inexplicable” or “explainable” phenomena for science (the latter question belongs to the philosophy of science and not to the scientific method) (see Escolà-Gascón, 2020b).

In clinical practice focused on the psychiatric evaluation of cases, the experimental method is not applied, and self-report techniques mostly inform diagnostic decisions (see Groth-Marnat, 2009). Therefore, what should be offered is not only basic research focused on how to “export” the experimental method to clinical practice, but also on “importing” or developing the necessary evaluation systems that allow objective and effective evaluative decisions (that is, evidence-based decisions).

The objective of this study was to offer a useful tool for determining, from the empirical-statistical evidence, whether there are objective reasons to suspect the presence of psychological indicators that could explain the anomalous experiences reported by the patient, without assuming a priori the existence or non-existence of this class of “supposed” phenomena.

Materials and Methods

Description of the Sample

The sample consisted of 613 participants (47.6% women and 52.4% men). All were adults (mean age = 34.5; standard deviation = 8.15) who agreed to participate voluntarily in the research and declared no official psychiatric history. The subjects came from three English locations: 33.8% resided in Portsmouth, 32.1% in Worthing, and 34.1% in Brighton. In relation to educational level, 32.8% completed secondary education, 36.4% also attended vocational training, and 30.8% reached university studies. The participants signed an informed consent form that specified the objectives of the study and guaranteed that the data would be treated completely anonymously. Those who accepted (77.6%) offered their first name and email as the only references to contact them and thus participate in the test-retest design (see section Statistical Analysis Applied). Finally, two inclusion and exclusion criteria were considered: (1) all subjects had to be adults over 18 years of age, and (2) no subject could have an official psychiatric history. All participants met these two conditions.


This research used a correlation-based design grounded in the analysis of self-report questionnaires. Specifically, during the summer months between 2016 and 2019, the research team traveled to England for educational and work reasons unrelated to this research. During these stays, the Multivariable Multiaxial Suggestibility Inventory-2 (MMSI-2), originally developed in Spain by Escolà-Gascón (2020a), was administered digitally. The translation was carried out by the author of the questionnaire and was subsequently reviewed by English-speaking health professionals, both American (specifically from the state of California, USA) and British (specifically residents between Brighton and Worthing) (for more information, see the Acknowledgments section).

Originally, the translation was done as a complement to the Spanish version (in the hypothetical case that other foreign professionals wanted to use the MMSI-2). However, the possibility of traveling to England and the United States led to the mobilization of the necessary resources to prepare its application in the respective countries. It was then that professionals who reviewed the translations and collaborated with the research were contacted. The English application of the MMSI-2 during the months and years specified above was carried out in parallel with the Spanish application, which is also in the process of publication.

It should be taken into account that the items of the MMSI-2 (in their original version) were successfully subjected to peer validation; this allowed for the elimination of 49 of the 223 items belonging to the first version of the MMSI. Likewise, before proceeding with the application of the 174 definitive items, an unpublished pilot study was conducted that identified errors that had to be corrected to optimize the initial factorial solutions. These errors consisted of excessively ambiguous wordings that prevented obtaining the minimum variability necessary for the application of any statistical analysis, and they were corrected. The 174 final items of the MMSI-2 were also distributed in such a way that it was possible to detect whether the subject answered the questions randomly. A scale was developed (called Inconsistencies, or K) with 12 statements expressing rationally impossible contents (e.g., “Little Red Riding Hood is a real character”). These items were positioned from item 52 onward, since they were intended to prevent not only random responses but also the fatigue effect associated with this type of extensive test (see Barbero et al., 2015).
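The K-scale screening described above can be sketched as a simple rule. This is an illustrative reconstruction, not the authors' scoring code: the item positions and the cutoff are hypothetical.

```python
def flag_inconsistent(responses, k_items, cutoff=3.0):
    """Flag a respondent whose mean agreement with the rationally impossible
    K-scale statements (Likert 1-5) exceeds the cutoff, suggesting random or
    careless responding. The 3.0 midpoint cutoff is a hypothetical choice."""
    k_scores = [responses[i] for i in k_items]
    return sum(k_scores) / len(k_scores) > cutoff

# A respondent who strongly agrees with impossible statements is flagged
answers = {52: 5, 60: 4, 75: 5}  # hypothetical item positions and responses
print(flag_inconsistent(answers, [52, 60, 75]))  # → True
```

In practice the published K scale aggregates 12 such items; the three-item dictionary above is only for illustration.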

This study used confirmatory factor analysis (hereafter CFA) to validate the empirical model of the 16 primary dimensions of the MMSI-2, which were obtained through exploratory factorial techniques. It is precisely for this reason that the underlying structural model of the MMSI-2 can be called “empirical-statistical”: unlike most questionnaires in this context of anomalous phenomena, its scales were not defined from a hypothetical-deductive theory. Only the published scientific evidence (not the scientific apriorisms discussed in the theoretical framework) and the exploratory factor analyses applied to the items of the Spanish version were considered (see Escolà-Gascón, 2020a).

Given that the intention was to test the construct validity of the 16 dimensions of the MMSI-2, only the direct scores for each scale of the questionnaire were recorded in the raw data matrix. The individual responses for each item were not saved because the application was digital and the scoring was automated, in order to save time in the manual coding of the scores and to increase the sample size. It should be noted that no subject showed missing values, so the sample did not undergo purifications that substantially reduced its size. The conceptual and methodological development of the MMSI-2 can be consulted in more detail in Escolà-Gascón (2020a,b).

Description of the Instrument

The English version of the Multivariable Multiaxial Suggestibility Inventory-2 (MMSI-2) was used, which consists of 174 items, whose responses are coded using a Likert scale that fluctuates from the value 1 (“completely disagree”) to 5 (“totally agree”). Its items are distributed in the following 16 first-order scales: Inconsistencies (K), Lies (L), Fraud (F), Simulation (Si), Neurasthenia (Nt), Substance Use (Cs), Suggestibility (Su), Thrill-Seeking (Be), Histrionism (Hi), Schizotypy (Ez), Paranoia (Pa), Narcissism (Na), Anomalous Visual/Auditory Phenomena (Pva), Anomalous Tactile Phenomena (Pt), Anomalous Olfactory Phenomena (Po), and Anomalous Cenesthetic Phenomena (Pc). The exploratory factor analyses of the Spanish version indicated that these 16 scales could be grouped into 4 higher order factors: Clinical Personality Tendencies (CPT), Anomalous Perceived Phenomena (APP), Incoherent Manipulations (IMA), and Altered States of Consciousness (ASC). The MMSI-2 presents statistical evidence in its Spanish version that supports the validity and reliability of the test, even in its reduced version (the MMSI-2-R) (see Escolà-Gascón, 2020a,b; Escolà-Gascón and Gallifa, 2020).

Statistical Analysis Applied

Data analysis used the JAMOVI program (see The Jamovi Project, 2019) and R (see R Core Team, 2018).

Three confirmatory factor analyses (CFAs) were fitted using the maximum likelihood estimation method and were based on: (1) the original Spanish version; (2) the second-order factors extracted from a previous exploratory factor analysis (hereafter EFA); and (3) the predictive value of the second-order factors on the anomalous phenomena themselves (in this way, the underlying empirical-statistical model could be tested). In the EFA, the criterion based on minimum unweighted residuals was used as the extraction method, since it does not require the a priori calculation of item communalities (see Mulaik, 2018). The parallel analysis technique was used to determine the number of factors to extract (e.g., Reise et al., 2000) because it is a more precise and effective method than the traditional Kaiser criterion (see Kline, 1999). Direct oblimin rotation was also applied to optimize the extracted solution. Orthogonal rotations were not applied, as they are unrealistic criteria in the field of social sciences, since they constrain the correlations between factors to 0 (see Abad et al., 2015). Logically, the rotation was applied only to the previous EFA, not to the CFAs.
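The factor-retention criterion named above can be sketched in a few lines. This is a generic reconstruction of Horn's parallel analysis with random normal data, not the authors' JAMOVI/R pipeline; the synthetic data are purely illustrative.

```python
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Horn's parallel analysis: retain as many factors as there are observed
    eigenvalues exceeding the mean eigenvalues of random normal data of the
    same shape (computed from the correlation matrix in both cases)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand_mean = np.zeros(p)
    for _ in range(n_iter):
        r = rng.standard_normal((n, p))
        rand_mean += np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    rand_mean /= n_iter
    return int(np.sum(obs > rand_mean))

# Synthetic scores driven by two latent factors (three indicators each)
gen = np.random.default_rng(1)
latent = gen.standard_normal((500, 2))
X = np.hstack([latent[:, [0]] + 0.5 * gen.standard_normal((500, 3)),
               latent[:, [1]] + 0.5 * gen.standard_normal((500, 3))])
print(parallel_analysis(X))
```

With this two-factor synthetic data the procedure recovers two factors, mirroring how the scree plot in Figure 1 is read: observed eigenvalues are retained only while they stay above the random-data curve.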

Reliability was examined for each macrofactor using internal consistency coefficients based on factor loadings. These differ from the classic Cronbach's alpha in that they do not take into account the number of items in each factor; instead, the factor loadings obtained for each grouped variable are used (e.g., Barbero et al., 2015). For this reason, they are very useful coefficients in the multidimensional measurement of internal consistency (see Trizano-Hermosilla and Alvarado, 2016). There are different coefficients based on factor loadings (see Heise and Bohrnstedt, 1970), but this study used the version proposed by McDonald (1999), which can be formulated as follows:

ω_t = (Σ_j λ_j)² / [(Σ_j λ_j)² + Σ_j (1 − λ_j²)] = (Σ_j λ_j)² / [(Σ_j λ_j)² + Σ_j ψ_j]    (1)


where:

λ_j is the factor loading of the item-variable j,

λ_j² is the communality of the item-variable j, and

ψ_j is the unique variance of the item-variable j.

This equation is integrated into the JAMOVI program. It should be noted that reliability is applied to second-order factors, which means that the subscripts j will not be the items but the scales themselves.
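Equation (1) can be computed directly from standardized loadings. A minimal sketch, assuming hypothetical loadings rather than values from this study:

```python
import numpy as np

def mcdonald_omega(loadings):
    """McDonald's omega_t from standardized factor loadings (Equation 1):
    (sum of loadings)^2 / [(sum of loadings)^2 + sum of unique variances]."""
    lam = np.asarray(loadings, dtype=float)
    common = lam.sum() ** 2
    unique = (1.0 - lam ** 2).sum()  # psi_j = 1 - lambda_j^2
    return common / (common + unique)

# Hypothetical standardized loadings of four scales on one macrofactor
print(round(mcdonald_omega([0.7, 0.6, 0.8, 0.5]), 3))  # → 0.749
```

Since reliability here is computed at the second-order level, the loadings passed in would be those of the scales on their macrofactor, as the paragraph above notes.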

Given that the coefficient ω_t was applied only to the macrofactors and not to the primary scales, test-retest trials (which measure the longitudinal consistency of the scores) were applied to the primary scales. These tests could only be applied to 23.2% of the sample (N = 142), and 160 total days elapsed between the two applications (minimum number of days elapsed since the first application = 150; number of flexible days = 10). The number of flexible days refers to the time each participant had to respond to the second application of the questionnaire; after these 10 days, the participant could no longer answer it. Of the 77.6% of subjects who provided contact details, 54.4% did not answer the second application, left the study, or responded outside the deadline. A total of 22.4% of the participants did not want to give their email for follow-up and so could not answer the second application. The analyses were performed with Student's t-tests and Pearson's linear correlation coefficients. Since the aim was to retain the null hypothesis in the mean contrasts and the alternative hypothesis in the correlations, non-parametric tests (Mann-Whitney U-test) would also be applied if any Student's t-test yielded significant differences. In all analyses, a risk of error of 1% was applied.


Exploratory Factor Analysis

Before testing the different models of the internal structure of the MMSI-2, we explored whether a factorial solution could be extracted from the recorded raw scores independently of the theoretical background. Table 1 shows the descriptive statistics for each scale. Table 2 shows the exploratory factorial solution obtained, which is formed by five factors that together explain 46.7% of the variance. Considering Figure 1, the crossing of the curves indicates that the best and most stable solution is the one that retains up to five factors.


Table 1. Descriptive statistics of MMSI-2 scales (N = 613).


Table 2. Exploratory factor analysis with oblimin rotation.


Figure 1. Scree plot of parallel analysis.

The first factor coincides with the factor of the original version called Incoherent Manipulations (IMA) and is composed of the K, L, F, and Si scales. The second also coincides with previous research; it is called Clinical Personality Tendencies (CPT) and is characterized by the Hi, Ez, Pa, and Na scales. The third groups the Pva, Pt, Po, and Pc scales, which is equivalent to the second-order factor called Anomalous Perceived Phenomena (APP). The fourth factor groups the Be and Su scales, a fact that differs from the original statistical justification and proposes the formation of a new second-order factor, which can be called Openness (henceforth OP). Both suggestibility (Su) and thrill-seeking (Be) represent facets of personality that describe the subject's predisposition to seek out new experiences and tolerate new emotional states (e.g., Costa and McCrae, 2008). This seems to coincide with the “Big Five” model of personality, researched and replicated in multiple studies (see Goldberg, 1993), and will be analyzed in the discussion. The last factor is called Altered States of Consciousness (ASC) and includes the Nt and Cs scales. This coincides with previous validations of the MMSI-2.

Confirmatory Factor Analysis

From the previous results, three confirmatory models were fitted: (1) the exploratory-empirical model, based on the previous EFA; (2) the original theoretical model, whose solution does not include the OP factor (in total, it groups the four factors described in section Description of the Instrument); and (3) the alternative model inferred from the first and second models. This last proposal examined the weights with which the anomalous phenomena evaluated by the APP factor can be predicted by the IMA, CPT, and ASC factors (this idea is also proposed in the original statistical justification). The weights and standardized correlations for each model are shown in Figures 2–4.


Figure 2. Trace graph for the exploratory-empirical model (5-factor model).


Figure 3. Trace graph of the Spanish theoretical model (4-factor model).


Figure 4. Trace graph of the alternative model (5-latent-variable model with 3 factors with prediction effects on APP).

To contrast whether the estimated parameters successfully reproduce the variance-covariance matrices extracted from the raw matrix, the fit indices specified in Table 3 were used. It should be noted that the risk of error was set at 1% and that the Chi-square statistic is highly sensitive to sample size (see Brown, 2015). Table 3 also shows the results of the fit indices for each model tested (see also Figures 2–4).


Table 3. Model fit indices for the exploratory and theoretical model.

Both the empirical-exploratory model and the alternative model allow the null hypothesis of goodness of fit to be retained according to the Chi-square statistic. In fact, for these two solutions, the AGFI (adjusted goodness of fit index) and the RMSEA (root mean square error of approximation), which take into account the degree of parsimony of the fitted model, yielded favorable results, and the theoretical model offered a more parsimonious solution by including fewer parameters (see Figure 3). Unlike the comparative indices, the information criteria, namely the AIC and CAIC (Akaike information criterion and consistent Akaike information criterion) and the Bayesian BIC (Bayes information criterion), quantify the discrepancies between the variance-covariance matrix estimated from the parameters and the empirical variance-covariance matrix attributed to the data. According to these indices, the theoretical model showed the highest values, which means it is the model with the greatest discrepancy relative to the other two.
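For concreteness, the RMSEA mentioned above can be computed with its standard formula from the model Chi-square, its degrees of freedom, and the sample size; the input values below are hypothetical, not those in Table 3.

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation:
    sqrt(max(chi2 - df, 0) / (df * (n - 1))).
    Values near or below ~0.05 are conventionally read as good fit."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical fit: chi-square 210 on 98 degrees of freedom, N = 613
print(round(rmsea(210.0, 98, 613), 3))  # → 0.043
```

The max(…, 0) term shows why RMSEA rewards parsimony: any Chi-square at or below the degrees of freedom yields an RMSEA of exactly 0.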

The exploratory-empirical model yielded the best fit indices, followed by the alternative model. However, the alternative model predicted anomalous phenomena (APP) with standardized regression weights below 0.3. The CPT, IMA, and ASC factors predicted 18.3% of the variance of the anomalous phenomena evaluated by APP, which is a substantially low percentage compared to the original version (>50%) (see Escolà-Gascón, 2020a). Considering these results, in the Anglo-Saxon context it is appropriate to adjust the construct validity of the MMSI-2 to the 5-factor solution rather than the 4-factor solution.
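The explained variance reported here can be recovered from the standardized regression weights together with the correlations among the predictors. The sketch below uses hypothetical weights below 0.3 (as reported), not the estimates in Figure 4.

```python
import numpy as np

def explained_variance(beta, phi):
    """R^2 of an endogenous latent variable in a standardized structural
    model: beta' * Phi * beta, where beta holds the standardized path
    weights and Phi the correlation matrix of the predictors."""
    beta = np.asarray(beta, dtype=float)
    phi = np.asarray(phi, dtype=float)
    return float(beta @ phi @ beta)

# Hypothetical weights for three predictors, assumed uncorrelated (Phi = I)
beta = [0.30, 0.20, 0.25]
print(round(explained_variance(beta, np.eye(3)), 4))  # → 0.1925
```

With correlated predictors the off-diagonal terms of Phi add to (or subtract from) R², which is why weights below 0.3 can still jointly account for a share of variance in the 18% range.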

Reliability Analysis

In this study, the reliability of the scales and factors was examined using two types of methods: on the one hand, McDonald's omega coefficients measured the internal consistency of the second-order factors; on the other, Pearson's correlation coefficients were used as reliability estimators between two equivalent but temporally separated applications. Table 4 shows the descriptive statistics for the factors of the extracted solutions and the McDonald omega coefficients.


Table 4. Descriptive statistics and reliability coefficients for second order factors.

The omega coefficients were not especially high for most factors, with the exception of IMA and CPT, whose indices are above 0.7. When the internal consistency of CPT was examined with the Su and Be scales included in this factor (see Figure 3), the factor obtained a poorer result (<0.55). The negative correlations between the Be-Su scales and the other indicators could explain this unexpected change; this is suspected because OP correlates negatively with CPT. This hypothesis can be tested by examining the correlations between Be-Su and the other scales that make up CPT in the theoretical model. Figure 5 shows a heatmap relating Be-Su to the CPT factor scales.


Figure 5. Heatmap and correlations between CPT scales (including Be and Su).

Tables 5, 6 contain the test-retest results for each scale and factor (this subsample is limited to the participants who responded promptly to the second application of the MMSI-2). In this subsample, 142 subjects collaborated (30.9% men and 23.3% women). A total of 30.9% of the participants resided in Worthing, 10.3% in Portsmouth, and 13% in Brighton. A total of 16.4% of the subjects completed secondary education, 20.2% received vocational training, and 17.6% attended university studies.


Table 5. Test re-test coefficients and t-tests for paired groups.


Table 6. Test re-test coefficients and t-tests for paired groups (continuation Table 5).

These results did not show significant changes in the average scores for each scale and factor. All the variables included demonstrated significant, positive linear correlations, and in most cases these were also high. The K scale showed the lowest correlation (r_k = 0.613). Only the APP factor had a critical level close to 0.01, but it was still not significant (p = 0.087). This indicates that the MMSI-2 examines behaviors whose longitudinal variability is reliable. Therefore, the high correlation indices and the non-significant critical levels support the stability of the scores and their reliability.


This paper outlines the English adaptation of the MMSI-2, including examination of its internal validity and scale reliability. Regarding validity, the analyses indicated that the MMSI-2 adequately represented subclinical psychological constructs and anomalous perceptions related to parapsychology. Concerning internal reliability, although some McDonald omega coefficients fell below the recommended lower limit of 0.6 (see Hair et al., 2010), test-retest trials presented very favorable results for both dimension and factor scores. Thus, the adapted MMSI-2 demonstrated satisfactory validity and reliability.

Conceptual Analysis Derived From the Theoretical Background

Before delving into the psychometric and statistical aspects of the MMSI-2, it is worth reflecting on the need for and usefulness of an instrument such as this in psychological evaluation, and specifically in the field of anomalous phenomena. It is not incorrect to state that, at least for now, it is not possible to prove or verify the existence of “psi” phenomena (see French and Stone, 2014). Therefore, outside the experimental methodology, it is also not possible to test whether a given anomalous experience is truly a hallucination, a perceptual deformation, or a simple invention of the patient. In this context, the MMSI-2 represents a useful instrument for assessing anomalous experiences because it includes the main psychological indicators that predict them. For example, a person who has had a “psi” experience and obtains high levels of schizotypy (Ez) may have experienced an attenuated hallucination of a psychotic nature rather than a delusion (Simmonds-Moore et al., 2019). However, if this person scored low on schizotypy and the other subclinical variables, it is possible that he or she had a non-pathological delusion.

As noted in the introduction, questionnaires that measure anomalous phenomena are scales that start from the apriorism belonging to model 1 (e.g., Wahbeh et al., 2019) or model 2 (e.g., Stefanis et al., 2002). In reality, it is neither correct nor objective for the investigations of model 1 to affirm that there is scientific evidence in favor of parapsychological phenomena. Nor is it admissible for the investigations of model 2 to commit the following naturalistic fallacy (see Feldman, 2019):

- parapsychological phenomena cannot exist because they are impossible at the scientific level (proposition A);
- a subject reports a parapsychological phenomenon to me (proposition B); therefore ...
- what the subject reports is a hallucination (fallacious conclusion).

In proposition A alone one can already observe the Aristotelian fallacy of affirming the consequent: one cannot verify the “impossibility” of a phenomenon by contrasting hypotheses (see Popper, 2008). The real basis of the MMSI-2 lies precisely at this point: how to know whether an anomalous experience is a hallucination, an illusion, an interpretation, or a fraud. Following Escolà-Gascón's (2020b) criteria, an anomalous experience may be a hallucination when the participant obtains high scores (typical scores >50 or 60) on the K, Ez, Pa, and Cs scales; an illusion or perceptual deception when the scores are elevated on the Hi, Na, Nt, Su, and ASC scales; a subjective interpretation when the Si, Be, and Hi scales score very high; and a fraud or a lie when the scores on the K, L, F, and Si scales are high.
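For readers who wish to operationalize these criteria, the following sketch encodes them as simple decision rules. The T > 60 cut-off, the example profile, and the returned labels are illustrative assumptions, not a validated scoring algorithm:

```python
# Hypothetical decision rules following Escolà-Gascón's (2020b) criteria.
# Scale names match the text; the cut-off of T > 60 is an assumption.

def classify_anomalous_experience(t_scores, cutoff=60):
    """Return tentative labels for an anomalous experience from MMSI-2 T-scores."""
    high = {scale for scale, t in t_scores.items() if t > cutoff}
    labels = []
    if {"K", "Ez", "Pa", "Cs"} <= high:
        labels.append("possible hallucination")
    if {"Hi", "Na", "Nt", "Su", "ASC"} <= high:
        labels.append("possible illusion / perceptual deception")
    if {"Si", "Be", "Hi"} <= high:
        labels.append("possible subjective interpretation")
    if {"K", "L", "F", "Si"} <= high:
        labels.append("possible fraud or lie")
    return labels or ["no criterion met"]

# Illustrative profile: elevated K, Ez, Pa, and Cs only
profile = {"K": 72, "Ez": 65, "Pa": 68, "Cs": 63,
           "Hi": 45, "Si": 40, "L": 38, "F": 41}
result = classify_anomalous_experience(profile)
print(result)  # -> ['possible hallucination']
```

In practice, any such rules would need the ROC-based cut-off validation discussed later in this paper before clinical use.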

The fact that a statistically valid model can be fitted to the structure of the MMSI-2 suggests that an alternative model is possible. This is discussed below.

Methodological Analysis of the Results

The exploratory results of the initial EFA seem to coincide with the internal structure of the original statistical validation of the MMSI-2 (see Escolà-Gascón, 2020a,b; Escolà-Gascón and Gallifa, 2020). Contrary to expectations, the OP factor was novel, because in the Spanish factorial solutions only four macrofactors were retained. However, the grouping of scales offered by OP can be extrapolated to the classical theories of personality based on the “Big Five” model (see Goldberg, 1993). Both Be and Su are classified in multiple statistical and theoretical models of personality as two facets belonging to the Openness dimension (e.g., Costa and McCrae, 2008). The MMSI-2 is not a psychopathological test, and its items were written so that they express attenuated subclinical contents at different degrees or levels that remain within the normative, non-clinical range. This means that some scales of the MMSI-2 may have a certain correspondence with the factorial models of personality. For this reason, the Be and Su scales may compose the Openness (OP) dimension. In future research, this could happen again with other MMSI-2 scales.

The scientific literature agrees that subjects who believe in the existence of the paranormal tend to present elevated traits in both suggestibility and sensation seeking (e.g., Jinks, 2019). However, in the results obtained, the OP factor correlates negatively with the other MMSI-2 factors. This seems unexpected, since in other studies the factors IMA, CPT, ASC, and APP correlated positively with paranormal beliefs (see Irwin, 2009). In fact, the scales K, L, F, Si, Hi, Ez, Pa, Na, Cs, Nt, Su, and Be of the MMSI-2 were obtained empirically (using EFAs and CFAs), and their items measure behaviors that, according to the scientific literature (see French and Stone, 2014, for a review), are common in subjects who believe in the paranormal and claim to have had anomalous experiences (see also Escolà-Gascón, 2020b). If this is so, it seems strange that the correlations between OP and the other factors are negative, especially the covariation with APP. In the Spanish version, the correlations between all factors are positive. This raises the question of whether the predictor variables of paranormal beliefs and experiences could have different effects according to the sociocultural environment from which the participants come. Is it likely that the culture and the educational model promote different interpretations of the items in these two scales? As suggested by Brown (2015), to assess this, the factorial invariance of the 5-factor model must be analyzed by comparing two equivalent samples from different cultural environments or countries.

An important detail is that in the three models examined the factors were related to each other, which seems to indicate that the anomalous phenomena (APP) do not operate independently of the other variables. However, although the models in Figures 2, 4 provide two correlations close to zero (one between ASC and IMA and the other between ASC and CPT), the model in Figure 3 only maintains this trend for the relationship between ASC and IMA; there, the standardized covariance between ASC and CPT is equal to 0.104. The fact that the 4-factor model shows a higher correlation between CPT (Clinical Personality Tendencies) and ASC (Altered States of Consciousness) can be explained by the inclusion of the Su and Be scales in the CPT factor. As shown in Figure 5, these two scales correlate negatively with the other dimensions of the same factor. Therefore, it is likely that this compromises the internal consistency of CPT (see Table 4).

Nevertheless, at the same time, it could generate an increase in the covariance between CPT and ASC. This would have an impact on the interpretation of CPT. On the one hand, in Figures 2, 4, CPT could indicate attenuated clinical tendencies of the personality, by including behaviors that are not necessarily psychopathological but whose qualitative content is included in the clinical classification systems (see the Diagnostic and Statistical Manual of Mental Disorders, DSM-5). On the other hand, in Figure 3, the interpretation of CPT is more complex, since it could describe non-pathological contents, but Be and Su would have a predisposition toward the clinical because they are included in the same group as Hi, Ez, Pa, and Na. Whereas some conventional personality questionnaires - for example, the NEO-PI-R of Costa and McCrae (2008) - are not applicable in clinical evaluation, other personality questionnaires - for example, Cattell's (1946) 16PF - do have value in psychopathological terms (see Karson et al., 2003). This allows us to consider the possibility that the MMSI-2 may also have clinical utility, especially for the CPT and OP factor scales. It would be advisable to test the 4- and 5-factor models in non-clinical samples (without a psychiatric history) and clinical samples (with a formally diagnosed history), with the objective of analyzing the factorial invariance of each of the factors and their scales. Is it possible that OP represents a different construct when it is applied in a clinical sample?

It should also be questioned why the ASC, CPT, and IMA factors predict only 18.3% of the variance of anomalous phenomena (APP). In the original version, and using the same factors, this explained variance increases substantially to 51.2% (see Escolà-Gascón, 2020a). Again, this suggests that the interpretation of the MMSI-2 scales may have different connotations when the cultural environment changes, although this does not have to directly affect the construct validity of the MMSI-2. To test the possibility of biased and differing interpretations (in addition to examining factorial invariance), analysis of Differential Item Functioning (hereafter, DIF) could also be considered (see Abad et al., 2015). Indeed, within the context of parapsychological beliefs and experiences, it would not be the first time that a test presents biases or DIF when comparing the responses of subjects who do and do not believe in the existence of the paranormal (see Lange et al., 2000). However, if this proposal were applied, the specific items of the scales to be evaluated should be selected, since analyzing the presence or absence of DIF for all 174 items of the MMSI-2 would be logistically costly. As an alternative, logistic regression could be used, with the direct scores of the scales as predictor variables.
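The logistic-regression approach to DIF screening mentioned here can be sketched as follows: an item response is regressed on the trait level and on group membership, and a clearly non-zero group coefficient flags uniform DIF. The data are simulated and the plain-numpy estimator is an illustrative assumption, not the study's procedure:

```python
import numpy as np

rng = np.random.default_rng(7)

def fit_logistic(X, y, lr=0.5, steps=4000):
    """Plain-numpy logistic regression via batch gradient ascent on the
    log-likelihood; returns [intercept, coefficients...]."""
    X = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

# Simulated DIF screening for one dichotomized item: 'theta' is the trait
# level (e.g., a standardized scale score), 'group' codes believers (1) vs.
# non-believers (0), and the item is systematically easier for group 1.
n = 2000
theta = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)
true_logit = theta + 1.0 * group               # uniform DIF: a group shift
item = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

w = fit_logistic(np.column_stack([theta, group]), item)
print(f"group coefficient: {w[2]:.2f}")        # clearly non-zero -> flags DIF
```

In a real application, the significance of the group coefficient (and of a trait-by-group interaction, for non-uniform DIF) would be tested with likelihood-ratio tests.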

Regarding the reliability indices, it should be noted that the internal consistency of the ASC and OP macrofactors is not high, because their omega coefficients were close to the 0.6 cut-off point. IMA offers the highest value, and the other factors yield acceptable or questionable results. This calls into question the accuracy of the factor scores in the individual interpretation of profiles. Thus, although norms have already been defined for the Spanish population, the English standardization of the scores is not recommended until higher internal consistency indices are obtained for the ASC and OP macrofactors. For the other scales, dimensions, and factors, a first proposal for the standardization of direct scores could be initiated.

Although the low reliability indices based on internal consistency already represent a limitation for the use of the MMSI in the professional practice of psychological evaluation, in statistical terms, the reliability of the questionnaire can be accepted if the test-retest trials applied are taken into account. Both internal and longitudinal consistency represent two empirical markers of the same psychometric property: reliability. It would be ideal to accept both types of reliability (internal and longitudinal), but the acceptance of one already confirms the reliability of the test in statistical terms (although the good results of one do not replace the shortcomings of the other) (see Abad et al., 2015).

The K scale showed the lowest correlation (although it was still higher than 0.6). This may be due to the type of content and items included in this dimension: it is a scale that examines the presence of logical inconsistencies in the responses of the participant. This makes it possible to know (1) whether the evaluated subject answers the test questions randomly, (2) whether he or she correctly understands the statements, and (3) his or her collaborative predisposition toward the evaluation. These three characteristics could yield a temporal variability in K that is more independent of the other scales. It is possible that this affected the covariability of the scores and, therefore, their temporal consistency. However, the correlation obtained for this scale is acceptable for this type of test (see Abad et al., 2015).

Criticisms and Limitations

At least in the sample used, it can be stated that the macrofactors were positively related to APP. That is, as a subject perceives anomalous phenomena, he or she is also likely to present correlative traits in the other factors of the MMSI-2. Depending on which traits the subject presents, the perceived anomalous phenomena could be characterized as hallucinatory phenomena, perceptual deformations, cognitive-social biases, belief systems, or unexplained behaviors. In this context, the psychological attributes related to psychosis (e.g., the Ez, Pa, K, and Cs scales of the MMSI-2) (see Fonseca-Pedrero et al., 2011), combined with high scores in APP, support the hallucinatory value of the perceived parapsychological phenomena (and, therefore, they should no longer be called “anomalous”). The same hypothetical logic could be applied to perceptual deformations and the other typologies that define the “supposed” anomalous phenomena.

However, with such a low percentage of explained variance, it seems advisable to offer decision criteria specifying with which combinations of scales it would be possible to discriminate between a hallucination, a bias, a perceptual deformation, etc. It should not be forgotten that the main objective of this research was the psychometric examination of the validity and reliability of the English-adapted MMSI-2. Therefore, the predictive quality of the ASC, CPT, and IMA factors on APP should be tested using cut-off points and analysis of receiver operating characteristic (ROC) curves.
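A minimal sketch of the proposed ROC analysis, on assumed (simulated) data: the area under the ROC curve (AUC) estimates how well a composite predictor discriminates subjects classified as frequent perceivers of anomalous experiences from the rest:

```python
import numpy as np

rng = np.random.default_rng(1)

def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the probability that a random positive case outscores a random negative."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical data: 'label' marks subjects reporting frequent anomalous
# phenomena; 'predictor' stands for a composite of CPT + ASC + IMA scores
# carrying a real but modest signal (all values simulated).
label = rng.integers(0, 2, 500)
predictor = 0.8 * label + rng.normal(0, 1, 500)

auc_value = auc(predictor, label)
print(f"AUC = {auc_value:.2f}")
```

Cut-off points would then be chosen on the ROC curve (e.g., by maximizing Youden's index) before any diagnostic use of the scale combinations.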

Taking into account that subjects who believe in the “supernatural” tend to present higher levels than non-believers on the different scales that measure hallucinations and perceptual deformations (see Matute et al., 2011; Griffiths et al., 2018; Torres et al., 2020; Wright et al., 2020), a possible way to increase the covariation between the macrofactors would be to replicate the CFA for the 5-factor model only with subjects who believe in the existence of the paranormal. It seems likely that the participants of this sample do not believe in the existence of the paranormal with the same intensity as the subjects of the Spanish samples. This could generate a bias in the APP scores that would harm the correlations with the other factors. Thus, “belief in the existence of the paranormal” could be an extraneous variable that should be controlled in future research.

Nevertheless, the construct validity of the model in Figure 2 offers sufficient reasons to continue reviewing the psychometric properties of the MMSI-2 and the application of the hypothetical empirical-statistical model as an explanation of anomalous phenomena in parapsychology, but from a rational and psychological perspective.

A relevant limitation is also related to the theoretical interpretation of the OP factor. The previous subsection already mentioned the possibility - although it is not a proven fact - that in English culture the scales and macrofactors could have meanings different from the social interpretations of Spanish-speaking culture. This would involve reviewing the factorial invariance and the possibility of DIF in some of the items or scales of the MMSI-2. It should be taken into account that the OP factor represents a first hypothetical classification extracted from the statistical evidence of the applied EFA, but it should not be accepted as a conclusive macrofactor, since in other EFAs and CFAs this macrofactor was not retained.

The fact that the responses of the subjects to each item of the MMSI-2 were not saved, and that the scores were calculated directly, required estimating the internal consistency only for the second-order factors or macrofactors. This way of proceeding is not ideal, but neither is it incorrect at the psychometric level (e.g., Arribas, 2011). In reality, the probability that the measurement model of a test obtains a good fit increases when the structural model applied to the scales also shows a correct fit (see Brown, 2015). This is due to mathematical reasons (see Mulaik, 2018) and because the structural model validates not only the quality of the measurements - whose test scales would be incorporated into the structural equations as observable variables - but also the underlying theoretical construct. It is a top-down methodological process: valid constructs that form valid theories must offer guarantees that support the validity of the respective measurements (see Gorsuch, 1983).

Therefore, in the MMSI-2, the individual responses were originally recorded in the online software but were not saved, because the aim was not to examine the ordinal responses to the items but the factorial and structural model of the test itself. In addition, if the responses are ordinal, they should not, technically, be analyzed using factorial procedures that require the variables to be quantitative (e.g., Mulaik, 2018). Working directly with the scales avoids the dilemma of how to treat this class of situations, in which it is debated how to handle quantitatively variables whose values are discrete-ordinal. A possible solution would have been the use of polychoric correlations applied to the responses to the items, but, as has already been justified, this was not part of the objective of this research.
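For reference, the polychoric solution mentioned here can be sketched as a two-step estimator: thresholds are derived from the marginal category proportions, and the latent correlation is then chosen to maximize the likelihood of the observed contingency table. This is an illustrative implementation on simulated data, not the procedure used in this study:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(3)

def polychoric_grid(x, y, n_grid=39):
    """Two-step polychoric estimate for two ordinal variables coded 0..k-1:
    thresholds from marginal proportions, then a grid search on rho that
    maximizes the likelihood of the observed contingency table."""
    def thresholds(v):
        cum = np.cumsum(np.bincount(v))[:-1] / len(v)
        return np.concatenate([[-8.0], norm.ppf(cum), [8.0]])  # ±8 stands in for ±inf
    a, b = thresholds(x), thresholds(y)
    counts = np.zeros((len(a) - 1, len(b) - 1))
    np.add.at(counts, (x, y), 1)
    best_rho, best_ll = 0.0, -np.inf
    for rho in np.linspace(-0.95, 0.95, n_grid):
        mvn = multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])
        F = np.array([[mvn.cdf([u, v]) for v in b] for u in a])
        # cell probabilities by inclusion-exclusion on the bivariate CDF
        cell = F[1:, 1:] - F[:-1, 1:] - F[1:, :-1] + F[:-1, :-1]
        ll = (counts * np.log(np.clip(cell, 1e-12, None))).sum()
        if ll > best_ll:
            best_rho, best_ll = rho, ll
    return best_rho

# Simulated latent bivariate normal with rho = 0.6, cut into four categories
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], 1500)
x = np.digitize(z[:, 0], [-0.8, 0.0, 0.8])
y = np.digitize(z[:, 1], [-0.8, 0.0, 0.8])

rho_hat = polychoric_grid(x, y)
print(f"polychoric rho estimate: {rho_hat:.2f}")
```

Despite the coarse discretization, the estimator recovers a value near the latent correlation, which is why polychoric correlations are the usual input to factor analysis of ordinal items.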

Finally, we would like to highlight a limitation related to the application of the post-tests. We allowed participants a flexible margin of 10 days to answer the post-tests. This was done because not all participants could answer the post-tests in a timely manner, and we did not want to put additional pressure on them. However, this difference in the dates of the post-tests could have generated a variability that may have minimally biased the test-retest reliability coefficients. If this strategy is used in future research, we recommend analyzing the effects of this date-related variability, although, the window being only 10 days, we believe its effects on the results will have been minimal. In addition, we suggest that future applications take into account the mode of administration of this assessment test. It is likely that there is also variability regarding the format of application of the MMSI-2 (i.e., paper-and-pencil or digital format). In this research, the applications were exclusively digital. Thus, it would also be useful to analyze whether there are differences between responses collected conventionally (paper-and-pencil) and those obtained online.


The Multivariable Multiaxial Suggestibility Inventory-2 (MMSI-2) presents a valid internal structure formed by five factors: Clinical Personality Tendencies (CPT), Anomalous Perceived Phenomena (APP), Incoherent Manipulations (IMA), Altered States of Consciousness (ASC), and Openness (OP). This MMSI-2 model is called empirical-statistical for two reasons: (1) because both the scales and the factors were extracted using statistical-factorial techniques, and (2) because the scores of the scales and factors represent empirical markers of behavior that allow us to correlate and predict anomalous phenomena, including “psi” phenomena (APP). The CPT, IMA, and ASC factors are correlated and explain 18.3% of the variance of the APP macrofactor, so the greater part of the variance of anomalous phenomena remains to be explained. This result contrasts with the Spanish version of the MMSI-2, in which these same factors predict anomalous phenomena with a weight of 51.2% of the variance. It is concluded that the low explained variance obtained in this research occurred because the subjects of the sample were not believers in the existence of the paranormal. This could affect the covariation between the factors, causing some of them to behave more independently in statistical terms.

The MMSI-2 offers reliable and stable scores over time, whose longitudinal consistency is guaranteed for at least 160 days after the first application. The reliability relative to the internal consistency of the scores belonging to the macrofactors was not very high and for this reason should be reviewed in future research.

The empirical-statistical model should be analyzed again to review the predictive value of the factors CPT, ASC, and IMA on APP. However, this research offers results that prove the validity and reliability of the MMSI-2 in the English population and support the relationship between APP and the other factors.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics Statement

The studies involving human participants were reviewed and approved by The Committee of Ethical Guarantees of Ramon Llull University. The patients/participants provided their written informed consent to participate in this study.

Author Contributions

ÁE-G conceived and planned the study, collected the sample, performed the statistical analyses, and wrote the manuscript in consultation with ND. JG supervised the project. All authors contributed to the article and approved the submitted version.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.


The authors of this research would like to thank CIE Consulting, Inc. for the logistical organization of this project, the organization of travel and stays in universities or colleges in the United Kingdom, and the facilitation of access to the English sample. Sister Ángeles Marín (from San Francisco, CA), Dr. Mary O'Neill (from Dublin, Ireland), and Yvonne Barrientos (from Sacramento, CA) deserve special mention for their correction and revision of the MMSI-2 items in both the American and the English translations.


Abad, F. J., Olea, J., Ponsoda, V., and García, C. (2015). Medición en ciencias sociales y de la salud [Measurement in social and health sciences]. Editorial Síntesis.

Álvarez, J. C. (2007). La parapsicología: !Vaya timo! [Parapsychology: What a Scam!]. Navarra: Laetoli, S.L.

American Psychiatric Association (2013). Diagnostic and Statistical Manual of Mental Disorders (DSM-5), 5th Edn. Washington, DC: American Psychiatric Association. doi: 10.1176/appi.books.9780890425596


Arribas, D. (2011). Psychometric properties of the TEA personality test: evidence of reliability and construct validity. Eur. J. Psychol. Assess. 27, 121–126. doi: 10.1027/1015-5759/a000057


Barberia, I., Blanco, F., Cubillas, C. P., and Matute, H. (2013). Implementation and assessment of an intervention to debias adolescents against causal illusions. PLoS ONE 8:e71303. doi: 10.1371/journal.pone.0071303


Barberia, I., Tubau, E., Matute, H., and Rodríguez-Ferreiro, J. (2018). A short educational intervention diminishes causal illusions and specific paranormal beliefs in undergraduates. PLoS ONE 13:e0191907. doi: 10.1371/journal.pone.0191907


Barbero, M. I., Vila-Abad, E., and Holgado, F. (2015). Psicometría [Psychometrics]. Editorial Sanz y Torres.

Beischel, J., Boccuzzi, M., Biuso, M., and Rock, A. (2015). Anomalous information reception by research mediums under blinded conditions II: replication and extension. Explore 11, 136–142. doi: 10.1016/j.explore.2015.01.001


Beischel, J., and Schwartz, G. E. (2007). Anomalous information reception by research mediums demonstrated using a novel triple-blind protocol. Explore 3, 23–27. doi: 10.1016/j.explore.2006.10.004


Bell, V., Halligan, P. W., and Ellis, H. D. (2006). The Cardiff Anomalous Perceptions Scale (CAPS): a new validated measure of anomalous perceptual experience. Schizophr. Bull. 32, 366–377. doi: 10.1093/schbul/sbj014


Belloch, A., Baños, R. M., and Perpiñá, C. (1995). “Psicopatología de la percepción y la imaginación [Psychopathology of perception and imagination],” in Manual de psicopatología: volumen I [Handbook of Psychopathology: volume I], eds A. Belloch, B. Sandín, and F. Ramos (Columbus, OH: McGraw-Hill), 188–230.

Bem, D. J. (2011). Feeling the future: experimental evidence for anomalous retroactive influences on cognition and affect. J. Pers. Soc. Psychol. 100, 407–425. doi: 10.1037/a0021524


Bem, D. J., Tressoldi, P., Rabeyron, T., and Duggan, M. (2016). Feeling the future: a meta-analysis of 90 experiments on the anomalous anticipation of random future events. F1000Research 4:1188. doi: 10.12688/f1000research.7177.2


Bobrow, R. S. (2003). Paranormal phenomena in the medical literature sufficient smoke to warrant a search for fire. Med. Hypotheses 60, 864–868. doi: 10.1016/S0306-9877(03)00066-5


Brown, T. A. (2015). Confirmatory Factor Analysis for Applied Research. New York, NY: The Guilford Press.


Bunge, M. (2013). La ciencia, su método y su filosofía [Science, its method and philosophy]. Laetoli, S.L.

Cardeña, E. (2018). The experimental evidence for parapsychological phenomena: a review. Am. Psychol. 73, 663–677. doi: 10.1037/amp0000236


Cardeña, E. (2019). “The data are irrelevant”: response to Reber and Alcock (2019). J. Sci. Explor. 33, 593–598. doi: 10.31275/2019/1653


Carter, C. (2012). Science and Psychic Phenomena: The Fall of the House of Skeptics. Rochester: Inner Traditions.


Cattell, H. B. (1946). The Description and Measurement of Personality. San Diego, CA: Harcourt, Brace and World.


Chapman, L. J., Chapman, J. P., and Raulin, M. L. (1978). Body-image aberration in schizophrenia. J. Abnorm. Psychol. 87, 399–407. doi: 10.1037/0021-843X.87.4.399


Costa, P. T., and McCrae, R. R. (2008). Inventario de Personalidad Neo Revisado e Inventario Neo Reducido de Cinco Factores (manual profesional) [Neo Revised Personality Inventory and Neo Reduced Five Factor Inventory (professional manual)]. TEA Ediciones, S.A.U.


Drinkwater, K., Dagnall, N., Grogan, S., and Riley, V. (2017). Understanding the unknown: a thematic analysis of subjective paranormal experiences. Austr. J. Parapsychol. 17, 23–46.


El-Mallakh, R., and Walker, K. L. (2010). Hallucinations, pseudohallucinations, and parahallucinations. Psychiatry 73, 34–42. doi: 10.1521/psyc.2010.73.1.34


Escolà-Gascón, Á. (2020a). Researching unexplained phenomena: empirical-statistical validity and reliability of the Multivariable Multiaxial Suggestibility Inventory-2 (MMSI-2). Heliyon 6:e04291. doi: 10.1016/j.heliyon.2020.e04291


Escolà-Gascón, Á. (2020b). Researching unexplained phenomena II: new evidences for anomalous experiences supported by the Multivariable Multiaxial Suggestibility Inventory-2 (MMSI-2). Curr. Res. Behav. Sci. 1:100005. doi: 10.1016/j.crbeha.2020.100005


Escolà-Gascón, Á. (2021). New techniques to measure lie detection using COVID-19 fake news and the Multivariable Multiaxial Suggestibility Inventory-2 (MMSI-2). Comput. Hum. Behav. Rep. 3:100049. doi: 10.1016/j.chbr.2020.100049


Escolà-Gascón, Á., and Gallifa, J. (2020). Psychology of Anomalous Experiences: psychometric properties of the Multivariable Multiaxial Suggestibility Inventory−2 Reduced (MMSI-2-R). Anu. de Psicol. 50, 115–126. Available online at: (accessed January 13, 2021).

Ey, H., Bernard, P., and Brisset, C. H. (1980). Tratado de Psiquiatría [Psychiatry Treaty]. Toray-Masson, S.A.

Fekih-Romdhane, F., Sassi, H., Ennaifer, S., Tira, S., and Cheour, M. (2020). Prevalence and correlates of psychotic like experiences in a large community sample of young adults in Tunisia. Community Ment. Health J. 56, 991–1003. doi: 10.1007/s10597-019-00542-1


Feldman, F. (2019). “The Naturalistic Fallacy: what it is, and what it isn't,” in The Naturalistic Fallacy, ed N. Sinclair (Cambridge: Cambridge University Press), 30–53.

Fishbein, M., and Ajzen, I. (1975). Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research. Boston, MA: Addison-Wesley.


Fonseca-Pedrero, E., Lemos-Giráldez, S., Paino, M., Sierra-Baigrie, S., Santarén-Rosell, M., and Muñiz, J. (2011). Internal structure and reliability of the oviedo schizotypy assessment questionnaire (ESQUIZO-Q). Int. J. Clin. Health Psychol. 11, 385–401. Available online at: (accessed January 16, 2021).


Font, J. (2016). Religión, psicopatología y salud mental: introducción a la psicología de las experiencias religiosas y de las creencias [Religion, psychopathology and mental health: introduction to the psychology of religious experiences and beliefs]. Ediciones Paidós y Fundació Vidal i Barraquer.


French, C. C., and Stone, A. (2014). Anomalistic Psychology: Exploring Paranormal Belief and Experience. London: Red Globe Press, Inc.


Gallagher, C., Kumar, V. K., and Pekala, R. J. (1994). The anomalous experiences inventory: reliability and validity. J. Parapsychol. 58, 402–428. Available online at: (accessed January 14, 2021).


Goldberg, L. R. (1993). The structure of phenotypic personality traits. Am. Psychol. 48, 26–34. doi: 10.1037/0003-066X.48.1.26


Gorsuch, R. L. (1983). Factor Analysis. Hillsdale, NJ: Lawrence Erlbaum.


Griffiths, O., Shehabi, N., Murphy, R., and Le Pelley, M. (2018). Superstition predicts perception of illusory control. Br. J. Psychol. 110, 499–518. doi: 10.1111/bjop.12344


Groth-Marnat, G. (2009). Handbook of Psychological Assessment. Hoboken, NJ: John Wiley and Sons, Ltd.


Groth-Marnat, G., and Pegden, J. (1998). Personality correlates of paranormal belief: locus of control and sensation seeking. Soc. Behav. Pers. 26, 291–296. doi: 10.2224/sbp.1998.26.3.291


Hair, J. F. Jr, Anderson, R. E., Tatham, R. L., and Black, W. C. (2010). Multivariate Data Analysis. Upper Saddle River, NJ: Pearson Education (Prentice-Hall), Inc.


Heise, D. R., and Bohrnstedt, G. W. (1970). “Validity, invalidity, and reliability,” in Sociological Methodology, eds E. F. Borgatta and G. W. Bohrnstedt (San Francisco, CA: Jossey-Bass, Inc), 104–129. doi: 10.2307/270785


Honorton, C. (1985). Meta-analysis of psi ganzfeld research: a response to Hyman. J. Parapsychol. 49, 51–91. Available online at: (accessed January 22, 2021).


Irwin, H. J. (1993). Belief in the paranormal: a review of the empirical literature. J. Am. Soc. Psych. Res. 87, 1–39. Available online at: (accessed December 20, 2020).


Irwin, H. J. (2009). The Psychology of Paranormal Belief. Hertfordshire: University of Hertfordshire Press.

Irwin, H. J. (2017). An assessment of the worldview theory of belief in the paranormal. Austr. J. Parapsychol. 17, 7–21.

Irwin, H. J., Dagnall, N., and Drinkwater, K. (2013). Parapsychological experience as anomalous experience plus paranormal attribution: a questionnaire based on a new approach to measurement. J. Parapsychol. 77, 39–53. doi: 10.1037/t31377-000

Irwin, H. J., and Marks, A. D. G. (2018). Belief in the paranormal: a state, or a trait? J. Parapsychol. 82, 24–40. doi: 10.30891/jopar.2018.01.03

Jaspers, K. (1993). Psicopatología General [General Psychopathology]. Editorial Beta.

Jinks, T. (2019). Psychological Perspectives on Reality, Consciousness and Paranormal Experience. Cham: Springer Nature, Inc. doi: 10.1007/978-3-030-28902-7

Johns, L. C., and van Os, J. (2001). The continuity of psychotic experiences in the general population. Clin. Psychol. Rev. 21, 1125–1141. doi: 10.1016/S0272-7358(01)00103-9

Karson, M., Karson, S., and O'Dell, J. (2003). 16PF-5: Una guía para su interpretación en la práctica clínica [16PF Interpretation in Clinical Practice: A Guide to the Fifth Edition]. TEA Ediciones, S.A.U.

Kelly, E. W., and Arcangel, D. (2011). An investigation of mediums who claim to give information about deceased persons. J. Nerv. Ment. Dis. 199, 11–17. doi: 10.1097/NMD.0b013e31820439da

Kelly, R., Shoulder, C., and Bell, V. (2020). “Assessment in psychosis,” in A Clinical Introduction to Psychosis: Foundations for Clinical Psychologists and Neuropsychologists, eds J. Badcock and G. Paulik (London: Academic Press), 135–152. doi: 10.1016/B978-0-12-815012-2.00006-7

Kline, P. (1999). Handbook of Psychological Testing. New York, NY: Routledge, Taylor and Francis Group.

Krippner, S., and Ullman, M. (1970). Telepathy and dreams: a controlled experiment with electroencephalogram-electro-oculogram monitoring. J. Nerv. Ment. Dis. 151, 394–403. doi: 10.1097/00005053-197012000-00004

Kuhn, G., Olson, J., and Raz, A. (2016). Editorial: the psychology of magic and the magic of psychology. Front. Psychol. 7:1358. doi: 10.3389/fpsyg.2016.01358

Lange, R., Irwin, H. J., and Houran, J. (2000). Top-down purification of Tobacyk's Revised Paranormal Belief Scale. Pers. Ind. Diff. 29, 131–156. doi: 10.1016/S0191-8869(99)00183-X

Lange, R., Ross, R. M., Dagnall, N., Irwin, H. J., Houran, J., and Drinkwater, K. (2019). Anomalous experiences and paranormal attributions: psychometric challenges in studying their measurement and relationship. Psychol. Conscious. 6, 346–358. doi: 10.1037/cns0000187

Lee, S., Kravitz, D., and Baker, C. (2018). Differential representations of perceived and retrieved visual information in hippocampus and cortex. Cereb. Cortex 29, 4452–4461. doi: 10.1093/cercor/bhy325

León, O. G., and Montero, I. (2002). Métodos de investigación en Psicología y Educación [Research methods in Psychology and Education]. Columbus, OH: McGraw-Hill.

Mason, O., and Claridge, G. (2006). The Oxford-Liverpool Inventory of Feelings and Experiences (O-LIFE): further description and extended norms. Schizophr. Res. 82, 203–211. doi: 10.1016/j.schres.2005.12.845

Matute, H., Blanco, F., Yarritu, I., Díaz-Lago, M., Vadillo, M., and Barberia, I. (2015). Illusions of causality: how they bias our everyday thinking and how they could be reduced. Front. Psychol. 6:888. doi: 10.3389/fpsyg.2015.00888

Matute, H., Yarritu, I., and Vadillo, M. (2011). Illusions of causality at the heart of pseudoscience. Br. J. Psychol. 102, 392–405. doi: 10.1348/000712610X532210

McCraty, R., and Atkinson, M. (2014). Electrophysiology of intuition: pre-stimulus responses in group and individual participants using a Roulette Paradigm. Glob. Adv. Health Med. 3, 16–27. doi: 10.7453/gahmj.2014.014

McDonald, R. P. (1999). Factor Analysis and Related Methods. Hillsdale, NJ: Erlbaum.

Moreira-Almeida, A., de Almeida, A., and Neto, F. (2005). History of ‘Spiritist madness' in Brazil. Hist. Psychiatry 16, 5–25. doi: 10.1177/0957154X05044602

Moss, T., and Gengerelli, J. A. (1967). Telepathy and emotional stimuli: a controlled experiment. J. Abnorm. Psychol. 72, 341–348. doi: 10.1037/h0024760

Mossbridge, J., Tressoldi, P., and Utts, J. (2012). Predictive physiological anticipation preceding seemingly unpredictable stimuli: a meta-analysis. Front. Psychol. 3:390. doi: 10.3389/fpsyg.2012.00390

Mossbridge, J. A., and Radin, D. (2018). Precognition as a form of prospection: a review of the evidence. Psychol. Conscious. 5, 78–93. doi: 10.1037/cns0000121

Mulaik, S. A. (2018). “Fundamentals of common factor analysis,” in The Wiley Handbook of Psychometric Testing: A Multidisciplinary Reference on Survey, Scale and Test Development, eds P. Irwing, T. Booth, and D. J. Hughes (West Sussex: John Wiley and Sons, Ltd), 211–252. doi: 10.1002/9781118489772.ch8

Musella, D. (2005). Gallup poll shows that Americans' belief in the paranormal persists. Skeptical Inq. 29:5.

Nordgaard, J., Buch-Pedersen, M., Hastrup, L., Haahr, U., and Simonsen, E. (2019). Measuring psychotic-like experiences in the general population. Psychopathology 52, 240–247. doi: 10.1159/000502048

Pardo, A., and Román, M. (2013). Reflections on the Baron and Kenny model of statistical mediation. An. de Psicol. 29, 614–623. doi: 10.6018/analesps.29.2.139241

Parker, A. (2006). “Experiencias Paranormales: ‘Normalidad o anormalidad' [Paranormal experiences: normality or abnormality?],” in Psicología de las experiencias paranormales [Psychology of Paranormal Experiences], ed A. Parra (Buenos Aires: Librería Akadia Editorial), 17–33.

Pasricha, S. (2011). Relevance of parapsychology in psychiatric practice. Indian J. Psychiatry 53, 4–8. doi: 10.4103/0019-5545.75544

Pennycook, G., Cheyne, J., Seli, P., Koehler, D., and Fugelsang, J. (2012). Analytic cognitive style predicts religious and paranormal belief. Cognition 123, 335–346. doi: 10.1016/j.cognition.2012.03.003

Peters, E. R., Joseph, S. A., Day, S., and Garety, P. A. (2004). Measuring delusional ideation: the 21-item Peters et al. Delusions Inventory (PDI). Schizophr. Bull. 30, 1005–1022. doi: 10.1093/oxfordjournals.schbul.a007116

Popper, K. R. (2008). La lógica de la investigación científica [The Logic of Scientific Discovery], 2nd Edn. Madrid: Editorial Tecnos.

R Core Team (2018). R: A Language and Environment for Statistical Computing [Computer software]. Available online at: (accessed December 23, 2020).

Radin, D. (2006). Assessing the evidence for mind-matter interaction effects. J. Sci. Explor. 20, 361–374.

Reber, A. S., and Alcock, J. E. (2020). Searching for the impossible: parapsychology's elusive quest. Am. Psychol. 75, 391–399. doi: 10.1037/amp0000486

Reise, S. P., Waller, N. G., and Comrey, A. L. (2000). Factor analysis and scale revision. Psychol. Assess. 12, 287–297. doi: 10.1037/1040-3590.12.3.287

Schriever, F. (2000). Are there different cognitive structures behind paranormal beliefs? Eur. J. Parapsychol. 15, 46–67.

Shapiro, D. I., Li, H., Kline, E. R., and Niznikiewicz, M. A. (2019). “Assessment of risk for psychosis,” in Handbook of Attenuated Psychosis Syndrome Across Cultures, eds H. Li, D. Shapiro, and L. Seidman (Cham: Springer, Inc), 7–40. doi: 10.1007/978-3-030-17336-4_2

Sheldrake, R., and Avraamides, L. S. (2009). An automated test for telepathy in connection with Emails. J. Sci. Explor. 23, 29–36.

Shermer, M. (2011). The Believing Brain: From Ghosts and Gods to Politics and Conspiracies – How We Construct Beliefs and Reinforce Them as Truths. New York, NY: Times Books.

Simmonds-Moore, C. (2016). An interpretative phenomenological analysis exploring synesthesia as an exceptional experience: insights for consciousness and cognition. Qual. Res. Psychol. 13, 303–327. doi: 10.1080/14780887.2016.1205693

Simmonds-Moore, C. A., Alvarado, C. S., and Zingrone, N. L. (2019). A survey exploring synesthetic experiences: exceptional experiences, schizotypy, and psychological well-being. Psychol. Conscious. 6, 99–121. doi: 10.1037/cns0000165

Stefanis, N. C., Hanssen, M., Smirnis, N. K., Avramopoulos, D. A., Evdokimidis, I. K., Stefanis, C. N., et al. (2002). Evidence that three dimensions of psychosis have a distribution in the general population. Psychol. Med. 32, 347–358. doi: 10.1017/S0033291701005141

Storm, L., Tressoldi, P. E., and DiRisio, L. (2010). Meta-analysis of free-response studies, 1992-2008: assessing the noise reduction model in parapsychology. Psychol. Bull. 136, 471–485. doi: 10.1037/a0019457

Storm, L., Tressoldi, P. E., and Utts, J. (2013). Testing the Storm et al. (2010) meta-analysis using Bayesian and frequentist approaches: reply to Rouder et al. (2013). Psychol. Bull. 139, 248–254. doi: 10.1037/a0029506

Sudduth, M. (2013). “Is postmortem survival the best explanation of the data of mediumship?,” in Survival Hypothesis: Essays on Mediumship, ed A. J. Rock (Jefferson, NC: McFarland and Company, Inc.), 40–64.

The Jamovi Project (2019). Jamovi (Version 1.0) [Computer Software]. Available online at: (accessed November 15, 2020).

Tobacyk, J. J. (2004). A revised paranormal belief scale. Int. J. Transpers. Stud. 23, 94–98. doi: 10.24972/ijts.2004.23.1.94

Torres, M., Barberia, I., and Rodríguez-Ferreiro, J. (2020). Causal illusion as a cognitive basis of pseudoscientific beliefs. Br. J. Psychol. 111, 840–852. doi: 10.1111/bjop.12441

Tressoldi, P. E., Martinelli, M., Zaccaria, E., and Massaccessi, S. (2009). Implicit intuition: how heart rate can contribute to predict future events. J. Soc. Psych. Res. 73, 1–16.

Tressoldi, P. E., Pederzoli, L., Caini, P., Ferrini, A., Melloni, S., Richeldi, D., et al. (2014). Mind-matter interaction at a distance of 190 km: effects on a random event generator using a cutoff method. Neuroquantology 3, 337–343. doi: 10.14704/nq.2014.12.3.767

Trizano-Hermosilla, I., and Alvarado, J. M. (2016). Best alternatives to Cronbach's alpha reliability in realistic conditions: congeneric and asymmetrical measurements. Front. Psychol. 7:769. doi: 10.3389/fpsyg.2016.00769

Utts, J. (2018). An assessment of the evidence for psychic functioning. J. Parapsychol. 82, 118–146. doi: 10.30891/jopar.2018S.01.10

van Os, J., Linscott, R. J., Myin-Germeys, I., Delespaul, P., and Krabbendam, L. A. (2009). A systematic review and meta-analysis of the psychosis continuum: evidence for a psychosis proneness-persistence-impairment model of psychotic disorder. Psychol. Med. 39, 179–195. doi: 10.1017/S0033291708003814

Wahbeh, H., Yount, G., Vieten, C., Radin, D., and Delorme, A. (2019). The noetic experience and belief scale: a validation and reliability study. F1000Research 8:1741. doi: 10.12688/f1000research.20409.1

Wain, O., and Spinella, M. (2007). Executive functions in morality, religion, and paranormal beliefs. Int. J. Neurosci. 117, 135–146. doi: 10.1080/00207450500534068

Wiseman, R., and Watt, C. (2017). Parapsychology. New York, NY: Routledge, Taylor and Francis Group.

Wright, A., Nelson, B., Fowler, D., and Greenwood, K. (2020). Perceptual biases and metacognition and their association with anomalous self experiences in first episode psychosis. Conscious. Cogn. 77:102847. doi: 10.1016/j.concog.2019.102847

Keywords: delusions, anomalous perceptions, anomalous phenomena, structural equation modeling, paranormal beliefs

Citation: Escolà-Gascón Á, Dagnall N and Gallifa J (2021) The Multivariable Multiaxial Suggestibility Inventory-2 (MMSI-2): A Psychometric Alternative to Measure and Explain Supernatural Experiences. Front. Psychol. 12:692194. doi: 10.3389/fpsyg.2021.692194

Received: 07 April 2021; Accepted: 23 June 2021;
Published: 16 July 2021.

Edited by:

Sara Giovagnoli, University of Bologna, Italy

Reviewed by:

Suleyman Cakiroglu, Istanbul Medeniyet University, Turkey
Andrea Svicher, University of Florence, Italy

Copyright © 2021 Escolà-Gascón, Dagnall and Gallifa. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Álex Escolà-Gascón,