PERSPECTIVE article

Front. Aging Neurosci., 10 June 2021
Sec. Neurocognitive Aging and Behavior
Volume 13 - 2021 | https://doi.org/10.3389/fnagi.2021.648310

The Role of Brief Global Cognitive Tests and Neuropsychological Expertise in the Detection and Differential Diagnosis of Dementia

  • Department of Psychology and Cognitive Science, University of Trento, Trento, Italy

Dementia is a global public health problem and its impact is bound to increase in the next decades, with a rapidly aging world population. Dementia is by no means an obligatory outcome of aging, although its incidence increases exponentially in old age, and its onset may be insidious. In the absence of unequivocal biomarkers, the accuracy of cognitive profiling plays a fundamental role in the diagnosis of this condition. In this Perspective article, we highlight the utility of brief global cognitive tests in the diagnostic process, from the initial detection stage for which they are designed, through the differential diagnosis of dementia. We also argue that neuropsychological training and expertise are critical in order for the information gathered from these omnibus cognitive tests to be used in an efficient and effective way, and thus, ultimately, for them to fulfill their potential.

Introduction

With age comes wisdom, sometimes, but psychophysical decline always does. A decrease in the efficiency of cognitive functioning is among the typical age-related changes (Singh-Manoux et al., 2012). When such decline is so significant as to markedly interfere with social, occupational or domestic functioning, it is considered pathological and is referred to as dementia.

In common parlance, dementia has somehow become synonymous with Alzheimer’s Disease (AD) and with its most notorious symptom, that is, memory loss. However, according to the most recent version of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5; American Psychiatric Association, 2013), a memory impairment is no longer necessary to diagnose dementia. A diagnosis of Major Neurocognitive Disorder (i.e., the term that replaced dementia in the DSM-5) requires evidence of significant cognitive decline from a previous level of performance in one or more cognitive domains, which may or may not include memory, as well as interference of the cognitive deficit(s) with independence in daily activities, lack of exclusive occurrence in the context of a delirium, and lack of a better explanation based on another mental disorder. Similarly, the less severe form of cognitive impairment with no major repercussions on daily life, known as mild cognitive impairment (MCI, or Mild Neurocognitive Disorder in the DSM-5), can present with deficits that involve only (or predominantly) memory (i.e., amnestic MCI), but can also involve one or more other cognitive domains, either with or without memory impairment. Indeed, besides memory loss and memory-related deficits (e.g., misplacing things, difficulties in keeping track of things, and confusion about time and place), common signs and symptoms of dementia also include receptive and productive language problems, difficulties in solving problems and carrying out familiar daily tasks, difficulties in planning and organizing, changes in judgment or decision-making, difficulties in understanding visual images and spatial relationships, difficulties in perceptual-motor coordination, inappropriate behavior and changes in mood or personality, and withdrawal from work or social activities (e.g., Alzheimer’s Disease International, n.d.).

In some cases, the aging population is screened for detection of these signs of dementia (cf., Foster et al., 2019). However, in most cases, it is the aging individual’s own awareness of a subjective decrease in cognitive functioning (leading to increased difficulty or inability to cope with daily activities) or the detection of impoverished performance by a close observer that triggers an initial examination by a general practitioner (Moore et al., 2018). The general practitioner will then decide whether to proceed with further examinations and/or refer the patient for a more in-depth examination to a specialist practitioner, who will reach a definite diagnosis based on history, examination and objective assessments (Hugo and Ganguli, 2014; Falk et al., 2018).

Brief Global Cognitive Tests

Tools used in the diagnostic process of dementia include informant questionnaires on both cognitive decline and autonomy in activities of daily living, along with tests of cognitive function providing an objective assessment of cognitive impairment. Typically, the latter are brief global cognitive function tests, that is, paper-and-pencil neuropsychological tests with short administration times aimed at assessing an individual’s general mental status (Arevalo-Rodriguez et al., 2015). According to recent reviews of the literature (Hwang et al., 2019; Razak et al., 2019), the most popular global cognitive function tests used to this end are: the Mini Mental State Examination (MMSE), the Mini Cog Test, the Montreal Cognitive Assessment (MoCA), the General Practitioner Assessment of Cognition (GPCOG), and the Clock Drawing Test (CDT; see Table 1).


Table 1. Global cognitive tests used in the diagnosis of dementia: sub-scores, items and specific skills or functions required for their correct execution (General skill: Verbal auditory comprehension and working memory).

These brief, global cognitive tests are often defined as cognitive screening tools as they are typically used at a population screening stage to detect potential cognitive impairment that may raise the suspicion of dementia. In many handbooks for mental health practitioners, they are indeed presented as satisfying the requirements for optimal screening instruments, which include brief duration, good test-retest and inter-rater reliability, sampling of all major cognitive domains, and, last but not least, ease of both administration and interpretation. It is generally claimed that any clinician, or health care professional, with any level of training, should be able to administer these tests and interpret their results, thanks to their clear cutoffs (Malloy et al., 1997; Larner, 2017), and they are indeed frequently administered and interpreted by non-specialist professionals (see, e.g., Arevalo-Rodriguez et al., 2015; Palm et al., 2016).

Although designed as screening tools, these global cognitive tests are often also used at later diagnostic stages—in practice, most of the evidence about a person’s cognitive status is often gathered through these tests in all phases of the diagnostic process. However, clinical practice guidelines for dementia assessment warn examiners against such use of these tests and recommend that a comprehensive cognitive evaluation of the testee, including specific tests assessing different cognitive functions, be performed in order to reach a diagnosis (cf., Di Pucchio et al., 2018).

Here we present our perspective on the topic and argue that, even though a detailed neuropsychological assessment, coupled with a thorough clinical evaluation, remains the gold standard for an accurate diagnosis, global cognitive tests can provide specific information and be very useful not only at the first, screening, stage of the diagnostic process, but also at the following stages, in which more refined information about a testee’s cognitive status is required. In our view, the training and neuropsychological expertise1 of the health care professional who uses these tests is what makes the difference. Results obtained through these tests can be extremely informative when read by professionals with adequate knowledge of the cognitive functions that the tests aim to assess. In contrast with what is usually held (i.e., that very little training is required for the use of these tests in the first, preliminary stages of the diagnostic process), we maintain that, in all the diagnostic phases in which they are used, they should be used by professionals with in-depth neuropsychological expertise. Such expertise is critical for all the aspects connected with the use of these tests: the selection of the most appropriate test according to the purpose of the assessment (e.g., screening, diagnostic confirmation, differential diagnosis, and monitoring of symptom evolution), the setting (e.g., primary or secondary care) in which testing is undertaken and the population from which the testee is drawn, as well as aspects related to test administration, scoring, and both the interpretation and communication of results.

The Role of Global Cognitive Tests and Neuropsychological Expertise in the Diagnostic Process

Like any diagnostic process, the diagnosis of dementia can be divided into two main stages: detection of dementia (which ends with the confirmation of the initial diagnostic suspicion) and etiological/differential diagnosis. On the whole, this process is characterized by the use of different methods of investigation (e.g., colloquia, cognitive tests, and laboratory exams) and may extend over a long time window, spanning from the early detection of potential symptoms to the monitoring of symptom evolution and identification of etiological components.

Detection of Dementia

Dementia may be inadequately recognized in primary care settings and it often goes undetected, at least when the symptoms are mild (Lang et al., 2017; Hwang et al., 2019). There are many reasons why elderly people and people who are close to them, or even the family physician, may not notice initial symptoms of dementia or may not judge such symptoms as needing assessment – for example, poor understanding of the difference between the memory decline due to normal aging and that observed in dementia (Ashford et al., 2007).

In light of this, several national and international consensus groups have promoted the routine screening of at-risk populations (e.g., people older than 65; Hwang et al., 2019). However, the need for an early diagnosis and early screening at the population level is not universally acknowledged (e.g., Chambers et al., 2017) and some panels of experts actually recommend against screening individuals when the individuals themselves, or people close to them, do not express concerns about the presence of cognitive impairment (e.g., Lin et al., 2013).

Advocates of routine screening maintain that most of the criteria endorsed by the World Health Organization to trigger disease screening (Wilson and Jungner, 1968) are fulfilled by dementia and that missed or delayed diagnoses may lead to lost treatment opportunities and increase both patient and caregiver burden (e.g., Bradford et al., 2009). In contrast, opponents argue that it has yet to be demonstrated that any of the available treatments for the most common subtypes of dementia (AD, in the first place) are more beneficial when applied in a pre-symptomatic phase than at later (symptomatic) stages (e.g., Larner, 2017). Moreover, routine screening comes with potential harms, which include not only economic issues (i.e., financial costs associated with screening), but also a higher probability of misdiagnosis. Highly sensitive tests, such as those recommended for screening purposes, carry a high risk of false positive diagnoses. According to the Canadian Task Force on Preventive Health Care (Pottie et al., 2016), for example, one in every eight to ten individuals without cognitive impairment who are screened for dementia and MCI is incorrectly classified by the MMSE. In turn, these misdiagnoses entail psychological costs: a positive result on a screening test can be mistaken for a diagnosis of dementia, which would, at best, generate anxiety in the positively screened individuals and their relatives, but could also trigger depression, loss of status, loss of employment, stigmatization, institutionalization, and loss of independence (e.g., the individual may stop driving or making independent financial and healthcare decisions; Chambers et al., 2017).
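
To make the false-positive concern concrete, the sketch below applies Bayes’ rule to estimate the probability that a positive screen reflects true impairment. It is a simplified illustration: the sensitivity, specificity, and prevalence values are hypothetical placeholders (the specificity is loosely inspired by the "one in eight to ten" misclassification figure above), not estimates taken from the cited studies.

    # Illustrative only: positive predictive value (PPV) of a dementia/MCI screen.
    # Sensitivity, specificity and prevalence values are hypothetical placeholders.

    def positive_predictive_value(sensitivity, specificity, prevalence):
        """Bayes' rule: P(impairment | positive screen)."""
        true_positives = sensitivity * prevalence
        false_positives = (1 - specificity) * (1 - prevalence)
        return true_positives / (true_positives + false_positives)

    # A highly sensitive screen with specificity around 0.88 (roughly "one in eight"
    # unimpaired individuals misclassified), applied to populations with different
    # base rates of dementia/MCI (e.g., community screening vs. a memory clinic).
    for prevalence in (0.05, 0.15, 0.40):
        ppv = positive_predictive_value(sensitivity=0.90, specificity=0.88,
                                        prevalence=prevalence)
        print(f"base rate {prevalence:.0%}: P(impairment | positive screen) = {ppv:.0%}")

With a low community base rate, most positive screens in this toy example are false positives, which is why the result of a screening test must be communicated as an indication for further assessment rather than as a diagnosis.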

These potential harms, however, could be avoided if screening occurs under the supervision of an expert who acknowledges the limitations of a positive outcome (i.e., that a positive dementia screen is not equivalent to a diagnosis), and who knows both the actual probability of dementia/MCI given a positive screening outcome and how to appropriately communicate the implications of such an outcome to the screened individuals and their families. Adequate communication (i.e., communication made by an expert) of the outcomes of the evaluation is actually needed even when such outcomes are negative. Data from memory/cognitive clinics specialized in the diagnosis of dementia show that around 50% of older people who go to their physician or directly to these clinics with complaints about cognitive impairments turn out not to have either dementia or MCI (e.g., Iliffe and Wilcock, 2017). Individuals with only subjective cognitive impairment need to be reassured and informed about the possible changes in cognition that occur with normal aging. A degree of (neuro)psychological expertise on the part of the professional who communicates the results is thus always highly desirable, whether the outcome is positive or negative.

Cognitive impairment in individuals with subjective complaints should obviously not be ruled out solely because a person obtained a normal score on a cognitive screening test. Indeed, regardless of what raises the suspicion of cognitive impairment (i.e., a positive result in a routine population screening, self- or other-referral), it normally triggers a first general, comprehensive evaluation aimed at investigating this suspicion and ruling out reversible forms of cognitive decline. Usually, at this (confirmatory) stage, the assessment of a testee’s general mental status is performed in non-specialist settings and with the same global cognitive tests as those used in routine population screening. As previously mentioned, such tests are usually described as easy-to-administer tools and thus they are seen as suitable for administration by general health professionals in typical primary, residential and acute care settings (i.e., to community-dwelling older adults or elders in nursing homes and hospitals; Hwang et al., 2019).

Very recently, however, attention has been drawn to the need for appropriate training in the administration of these apparently simple tests, following the decision taken by the creator and copyright owner of the MoCA, Dr. Ziad Nasreddine, to make it available only to certified raters (Nasreddine, n.d.). According to him, this move will reduce variability and ensure the highest accuracy: among the MoCA protocols he reviewed, he found many obvious testing inaccuracies (i.e., it was clear that the examiner had not followed the test instructions) or implausible test-retest variability (e.g., a change of as many as five points in the same individual over a few weeks; Aleccia, 2019).

Large measurement error and high inter-rater, intra-rater, and test-retest variability are indeed the most important limitations of this kind of test (Clark et al., 1999). Standardized administration protocols and related training may obviously reduce such errors and variability by increasing measurement accuracy (cf., e.g., Kaplan and Saccuzzo, 2005). Neuropsychological expertise can further improve measurement accuracy. It has been shown that individual scores on scales aimed at measuring cognitive impairment may differ dramatically when they are administered by non-specialist health care professionals rather than by specialists, even if the non-specialists have undergone specific training on the administration procedures (e.g., Fabrigoule et al., 2003). There might be several reasons for this. Professionals with neuropsychological expertise may know how to foster cooperation and maximal effort during testing, thanks to their experience in structured colloquia and neuropsychological test administration. Moreover, knowledge of the psychological constructs under investigation allows the examiner to make appropriate choices on how to manage the test administration in all its aspects, such as the choice of the most suitable place in which to administer the test and of the allowable adjustments to the administration protocol that may be implemented to deal with unexpected or non-standard conditions.

For example, the CDT is considered to be very easy to administer. Nevertheless, its administration requires knowledge of all cognitive functions involved in the execution of the test. The examiner should know that it aims to evaluate constructional and visuo-spatial skills but also several other cognitive functions, such as semantic-memory processes (e.g., those involved in recalling both the visual structure of a clock and the number symbols of the hours on the clock), and should acknowledge that the evaluation of these aspects is of great importance. In light of this, the examiner should check, for example, that the testing room does not contain any real-world model of a clock (e.g., wall clocks, pictures of a clock).

Similarly, an examiner administering the calculation item of the MMSE (see Table 1) should be fully aware that this item does not aim to evaluate basic mathematical skills but mainly taps attentional and working memory processes. Therefore, the examiner should avoid prompting testees who hesitate during the sequence of calculations by reminding them of the result of the previous subtraction in order to urge them to continue with the next one. Likewise, the read-and-execute-commands item does not only evaluate basic written language comprehension skills, but also prospective memory and high-level verbal ability: the testee has to remember to perform the action contained in the written command that will be presented right after the instructions, which thus have to be understood without knowing the specific content of the command to be executed.

Finally, in-depth knowledge of the cognitive functions being evaluated by a given test or test item may allow the examiner to make the right choice when slight adjustments of the administration protocol are needed because, for example, of a testee’s physical disability (e.g., vision or hearing impairments, which are not unusual in elderly people; cf. Turner et al., 2001).

In light of this, clinical guidelines recommend that whenever test administrators with high neuropsychological expertise are unavailable (e.g., when tests have to be administered by general health professionals), overall supervision by people with such expertise always be ensured (e.g., Ballard et al., 2015).

Such supervision appears all the more critical when test scoring, rather than administration, is concerned. Even very brief tests demand ad hoc training for scoring. For example, scoring the CDT is not intuitive. Correct performance on this test reflects the integrity of many interdependent cognitive functions, and only raters with both training and experience are aware of the many possible errors that a testee can make. Common mistakes of naïve CDT scorers include not taking into account errors in the spacing of numbers on the clock face (Lorentz et al., 2002) or possible switches between different numerical codes (e.g., from Arabic to Roman numerals). Such inconsistencies may appear trivial to the untrained scorer, but they actually signal problems in visual-spatial and executive functions (as do errors in hand placement), which manifest themselves in poor monitoring of the spatial relations between numbers and of the numerical format chosen when completing the number sequence on the clock face.

All neuropsychological tests, indeed, involve multiple cognitive skills (there is not a single test that can be considered a pure measure of a specific cognitive ability; Pruneti et al., 2018), and knowledge of the cognitive abilities that a test (or a test item) aims to evaluate is critical for the correct interpretation of an individual’s performance on that test. In the case of global cognitive tests, such knowledge can help the examiner interpret not only the total score, but also the pattern of performance across different items. For example, Pasqualetti et al. (2002) highlight that, by failing only the MMSE memory items, one would obtain an MMSE score of 27, which is usually classified as normal. However, such a failure could reflect a selective long-term memory impairment and signal a preclinical condition (i.e., an MCI of the amnestic type) that will evolve into dementia. A similar, apparently normal, score may thus call for further investigation.
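
As a toy illustration of this point, the sketch below flags a protocol in which the MMSE total clears the conventional cutoff but all of the lost points come from the delayed-recall items. The sub-score grouping and maxima are a simplified, hypothetical breakdown of the MMSE, not the official scoring form.

    # Toy illustration: a "normal" MMSE total can hide a selective memory failure.
    # The item grouping and maximum scores below are a simplified, hypothetical
    # breakdown of the MMSE, used only to show the pattern described in the text.

    MAX_SCORES = {"orientation": 10, "registration": 3, "attention_calculation": 5,
                  "delayed_recall": 3, "language_praxis": 9}  # total = 30

    def review_profile(sub_scores, cutoff=24):
        total = sum(sub_scores.values())
        if total < cutoff:
            return f"Total {total}/30: below cutoff, global impairment suspected."
        if sub_scores["delayed_recall"] == 0:
            return (f"Total {total}/30: above cutoff, but all delayed-recall points "
                    "were lost - possible amnestic MCI; further memory assessment "
                    "is warranted.")
        return f"Total {total}/30: within normal limits."

    # The Pasqualetti et al. (2002) scenario: only the three memory items are failed.
    print(review_profile({"orientation": 10, "registration": 3,
                          "attention_calculation": 5, "delayed_recall": 0,
                          "language_praxis": 9}))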

Test users should then be, at the very least, supervised by an expert when tests are scored and results are interpreted. Even more crucially, expert supervision is required when the appropriate cognitive tests have to be selected. Cognitive tests should be selected by people with in-depth knowledge of the psychometric properties of the tool to be used (i.e., its sensitivity, specificity, reliability, validity and predictive values). Only this kind of in-depth knowledge can lead to informed choices about which tools are the most appropriate for different settings, populations and purposes.

Indeed, the psychometric properties of a test are not fixed attributes but features of how it performs when applied to specific study samples (with a specific purpose), and they are influenced by many factors, including age, occupation, education, and environmental context. In screening programs, different screening tools, or different cut-offs, may then be suitable depending on the target population and on the setting (community, primary, residential, or acute care) in which the screening is conducted. In particular, it is of great importance whether the screening will be conducted in selected elderly populations (e.g., people living in nursing homes, who usually need assistance in performing daily activities) or in unselected populations. In the latter case, screening tests are more likely to produce false positives (e.g., Flicker et al., 1997), and tests with different sensitivities or different cut-offs should be used in the two cases. Both the specificity and sensitivity of tests for dementia can indeed be affected by variations in the base rates of dementia across populations (Ivnik et al., 2000). More generally, to adequately serve screening purposes, the selected tools should have been validated in population-based samples representative of the target samples of the screening. In addition, the appropriate levels of sensitivity and specificity of the to-be-used test depend on its aims: population screening programs usually require higher cut-off scores than diagnostic settings (e.g., 28 instead of 24 for the MMSE), to minimize the risk of missing cases of cognitive impairment, but different cut-offs may also be appropriate according to the specific purposes of the screening (e.g., screening for dementia vs. MCI). In general, low specificity and high sensitivity may be appropriate when the screening is followed by a diagnostic workup in which many other cognitive tests are expected to be used. Knowledge of the psychometric properties of the selected screening tests is thus also critical in order to plan subsequent diagnostic phases.

For some screening tests, data are indeed available from different populations. MMSE scores, for example, are known to be influenced by age, education level, social class, and sex (O’Connor et al., 1989; Lopez et al., 2005). Moreover, the test has been shown to have a major measurement limitation, namely the frequent observation of floor effects when people with severe dementia are tested and of ceiling effects when highly educated individuals with MCI are tested (Trzepacz et al., 2015). Accordingly, the MMSE may not be the most suitable instrument to use when samples from such populations are to be screened, and its cut-off should be changed to adjust sensitivity on the basis of the sample’s demographic characteristics, the number of false positives/negatives that are expected, and whether the MMSE is the only screening instrument or other tools will be used to refine the outcomes (Pasqualetti et al., 2002; Tang-Wai et al., 2003; O’Bryant et al., 2008; Hoops et al., 2009).

Test selection may also imply the choice of a specific test version (with specific test instructions) which also depends on a neuropsychologically informed analysis of the purpose of the test (e.g., to unveil mild vs. severe impairment of certain cognitive functions) in that particular diagnostic setting. For example, it has been argued that the use of pre-drawn clocks in the CDT is useful to tap perceptual functions whereas free-drawn clocks place greater demands on language, memory, and executive functions (Freedman et al., 1994). Moreover, the request to draw the hands on the clock to indicate “10 past 11” (i.e., two numbers that are very close to each other in magnitude and on the clock face), as in Watson et al.’s (1993) version, places greater demands on executive functions (and results in a more difficult task) compared to when times on the hour or half-hour are requested (Tuokko et al., 1995).

Differential Diagnosis

Once the suspicion of dementia is confirmed, most clinical practice guidelines recommend referral to a specialist setting, where a more detailed evaluation (requiring different specialist competencies and a multidisciplinary approach, cf., Pruneti et al., 2018) is conducted to perform a differential diagnosis in accordance with type-specific validated criteria (NICE, National Institute for Health and Clinical Excellence, 2018). This phase usually includes a comprehensive neuropsychological assessment (NPA) in which a crucial role is played by cognitive testing. Cognitive tests administered in this phase are generally specific to a particular set of cognitive abilities (i.e., each test is specifically designed to assess mainly certain cognitive functions), rather than being global cognitive tests. By evaluating performance across tests, in conjunction with the relevant medical history and behavioral observations, it is possible to define a specific diagnosis, assess the severity of impairment, and formulate a prognosis. Inclusion of NPA as part of the comprehensive investigation of this phase is indeed strongly recommended: it improves diagnostic accuracy over and above both routine clinical evaluation and (possible) laboratory/neuro-imaging tests (Geroldi et al., 2008).

It is widely recognized that NPA requires neuropsychological expertise in order to administer and score cognitive tests, as well as to interpret and communicate test results (cf., e.g., American Psychological Association, 2014). We would argue, however, that such expertise may be very useful even when an in-depth NPA cannot be performed (e.g., because of time or resource constraints), and most of the information about the neuropsychological profile of the testee is collected through the same global cognitive tools that are designed for – and usually administered in – previous phases of the diagnostic process. Not only can these tools be helpful to obtain dichotomous findings (positive vs. negative outcomes in the screening or confirmatory phases) but, if adequately used, they can also provide more refined information that is useful for the differential diagnosis.

The MMSE, for example, can be a prominent source of information. Studies investigating its factorial structure have identified two to three main factors underlying the MMSE score, each of which can be associated with different cognitive profiles that are typical of different types of dementia (see Noale et al., 2006; Shigemori et al., 2010).

As suggested by these studies, the factor including temporal/spatial orientation and delayed recall items is associated with episodic memory. It has been proposed that selectively poor performance on such items may be used as a marker of episodic memory impairment in early AD or MCI of the amnestic type: the sum of the scores obtained on these items indeed appears to be more strongly associated with AD, not only than the total MMSE score, but also than scores on specific tests of episodic memory (e.g., the Free and Cued Reminding Test) usually recommended for the detection of the amnestic syndrome in AD (Carcaillon et al., 2009). The factor including constructional praxis, reading, verbal comprehension and attention/concentration items is more linked to working memory abilities, whereas the factor including naming, verbal repetition, immediate memory, and writing items reflects verbal skills and consolidated (semantic) knowledge. Indeed, these items put less emphasis on episodic memory and are thus less sensitive to the memory loss that is typical of AD. In contrast, performance on these items (or on some of them) may be more impaired in other forms of dementia. For example, Frontotemporal dementia (FTD) patients may show inattention and poor organization, as well as language impairment (the behavioral and language variants of FTD, respectively), in the face of a relative preservation of episodic memory (Wittenberg et al., 2008). An analysis of performance on items tapping executive functioning and language (see Table 1), and a comparison with performance on the orientation and delayed recall items, may help in the differential diagnosis between AD and FTD. The rate of progression of cognitive decline, as estimated by the changes in the MMSE score over time, can also provide helpful insight for such a differential diagnosis. Chow et al. (2006), for example, found that patients with both the language and the behavioral variants of FTD show a more rapid progression of decline on the language MMSE items than AD patients and, conversely, a slower progression on constructional praxis2.

The MoCA, too, can give useful information about possible impairments in distinct dimensions of cognitive functioning and can thus inform the differential diagnosis between different forms of dementia (Freitas et al., 2012). The MoCA’s attention and executive sub-scores, for example, are composed of items that rely on cognitive functions associated with frontal lobe processing (see Table 1). They are considered to be particularly sensitive to frontal lobe dysfunction (Nasreddine et al., 2005) and helpful in carrying out a differential diagnosis between AD and FTD (Coleman et al., 2016). Indeed, the MoCA includes critical subtests in which AD and FTD patients have been shown to present opposite patterns of performance, that is, items assessing verbal fluency and language production, as well as orientation and episodic memory (i.e., delayed recall). These items can be used to calculate the so-called VLOM ratio [(verbal fluency + language)/(orientation + memory); Mathuranath et al., 2000], which can differentiate quite well between AD and FTD patients, with AD patients typically showing a higher ratio than FTD patients (i.e., relatively better performance on VL items compared to OM items; Larner, 2018).
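
As a minimal sketch of how the VLOM ratio is obtained, the code below implements the formula quoted above. The sub-score values used in the two example profiles are hypothetical and serve only to show the direction of the difference (higher ratios for AD-like profiles, lower for FTD-like ones); the actual cut-offs proposed by Mathuranath et al. (2000) for their battery are not reproduced here.

    # VLOM ratio, as defined in the text:
    # (verbal fluency + language) / (orientation + delayed-recall memory).
    # Example sub-scores are hypothetical profiles, not data from any cited study.

    def vlom_ratio(verbal_fluency, language, orientation, memory):
        denominator = orientation + memory
        if denominator == 0:
            raise ValueError("Orientation + memory score is zero; ratio undefined.")
        return (verbal_fluency + language) / denominator

    # Hypothetical AD-like profile: orientation and delayed recall are hit hardest.
    ad_like = vlom_ratio(verbal_fluency=10, language=22, orientation=5, memory=2)
    # Hypothetical FTD-like profile: fluency and language are hit hardest,
    # memory relatively spared.
    ftd_like = vlom_ratio(verbal_fluency=4, language=14, orientation=9, memory=6)

    print(f"AD-like profile:  VLOM = {ad_like:.2f}")   # higher ratio
    print(f"FTD-like profile: VLOM = {ftd_like:.2f}")  # lower ratio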

The pattern of performance observed on the MoCA, and the possible differences between the testee’s performance on VL and OM items, should obviously be integrated with other sources of information, such as the testee’s behavior during test administration. It is well-known, indeed, that the most common form of FTD, that is, its behavioral variant, presents with an onset of symptoms characterized more by behavioral changes (e.g., stereotypical movements, compulsive-like behaviors) than by cognitive deficits, thus making the observation of testees’ behavior at least as important as their performance on neuropsychological tests (Rascovsky et al., 2011).

Qualitative, in addition to quantitative, assessment of testees’ performance is also very important: testees’ errors can be a useful source of information for the differential diagnosis if correctly interpreted. For example, Lee et al. (2009) showed that, in the CDT, AD patients usually make more errors revealing deficits in accessing knowledge of the features and meaning of a clock (e.g., the clock hands are absent or the time is simply written on the clock instead of being represented as a particular position of the hands) compared to patients with Parkinson’s disease dementia (PDD) or subcortical vascular dementia (VaD). In contrast, PDD and VaD patients showed more errors reflecting a deficit in executive functioning, such as planning and perseverative errors (e.g., the patient draws three or more hands or writes a given number more than once on the clock face).

Only in-depth familiarity with neuropsychological testing and knowledge of the aims and psychometric properties of the specific neuropsychological tools being administered may enable the examiner to perform these refined evaluations of testees’ performance. Such knowledge allows an informed, competent and flexible use of these tools and can make data obtained from global cognitive tests very helpful. These data can indeed be useful even when resources are available for performing a thorough NPA (i.e., when the differential diagnosis can also rely on the results of domain-specific neuropsychological tests), as they can orient the clinician toward a specific diagnosis to be confirmed through targeted diagnostic tools.

Conclusion

Throughout this piece we have argued for the need for synergy between the use of brief global cognitive tests and neuropsychological expertise in both the detection and the differential diagnosis of dementias. At the detection stage, neuropsychological expertise can make a real difference in the selection, administration, scoring, and interpretation of such tests, as well as in the communication of results both to other health care professionals involved in the detection process and to patients, their families and caregivers. Furthermore, neuropsychological expertise enables the use of these tests to inform and orient the differential diagnostic process at later stages, thereby saving time and effort that can be devoted to helping patients and caregivers in the long process of understanding and accepting dementia.

Author Contributions

BT and MR drafted the manuscript. All authors were involved in the critical revision of the manuscript.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We would like to thank Roberto Cubelli for his support.

Footnotes

  1. ^ According to the Clinical Neuropsychology Synarchy (Smith and CNS, 2019), the core competencies that are required of professionals working in clinical neuropsychology are knowledge of (a) general psychology (basic principles, problems, and methods underlying the science of psychology and main cognitive processes identified by research in experimental cognitive psychology and cognitive neuropsychology), (b) clinical psychology (including development of normal and abnormal behavior and cognition throughout the lifespan), (c) statistics and neuropsychological research methods, (d) neuropsychological assessment (current classification of neuropsychological symptoms, main tests used to detect such symptoms and their psychometric properties), (e) cultural and individual differences, (f) functional neuroanatomy and clinically relevant brain-behavior relationships, (g) neurological and related disorders (including their etiology, pathology, course and treatment), (h) neuroimaging and other neurodiagnostic techniques, (i) professional ethics, and (j) neuropsychological intervention, including treatment and rehabilitation. In addition, skills in the following domains are required: (a) decision-making and diagnosis, (b) consultation (patients, families, medical colleagues, agencies, etc.) and communication of both evaluation results and recommendations to diverse audiences, (c) information gathering, (d) teaching and supervision, and (e) continuous updating of relevant knowledge through reviews of the critical literature (see also Hessen et al., 2018).
  2. ^ It is worth noting that atypical forms of AD (i.e., AD forms with non-amnestic presentation) have been described that can mimic other forms of dementia. Among these AD variants, there are very rare forms that present with executive and behavioral deficits, together with disproportionate frontal lobe atrophy, and that can easily be mistaken for FTD (Ljubenkov and Geschwind, 2016). The majority of the atypical forms of AD present with an early onset (i.e., in people younger than 65). In such early-onset cases, structural neuroimaging and cerebrospinal fluid examinations are usually performed, as recommended by the American Academy of Neurology and European Federation of Neurological Societies guidelines (Knopman et al., 2001). Laboratory investigations, along with clinical assessment, may help in clarifying the picture (Rossor et al., 2010). It should be noted, however, that the description of these rare AD forms has been limited to a few case reports and studies with small series of patients. Therefore, the clinical and neuropathological characteristics of these forms, including their evolution over time, are not yet well understood (Ossenkoppele et al., 2015). At the moment, there are no clear neuropsychological indexes that can help distinguish between FTD and these behavioral/executive variants of AD.

References

Aleccia, J. (2019). Creator Of Brain Exam That Trump Aced Demands New Training For Testers. San Francisco, CA: Kaiser Health News. Available online at: https://khn.org/news/creator-of-brain-exam-that-trump-aced-demands-new-training-for-testers/.

Alzheimer’s Disease International (n.d.). Symptoms of Dementia. Available online at: https://www.alz.co.uk/info/early-symptoms (accessed December 29, 2020).

American Psychiatric Association (2013). Diagnostic and Statistical Manual of Mental Disorders, 5th Edn. Washington, DC: American Psychiatric Association.

American Psychological Association (2014). Standards for Educational and Psychological Testing. Washington, DC: American Psychological Association.

Arevalo-Rodriguez, I., Smailagic, N., Roqué I Figuls, M., Ciapponi, A., Sanchez-Perez, E., and Giannakou, A. (2015). Mini-Mental State Examination (MMSE) for the detection of Alzheimer’s disease and other dementias in people with mild cognitive impairment (MCI). Cochrane Database Syst. Rev. 2015:CD010783. doi: 10.1002/14651858.CD010783.pub2

Ashford, J. W., Borson, S., O’Hara, R., Dash, P., Frank, L., Robert, P., et al. (2007). Should older adults be screened for dementia? It is important to screen for evidence of dementia! Alzheimers Dement. 3, 75–80. doi: 10.1016/j.jalz.2007.03.005

Baddeley, A., Eysenck, M., and Anderson, M. W. (2020). Memory, 3rd Edn. New York, NY: Psychology Press- Routledge.

Ballard, C., Burns, A., Corbett, A., Livingstone, G., and Rasmussen, J. (2015). Helping you Assess Cognition: A Practical Toolkit for Clinicians. London: Alzheimer’s Society.

Borson, S., Scanlan, J., Brush, M., Vitaliano, P., and Dokmak, A. (2000). The Mini-Cog: a cognitive ‘vital signs’ measure for dementia screening in multi-lingual elderly. Int. J. Geriatr. Psychiatry 15, 1021–1027.

Bradford, A., Kunik, M. E., Schulz, P., Williams, S. P., and Singh, H. (2009). Missed and delayed diagnosis of dementia in primary care: prevalence and contributing factors. Alzheimer Dis. Assoc. Disord. 23, 306–314. doi: 10.1097/WAD.0b013e3181a6bebc

Brodaty, H., Low, L.-F., Gibson, L., and Burns, K. (2006). What is the best dementia screening instrument for general practitioners to use? Am. J. Geriatr. Psychiatry 14, 391–400. doi: 10.1097/01.JGP.0000216181.20416.b2

Carcaillon, L., Amieva, H., Auriacombe, S., Helmer, C., and Dartigues, J. F. (2009). A subtest of the MMSE as a valid test of episodic memory? Comparison with the Free and Cued Reminding Test. Dement. Geriatr. Cogn. Disord. 27, 429–438.

Chambers, L. W., Sivananthan, S., and Brayne, C. (2017). Is dementia screening of apparently healthy individuals justified? Adv. Prev. Med. 2017:9708413, 1–8. doi: 10.1155/2017/9708413

Chow, T. W., Hynan, L. S., and Lipton, A. M. (2006). MMSE scores decline at a greater rate in frontotemporal degeneration than in AD. Dement. Geriatr. Cogn. Disord. 22, 194–199. doi: 10.1159/000094870

Clark, C. M., Sheppard, L., Fillenbaum, G. G., Galasko, D., Morris, J. C., Koss, E., et al. (1999). Variability in annual mini-mental state examination score in patients with probable Alzheimer disease. A clinical perspective of data from the consortium to establish a registry for Alzheimer’s disease. Arch. Neurol. 56, 857–862. doi: 10.1001/archneur.56.7.857

Coleman, K. K., Coleman, B. L., MacKinley, J. D., Pasternak, S. H., and Finger, E. C. (2016). Detection and differentiation of frontotemporal dementia and related disorders from Alzheimer disease using the montreal cognitive assessment. Alzheimer Dis. Assoc. Disord. 30, 258–263. doi: 10.1097/WAD.0000000000000119

Denes, G., and Pizzamiglio, L. (1999). Handbook of Clinical and Experimental Neuropsychology. London: Psychology Press.

Di Pucchio, A., Vanacore, N., Marzolini, F., Lacorte, E., Fiandra, T. D., Group, I.-D., et al. (2018). Use of neuropsychological tests for the diagnosis of dementia: a survey of Italian memory clinics. BMJ Open 8:e017847. doi: 10.1136/bmjopen-2017-017847

Fabrigoule, C., Lechevallier, N., Crasborn, L., Dartigues, J. F., and Orgogozo, J. M. (2003). Inter-rater reliability of scales and tests used to measure mild cognitive impairments by general practitioners and psychologists. Curr. Med. Res. Opin. 19, 603–608. doi: 10.1185/030079903125002298

Falk, N., Cole, A., and Meredith, T. G. (2018). Evaluation of suspected dementia. Am. Fam. Phys. 97, 398–405.

Flicker, L., Logiudice, D., Carlin, J. B., and Ames, D. (1997). The predictive value of dementia screening instruments in clinical populations. Int. J. Geriatr. Psychiatry 12, 203–209.

Folstein, M. F., Folstein, S. E., and McHugh, P. R. (1975). “Mini-mental state”: a practical method for grading the cognitive state of patients for the clinician. J. Psychiatr. Res. 12, 189–198.

Foster, N. L., Bondi, M. W., Das, R., Foss, M., Hershey, L. A., Koh, S., et al. (2019). Quality improvement in neurology. Mild cognitive impairment quality measurement set. Neurology 93, 705–713. doi: 10.1212/WNL.0000000000008259

Freedman, M., Leach, L., Kaplan, E., Winocur, G., Shulman, K. I., and Delis, D. (1994). Clock Drawing: A Neuropsychological Analysis. New York, NY: Oxford University Press.

Freitas, S., Simões, M. R., Alves, L., Vicente, M., and Santana, I. (2012). Montreal cognitive assessment (MoCA): validation study for vascular dementia. J. Int. Neuropsychol. Soc. 18, 1031–1040. doi: 10.1017/S135561771200077X

Gathercole, S. E., Dunning, D. L., Holmes, J., and Norris, D. (2019). Working memory training involves learning new skills. J. Lang. 105, 19–42.

Geroldi, C., Canu, E., Bruni, A. C., Forno, G. D., Ferri, R., Gabelli, C., et al. (2008). The added value of neuropsychologic tests and structural imaging for the etiologic diagnosis of dementia in italian expert centers. Alzheimer Dis. Assoc. Disord. 22, 309. doi: 10.1097/WAD.0b013e3181871a47

Hessen, E., Hokkaken, L., Ponsford, J., Van Zandvoort, M., Watts, A., Jonathan, E., et al. (2018). Core competencies in clinical neuropsychology training across the world. Clin. Neuropsychol. 32, 642–656.

Hoops, S., Nazem, S., Siderowf, A. D., Duda, J. E., Xie, S. X., Stern, M. B., et al. (2009). Validity of the MoCA and MMSE in the detection of MCI and dementia in Parkinson disease. Neurology 73, 1738–1745. doi: 10.1212/WNL.0b013e3181c34b47

Hubbard, E. J., Santini, V., Blankevoort, C. G., Volkers, K. M., Barrup, M. S., Byerly, L., et al. (2008). Clock drawing performance in cognitively normal elderly. Arch. Clin. Neuropsychol. 23, 295–327. doi: 10.1016/j.acn.2007.12.003

Hugo, J., and Ganguli, M. (2014). Dementia and cognitive impairment: epidemiology, diagnosis, and treatment. Clin. Geriatr. Med. 30, 421–442. doi: 10.1016/j.cger.2014.04.001

Hwang, A. B., Boes, S., Nyffeler, T., and Schuepfer, G. (2019). Validity of screening instruments for the detection of dementia and mild cognitive impairment in hospital inpatients: A systematic review of diagnostic accuracy studies. PLoS One 14:e0219569. doi: 10.1371/journal.pone.0219569

Iliffe, S., and Wilcock, J. (2017). The UK experience of promoting dementia recognition and management in primary care. Zeitschrift für Gerontol. Geriatrie 50, 63–67. doi: 10.1007/s00391-016-1175-1

Ivnik, R. J., Smith, G. E., Petersen, R. C., Boeve, B. F., Kokmen, E., and Tangalos, E. G. (2000). Diagnostic accuracy of four approaches to interpreting neuropsychological test data. Neuropsychology 14, 163–177.

Kaplan, R., and Saccuzzo, D. (2005). Psychological Testing: Principles, Applications and Issues, 6th Edn. Monterey, CA: Brooks/Cole.

Knopman, D. S., DeKosky, S. T., Cummings, J. L., Chui, H., Corey-Bloom, J., Relkin, N., et al. (2001). Practice parameter: diagnosis of dementia (an evidence-based review). Report of the Quality Standards Subcommittee of the American Academy of Neurology. Neurology 56, 1143–1153.

Lang, L., Clifford, A., Wei, L., Zhang, D., Leung, D., Augustine, G., et al. (2017). Prevalence and determinants of undetected dementia in the community: a systematic literature review and a meta-analysis. BMJ Open 7:e011146. doi: 10.1136/bmjopen-2016-011146

Larner, A. J. (2018). Dementia in Clinical Practice: A Neurological Perspective. New York, NY: Springer International Publishing.

Larner, J. (2017). Cognitive Screening Instruments. A Practical Approach. New York, NY: Springer International Publishing.

Lee, A. Y., Kim, J. S., Choi, B. H., and Sohn, E. H. (2009). Characteristics of clock drawing test (CDT) errors by the dementia type: quantitative and qualitative analyses. Arch. Gerontol. Geriatr. 48, 58–60. doi: 10.1016/j.archger.2007.10.003

Lezak, M. D., Howieson, D. B., Bigler, E. D., and Tranel, D. (2012). Neuropsychological Assessment. New York, NY: Oxford University Press.

Lin, J. S., O’Connor, E., Rossom, R. C., Perdue, L. A., and Eckstrom, E. (2013). Screening for cognitive impairment in older adults: a systematic review for the U.S. Preventive Services Task Force. Ann. Intern. Med. 159, 601–612. doi: 10.7326/0003-4819-159-9-201311050-00730

Ljubenkov, P. A., and Geschwind, M. D. (2016). Dementia. Semin. Neurol. 36, 397–404. doi: 10.1055/s-0036-1585096

Lopez, M. N., Charter, R. A., Mostafavi, B., Nibut, L. P., and Smith, W. E. (2005). Psychometric properties of the folstein mini-mental state examination. Assessment 12, 137–144. doi: 10.1177/1073191105275412

Lorentz, W. J., Scanlan, J. M., and Borson, S. (2002). Brief screening tests for dementia. Can. J. Psychiatry 47, 723–733. doi: 10.1177/070674370204700803

Malloy, P. F., Cummings, J. L., Coffey, C. E., Duffy, J., Fink, M., Lauterbach, E. C., et al. (1997). Cognitive screening instruments in neuropsychiatry: a report of the Committee on Research of the American Neuropsychiatric Association. J. Neuropsychiatry 9, 189–197.

Mathuranath, P. S., Nestor, P. J., Berrios, G. E., Rakowicz, W., and Hodges, J. R. (2000). A brief cognitive test battery to differentiate Alzheimer’s disease and frontotemporal dementia. Neurology 55, 1613–1620. doi: 10.1212/01.wnl.0000434309.85312.19

Moore, A., Frank, C., and Chambers, L. W. (2018). Role of the family physician in dementia care. Can. Fam. Phys. Official J. Coll. Fam. Phys. Can. 64, 717–719.

Nasreddine, Z. S. (n.d.). Upcoming Mandatory Training For MoCA Testing. Available online at: https://www.mocatest.org/mandatory-moca-test-training/ (accessed December 29, 2020).

Nasreddine, Z. S., Phillips, N. A., Bédirian, V., Charbonneau, S., Whitehead, V., Collin, I., et al. (2005). The montreal cognitive assessment, MoCA: a brief screening tool for mild cognitive impairment. J. Am. Geriat. Soc. 53, 695–699. doi: 10.1111/j.1532-5415.2005.53221.x

NICE, National Institute for Health and Clinical Excellence (2018). Dementia: Assessment, Management And Support for People Living with Dementia and Their Carers. Available online at: https://www.nice.org.uk/guidance/ng97 (accessed December 29, 2020).

Noale, M., Limongi, F., and Minicuci, N. (2006). Identification of factorial structure of MMSE based on elderly cognitive destiny: the Italian longitudinal study on aging. Dement. Geriatr. Cogn. Disord. 21, 233–241. doi: 10.1159/000091341

O’Bryant, S. E., Waring, S. C., Cullum, C. M., Hall, J., Lacritz, L., Massman, P. J., et al. (2008). Staging dementia using clinical dementia rating scale sum of boxes scores: a Texas Alzheimer’s research consortium study. Arch. Neurol. 65, 1091–1095. doi: 10.1001/archneur.65.8.1091

O’Connor, D. W., Pollitt, P. A., Treasure, F. P., Brook, C. P. B., and Reiss, B. B. (1989). The influence of education, social class and sex on Mini-Mental State scores. Psychol. Med. 19, 771–776. doi: 10.1017/S0033291700024375

Ossenkoppele, R., Pijnenburg, Y. A., Perry, D. C., Cohn-Sheehy, B. I., Scheltens, N. M., Vogel, J. W., et al. (2015). The behavioural/dysexecutive variant of Alzheimer’s disease: clinical, neuroimaging and pathological features. Brain 138, 2732–2749.

Palm, R., Jünger, S., Reuther, S., Schwab, C. G. G., Dichter, M. N., Holle, B., et al. (2016). People with dementia in nursing home research: a methodological review of the definition and identification of the study population. BMC Geriatrics 16:78. doi: 10.1186/s12877-016-0249-7

Pasqualetti, P., Moffa, F., Chiovenda, P., Carlesimo, G. A., Caltagirone, C., and Rossini, P. M. (2002). Mini-Mental state examination and mental deterioration battery: analysis of the relationship and clinical implications. J. Am. Geriat. Soc. 50, 1577–1581. doi: 10.1046/j.1532-5415.2002.50416.x

Pottie, K., Rahal, R., Jaramillo, A., Birtwhistle, R., Thombs, B. D., Singh, H., et al. (2016). Recommendations on screening for cognitive impairment in older adults. CMAJ 188, 37–46.

Pruneti, C., Innocenti, A., and Cammisuli, D. M. (2018). Multidimensional approach usefulness in early Alzheimer’s disease: advances in clinical practice. Acta Biomed. 89, 79–86.

Rascovsky, K., Hodges, J. R., Knopman, D., Mendez, M. F., Kramer, J. H., Neuhaus, J., et al. (2011). Sensitivity of revised diagnostic criteria for the behavioural variant of frontotemporal dementia. Brain 134, 2456–2477. doi: 10.1093/brain/awr179

Razak, M. A., Ahmad, N. A., Chan, Y. Y., Mohamad Kasim, N., Yusof, M., Abdul Ghani, M. K. A., et al. (2019). Validity of screening tools for dementia and mild cognitive impairment among the elderly in primary health care: a systematic review. Public Health 169, 84–92. doi: 10.1016/j.puhe.2019.01.001

Rossor, M. N., Fox, N. C., Mummery, C. J., Schott, J. M., and Warren, J. D. (2010). The diagnosis of young-onset dementia. Lancet Neurol. 9, 793–806.

Shigemori, K., Ohgi, S., Okuyama, E., Shimura, T., and Schneider, E. (2010). The factorial structure of the Mini-Mental State Examination (MMSE) in Japanese dementia patients. BMC Geriatr. 10:36. doi: 10.1186/1471-2318-10-36

Singh-Manoux, A., Kivimaki, M., Glymour, M. M., Elbaz, A., Berr, C., Ebmeier, K. P., et al. (2012). Timing of onset of cognitive decline: results from Whitehall II prospective cohort study. BMJ 344:d7622. doi: 10.1136/bmj.d7622

Smith, G., and CNS (2019). Education and training in clinical neuropsychology: recent developments and documents from the clinical neuropsychology synarchy. Arch. Clin. Neuropsychol. 34, 418–431. doi: 10.1093/arclin/acy075

Tang-Wai, D. F., Knopman, D. S., Geda, Y. E., Edland, S. D., Smith, G. E., Ivnik, R. J., et al. (2003). Comparison of the short test of mental status and the mini-mental state examination in mild cognitive impairment. Arch. Neurol. 60, 1777–1781. doi: 10.1001/archneur.60.12.1777

Trzepacz, P. T., Hochstetler, H., Wang, S., Walker, B., Saykin, A. J., and Alzheimer’s Disease Neuroimaging Initiative (2015). Relationship between the Montreal Cognitive Assessment and Mini-mental State Examination for assessment of mild cognitive impairment in older adults. BMC Geriatr. 15:107. doi: 10.1186/s12877-015-0103-3

Tuokko, H., Kristjansson, B., and Miller, J. A. (1995). Neuropsychological detection of dementia: an overview of the neuropsychological components of the canadian study of health and aging. J. Clin. Exp. Neuropsychol. 17, 325–373. doi: 10.1080/01688639508405129

Turner, S. M., DeMers, S. T., Fox, H. R., and Reed, G. M. (2001). APA’s guidelines for test user qualifications: an executive summary. Am. Psychol. 56, 1099–1113. doi: 10.1037/0003-066X.56.12.1099

Watson, Y. I., Arfken, C. L., and Birge, S. J. (1993). Clock completion: an objective screening test for dementia. J. Am. Geriat. Soc. 41, 1235–1240.

Wilson, J., and Jungner, G. (1968). Principles and Practice of Screening for Disease. Public Health Papers No. 34. Geneva: World Health Organization.

Wittenberg, D., Possin, K. L., Rascovsky, K., Rankin, K. P., Miller, B. L., and Kramer, J. H. (2008). The early neuropsychological and behavioral characteristics of frontotemporal dementia. Neuropsychol. Rev. 18, 91–102. doi: 10.1007/s11065-008-9056-z

Keywords: aging, dementia, cognitive assessment, screening tools, psychometric testing, cognitive impairment, cognitive decline, neuropsychology

Citation: Riello M, Rusconi E and Treccani B (2021) The Role of Brief Global Cognitive Tests and Neuropsychological Expertise in the Detection and Differential Diagnosis of Dementia. Front. Aging Neurosci. 13:648310. doi: 10.3389/fnagi.2021.648310

Received: 31 December 2020; Accepted: 07 April 2021;
Published: 10 June 2021.

Edited by:

Cosimo Urgesi, University of Udine, Italy

Reviewed by:

Carlo Scialò, International School for Advanced Studies (SISSA), Italy
Valentina Moro, University of Verona, Italy

Copyright © 2021 Riello, Rusconi and Treccani. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Barbara Treccani, barbara.treccani@unitn.it
