PERSPECTIVE article

Front. Psychol., 24 April 2023
Sec. Cognitive Science
This article is part of the Research Topic Digitalization in Education: Developing Tools for Effective Learning and Personalisation of Education.

The necessary, albeit belated, transition to computerized cognitive assessment

  • 1Facultad de Lenguas y Educación, Universidad Nebrija, Madrid, Spain
  • 2AcqVA Aurora Center, UiT The Arctic University of Norway, Tromsø, Norway

Cognitive assessment is a common, everyday process in educational, clinical, and research settings, among others. Currently, most professionals use classic paper-and-pencil screenings, tests, and assessment batteries. However, as the SARS-CoV-2 health crisis made clear, the paper-and-pencil format is increasingly outdated, and a transition to new technologies in the form of computerized cognitive assessments (CCA) is necessary. This article discusses the advantages, disadvantages, and implications of this necessary transition, which professionals will face in the immediate future, and encourages its careful adoption to ensure a smooth changeover.

1. Introduction: the current state of cognitive assessment

Cognitive assessment refers to a set of techniques and procedures for ascertaining the status of one or more aspects of a person’s cognitive profile. It is an essential process performed daily in clinical, academic, and research settings around the world. It is often used to identify behavioral markers of processes involving cognitive impairment (Wild et al., 2008) or neurodegeneration (Choi, 2008; Davis et al., 2015; Daroische et al., 2021), although it is also used with healthy individuals (White et al., 2012; Bertelli et al., 2018). These tests can take many forms, consist of different activities, follow different administration rules, and require different types of responses from the user. To a greater or lesser extent, however, they can all be delivered in different formats, the two main ones being paper-and-pencil and computerized.

Mainly for reasons of tradition and accessibility, most of the widely applied tests are paper-and-pencil tests. Classic examples include the Montreal Cognitive Assessment (MoCA; Nasreddine et al., 2005), the Stroop Test (Stroop, 1935), Raven’s Standard Progressive Matrices (RSPM; Raven, 1938), the Rey Auditory Verbal Learning Test (RAVLT; Rey, 1941; Schmidt, 1996), the Trail Making Test (TMT; Reitan, 1955), the Corsi Block-Tapping Test (Corsi, 1972), the Mini-Mental State Examination (MMSE; Folstein et al., 1975), the Wisconsin Card Sorting Test (WCST; Heaton, 1981), the Boston Naming Test (Kaplan et al., 1983), and the Wechsler scales (Wechsler, 2008; Wechsler, 2014). While quite a few of these individual tests have been digitized (Hannukkala et al., 2020), most of the assessment batteries that typically contain them have not completed the step into the digital modality, having entered the field only timidly through computerized scoring without going fully online. It should also be noted that tests vary considerably in how feasibly they can be digitized. On the one hand, the translation to digital format is straightforward for tests that require selecting one or more answers from an array of options on paper. On the other hand, tests that are manipulative or visuo-constructive, or that require drawing or complex verbal responses, pose a challenge or involve complex adaptation processes for their implementation on digital devices.

In contrast, more recently developed tests and assessment batteries are being created fully digital, to the point that some of them have never been distributed in physical format. These are generally called computerized cognitive assessments (CCA), and some of the most noteworthy examples include the CogniFit General Cognitive Assessment Battery (CAB; Haimov et al., 2008) and the Cambridge Neuropsychological Test Automated Battery (Sandberg, 2011), in addition to those that Zygouris and Tsolaki (2014) already identified in their review. This article outlines the advantages, disadvantages, and implications of this transition to digital formats.

2. Reasons for the transition

Before the 2000s, using digitized tests made little sense because the required equipment was cumbersome, expensive, and scarce. Nowadays, however, children are educated in the use of technology (Lee et al., 2014; Wojcik et al., 2022), adults work with it every day, and older people are gaining experience and confidence with these devices (Demiris et al., 2004; Kim and Preis, 2015; Kim et al., 2016). Thus, technological advances and easier access to technology have made CCAs feasible for everyday use.

The transition to CCAs is supported by different cognitive theories and psychometric approaches. For instance, computer-based assessments allow tests to be tailored to individual test-takers following the principles of Item Response Theory (Mead and Drasgow, 1993; Huebner, 2010). In addition, Cognitive Load Theory (Sweller, 1988) suggests that the computer-based format reduces extraneous cognitive load relative to the paper-based format, thus optimizing performance (Khalil et al., 2010; for further discussion, see Taylor, 1994).
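
To make the IRT point concrete, the following minimal Python sketch shows maximum-information item selection under a two-parameter logistic (2PL) model, the mechanism by which a computerized adaptive test tailors itself to the test-taker. The item bank and parameter values are purely hypothetical illustrations, not drawn from any of the instruments cited here.

```python
import math

def p_correct(theta, a, b):
    """2PL IRT model: probability of a correct response given ability
    theta, item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of an item at ability theta: a^2 * p * (1 - p)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta_hat, item_bank, administered):
    """Maximum-information selection: administer the unused item that is
    most informative at the current ability estimate."""
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates,
               key=lambda i: item_information(theta_hat, *item_bank[i]))

# Hypothetical item bank of (discrimination, difficulty) pairs.
bank = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.2)]
print(next_item(theta_hat=0.3, item_bank=bank, administered={0}))  # -> 2
```

After each response, the ability estimate would be updated (e.g., by maximum likelihood) and the cycle repeated, so the test progressively converges on items matched to the person being assessed.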

Given this paradigm shift, it seems reasonable to foster a transition toward the coexistence of classic cognitive assessment tools and CCA, so as to always offer the option that best fits the circumstances of each evaluation session, evaluator, and person being assessed. The clearest example of the need to implement CCA as a regular tool, and of how classical tests can become inoperative, was unintentionally provided by the lockdowns resulting from the SARS-CoV-2 health crisis (Larner, 2021). Many health and education professionals, as well as researchers who needed to assess the cognitive status of their informants, had to either pause their activity or migrate to new technologies, because the classic cognitive assessment tools they were used to applying no longer suited the needs of the situation. In fact, these circumstances changed the modality of assessment, but not the frequency with which assessments were conducted (Webb et al., 2021). With this in mind, a selective literature review of the status and needs of the field of cognitive assessment was conducted, and the pros and cons of transitioning to CCAs were analyzed.

3. The advantages of CCA

Numerous studies address the advantages of computerized cognitive assessments over classic paper-and-pencil tests (Sternin et al., 2019). The most basic advantage is familiarity with the delivery device, since digital technology is deeply integrated into our lives. Most people not only use computers, tablets, and smartphones daily but have also mastered these devices.

Digital platforms are also well known for their accessibility, and the possibility of modifying the contrast, the size or color of the text, the volume, or the response requirements for people with reduced mobility or motor difficulties represents a clear-cut advantage of CCA. Adapting the test to the specific needs of the assessed person is straightforward in CCA, including changing the language when necessary to serve a student, patient, or user who speaks another language (see Frances et al., 2020, for a discussion of the impact of this factor). Since COVID-19, ubiquity has taken on an important role, and the need to assess people remotely has skyrocketed (Larner, 2021). Along these lines, several studies point to the feasibility of remote cognitive assessments (Settle et al., 2015; Geddes et al., 2020; Zeghari et al., 2021).

Usability and user experience can also be counted as a clear advantage of CCA over traditional assessments. Interactive activities, automatic visual and auditory feedback, and colorful stimuli result in assessments that are more engaging, motivating, and user-friendly for the test-taker and easier to administer for the evaluator (Soto-Ruiz, 2020). Importantly, CCAs need not lose psychometric value because of their customizable digital format. Psychometric characteristics such as discriminant validity and test–retest reliability of CCAs are usually well documented, just as they are for conventional paper-and-pencil resources (Zygouris and Tsolaki, 2014). Moreover, the ability of the device itself to record reaction times, to identify user responses with extreme accuracy, to provide the same feedback to all users equally, to deliver instant results, and to support the incorporation of more sophisticated techniques such as eye movement detection are among the major advantages of CCA.
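
As a simple illustration of the precision argument, the sketch below times a single trial with Python’s high-resolution monotonic clock; the two callables standing in for stimulus presentation and response collection are hypothetical placeholders for a real test’s interface layer.

```python
import time

def timed_trial(present_stimulus, wait_for_response):
    """Run one trial and measure the reaction time from stimulus onset
    to response with sub-millisecond clock resolution."""
    present_stimulus()
    t0 = time.perf_counter()        # clock starts at stimulus onset
    response = wait_for_response()  # blocks until the user answers
    rt_ms = (time.perf_counter() - t0) * 1000.0
    return response, rt_ms

# Stub usage: a console "trial" standing in for a real graphical one.
resp, rt = timed_trial(lambda: print("Press Enter!"), input)
print(f"response={resp!r}, rt={rt:.1f} ms")
```

No stopwatch-wielding evaluator can match this, and the same code path also guarantees that every test-taker receives identical timing and feedback.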

Another advantage that may be less obvious, but that stands as a critical one, is the updateable nature of the reference scales. If test distributors regularly update the normative scales against which scores are interpreted, there is no need to buy new versions of a test to avoid comparing test-takers against outdated norms. This is a relatively natural process for CCA, given that the update can take place on a server without imposing any burden on the evaluator. In contrast, any update to the reference norms of a paper-and-pencil test may require renewal actions on the side of the evaluator.
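
A minimal sketch of how such server-side updating might look, assuming a hypothetical distributor endpoint that serves versioned norms as JSON: scoring always uses whatever version the server currently publishes, so the evaluator never has to act.

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint; the real URL and payload schema would be
# defined by the test distributor.
NORMS_URL = "https://example.org/api/tests/stroop/norms/latest"

def fetch_norms(url=NORMS_URL):
    """Download the currently published normative table."""
    with urlopen(url) as resp:
        return json.load(resp)

def z_score(raw, group_norms):
    """Standardize a raw score against the reference group's norms."""
    return (raw - group_norms["mean"]) / group_norms["sd"]

# Offline demo with an inline table of the same assumed shape.
norms = {"version": "2023.1", "age_60_69": {"mean": 45.0, "sd": 9.5}}
print(norms["version"], round(z_score(52.0, norms["age_60_69"]), 2))  # 2023.1 0.74
```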

4. On the way to the transition

What keeps professionals attached to analog cognitive assessments? CCAs have some limitations that cannot be ignored and that will make analog tools preferable to digital ones on certain occasions. There are still people who are digitally illiterate or have little command of digital technology, and they may manage as well or better with paper and pencil. This may be particularly important in certain pathologies and, as Witt et al. (2013) pointed out, CCA cannot completely replace a comprehensive paper-and-pencil neuropsychological assessment in certain circumstances.

However, one of the main problems preventing professionals from making the transition is more contextual and cultural. On the one hand, professionals are accustomed to using analog screenings, tests, and cognitive assessment batteries. They have historically been trained with them, they have applied them hundreds of times, and any change to a digital tool can be cumbersome. On the other hand, a professional’s prior investment needs to be considered. If professionals have spent large sums of money on analog assessment batteries, they can be expected to be reluctant to change until that investment has been recouped.

In addition, it should be considered that Internet access is, most of the time, a mandatory technical requirement of CCA. While urban areas usually have fast and stable Internet connections, there are places where the connection does not allow assessments that require heavy or agile data transfer (Gerli and Whalley, 2021). The same applies to access to electricity, which is not always stable or available, limiting access to this type of test. There is also another critical aspect related to the specific testing conditions that must be taken into account when assessing remotely. Most tests are intended to be administered in laboratory-like clinical settings, with good control of potentially interfering environmental factors (Robillard et al., 2018). When a person completes the assessment from home without the guidance of a professional, the chances increase that uncontrolled environmental conditions will interfere with the process (e.g., distractions, facilitations, or other elements that may affect the validity of the assessment). For this reason, a proper CCA necessarily involves advising test-takers to prepare and secure the environment and conditions in which the assessment will be performed, and implementing mechanisms to assess the validity of the data, as sketched below.
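
One form such validity mechanisms could take is a simple automated screen of a session’s reaction times, flagging responses too fast to be genuine or slow enough to suggest distraction. The thresholds in this sketch are illustrative assumptions, not clinically validated cut-offs.

```python
def flag_suspect_trials(reaction_times_ms,
                        too_fast_ms=150,      # likely anticipatory key presses
                        too_slow_ms=10_000):  # likely distraction or disengagement
    """Return the indices of trials whose reaction times suggest the
    response did not reflect the intended cognitive process."""
    return [i for i, rt in enumerate(reaction_times_ms)
            if rt < too_fast_ms or rt > too_slow_ms]

rts = [420, 95, 530, 15_000, 610]  # hypothetical trial log (ms)
print(flag_suspect_trials(rts))    # -> [1, 3]
```

A production system would combine several such signals (e.g., focus-loss events, ambient noise, or webcam checks where consented) before deciding whether a remote session’s data can be trusted.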

In a broader sense, it should also be noted that not all tests can be directly translated into a digital format, and that some, if not all, will require adaptations or modifications. In these circumstances, it becomes especially important to ensure the psychometric qualities of CCA (Gates and Kochan, 2015) and to develop adequate quality assurance tests. Furthermore, all modifications need to be endorsed by scientific evidence, similar to the procedures currently being developed to adapt classic 2D laboratory assessments to 3D virtual reality assessments (see Rocabado and Duñabeitia, 2022).

5. Discussion: implications of these changes

Health measures such as the lockdowns adopted during the COVID-19 pandemic have made the importance of CCA clear (Hsu et al., 2021). Nowadays, digital tools are integrated into the daily lives of professionals and non-professionals alike in schools, healthcare centers, workplaces, and social contexts (Tomasik et al., 2020; Nordin et al., 2021; Paganin and Simbula, 2021). They are well-known, familiar tools for most users. The design and implementation of CCA based on these widespread digital tools could offer a better service to students, patients, and research participants by making the activities more attractive and dynamic, individualizing the tests, and keeping the reference standards up to date.

CCA is increasingly integrated into professionals’ daily work and has great and growing potential. Without sacrificing test quality, CCA makes it possible to adapt assessment characteristics to the specific needs of test-takers, increase the accuracy of measurements, assess remotely on common devices available to a wide portion of society, and give immediate and motivating feedback. However, CCA also faces human limitations, such as users who are not proficient with digital tools or professionals who want to amortize the investment made in classic tests. Moreover, certain environmental conditions, such as the need for electricity and connectivity or the relative lack of control over the assessment environment, may limit how far CCA can be generalized. Thus, proper adaptations to the digital format with good psychometric characteristics are essential for the widespread use of CCA. Nevertheless, the transition to digital assessments is an inevitable evolution.

The debate is not about using only digitized tools or only classic tools, but about whether we are adapting classic paper-and-pencil tools to the context in which we live at the right speed. Classic assessment tools will, and should, continue to be used for some time or perhaps always. But the parallel migration to digital tools is indisputable, and it must be carried out thoroughly to facilitate professionals’ work should we again face adverse circumstances in which in-person paper-and-pencil assessments cannot be used.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Author contributions

DA and JD contributed to the conceptualization. DA wrote the original draft. JD reviewed, edited the text, and supervised the project. All authors contributed to the article and approved the submitted version.

Funding

This research has been partially funded by grant PID2021-126884NB-I00 from the Spanish Government. The APC was funded by the publication fund of UiT The Arctic University of Norway.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Bertelli, M. O., Cooper, S. A., and Salvador-Carulla, L. (2018). Intelligence and specific cognitive functions in intellectual disability. Curr. Opin. Psychiatry 31, 88–95. doi: 10.1097/yco.0000000000000387

Choi, S. H. (2008). Cognitive assessment in traumatic brain injury. Brain Neurorehabil. 1:148. doi: 10.12786/bn.2008.1.2.148

Corsi, P. M. (1972). Human memory and the medial temporal region of the brain. PhD thesis. Montreal, QC: McGill University.

Daroische, R., Hemminghyth, M. S., Eilertsen, T. H., Breitve, M. H., and Chwiszczuk, L. J. (2021). Cognitive impairment after COVID-19—a review on objective test data. Front. Neurol. 12:699582. doi: 10.3389/fneur.2021.699582

Davis, D. H., Creavin, S. T., Yip, J. L., Noel-Storr, A. H., Brayne, C., and Cullum, S. (2015). Montreal Cognitive Assessment for the diagnosis of Alzheimer’s disease and other dementias. Cochrane Database Syst. Rev. 2015:CD010775. doi: 10.1002/14651858.cd010775.pub2

Demiris, G., Rantz, M. J., Aud, M. A., Marek, K. D., Tyrer, H. W., Skubic, M., et al. (2004). Older adults’ attitudes towards and perceptions of ‘smart home’ technologies: a pilot study. Med. Inform. Internet Med. 29, 87–94. doi: 10.1080/14639230410001684387

Folstein, M., Folstein, S. E., and McHugh, P. R. (1975). “Mini-mental state”: a practical method for grading the cognitive state of patients for the clinician. J. Psychiatr. Res. 12, 189–198. doi: 10.1016/0022-3956(75)90026-6

Frances, C., Pueyo, S., Anaya, V., and Duñabeitia, J. A. (2020). Interpreting foreign smiles: language context and type of scale in the assessment of perceived happiness and sadness. Psicológica 41, 21–38. doi: 10.2478/psicolj-2020-0002

Gates, N. J., and Kochan, N. A. (2015). Computerized and on-line neuropsychological testing for late-life cognition and neurocognitive disorders. Curr. Opin. Psychiatry 28, 165–172. doi: 10.1097/yco.0000000000000141

Geddes, M. R., O’Connell, M. E., Fisk, J. D., Gauthier, S., Camicioli, R., and Ismail, Z. (2020). Remote cognitive and behavioral assessment: report of the Alzheimer society of Canada task force on dementia care best practices for COVID-19. Alzheimer’s Dement. Diagn. Assess. Dis. Monit. 12:e12111. doi: 10.1002/dad2.12111

Gerli, P., and Whalley, J. (2021). Fibre to the countryside: a comparison of public and community initiatives tackling the rural digital divide in the UK. Telecommun. Policy 45:102222. doi: 10.1016/j.telpol.2021.102222

Haimov, I., Hanuka, E., and Horowitz, Y. (2008). Chronic insomnia and cognitive functioning among older adults. Behav. Sleep Med. 6, 32–54. doi: 10.1080/15402000701796080

Hannukkala, M. S., Mikkonen, K., Laitinen, E., and Tuononen, T. (2020). Staying on the digitalized trail. Health Technol. 10, 1257–1263. doi: 10.1007/s12553-020-00425-6

Heaton, R. K. (1981). A manual for the Wisconsin card sorting test. Melton South, VIC: Western Psychological Services.

Hsu, N., Monasterio, E., and Rolin, O. (2021). Telehealth in pediatric rehabilitation. Phys. Med. Rehabil. Clin. N. Am. 32, 307–317. doi: 10.1016/j.pmr.2020.12.010

Huebner, A. (2010). An overview of recent developments in cognitive diagnostic computer adaptive assessments. Pract. Assess. Res. Eval. 15:3. doi: 10.7275/7fdd-6897

Kaplan, E., Goodglass, H., and Weintraub, S. (1983). Boston naming test. Philadelphia, PA: Lea & Febiger.

Khalil, M., Mansour, M. A., and Wilhite, D. R. (2010). Evaluation of cognitive loads imposed by traditional paper-based and innovative computer-based instructional strategies. J. Vet. Med. Educ. 37, 353–357. doi: 10.3138/jvme.37.4.353

Kim, M. J., Kim, W. G., Kim, J. M., and Kim, C. (2016). Does knowledge matter to seniors’ usage of mobile devices? Focusing on motivation and attachment. Int. J. Contemp. Hosp. Manag. 28, 1702–1727. doi: 10.1108/ijchm-01-2015-0031

Kim, M. J., and Preis, M. W. (2015). Why seniors use mobile devices: applying an extended model of goal-directed behavior. J. Travel Tour. Mark. 33, 404–423. doi: 10.1080/10548408.2015.1064058

Larner, A. J. (2021). Cognitive testing in the COVID-19 era: can existing screeners be adapted for telephone use? Neurodegener. Dis. Manag. 11, 77–82. doi: 10.2217/nmt-2020-0040

Lee, S. M., Seo, H. A., and Han, H. J. (2014). Use of smart devices of infants and preschool-children and their mothers’ perceptions. J. Child Care Educ. 10, 111–131. doi: 10.14698/jkcce.2014.10.2.111

Mead, A. D., and Drasgow, F. (1993). Equivalence of computerized and paper-and-pencil cognitive ability tests: a meta-analysis. Psychol. Bull. 114, 449–458. doi: 10.1037/0033-2909.114.3.449

Nasreddine, Z. S., Phillips, N. A., Bédirian, V., Charbonneau, S., Whitehead, V., Collin, I., et al. (2005). The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. J. Am. Geriatr. Soc. 53, 695–699. doi: 10.1111/j.1532-5415.2005.53221.x

Nordin, S., Sturge, J., Ayoub, M., Jones, A., McKee, K., Dahlberg, L., et al. (2021). The role of information and communication technology (ICT) for older adults’ decision-making related to health, and health and social care services in daily life—a scoping review. Int. J. Environ. Res. Public Health 19:151. doi: 10.3390/ijerph19010151

Paganin, G., and Simbula, S. (2021). New Technologies in the workplace: can personal and organizational variables affect the employees’ intention to use a work-stress management app? Int. J. Environ. Res. Public Health 18:9366. doi: 10.3390/ijerph18179366

Raven, J. C. (1938). Standard Progressive Matrices: A Perceptual Test of Intelligence. London, UK: H. K. Lewis.

Reitan, R. M. (1955). The relation of the trail making test to organic brain damage. J. Consult. Psychol. 19, 393–394. doi: 10.1037/h0044509

Rey, A. (1941). L’examen psychologique dans les cas d’encephalopathie traumatique. Arch. Psychol. 28, 286–340.

Robillard, J. M., Lai, J., Wu, J., Feng, T. L., and Hayden, S. (2018). Patient perspectives of the experience of a computerized cognitive assessment in a clinical setting. Alzheimer’s Dement.: Transl. Res. Clin. Interv. 4, 297–303. doi: 10.1016/j.trci.2018.06.003

Rocabado, F., and Duñabeitia, J. A. (2022). Assessing inhibitory control in the real world is virtually possible: a virtual reality demonstration. Behav. Sci. 12:444. doi: 10.3390/bs12110444

Sandberg, M. A. (2011). “Cambridge Neuropsychological Testing Automated Battery,” in Encyclopedia of Clinical Neuropsychology. eds. J. S. Kreutzer, J. DeLuca, and B. Caplan (New York, NY: Springer).

Schmidt, M. (1996). Rey Auditory Verbal Learning Test: A Handbook. Los Angeles, CA: Western Psychological Services.

Settle, J. R., Robinson, S. A., Kane, R., Maloni, H. W., and Wallin, M. T. (2015). Remote cognitive assessments for patients with multiple sclerosis: a feasibility study. Mult. Scler. 21, 1072–1079. doi: 10.1177/1352458514559296

Soto-Ruiz, K. M. (2020). “Digital neurocognitive testing,” in Biomarkers for Traumatic Brain Injury, 355–365. doi: 10.1016/b978-0-12-816346-7.00024-5

Sternin, A., Burns, A., and Owen, A. M. (2019). Thirty-five years of computerized cognitive assessment of aging—where are we now? Diagnostics 9:114. doi: 10.3390/diagnostics9030114

Stroop, J. R. (1935). Studies of interference in serial verbal reactions. J. Exp. Psychol. 18, 643–662. doi: 10.1037/h0054651

Sweller, J. (1988). Cognitive load during problem solving: effects on learning. Cogn. Sci. 12, 257–285. doi: 10.1207/s15516709cog1202_4

Taylor, T. R. (1994). A review of three approaches to cognitive assessment, and a proposed integrated approach based on a unifying theoretical framework. S. Afr. J. Psychol. 24, 184–193. doi: 10.1177/008124639402400403

Tomasik, M. J., Helbling, L. A., and Moser, U. (2020). Educational gains of in-person vs. distance learning in primary and secondary schools: a natural experiment during the COVID-19 pandemic school closures in Switzerland. Int. J. Psychol. 56, 566–576. doi: 10.1002/ijop.12728

Webb, S. S., Kontou, E., and Demeyere, N. (2021). The COVID-19 pandemic altered the modality, but not the frequency, of formal cognitive assessment. Disabil. Rehabil. 44, 6365–6373. doi: 10.1080/09638288.2021.1963855

Wechsler, D. (2008). Wechsler Adult Intelligence Scale (4th ed.). San Antonio, TX: Pearson/PsychCorp.

Wechsler, D. (2014). Wechsler Intelligence Scale for Children (5th ed.). San Antonio, TX: Pearson/PsychCorp.

White, A. J., Batchelor, J., Pulman, S., and Howard, D. (2012). The role of cognitive assessment in determining fitness to stand trial. Int. J. Forensic Ment. Health 11, 102–109. doi: 10.1080/14999013.2012.688091

Wild, K., Howieson, D., Webbe, F., Seelye, A., and Kaye, J. (2008). Status of computerized cognitive testing in aging: a systematic review. Alzheimers Dement. 4, 428–437. doi: 10.1016/j.jalz.2008.07.003

Witt, J. A., Alpherts, W., and Helmstaedter, C. (2013). Computerized neuropsychological testing in epilepsy: overview of available tools. Seizure 22, 416–423. doi: 10.1016/j.seizure.2013.04.004

Wojcik, E. H., Prasad, A., Hutchinson, S. P., and Shen, K. (2022). Children prefer to learn from smart devices, but do not trust them more than humans. Int. J. Child Comput. Interact. 32:100406. doi: 10.1016/j.ijcci.2021.100406

Zeghari, R., Guerchouche, R., Tran Duc, M., Bremond, F., Lemoine, M. P., Bultingaire, V., et al. (2021). Pilot study to assess the feasibility of a mobile unit for remote cognitive screening of isolated elderly in rural areas. Int. J. Environ. Res. Public Health 18:6108. doi: 10.3390/ijerph18116108

Zygouris, S., and Tsolaki, M. (2014). Computerized cognitive testing for older adults. Am. J. Alzheimers Dis. Other Dement. 30, 13–28. doi: 10.1177/1533317514522852

Keywords: cognitive assessment, cognition, digital tools, computerized cognitive assessment, paper and pencil test

Citation: Asensio D and Duñabeitia JA (2023) The necessary, albeit belated, transition to computerized cognitive assessment. Front. Psychol. 14:1160554. doi: 10.3389/fpsyg.2023.1160554

Received: 07 February 2023; Accepted: 06 April 2023;
Published: 24 April 2023.

Edited by:

Manpreet Kaur Bagga, Partap College of Education, India

Reviewed by:

Maura Pilotti, Prince Mohammad bin Fahd University, Saudi Arabia
Natanael Karjanto, Sungkyunkwan University, Republic of Korea
Luis Carlos Jaume, University of Buenos Aires, Argentina
Rozel Balmores-Paulino, University of the Philippines Baguio, Philippines
Murat Tezer, Near East University, Cyprus

Copyright © 2023 Asensio and Duñabeitia. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Jon Andoni Duñabeitia, jdunabeitia@nebrija.es