Event Abstract

Assessing metacognitive monitoring accuracy during high-stakes test situations: A cross-cultural study

  • 1 Zayed University, Psychology, United Arab Emirates
  • 2 University of Nicosia, School of Education, Cyprus
  • 3 University of Maryland, College Park, Psychology Department, United States
  • 4 Albright College, Psychology Department, United States

Metacognition, or thinking about thinking, can help students make better decisions in the face of uncertainty (Coutinho et al., 2015; Nelson & Narens, 1990, 1994). When students know what they know and do not know, they are more likely to regulate study time effectively by allocating more time to difficult items (Kornell & Metcalfe, 2006). Researchers have developed numerous methods for assessing metacognition, which typically involve predicting the likelihood of remembering a studied item in the near future or rating one's confidence in the accuracy of a response (e.g., Dunlosky & Hertzog, 1998; Dunlosky & Thiede, 2004; Son & Kornell, 2009; Son & Metcalfe, 2000). These methods have been used to answer important questions about the accuracy of metacognitive judgments, their influence on behavior, and their psychological bases. The results of these studies indicate that although people can make accurate predictions about their cognitions, their predictions are far from perfect (see Dunlosky & Lipko, 2007; Maki, 1998). However, the overwhelming majority of these studies were conducted in laboratory settings and in countries with predominantly individualistic cultures.

The present research expands this work by examining monitoring accuracy in a high-stakes situation, an exam in an undergraduate course, and among students raised in a country whose culture is known to be collectivist. In addition, the present study evaluates whether making metacognitive judgments during an exam improves exam performance. We hypothesized that it would, because the ratings serve as memory cues that could prompt revision of low-confidence questions. To accomplish this, students from the United Arab Emirates (UAE), Cyprus, and the United States of America (USA) rated their confidence in each answer on a multiple-choice exam administered as part of their normal college curriculum and, upon completing the exam, made a global metacognitive judgment: a single estimate of how well they believed they had performed on the whole test. They were instructed to circle their response choice (A, B, C, or D) for each question and to rate their confidence in their answer from 1 (very likely incorrect) to 5 (very likely correct) immediately after answering. After completing the exam, they were also asked to predict their grade from 0 to 100.

The results showed that students in all three countries were able to discriminate questions they knew well from those they did not, but UAE and USA students were significantly better than Cypriot students at this task (see Figure 1). Notably, monitoring accuracy was higher than what has previously been reported in the literature (Dunlosky & Lipko, 2007; Maki, 1998), suggesting that people are not as oblivious to their mental processes as prior research suggests and that factors like motivation play an important role in monitoring accuracy. Cultural differences were also found in global predictions between Emirati and American students: students in the UAE underestimated their performance, whereas students in the USA overestimated it (see Figure 2). The finding that American students displayed overconfidence is not new.
It is in line with prior laboratory research (e.g., Dunlosky & Rawson, 2012), and overconfidence has been observed in domains other than academic performance, such as safe driving (Marottoli & Richardson, 1998), sports (Middleton, Harris, & Surmom, 1996), occupational abilities (Haun, Zeringue, Leach, & Foley, 2000), and social skills (Swann & Gill, 1997). But why did Emirati students not show a similar optimistic bias toward their performance? We believe that Emirati students are less susceptible to the influence of factors like wishful thinking and fixed beliefs about intelligence, which have been shown to increase confidence (Ehrlinger, Mitchum, & Dweck, 2016; Foster, Was, Dunlosky, & Isaacson, 2016).

Furthermore, we found that students who rated their confidence earned higher grades than those who did not (see Figure 3). Why does adding this metacognitive component to the exam boost students' performance? First, it prompts students to engage in an analysis of their knowledge that is more refined than what they typically do on their own: they evaluated their knowledge on a 5-point scale that was recorded right next to their response and remained visible throughout the exam. This forces the implicit feeling of uncertainty to be expressed explicitly, which can lead to a more accurate assessment of one's knowledge and improved self-regulation (Couchman, Miller, Zmuda, Feather, & Schwartzmeyer, 2016). Second, because each rating was recorded on the paper and available to see at any point during the exam, it could serve as a memory cue for which questions should (or should not) be reviewed again. On normal exams, each new feeling of uncertainty retroactively interferes with previous feelings; with past metacognitive ratings readily available, retroactive interference was less likely to make students forget which questions had caused the most uncertainty. To further advance our understanding of metacognition, more research on how culture influences metacognitive judgments is critical.
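Note: the abstract does not specify which index of monitoring accuracy was computed. Purely as an illustration, the Python sketch below uses hypothetical data to show one common way such measures are quantified: a Goodman-Kruskal gamma correlation between per-question confidence (1 to 5) and answer correctness for relative accuracy, and the signed difference between the predicted grade (0 to 100) and the actual grade for global over- or underestimation.

```python
# Illustrative sketch only; the metric and data here are assumptions, not the authors' method.
from itertools import combinations

def goodman_kruskal_gamma(confidence, correct):
    """Gamma over all question pairs: (concordant - discordant) / (concordant + discordant)."""
    concordant = discordant = 0
    for (c1, k1), (c2, k2) in combinations(list(zip(confidence, correct)), 2):
        product = (c1 - c2) * (k1 - k2)
        if product > 0:
            concordant += 1
        elif product < 0:
            discordant += 1
    if concordant + discordant == 0:
        return float("nan")  # undefined when confidence or correctness never varies
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical student: confidence per question and whether each answer was correct.
confidence = [5, 2, 4, 1, 3, 5, 2, 4]
correct    = [1, 0, 1, 0, 1, 1, 0, 1]
print(goodman_kruskal_gamma(confidence, correct))  # 1.0 here: perfect discrimination

# Global bias: predicted grade minus actual grade; negative values indicate underestimation.
predicted_grade, actual_grade = 70, 82
print(predicted_grade - actual_grade)  # -12 -> underestimation, the pattern reported for UAE students
```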

Figure 1
Figure 2
Figure 3

References

Couchman, J. J., Miller, N., Zmuda, S. J., Feather, K., & Schwartzmeyer, T. (2016). The instinct fallacy: The metacognition of answering and revising during college exams. Metacognition and Learning, 11, 171-185.
Coutinho, M. V., Redford, J. S., Church, B. A., Zakrzewski, A. C., Couchman, J. J., & Smith, J. D. (2015). The interplay between uncertainty monitoring and working memory: Can metacognition become automatic? Memory & Cognition, 43, 990–1006.
Dunlosky, J., & Hertzog, C. (1998). Aging and deficits in associative memory: What is the role of strategy production? Psychology and Aging, 13, 597–607.
Dunlosky, J., & Lipko, A. (2007). Metacomprehension: A brief history and how to improve its accuracy. Current Directions in Psychological Science, 16, 228-232.
Dunlosky, J., & Rawson, K. A. (2012). Overconfidence produces underachievement: Inaccurate self evaluations undermine students' learning and retention. Learning and Instruction, 22, 271–280.
Dunlosky, J., & Thiede, K. W. (2004). Causes and constraints of the shift-to-easier-materials effect in the control of study. Memory & Cognition, 32, 779–788.
Ehrlinger, J., Mitchum, A. L., & Dweck, C. S. (2016). Understanding overconfidence: Theories of intelligence, preferential attention, and distorted self-assessment. Journal of Experimental Social Psychology, 63, 94-100.
Foster, N. L., Was, C. A., Dunlosky, J., & Isaacson, R. M. (2016). Even after thirteen class exams, students are still overconfident: The role of memory for past exam performance in student predictions. Metacognition and Learning, 1-19.
Haun, D. E., Zeringue, A., Leach, A., & Foley, A. (2000). Assessing the competence of specimen-processing personnel. Laboratory Medicine, 31, 633–637.
Kornell, N., & Metcalfe, J. (2006). Study efficacy and the region of proximal learning framework. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 609–622.
Maki, R. H. (1998). Test predictions over text material. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacognition in educational theory and practice (pp. 117–144). Mahwah, NJ: Erlbaum.
Marottoli, R. A., & Richardson, E. D. (1998). Confidence in, and self-rating of, driving ability among older drivers. Accident Analysis and Prevention, 30(3), 331–336.
Middleton, W., Harris, P., & Surmom, M. (1996). Give ‘em enough rope: Perception of health and safety risks in bungee jumpers. Journal of Social and Clinical Psychology, 15(1), 68–79.
Nelson, T. O., & Narens, L. (1990). Metamemory: A theoretical framework and new findings. In G. H. Bower (Ed.), The psychology of learning and motivation (Vol. 26, pp. 125-141). New York: Academic Press.
Nelson, T. O., & Narens, L. (1994). Why investigate metacognition? In J. Metcalfe & A. P. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 1-25). Cambridge, MA: Bradford Books.
Son, L. K., & Kornell, N. (2009). Simultaneous decisions at study: Time allocation, ordering, and spacing. Metacognition and Learning, 4, 237–248.
Son, L. K., & Metcalfe, J. (2000). Metacognitive and control strategies in study-time allocation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 204–221.
Swann, W. B., Jr., & Gill, M. J. (1997). Confidence and accuracy in person perception: Do we know what we think we know about our relationship partners? Journal of Personality and Social Psychology, 73, 747-757.

Keywords: metacognition, learning, testing, confidence ratings, monitoring accuracy

Conference: 3rd International Conference on Educational Neuroscience, Abu Dhabi, United Arab Emirates, 11 Mar - 12 Mar, 2018.

Presentation Type: Oral Presentation (invited speakers only)

Topic: Educational Neuroscience

Citation: Coutinho M, Papanastasiou E, Agni S, Almansoori AF, Vasko J and Couchman JJ (2018). Assessing metacognitive monitoring accuracy during high-stakes test situations: A cross-cultural study. Conference Abstract: 3rd International Conference on Educational Neuroscience. doi: 10.3389/conf.fnhum.2018.225.00002

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 22 Feb 2018; Published Online: 14 Dec 2018.

* Correspondence: Dr. Mariana Coutinho, Zayed University, Psychology, Abu Dhabi, United Arab Emirates, mariana.coutinho@uaeu.ac.ae