Critical evaluation skills when using online information are considered important in many research and education frameworks; critical thinking and information literacy are cited as key twenty-first-century skills for students. Higher education may play a special role in promoting students' skills in critically evaluating (online) sources. Today, higher education students are more likely to use the Internet than offline sources such as textbooks when studying for exams. However, far from being a value-neutral, curated learning environment, the Internet poses various challenges, including a large amount of incomplete, contradictory, erroneous, and biased information. With low barriers to online publication, the responsibility to access, select, process, and use relevant and trustworthy information rests with the (self-directed) learner. Despite the central importance of critically evaluating online information, its assessment in higher education is still an emerging field. In this paper, we present a newly developed theoretical-conceptual framework for Critical Online Reasoning (COR), situated in relation to prior approaches (“information problem-solving,” “multiple-source comprehension,” “web credibility,” “informal argumentation,” “critical thinking”), along with an evidence-centered assessment framework and its preliminary validation. In 2016, the Stanford History Education Group developed and validated an assessment of Civic Online Reasoning for the United States. At the college level, this assessment holistically measures students' web searches and evaluation of online information using open Internet searches and real websites. Our initial adaptation and validation indicated a need to further develop the construct and assessment framework for evaluating higher education students in Germany across disciplines and over the course of their studies. Based on our literature review and prior analyses, we classified COR abilities into three uniquely combined facets: (i) online information acquisition, (ii) critical information evaluation, and (iii) reasoning based on evidence, argumentation, and synthesis. We modeled COR ability from a behavior, content, process, and development perspective, specifying scoring rubrics in an evidence-centered design. Preliminary validation results from expert interviews and content analysis indicated that the assessment covers typical online media and challenges for higher education students in Germany and contains cues to tap the modeled COR abilities. We close with a discussion of ongoing research and the potential for future development.
Different types of tests exist, including tests for research purposes and exams assessing knowledge. According to expectancy-value theory, different tests elicit different levels of effort and importance within a test taker. Research on test-taking motivation has shown that students' test-taking effort and importance decrease over the course of both high-stakes and low-stakes tests. However, whether changes in test order affect the effort, importance, and response processes of education students has seldom been examined experimentally. We aimed to examine changes in effort and importance resulting from variations in test battery order and their relations to response processes. We employed an experimental design assessing N = 320 education students’ test-taking effort and importance three times, as well as their performance on cognitive ability tasks and a mock exam. Further relevant covariates, such as expectancies, test anxiety, and concentration, were assessed once. We randomly varied the order of the cognitive ability test and the mock exam. The assumption of intraindividual changes in education students’ effort and importance over the course of test taking was tested with a latent growth curve model estimated separately for each condition. In contrast to previous studies, responses and response times were included in diffusion models to examine education students’ response processes within the test-taking context. The results indicated intraindividual changes in education students’ effort and importance depending on test order, but similar mock-exam response processes. In particular, effort did not decrease when the cognitive ability test came first and the mock exam second, but decreased significantly when the mock exam came first and the cognitive ability test second. Diffusion modeling revealed differences in response processes on the cognitive ability tasks (boundary separation and estimated latent trait), suggesting higher motivational levels when the cognitive ability test came first than vice versa. The response processes on the mock exam tasks did not differ by condition.
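The diffusion-model analysis summarized above rests on parameters such as boundary separation and drift rate. As a minimal illustration of how such parameters can be recovered from behavioral data, the following Python sketch implements the closed-form EZ-diffusion equations (Wagenmakers et al., 2007), a simple member of this model family and not necessarily the estimation method used in the study; all input values are hypothetical.

```python
import math

def ez_diffusion(prop_correct, rt_variance, rt_mean, s=0.1):
    """Recover EZ-diffusion parameters from summary statistics.

    prop_correct : proportion of correct responses (0 < p < 1, p != 0.5)
    rt_variance  : variance of correct-response RTs (seconds^2)
    rt_mean      : mean of correct-response RTs (seconds)
    s            : scaling parameter (0.1 by convention)

    Returns (v, a, ter): drift rate, boundary separation, non-decision time.
    """
    # Logit of accuracy; edge cases (p near 0, 0.5, or 1) need the
    # corrections given by Wagenmakers et al., omitted in this sketch.
    L = math.log(prop_correct / (1.0 - prop_correct))
    x = L * (L * prop_correct**2 - L * prop_correct + prop_correct - 0.5) / rt_variance
    v = math.copysign(1.0, prop_correct - 0.5) * s * x**0.25  # drift rate
    a = s**2 * L / v                                          # boundary separation
    y = -v * a / s**2
    mdt = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))
    ter = rt_mean - mdt                                       # non-decision time
    return v, a, ter

# Hypothetical condition summaries: accuracy, RT variance, mean RT.
print(ez_diffusion(0.80, 0.112, 0.723))  # ~ (0.099, 0.140, 0.300)
```

A lower boundary separation in one test-order condition would indicate more liberal (less careful) responding, which is how such parameter differences map onto the motivational interpretation given in the abstract.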
The Internet has become one of the main sources of information for university students’ learning. Since anyone can disseminate content online, however, the Internet is full of irrelevant, biased, or even false information. Thus, students’ ability to use online information in a critical-reflective manner is of crucial importance. In our study, we used a framework for the assessment of students’ critical online reasoning (COR) to measure university students’ ability to critically use information from online sources and to reason on contentious issues based on online information. In addition to analyzing students’ COR by evaluating their open-ended short answers, we also investigated the students’ web search behavior and the quality of the websites they visited and used during this assessment. We analyzed both the number and type of websites as well as the quality of the information these websites provide. Finally, we investigated to what extent students’ web search behavior and the quality of the website contents used are related to higher task performance. To investigate this question, we used five computer-based performance tasks and asked 160 students from two German universities to perform a time-restricted open web search to respond to the open-ended questions presented in the tasks. The written responses were evaluated by two independent human raters. To analyze the students’ browsing history, we developed a coding manual and conducted a quantitative content analysis for a subsample of 50 students. The number of visited webpages per participant per task ranged from 1 to 9. Concerning the type of website, the participants relied especially on established news sites and Wikipedia. Among other findings, the number of visited websites and the critical discussion of sources provided on the websites correlated positively with students’ scores. The identified relationships between students’ web search behavior, their performance in the CORA tasks, and the qualitative website characteristics are presented and critically discussed in terms of the limitations of this study and implications for further research.
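The reported relationship between browsing behavior and task scores is a rank-order association. As a minimal sketch (not the authors' analysis code) of how such a correlation can be computed, the following Python snippet uses scipy's Spearman rank correlation; the data and variable names are hypothetical, chosen only to mirror the variables described above.

```python
# Correlating the number of webpages visited per task with the
# rater-assigned CORA score; all values below are illustrative.
from scipy.stats import spearmanr

pages_visited = [1, 3, 2, 5, 4, 9, 2, 6, 3, 7]                       # per participant/task
cora_scores   = [0.5, 1.0, 0.5, 1.5, 1.5, 2.0, 1.0, 1.5, 1.0, 2.0]   # mean rater scores

rho, p_value = spearmanr(pages_visited, cora_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```

A rank correlation is a natural choice here because the page counts are bounded, discrete, and unlikely to relate linearly to rubric-based scores.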
Enhancing students’ critical thinking (CT) skills is an essential goal of higher education. This article presents a systematic approach to conceptualizing and measuring CT. CT generally comprises the following mental processes: identifying, evaluating, and analyzing a problem; interpreting information; synthesizing evidence; and reporting a conclusion. We further posit that CT also involves dealing with dilemmas involving ambiguity or conflicts among principles and contradictory information. We argue that performance assessment provides the most realistic—and most credible—approach to measuring CT. From this conceptualization and construct definition, we describe one possible framework for building performance assessments of CT, with attention to extended performance tasks within the assessment system. The framework is a product of an ongoing, collaborative effort, the International Performance Assessment of Learning (iPAL). The framework comprises four main aspects: (1) The storyline describes a carefully curated version of a complex, real-world situation. (2) The challenge frames the task to be accomplished. (3) A portfolio of documents in a range of formats is drawn from multiple sources chosen to have specific characteristics. (4) The scoring rubric comprises a set of scales, each linked to a facet of the construct. We discuss a number of use cases, as well as the challenges that arise with the use and valid interpretation of performance assessments. The final section presents elements of the iPAL research program that involve various refinements and extensions of the assessment framework, a number of empirical studies, along with linkages to current work in online reading and information processing.
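To make the four-aspect structure concrete, the following Python sketch represents one possible encoding of an iPAL-style performance task as a data structure. This is not part of the iPAL framework itself; all class names, field names, and example values are hypothetical, chosen only to mirror the storyline, challenge, portfolio, and rubric components described above.

```python
from dataclasses import dataclass, field

@dataclass
class RubricScale:
    facet: str        # facet of the CT construct this scale is linked to
    levels: int = 4   # number of ordered score levels on the scale

@dataclass
class PerformanceTask:
    storyline: str                # curated version of a real-world situation
    challenge: str                # the task the examinee must accomplish
    portfolio: list[str]          # documents in a range of formats/sources
    rubric: list[RubricScale] = field(default_factory=list)

task = PerformanceTask(
    storyline="Advising a town council on a contested planning decision",
    challenge="Write a recommendation that synthesizes the evidence provided",
    portfolio=["news_article.pdf", "agency_report.pdf", "advocacy_blog.html"],
    rubric=[RubricScale("evaluating evidence"),
            RubricScale("synthesizing evidence"),
            RubricScale("reporting a conclusion")],
)
```

Linking each rubric scale to a single construct facet, as sketched here, is what allows scores from different storylines to be aggregated at the facet level.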