ORIGINAL RESEARCH article

Front. Educ., 24 April 2026

Sec. Higher Education

Volume 11 - 2026 | https://doi.org/10.3389/feduc.2026.1809360

Information competence and critical thinking in digital environments: evidence from university students in Kazakhstan

  • 1. L.N. Gumilyov Eurasian National University, Astana, Kazakhstan

  • 2. Astana International University, Astana, Kazakhstan

  • 3. Kazakh National Women’s Teacher Training University, Almaty, Kazakhstan

Abstract

The rapid expansion of digital platforms and generative artificial intelligence has reshaped how university students access and evaluate academic information, intensifying concerns about information reliability, critical thinking, and ethical use. This study addresses the gap between students’ widespread access to digital information and their capacity to critically evaluate and responsibly use it in academic contexts, particularly in underrepresented settings such as Kazakhstan. A sequential mixed-methods design (QUAL → quan) was employed. Qualitative data explored students’ information-seeking practices, credibility judgments, responses to conflicting information, perceptions of AI-related risks, and views on institutional support. The sample consisted of 77 undergraduate students enrolled in teacher education-related programs at a university in Kazakhstan. The findings showed a clear hierarchy of search entry points: students most often began with Google Search (n = 30), followed by university databases/library portals (n = 16) and AI tools (n = 15). Clear author credentials (n = 49) and transparent methods or data (n = 25) emerged as the strongest trust indicators. When using AI, students reported predominantly supportive uses, especially summarizing checked readings (n = 58) and generating ideas or outlines (n = 52), rather than relying on AI as a primary source. These patterns indicate that students’ engagement with generative AI is characterized by conditional trust, pragmatic usefulness, and uneven but developing evaluative awareness. The study contributes context-specific evidence from Kazakhstan to the emerging literature on student engagement with generative AI and suggests that higher education institutions should integrate source evaluation, critical reasoning, and AI literacy more explicitly into curriculum, pedagogy, and library support systems.

Introduction

Digital environments have become the primary infrastructure through which university students access, process, and produce academic knowledge (Bygstad et al., 2022). Search engines, academic databases, social media platforms, and AI-based applications now coexist within students' everyday learning ecologies, shaping not only what information is encountered but also how it is evaluated and trusted (Celik et al., 2022; Van Den Beemt et al., 2020). While these environments expand access and efficiency, they simultaneously intensify epistemic challenges related to information overload, uneven quality, and credibility ambiguity. Empirical research consistently shows that digital abundance does not automatically translate into informed judgment; rather, students must actively develop competencies to evaluate, verify, and ethically use information in academically responsible ways (Aladsani, 2025; Kunz et al., 2024; Molerov et al., 2020; Surducan and Valadas, 2025).

Within higher education research, these competencies are commonly conceptualized through the interrelated constructs of information competence (closely aligned with information literacy) and critical thinking (Golden, 2023; Salido et al., 2025). Information competence refers to students' ability to identify credible sources, examine evidence, recognize markers of authority, and judge academic suitability, whereas critical thinking encompasses higher-order reasoning processes such as analysis, comparison, justification, and reflective judgment (Shamsaee et al., 2021). This distinction is analytically useful, yet in digital academic contexts the two constructs are deeply interconnected. Information competence reflects the applied evaluative dimension of students' engagement with sources, while critical thinking becomes visible in how they compare claims, weigh evidence, handle contradiction, and justify trust decisions. Recent empirical studies argue that in digital contexts these constructs are inseparable: information competence functions as the applied evaluative dimension of critical thinking, while critical thinking is operationalized through concrete information practices such as triangulation, source comparison, and evidence checking (Molerov et al., 2020; Nagel et al., 2022).

A growing body of performance-based research demonstrates that many university students struggle to evaluate online information effectively, particularly when confronted with conflicting claims or persuasive but unreliable content (Kops et al., 2025; Lan and Tung, 2024; Trixa and Kaspar, 2024). Studies assessing critical online reasoning show that even academically successful students often rely on superficial cues (e.g., familiarity or accessibility) rather than systematic evaluation strategies when navigating open-web information (Molerov et al., 2020; Nagel et al., 2022). Similarly, investigations into students' information-seeking behavior in higher education reveal a persistent gap between access to digital resources and the ability to judge their quality, relevance, and epistemic reliability (Kunz et al., 2024; Mirazchiyski, 2025). Taken together, this literature suggests that the central issue is no longer access to information alone, but the capacity to exercise evaluative judgment in environments characterized by speed, abundance, and conflicting credibility signals.

The rapid diffusion of generative artificial intelligence (AI) tools has further reshaped the landscape of academic information evaluation. Several studies indicate that students widely perceive AI tools as beneficial for summarization, clarification, and idea generation, yet simultaneously express concerns regarding reliability, accuracy, and ethical boundaries (Blahopoulou and Ortiz-Bonnin, 2025; Shuhaiber et al., 2025). Research on academic integrity highlights that AI-generated content may appear coherent and authoritative while lacking verifiable sources or containing fabricated information, thereby increasing the cognitive burden placed on students' evaluative judgment (Park and Nan, 2025; Slutskiy, 2025). Rather than reducing the importance of information competence and critical thinking, generative AI amplifies their necessity by requiring learners to interrogate outputs, verify claims, and make ethically informed decisions about academic use (Lan and Tung, 2024). This development is particularly significant because AI tools do not merely provide access to information; they increasingly mediate how information is summarized, framed, and prioritized, thereby influencing students' trust formation and decision-making processes.

Institutional context plays a decisive role in shaping how these competencies develop. Empirical studies emphasize that information competence and critical thinking do not emerge automatically from technology use; instead, they are influenced by curriculum design, pedagogical practices (Kong, 2014), library infrastructures (Antasari, 2025), instructor modeling (Batı and Kaptan, 2015), and assessment requirements (Jabali et al., 2024). Courses focused on digital technologies, research methods, and information literacy, alongside assignments that demand evidence-based argumentation and source evaluation, have been shown to support students' development of evaluative skills (Chan and Sung, 2025). However, students' perceptions of institutional support vary widely, suggesting that the visibility and coherence of such initiatives are as important as their formal existence (Barber and Anderson, 2025). Accordingly, understanding students' digital information practices requires attention not only to individual competence, but also to the institutional conditions that enable or constrain reflective and responsible evaluation.

These issues are particularly salient in Kazakhstan, a context characterized by rapid digitalization and ongoing higher education reform. National initiatives increasingly emphasize digital transformation, artificial intelligence, and media and information literacy as strategic priorities. At the same time, international mapping and regional analyses indicate that empirical, learner-centered evidence on how university students in Kazakhstan evaluate digital information remains limited (UNESCO Almaty Regional Office and MediaNet, 2025). Existing studies tend to focus on policy frameworks or technological implementation rather than students' everyday evaluative practices within academic tasks. Despite the expanding global literature on information literacy (Johnston and Webber, 2003), critical thinking (Golden, 2023), and AI in higher education (Malik et al., 2025), two notable gaps persist. First, many studies examine information competence and critical thinking separately, without empirically investigating how they interact as integrated practices in digitally mediated learning environments. Second, there is a lack of mixed-methods, contextually grounded research capturing how students actually make trust decisions, resolve conflicting information, and interpret AI-mediated risks, particularly in underrepresented higher education contexts such as Central Asia.

Problem statement

In response to these gaps, the objective of the present study is to examine how university students in Kazakhstan engage with digital learning environments, evaluate the credibility and academic suitability of digital information, respond to conflicting claims, perceive the influence of AI and algorithmic systems, and understand the role of their universities in supporting these competencies. By investigating information competence and critical thinking as interconnected practices rather than isolated variables, the study aims to contribute both context-specific evidence from Kazakhstan and a broader conceptual understanding of evaluative judgment in AI-rich digital learning environments. Accordingly, the study addresses the following research questions:

  • How do university students in Kazakhstan engage with digital learning environments when seeking academic information?

  • How do they evaluate the reliability and academic suitability of online information?

  • How do they exercise critical thinking when confronted with conflicting digital information?

  • How do students perceive the influence of AI tools and social media algorithms on information evaluation and ethical decision-making?

  • How do university students in Kazakhstan perceive their universities’ role in supporting information competence and critical thinking in digital environments?

  • How do the self-reported quantitative patterns of university students in Kazakhstan reflect their engagement with digital learning environments, trust formation in digital information sources, responses to conflicting information, uses of AI tools in academic coursework, and perceived sources contributing to the development of their information evaluation skills?

Literature review

Information competence in digital academic contexts

Information competence in higher education is increasingly conceptualized as a contextual and evaluative practice, rather than a set of isolated technical skills. In digital academic environments, students are expected not only to locate information efficiently but also to assess its credibility, relevance, and suitability for scholarly use. Empirical research demonstrates that students' information practices are shaped by their ability to evaluate authorship, source authority, publication context, and the presence of evidence or references (Dahlen et al., 2024; Scharrer et al., 2025). This evaluative dimension is particularly salient in open digital environments, where academic and non-academic sources coexist without clear boundaries. Contemporary frameworks emphasize that information competence involves procedural judgment, such as cross-checking sources, verifying claims, and aligning information with disciplinary and institutional standards (Fraillon and Duckworth, 2025). Studies in higher education contexts show that students often rely on general-purpose search engines as entry points but subsequently differentiate sources based on perceived academic legitimacy, such as peer review or institutional affiliation (Li et al., 2022). However, access to digital resources alone does not ensure competent evaluation; rather, information competence develops through guided practice, exposure to academic norms, and repeated engagement with evaluative tasks (Spante et al., 2018). Viewed theoretically, information competence can be understood not simply as technical access to information, but as a context-sensitive capacity for judging what counts as credible and academically usable knowledge. This distinction is important because students may use the same digital tools but differ considerably in how they verify, compare, and justify information choices.

Critical thinking and critical online reasoning

Critical thinking has long been recognized as a core outcome of higher education, typically defined as the ability to analyze, evaluate, and synthesize information in a reasoned and reflective manner. In digital contexts, this construct has been extended through the notion of critical online reasoning, which captures how individuals evaluate information encountered in open, algorithmically mediated online environments (Blakston et al., 2025). Critical online reasoning emphasizes reasoning under uncertainty, where students must resolve conflicting information, assess evidentiary support, and justify trust decisions without relying solely on traditional authority cues. Empirical performance-based studies indicate that many university students struggle with these demands, particularly when information is presented persuasively but lacks scientific grounding (Molerov et al., 2020). Research consistently shows that effective critical online reasoning involves strategies such as triangulation across multiple sources, prioritization of evidence-based arguments, and recognition of bias or manipulation (Molerov et al., 2020). Importantly, critical thinking in digital environments is not merely an internal cognitive process; it is enacted through observable decision-making practices, such as selecting one source over another or seeking additional verification when claims conflict. In this study, critical thinking is treated as a situated reasoning practice rather than a purely abstract cognitive trait. Its relationship with information competence is close but not identical: information competence concerns evaluating and using information appropriately, whereas critical thinking refers more broadly to the reasoning used to weigh evidence, examine contradictions, and justify judgments. Thus, information competence may be seen as one of the key domains in which critical thinking becomes visible in digital academic work.

Digital environments, generative AI, and epistemic risk

The rapid integration of generative AI tools into academic contexts has introduced new epistemic and ethical challenges to digital learning environments. While students increasingly report using AI for summarization, explanation, and idea generation, empirical studies highlight widespread concerns regarding reliability, transparency, and academic integrity (Blahopoulou and Ortiz-Bonnin, 2025; Shuhaiber et al., 2025). Generative AI systems can produce fluent and convincing outputs that obscure uncertainty, omit sources, or generate inaccurate information, thereby complicating students' evaluation processes (Fui-Hoon Nah et al., 2023). Theoretical discussions frame AI and algorithmic systems as non-neutral epistemic actors that shape information exposure and trust formation through personalization, ranking, and content generation (Lan and Tung, 2024). Algorithmic curation may limit exposure to diverse perspectives, while AI-generated content can blur distinctions between verified knowledge and plausible but unsupported claims. Empirical research suggests that these conditions increase the importance of reflexive scepticism, ethical awareness, and critical regulation of technology use by learners (Alduais et al., 2025; Pramod, 2025). A more critical reading of this literature suggests that AI should be understood not only as a useful academic tool, but also as a source of epistemic risk. Unlike traditional digital sources, AI-generated outputs may appear authoritative while offering limited transparency regarding origin, evidence, or accountability. The literature is therefore mixed: some studies emphasize efficiency and learning support, whereas others warn of overreliance, weakened source scrutiny, and ethical ambiguity. In contexts such as Kazakhstan, where empirical evidence remains limited, examining these constructs together can also provide context-sensitive insight into how students navigate trust, evaluation, and responsible academic use of digital information.

Method

Research model

This study employed an exploratory sequential mixed-methods design (QUAL → quan) to examine university students’ information competence and critical thinking in digital environments. Mixed-methods approaches are well suited to complex educational phenomena that require both in-depth understanding of meaning-making processes and descriptive evidence of broader patterns (Creswell and Plano Clark, 2018). The QUAL → quan sequence was selected because information competence and critical thinking are situated and practice-based constructs that cannot be fully captured through quantitative data alone. In this design, the qualitative phase served as the initial exploratory stage and informed the development and refinement of the quantitative component. An initial qualitative phase enabled exploration of how students interpret digital information, evaluate credibility, and reason under conditions of informational conflict, processes that prior research identifies as difficult to observe without qualitative inquiry. The subsequent quantitative phase was used to examine the prevalence of these evaluative practices and decision strategies across the student sample. Accordingly, the study followed an exploratory logic in which qualitative findings guided the construction of the quantitative instrument and quantitative results were used to extend and describe the distribution of the patterns identified qualitatively. Integrating qualitative insights with quantitative patterns strengthens construct validity and provides a more comprehensive account of students' digital information practices (Teddlie and Tashakkori, 2009).

Sample

This study employed a non-probability convenience sampling strategy in the quantitative phase. Participants were recruited from undergraduate students enrolled in teacher education-related programs at two higher education institutions in Kazakhstan, primarily Astana International University. The use of convenience sampling was considered appropriate because the study aimed to obtain initial descriptive evidence from an accessible student population within a specific educational context rather than to generate statistically generalizable findings for all university students in Kazakhstan. At the same time, the limitations of this sampling approach, particularly in relation to representativeness and potential sampling bias, were taken into account when interpreting the findings.

The inclusion criteria were as follows: (a) being enrolled as an undergraduate student at one of the participating higher education institutions in Kazakhstan, (b) being registered in an education-related teacher preparation program, (c) being 18 years of age or older, and (d) providing voluntary informed consent to participate. The exclusion criteria included not meeting these conditions, incomplete survey submission, or non-consent to participate.

The study sample consisted of 77 university students (coded P1 to P77) enrolled in undergraduate programs at two higher education institutions in Kazakhstan. The same 77 participants contributed to both components of the study, as data were collected through a single integrated instrument that included open-ended qualitative items and structured quantitative items. Thus, the qualitative phase was based on written responses from 77 participants, and the quantitative phase likewise included 77 participants. Participants were primarily drawn from Astana International University. In terms of age, the majority of participants were between 18 and 20 years old (n = 59), while 18 students were in the 21–23 age range, indicating that the sample largely comprised early-stage undergraduate students. The gender distribution was uneven, with 68 female and 9 male participants, reflecting the gender composition typical of education-related programs in the Kazakhstani higher education context. Regarding academic affiliation, all participants were enrolled in programs within the Faculty of Education at Astana International University. These included pre-school education and upbringing (n = 30), pedagogy and psychology (n = 28), foreign language education with two foreign languages (n = 11), and chemistry teacher education (n = 8).

With respect to year of study, first-year students constituted the majority (n = 50), followed by third-year students (n = 14) and second-year students (n = 13). This distribution suggests that most participants were in the early phases of their undergraduate education. Participants also reported their self-rated confidence in finding reliable academic information online on a five-point scale (1 = very low, 5 = very high). Most students reported medium confidence (level 3; n = 36) or high confidence (level 4; n = 28), while 13 participants indicated very high confidence (level 5).

Data collection instruments

Data were collected using a researcher-developed instrument consisting of qualitative open-ended questions and descriptive quantitative items, designed to capture students' information competence and critical thinking in digital environments. The qualitative component was based on written open-ended responses collected through an online survey rather than in-depth qualitative interviews. Accordingly, its purpose was not to generate highly detailed narrative accounts, but to obtain participants' concise written reflections on key aspects of their digital information practices. The qualitative component included open-ended questions exploring (a) students' engagement with digital learning environments (e.g., platforms used for academic searches), (b) criteria for evaluating the reliability and academic suitability of online information, (c) strategies for resolving conflicting information, (d) perceptions of digital risks and ethical issues related to AI and social media algorithms, and (e) perceived institutional support for developing these competencies.

The quantitative component comprised structured, non-Likert descriptive items, including single-choice, multiple-choice, and scenario-based questions. These items assessed students' primary starting points for academic searches, indicators that increase trust in digital sources, first responses to conflicting information scenarios, patterns of AI tool use in coursework, and perceived sources contributing to the development of information evaluation skills. The quantitative questions were developed in alignment with the qualitative focus areas of the study so that the second phase could descriptively examine the prevalence of the practices, judgments, and perceptions first explored in the qualitative phase. A pilot study with five undergraduate students was conducted prior to the main data collection to assess item clarity, response burden, and the functionality of the online survey format. Pilot participants were not included in the final sample. Feedback from the pilot indicated the need for minor revisions in wording clarity and item sequencing, and these revisions were made before the main administration. Although the pilot sample was limited and not intended for formal instrument validation, it supported the practical usability of the survey format. To enhance content validity, the instrument was reviewed by experts in education and digital literacy, who evaluated the relevance, clarity, and theoretical alignment of the items.

Data collection process and analysis

Data were collected during the 2025–2026 autumn semester through an online questionnaire administered via Google Forms. A criterion-based convenience sampling strategy was used in the quantitative phase. The survey link was distributed to eligible undergraduate students through institutional communication channels. Participation was voluntary, and students completed the instrument asynchronously. Ethical principles for research involving human participants were strictly observed. Prior to participation, students were informed about the purpose of the study, the voluntary nature of participation, and their right to withdraw at any time without consequence. Anonymity and confidentiality were ensured by collecting no identifying information, and all data were used solely for academic research purposes.

In line with the sequential mixed-methods design (QUAL → quan), the qualitative and quantitative components were analyzed in sequence and then integrated at the interpretation stage. Qualitative data obtained from the open-ended items were analyzed using inductive thematic analysis. First, all responses were read repeatedly to achieve familiarity with the dataset. Second, initial codes were generated by identifying recurring expressions, evaluative judgments, and meaning units related to students' digital information practices. Third, conceptually related codes were clustered into broader categories, and these categories were subsequently organized into overarching themes. As illustrated in Table 1, the coding process proceeded from raw participant statements to initial codes, categories, and themes. This analytic process resulted in five themes: engagement with digital learning environments; information competence; critical thinking in digital contexts; digital risks, AI, and ethical use; and institutional and contextual influences.

Table 1

Raw participant statement | Initial code | Category | Theme
“I use Google first because it is fast, then I check academic sources.” (P12) | Speed-oriented search | Search convenience | Engagement with Digital Learning Environments
“If the source has references and a known author, I trust it more.” (P15) | Author and references | Credibility cues | Information Competence
“When information conflicts, I compare several sources.” (P41) | Source comparison | Verification strategy | Critical Thinking in Digital Contexts
“AI gives quick answers, but I always verify them.” (P35) | Conditional AI trust | Reflexive AI use | Digital Risks, AI, and Ethical Use
“Teachers show us how to choose reliable sources.” (P41) | Instructor guidance | Institutional support | Institutional and Contextual Influences

Example of the qualitative coding process.

Quantitative data were analyzed using descriptive statistical techniques. Frequencies and percentages were calculated for all single-choice, multiple-choice, and scenario-based items to identify patterns in students' digital information practices, trust indicators, responses to conflicting information, AI use, and perceived sources of skill development. The integration of qualitative themes and quantitative patterns enabled a comprehensive interpretation of information competence and critical thinking in digital academic contexts. Integration occurred at the interpretation stage, where the qualitative themes were used as the primary analytic framework and the quantitative findings were compared against them to examine the extent to which the identified practices and perceptions were reflected across the sample.

As shown in Table 1, the analysis moved progressively from participants' concrete expressions to more abstract conceptual groupings. For example, the statement “I use Google first because it is fast, then I check academic sources” was initially coded as “speed-oriented search,” which was then grouped under the category of “search convenience” and interpreted within the broader theme of “Engagement with Digital Learning Environments.”
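To make the coding chain in Table 1 concrete, the following minimal Python sketch represents each coded segment as a statement–code–category–theme record and tallies segments per theme. The data structure and all names are illustrative assumptions: the study reports manual inductive thematic analysis, not any particular software.

```python
# Illustrative sketch only: models the Table 1 coding chain
# (raw statement -> initial code -> category -> theme) and tallies
# coded segments per theme. Names are hypothetical, not the authors' tooling.
from collections import Counter
from dataclasses import dataclass

@dataclass
class CodedSegment:
    participant: str  # e.g., "P12"
    statement: str    # raw open-ended response
    code: str         # initial code
    category: str     # intermediate category
    theme: str        # overarching theme

segments = [
    CodedSegment("P12", "I use Google first because it is fast, then I check academic sources.",
                 "Speed-oriented search", "Search convenience",
                 "Engagement with Digital Learning Environments"),
    CodedSegment("P35", "AI gives quick answers, but I always verify them.",
                 "Conditional AI trust", "Reflexive AI use",
                 "Digital Risks, AI, and Ethical Use"),
]

# Frequency of coded segments per theme, mirroring the third analytic step
# in which categories are clustered into overarching themes.
for theme, count in Counter(seg.theme for seg in segments).items():
    print(f"{theme}: {count}")
```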

Findings

Qualitative findings

Engagement with digital learning environments

Table 2 presents an analytic mapping of the learning rationales that shape university students' engagement with digital information platforms in the context of academic assignments. Rather than categorizing platform use solely by frequency, the table foregrounds the underlying purposes and cognitive intentions guiding students' choices.

Table 2

Key issue/focus | Sub-pattern | Dominant digital tools | Representative participant codes
Speed and Convenience | Quick access to information; time-saving | Google, ChatGPT, AI tools | P1, P4, P5, P9, P12, P17, P30, P34, P44, P49, P57, P64
Academic Reliability | Preference for peer-reviewed and institutional sources | Google Scholar, university databases, electronic libraries | P3, P15, P27, P40, P69, P74, P77
Hybrid Information Practices | Combining traditional and digital sources | Books + Google/AI tools | P6, P8, P18, P25, P37, P51, P67
AI-Assisted Understanding | Clarification, summarization, simplification of content | ChatGPT, Gemini | P12, P17, P35, P41, P53, P62
Multimodal Learning Preferences | Visual/auditory comprehension | YouTube | P46, P58, P61, P65
Breadth and Diversity of Sources | Multiple perspectives and formats | Google, social media, mixed platforms | P16, P47, P72
Instructor-Guided Resources | Curriculum-aligned search practices | Syllabus, recommended platforms | P21, P74
Critical Awareness of Accuracy | Acknowledgement of potential inaccuracies | AI tools (with caution) | P52, P53

Learning rationales underpinning university students’ use of digital information platforms.

Participants' accounts demonstrate that engagement with digital learning environments is strategic rather than random, shaped by competing demands of speed, comprehension, and academic legitimacy. The most dominant pattern is the prioritization of efficiency, with students consistently framing digital platforms as tools that reduce cognitive and temporal effort. For instance, P1 explicitly emphasized “quick access to structured and relevant information,” while P5 highlighted “quick and convenient access to a wide range of information.” Similarly, P9 stated, “Google or ChatGPT, because they provide fast and easy access to relevant information.” Analytically, this suggests that accessibility and speed often shape students' first point of engagement with digital information, but do not by themselves determine final trust or academic use.

These responses position digital tools as functional instruments, optimized for immediacy. However, speed alone does not appear sufficient when academic standards are salient. A parallel discourse strongly emphasizes reliability and scholarly credibility, particularly in relation to assessed university work. Participants such as P3 clearly distinguished academic platforms, noting, “University databases, because they provide reliable, peer-reviewed academic sources.” This stance was echoed by P15, who stated, “the information provided there is reliable and meets academic standards,” and by P74, who emphasized that recommended platforms are “reliable, scientific, and meet academic standards.” These statements indicate an internalized awareness of institutional epistemic hierarchies. More importantly, reliance on official or institutional sources appears to function as a practical indicator of information competence, as students associate such sources with verification, academic legitimacy, and lower epistemic risk.

A particularly significant finding is the instrumental use of artificial intelligence as a cognitive mediator rather than a primary knowledge authority. Participants frequently framed AI tools as mechanisms for explanation, simplification, and synthesis. P12 explained that ChatGPT is used “to simplify the material” and “explain information in a clear and concise way.” P35 similarly stated that ChatGPT is used “to clarify unclear topics.” More advanced reflective use was evident in P17's account, where AI was employed to “summarize the content in the form of notes” and even “include references so that I can check the original sources.” This indicates an emerging form of assisted critical engagement, rather than passive acceptance. In this respect, AI was not positioned as a substitute for academic judgment, but as a support tool within students' broader evaluative practices.

At the same time, several participants explicitly acknowledged the limitations and risks associated with AI-generated information. P52 noted that “the information is not always completely accurate,” while P53 similarly acknowledged that “some information may be inaccurate.” Importantly, this awareness did not result in rejection of AI tools but rather in conditional trust, suggesting that students are negotiating accuracy through cross-checking and prompt formulation. This reflexivity is a key indicator of developing information competence.

Hybrid learning strategies further illustrate this negotiation between traditional and digital epistemologies. P6 emphasized the complementary role of “Books and AI tools,” while P8 highlighted the balance between “quick access to information and reliable academic content.” P67 offered one of the most nuanced accounts, stating that Gemini explanations “align well with textbooks,” while textbooks remain central for studying. Such responses reflect epistemic layering, where authority is distributed across multiple sources.

Finally, engagement patterns also reveal diverse learning preferences, particularly through multimodal platforms. Participants such as P46 emphasized that “watching and listening makes it easier for me to understand and remember,” while P58 and P65 similarly noted that YouTube “helps me understand topics more easily.” These responses underline that digital engagement is not only about information access but also about learning style alignment.

Figure 1 provides a conceptual overview of the main information sources used by university students for academic assignments and the key reasons associated with their use.

Figure 1. Main information sources used by university students for academic assignments and the key reasons associated with their use.

Information competence

Table 3 summarizes the key evaluative criteria employed by university students when judging the reliability and academic appropriateness of online information. The table also captures important variation in information competence, highlighting both well-articulated evaluative practices and instances where students report uncertainty or reliance on external validation (e.g., instructors).

Table 3

Evaluation criterion | Description of practice | Representative participant codes
Source authority/official status | Trust in official, institutional, or licensed platforms | P2, P9, P17, P19, P38, P42, P54, P56, P59, P63, P77
Authorship identification | Checking whether the author is named and credible | P7, P13, P15, P26, P27, P31, P35, P40
Use of references and evidence | Reliance on cited sources, references, or evidence-based writing | P5, P7, P12, P15, P18, P30, P37, P43
Cross-checking and comparison | Comparing information across multiple websites or formats | P4, P11, P14, P22, P29, P44, P47, P52, P55, P58
Alignment with books/syllabus | Validation through textbooks or course materials | P10, P23, P34, P46, P57, P61, P62, P64, P67, P71
Publication date/timeliness | Attention to recency of information | P13, P15, P31, P35, P40, P60, P70, P74, P76
Teacher-mediated validation | Reliance on instructor confirmation | P21, P50
Critical uncertainty/lack of strategy | Inability to articulate evaluation criteria | P1, P6, P68

Criteria used by university students to evaluate the reliability and academic suitability of online information.

Participants' responses indicate that information competence is primarily enacted through procedural evaluation strategies, rather than through abstract or theoretical definitions of credibility. The most dominant criterion across the dataset is source authority, with many participants equating reliability with the official status of a website. Statements such as “If it is an official website, I consider it suitable” (P9), “I only use official websites” (P38), and “I rely on official websites and do not trust information provided by AI” (P54) reflect a binary trust logic, where institutional affiliation functions as a primary marker of academic legitimacy.

Closely related to source authority is the emphasis on authorship and traceability. Participants frequently reported checking whether the author is clearly identified and credible. For example, P7 defined reliable academic information as that which has “a known author, published in a reputable source, and supported by references.” Similarly, P31 stated that reliability is determined by “the official status of the source, the author's identification, publication date, and whether it is published on scientific or educational websites.” These responses suggest an internalization of formal academic conventions, even if not explicitly framed as information literacy theory. Another central strategy involves cross-checking and comparison across multiple sources. Participants repeatedly described validation as a comparative process rather than reliance on a single source. P14 explained, “If the information is similar in most of them, I consider it acceptable,” while P55 emphasized, “I do not rely on a single article or website; I compare several and select the most accurate common information.” This practice reflects a relational understanding of reliability, where consistency across sources increases perceived trustworthiness.

Importantly, books and textbooks continue to function as epistemic anchors in students' evaluation processes. Many participants described using books as a benchmark against which online information is assessed. For instance, P10 reported “studying and comparing important information with books,” while P61 noted that information is validated by “match[ing] it with textbooks.” P73 explicitly stated, “The best approach is working with books.” This pattern indicates that traditional academic materials retain a normative authority even within digital learning environments. A smaller but analytically significant group highlighted teacher-mediated validation, with P21 stating simply, “The teacher checks it,” and P50 noting validation occurs “by asking the teacher or conducting analysis.” This suggests that for some students, information competence remains externally regulated, rather than fully internalized.

At the same time, the data also reveal zones of uncertainty and underdeveloped competence. Participants such as P1 (“I do not know”) and P6 and P68 (“I do not determine it”) were unable to articulate any evaluative strategy. These responses are analytically important, as they indicate an uneven distribution of information competence within the sample. Several responses also reflect an emerging critical stance toward digital information, including scepticism and verification awareness. P22 stated, “It is not 100% reliable, so I first verify it carefully,” while P66 emphasized “verify and justify it with evidence.” Such responses suggest that information competence is not merely technical but increasingly reflective and evaluative.

Figure 2 presents a conceptual framework illustrating how university students determine the reliability of information for academic purposes.

Figure 2. Conceptual framework of how university students determine the reliability of information for academic purposes.

Critical thinking in digital contexts

Table 4 outlines the range of strategies university students use to evaluate and resolve conflicting online information in digital learning contexts.

Table 4

Critical-thinking strategy | How it is enacted in practice | Illustrative participant codes
Authority-based trust (official/scientific) | Prioritizing “official,” “scientific,” “licensed,” reputable organizations and academic sources | P7, P12, P18, P19, P31, P35, P58, P59, P74, P77
Evidence-and-references reasoning | Trusting arguments supported by evidence, references, scientific basis, academic consensus | P3, P7, P9, P15, P27, P40, P41, P46, P60, P66, P76
Cross-checking/triangulation | Comparing multiple sources/websites; looking for consistency across sources | P14, P19, P27, P40, P41, P42, P46, P47, P60, P69, P74, P76
Textbook anchoring | Verifying online claims using books/textbooks/e-books, or preferring books outright | P10, P11, P17, P28, P29, P39, P53, P56, P61, P75
Independent reasoning/analytical reflection | Self-directed analysis, independent thinking, careful reading, logical comparison | P4, P8, P13, P16, P37, P52, P70
Social validation | Asking teacher/instructor/senior students; discussing with peers/family | P20, P22, P24, P30, P45, P48, P49, P50, P54, P64
Heuristic/low-rigor decision rules | Choosing “most accessible,” “more accurate,” “comprehensive,” popular, familiar platform, intuition | P1, P34, P44, P57, P62, P71
Non-evaluation/disengagement | No evaluation strategy; avoidance; dismissal of importance | P23, P38, P72, P73

Strategies used to judge conflicting online information in academic contexts.

Participants' accounts indicate that critical thinking in digital contexts is largely expressed as a credibility-filtering process under conditions of informational conflict, with students adopting markedly different evaluative logics. A prominent pattern is authority-based trust, in which “official” and “scientific” status functions as a primary credibility cue. For example, P12 stated: “I rely on scientific evidence and official sources.” P31 similarly noted: “I trust official and academic sources, scientific articles, and reputable organizations’ websites.” This logic is articulated most comprehensively by P7, who specified that they trust sources with “a reliable author, credible origin, strong evidence, references, and are scientific or official.” These responses suggest that, under uncertainty, many students default to institutional authority as a stabilizing epistemic anchor. This pattern reflects an emerging form of information competence, since students recognize academic and official sources as more trustworthy; however, it may also indicate that critical thinking is sometimes exercised through source status rather than through deeper interrogation of claims themselves.

A second, more analytically demanding pattern involves evidence-and-references reasoning, where trust is tied to argument quality and traceability rather than institutional labels alone. P3 explained: “I only agree with an opinion after carefully analyzing and verifying it.” P9 specified a concrete criterion: “If there are evidences and clear references showing where the information comes from, I trust it more.” P66 reinforced this emphasis: “I verify it using evidence.” In these accounts, credibility is constructed through justification practices, indicating a move toward evaluative reasoning compatible with academic norms. Compared with authority-based trust alone, this strategy points to a stronger integration of information competence and critical thinking, because students are not only selecting credible sources but also seeking explicit evidentiary support for knowledge claims.

Relatedly, many participants described cross-checking/triangulation as the central mechanism for resolving conflict. P14 stated: “I evaluate information based on the author, publication date, source reliability, and whether it appears across multiple websites.” P46 similarly emphasized verification through sourcing: “I trust information with clearly stated sources or compare it across different websites.” The most elaborated triangulation approach appears in P41: “I compare several trusted sources, prioritize scientific or official sites, and trust information supported by evidence and academic consensus.” This reveals a composite form of digital critical thinking that combines multi-source comparison with epistemic hierarchy (scientific/official first). Analytically, this is one of the clearest indicators of advanced evaluative practice in the dataset, because it shows students comparing claims across sources rather than accepting information from a single point of authority.

Notably, a large subset of students anchors evaluation in textbooks and course materials, effectively using print/academic texts as a benchmark for digital claims. P10 stated: “I verify the information by comparing it with books.” P11 echoed: “I confirm it using textbooks.” Some participants explicitly privilege books as more trustworthy, e.g., P29: “I trust books more because they are usually more accurate.” P61 described a mixed strategy: “I compare multiple sources and trust information that aligns with textbooks.” This indicates that “critical thinking” may be enacted as alignment-checking (online information is trusted when it converges with established academic materials), which can be effective but also potentially conservative if it discourages engagement with emerging research.

Another meaningful pattern is independent reasoning and reflective analysis, where students foreground their own analytical agency. P4 stated: “I think independently, do additional research, and then choose the correct information.” P8 similarly noted: “I rely on my own reasoning to decide which information seems more accurate.” P37 framed the process in formal terms: “I think logically, compare opinions, and choose the one that seems most reliable.” P52 offered a nuanced epistemic stance: “There is no single correct answer; different sources may be valid in different situations, so careful reading and analysis are always required.” These responses are analytically significant because they reflect epistemic complexity, a key marker of higher-order critical thinking in digital environments.

However, the dataset also contains substantial reliance on social validation, where trust is outsourced to teachers, instructors, or knowledgeable others. P22 stated: “I seek help from the instructor.” P24: “I ask the teacher.” P30: “I ask instructors.” Some extended this to peers or family, e.g., P20: “I discuss it with my parents and make a joint decision,” and P49: “I ask students and teachers.” This suggests that critical thinking is sometimes distributed socially, which may reflect healthy academic help-seeking but also indicates dependency when personal evaluative criteria are weak or underdeveloped. Thus, these responses may be interpreted in two ways: as legitimate help-seeking within academic culture, or as a sign that some students have not yet internalized sufficiently independent evaluative strategies.

Meanwhile, there is evidence of heuristic/low-rigor decision rules and even non-evaluation. P1 stated: “By choosing the most accessible option.” P34: “I rely on intuition.” P57: “I consider widely used or popular sources.” In contrast, some participants reported no evaluation capacity or even disengagement: P23: “I do not evaluate it,” P38: “I do not evaluate it in any way,” P72: “I do not evaluate it,” and P73: “It is not important.” These responses are especially important because they show that critical thinking and information competence are unevenly developed across the sample. Rather than appearing as stable and universal competencies, they emerge as a continuum of practices ranging from sophisticated evidence-based reasoning (evidence, triangulation, consensus) to minimal or absent evaluation. This variability is central to the study’s mixed-methods logic, as it indicates that “critical thinking in digital contexts” is not uniformly present across participants.

Figure 3 illustrates the key strategies students employ when evaluating information in digital environments.

Figure 3. Key strategies students employ when evaluating information in digital environments.

Digital risks, AI, and ethical use

Table 5 categorizes university students' perceptions of how social media algorithms and AI applications influence their evaluation of information.

Table 5

Perceived impact pattern | Description of students’ perceptions and responses | Representative participant codes
Perceived benefits (speed, efficiency, usefulness) | AI and algorithms are seen as time-saving, supportive, and helpful for accessing information quickly | P1, P3, P4, P8, P13, P16, P17, P20, P22, P25, P30, P37, P39, P51, P58, P61, P63, P66, P72, P77
Conditional trust/cautious use | Tools are useful but should not be fully trusted; require verification and critical thinking | P1, P2, P10, P11, P18, P21, P24, P29, P33, P35, P41, P49, P56, P76
Awareness of algorithmic bias and filter bubbles | Recognition that algorithms personalize content and may limit perspectives or create bias | P7, P14, P15, P19, P26, P27, P31, P40, P47, P54, P60, P74
Use as a supplementary tool | AI and algorithms used as additional or secondary sources rather than primary authorities | P6, P32, P38, P67
Critical reflexivity encouraged by AI/algorithms | Exposure to AI and algorithms leads to increased skepticism, cross-checking, or careful reading | P31, P35, P42, P43, P52, P53
Perceived neutrality or no influence | Belief that AI and algorithms do not affect personal judgment or evaluation | P34, P46, P48, P50, P62, P71, P73, P75
Ambivalence/situational dependence | Effects vary by context, task, or type of information | P9, P68, P69, P70

Perceived effects of social media algorithms and AI tools on information evaluation.

Participants' responses reveal a predominantly ambivalent yet reflective stance toward social media algorithms and AI applications. Many students emphasized the instrumental benefits of these tools, particularly in terms of speed and efficiency. For example, P1 noted that AI “allows me to get quick answers,” while P20 described it as “highly effective and efficient.” Similar positive appraisals were evident in statements such as “It has a positive effect and is useful” (P3, P4) and “It provides fast and mostly accurate information” (P37). These responses position AI and algorithmic systems as functional aids within students’ academic information practices.

At the same time, this positive valuation was frequently tempered by conditional trust and ethical caution. Several participants explicitly stated that such tools “should not be fully trusted” (P2) or are “fifty-fifty; it can be both helpful and misleading” (P10). Concerns about misinformation were also evident, as P11 noted that AI “may sometimes generate false information,” and P33 emphasized that “its reliability is questionable.” These responses suggest a basic awareness of epistemic risk, but they do not necessarily indicate fully developed digital literacy on their own. Rather, they show that students recognize the possibility of inaccuracy and bias, even if their comments do not always demonstrate systematic evaluation procedures.

A particularly salient subtheme is awareness of algorithmic bias and filter bubbles. Multiple participants recognized that algorithms curate content based on prior interests, potentially narrowing perspectives. P7 stated that algorithms “may create a one-sided perspective,” while P14 similarly noted that they “can limit perspectives.” This concern was echoed by P19 and P40, who both emphasized that algorithms tend to show content aligned with personal preferences, increasing the risk of biased information exposure. In platform terms, these concerns are especially relevant to social media and AI-assisted search environments, where personalization, ranking, and content recommendation shape what users encounter before evaluation even begins. Thus, students' comments point to an emerging recognition that digital environments are not neutral channels of information access, but structured spaces that influence visibility and plausibility.

Importantly, some students responded to this risk with compensatory strategies, such as deliberately seeking alternative viewpoints (P15, P27) or avoiding reliance on a single platform (P74). These responses are significant because they link awareness of algorithmic risk to concrete evaluative action. However, such strategies were not evident across all accounts, so the findings are better interpreted as showing uneven rather than uniform development of critical digital evaluation.

Beyond risk awareness, several participants described AI and algorithms as catalysts for critical reflexivity rather than passive influence. P31 observed that repeated exposure to similar suggestions “encourage[s] me to think critically and check multiple sources,” while P52 stated that such tools “encourage me to read the information carefully.” P53 went further, noting that AI “makes me more skeptical about information sources.” These accounts should be interpreted carefully: they suggest that awareness of digital risks may prompt more reflective attitudes for some students, but they do not by themselves establish consistent reflexive critical thinking across the sample. Instead, they indicate that students differ in the extent to which risk awareness is translated into actual verification strategies.

Figure 4 presents a conceptual synthesis of students' perceptions of AI and social media in terms of their impact and usage within academic contexts.

Figure 4. Students’ perceptions of AI and social media: impact and use within academic contexts.

Institutional and contextual influences

Table 6 summarizes the institutional and contextual mechanisms through which universities support the development of information competence and critical thinking in digital environments.

Table 6

Institutional support mechanism | How support is enacted | Representative participant codes
Curriculum-based courses | Formal courses addressing ICT, digital technologies, AI, and information literacy | P2, P3, P10, P22, P34, P44, P47, P49, P68, P77
Assignment-driven critical engagement | Homework, projects, and essays requiring analysis, comparison, and source evaluation | P7, P12, P14, P18, P20, P23, P35, P37, P38, P41, P42, P51
Access to digital and library resources | Electronic libraries, databases, online platforms, and free internet | P6, P25, P26, P31, P39, P53, P56, P60, P61, P72
Instructor guidance and modelling | Teachers providing references, explaining reliable sources, academic integrity | P9, P13, P41, P74
Training sessions and seminars | Workshops, webinars, training courses, university events | P1, P15, P29, P31, P40, P45, P48, P50, P58
Infrastructure and learning environment | Digital devices, interactive classrooms, technological equipment | P4, P5, P70
Ethical and regulated AI use | Guidance on appropriate and supplementary use of AI | P67
General positive institutional climate | Broad perceptions of adequate or strong support | P8, P11, P16, P17, P36, P46, P54, P63, P69, P71, P73, P75
Uncertainty or lack of awareness | Students unsure of or not perceiving support | P21, P30, P33, P52, P55
Perceived absence of support | Explicit perception of no institutional support | P57

Institutional mechanisms supporting information competence and critical thinking in digital environments.

Participants' responses indicate that the development of information competence and critical thinking in digital environments is largely embedded within institutional structures and pedagogical practices, rather than delivered as a single, standalone intervention. The most frequently cited form of support involves curriculum-based courses, particularly those focused on information and communication technologies, digital technologies, and artificial intelligence. Participants referred to specific courses, such as “the Information and Communication Technologies course” (P2) and “the Digital Technologies course” (P3), where foundational knowledge about information literacy and AI was introduced. This suggests that universities play a formal role in conceptualizing digital competence as part of disciplinary learning.

Beyond formal coursework, many students emphasized the importance of assignment-driven learning in fostering critical thinking. P7 noted that “the teacher assigns analysis through critical thinking,” while P14 explained that teachers “guide students to work properly with sources and to develop critical thinking.” Similarly, P20 stated that homework often requires reviewing information from “various official sources,” and P38 highlighted that “critical analysis is required” when using electronic resources. These responses indicate that critical thinking is primarily cultivated through practice-oriented academic tasks, rather than abstract instruction alone. Analytically, this suggests that institutional influence is most visible when students are required to apply evaluative judgment in concrete academic tasks, such as comparing sources, selecting credible information, and justifying their choices. In this sense, institutional support appears to contribute to competence indirectly through repeated pedagogical practice rather than through one-time instruction.

Access to digital and library resources emerged as another central institutional support. Participants frequently mentioned electronic libraries and databases as enabling environments for developing information competence. For example, P6 stated, “We make good use of the university's electronic library,” while P25 and P56 emphasized access to “electronic library databases.” P31 further expanded this by noting the availability of “online libraries, webinars, and training sessions.” Such infrastructure appears to provide the material conditions necessary for engaging with academic information critically. Yet access alone does not demonstrate competence. Rather, these resources become educationally meaningful when students use them to verify claims, compare sources, or move beyond easily accessible but less reliable information.

Instructor guidance plays a mediating role in translating institutional resources into learning outcomes. Several participants described teachers as actively modelling and reinforcing evaluative practices. P9 noted that “Teachers provide accurate and reliable references,” while P13 explained that instructors “explained which websites and information are correct.” One of the most comprehensive accounts was offered by P41, who stated that lecturers teach “how to choose reliable sources, maintain academic integrity, and analyze information critically.” These responses highlight the importance of pedagogical mediation in shaping students' information practices. More specifically, they suggest that institutional support is most effective when resources are accompanied by explicit modelling of how to evaluate credibility, use evidence, and exercise academic judgment. This makes instructors a key link between institutional provision and students' actual information evaluation practices.

Participants also referred to training sessions, seminars, and extracurricular initiatives as complementary supports. P15 mentioned “courses, trainings, and digital resources,” while P40 referred to “seminars [and] training sessions.” Additionally, some students pointed to external platforms, such as “online courses on Coursera” (P48), suggesting that universities extend institutional support through blended and external learning opportunities. However, not all students perceived institutional support equally. A notable group expressed uncertainty or lack of awareness, with responses such as “did not feel” (P21), “I will not say for sure” (P30), and “Honestly, I don't know” (P55). One participant explicitly stated “None” (P57). These responses are analytically important, as they suggest that institutional initiatives may not be uniformly visible or impactful across the student body. This uneven visibility is significant because it helps explain why access to institutional support does not necessarily translate into uniformly strong information competence or critical thinking outcomes. In other words, the findings suggest that institutional resources matter, but their influence depends on whether students recognize, access, and internalize them as part of their own evaluative practice.

Figure 5 illustrates the institutional support mechanisms through which universities foster digital literacy and critical thinking.

Figure 5

Quantitative findings

Participants were asked to indicate their primary starting point when beginning an academic information search. The results reveal a clear hierarchy of preferred entry points into digital learning environments. As shown in Figure 6, Google Search emerged as the most frequently reported starting point, selected by 30 participants, indicating that general-purpose search engines remain the dominant gateway for academic information seeking. This finding suggests that students prioritize speed, familiarity, and broad accessibility at the initial stage of their search process. The second most common starting point was university databases and library portals (n = 16), reflecting a substantial proportion of students who begin their searches within institutionally sanctioned academic environments.

Figure 6

AI tools were identified as the primary starting point by 15 participants, demonstrating the growing integration of artificial intelligence into students' academic search practices. While not surpassing traditional search engines, the prominence of AI tools suggests a shift toward algorithmically mediated information access at the earliest stage of inquiry. In contrast, social media platforms (n = 9) and YouTube (n = 4) were less frequently selected as initial search points, indicating that these platforms are more likely used as supplementary or explanatory resources rather than as primary academic entry points. Google Scholar was selected by only 2 participants, and Telegram/WhatsApp group links by 1 participant, suggesting that specialized academic search engines and peer-shared links play a relatively minor role at the starting phase of information seeking.

Figure 7 presents the key indicators that increase students' trust in digital information sources, based on participants' selections of up to three criteria.

Figure 7

Figure 7 illustrates the criteria students associate with trustworthiness when evaluating digital sources. The most influential indicator was clear author credentials (n = 49), highlighting the central role of authorship and expertise in establishing credibility. This was followed by a transparent explanation of methods or data (n = 25), indicating that students value clarity and evidence in how information is produced. Indicators related to timeliness (a recent and relevant publication date; n = 20) and visibility (appearing at the top of Google results; n = 19) were moderately influential, suggesting that practical cues also shape trust judgments. In contrast, social endorsement (content being widely shared by others) was the least selected indicator (n = 9).

Figure 8 presents students' initial decision-making strategies when faced with conflicting digital information, based on a scenario contrasting a recent but unreferenced source with an older peer-reviewed article.

Figure 8

As shown in Figure 8, the most common response was to use Source B, the older peer-reviewed article with references, rather than Source A, the recent and widely shared post without references, because it is peer-reviewed (n = 35). This indicates a strong preference for academic credibility and referenced evidence when evaluating conflicting information. The next most common response was searching for at least two additional sources before deciding (n = 30), reflecting an active triangulation strategy consistent with critical thinking practices. In contrast, relatively few participants reported relying on source recency (using Source A because it is newer; n = 6) or social validation (asking a friend or a Telegram group; n = 3), and an equally small number indicated they would use both sources without further checking (n = 3). These findings suggest that students predominantly adopt evidence-oriented, verification-based approaches rather than heuristic or socially driven strategies when confronted with conflicting digital information.

Figure 9 illustrates students' self-reported patterns of AI tool use in coursework, based on multiple-response selections.

Figure 9

The findings indicate that AI tools are predominantly used as supportive rather than substitutive academic aids. The most frequently selected option was using AI to summarize readings that students have also checked themselves (n = 58), followed by using AI mainly to generate ideas or outlines (n = 52). These patterns suggest that students largely employ AI as a cognitive and organizational support tool, while maintaining personal responsibility for verification and understanding. In contrast, relatively few participants reported using AI as a primary information source (n = 8) or to produce text submitted with minimal editing (n = 6), indicating limited reliance on AI for direct content generation. Only one participant stated that they do not use AI tools at all. The distribution reflects a cautious and ethically bounded use of AI, aligning with principles of academic integrity and responsible AI engagement identified in the qualitative findings.
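To illustrate how multiple-response frequencies of this kind are typically tabulated, the sketch below uses pandas to tally one participant-level item. This is a minimal illustration, not the authors' actual analysis pipeline; the column name `ai_use`, the separator, and the option labels are hypothetical placeholders.

```python
# Minimal sketch of tallying a multiple-response survey item,
# e.g., the AI-use question behind Figure 9. The column name
# "ai_use", the separator, and the option labels are hypothetical.
import pandas as pd

responses = pd.DataFrame({
    "ai_use": [
        "summarize checked readings; generate ideas or outlines",
        "generate ideas or outlines",
        "summarize checked readings; use AI as a primary source",
    ]
})

# Split each participant's selections into a list, flatten the lists
# so every selected option becomes its own row, then count options.
counts = (
    responses["ai_use"]
    .str.split("; ")
    .explode()
    .value_counts()
)
print(counts)
```

The same split-explode-count pattern would apply to any of the multiple-response items reported in Figures 7, 9, and 10.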

Figure 10 presents the two most influential sources that contribute to students’ skills in evaluating online information.

Figure 10

As shown in Figure 10, library training and library-based resources were identified as the most influential source (n = 39), underscoring the central role of academic libraries in fostering information literacy and evaluative skills. This was followed by self-learning through digital platforms such as YouTube, blogs, and online courses (n = 30), indicating that students actively supplement formal education with autonomous, informal learning pathways. Peers and classmates also contributed meaningfully (n = 20), suggesting that information evaluation skills are partly shaped through peer interaction and shared academic practices. In contrast, family/community (n = 7) and workplace or internship experiences (n = 4) played a relatively minor role. The findings suggest that students' information evaluation skills are developed primarily through a combination of institutional support structures (libraries) and self-directed digital learning, rather than through non-academic or professional contexts.

Discussion

This study provides a nuanced account of how university students in Kazakhstan enact information competence and critical thinking within digitally mediated learning environments. Taken together, the qualitative and quantitative findings suggest that students' digital academic practices are neither wholly informal nor fully disciplinary from the outset; rather, they reflect a structured hierarchy of entry points, trust cues, and evaluative responses. Quantitatively, Google Search was the most common starting point (n = 30), followed by university databases/library portals (n = 16) and AI tools (n = 15), while Google Scholar played only a minor role (n = 2). This pattern supports the qualitative finding that students typically begin with fast, familiar, and accessible tools before moving, when needed, toward sources perceived as more academically legitimate. Students' tendency to begin academic searches with general-purpose platforms aligns with a substantial body of international research indicating that search engines function as cognitive gateways rather than epistemic endpoints (Head and Eisenberg, 2010; Rowlands et al., 2008). Rather than interpreting this behavior as a lack of academic rigor, recent studies argue that students strategically leverage familiar platforms to orient themselves before transitioning explicitly or implicitly toward academically sanctioned sources (Biddix et al., 2011). In the present study, this hybrid search hierarchy is important because it shows that information competence is not expressed only through where students start, but through how they subsequently filter, compare, and validate information. Thus, the dominance of Google as a starting point should not be read simplistically as weak academic practice; instead, it reflects a pragmatic first step within a broader evaluative process that is uneven but often purposeful.

In the Kazakhstani higher education context, where digital infrastructures have expanded rapidly over a relatively short period, such pragmatic strategies may represent adaptive learning rather than insufficient training. This resonates with comparative research from post-Soviet and transitional higher education systems, which shows that students often develop hybrid information practices that blend informal digital tools with traditional academic resources (Kurakbayeva and Xembayeva, 2025). Pedagogically, these findings reinforce the need to bridge students' existing practices with academic norms, rather than attempting to replace them.

The study's findings reinforce theoretical positions that conceptualize information competence as an epistemic practice rather than a purely technical skill set. Students' emphasis on authorship, references, publication venue, and cross-source comparison mirrors international evidence that credibility judgments are grounded in socially learned academic conventions (Little and Green, 2022). The quantitative results strengthen this interpretation. Clear author credentials were the strongest reported trust cue (n = 49), followed by transparent methods or data (n = 25), while the fact that information was widely shared online was much less influential (n = 9). These distributions suggest that students' credibility judgments are shaped more by expertise and evidentiary transparency than by social popularity alone. At the same time, the fact that “appears at the top of Google results” still influenced a notable number of students (n = 19) indicates that convenience-based or platform-driven heuristics continue to coexist with more academically grounded evaluation criteria. Importantly, the continued reliance on textbooks and peer-reviewed sources as benchmarks of reliability suggests that traditional academic authorities retain strong epistemic legitimacy, even within highly digitalized environments.

However, the variability in students' confidence and evaluative autonomy reflects what Whitworth (2014) describes as the uneven internalization of information literacy as a critical disposition. In Kazakhstan, where formal instruction in information literacy is often embedded within specific courses rather than articulated as a transversal competence, this unevenness points to a pedagogical challenge: information competence must be made explicit, cumulative, and discipline-integrated, rather than assumed as an incidental outcome of digital exposure.

The way students evaluated conflicting digital information supports contemporary critiques of decontextualized models of critical thinking. Rather than applying abstract reasoning rules, students engaged in situated judgment, relying on evidence, institutional authority, and cross-validation practices. This aligns with research emphasizing that critical thinking is relational and context-bound, emerging through interaction with specific tasks, sources, and social expectations (Elmborg, 2006). From a pedagogical perspective, this underscores the value of explicit instruction in epistemic reasoning, particularly through scenario-based learning that foregrounds uncertainty, conflict, and justification, an approach shown to strengthen students' evaluative reasoning across disciplines.

Students' perceptions of AI tools and algorithmic systems reveal a pattern of conditional trust, characterized by appreciation of efficiency alongside awareness of epistemic risk. This finding aligns with recent empirical work indicating that students increasingly approach AI outputs with scepticism, particularly in relation to accuracy, bias, and academic integrity (Zhai et al., 2024). Rather than substituting judgment, AI appears to function as a cognitive aid that intensifies the need for verification.

This conditional stance is pedagogically significant. It suggests that students are already engaging in informal ethical reasoning regarding AI use, even in contexts where institutional guidance may be limited. In Kazakhstan, where national digitalization and AI initiatives emphasize innovation, this finding highlights an opportunity for universities to move beyond restrictive or permissive approaches and instead integrate AI literacy with epistemic and ethical education, as recommended in recent international frameworks (OECD, 2021).

A related implication concerns the visibility of institutional support. While courses, assignments, and digital libraries exist, their role in developing information competence and critical thinking may not always be explicitly articulated to students. This aligns with studies showing that institutional impact is strongest when support mechanisms are coherently framed and pedagogically reinforced across programs (Secker and Coonan, 2012). In the Kazakhstani context, where higher education institutions are navigating rapid curricular reform, this finding suggests that strengthening institutional coherence and pedagogical intentionality is as important as expanding digital infrastructure. Making evaluative criteria, academic integrity norms, and reasoning expectations explicit can enhance students' ability to transfer competencies across courses and digital contexts.

Limitations and future directions

This study has several limitations that should be considered when interpreting the findings. First, the sample was relatively small and drawn from two universities in Kazakhstan, with a concentration in education-related programs. As a result, the findings are contextually grounded rather than statistically generalizable. Future research could include larger and more diverse samples across institutions and disciplines to enable comparative analyses. Second, the study relied on self-reported data, which may not fully capture actual information evaluation behavior. Although scenario-based items helped elicit applied reasoning, future studies could incorporate performance-based assessments or observational data to strengthen validity. Third, the cross-sectional design limits insight into how information competence and critical thinking develop over time. Longitudinal and intervention-based studies would help clarify developmental trajectories and instructional effects. Fourth, the study was conducted within a rapidly changing digital environment shaped by ongoing developments in search technologies, generative AI tools, and algorithmic content curation. As these systems evolve quickly, students' information practices, trust judgments, and AI-use patterns may also shift over a short period of time. Therefore, the present findings should be interpreted as reflecting a particular technological and educational moment rather than a fixed or stable digital condition. Future research should revisit these issues regularly and examine how emerging digital tools and platform changes reshape information competence, critical thinking, and ethical decision-making in higher education.

Conclusions and implications

This study examined how university students in Kazakhstan engage with digital learning environments and how they enact information competence and critical thinking when evaluating academic information, encountering conflicting claims, using AI tools, and interpreting institutional support. Overall, the findings show that students' digital information practices are pragmatic, layered, and unevenly developed rather than uniform or linear. First, in relation to the objective of understanding students' engagement with digital environments, the study found that students most often begin with fast and familiar entry points such as Google Search and, increasingly, AI tools, before moving toward more academically legitimate sources when task demands require stronger credibility. This indicates that digital engagement is structured by convenience, accessibility, and academic purpose rather than by a simple divide between “good” and “bad” sources.

Second, regarding the evaluation of online information, students generally associated trust with author credentials, evidence, references, peer review, and institutional authority. At the same time, some participants also relied on heuristic cues such as ranking or accessibility, suggesting that information competence is present but uneven in depth and consistency. Third, with respect to critical thinking under conditions of conflicting information, the findings indicate that many students use verification-oriented strategies such as peer-review preference, cross-checking, and additional source consultation. However, these practices were not universal, which suggests that critical thinking in digital contexts is better understood as a continuum of evaluative judgment rather than a stable skill uniformly possessed by all students.

Fourth, in relation to AI and algorithmic systems, the study found that students generally adopt a stance of conditional trust. AI tools are used primarily for summarization, clarification, and idea generation, while awareness of bias, inaccuracy, and ethical risk is present but unevenly translated into explicit evaluative strategies. This shows that AI-related digital literacy is emerging, but still requires stronger pedagogical support.

Statements

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving humans were approved by the International University of Astana-Kazakhstan Ethics Committee. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.

Author contributions

NY: Formal analysis, Methodology, Writing – original draft, Writing – review & editing. MY: Conceptualization, Formal analysis, Writing – original draft, Writing – review & editing. OT: Conceptualization, Methodology, Software, Validation, Writing – original draft, Writing – review & editing. BI: Formal analysis, Methodology, Resources, Writing – original draft, Writing – review & editing. AZ: Conceptualization, Data curation, Formal analysis, Visualization, Writing – original draft, Writing – review & editing. AT: Conceptualization, Methodology, Validation, Writing – original draft, Writing – review & editing.

Funding

The author(s) declared that financial support was not received for this work and/or its publication.

Acknowledgments

The authors would like to thank the participating students for their voluntary contribution to this research.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that generative AI was used in the creation of this manuscript. ChatGPT (version 5.2) was used solely to assist with translation, correction of grammatical errors, and improvement of linguistic fluency in the manuscript. NapkinAI was used for the creation of figures based on the study's empirical data.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Aladsani, H. K. (2025). Developing postgraduate students' competencies in generative artificial intelligence for ethical integration into academic practices: a participatory action research. Interact. Learn. Environ. 33 (10), 5747–5765. doi: 10.1080/10494820.2025.2487517

Alduais, A., Qadhi, S., Chaaban, Y., and Khraisheh, M. (2025). Utilizing generative AI responsibly and ethically for research purposes in higher education: a policy analysis. Ser. Rev. 51 (3-4), 120–170. doi: 10.1080/00987913.2025.2581429

Antasari, I. W. (2025). The influence of information literacy skills on students' choice of competition type: a study of library visiting day competition. Soc. Sci. Humanit. Open 11, 101255. doi: 10.1016/j.ssaho.2024.101255

Barber, L. D., and Anderson, P. J. (2025). Understanding first-year university student information seeking through the theory of planned behaviour: a transnational perspective. J. Acad. Librariansh. 51, 103096. doi: 10.1016/j.acalib.2025.103096

Batı, K., and Kaptan, F. (2015). The effect of modeling-based science education on critical thinking. Educ. Policy Anal. Strateg. Res. 10 (1), 39–52. Available online at: https://izlik.org/JA39LM56UN

Biddix, J. P., Chung, C. J., and Park, H. W. (2011). Convenience or credibility? A study of college student online research behaviors. Internet High. Educ. 14 (3), 175–182. doi: 10.1016/j.iheduc.2011.01.003

Blahopoulou, J., and Ortiz-Bonnin, S. (2025). Student perceptions of ChatGPT: benefits, costs, and attitudinal differences between users and non-users toward AI integration in higher education. Educ. Inf. Technol. 30, 19741–19764. doi: 10.1007/s10639-025-13575-9

Blakston, A., Chambers, S., and Notley, T. (2025). Young people, algorithms and news: exploring the relationship between algorithmic literacy and news literacy. J. Youth Stud. 1–17. doi: 10.1080/13676261.2025.2571491

Bygstad, B., Øvrelid, E., Ludvigsen, S., and Dæhlen, M. (2022). From dual digitalization to digital learning space: exploring the digital transformation of higher education. Comput. Educ. 182, 104463. doi: 10.1016/j.compedu.2022.104463

Celik, I., Dindar, M., Muukkonen, H., and Järvelä, S. (2022). The promises and challenges of artificial intelligence for teachers: a systematic review of research. TechTrends 66, 616–630. doi: 10.1007/s11528-022-00715-y

Chan, A. Y. W., and Sung, C. C. M. (2025). Enhancing students' digital literacy skills through their technology use in a course-based research project: a Hong Kong case study. Asia Pac. Educ. Rev. doi: 10.1007/s12564-025-10038-1

Creswell, J. W., and Plano Clark, V. L. (2018). Designing and Conducting Mixed Methods Research. 3rd ed. Thousand Oaks, CA: SAGE.

Dahlen, S. P. C., Nordstrom-Sanchez, K., and Graff, N. (2024). At the intersection of information literacy and written communication: student perspectives and practices related to source-based writing. J. Acad. Librariansh. 50 (6), 102959. doi: 10.1016/j.acalib.2024.102959

Elmborg, J. (2006). Critical information literacy: implications for instructional practice. J. Acad. Librariansh. 32 (2), 192–199. doi: 10.1016/j.acalib.2005.12.004

Fraillon, J., and Duckworth, D. (2025). "Computer and information literacy framework," in IEA International Computer and Information Literacy Study 2023, eds. J. Fraillon and M. Rožman (Cham: Springer), 21–34. doi: 10.1007/978-3-031-61194-0_2

Fui-Hoon Nah, F., Zheng, R., Cai, J., Siau, K., and Chen, L. (2023). Generative AI and ChatGPT: applications, challenges, and AI-human collaboration. J. Inf. Technol. Case Appl. Res. 25 (3), 277–304. doi: 10.1080/15228053.2023.2233814

Golden, B. (2023). Enabling critical thinking development in higher education through the use of a structured planning tool. Ir. Educ. Stud. 42 (4), 949–969. doi: 10.1080/03323315.2023.2258497

Head, A. J., and Eisenberg, M. B. (2010). Truth Be Told: How College Students Evaluate and Use Information in the Digital Age. Project Information Literacy. Available online at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2281485 (Accessed December 10, 2025).

Jabali, O., Hamamra, B., and Ayyoub, A. (2024). Critical thinking, assessment, and educational policy in Palestinian universities. Int. J. Educ. Integr. 20, 12. doi: 10.1007/s40979-024-00160-9

Johnston, B., and Webber, S. (2003). Information literacy in higher education: a review and case study. Stud. High. Educ. 28 (3), 335–352. doi: 10.1080/03075070309295

Kong, S. C. (2014). Developing information literacy and critical thinking skills through domain knowledge learning in digital classrooms: an experience of practicing flipped classroom strategy. Comput. Educ. 78, 160–173. doi: 10.1016/j.compedu.2014.05.009

Kops, M., Schittenhelm, C., and Wachs, S. (2025). Young people and false information: a scoping review of responses, influential factors, consequences, and prevention programs. Comput. Human Behav. 169, 108650. doi: 10.1016/j.chb.2025.108650

Kunz, A. K., Zlatkin-Troitschanskaia, O., Schmidt, S., Nagel, M.-T., and Brücker, S. (2024). Investigation of students' use of online information in higher education. Int. J. Educ. Technol. High. Educ. 21, 44. doi: 10.1186/s40561-024-00333-6

Kurakbayeva, A., and Xembayeva, S. (2025). Enhancing professional abilities of university students through digital educational interventions: a study in Kazakhstani universities. Front. Educ. 9, 1478622. doi: 10.3389/feduc.2024.1478622

Lan, D. H., and Tung, T. M. (2024). Exploring fake news awareness and trust in the age of social media among university student TikTok users. Cogent Soc. Sci. 10 (1). doi: 10.1080/23311886.2024.2302216

Li, W., Sun, K., Schaub, F., and Brooks, C. (2022). Disparities in students' propensity to consent to learning analytics. Int. J. Artif. Intell. Educ. 32, 564–608. doi: 10.1007/s40593-021-00254-2

Little, D., and Green, D. A. (2022). Credibility in educational development: trustworthiness, expertise, and identification. High. Educ. Res. Dev. 41 (3), 804–819. doi: 10.1080/07294360.2020.1871325

Malik, A., Khan, M. L., Hussain, K., Qadir, J., and Tarhini, A. (2025). AI in higher education: unveiling academicians' perspectives on teaching, research, and ethics in the age of ChatGPT. Interact. Learn. Environ. 33 (3), 2390–2406. doi: 10.1080/10494820.2024.2409407

Mirazchiyski, P. V. (2025). Contemporary gaps in research on digital divide in education: a literature review. Univers. Access Inf. Soc. 24, 991–1008. doi: 10.1007/s10209-024-01166-3

Molerov, D., Zlatkin-Troitschanskaia, O., Nagel, M.-T., Schmidt, S., and Beck, K. (2020). Assessing university students' critical online reasoning ability: a conceptual and assessment framework with preliminary evidence. Front. Educ. 5, 577843. doi: 10.3389/feduc.2020.577843

Nagel, M.-T., Zlatkin-Troitschanskaia, O., and Fischer, J. (2022). Validation of newly developed tasks for the assessment of generic critical online reasoning (COR) of university students and graduates. Front. Educ. 7, 1–15. doi: 10.3389/feduc.2022.914857

OECD (2021). Global Competence in a Digital World. OECD Publishing. Available online at: https://www.oecd.org/en/topics/sub-issues/global-competence.html (Accessed December 10, 2025).

Park, S., and Nan, X. (2025). Generative AI and misinformation: a scoping review of the role of generative AI in the generation, detection, mitigation, and impact of misinformation. AI Soc. 41 (2), 1501–1515. doi: 10.1007/s00146-025-02620-3

Pramod, D. (2025). Decoding responsible AI use: the influence of digital literacy and ethical awareness. Cogent Educ. 12 (1). doi: 10.1080/2331186X.2025.2592371

Rowlands, I., Nicholas, D., Williams, P., Huntington, P., Fieldhouse, M., Gunter, B., et al. (2008). The Google generation: the information behaviour of the researcher of the future. Aslib Proc. 60 (4), 290–310. doi: 10.1108/00012530810887953

Salido, A., Syarif, I., Sitepu, M. S., Suparjan, S., Wano, P. R., Taufika, R., et al. (2025). Integrating critical thinking and artificial intelligence in higher education: a bibliometric and systematic review of skills and strategies. Soc. Sci. Humanit. Open 12, 101924. doi: 10.1016/j.ssaho.2025.101924

Scharrer, L., Thomm, E., Stadtler, M., and Bromme, R. (2025). What makes sources credible? How source features shape evaluation of scientific information. J. Exp. Educ. 1–24. doi: 10.1080/00220973.2025.2477719

Secker, J., and Coonan, E. (2012). "ANCIL: a new curriculum for information literacy: case study," in Information Literacy Beyond Library 2.0, eds. P. Godwin and J. Parker (Cambridge: Cambridge University Press), 171–190.

Shamsaee, M., Shahrbabaki, P. M., Ahmadian, L., Farokhzadian, J., and Fatehi, F. (2021). Assessing the effect of virtual education on information literacy competency for evidence-based practice among the undergraduate nursing students. BMC Med. Inform. Decis. Mak. 21, 48. doi: 10.1186/s12911-021-01418-9

Shuhaiber, A., Kuhail, M. A., and Salman, S. (2025). ChatGPT in higher education: a student perspective on use and ethical concerns. Comput. Hum. Behav. Rep. 17, 100565. doi: 10.1016/j.chbr.2024.100565

Slutskiy, P. (2025). AI, post-truth realities, and Thai students' information-seeking behavior. MANUSYA J. Humanit. 28, 1–19. doi: 10.1163/26659077-20252811

Spante, M., Hashemi, S. S., Lundin, M., and Algers, A. (2018). Digital competence and digital literacy in higher education research: systematic review of concept use. Cogent Educ. 5 (1). doi: 10.1080/2331186X.2018.1519143

Surducan, E. M., and Valadas, S. T. (2025). The role of digital competences in the academic success of "digital natives". J. Digit. Educ. Technol. 5 (2), ep2513. doi: 10.30935/jdet/17296

Teddlie, C., and Tashakkori, A. (2009). Foundations of Mixed Methods Research: Integrating Quantitative and Qualitative Approaches in the Social and Behavioral Sciences. London: Sage.

Trixa, J., and Kaspar, K. (2024). Information literacy in the digital age: information sources, evaluation strategies, and perceived teaching competences of pre-service teachers. Front. Psychol. 15, 1336436. doi: 10.3389/fpsyg.2024.1336436

UNESCO Almaty Regional Office and MediaNet International Center for Journalism (2025). Media and Information Literacy in Kazakhstan (Mapping Report). Available online at: https://articles.unesco.org/sites/default/files/medias/fichiers/2025/09/Mapping-MIL-Report-Eng.pdf (Accessed December 10, 2025).

Van Den Beemt, A., Thurlings, M., and Willems, M. (2020). Towards an understanding of social media use in the classroom: a literature review. Technol. Pedagog. Educ. 29 (1), 35–55. doi: 10.1080/1475939X.2019.1695657

Whitworth, A. (2014). Radical Information Literacy. Witney, Oxford: Chandos Publishing.

Zhai, C., Wibowo, S., and Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review. Smart Learn. Environ. 11, 28. doi: 10.1186/s40561-024-00316-7

Keywords

artificial intelligence in education, critical thinking, digital learning environments, higher education, information competence, Kazakhstan

Citation

Yrymbayeva N, Yermekova M, Tezekova O, Issatayeva B, Zhubandykova A and Tuxanbayev A (2026) Information competence and critical thinking in digital environments: evidence from university students in Kazakhstan. Front. Educ. 11:1809360. doi: 10.3389/feduc.2026.1809360

Received

11 February 2026

Revised

22 March 2026

Accepted

30 March 2026

Published

24 April 2026

Edited by

Neil Andrew Gordon, University of Hull, United Kingdom

Reviewed by

Diana Atuase, University of Cape Coast, Ghana

Sam Espinoza Vidaurre, Universidad Privada de Tacna, Peru

Correspondence

*Correspondence: Oryntay Tezekova

ORCID

Nurgul Yrymbayeva, orcid.org/0000-0003-1862-0462
Moldir Yermekova, orcid.org/0009-0004-5884-5062
Oryntay Tezekova, orcid.org/0009-0006-4437-3089
Bakytgul Issatayeva, orcid.org/0009-0002-9199-1454
Akgul Zhubandykova, orcid.org/0000-0003-4545-2599
Assan Tuxanbayev, orcid.org/0000-0002-0499-1089

