- Universidad de Extremadura, Departamento de Didáctica de las Ciencias Sociales, Lengua y Literatura, Cáceres, Spain
Introduction: This study has the overarching objective of designing and validating a Likert-scale questionnaire to gather and analyze data concerning students’ writing skills and critical reasoning in Teacher Training degrees at five Spanish universities.
Methods: The sequence of the research process was as follows: definition of the construct, generation of items, selection of the response format, review by statistics experts, pilot test, psychometric analysis, review and adjustment, sample calculation and request for Bioethics Committee authorization.
Results: The resulting questionnaire is a four-point Likert scale comprising 31 items in total, divided into four factors: Declared Practices, Adherence to Principles, Self-Image as a Writer, and Resources, further broken down into various dimensions.
Discussion: Factor analysis and pilot testing confirmed the suitability of the structure. As a limitation, at this phase of the research the surveys are still being administered and the initial data are being collected. A series of subsequent validation tests of the instrument is planned to allow possible extrapolation to other research contexts.
1 Origin, justification and importance of the topic
The present study forms part of a project entitled DECERC, which stems from the need to foster skills related to the development of critical and reflective thinking, as well as the improvement of academic writing, in students of Early Childhood and Primary Education Degrees. The project is being developed by a research team composed of nine members from five Spanish universities, namely the Autonomous University of Barcelona (UAB), the Complutense University of Madrid (UCM), the University of Extremadura (UEX), the University of Zaragoza (UNIZAR) and the University of Jaén (UJA).
Teaching discursive strategies, creative writing, argumentation, and critical reasoning is fundamental to fostering a critical society within a digital world where students need to develop skills related to information discernment, autonomy, and adaptation to new knowledge. These skills are also considered essential for teachers’ professional performance and, therefore, constitute challenges for 21st-century education. While deficiencies in these skills are noted in various academic contexts, they are considered strategic shortcomings (Bargiela et al., 2022; Fontich et al., 2024; Villarroel and Sujey, 2024) in Teacher Training Degrees, as these degrees provide the foundation for skills development in the early years of education. Addressing them, in turn, will allow future teachers to deliver more effective instruction in writing and critical reasoning in their professional careers, ultimately achieving more positive outcomes in the education of future generations. In their training, students in Teacher Training Degrees must become aware of the importance of this knowledge and know how to transmit it (Tresserras et al., 2022).
The first step in this process is to discover students’ self-perceptions of creative writing, critical thinking, and the use of AI in their education. To this end, it was deemed appropriate to design a Likert scale questionnaire, the structure, content, and implementation plan of which are described below.
Following this necessary contextualization regarding the origin, relevance, and need for the present study, the general and specific objectives, the scientific literature on which the research is based, the method used, and the results obtained are outlined below, together with a discussion of these results in relation to similar studies.
1.1 General and specific objectives
The general objective of the present study will be outlined below, followed by a detailed account of the specific objectives.
First, the general objective is to design and validate a Likert-scale questionnaire to gather and analyze data about the writing skills and critical thinking of students in Teacher Training Degree programs.
The specific objectives are as follows:
• To describe the preparation, structure, content, ethical code and implementation of the Likert questionnaire.
• To seek the assistance of a group of experts to review the questionnaire, gathering their feedback on its initial design, and to assess and modify the flagged items so as to improve comprehension and facilitate students’ completion of the questionnaire.
• To calculate the study sample size, determining the number of male and female students required to obtain statistically significant results that can be generalized to similar contexts.
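As an illustration of the kind of calculation involved in the last objective, the following Python sketch applies Cochran’s formula with a finite-population correction. The default values (95% confidence, 5% margin of error, maximum-variance proportion p = 0.5) are conventional assumptions for illustration only, not figures taken from this study.

```python
import math

def sample_size(population, z=1.96, p=0.5, margin=0.05):
    """Minimum sample size via Cochran's formula with
    finite-population correction.

    z      -- critical value (1.96 for 95% confidence)
    p      -- expected proportion (0.5 maximizes the required n)
    margin -- acceptable margin of error
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# e.g. a hypothetical population of 1,000 enrolled students:
print(sample_size(1000))  # → 278
```

The finite-population correction matters here because Teacher Training cohorts at the five universities are of bounded size; without it, the required sample would be overestimated.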
1.2 Theoretical framework
The theoretical framework of the present study is divided into two clearly differentiated sections: on the one hand, the definition of creative/academic writing and critical thinking, the internationally recognized scientific literature on these concepts, and their relationship in the university context; on the other, the validation and reliability of Likert-type questionnaires. Below, we explain each of these aspects.
1.3 Academic writing and critical thinking in the university context, and the relationship between both concepts
Creative writing refers to a form of writing that prioritizes originality, imagination, and artistic expression, rather than strict adherence to plain, denotative information. It encompasses genres such as fiction, poetry, drama, and creative nonfiction, aiming to evoke emotions, convey experiences, and explore ideas through narrative techniques, figurative language, and stylistic innovation. Its goals are to develop writing skills and the expression of the subject’s viewpoints, emotions, and experiences. Aesthetic qualities and the writer’s voice are prioritized to attract or inspire readers.
Academic writing can be considered a form of creative writing, but its motivations, context, style, and purposes are distinct. Thus, academic writing is a formal and structured style of communication used to present research or academic work in educational settings, as well as scholarly analyses or arguments within a specific discipline. It is characterized by clarity, precision, and objectivity, relying on evidence-based reasoning and proper citation of sources. Academic writing employs specialized terminology, follows a logical organization, and adheres to established conventions and formatting standards to ensure credibility and facilitate the dissemination of knowledge (Gaona et al., 2024).
Critical thinking, for its part, can be defined as the process of actively and competently conceptualizing, analyzing, synthesizing, and evaluating information based on observation, experience, reflection, reasoning, or communication (Scriven and Paul, 1987). It involves questioning assumptions, identifying biases, and applying logical criteria to determine the validity and relevance of ideas. Critical thinking seeks to foster independent judgment and informed decision-making by integrating clear and precise evidence-based reasoning (Fontich et al., 2024).
The proliferation of recent studies in the international context indicates the global relevance of the need to develop creative writing and critical thinking. Below, we present relevant studies from European, American, and Asian contexts, focusing on the theoretical and practical aspects that the scientific literature has highlighted regarding these concepts.
The study by Myhill et al. (2023) analyzes how the classroom environment and changes in that environment, for example with virtual classrooms and their use in writing, influence students’ motivation and creative development. The project involved 32 teachers from schools in South-West England, teaching classes with students aged 7 to 14 years (n = 711). It should be noted that the participating teachers attended a week-long writing residency at one of Arvon’s writing centers (in South-West England) alongside two professional writers who led the creative writing course. The authors propose that access to feedback and open-ended tasks promotes creative expression and student development.
In an article published in Frontiers, Alvarado (2025) examines how the application of Design Thinking as an active methodology fosters creativity, interdisciplinary work, and collaboration in higher education. To this end, he analyzes practical projects carried out by future professionals, focusing on written creativity and innovation in problem solving. His study demonstrates that methodologies focused on the user and the creative process stimulate original thinking and the production of creative texts in university contexts.
In the American context, Jamil (2016) examines the main methodologies for promoting creative writing in US universities (workshops, reviews, and critical response) and how these can enhance the creative skills of future teachers. The benefits of collaborative practices in teacher and writer training are also included. On the other hand, in the research by Day et al. (2022), the authors discuss the recent development of creative writing pedagogy in universities and how programs are assessing and responding to students’ creative needs. Thus, in the article, they discuss the balance between literary theory and creative practice, the importance of feedback in the classroom, and how assessment and educational trends in the sector have changed.
Finally, research conducted in Asia on the integration of Artificial Intelligence into creative writing instruction and its effect on the narrative quality and creative thinking of university students (Bariqoh, 2025) proposes new teaching methods based on automated text analysis.
In the educational context, academic writing is used to convey both educational knowledge and the results obtained from research, and for this reason it is valued by teachers, students, and authorities (Tavera and Lovón, 2023). However, linguistic, semantic, and scientific deficiencies at times appear among university students, as shown by a meta-analysis of 60 scientific articles on the teaching of academic writing at university level (Martinez-Carlos and Hidalgo, 2025). The beginnings of this growing interest in academic writing can be traced back to the 1970s, especially in the United States and the United Kingdom. Since then, movements such as Writing Across the Curriculum, Academic Literacies, and Composition Studies have proliferated (Lillis and Scott, 2007), all of which agree on the need to encourage teachers to modify their practices around writing skills and to approach writing as a dialogical and lifelong process that aids thinking, understanding, and reflection (Lin et al., 2023). Mastery of these skills is so necessary that, according to Bargiela et al. (2022), professionals must master thinking skills in order to subsequently teach them to their students, incorporating specific activities (Tanskanen, 2023; Darfler and Kalantari, 2022).
The issue of deficiencies, shortcomings, or the lack of academic practices related to the development of communication skills has been examined in various previous studies from different perspectives (Cardinale, 2006; Carlino, 2007, 2013; Osorio et al., 2019; Salazar, 2023; Soto et al., 2025). The latter study notes that, in some cases, teaching and learning processes focus on the exact repetition of learned content rather than on using this new information as a means to generate new knowledge or transform existing knowledge.
Other studies focus on the crucial role of students’ revision of their own writing. This point, key to the development of creative and academic writing, deserves further examination. According to Uribe-Álvarez and Álvarez-Angulo (2022), the revision stage in the writing process aims to improve both rhetorical-stylistic elements and the content of texts. However, it is common for students to skip this stage or, when they do revise, to focus more on formal correction than on content (García et al., 2015; Estela and Pérez, 2023).
The analysis proposed by García et al. (2015) included some uses of Artificial Intelligence among university students when revising and rewriting their texts; among the advantages are real-time revision and increased autonomy in the writing process. This perspective is further supported by the study conducted by Salazar and Verástica (2025). Through both qualitative and quantitative research with postgraduate education students on the use of digital tools and Artificial Intelligence in the rewriting of academic texts, the authors detected a gap in the use of such tools and a need for more specific training to improve the revision of texts and their quality.
As regards the relationship between the two concepts (creative/academic writing and critical thinking), it is important to highlight that the presence of the former in the university context is an essential tool for developing this type of thinking (Gaona et al., 2024; Villarroel and Sujey, 2024). According to these authors, when writing, students not only communicate their ideas but also develop skills in argumentation, analysis, and critical reflection. These skills are central to knowledge production and to achieving academic and professional success.
The aforementioned researchers suggest that academic writing strengthens critical thinking because it involves the construction of well-founded arguments. Furthermore, it enables students to address topics in more depth, considering diverse perspectives and evaluating evidence (Gaona et al., 2024). Likewise, critical thinking is reinforced alongside students’ autonomy, as they must express their ideas clearly, coherently, and with justification when writing texts or completing academic tasks. Consequently, Villarroel and Sujey (2024) conclude that universities should explicitly incorporate and teach academic writing across all disciplines, given that it facilitates the comprehensive development of students and prepares them for the professional and social challenges emerging in the 21st century.
1.4 Previous studies about the creation and reliability of Likert-type questionnaires and scales
As with the literature on creative writing and critical thinking, there are international studies that support the idea of creating questionnaires for higher education.
At the European level, Arias-Gundín et al. (2021) validated the Spanish version of the Writing Strategies Questionnaire (WSQ-SP), which focuses on cognitive and metacognitive writing strategies. Cognitive interviews were conducted with students to explore how they understand the writing strategies assessed in the questionnaire. Students highlighted the “generation of ideas” as a crucial stage prior to writing, adapting their writing according to the intended audience. A functional use of non-homogeneous writing strategies was also observed, so the questionnaire helps to identify specific cognitive areas for improvement.
In the Anglo-Saxon context, Sagredo-Ortiz and Kloss (2025) carried out a statistical analysis and, based on this and on interviews with students and teachers who had used the questionnaire, concluded that it was valid for distinguishing levels of competence among students and for guiding pedagogical interventions.
As for Asian literature, Bariqoh (2025) included narrative analysis of open-ended responses and student comments on their experience of using AI in creative writing. It was found that interaction with AI helped overcome creative blocks and facilitated the generation of original ideas, aspects that were positively valued by users and reinforced the relevance of the questionnaire.
Finally, in a study published in Frontiers, Qin et al. (2022) validate a questionnaire on metacognitive writing strategies in university students from various degree programs. The qualitative results included focus groups and semi-structured interviews in which students reflected on their perceptions of the importance of self-regulation in improving their writing skills.
More specifically, Likert scales are widely used in educational research because of their ability to capture the perceptions, attitudes, and opinions of students, teachers, and other educational stakeholders. Their ordinal format allows for measuring the degree of agreement or disagreement with specific statements, facilitating the quantification of subjective variables. Several authors justify their use. Thus, Joshi et al. (2015) point out their ease of application and comprehension through clear and accessible items for various educational levels. Rokeman (2024) highlights their versatility, adapting to multiple educational contexts, from the assessment of competencies to the measurement of school climate. Other researchers, such as Kusmaryono et al. (2022), mention the reliability of this methodology thanks to its high internal consistency and construct validity. Finally, Du (2024) and Jebb et al. (2021) argue that Likert-type scales facilitate the detection of gradients in responses, which is useful for longitudinal or comparative studies, and allow for adjustments according to the sociocultural contexts of the groups studied.
In the following lines, we will include some previous studies conducted on the validation of questionnaires that address a similar issue to that of the current research or that are similar in other aspects such as the methodology or procedures followed in their development.
To begin, it is worth mentioning the study conducted by Palma et al. (2021), which focused on the adaptation and validation of an assessment instrument for critical thinking in scientific knowledge tasks, involving a sample of 161 university students. This research took into account previously validated critical thinking tasks from other contexts, adapting them to the local university setting. Each task was accompanied by evaluation scoring guides. Similar to the present study, a group of experts reviewed the relevance and clarity of the tasks and scoring guides, and the test was subsequently administered to university students. The results obtained from the pilot study were analyzed with the aim of refining the items and ensuring the validity and reliability of the instrument.
Moreover, the research conducted by Mellado-Moreno and Bernal-Bravo (2023) aimed to create a questionnaire called CEREDA, with the objective of evaluating digital resources and digital competence in Higher Education from an educational and communicative perspective. The questionnaire, designed as a Likert scale with response options ranging from 1 to 5, was created in digital format, and data collection was carried out using the Google Forms tool. Participants were selected through non-probabilistic sampling among Spanish university students and lecturers in the Primary Education Degree, comprising a total of 288 participants: 223 students and 65 lecturers. The items included within the dimensions were related to issues such as accessibility, innovation, and the usefulness of resources (covering the use of apps, word processors, AI, and online resources).
Similarly, the study conducted by Rodríguez et al. (2021) aimed to design and analyze the validity of a questionnaire meeting the methodological and psychometric requirements needed to evaluate how university students approach learning. Validation was carried out through expert judgment and confirmatory factor analysis of the prior theoretical model, concluding that the instrument was reliable and demonstrated a good fit. The instrument comprised five dimensions: methodological strategy, progress of outcomes, educational and learning environments, time management, and attributions regarding results. The final questionnaire consisted of 30 items assessed on a five-point Likert scale.
Another series of studies focus on verifying the reliability and validity of the questionnaires created in each case. First, mention is made of the study conducted by Vendrell-Morancho and Fernández-Díaz (2024), which focuses on the design and validation of CritiTest, an instrument aimed at evaluating critical thinking in Spanish-speaking university students. This tool consisted of 5-point Likert scales. The creation process involved a theoretical review and an analysis of the questionnaire by an expert group. The statistical analysis was carried out with a sample of 5,238 students and demonstrated high reliability (α = 0.9 overall; α = 0.8 by dimensions), as well as content and construct validity.
Secondly, reference should be made to the study by Núñez-Pacheco et al. (2024), who conducted research aimed at measuring the self-perception of macro-narrative competence across three dimensions (textual, digital, and transmedia narrative) among university students through the design and validation of a Likert scale. Various statistical tests were performed, including Aiken’s V coefficient and exploratory factor analysis. The latter demonstrated adequate inter-item correlation and good sampling adequacy, leading to the conclusion that the scale exhibited satisfactory validity and reliability.
Next, it is important to highlight the research conducted by Abarzúa-Ceballos et al. (2024), who, starting from the importance of university students acquiring academic reading and writing skills for their integral development, set out to create and validate an ad hoc questionnaire for Early Childhood Education Degree students (n = 503). Their aim was to analyze autonomous digital reading practices in academic contexts using a quantitative methodology, specifically a survey method. The initial instrument comprised five dimensions with Likert-type questions on a five-point scale, totaling 88 items. The collected responses were processed using IBM SPSS 26 software, after which exploratory factor analysis and other statistical tests were carried out. The results indicate that the developed instrument possesses reliable internal consistency (Cronbach’s alpha) and composite reliability (McDonald’s Omega coefficient). Additionally, satisfactory construct validity, as well as social and ecological validity, was confirmed. The final questionnaire consists of four dimensions and 87 items.
We can also mention the work by Torres-Gastelú and Torres-Real (2025), who developed and validated a perception scale called PEIIA, designed to measure perceptions of Artificial Intelligence in Higher Education. This Likert-type scale comprises nine dimensions: effectiveness of AI use, learning improvements, cognitive and technological dependence, ease of use, support for learning, emotional reactions towards AI, AI accuracy, aversion to AI use, and perception of teaching staff. The scale was administered to 1,010 students at a public Mexican university. Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA) were conducted to assess its validity, leading to the removal of items with low or no correlation. The study concluded that the scale demonstrated acceptable levels of reliability and validity, with a Cronbach’s alpha of 0.945. The final version of the questionnaire contains 35 items grouped into seven factors.
Thus, we have relevant studies on the appropriateness of using Likert scales to capture the perceptions, attitudes, and opinions of university students. More specifically, we also have background work focusing on the adaptation and validation of these types of instruments to assess aspects such as accessibility, innovation, resources, and learning strategies. These studies have proven extremely useful as sources or initial models in the development and adaptation of our instrument. However, the innovative aspect of this study focuses on attempting to fill a research gap by directly addressing the possibility of generating a Likert scale to measure deficiencies in creative writing and critical thinking through the self-perceptions of the students surveyed. It also incorporates the novelty of perceptions about the use of Artificial Intelligence in university academic settings.
1.5 Relevance of the study
A relevant issue, in light of the bibliography consulted for the initial analysis, is the lack of tests that measure the aspects we have set as our objectives, as stated by researchers such as Martinez-Carlos and Hidalgo (2025). When considering the new developments and innovations in our study, we started from the ideas of Estela and Pérez (2023) and the difficulty students had in producing their own texts. Villarroel and Sujey (2024) also point in this direction, arguing that universities should promote critical thinking and writing. In addition to those already mentioned, other researchers also reinforce this need to design and apply an instrument to assess critical thinking and academic writing in students (Schnitzler-Sommerfeld and Núñez-Lagos, 2021) and to develop tests that study argumentative textual production (López-Ruiz et al., 2021).
In response to the shortcomings identified by previous researchers, our efforts have focused on these areas of work. Thus, in addition to the deficits observed in relation to the promotion and development of academic writing and critical thinking in Higher Education, we have also detected a lack of tests and tools to measure these skills in university students. This idea is reinforced by the authors included in the theoretical framework, such as Vendrell-Morancho and Fernández-Díaz (2024), who point out the limited availability of specialized tools for assessing critical thinking in Spanish-speaking university students. As in the previous case, other authors maintain this same position, such as García et al. (2020), for example, who highlighted the scarcity of existing tools for the accurate assessment of critical thinking in university students, or Ávila (2019), who emphasized the lack of adequate instruments to measure critical thinking skills.
The present study, therefore, is in line with the statements made by these authors, who allude to the need to mitigate the deficiencies in creative thinking and academic writing among university students by first creating Likert-type questionnaires that can measure these deficiencies by means of a form gathering students’ self-perceptions regarding these subjects. Although, as noted, there are studies on the creation of Likert-type questionnaires in the university setting, our intention is to generate one specifically applied to creative writing and critical thinking, incorporating items on the use and impact of new technologies and Artificial Intelligence in university teaching.
2 Materials and methods
2.1 Schematic overview of the main stages of the process
Firstly, in order to provide the reader with an overview of the entire process, Table 1 below shows how our work follows the generally recommended sequence for the creation and design of Likert scales:
2.2 Structure of the Likert questionnaire by dimensions
In this section, each of the dimensions1 comprising the questionnaire used in our study is briefly described, with the aim of justifying its design and structure, as well as facilitating the understanding of the items. To facilitate overall readability, all items are presented according to their respective dimensions in the following table (Table 2).
2.2.1 Declared practices
Declared practices, encompassing both personal and academic contexts, whether at school or university, are essential for understanding students’ writing trajectories. Classroom writing should be regarded as an inspiring practice, allowing students to explore and express their originality and cultural experiences. Recent research concludes that the regular practice of this skill, in both formal and informal settings, enables students to develop abilities such as written communication, creativity, and critical thinking (Sánchez and Pedraza, 2025). This first dimension is included in the questionnaire with the aim of identifying development patterns and offering solutions. The items selected for the Declared Practices dimension relate to tasks set by teaching staff on undergraduate programs, the types of assessments used, and the methods of evaluation. The aim is to determine whether such tasks are designed to foster students’ critical thinking, and whether certain factors, such as deducting marks for errors, have an impact on written production.
2.2.2 Adherence to principles (constructive and restrictive)
Adherence to constructive principles (creativity, flexibility, and openness) and restrictive ones (normativity and rigid structure) has emerged in several studies examining the relationship between beliefs, attitudes, and performance in creative writing and critical thinking. Authors such as Mera-Lomas (2025) conclude that creative writing fosters metacognitive processes and constructivist approaches. This enables students to reflect on their own practice and promotes self-regulation. The inclusion of this dimension in the questionnaire aims to assess the extent to which students value innovation over conformity in academic writing. The questions included seek to encourage students to share their ideas and perspectives on writing as a skill, the knowledge future teachers should possess in relation to it, and the aspects they consider most relevant.
2.2.3 Self-image as a writer
Self-image as a writer encompasses various aspects such as prior training, the frequency of written text production, writing competence, and visibility. This dimension is key to the university student’s self-perception and motivation. Some researchers have also included this dimension in the design of their instruments, noting that students’ writing competence and frequency are correlated with the quality of the texts they produce and their level of participation in creative activities. Furthermore, self-image is linked to the ability to overcome potential blocks faced by students when engaging in academic writing. In addition, it contributes to the refinement of their writing skills. Some ways of strengthening this self-image include continuous assessment and constant feedback, as well as reinforcement and social recognition (Coello-Sánchez et al., 2025). The dimension Self-image as a Writer aims to include items related to students’ prior knowledge of writing and the contexts and purposes for which they usually produce texts, including both leisure and academic contexts. This section also considers the role of social media and the dissemination of written work through these platforms, as well as related aspects such as the social recognition of such productions.
2.2.4 Resources: digital and technological tools
The use of digital resources, including mobile apps, word processors, Artificial Intelligence, and online resources, among others, has been integrated into recent questionnaires to reflect the role that technology currently plays in Higher Education. Scientific literature indicates that digital literacy and the ability to use technological tools effectively have an impact on academic success, as well as on the development of critical and creative competences. Specifically, in the questionnaire designed for this study, the inclusion of these items helps to analyze how technology influences written production and problem-solving (Mera-Lomas, 2025). The questions in this section are primarily aimed at identifying how students in the relevant degree programs use certain applications, Artificial Intelligence tools, and mobile devices to carry out tasks such as writing texts or taking notes, among others.
2.3 Definitive design process of the Likert questionnaire: item selection, pilot test and sample calculations
2.3.1 General methodology
The methodology for the creation and validation of questionnaires in this field comprises several phases. First, a theoretical review and content analysis are carried out, which, as previously mentioned, allow the questionnaire's dimensions and factors to be derived from the existing literature (Coello-Sánchez et al., 2025). According to these authors, the items that will make up the initial version of the Likert scale are then drafted on the basis of the proposed factors. According to Sánchez and Pedraza (2025), this phase is followed by the validation and evaluation of the items by a group of experts, who assess their relevance and clarity. Subsequently, a pilot test of the questionnaire is conducted with a specific sample of university students, and a psychometric analysis of the instrument is performed using factor analysis in order to determine its internal structure and consistency (Cronbach's alpha, item-total correlations). Finally, as these authors point out, the results are reviewed and analyzed, and the appropriate items are adjusted, thereby ensuring the reliability and validity of the questionnaire across various contexts within the university setting (Sánchez and Pedraza, 2025). Authors such as Mera-Lomas (2025) emphasize the importance of integrating strategic planning, collaboration, and formative feedback into the process of creating, drafting, and refining the questionnaire items.
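The internal-consistency measures mentioned above (Cronbach's alpha and corrected item-total correlations) can be sketched as follows; the pilot matrix shown is purely hypothetical and serves only to illustrate the calculations:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total of the remaining items."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], total - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

# Hypothetical pilot data: 6 respondents, 4 items on a 1-4 Likert scale
pilot = np.array([
    [4, 3, 4, 4],
    [2, 2, 1, 2],
    [3, 3, 3, 4],
    [1, 2, 1, 1],
    [4, 4, 3, 4],
    [2, 1, 2, 2],
])
print(round(cronbach_alpha(pilot), 2))
print(corrected_item_total(pilot).round(2))
```

Items with a low corrected item-total correlation would be candidates for rewording or removal in the adjustment phase.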
Subsequently, and taking into account the recommendations and stages outlined by various authors in the process of designing Likert-type questionnaires, we shall proceed to provide a detailed explanation of how the questionnaire developed in the present study was designed and validated.
First, with the aim of formulating, validating, and subsequently using the questionnaire within the Teaching Degree programs that were the focus of the study, we developed a series of statements. Through various processes and modifications, which will be detailed later, and following a decisive meeting, an initial questionnaire comprising 99 items was proposed. The items were formulated as statements rather than questions.
Subsequently, this initial version was sent to a group of experts in the subject, including, for example, professors of statistics and mathematics who were part of the project. Additionally, an Excel spreadsheet was created so that all team members could access the questionnaire and contribute their ideas for each of the previously mentioned dimensions. Once all the researchers had access to the questionnaire, a refinement process was carried out, involving a first review and modification aimed at identifying and removing repetitive items, eliminating unnecessary ones, and rewording certain statements to improve clarity, among other considerations. Next, the items were transferred to a spreadsheet and to a Google Forms questionnaire to facilitate their subsequent completion. At the same time, a joint reflection process was carried out to decide which type of Likert scale to use, ultimately choosing a 4-point scale. An even number of options was selected to avoid, as far as possible, random choice in responses, since students tend to select the middle option when presented with an odd number of response choices, which could bias the data collected in the research, as noted by some authors (Matas, 2018; López-Pina and Veas, 2024). Despite the advantages offered by this scale, it has some limitations, as other authors have pointed out, such as the fact that it can induce bias or force arbitrary responses by eliminating neutral or intermediate options (Pornel and Saldaña, 2013; Alabi and Jelili, 2023). In the resulting scale, a score of 1 means “strongly disagree” and a score of 4 means “strongly agree.”
Subsequently, as had been done in the initial stages of the questionnaire’s creation, items with unclear wording or overlaps were corrected, and others were removed, resulting in a final total of 73 items.
2.3.2 Expert group selection
With regard to the expert selection process, the panel combined two areas of expertise: language teaching and quantitative analysis. It comprised professors from each of the universities participating in the project with proven experience in reviewing quantitative analysis models and/or in teaching Spanish as a mother tongue. For reasons of anonymity, their names are not included in the text. They are researchers from outside the project's research team, chosen to ensure impartiality in their criteria and judgment, but with knowledge of the functioning of each of the institutions analyzed. The experts were selected on the basis of their prior knowledge of quantitative analysis or their experience in work related to the subject of this article.
The evaluations of this panel were recorded anonymously in an Excel spreadsheet with three possible responses: “Item to be rewritten,” in which case the question could be rephrased; “Item not relevant,” in which case the expert recommended its removal; and “Item relevant,” in which case it was considered appropriate. The decision to keep or delete an item was made as follows: when two experts agreed on a negative evaluation, the question was deleted; when two experts agreed that a question should be rewritten, it was reworked; and items on which all experts agreed on their relevance were included.
2.3.3 Specific methodology of the group of experts
The procedure followed these phases:
Phase 1: to begin with, the group of experts in language teaching proposed all the possible items that could be measured in terms of critical thinking and creative writing.
Phase 2: In a second phase, these items were sorted into the factors and dimensions identified in the study.
Phase 3: Thirdly, the experts in quantitative analysis calculated, for each of the selected variables, the minimum value, the first and second quartiles, the mean, and the maximum value. The aim was to analyze the distribution of the information.
The variance and the coefficient of variation were also calculated; the latter averaged 0.5, indicating a degree of dispersion characteristic of high heterogeneity in the data set. This heterogeneity could be reflected in the distinction between biographical variables (gender, age, population of the municipality where respondents attended Primary and Secondary Education, type of school, whether public or state-subsidized private, university degree being pursued, year of study, university, province of origin, form of access to university, and previous university studies).
Phase 4: Once the final set of items had been determined, they were transferred to Google Forms, rearranging their initial order so that respondents would not be aware that they were answering a set of questions on the same topic and belonging to the same dimension, with the aim of avoiding bias in the results obtained.
3 Results
3.1 Conducting the pilot test
After the initial meetings and the preliminary creation of the questionnaire, a pilot test was conducted, through which 104 responses were collected from students of Early Childhood and Primary Education at the University of Jaén (UJA). The sample comprised students from both the Early Childhood Education and Primary Education degrees, with gender representation proportional to the total student census of the existing degree courses. An initial analysis of the results was carried out to identify any issues with item interpretation or response patterns during the pilot administration, to determine whether all the questions had been understood without difficulty or whether any needed to be rephrased, and to address any doubts that arose during the completion of the questionnaire. All of this was reported to the members of the research team involved in the project.
One of the conclusions reached through this reflection was the need to include a new question at the beginning of the questionnaire asking for the respondents’ municipality of origin. This aimed to gather information about their family backgrounds, in case such information influenced the responses obtained in the questionnaire regarding cultural or social topics.
Additionally, the analysis of the results took into account several aspects related to geographical characterization. Specifically, questions were included about the population of the municipality where the respondents completed their Primary and Secondary Education, the type of school attended during Secondary Education (public, state-subsidized private, or private), the specific Teaching Degree they were studying (Early Childhood Education, Primary Education, or a Double Degree in Early Childhood and Primary Education), as well as the academic year and university of their studies. Finally, a list of the different provinces within each of the Autonomous Communities participating in the project was provided, including the various municipalities within each province, with the objective of gathering information about the participants’ family origins.
3.2 Some statistical calculations regarding the questions posed
The next step in the creation of the questionnaire involved performing statistical tests. To determine whether the questions were answered at random or showed some relationship among them, the variance was calculated, defined as “a measure of dispersion that represents the variability of a data set with respect to its mean” (Fisher, 1919). This compares the distribution of responses across the options to identify the most frequent answer and whether it lies close to the mean. Additionally, the coefficient of variation was obtained, a relative measure of dispersion defined as “the quotient between the standard deviation and the arithmetic mean, expressed as a percentage” (Pearson, 1896). It thus expresses dispersion relative to the mean, making items with different average scores comparable.
Based on the results obtained, the items with the highest scores were selected for each dimension, as these showed greater variance, which is related to a wider variety of responses. Conversely, questions with lower variance were considered more dispensable, since they likely reflected obvious answers.
At the time of selecting the items, those with a coefficient of variation above 0.4 were chosen. In the case of dimensions with scores below this threshold, the two questions with the highest coefficient of variation were selected. While in the latter case only two questions per dimension were chosen, in the former, in some cases, more than two questions were selected in the corresponding dimensions due to several items having a coefficient of variation greater than 0.4.
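The selection rule described above, keep items with a coefficient of variation above 0.4, or the top two per dimension when none exceed the threshold, can be sketched as follows; the response matrix is hypothetical and represents the items of a single dimension:

```python
import numpy as np

# Hypothetical pilot responses (respondents x items) on the 1-4 scale,
# all items belonging to the same dimension
responses = np.array([
    [4, 4, 1, 3],
    [4, 3, 2, 1],
    [4, 4, 4, 2],
    [3, 4, 1, 4],
    [4, 4, 2, 1],
])

means = responses.mean(axis=0)
# Coefficient of variation per item: sample standard deviation over the mean
cv = responses.std(axis=0, ddof=1) / means

selected = np.where(cv > 0.4)[0]        # items kept under the 0.4 rule
if selected.size < 2:
    # Dimension below threshold: keep the two items with the highest CV
    selected = np.argsort(cv)[-2:]

print(cv.round(2), selected)
```

In this toy example the first two items elicit near-unanimous agreement (low CV, "obvious" answers), while the last two show the dispersion that makes an item informative.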
After further refining the questionnaire and selecting and compiling the results of the chosen items, a total of 31 questions were selected, a number significantly lower than the initial 73 items.
Once again, the research team read the questionnaire and the responses provided by the students with the aim of modifying the wording of those questions where it was necessary. In the case of two questions from the first section (Sociodemographic Data), specifically: Approximately how many inhabitants did the municipality where you attended Primary Education have? and Approximately how many inhabitants did the municipality where you attended Secondary Education have?, the response options were modified because the initially created intervals were not clear enough nor did they cover all possible data options.
3.3 Sample size calculation
Finally, although this article aims to reflect the process of creating and validating the questionnaire, it is worth noting that, in order to determine the sample size needed to achieve the required reliability, a sample size calculation was carried out based on the total number of students from each university once the pilot test was completed. This allowed the researchers to determine how many students needed to complete the questionnaires to ensure reliability.
To carry out the sample selection, the first step was to consult the technical units for evaluation and quality at the five participating universities. Among their responsibilities is the preparation and publication of university statistics and indicators related to the total student population.
Regarding the criteria followed in the selection of the project sample, it is important to note that double degrees and bilingual degree programs were excluded from the selection, as these programs were not offered at all of the participating universities.
The following formula, the standard expression for estimating a sample from a finite population, was used to calculate the sample size:

n = [N · Z² · p · (1 − p)] / [d² · (N − 1) + Z² · p · (1 − p)]
Where:
n: Sample size.
N: Population.
Z: The coefficient associated with the confidence or security level (1 − α). For a 95% confidence level (α = 0.05), Z = 1.96.
d: The desired level of precision for the study.
p: An estimate of the approximate value of the parameter to be measured. If such information is not available, the value p = 0.5 (50%) will be used.
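A minimal sketch of this finite-population formula, using the variables defined above; the census figure of 1,200 enrolled students is hypothetical:

```python
import math

def sample_size(N: int, Z: float = 1.96, d: float = 0.05, p: float = 0.5) -> int:
    """Finite-population sample size: n = N*Z^2*p*(1-p) / (d^2*(N-1) + Z^2*p*(1-p)).

    N: population; Z: coefficient for the confidence level (1.96 for 95%);
    d: desired precision; p: estimated proportion (0.5 when unknown).
    """
    num = N * Z**2 * p * (1 - p)
    den = d**2 * (N - 1) + Z**2 * p * (1 - p)
    return math.ceil(num / den)  # round up to guarantee the precision level

# Hypothetical census of 1,200 students enrolled at one university
print(sample_size(1200))
```

Note that as N grows, n approaches the infinite-population ceiling Z²·p·(1 − p)/d² (385 respondents for a 95% confidence level and 5% precision), which is why the required sample per university grows much more slowly than the census itself.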
As regards the results obtained, these can be seen in Table 3, divided by university; in Table 4, by Degree in Early Childhood Education; and in Table 5, by Degree in Primary Education.
Thus, the results obtained focused on several aspects: on the one hand, the specification of personal, family, cultural, social, and educational elements, as well as the geographical characterization, all of which could subsequently allow some cross-referencing of variables leading to interesting conclusions; on the other, the advisability of checking the randomness of the questions by measuring the coefficient of variation and the dispersion. This also allowed for the elimination of questions that were considered unnecessary, overlapping, or obvious, and suggested modifying the wording of some items. Finally, the sample size was calculated to achieve the desired reliability. These actions are directly related to the study's objectives because they aim to refine the survey after the expert group's assistance by refining the structure, modifying or eliminating items, facilitating the comprehension of the questions, and finally determining the number of students required to ensure reliable results and thus guarantee their possible extrapolation to similar contexts.

To select the population required for our study, a stratified probability sampling method was designed, using the official databases of students enrolled at our universities as a framework. The population was divided into strata according to university and faculty, and within each stratum, participants were selected by random allocation proportional to the size of the stratum. The aim is to ensure representativeness in terms of age (year of study), gender, student origin (rural and/or urban), form of access to university, and type of studies (Early Childhood and Primary Education degrees), so as to reflect the census as a whole. Recruitment will be carried out through invitations sent by institutional mail and academic platforms, including information about the study and informed consent.
To minimize selection bias, the inclusion of all strata will be ensured, reminders will be sent to increase the response rate, and subsequently, the characteristics of the sample will be compared with the total population, applying weightings where necessary. In this regard, attention will be paid to each of the biographical variables to ensure that each one is representative of the population as a whole.
The following section details the aspects included in the Informed Consent form, which was approved by the Bioethics Committee of the Autonomous University of Barcelona. The research objective stated in the document was to describe the knowledge of Primary and Early Childhood Education Degree students regarding written composition and critical thinking. In relation to this, the implications of participation focused on the researchers’ interest in understanding students’ perspectives on the meaning of writing and how this process relates to content learning and their future identity as teachers. The estimated time to complete the questionnaire was approximately 15 min. It was also stated that there were no risks or any form of compensation associated with participation. Regarding confidentiality, the anonymity of the data was specified, as well as the exclusive access to the data by the research team for the purposes of the study. Furthermore, voluntary participation and the right to withdraw from the study were explicitly stated.
To be more precise, in the sections of the Bioethics report relating to Confidentiality and Consent, specific notes on anonymity appear in the following terms: “Your identity in the study will be anonymized,” “I authorize the literal quoting of my answers to the questionnaire without mentioning my name, that of my institution or any other type of identifier,” “the information you provide in the questionnaire will be analyzed for scientific purposes without personal identifiers and will be made available to other researchers once the project is completed,” “I authorize the use of the answers to the questionnaire for study and scientific dissemination purposes, provided that the mechanisms for preserving my privacy are guaranteed.”
Regarding data storage, the Bioethics report states the following: “The principal investigator, Professor Xavier Fontich (Autonomous University of Barcelona), will store the questionnaire responses in Teams with UAB authentication. The questionnaires will be destroyed within 5 years after the research is completed.”
4 Discussion
The project is currently in the phase of administering the questionnaires and collecting and organizing the data. This process began recently, following the evaluation of both the questionnaire and the proposed research by the Bioethics Committee, which issued a favorable assessment. At the current stage of the research process, we are unable to provide details on the results obtained, nor, in light of these, offer future validation tests beyond those mentioned. This study, therefore, attempts to adhere to the stated objectives and, for the sake of internal consistency, should not preempt the analysis of the survey results, which is not among its stated objectives. However, as indicated in the Conclusions section and as is often recommended in the scientific literature, a series of subsequent validation tests of the instrument and the results obtained are planned.
As regards the discussion itself, and even admitting, with Ávila (2019) and Vendrell-Morancho and Fernández-Díaz (2024), the scarcity of existing tools for the accurate assessment of critical thinking in university students, the review of the scientific literature proved decisive for the final design of our Likert scale, in terms of approach, structure, item content, and validation. Firstly, studies such as those by Rodríguez et al. (2021), Mellado-Moreno and Bernal-Bravo (2023), Vendrell-Morancho and Fernández-Díaz (2024), Núñez-Pacheco et al. (2024), and Abarzúa-Ceballos et al. (2024) all focus on educational measurement in Higher Education. These studies guided us toward designing an instrument within a research sequence that ultimately followed several stages: definition of the construct, generation of items, selection of the response format, review by statistics experts, pilot test, psychometric analysis, review and adjustment, sample calculation, and request for Bioethics Committee authorization.
Thus, several previous studies, such as those by Arias-Gundín et al. (2021), Qin et al. (2022), Bariqoh (2025), and Sagredo-Ortiz and Kloss (2025), guided us in designing the questionnaire focused on the self-perceptions of the surveyed students. Specifically, to obtain quantitative results, we designed a Likert-type scale, a data collection instrument recommended for this purpose by Du (2024), Rokeman (2024), and Joshi et al. (2015), contextualizing the instrument to the sociocultural contexts of the student groups analyzed, as suggested by Jebb et al. (2021).
Similarly, thanks to the recommendations of several authors (Bargiela et al., 2022; Fontich et al., 2024; Villarroel and Sujey, 2024), the study focused on undergraduate students as a strategic mentorship, both to raise their awareness of writing deficits and to enable them to later teach these crucial skills in contemporary society (Tresserras et al., 2022), and even incorporate them as specific disciplines in university education (Villarroel and Sujey, 2024).
Furthermore, in accordance with studies such as those by García et al. (2015), Uribe-Álvarez and Álvarez-Angulo (2022), Estela and Pérez (2023), and Salazar and Verástica (2025), items related to the importance of writing revision have been introduced, whether using traditional tools or Artificial Intelligence.
We can also highlight several contributions that focus on conceptual contributions and implications for educational measurement, similar to those in our study. For example, Palma et al. (2021) also created and validated a questionnaire focused on critical thinking. Another point in common with our work is that a group of experts reviewed the relevance and clarity of the tasks and scoring guides, and the test was ultimately administered to university students. Furthermore, as in our research, a pilot study was conducted to adjust the items and ensure the validity and reliability of the instrument. The expert review process was also carried out in the study by Rodríguez et al. (2021).
Next, while both in the present research and in the study by Mellado-Moreno and Bernal-Bravo (2023) the questionnaire was created using the Google Forms tool, in their case the Likert scale had a response range from 1 to 5, whereas in our study the scale ranged from 1 to 4. However, another shared aspect was the selection of items, which in both cases related to issues such as accessibility, innovation, and the usefulness of resources. Similarly, in the study by Rodríguez et al. (2021), the Likert scale also had five response options, although while their final questionnaire consisted of 30 items, ours includes 31 final questions. This type of scale also appeared in other previous studies besides those mentioned, such as that by Vendrell-Morancho and Fernández-Díaz (2024). The same occurred in the study by Abarzúa-Ceballos et al. (2024), although the authors focused exclusively on the Early Childhood Education Degree. Likewise, the creation of their questionnaire differed from ours in that, instead of 31 final items, theirs included 87 items divided into 4 dimensions. A somewhat similar number of items was observed in the study by Torres-Gastelú and Torres-Real (2025), who created a final questionnaire with 35 items. The dimensions were related to themes similar to those in the present research, such as the effectiveness of AI use and teacher perception.
In light of previous studies, we concluded that the ideal number of items should be around 30, as respondents' attention diminishes beyond this point. Furthermore, the four-dimensional structure accurately reflects the initial objectives of the research, and the four-response format avoids ambiguity in the responses and facilitates data processing.
5 Conclusion
The main contributions of this study focus, first, on detecting in the existing literature a growing interest in intervening in the development of creative thinking and academic writing at a time when these skills are threatened by new technologies, the culture of instant rewards and information, the increasing use of Artificial Intelligence, and the poverty of superficial analysis. The collaborative work between members of five Spanish universities of different nature and size has been especially enriching in providing different points of view and, specifically, in creating the items, dimensions, and factors of the generated instrument, as well as in selecting the type and number of options chosen. After consulting the existing literature, we would like to highlight, in particular, the scientific design of the research sequence in different stages. In this sense, the participation of the group of experts external to the research group has been particularly notable.
Regarding the study’s limitations, it must be acknowledged that the main one is the incompleteness of the research process, since the survey is currently being implemented. Therefore, we do not yet have the results to assess whether the Likert scale creation process has been successful and thus determine whether the instrument has proven robust or, on the other hand, requires future modifications or validations with a view to its possible extrapolation to similar research contexts.
With regard to future data processing and cross-referencing actions, first, an analysis of the internal structure of the instrument will be carried out. To this end, an Exploratory Factor Analysis (EFA) will be performed using polychoric correlations, given that the items in the questionnaire are constructed on a four-point Likert scale, which implies an ordinal nature. This type of correlation allows for a more accurate estimation of the relationships between ordinal items, avoiding the biases that could arise when using Pearson correlations in this context. For factor extraction, the unweighted least squares (ULS) method with oblique rotation (Promax) will be used, given that the factors are expected to be correlated. Likewise, robust estimators will be used to compensate for possible deviations from normality in the data.
In addition, three statistical tests are planned once the responses from the survey participants have been obtained: Fisher’s Exact Test, a Chi-Square Test, and Spearman’s Correlation.
First, the Chi-Square Test will be carried out, since in educational research focused on student perception it is common to use surveys with categorical items (e.g., levels of agreement, satisfaction, preference, etc.) to explore differences between groups of students according to the variables mentioned. This test allows us to determine whether there is a significant association between two categorical variables, which is particularly useful in perception studies, such as the present research, where we seek to identify patterns or differences between subgroups. Several authors have recommended its use in the university setting. For example, Cañadas et al. (2012) highlight the usefulness of the Chi-square test in the statistical training of psychology students, emphasizing its applicability in survey analysis. Zamora Díaz (2022) uses it to validate surveys, demonstrating its versatility in quantitative studies.
On the other hand, we will perform a Fisher’s Exact Test that allows us to analyze contingency tables to accurately assess the association between two categorical variables (such as yes/no, or groups). This would allow for an analysis of unbalanced variables, if applicable. Some authors reinforce the idea of using this test with small sample sizes, such as Wang (2020) and Ruas et al. (2025). Specifically, this test is recommended in educational research (Fisterra, 2004; Amat, 2016) focused on student perception, such as ours, where it is common to use categorical items to explore differences between groups of students according to variables such as gender, educational level, type of school, or teaching modality.
Finally, the possibility of performing a Spearman correlation was considered, given that it is particularly suitable for studies using Likert-type scales, as these scales generate ordinal data, where it cannot be assumed that the distances between categories are equal. Unlike Pearson’s coefficient, Spearman’s does not require the data to follow a normal distribution or for there to be a linear relationship between the variables, making it a robust tool for this research. Furthermore, as it is based on ranks, it is less sensitive to outliers and allows for the detection of monotonic relationships (De Winter et al., 2016). These characteristics make it particularly useful for analyzing associations between perceptions, attitudes, or levels of satisfaction in educational surveys, where strict statistical assumptions are to be avoided and moderate-sized samples are used (Lee, 2025).
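The three planned tests can be sketched with `scipy.stats`; the contingency table and Likert responses below are hypothetical and illustrate only the intended analyses:

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact, spearmanr

# Hypothetical 2x2 contingency table: gender x agreement with one item
table = np.array([[18, 7],
                  [12, 13]])

# Chi-square test of association between the two categorical variables
chi2, p_chi2, dof, expected = chi2_contingency(table)

# Fisher's exact test: preferable when cell counts are small or unbalanced
_, p_fisher = fisher_exact(table)

# Hypothetical ordinal responses of the same students to two Likert items
item_a = [1, 2, 2, 3, 3, 4, 4, 4]
item_b = [1, 1, 2, 2, 3, 3, 4, 4]
# Spearman's rank correlation: suitable for ordinal, non-normal data
rho, p_rho = spearmanr(item_a, item_b)

print(f"chi2 p={p_chi2:.3f}, Fisher p={p_fisher:.3f}, Spearman rho={rho:.2f}")
```

In practice, the chi-square test would be applied to the larger cross-tabulations (e.g., university by response level), Fisher's exact test reserved for sparse subgroups, and Spearman's coefficient used between pairs of Likert items or between items and ordinal biographical variables.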
As a way to conclude the article, we can say that it is highly relevant to consider students’ opinions to assess their own perspectives regarding critical thinking and academic writing. For this reason, the process of creating and validating the Likert questionnaire constitutes a significant methodological advancement within the project. By administering it to future teachers and subsequently analyzing the results, we will gain a more in-depth understanding of the students’ viewpoints. This will allow us to develop a more effective intervention, as it will be grounded in the educational reality of the students to achieve greater development and accomplishment of the project’s core themes: critical thinking and creative writing.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.
Author contributions
MG-C: Formal analysis, Writing – original draft, Investigation, Supervision, Methodology, Writing – review & editing, Conceptualization. JS: Conceptualization, Supervision, Writing – review & editing, Data curation, Methodology, Writing – original draft, Formal analysis. RP: Visualization, Writing – review & editing, Formal analysis, Conceptualization, Investigation, Writing – original draft.
Funding
The author(s) declared that financial support was received for this work and/or its publication. This article forms part of the research conducted within the R&D&i project The Development of Writing Competence and Critical Reasoning in Teacher Training Degrees (DECERC-GM. PID2020-117813RA-100, PI: Xavier Fontich), funded by the Spanish Ministry of Science and Innovation.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that Generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Footnotes
1. ^Dimension refers to a characteristic or category in which data can be organized, while factor is a predictor variable that can be categorical or continuous and has different levels of configuration to determine its effect on a response variable.
References
Abarzúa-Ceballos, L., Ambrós-Pallarés, A., and Ruiz-Bueno, A. (2024). Construcción y validación de un cuestionario sobre prácticas de lectura digital académica para estudiantado universitario de formación inicial de profesorado. Rev. Investig. Educ. 42:1, 33–59. doi: 10.6018/rie.548111
Alabi, A. T., and Jelili, M. O. (2023). Clarifying Likert scale misconceptions for improved application in urban studies. Qual. Quant. 57, 1337–1350. doi: 10.1007/s11135-022-01415-8,
Alvarado, L. F. (2025). Design thinking as an active teaching methodology in higher education: a systematic review. Front. Educ. 10:1462938. doi: 10.3389/feduc.2025.1462938
Amat, J. (2016). Test estadísticos para variables cualitativas: Test exacto de Fisher, chi-cuadrado de Pearson, McNemar y Q-Cochran. [CienciaDeDatos.net]. Available online at: https://dev.cienciadedatos.net/documentos/22.2_test_exacto_de_fisher_chi-cuadrado_de_pearson_mcnemar_qcochran (Accessed September 30, 2025).
Arias-Gundín, O., Real, S., Rijlaarsdam, G., and López, P. (2021). Validation of the writing strategies questionnaire in the context of primary education: a multidimensional measurement model. Front. Psychol. 12:700770. doi: 10.3389/fpsyg.2021.700770/full
Ávila, A. S. (2019). La lectura y la escritura académica como base para el desarrollo del pensamiento crítico. Available online at: https://acmspublicaciones.revistabarataria.es/wp-content/uploads/2020/11/13.lectura.inseguridades.2019.pdf (Accessed September 30, 2025)
Bargiela, I., Blanco, P., and Puig, B. (2022). Critical thinking teaching understood by a group of pre-service teachers’ educators. Int. Human. Rev. 12:2, 2–11. doi: 10.37467/revhuman.v11.3927
Bariqoh, A. (2025). Integration of artificial intelligence in creative writing learning: an exploratory study of the impact on the quality of student narratives. J. Lang. Lit. Educ. 1:1, 9–18.
Cañadas, G. R., Batanero, C., Díaz, C., and Gea, M. M. (2012). Comprensión del test chi-cuadrado por estudiantes de Psicología. En A. Estepa, Á. Contreras, J. Deulofeu, M. C. Penalva, F. J. García, and L. Ordóñez (Eds.), Investigación en Educación Matemática XVI (pp. 153–163). Alicante: Sociedad Española de Investigación en Educación Matemática (SEIEM)
Cardinale, L. (2006). La lectura y la escritura en la universidad. Aportes para la reflexión desde la pedagogía crítica. Rev. Pilquen. 8:3, 1–5. Available online at: https://revele.uncoma.edu.ar/index.php/psico/article/view/4786
Carlino, P. (2007). Qué nos dicen la investigaciones internacionales sobre escritura en la universidad? 4, 21–40.
Carlino, P. (2013). Alfabetización Académica. Diez Años Después. Rev. Mex. Investig. Educ. 18:57, 355–381.
Coello-Sánchez, G. A., Solís-Macías, S. I., Vázquez-Zubizarreta, G., and Rodríguez-Caballero, G. A. (2025). Mejoramiento de la escritura creativa en estudiantes de séptimo año de Educación General Básica a través de Storybird. MQRInvestigar 9:e411. doi: 10.56048/MQR20225.9.1.2025.e411
Darfler, M., and Kalantari, S. (2022). A synthetic review of evaluation expectation and its effects on creativity. Think. Skills Creat. 46:6, 1183–1189. doi: 10.1016/j.tsc.2022.101111
Day, C., Leahy, A., and Vanderslice, S. (2022). Where are we going next? A conversation about creative writing pedagogy. Fict. Writ. Rev. doi: 10.2307/jj.30945890.5
De Winter, J. C. F., Gosling, S. D., and Potter, J. (2016). Comparing the Pearson and Spearman correlation coefficients across distributions and sample sizes: a tutorial using simulations and empirical data. Psychol. Methods 21, 273–290. doi: 10.1037/met0000079,
Du, Y. (2024). A streamlined approach to scale adaptation: enhancing validity and feasibility in educational measurement. J. Lang. Teach. 4:1, 18–22.
Estela, M. C., and Pérez, T. Y. (2023). Producción de textos en estudiantes de educación superior: Revisión sistemática. Tecnohumanismo. Rev. Cientif. 3:2, 126–149. doi: 10.53673/th.v3i2.230
Fisher, R. A. (1919). The correlation between relatives on the supposition of Mendelian inheritance. Trans. R. Soc. Edinb. 52:2, 399–433. doi: 10.1017/S0080456800012163
Fisterra (2004). Asociación de variables cualitativas: El test exacto de Fisher y el test de McNemar. Available online at: https://www.fisterra.com/formacion/metodologia-investigacion/asociacion-variables-cualitativas-test-exacto-fisher-test-mcnemar/ (Accessed November 3, 2025).
Fontich, X., Casas-Deseures, M., Costa, A. L., and Troncoso, M. (2024). La formación lingüística en los Grados de Maestro: las nociones de ‘competencia comunicativa’, ‘razonamiento crítico’ y ‘género discursivo’. Tejuelo. Didáctica Lengua Liter. Educ. 39, 165–196. doi: 10.17398/1988-8430.39.165
Gaona, L. E., Dutasaca, M. G., Ruiz, G. F., and Martinez, M. E. (2024). La importancia de la escritura en el desarrollo del pensamiento crítico. Revista Soc. Front. 4:e251. doi: 10.59814/resofro.2024.4(2)251
García, M. A., Acosta, D., Atencia, A., and Rodríguez Sandoval, M. (2020). Identificación del pensamiento crítico en estudiantes universitarios de segundo semestre de la Corporación Universitaria del Caribe (CECAR). Rev. Electrón. Interuniv. Form. Prof. 23:3, 133–147. doi: 10.6018/reifop.435831
García, A., Gallego, J. L., and Rodríguez, A. (2015). ¿Cómo revisan sus textos los futuros docentes en formación? Revista Electrón. Investig. Educ. 17:2, 16–33.
Jamil, A. (2016). Reflections on the teaching of creative writing at the American universities. Eur. Sci. J. 12:22. doi: 10.19044/esj.2016.v12n22p324
Jebb, A. T., Ng, V., and Tay, L. (2021). A review of key Likert scale development advances: 1995–2019. Front. Psychol. 12:637547. doi: 10.3389/fpsyg.2021.637547,
Joshi, A., Kale, S., Chandel, S., and Pal, D. K. (2015). Likert scale: explored and explained. Br. J. Appl. Sci. Technol. 7:4, 396–403. doi: 10.9734/BJAST/2015/14975
Kusmaryono, I., Wijayanti, D., and Maharani, H. R. (2022). Number of response options, reliability, validity, and potential bias in the use of the Likert scale in education and social science research: a literature review. Int. J. Educ. Methodol. 8:4, 625–637. doi: 10.12973/ijem.8.4.625
Lee, S. (2025). Mastering Spearman’s Rank Correlation for Effective Research Analysis. Available online at: https://www.numberanalytics.com/blog/mastering-spearmans-rank-correlation-research-analysis (Accessed September 30, 2025)
Lillis, T., and Scott, M. (2007). Defining academic literacies research: issues of epistemology, ideology and strategy. J. Appl. Linguist. 4:1, 5–32. doi: 10.1558/japl.v4i1.5
Lin, M., Liu, L., and Pham, T. (2023). Towards developing a critical learning skills framework for master's students: evidence from a UK university. Think. Skills Creat. 48. doi: 10.1016/j.tsc.2023.101267
López-Pina, J.-A., and Veas, A. (2024). Validación de instrumentos psicométricos en ciencias sociales y de la salud: una guía práctica. An. Psicol. 41:1, 163–170. doi: 10.6018/analesps.583991
López-Ruiz, C., Flores-Flores, R., Galindo-Quispe, A., and Huayta-Franco, Y. (2021). Pensamiento crítico en estudiantes de educación superior: una revisión sistemática. Rev. Innova Educ. 3:2, 374–385. doi: 10.35622/j.rie.2021.02.006
Martinez-Carlos, G., and Hidalgo, N. H. (2025). Desentrañando la escritura académica: meta-análisis de estrategias efectivas en la enseñanza universitaria 2013-2023. Tribunal. Revista Ciencias Educ Ciencias Jurídicas 5:10, 440–449. doi: 10.59659/revistatribunal.v5i10.123
Matas, A. (2018). Diseño del formato de escalas tipo Likert: un estado de la cuestión. Rev. Electrón. Invest. Educ. 20:1, 38–47. doi: 10.24320/redie.2018.20.1.1347
Mellado-Moreno, P. C., and Bernal-Bravo, C. (2023). Evaluación de recursos digitales en abierto para la competencia digital y mediática desde una perspectiva educomunicativa. Rev. Mediterránea Comun. 14:2, 195–205. doi: 10.14198/MEDCOM.24259
Mera-Lomas, A. D. J. (2025). Influencia de la escritura creativa en el desarrollo del pensamiento crítico. Revista Multidisciplin. Perspect. Investig. 5:1, 9–17. doi: 10.62574/rmpi.v5i1.281
Myhill, D., Cremin, T., and Oliver, L. (2023). The impact of a changed writing environment on students' motivation to write. Front. Psychol. 14:1212940. doi: 10.3389/fpsyg.2023.1212940,
Núñez-Pacheco, R., Barreda-Parra, A., García-Candeira, M., and Aguaded, I. (2024). Diseño y validación de una escala de autopercepción de la competencia macronarrativa en estudiantes universitarios. Ocnos 23:2, 1–13. doi: 10.18239/ocnos_2024.23.2.418
Osorio, E., García, M., and Chacón, E. (2019). ¿Qué textos académicos escriben los estudiantes universitarios de Educación? 31, 26–55.
Palma, M., Ossa, C., Ahumada, H., Moreno, L., and Miranda, C. (2021). Adaptación y validación del test Tareas de Pensamiento Crítico en estudiantes universitarios. Revista de Estudios y Experiencias en Educación 20:42, 199–212. doi: 10.21703/rexe.20212042palma12
Pearson, K. (1896). Mathematical contributions to the theory of evolution. III. Regression, heredity, and panmixia. Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 187, 253–318. doi: 10.1098/rsta.1896.0007
Pornel, J. B., and Saldaña, G. A. (2013). Four common misuses of the Likert scale. Philipp. J. Soc. Sci. Humanit. 18:2, 12–19.
Qin, C., Zhang, R., and Xiao, Y. (2022). A questionnaire-based validation of metacognitive strategies in writing and their predictive effects on the writing performance of English as foreign language student writers. Front. Psychol. 13:1071907. doi: 10.3389/fpsyg.2022.1071907,
Rodríguez, J., Artiles, J., and Aguiar, M. V. (2021). Validación de un cuestionario para la evaluación del aprendizaje en el alumno universitario. Contextos Educ. 28, 105–127. doi: 10.18172/con.4504
Rokeman, N. R. M. (2024). Likert measurement scale in education and social sciences: explored and explained. Educatum J. Soc. Sci. 10:1, 77–88. doi: 10.37134/ejoss.vol10.1.7.2024
Ruas, P., Nunes, C., and Neto, I. (2025). Avaliation of clinical reasoning in the medical course in Portugal. Rev. Esp. Educ. Med. 1:630541. doi: 10.6018/edumed.630541
Sagredo-Ortiz, S., and Kloss, S. (2025). Academic writing strategies in university students from three disciplinary areas: design and validation of an instrument. Front. Educ. 10:1600497. doi: 10.3389/feduc.2025.1600497
Salazar, M. d. L. (2023). La escritura de textos académicos en la universidad: Una reflexión desde la experiencia docente en la Universidad Pedagógica Nacional. Rev. Educ. 47:1, 664–673. doi: 10.15517/revedu.v47i1.51949
Salazar, C., and Verástica, M. L. G. (2025). La reescritura académica con herramientas de Inteligencia artificial en una universidad pública de México: academic rewriting with artificial intelligence tools at a public university in Mexico. Latam Rev. Latinoam. Cienc. Soc. Humanid. 6:1, 498–517. doi: 10.56712/latam.v6i1.3354
Sánchez, V., and Pedraza, S. F. (2025). Estrategia didáctica para fomentar el placer hacia la escritura literaria creativa en la educación ecuatoriana. Eur. Public Soc. Innov. Rev. 10, 1–17. doi: 10.31637/epsir-2025-2076
Schnitzler-Sommerfeld, N., and Núñez-Lagos, P. (2021). Hacia una evaluación de la reflexión pedagógica desde la escritura académica. Magis 14, 1–25. doi: 10.11144/Javeriana.m14.herp
Scriven, M., and Paul, R. 1987. Defining critical thinking. 8th Annual international conference on critical thinking and education reform. Available online at: http://www.criticalthinking.org/pages/defining-critical-thinking/766 (Accessed December 10, 2025)
Soto, J., Jaraíz, F. J., Pérez, R., Gómez, M., and Fontich, X. (2025). La escritura creativa y el pensamiento crítico en los grados de Educación Infantil y Primaria de cinco universidades españolas. Profesorado. Rev. Currículum Form. Profesor. 29:2, 1–24. doi: 10.30827/profesorado.v29i2.30810
Tanskanen, I. (2023). Professional autobiographical process including identity work in creative writing practices. Il Capit. Cult. 14, 33–48. doi: 10.13138/2039-2362/3131
Tavera, E., and Lovón, M. (2023). Escritura académica de los estudiantes del posgrado de relaciones internacionales en el Perú: representaciones sociales y prácticas de literacidad. Lit. Lingüíst. 48, 447–478. doi: 10.29344/0717621X.48.3253
Torres-Gastelú, C. A., and Torres-Real, C. (2025). Validación de una escala sobre la percepción de la Inteligencia Artificial en la educación superior. Cienc. Lat. Rev. Cientif. Multidiscip. 9:2, 5706–5725. doi: 10.37811/cl_rcm.v9i2.17324
Tresserras, E., Contreras, E., and Bordons, G. 2022. Diagnóstico, evaluación y mejora de la competencia lingüística de los futuros docentes (pp. 101–102). XXIII Congreso Internacional de la SEDLL 2022, 23–25 de noviembre.
Uribe-Álvarez, G., and Álvarez-Angulo, T. (2022). Presentación. En M.T. Mateo-Girona, G. Uribe-Álvarez, and S.E. Agosto-Riera (Eds.), Revisión y reescritura para la mejora de los textos académicos (pp. 9–12). Barcelona: Octaedro).
Vendrell-Morancho, M., and Fernández-Díaz, M. J. (2024). Diseño y validación de CritiTest, un instrumento para evaluar el pensamiento crítico en estudiantes universitarios. Rev. Investig. Educ. 22:3, 586–603. doi: 10.35869/reined.v22i3.5767
Villarroel, R., and Sujey, A. (2024). Escritura académica: dificultades y retos para los estudiantes de nuevo ingreso en las universidades. Available online at: https://uhektenos.com/index.php/Uhektenos/article/view/56 (Accessed June 1, 2025)
Wang, S. S. (2020). On the statistical testing methods for single laboratory validation of qualitative microbiological assays with an unpaired design. J. AOAC Int. 103, 1426–1434. doi: 10.1093/jaoacint/qsaa038,
Zamora Díaz, W. J. (2022). Aplicación e interpretación del test de Chi Cuadrada (X2) en una investigación sobre condiciones sociolaborales del trabajo docente y sus repercusiones en la salud. Rev. Cientif. Multidiscip. JIREH. 2. Available online at: https://revistajireh.uml.edu.ni/publicaciones/vol-2-num-1-2022/217-2/ (Accessed December 19, 2025).
Keywords: creative writing, critical thinking, Likert scale, questionnaire validation, teacher education degrees
Citation: Gómez-Caballero M, Soto Vázquez J and Pérez Parejo R (2026) Validation of a Likert questionnaire for creative writing and critical thinking in five Spanish universities. Front. Educ. 10:1698270. doi: 10.3389/feduc.2025.1698270
Edited by:
Shizhou Yang, Payap University, ThailandReviewed by:
Barry Lee Reynolds, University of Macau, ChinaJared Kubokawa, Ingham Intermediate School District, United States
Copyright © 2026 Gómez-Caballero, Soto Vázquez and Pérez Parejo. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Marta Gómez-Caballero, bWdvbWV6Y2IxNkB1bmV4LmVz; José Soto Vázquez, anNvdG9AdW5leC5lcw==