ORIGINAL RESEARCH article

Front. Psychol., 12 May 2023
Sec. Educational Psychology

Development of the reading literacy questionnaire for EFL learners at primary schools

  • 1Faculty of Education, Qufu Normal University, Qufu, Shandong, China
  • 2College of Foreign Languages, Qufu Normal University, Qufu, Shandong, China

Previous studies have indicated that a variety of factors influence reading literacy assessment, including linguistic, cognitive, and affective factors, but little has been done on how to integrate these factors reasonably in a reading literacy instrument. As such, the purpose of this study is to develop and validate an English Reading Literacy Questionnaire (ERLQ) for English as a foreign language (EFL) learners at the elementary level. The ERLQ was designed and revised through three rounds of validation with a sample of 784 pupils (Grades 3–6) in six primary schools from six provinces in China. Validity and reliability were examined through item analysis, Exploratory Factor Analysis (EFA), Confirmatory Factor Analysis (CFA), reliability testing, and criterion validity analysis in SPSS 26.0 and AMOS 23.0. Results indicated that the revised ERLQ had high internal consistency, with dimension coefficients ranging from 0.729 to 0.823. The criterion validity of the ERLQ was supported by a significant correlation with the Chinese Students’ English Rating Scale verified by authoritative departments, with a correlation coefficient of 0.871. The study shows that the revised questionnaire, with 14 items across 3 dimensions, has high reliability and validity and can be used as an assessment instrument for the intended audience. It also suggests that modifications may be made for further use in other regions and countries, depending on the background information of the learners.

Introduction

Reading is a crucial approach to language acquisition (Avalos et al., 2007; Goldenberg, 2020) and has a far-reaching influence on people’s learning and thinking (Green et al., 2013). The important role it plays in students’ growth and academic development has made it a hot topic of research. At present, most research on reading has focused on the direct or indirect impact of linguistic, cognitive, and affective factors (Cheung et al., 2009; Nevo et al., 2020), as well as learners’ gender (Solheim and Lundetræ, 2018; Reilly et al., 2019), parental literacy (Hunstinger et al., 2016), and regional economic development (Chinyamurindi and Dlaza, 2018). However, little has been done on how to integrate these influential factors reasonably in an instrument for assessing students’ reading literacy, especially for school-age learners in the context of English as a foreign language (EFL). As such, it is necessary to develop a self-report questionnaire to assess EFL learners’ reading literacy, especially at the elementary level, as these learners are at a vital stage of language development. The following two questions guide the study. (1) What dimensions and factors should be considered in developing an instrument to assess EFL learners’ reading literacy at the elementary level? (2) What are the crucial steps in developing and validating the assessment instrument?

Literature review

The connotation of reading literacy

The concept of reading literacy is derived from rethinking whether reading comprehension should be viewed as a product or a process (Dyke, 2021). The former tends to highlight the content of what is read through sets of questions, whereas the latter pays close attention to the cognitive process through which influential factors contribute to the meaningful construction of reading materials, concerning word identification, strategy use, and other cognitive behaviors (Broek et al., 2012). At present, the dominant view of reading literacy is in line with the latter: reading literacy is considered a cognitive process involving multiple influencing factors such as linguistic knowledge and psychological behaviors. According to the PISA (Program for International Student Assessment) 2018 Reading Assessment Framework by the Organization for Economic Co-operation and Development (OECD, 2019), reading literacy refers to understanding, using, evaluating, reflecting on, and engaging with reading materials or texts, encompassing the metacognitive, cognitive, and affective-motivational dimensions of behavior. In the PIRLS (Progress in International Reading Literacy Study) 2021 Assessment Framework by the International Association for the Evaluation of Educational Achievement (Mullis and Martin, 2019), the definition of reading literacy emphasizes a constructive and interactive process between the reader and the author. Similarly, the National Assessment of Educational Progress (NAEP) of the US states that reading literacy is an active and complex process that involves understanding the written text and developing and interpreting its meaning from the context (U.S. Department of Education, 2022). The Pan-Canadian Assessment Program (PCAP) likewise holds that reading literacy is the ability to construct meaning from texts through understanding, interpreting, and responding personally and critically to text content in order to make sense of the world based on personal life experience (O’Grady et al., 2021). All these views indicate that the connotation of reading literacy has gone beyond the reading activity itself and is closely related to the development of the reader’s comprehensive reading competence.

Taken together, reading literacy in this study can be defined as a kind of integrated competence to achieve self-development through understanding and interpreting reading materials with evaluative and reflective abilities, within which language knowledge (e.g., morphology and syntax) is the basis for constructing the literal meaning of reading materials (Charles and Joseph, 2014). Readers with better reading literacy can adopt reading strategies and skills selectively to achieve different purposes of reading tasks (Svjetlana and Igor, 2006).

The assessment of reading literacy

The assessment of reading literacy should not only focus on static measurements of pre-existing reading competence, but also on dynamic assessment, an approach to psychological testing that conceptualizes the cognitive ability of readers (Dixon et al., 2023). To date, most international reading literacy assessments like PISA, PIRLS, and NAEP focus more on proficiency tests of reading (Rindermann, 2007; Applegate et al., 2009), whose purpose is to find out whether learners have already attained the relevant knowledge and skills in reading (Alderson and Hughes, 1981). Different from proficiency tests, self-report questionnaires rely on an individual’s own report of his or her behaviors, beliefs, or attitudes (Levin-Aspenson and Watson, 2018). They are commonly used in psychological studies because they can reveal the underlying causes of the behaviors concerned and yield diagnostic information (Gollan et al., 2012). Specifically, reading literacy assessment concerns readers’ linguistic knowledge, reading strategies, and even cognitive aspects of reading activities. Research has revealed that the assessment of reading literacy involves complex interactive and dynamic processes (Sadeghi, 2021), including not only readers’ linguistic and cognitive competences (NGA and CCSSO, 2010) but also the underlying impact of readers’ affective variables on reading (Adrian et al., 2007).

As reading is a two-way interaction between the author and the reader (Cox et al., 2019), reading literacy is first affected by the linguistic competence of the reader, its most important element (García and Cain, 2014). According to the OECD (2021), a reader’s linguistic competence refers to the ability to read effectively, largely depending on the comprehensive use of language knowledge and reading skills. A reader’s language competence necessitates a set of language knowledge and skills for deriving textual meaning (Grabe, 2009). In this sense, linguistic competences such as word recognition, phonetic decoding, and syntactic parsing contribute greatly to the construction of text meaning (Gellert and Elbro, 2017; Quintino and Julia, 2018) and should be taken into account in assessing reading literacy (NRP, 2000; Kirby et al., 2012; Li et al., 2012).

Cognitive competence, in the field of reading, refers broadly to the ability to carry out an independent cognitive reading activity (Anikina, 2022), focusing on the process of turning knowledge into common sense and then forming one’s own judgment of textual meaning (Sun and Hui, 2012). Previous studies (Rendell et al., 2010; Watkins, 2016) claim that cognitive strategy and cultural awareness pertain to different assessment facets of cognitive competence and are considered accelerators of the development of one’s reading literacy (Arabski and Wojtaszek, 2011; Piasta et al., 2018). Strategies used in reading comprise the planning, monitoring, and mediating strategies embedded in reading behaviors, which have a potential impact on reading outcomes (Aghaie and Zhang, 2012). Besides, cultural awareness, as a component of language proficiency, is a latent variable for better reading performance (Knutson, 2006; Byram, 2012; Baker, 2015); it encompasses sensitivity to the customs, traditions, values, and beliefs of a specific community. Readers with different cultural backgrounds may comprehend and interpret the same reading materials differently. Cognitive strategy and cultural awareness jointly compose cognitive competence, which has been a decisive component in the assessment of reading literacy.

As a newer assessment dimension of reading literacy, the affective element has become another important component in the sustainable development of one’s reading literacy. It consists of a series of factors concerning reading attitude (Kaderavek et al., 2014; Nootens et al., 2018; Lopes et al., 2022), reading anxiety (Katzir et al., 2018), reading motivation (Schiefele et al., 2012; Stutz et al., 2017; Wang and Gan, 2021; Wang and Jin, 2021), and reading habits. In recent years, a new line of research has thus emerged on how to assess the affective impact of factors such as attitude and interest on reading performance, reflecting the diversified development of reading literacy assessment (Kell and Kell, 2014). Some researchers hold that students’ reading performance is closely related to their cognitive and affective factors (Chiu et al., 2012). Some think that affective factors have a potential influence on the reading proficiency of readers (Li, 2018). Others insist that individual differences should not be ignored in the process of reading literacy assessment (Conrad et al., 2013). Despite the proliferation of research on the affective effects of learners on reading performance, few studies have examined which affective elements should be considered and integrated into reading assessment, and to what extent each influential element contributes to the development of a learner’s reading literacy.

Reading literacy assessment in the EFL context

The effect of whether English is used as the mother tongue or as a foreign language on English reading literacy assessment cannot be ignored (Zuikowski et al., 2019; Calet et al., 2020). Different language environments may impose different requirements on reading literacy assessment. In EFL settings, English reading at primary schools belongs to the stage of “learning to read,” focusing on basic knowledge of the target language and understanding the basic meaning of reading materials. The process of reading is naturally associated with the theme and structure of the reading materials, the interaction with different views, and the reader’s open-minded thinking and psychological cognition (Education Bureau of the Hong Kong, 2022). Thus, in countries where English is a foreign language learnt formally in a non-native-speaking social context, the assessment of English reading literacy focuses more on its connection with English education, embodying the disciplinary characteristics of English learning. English reading literacy in the EFL context is usually conceptualized in four dimensions: language ability, cultural awareness, thinking quality, and learning ability, according to the Ministry of Education of the People’s Republic of China (PRC; Ministry of Education of the PRC, 2020; Ministry of Education of the PRC, 2022). All the dimensions are interrelated and mutually supportive, reflecting the core value of the English course. In view of this, numerous factors such as thematic content, text structure, language knowledge, and non-language factors have been put forward in assessing students’ reading literacy (Wang, 2012; Liu and Wu, 2019). Reading comprehension ability is believed to be the joint result of a series of reading behaviors of identification, analysis, and critical evaluation.

In sum, previous research on English reading literacy assessment has presented different views, but few studies focus on the design and validation of English reading literacy assessment instruments, especially for primary school students learning English as a foreign language (EFL) in a non-native-speaking social context. As such, the current research is designed to develop an instrument for the assessment of English reading literacy for EFL learners, in an attempt to provide empirical support for the study of English reading literacy.

Questionnaire development

The development of the English Reading Literacy Questionnaire (ERLQ) comprised two stages. The first stage aimed to select appropriate dimensions for assessing English reading literacy by referring to previous research findings and relevant official documents on reading requirements for EFL learners. The second stage concerned the development of specific assessment items through interviews with experts, researchers, and teachers in the field of English education. Specifically, the design of the questionnaire was mainly based on two aspects: the core literacy requirements of the English discipline and the requirements of English reading practice. The whole selection process of assessment dimensions and items drew on an in-depth analysis of the relevant literature and China’s Standards of English Language Ability issued by the Ministry of Education of the PRC, which is believed to have high validity and reliability owing to strict verification by authoritative departments. On this solid foundation, the assessment dimensions of English reading literacy were developed within the domains of language competence, cognitive strategy, and affective element. The questionnaire was then designed in self-report form, which facilitates the collection of a large amount of quantitative data (Robin and Scott, 2015). To avoid invalid answers, the question items are presented as declarative sentences with a 5-point Likert scale ranging from 1 (completely inconsistent) to 5 (fully consistent), which makes answers easy to rate and results easy to standardize. Considering the cognitive features of the target group, short sentences with ordinary words are used to ensure that respondents can fully understand the meaning of the statements.
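To make the rating format concrete, the sketch below shows how a single item and its 5-point coding could be represented. The item wording, its dimension label, and the intermediate scale labels are hypothetical; the paper specifies only the two endpoint labels.

```python
# Hypothetical representation of one ERLQ item and its 5-point Likert coding.
# Only the endpoint labels (1 and 5) come from the paper; the middle labels
# and the item wording are illustrative assumptions.
LIKERT_LABELS = {
    1: "completely inconsistent",
    2: "somewhat inconsistent",
    3: "uncertain",
    4: "somewhat consistent",
    5: "fully consistent",
}

item = {
    "id": "Q2",                     # hypothetical item number
    "dimension": "reading habits",  # one of the six initial dimensions
    "statement": "I read English picture books or stories after class.",
}

def validate_response(value: int) -> int:
    """Accept only integer ratings from 1 to 5."""
    if value not in LIKERT_LABELS:
        raise ValueError("response must be an integer from 1 to 5")
    return value
```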

To make the questionnaire more specific and applicable, interviews were conducted for further suggestions with 2 researchers on English teaching and 10 English teachers. On this basis, the questionnaire items were tentatively selected and formulated. The initial questionnaire was then submitted to 3 English-teaching experts from universities for further revision. The assessment framework was thus constructed with 6 dimensions, namely language knowledge, reading skills, reading strategies, cultural awareness, reading habits, and reading attitudes (see Figure 1), comprising 21 items (see Table 1).


Figure 1. Six assessment dimensions of English reading literacy.


Table 1. Assessment framework.

Methods

Sampling and data collection

The sample for this study comprised 784 primary school students (Grades 3–6) from six schools in six provinces representing different educational levels in China (2 in Eastern China, 2 in South China, and 2 in North China); 48.3% were boys (N = 379) and 51.7% were girls (N = 405). Data were collected via self-report questionnaires and analyzed with SPSS 26.0 and AMOS 23.0. First, consent to carry out the survey was obtained from the Research Ethics Committee, the headmasters of the participating schools, and the parents of the participants. Second, the content and purpose of the survey were explained to the teachers and students in detail. Third, the questionnaires were completed in class; the students were told that participation was anonymous and voluntary, and they were encouraged to answer faithfully.

Specifically, data collection consisted of two stages. In the first stage, in February 2022, 450 copies of the first version of the questionnaire were distributed and 420 valid questionnaires were obtained for data analysis, an effective response rate of 93.3%. To obtain an appropriate sample size for factor analysis, we used a sample-to-item ratio of 10:1 (Hair et al., 2010). As the initial questionnaire contained 21 items, the 420 responses were split evenly by grade into two halves: Sample Group A (N = 210) for predictive validation, including item analysis and exploratory factor analysis (EFA), of which 49.5% were boys (N = 104) and 50.5% were girls (N = 106); and Sample Group B (N = 210) for construct validation, including confirmatory factor analysis (CFA), of which 48.1% were boys (N = 101) and 51.9% were girls (N = 109). After a series of analyses, the first version of the questionnaire was revised to 14 items across 3 dimensions. In the second stage, in March 2022, the revised questionnaire was distributed to the same six schools, together with the Chinese Students’ English Rating Scale verified by authoritative departments, and the data were collected for criterion validation. Each participant was asked to fill in both the questionnaire and the scale. Three hundred and sixty-four valid questionnaires and 364 valid scales were obtained, an effective response rate of 91%. Of the 364 participants, 47.8% (N = 174) were boys and 52.2% (N = 190) were girls.
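As a minimal sketch of the first-stage split, assuming the 420 valid responses sit in a pandas DataFrame with one row per pupil, item columns Q1–Q21, and a grade column (the file and column names are assumptions):

```python
import pandas as pd

# Assumed layout: one row per pupil, item columns Q1..Q21, plus "grade".
df = pd.read_csv("erlq_round1_valid.csv")  # 420 valid first-round responses

# Halve each grade so Groups A and B mirror the overall grade distribution.
group_a = df.groupby("grade").sample(frac=0.5, random_state=1)
group_b = df.drop(group_a.index)

# ~210 per group; the split is exact when every grade count is even.
print(len(group_a), len(group_b))
```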

Data analysis

This study was conducted to develop and validate an English Reading Literacy Questionnaire for EFL learners at primary schools, following the established testing guidelines of the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education (2014). To determine whether the ERLQ has good validity, a series of analyses was conducted, including content analysis, item analysis, exploratory factor analysis, confirmatory factor analysis, reliability analysis, and correlation analysis.

Specifically, the validation of the ERLQ involved three steps. (1) Predictive validation, composed of content analysis, item analysis, and Exploratory Factor Analysis (EFA), aiming to test the validity of the questionnaire content, the discrimination of the questionnaire items, and whether the factors of the questionnaire are reasonable for reading literacy assessment. The predictive validation of the first version of the ERLQ was analyzed with the data of Sample Group A. (2) Construct validation. Construct validity refers to the correspondence between assessment dimensions and measurement variables; its aim is to test whether the construct of the questionnaire is reasonable, using Confirmatory Factor Analysis (CFA; Schmitt, 2011). Six indices were used: chi-square divided by degrees of freedom (CMIN/DF), goodness-of-fit index (GFI), adjusted goodness-of-fit index (AGFI), normed fit index (NFI), incremental fit index (IFI), and root-mean-square error of approximation (RMSEA). (3) Criterion validation and reliability. The reliability of the revised questionnaire was tested for internal consistency, which reflects the extent to which the revised questionnaire measures English reading literacy (Livingston, 2018). Criterion validity is an index of the quality of a questionnaire obtained by analyzing the relationship between the measured data and a criterion. The criterion scale was the Chinese Students’ English Rating Scale, verified by authoritative departments with high validity and reliability. A correlation analysis between the revised questionnaire and the criterion scale was conducted, yielding the correlation coefficient. The criterion validation and reliability were tested with the survey data of the 364 participants in the second stage of data collection.
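For reference, two of the step-(2) indices have simple closed forms. Writing chi-square for the model fit statistic, df for its degrees of freedom, and N for the sample size, the standard definitions (general formulas, not taken from the paper) are:

```latex
\[
\mathrm{CMIN/DF} = \frac{\chi^2}{df},
\qquad
\mathrm{RMSEA} = \sqrt{\frac{\max\left(\chi^2 - df,\; 0\right)}{df\,(N - 1)}}
\]
```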

Results of predictive validation of the initial ERLQ

Content validation

Content validity refers to the appropriateness of the questionnaire items to the measurement of the relevant dimensions: whether the items reflect the connotation of the assessment dimensions and appropriately measure the content domain under consideration. At the design stage of the ERLQ, we conducted interviews to obtain evaluative judgments on the content of the questionnaire from experts in the field of English teaching, teachers from primary schools, and researchers from research institutions. All of them considered that the questionnaire contained representative items of the domain of reading literacy and that the item statements were suitable for EFL students at the elementary level. In view of this, the ERLQ can be considered to have acceptable content validity.

Item analysis

The items were first screened with the valid data of Sample Group A by calculating each item’s standard deviation as a convergence check. It is held that if the standard deviation of a question item is below 0.5, it ought to be deleted for its convergence with the other question items (Marjanovic et al., 2015). The results show that the standard deviations of all items lie between 0.942 and 1.303, meeting the requirement for discrimination. The correlation coefficient between each item score and the total score was statistically significant (0.591 ≤ r ≤ 0.852, p < 0.01). The data were then ranked in descending order of total score, with the top 27% taken as the high group and the bottom 27% as the low group. Given the normal distribution of the data, independent-samples t-tests were used to test the discrimination (p < 0.05) and criterion value (t ≥ 3.00) of the high and low groups on each item. The results show that the differences between the high and low groups were statistically significant on all items (p < 0.01; see Table 2), indicating that the items were suitable for further analysis.
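A minimal sketch of these three screens (standard-deviation filter, item-total correlation, extreme-group t-test) with SciPy, assuming Group A’s responses are in a CSV with columns Q1–Q21 (file and column names are assumptions):

```python
import pandas as pd
from scipy import stats

items = pd.read_csv("erlq_group_a.csv")  # assumed: 210 rows, columns Q1..Q21

# 1) Convergence screen: an item with SD < 0.5 would be flagged for deletion.
flagged = items.std()[items.std() < 0.5].index.tolist()

# 2) Item-total correlation (the paper reports 0.591 <= r <= 0.852, p < 0.01).
total = items.sum(axis=1)
item_total = {q: stats.pearsonr(items[q], total) for q in items.columns}

# 3) Extreme groups: top vs. bottom 27% by total score, t-tested per item.
n27 = int(round(len(items) * 0.27))
ranked = total.sort_values(ascending=False).index
high, low = items.loc[ranked[:n27]], items.loc[ranked[-n27:]]
t_tests = {q: stats.ttest_ind(high[q], low[q]) for q in items.columns}
```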


Table 2. T-test of high and low group and correlation analysis between each item and the questionnaire.

Exploratory factor analysis

Exploratory factor analysis was then performed on the data of Sample Group A to obtain the loading of each item on each factor. First, the feasibility of factor analysis was checked. Results show that most of the correlation coefficients in the correlation matrix are greater than 0.3; the Kaiser-Meyer-Olkin (KMO) value is 0.902, well above 0.7; and the p-value of Bartlett’s test of sphericity is below 0.05, indicating that the data are suitable for exploratory factor analysis. Second, principal component analysis with varimax (maximum variance) rotation was adopted. After multiple rounds of rotation, three items (Q1, Q10, and Q11) were deleted because their factor loadings were below 0.5, and four items (Q9, Q14, Q15, and Q17) were deleted because their content was not sufficiently consistent with that of the other items in their dimension. The change from six dimensions to three is not a simple deletion but an integration of dimensions based on their theoretical relations. For example, language knowledge is the basic condition for improving reading skills, which can in turn help learners acquire more language knowledge; the two dimensions are therefore integrated into one, knowledge and skills, which is practical and valid for assessing this aspect of learners’ reading literacy. In all, 7 items were deleted and the 6 assessment dimensions were condensed into 3, namely habit and attitude, knowledge and skills, and culture and strategy. The remaining 14 items better reflect the whole connotation of English reading literacy, with a cumulative variance contribution of 59.876% (see Table 3), implying that the three assessment dimensions can basically reflect the level of learners’ reading literacy. After factor rotation, the correlation between items is high and the loading of each item lies between 0.536 and 0.825 (see Table 4), indicating that the questionnaire has good construct and content validity.
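The paper ran this analysis in SPSS 26.0; as an open-source analogue, a sketch with the factor_analyzer package (assuming its FactorAnalyzer, calculate_kmo, and calculate_bartlett_sphericity API) could look like this:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

items = pd.read_csv("erlq_group_a.csv")  # assumed layout: columns Q1..Q21

# Feasibility checks: KMO should exceed 0.7 and Bartlett's test be significant.
chi2, p_value = calculate_bartlett_sphericity(items)
_, kmo_total = calculate_kmo(items)

# Principal component extraction with varimax (maximum variance) rotation.
fa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)
to_drop = loadings.abs().max(axis=1) < 0.5        # the paper's loading cutoff
ssl, prop, cumulative = fa.get_factor_variance()  # cumulative variance explained
```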


Table 3. Explained total variance.


Table 4. Dimensions and assessment items.

Results of construct validation of the revised ERLQ

The construct validity of the revised ERLQ was tested with the data of Sample Group B. In the confirmatory factor analysis, the re-integrated dimensions were tested for the degree of fit between the improved structure and the new measurement data. The results, shown in Figure 2, indicate that the factor loading of each item is above 0.3, which is acceptable overall. The correlation coefficients between the three dimensions are greater than 0.7, showing strong positive correlations. The CMIN/DF value is 1.508, less than 3; GFI = 0.935, AGFI = 0.907, NFI = 0.909, IFI = 0.968, and RMSEA = 0.049. According to these indicators, the structure of the questionnaire fits the measurement data well and the questionnaire has high construct validity.
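The paper fit this model in AMOS 23.0; below is a sketch of an open-source analogue using the semopy package. The item-to-dimension assignment shown here is hypothetical (Table 4 defines the actual mapping), the file name is assumed, and the set of indices semopy reports may differ from the AMOS output.

```python
import pandas as pd
from semopy import Model, calc_stats

# Hypothetical measurement model: 14 items on 3 correlated latent dimensions.
DESC = """
habit_attitude   =~ Q2 + Q3 + Q4 + Q5
knowledge_skills =~ Q6 + Q7 + Q8 + Q12 + Q13
culture_strategy =~ Q16 + Q18 + Q19 + Q20 + Q21
"""

data = pd.read_csv("erlq_group_b.csv")  # Sample Group B (N = 210), assumed file
model = Model(DESC)
model.fit(data)

# calc_stats returns a table of fit indices (chi-square, df, GFI, AGFI, NFI,
# RMSEA, ...) for comparison against the thresholds reported above.
print(calc_stats(model).T)
```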


Figure 2. The improved structure of the questionnaire.

Results of reliability and criterion validation of the revised ERLQ

Reliability

Reliability refers to the stability of a measurement instrument and is usually tested via the Cronbach’s α coefficient (Livingston, 2018). With the 364 samples of survey data, this study used SPSS 26.0 to calculate the internal consistency coefficient of each dimension and of the total questionnaire. The results show that the Cronbach’s α of the ERLQ is 0.904, greater than 0.5 (Tabachnick et al., 2007), and the α coefficient of each dimension lies between 0.729 and 0.823, indicating that the internal consistency of each dimension and of the total questionnaire is high and the ERLQ has high reliability (see Table 5).
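Cronbach’s alpha can also be computed directly from the item and total-score variances; a minimal self-contained sketch (file and column names are assumptions):

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

responses = pd.read_csv("erlq_round2.csv")  # 364 second-round responses, assumed
item_cols = [c for c in responses.columns if c.startswith("Q")]
print(f"total alpha: {cronbach_alpha(responses[item_cols]):.3f}")  # paper: 0.904
```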


Table 5. Reliability coefficient of the questionnaire.

Criterion validity

Criterion validity refers to the correlation of measurement results between a developed instrument and a criterion scale (Peterson et al., 2010). By calculating each student’s total score on all items, this study obtained two sets of scores, one from the ERLQ and one from the Chinese Students’ English Rating Scale, and conducted bivariate Pearson correlation analysis in SPSS. The results show a correlation coefficient of 0.871 (p < 0.01), implying a highly positive correlation between reading literacy and language proficiency (see Table 6) and indicating that the revised questionnaire has good criterion validity.
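A sketch of this criterion analysis with SciPy, assuming both total scores can be aligned per pupil in one table (file and column names are assumptions):

```python
import pandas as pd
from scipy.stats import pearsonr

merged = pd.read_csv("erlq_with_criterion.csv")  # one row per pupil, assumed
item_cols = [c for c in merged.columns if c.startswith("Q")]

erlq_total = merged[item_cols].sum(axis=1)
r, p = pearsonr(erlq_total, merged["rating_scale_total"])  # criterion column assumed
print(f"r = {r:.3f}, p = {p:.3g}")  # the paper reports r = 0.871, p < 0.01
```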


Table 6. Correlation analysis between the ERLQ and Chinese Students’ English Rating Scale.

Discussion

This paper has presented the development of an assessment questionnaire on English reading literacy for pupils, during which a variety of validation analyses were conducted. Results show that the questionnaire developed in the current study has high validity and reliability and can be used as an assessment tool for the intended EFL learners.

The multidimensionality of reading literacy assessment

The results of the study showed that the 14 items extracted from the initial 21 had sufficient loading values to reflect the connotation of reading literacy. Specifically, the four-item dimension of habit and attitude, the five-item dimension of knowledge and skills, and the five-item dimension of culture and strategy exhibited a good level of fit in the structure of the ERLQ. The item-total correlations showed that all items contributed moderately to strongly to the ERLQ. These results further confirm the multidimensionality of reading literacy assessment suggested by previous studies (Fletcher, 2006; Lordán et al., 2017), demonstrating that the reading literacy of EFL learners is affected by different purposes or reasons and that learners differ in many facets of language, cognition, and emotion as readers. This is consistent with the views of Scott (2011), who notes that language knowledge and skills are fundamental to reading, culture and strategy to deep comprehension of reading materials, and habit and attitude to reading and lifelong development. The assessment of reading literacy can thus be understood as a multidimensional construction of different components related to the reading process, in line with previous findings (Mullis and Martin, 2019; OECD, 2019; O’Grady et al., 2021) that highlight the two-way interaction between the reader and the text. Although many other dimensions and factors influence reading literacy, such as the home literacy environment (Chow et al., 2015; Bergen et al., 2017; Zhang et al., 2020) and educational policy (Fuchs et al., 2019), this does not mean that the assessment dimensions of a questionnaire can be all-inclusive. The multidimensionality of reading literacy assessment has its limits, especially for pupils, because the cognitive level of primary school students is still developing: their attention, memory, thinking, and other faculties are in transition from lower-level to higher-level development. The content and methods of English reading literacy assessment naturally need to match the cognitive characteristics of their age.

The correlation between students’ reading literacy and their language proficiency

The results of this study also showed a strong positive relationship between reading literacy and language proficiency, with a correlation of 0.871 from the criterion analysis, in accordance with previous studies (Stephen et al., 2002; Welcome and Meza, 2019) and implying that language proficiency plays a crucial role in students’ reading literacy. The ERLQ’s reliability is bolstered by this positive correlation with language proficiency. The correlation values in Figure 2 (e.g., 0.72, 0.75, and 0.83) show that the dimensions are three distinct but highly interrelated constructs, suggesting that reading literacy and language proficiency interact to influence reading performance. These findings provide empirical support for the construction of the reading literacy assessment dimensions, in line with the views of Mirza et al. (2016) and Sun et al. (2022), who emphasize that excellent reading literacy benefits the improvement of language proficiency and vice versa. As such, in the EFL context, it is necessary to consider the effects of language proficiency on reading literacy. We believe that sufficient language proficiency can effectively curb low-level, ineffective reading processes; similarly, reading strategies can help readers comprehend reading materials deeply. The correlations between students’ reading literacy and language proficiency indirectly illustrate the complexity of the reading process (Uccelli et al., 2015). To some extent, reading is a process of interaction, communication, and re-creation in thinking between the reader and the author, requiring the integration and coordination of linguistic, cognitive, and affective factors. Notably, since the reading literacy of a reader’s native language has a positive transfer effect on that of his or her foreign language (Eibensteiner, 2023), reading habits and reading attitude matter even more than language proficiency in the assessment framework (see Table 3), which is consistent with previous research (Kaderavek et al., 2014; Nootens et al., 2018; Lopes et al., 2022) demonstrating that individual behaviors of the reader should be measured more widely and deeply.

The dynamic adjustment of ERLQ

The results of this study also indicated that the design of reading literacy assessment is a dynamic process, showing the features of openness, diversity, and constructiveness. In the exploratory factor analysis, 7 items were deleted from the first version of the questionnaire, further proving that it is necessary to choose appropriate aspects and relevant items for the assessment of reading literacy. It is very important to keep a balance between assessment standards and questionnaire design, so that the most credible aspects of reading literacy can be assessed. This finding is in line with previous research (Conrad et al., 2013; Sadeghi, 2021) highlighting the development of the reader’s psychological and cognitive behaviors in the reading process. Notably, reading literacy assessment itself is not a product but a process, much like the well-known international reading literacy assessments PISA and PIRLS, which release a new assessment framework every few years and refine the definition of reading literacy (Zuckerman et al., 2013). Unlike previous proposals for a standardized system to assess English reading literacy, this self-report questionnaire can be adjusted dynamically to different research requirements through the simple addition or modification of a few items. With regard to the dynamic assessment of English reading literacy in the EFL context, the key point is to reduce cultural and linguistic bias and focus on reading behaviors, which is consistent with previous research (Petersen and Gillam, 2013; Navarro and Lara, 2017). Hence, the structure and content of the English reading literacy assessment questionnaire should undergo ongoing modification as the target participants and the prevailing view of reading change. The assessment itself is not an end but a new start for improving learners’ reading literacy.

Conclusion

The development of the English reading assessment questionnaire is a multidimensional and dynamic process of component construction (Fletcher, 2006). It involves not only linguistic and cognitive factors but also affective elements such as attitude, habit, and motivation (Schiefele et al., 2012; Pekrun et al., 2017; Nootens et al., 2018), together with the characteristics of the reading process, the research findings on reading literacy, and the learners’ learning stages and cultural backgrounds (Zuikowski et al., 2019; Calet et al., 2020). From the development of the English reading literacy assessment questionnaire in this study, the following conclusions are drawn. (1) The design of an assessment instrument for EFL reading literacy is a complex task in which several factors should be fully considered, including the intended readers’ foreign language proficiency at different stages of learning and their cognitive and affective elements. (2) In the design process, relevant research results and different views about reading and reading literacy should be used for reference. In addition, different cultural and educational backgrounds, as well as different requirements or assessment standards, should be taken into consideration so as to meet the needs of assessment in specific regions or countries. (3) In terms of assessment, the questionnaire on English reading literacy developed in this study for primary school students has good reliability and validity and can be used as an assessment tool for the intended audience.

It should also be noted that English reading literacy is an ever-developing concept, and so is its assessment. With the innovation of reading media and changes in reading methods, the connotation and extension of reading literacy will be enriched accordingly (Gil et al., 2015; Serafini et al., 2020), and its assessment should be updated in step. Although the cultivation of English reading literacy is a step-by-step process, it does not follow a track from a single dimension to multiple dimensions; it is a continuous process of accumulation across dimensions.

Implications and limitations

The study has both theoretical and practical implications. Theoretically, on one hand, it offers extended knowledge for constructing the assessment dimensions of reading literacy in the EFL context, which is conducive to the improvement of language assessment theory. On the other hand, as the results indicate, more dimensions and factors are likely to be included in future research, reflecting the diversified development of reading literacy. Practically, the questionnaire developed in this study may serve as a reading assessment model for educational practitioners in other EFL countries and regions. The features of the participants and their places of residence should be fully considered in the design and application of any questionnaire.

Although the findings of this study provide data-based support for the validity and reliability of the developed questionnaire, limitations should be acknowledged. The sample is relatively small, and the participants were mainly from Grades 3 to 6 in six primary schools in China; more participants need to be involved to validate the ERLQ for more convincing results. Besides, although the sample covers most of the primary school English learners participating in this survey, this study did not distinguish between lower-grade and upper-grade participants. Further research may assess the differences between them.

It is hard to establish an effective and universal assessment criterion appropriate to all EFL learners in different regions or countries. Thus, in the future, minor modifications of the ERLQ may be made for similar target audiences in order to better meet different research purposes or requirements.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving human participants were reviewed and approved by the Research Ethics Committee of Qufu Normal University, the headmasters of the participating schools, and the parents of the participants. Written informed consent to participate in this study was provided by the participants’ legal guardian/next of kin.

Author contributions

WL organized the database, performed the statistical analysis, and wrote the paper. SK and YS contributed to conception and design of the study. All authors contributed to the article and approved the submitted version.

Funding

This research was funded by a grant from the National Social Science Fund of China (grant number: BEA 180110).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Author disclaimer

The views expressed in this paper reflect the opinions of the authors and not the funding agency or the authors’ respective institutions.

References

Adrian, S. C., Linderman, K. W., and Schroeder, R. G. (2007). Method and psychological effects on learning behaviors and knowledge creation in quality improvement projects. Manag. Sci. 53, 437–450. doi: 10.1287/mnsc.1060.0635

Aghaie, R., and Zhang, L. J. (2012). Effect of explicit instruction in cognitive and metacognitive Reading strategies on Iranian EFL students’ Reading performance and strategy transfer. Instr. Sci. 40, 1063–1081. doi: 10.1007/s11251-011-9202-5

Alderson, J. C., and Hughes, A. (1981). Issues in language testing. London: The British Council.

American Educational Research Association, American Psychological Association, and National Council on Measurement in Education (2014). Standards for educational and psychological testing. Washington, DC: AERA.

Anikina, Z. (2022). Integration of engineering education and the humanities: Global intercultural perspectives. Cham: Springer Nature.

Applegate, A. J., Applegate, M. D., McGeehan, C. M., Pinto, C. M., and Kong, A. (2009). The assessment of thoughtful literacy in NAEP: why the states Aren’t measuring up. Read. Teach. 62, 372–381. doi: 10.1598/RT.62.5.1

Arabski, J., and Wojtaszek, A. (2011). Aspects of culture in second language acquisition and foreign language learning. Berlin: Springer.

Avalos, M. A., Plasencia, A., Chavez, C., and Rascόn, J. (2007). Modified guided Reading: gateway to English as a second language and literacy learning. Read. Teach. 61, 318–329. doi: 10.1598/RT.61.4.4

Baker, W. (2015). Research into practice: cultural and intercultural awareness. Lang. Teach. 48, 130–141. doi: 10.1017/S0261444814000287

Bergen, E., Zuijen, T., Bishop, D., and Jong, P. F. (2017). Why are home literacy environment and Children’s Reading skills associated? What parental skills reveal. Read. Res. Q. 52, 147–160. doi: 10.1002/rrq.160

Broek, P. V. D., Espin, C. A., and Burns, M. K. (2012). Connecting cognitive theory and assessment: measuring individual differences in Reading comprehension. Sch. Psychol. Rev. 41, 315–325. doi: 10.1080/02796015.2012.12087512

Byram, M. (2012). Language awareness and (critical) cultural awareness—relationships, comparisons, and contrasts. Lang. Aware. 21, 5–13. doi: 10.1080/09658416.2011.639887

Calet, N., Rocío, L. R., and Gracia, J. F. (2020). Do Reading comprehension assessment tests result in the same Reading profile? A study of Spanish primary school children. J. Res. Read. 43, 98–115. doi: 10.1111/1467-9817.12292

Charles, P., and Joseph, S. (2014). Word knowledge in a theory of Reading comprehension. Sci. Stud. Read. 18, 22–37. doi: 10.1080/10888438.2013.827687

Cheung, W. M., Tse, S. K., Lam, J. W. I., and Loh, E. K. Y. (2009). Progress in international Reading literacy study 2006 (PIRLS): pedagogical correlates of fourth-grade students in Hong Kong. J. Res. Read. 32, 293–308. doi: 10.1111/j.1467-9817.2009.01395.x

Chinyamurindi, W. T., and Dlaza, Z. (2018). Can you teach an old dog new tricks? An exploratory study into how a sample of lecturers develop digital literacies as part of their career development. Read. Writ. 9, 1–8. doi: 10.4102/rw.v9i1.191

Chiu, M. M., Catherine, M. C., and Lin, D. (2012). Ecological, psychological, and cognitive components of Reading difficulties: testing the component model of Reading in fourth graders across 38 countries. J. Learn. Disabil. 45, 391–405. doi: 10.1177/0022219411431241

Chow, B. W. Y., Chui, B. H. T., Lai, M. W. C., and Kwok, S. Y. C. L. (2015). Differential influences of parental home literacy practices and anxiety in English as a foreign language on Chinese Children's English development. Int. J. Biling. Educ. Biling. 20, 625–637. doi: 10.1080/13670050.2015.1062468

Conrad, N. J., Harris, N., and Williams, J. (2013). Individual differences in Children's literacy development: the contribution of orthographic knowledge. Read. Writ. 26, 1223–1239. doi: 10.1007/s11145-012-9415-2

Cox, T. L., Brown, J., and Bell, T. R. (2019). Foreign language proficiency in higher education. Cham: Springer Nature.

Dixon, C., Oxley, E., Gellert, A. S., and Nash, H. (2023). Dynamic assessment as a predictor of Reading development: a systematic review. Read. Writ. 36, 673–698. doi: 10.1007/s11145-022-10312-3

Dyke, J. A. V. (2021). Introduction to the special issue: mechanisms of variation in Reading comprehension: processes and products. Sci. Stud. Read. 25, 93–103. doi: 10.1080/10888438.2021.1873347

Education Bureau of the Hong Kong (2022). Learning from Reading. Available at: https://www.edb.gov.hk/tc/curriculum-development/4-key-tasks/reading-to-learn/index.html

Eibensteiner, L. (2023). Complex transfer processes in multilingual language (L3/Ln) Acquisition of Spanish Past Tenses: the role of non-native language (L2) transfer. Int. J. Multiling. 20, 1–19. doi: 10.1080/14790718.2022.2164768

Fletcher, J. M. (2006). Measuring Reading comprehension. Sci. Stud. Read. 10, 323–330. doi: 10.1207/s1532799xssr1003_7

Fuchs, S., Kahn-Horwitz, J., and Katzir, T. (2019). Theory and reported practice in EFL literacy instruction: EFL Teachers' perceptions about classroom practices. Ann. Dyslexia 69, 114–135. doi: 10.1007/s11881-018-00172-4

García, J. R., and Cain, K. (2014). Decoding and Reading comprehension: a Meta-analysis to identify which reader and assessment characteristics influence the strength of the relationship in English. Rev. Educ. Res. 84, 74–111. doi: 10.3102/0034654313499616

Gellert, A. S., and Elbro, C. (2017). Try a little bit of teaching: a dynamic assessment of word decoding as a kindergarten predictor of word Reading difficulties at the end of grade 1. Sci. Stud. Read. 21, 277–291. doi: 10.1080/10888438.2017.1287187

Gil, L., Martinez, T., and Vidal-Abarca, E. (2015). Online assessment of strategic Reading literacy skill. Comput. Educ. 82, 50–59. doi: 10.1016/j.compedu.2014.10.026

Goldenberg, C. (2020). Reading wars, Reading science, and English learners. Read. Res. Q. 55, 131–144. doi: 10.1002/rrq.340

Gollan, T. H., Weissberger, G. H., Runnqvist, E., Montoya, R. I., and Cera, C. M. (2012). Self-ratings of spoken language dominance: a multi-lingual naming test (MINT) and preliminary norms for young and aging Spanish-English bilinguals. Biling. Lang. Cogn. 15, 594–615. doi: 10.1017/S1366728911000332

Grabe, W. (2009). Reading in a second language: Moving from theory to practice. New York: Cambridge University Press.

Green, B., Cormack, P., and Patterson, A. (2013). Re-Reading the Reading lesson: episodes in the history of Reading pedagogy. Oxf. Rev. Educ. 39, 329–344. doi: 10.1080/03054985.2013.808617

Hair, J. F., Black, W. C., Babin, B. J., and Anderson, R. E. (2010). Multivariate data analysis (7th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.

Hunstinger, C. S., Jose, P. E., and Luo, Z. (2016). Parental facilitation of early mathematics and Reading skills and knowledge through encouragement of home-based activities. Early Child Res. Q. 37, 1–15. doi: 10.1016/j.ecresq.2016.02.005

Kaderavek, J. N., Guo, Y. G., and Justice, L. M. (2014). Validity of the Children’s orientation to book Reading rating scale. J. Res. Read. 37, 159–178. doi: 10.1111/j.1467-9817.2012.01528.x

Katzir, T., Kim, Y. S. G., and Dotan, S. (2018). Reading self-concept and Reading anxiety in second grade children: the roles of word Reading, emergent literacy skills, working memory and gender. Front. Psychol. 9, 1–13. doi: 10.3389/fpsyg.2018.01180

Kell, M., and Kell, P. (2014). Literacy and language in East Asia. Singapore: Springer.

Kirby, J. R., Deacon, S. H., Bowers, P. N., Izenberg, L., Lesly, W., and Parrila, R. (2012). Children’s morphological awareness and Reading ability. Read. Writ. 25, 389–410. doi: 10.1007/s11145-010-9276-5

Knutson, E. M. (2006). Cross-cultural awareness for second/foreign language learners. Can. Mod. Lang. Rev. 62, 591–610. doi: 10.3138/cmlr.62.4.591

Levin-Aspenson, H. F., and Watson, D. (2018). Mode of administration effects in psychopathology assessment: analyses of gender, age, and education differences in self-rated versus interview-based depression. Psychol. Assess. 30, 287–295. doi: 10.1037/pas0000474

Li, M. (2018). A study on the influence of non-intelligence factors on college students’ English learning achievement based on C4.5 algorithm of decision tree. Wirel. Pers. Commun. 102, 1213–1222. doi: 10.1007/s11277-017-5177-0

Li, H., Shu, H., McBride-Chang, C., Liu, H., and Peng, H. (2012). Chinese Children’s character recognition: Visuo-orthographic, phonological processing and morphological skills. J. Res. Read. 35, 287–307. doi: 10.1111/j.1467-9817.2010.01460.x

Liu, J., and Wu, S. (2019). A study on English proficiency scale in China. Beijing: Higher Education Press.

Livingston, S. A. (2018). Test reliability — Basic concepts. Princeton, NJ: Educational Testing Service.

Lopes, J., Oliveira, C., and Costa, P. (2022). School and student determinants of Reading performance: a multilevel analysis with Portuguese students. Rev. Psicodidact. 27, 29–37. doi: 10.1016/j.psicoe.2021.05.001

Lordán, E., Solé, I., and Beltran, F. (2017). Development and initial validation of a questionnaire to assess the Reading beliefs of undergraduate students: the Cuestionario De Creencias sobre La Lectura. J. Res. Read. 40, 37–56. doi: 10.1111/1467-9817.12051

Marjanovic, Z., Holden, R., Struthers, W., Cribbie, R., and Greenglass, E. (2015). The inter-item standard deviation (ISD): an index that discriminates between conscientious and random responders. Pers. Individ. Differ. 84, 79–83. doi: 10.1016/j.paid.2014.08.021

Ministry of Education of the PRC (2020). General senior high school curriculum standards: English. Beijing: People's Education Press.

Ministry of Education of the PRC (2022). Curriculum standards: English for compulsory education. Beijing: People's Education Press.

Mirza, A., Gottardo, A., and Chen, X. (2016). Reading in multilingual learners of Urdu (L1), English (L2) and Arabic (L3). Read. Writ. 30, 187–207. doi: 10.1007/s11145-016-9669-1

Mullis, I. V. S., and Martin, M. O. (2019). PIRLS 2021 Assessment Frameworks. Boston: TIMSS & PIRLS International Study Center, IEA.

Navarro, J. J., and Lara, L. (2017). Dynamic assessment of Reading difficulties: predictive and incremental validity on attitude toward Reading and the use of dialogue/participation strategies in classroom activities. Front. Psychol. 8:173. doi: 10.3389/fpsyg.2017.00173

Nevo, E., Vered, V. N., Brande, S., and Gambrell, L. (2020). Oral Reading fluency, Reading motivation and Reading comprehension among second graders. Read. Writ. 33, 1945–1970. doi: 10.1007/s11145-020-10025-5

NGA and CCSSO (2010). Common Core state standards for English language arts and literacy in history/social students, science, and technical subjects. Washington DC: National Governors Association Center for Best Practices, and Council of Chief State School Officers.

Nootens, P., Morin, M. F., Alamargot, D., Goncalves, C., Venet, M., and Labrecque, A. (2018). Differences in attitudes toward Reading: a survey of pupils in grades 5 to 8. Front. Psychol. 9:2773. doi: 10.3389/fpsyg.2018.02773

NRP (2000). Report of the National Reading Panel: Teaching children to read. Washington DC: Department of Health and Human Services.

O’Grady, K., Houme, K., Costa, E., Rostamian, A., and Tao, Y. (2021). PCAP 2019 report on the Pan-Canadian assessment of mathematics, Reading, and science. Toronto: Council of Ministers of Education.

OECD (2019). PISA 2018 assessment and analytical framework. Paris: OECD Publishing.

OECD (2021). PISA 2025 foreign language assessment framework. Paris: OECD Publishing.

Pekrun, R., Lichtenfeld, S., Marsh, H. W., Murayama, K., and Goetz, T. (2017). Achievement emotions and academic performance: longitudinal models of reciprocal effects. Child Dev. 88, 1653–1670. doi: 10.1111/cdev.12704

Petersen, D. B., and Gillam, R. B. (2013). Predicting Reading ability for bilingual Latino children using dynamic assessment. J. Learn. Disabil. 48, 3–21. doi: 10.1177/0022219413486930

Peterson, P., Baker, E., and McGaw, B. (2010). International encyclopedia of education (3rd Edn.). Amsterdam: Elsevier Science Publisher.

Piasta, S. B., Groom, L. J., Khan, K. S., Skibbe, L. E., and Bowles, R. P. (2018). Young children’s narrative skill: concurrent and predictive associations with emergent literacy and early word reading skills. Read. Writ. 31, 1479–1498. doi: 10.1007/s11145-018-9844-7

Quintino, R. M., and Julia, M. G. (2018). Direct and indirect effects of print exposure on silent Reading fluency. Read. Writ. 31, 483–502. doi: 10.1007/s11145-017-9794-5

Reilly, D., Neumann, D., and Andrews, G. (2019). Gender difference in Reading and writing achievement: evidence from the National Assessment of educational Progress (NAEP). Am. Psychol. 74, 445–458. doi: 10.1037/amp0000356

Rendell, L., Fogarty, L., Hoppitt, W. J. E., Morgan, T. J. H., Webster, M. M., and Laland, K. N. (2010). Cognitive culture: theoretical and empirical insights into social learning strategies. Trends Cogn. Sci. 15, 68–76. doi: 10.1016/j.tics.2010.12.002

Rindermann, H. (2007). The g-factor of international cognitive ability comparisons: the homogeneity of results in PISA, TIMSS, PIRLS and IQ-tests across nations. Eur. J. Pers. 21, 667–706. doi: 10.1002/per.634

Robin, L. C., and Scott, O. L. (2015). The encyclopedia of clinical psychology. New York: John Wiley & Sons, Inc.

Sadeghi, K. (2021). Assessing second language Reading: Insights from cloze tests. Cham: Springer Nature.

Schiefele, U., Schaffner, E., Möller, J., and Wigfield, A. (2012). Dimensions of Reading motivation and their relation to Reading behavior and competence. Read. Res. Q. 47, 427–463. doi: 10.1002/RRQ.030

Schmitt, T. A. (2011). Current methodological considerations in exploratory and confirmatory factor analysis. J. Psychoeduc. Assess. 29, 304–321. doi: 10.1177/0734282911406653

Scott, C. M. (2011). Assessment of language and literacy: a process of hypothesis testing for individual differences. Top. Lang. Disord. 31, 24–39. doi: 10.1097/TLD.0b013e31820a100d

Serafini, F., Moses, L., Kachorsky, D., and Rylak, D. (2020). Incorporating multimodal literacies into classroom-based Reading assessment. Read. Teach. 74, 285–296. doi: 10.1002/trtr.1948

Solheim, O. J., and Lundetræ, K. (2018). Can test construction account for varying gender differences in international Reading achievement tests of children, adolescents, and young adults? A study based on Nordic results in PIRLS, PISA, and PIAAC. Assess. Educ. 25, 107–126. doi: 10.1080/0969594x.2016.1239612

Stephen, R. B., Steven, A. H., and Christopher, J. L. (2002). Relations of the home literacy environment (HLE) to the development of Reading-related abilities: a one-year longitudinal study. Read. Res. Q. 37, 408–426. doi: 10.1598/RRQ.37.4.4

Stutz, F., Schaffner, E., and Schiefele, U. (2017). Measurement invariance and validity of a brief questionnaire on Reading motivation in elementary students. J. Res. Read. 40, 439–461. doi: 10.1111/1467-9817.12085

Sun, R. C. F., and Hui, E. K. P. (2012). Cognitive competence as a positive youth development construct: a conceptual review. Sci. World J. 2012, 1–7. doi: 10.1100/2012/210953

Sun, X., Zhang, K., Marks, R. A., Nickerson, N., Eggleston, R. L., Yu, C. L., et al. (2022). What’s in a word? Cross-linguistic influences on Spanish-English and Chinese-English bilingual Children’s word Reading development. Child Dev. 93, 84–100. doi: 10.1111/cdev.13666

Svjetlana, K.-V., and Igor, B. (2006). Metacognitive strategies and Reading comprehension in elementary-school students. Eur. J. Psychol. Educ. 21, 439–451. doi: 10.1007/BF03173513

Tabachnick, B. G., Fidell, L. S., and Ullman, J. B. (2007). Using multivariate statistics (5th Edn.). Boston, MA: Pearson.

U.S. Department of Education (2022). Reading assessment framework for the 2022 and 2024 National Assessment of educational Progress. Washington DC: National Assessment Governing Board.

Uccelli, P., Galloway, E. P., Barr, C. D., Meneses, A., and Dobbs, C. (2015). Beyond vocabulary: exploring cross-disciplinary academic-language proficiency and its association with Reading comprehension. Read. Res. Q. 50, 337–356. doi: 10.1002/rrq.104

Wang, S. (2012). Developing and validating descriptors of languages comprehension ability for Chinese learners of English. Beijing: Intellectual Property Press.

Wang, W., and Gan, Z. (2021). Development and validation of the Reading motivation questionnaire in an English as a foreign language context. Psychol. Sch. 58, 1151–1168. doi: 10.1002/pits.22494

Wang, X., and Jin, Y. (2021). A validation of the Chinese motivation for Reading questionnaire. J. Lit. Res. 53, 336–360. doi: 10.1177/1086296X211030474

Watkins, T. (2016). The cultural dimension of cognition. Quat. Int. 405, 91–97. doi: 10.1016/j.quaint.2015.02.049

Welcome, S. E., and Meza, R. A. (2019). Dimensions of the adult Reading history questionnaire and their relationships with Reading ability. Read. Writ. 32, 1295–1317. doi: 10.1007/s11145-018-9912-z

Zhang, S. Z., Inoue, T., Shu, H., and Georgiou, G. K. (2020). How does home literacy environment influence Reading comprehension in Chinese? Evidence from a 3-year longitudinal study. Read. Writ. 33, 1745–1767. doi: 10.1007/s11145-019-09991-2

Zuckerman, G. A., Kovaleva, G. S., and Kuznetsova, M. I. (2013). Between PIRLS and PISA: the advancement of Reading literacy in a 10-15-year-old cohort. Learn. Individ. Differ. 26, 64–73. doi: 10.1016/j.lindif.2013.05.001

Zuikowski, S. S., Piper, B., Kwayumba, D., and Dubeck, M. (2019). Examining options for Reading comprehension assessment in international contexts. J. Res. Read. 42, 583–599. doi: 10.1111/1467-9817.12285

Keywords: EFL learners, assessment instrument, reading literacy, elementary level, questionnaire development

Citation: Li W, Kang S and Shao Y (2023) Development of the reading literacy questionnaire for EFL learners at primary schools. Front. Psychol. 14:1154076. doi: 10.3389/fpsyg.2023.1154076

Received: 30 January 2023; Accepted: 27 April 2023;
Published: 12 May 2023.

Edited by:

Shelia Kennison, Oklahoma State University, United States

Reviewed by:

Sebastian Weirich, Institute for Educational Quality Improvement (IQB), Germany
Anita Habók, University of Szeged, Hungary

Copyright © 2023 Li, Kang and Shao. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Shumin Kang, kangsm2015@163.com
