- 1School of Foreign Languages, Suranaree University of Technology, Nakhon Ratchasima, Thailand
- 2Faculty of Arts and Humanities, Asia-Pacific International University, Mueang Saraburi District, Thailand
Despite increased integration of technology in language education, many EFL learners in Thailand continue to struggle with spoken English proficiency. Traditional instruction often lacks sufficient support for real-time speech development, and limited research has explored the use of AI-supported reading tools to address this gap. This study aimed to examine whether the use of Reading Assistant (RA) software enhances speaking fluency, grammatical accuracy, and narrative structure quality among Thai university students. A quantitative research approach was employed, involving 104 undergraduate EFL students over a 15-week intervention period. Pre- and post-tests assessed fluency (e.g., speech rate, word count), accuracy (error-free clauses), and narrative quality (coherence, sequencing, detail). Correlational analysis explored the relationship between RA software engagement and language development. Students demonstrated significant improvements in fluency and a reduction in disfluencies after sustained use of RA software. Engagement with the software was positively correlated with improvements in both fluency and grammatical accuracy. An enhanced narrative structure was also observed, particularly in terms of coherence and relevance to the visual prompts. These findings support the value of AI-supported reading tools in developing oral language skills and underscore the need for broader institutional support to ensure equitable access.
Background of the study
Language acquisition, particularly in the context of English as a Foreign Language (EFL), is a multifaceted process that encompasses the development of various linguistic competencies, including speaking fluency and accuracy (Goh, 2007). Speaking, as a productive skill, is often considered one of the most challenging aspects of language learning due to its cognitive demands and the need to process linguistic, sociolinguistic, and pragmatic knowledge in real time (Goh, 2007; Tuan and Mai, 2015). Traditional language teaching methodologies, particularly those emphasizing grammatical accuracy, have been criticized for their limited effectiveness in fostering oral proficiency. Studies have shown that overemphasizing grammatical precision, as seen in the Grammar Translation Method (GTM), often leads learners to hesitate to speak (Akramy et al., 2022; Richards and Rodgers, 2001). This has prompted a shift toward more communicative and interactive approaches that prioritize fluency and real-world language use (Dinçer et al., 2012).
In recent years, the integration of technology, such as digital apps, video games, and Reading Assistant (RA) software, has emerged as a promising pedagogical approach to support and enhance language learning outcomes (Habók et al., 2025; Bakken et al., 2021; Dinda et al., 2025; Kitjaroonchai et al., 2024; Ostiz-Blanco et al., 2021; Siengyen and Wasanasomsithi, 2024; Sunghyo et al., 2025). Digital tools often incorporate features like speech recognition, instant feedback, and interactive exercises, which provide learners with opportunities for repeated practice and self-assessment (Kitjaroonchai and Maywald, 2024; Wilang et al., 2025). Such features align with the principles of effective language instruction, which emphasize the importance of constructive feedback, interactive learning, and reducing anxiety in fostering speaking competence (Celce-Murcia, 2001; Oradee, 2012). Moreover, technology can offer personalized learning experiences that cater to the diverse needs and proficiency levels of learners across different demographics (Derakhshan et al., 2016). For instance, Bakken et al. (2021) reported in their systematic review and meta-analysis that students with intellectual developmental disorders benefit substantially from reading interventions that combine decoding and sight-word instruction.
Recent studies have increasingly examined the role of AI-driven and technology-based tools in supporting EFL learning across diverse contexts. In the Thai context, Kitjaroonchai and Maywald (2024) investigated the impact of RA software on speaking fluency. Their findings indicated that consistent use of the software led to significant improvements in students’ speaking fluency and grammatical accuracy. In another study, Kitjaroonchai et al. (2024) demonstrated positive relationships between RA software use and reading comprehension among Thai university students, highlighting the potential of such tools for language development in local contexts. The software’s interactive features, which provided learners with opportunities for repeated practice and immediate feedback, were identified as critical factors contributing to these gains. Elsewhere, Li (2020) examined learners’ use of RA software and reported improvements in reading aloud proficiency and speaking performance. Other studies have shown that digital reading platforms can significantly enhance EFL learners’ comprehension (Silor and Silor, 2025). The software’s capacity to deliver targeted exercises and corrective feedback was highlighted as a key mechanism driving these improvements.
The efficacy of RA software can also be attributed to its adaptive learning capabilities. These programs typically adjust to the individual learner’s needs, providing a tailored experience that addresses specific weaknesses in both reading and speaking (Inthanon and Wised, 2024). This adaptive nature has been shown to benefit diverse learner groups. For example, Wilang et al. (2025) found a positive relationship between reading fluency and oral language performance using RA software, supporting its application in multilingual and mixed-ability classrooms. However, previous research suggests that excessive reliance on AI-related tools can lead to adverse learning outcomes under certain conditions (Silor and Silor, 2025).
While studies have explored the role of technology in enhancing reading, the impact of such tools on speaking proficiency, particularly in terms of fluency and accuracy, has not been thoroughly investigated across various EFL contexts (Kitjaroonchai and Maywald, 2024; Wilang et al., 2025). This is a critical area of inquiry, as speaking fluency and accuracy are essential components of communicative competence, enabling learners to engage effectively in academic, professional, and social contexts (Li, 2020). Moreover, beyond fluency and accuracy, another crucial yet understudied dimension of speaking performance is narrative structure quality, particularly in tasks that require learners to construct and deliver coherent spoken stories.
Storytelling tasks are increasingly recognized for their value in language learning, particularly for augmenting learners’ discourse competence. These tasks compel learners to logically organize ideas, structure events coherently, and articulate meaning effectively (Dosi and Douka, 2021; Ngoi et al., 2024). The quality of narrative structure, expressed through logical sequencing, coherence, descriptive detail, and relevance, serves as a significant indicator of communicative proficiency, especially in oral performance (Oroujlou and Haghjou, 2012). For instance, studies have demonstrated that learners who engaged in oral narrative instruction improved their overall narrative skills, mapping the structure of oral narratives to both reading comprehension and written expression (Kirby et al., 2021). Thus, narrative structure is crucial, as it directly relates to a learner’s ability to communicate effectively.
Moreover, while existing research indicates that guided reading and listening practices enhance narrative organization, much of this literature tends to focus on written narratives or comprehension outcomes (Szécsi, 2021). There is a notable lack of research examining how technology, specifically AI-supported tools like RA software, affects learners’ ability to produce well-structured spoken narratives. RA software can provide scaffolded reading and pronunciation support, potentially helping learners internalize story structures and enhance their vocabulary use (Ngoi et al., 2024). Notably, the impact of such tools on oral narrative organization remains underexplored, warranting further empirical investigation to determine their efficacy in improving narrative quality during oral storytelling tasks (Ngoi et al., 2024). As Choo et al. (2020) argue, digital scaffolding tools can facilitate oral story construction by providing learners with repetitive exposure to structured narrative models.
To address these gaps, the present study investigates the effectiveness of RA software in improving EFL learners’ speaking performance. It focuses on four core objectives aligned with four research hypotheses. First, the study investigates the relationship between students’ engagement with the software and their post-test performance in both fluency and grammatical accuracy, with the expectation that higher levels of interaction with RA software will lead to greater improvements in these areas (Hypothesis 1). Second, the study examines whether the use of RA software significantly improves students’ speech fluency, as indicated by increased word count and pruned speech rate, along with reduced long pauses, pause duration, and repetitions (Hypothesis 2). Third, it explores the extent to which RA software use contributes to improved grammatical accuracy in students’ speech, as reflected by a higher percentage of error-free clauses and fewer errors per 100 words (Hypothesis 3). Finally, the study examines whether students’ engagement with the RA software is positively associated with the quality of their narrative structure in oral storytelling tasks, particularly in organizing picture-based story prompts through logical sequencing, coherence, descriptive detail, and relevance (Hypothesis 4). These objectives aim to provide a comprehensive understanding of how RA software can support multiple dimensions of oral language proficiency in EFL contexts. Building upon these insights, this study is designed to address four interrelated hypotheses.
H1: Students’ engagement with the RA software is positively associated with improvements in post-test speech fluency and grammatical accuracy, as reflected by higher word counts, speech rate, accuracy scores, and fewer disfluency markers.
H2: Students’ speech fluency performance significantly improves after using the RA software, as indicated by increased word count and pruned speech rate, and reduced long pauses, pause duration, and repetitions.
H3: Students’ speech accuracy will improve following the use of the RA software, as evidenced by an increase in the percentage of error-free clauses (%EFC) and a decrease in the number of errors per 100 words (EPW).
H4: Students’ engagement with the RA software is positively associated with the quality of narrative structure in terms of logical sequencing, coherence and flow, content accuracy, descriptive detail, and relevance to picture-based story prompts.
Materials and methods
This study employed a quantitative research approach to examine the effects of Reading Assistant (RA) software on EFL learners’ speaking fluency, grammatical accuracy, and narrative quality using performance-based quantitative measures. The data were derived from pre- and post-test speaking performances, which were analyzed using standardized fluency and accuracy indices and rubric-based scoring. In addition, verbatim transcriptions of learners’ oral performances were systematically analyzed using an analytic rubric (refer to Table 1) to assess narrative quality, thereby operationalizing and quantifying qualitative features of spoken discourse. This approach enabled a structured evaluation of discourse-level development while maintaining analytical consistency with the study’s quantitative focus.
Research setting and participants
This study was conducted at two universities in Thailand—a private and a public one. The participants consisted of 104 EFL students from Thailand. Their English proficiency levels ranged from basic (CEFR A1–A2) to intermediate (CEFR B1–B2), as determined by the RA software upon their initial engagement with the platform. Data were collected from students who completed the RA software embedded in their reading courses. Participants’ engagement ranged from 150 to 482 min per week. User activity data were stored in a dedicated database and subsequently analyzed to evaluate the effectiveness of the RA software in supporting language development. Participant recruitment followed voluntary participation procedures, and informed consent was obtained prior to data collection.
Students who did not complete the tasks or the speaking tests at either the pre-test or post-test stage were excluded from the final analysis. This screening criterion ensured that only complete data sets were included, thereby strengthening the internal validity of the findings.
Although the use of the Reading Assistant (RA) was embedded within the course, all RA-related activities were conducted outside regular class hours. Research assistants monitored students’ progress in the activity and provided reminders and encouragement to participating students to ensure continued engagement with the tasks.
Instruments
Four instruments were employed for data collection: (1) a Pre-Speaking Test, consisting of a 2-min picture description task; (2) the Reading Assistant (RA) software and instructional intervention; (3) a Post-Speaking Test, which mirrored the pre-test format with a 2-min picture description task; and (4) a Content Analysis Rubric used to evaluate the transcriptions of students’ spoken responses. All instruments were reviewed and validated by three experts in applied linguistics to ensure content validity, appropriateness for the research objectives, and suitability for use with EFL learners.
Pre-speaking test
The pre-speaking assessment required participants to complete a 2-min spoken task based on one of eight picture sequences (see Figures 1, 2) adapted from online children’s storybooks. Each participant randomly selected a set of images and was given 2 min to prepare before describing a story. The task and scoring criteria were validated by two experts in English language teaching and assessment to ensure content appropriateness and reliability. All responses were recorded using a computer-based voice recording application for subsequent analysis.
Reading assistant software
The RA software is an educational tool designed to deliver personalized instruction. It listens to students as they read aloud, detecting mispronunciations and providing immediate corrective feedback. The software also generates automatic scores based on students’ oral reading performance and offers real-time guidance to support improvement (Scientific Learning, 2021). In this study, the RA software functioned as the primary intervention. Participants were encouraged to log in and engage in oral reading practice for at least 30 min per day, totaling approximately 150 min per week, at times convenient to them, over the course of 15 weeks.
Post-speaking test
Following the 15-week engagement with the RA software, the post-speaking test was administered under conditions consistent with the pre-speaking assessment. The participants were asked to randomly select a set of picture sequences and deliver a 2-min spoken response that provided a detailed description of the story. The task content and evaluation criteria were validated by two experts in English language teaching and assessment to ensure consistency, reliability, and alignment with the study objectives. Each participant’s speech was recorded using the designated voice recording system for subsequent analysis.
Data collection
Throughout the research period, the coordinating researcher monitored participants’ reading progress weekly via the RA software, recording both the duration of use and reading proficiency levels. For the pre- and post-speaking assessments, the researchers utilized VEED.IO automatic speech recognition (ASR) software to generate speech transcriptions. The pre-test was conducted in week 1, and the post-test was administered after week 15 of the RA software intervention. These transcriptions were subsequently reviewed and analyzed to assess key fluency indicators, including articulation rate, frequency of short pauses, and long pauses exceeding 4 s.
Data analysis
The data obtained from the pre- and post-speaking tests were transcribed primarily using VEED.IO automatic speech recognition (ASR) software. Following transcription, the researchers carefully reviewed and refined the texts to compute a key indicator of L2 oral fluency: pruned speech rate (PSR). This measure was adapted from Bui and Huang (2018) and was calculated using the following formula:
PSR = (total words produced − vocal fillers − incomplete words − repeated words) × 60 ÷ total speech time (in seconds)
In addition to PSR, the study examined several other fluency-related variables. These included:
(1) the number of long pauses (LP), defined as silent intervals exceeding 4 s; (2) the duration of pauses (DP), representing the total time spent pausing; (3) the frequency of filled pauses (FP), such as “um,” “uh,” “like,” or “you know,” which serve as vocalized hesitation markers; and (4) the occurrence of repeated words (RP), which included instances of self-correction, word repetition, or on-the-fly revision during speech production. This multidimensional analysis provided a more nuanced understanding of the learners’ spoken fluency before and after the intervention.
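To illustrate how these indices operationalize fluency, the following Python sketch computes them from an annotated transcript. This is a minimal illustration written for this article, not the study’s actual processing pipeline; the tagging scheme and data structures are assumptions.

```python
# Minimal sketch (not the study's actual pipeline) of the fluency indices
# defined above. Assumes each token has been annotated by a rater and that
# silent-interval durations (in seconds) are available from the recording.

def fluency_indices(tokens, pauses, total_seconds):
    """tokens: list of (word, tag) pairs, tag in {'ok', 'filler',
    'incomplete', 'repeated'}; pauses: list of silent intervals in seconds."""
    wc = len(tokens)                                     # total word count (WC)
    fp = sum(tag == "filler" for _, tag in tokens)       # filled pauses (FP)
    inc = sum(tag == "incomplete" for _, tag in tokens)  # incomplete words
    rp = sum(tag == "repeated" for _, tag in tokens)     # repetitions (RP)

    # Pruned speech rate (PSR), adapted from Bui and Huang (2018):
    # pruned words per minute of total speaking time.
    psr = (wc - fp - inc - rp) * 60 / total_seconds

    lp = sum(p > 4.0 for p in pauses)  # long pauses (LP): silences > 4 s
    dp = sum(pauses)                   # pause duration (DP): total seconds paused

    return {"WC": wc, "PSR": psr, "LP": lp, "DP": dp, "FP": fp, "RP": rp}

# Hypothetical 120-second response with one filler and one repetition
sample = [("the", "ok"), ("um", "filler"), ("dog", "ok"),
          ("dog", "repeated"), ("runs", "ok")]
print(fluency_indices(sample, pauses=[5.2, 1.1], total_seconds=120))
```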
Furthermore, speech accuracy was evaluated by calculating the proportion of error-free clauses within each participant’s 2-min spoken response. The analysis also included the average number of error-free clauses and the number of errors per 100 words. These measures provided a quantitative basis for assessing grammatical accuracy and were interpreted in accordance with the definitions presented below:
- Percentage of error-free clauses refers to the ratio of grammatically correct clauses to the total number of clauses
- Errors per 100 words is calculated by dividing the total number of errors by the total number of words produced, then multiplying by 100.
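As a worked illustration of these two definitions, the short sketch below computes both indices from rater-coded clauses; the example values are hypothetical, and the study’s actual coding was performed manually and expert-verified.

```python
# Illustrative sketch of the two accuracy indices defined above. Clause
# segmentation and error coding were done by human raters in the study;
# the inputs here are hypothetical.

def accuracy_indices(clause_is_error_free, total_errors, total_words):
    """clause_is_error_free: list of booleans, one per clause."""
    pefc = 100 * sum(clause_is_error_free) / len(clause_is_error_free)  # %EFC
    epw = 100 * total_errors / total_words  # errors per 100 words (EPW)
    return pefc, epw

# Example: 5 of 12 clauses error-free; 18 errors across 110 words
pefc, epw = accuracy_indices([True] * 5 + [False] * 7,
                             total_errors=18, total_words=110)
print(f"%EFC = {pefc:.1f}, EPW = {epw:.1f}")  # %EFC = 41.7, EPW = 16.4
```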
To ensure the reliability and integrity of the findings, the transcriptions, the data derived from them, the test scores, and the content analysis ratings were all verified by two experts in statistics before and after the analysis.
The content analysis scoring rubric (see Table 1) was developed to assess the quality of students’ spoken outputs based on five key criteria: logical sequencing, story coherence and flow, content accuracy, descriptive detail, and relevance to visual prompts. Each criterion is rated on a 4-point scale, with descriptors ranging from “Excellent” to “Needs Improvement.” Scores reflect the extent to which a student’s spoken narrative is logically structured, factually accurate, richly detailed, and aligned with the given images. The rubric was reviewed and validated by two experts in English language assessment to ensure content validity, clarity of descriptors, and appropriateness for evaluating oral storytelling performance. The total score, ranging from 5 to 20, determines overall performance levels and guides evaluators in identifying strengths and areas for improvement in oral storytelling.
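To make the scoring arithmetic explicit, a brief sketch of the rubric’s total-score computation follows; the criterion names are abbreviations of those in Table 1, and the example scores are hypothetical.

```python
# Sketch of the rubric's 5-to-20 total: five criteria, each rated 1
# ("Needs Improvement") to 4 ("Excellent").
CRITERIA = ["logical_sequencing", "coherence_and_flow", "content_accuracy",
            "descriptive_detail", "relevance_to_prompts"]

def rubric_total(scores):
    assert set(scores) == set(CRITERIA), "one score per criterion"
    assert all(1 <= s <= 4 for s in scores.values()), "4-point scale"
    return sum(scores.values())  # minimum 5, maximum 20

print(rubric_total({c: 3 for c in CRITERIA}))  # 15
```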
Results
H1: Students’ engagement with the RA software is positively associated with improvements in post-test speech fluency and grammatical accuracy, as reflected by higher word counts, speech rate, accuracy scores, and fewer disfluency markers.
To explore the relationships among key indicators of speech fluency and accuracy in the context of RA software use, a Pearson correlation analysis was conducted (see Table 2). The analysis focused on variables such as word count (WC), pruned speech rate (PSR), pausing behavior (long pauses, pause duration, repeated and filled pauses), and accuracy indicators (percentage of error-free clauses and errors per 100 words). Additional correlates included learners’ engagement with the software, measured by minutes per week (MPW), selection of stories per week (SPW), percentage of development (PD), and reading comprehension scores (RC).
Table 2. Correlations among speech fluency and accuracy variables associated with the use of RA software.
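As a transparency aid, the sketch below shows how a correlation of this kind could be computed; the file and column names are hypothetical placeholders, since the study’s raw data are available only on request.

```python
# Hedged sketch of the correlational analysis (e.g., weekly engagement vs.
# post-test pruned speech rate). File and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("ra_engagement.csv")  # hypothetical dataset
r, p = stats.pearsonr(df["minutes_per_week"], df["post_psr"])
print(f"r = {r:.3f}, p = {p:.4f}")  # the study reports r = 0.317, p < 0.01
```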
Pearson correlation analysis indicated strong and statistically significant correlations between pre- and post-test word count (r = 0.951, p < 0.01) and between pre- and post-test pruned speech rate (r = 0.847, p < 0.01), suggesting that learners’ fluency development was consistent across speaking tests. Similarly, post-word count was positively associated with MPW (r = 0.310, p < 0.01), indicating that greater weekly engagement with the RA software contributed to higher speech output.
In terms of accuracy, post-percentage of error-free clauses (PostPEFC) was significantly negatively correlated with post-errors per 100 words (PostEPW) (r = −0.373, p < 0.01), supporting the expected inverse relationship between these two measures of accuracy. PostPEFC was also negatively correlated with repeated (r = −0.237, p < 0.05) and filled pauses (r = −0.377, p < 0.01), implying that higher grammatical accuracy was associated with fewer disfluency markers.
Notably, post-pruned speech rate correlated positively with MPW (r = 0.317, p < 0.01), percentage of development (r = 0.216, p < 0.05), and reading comprehension (r = 0.183, p < 0.05), highlighting that increased interaction with the software was linked to gains in both fluency and comprehension. Similarly, post-word count correlated positively with PD (r = 0.193, p < 0.05) and RC (r = 0.241, p < 0.05), further reinforcing the interconnectedness between reading activity and speaking performance. Although statistically significant, these correlations were modest in magnitude, suggesting that other unmeasured factors may also have influenced fluency gains.
Conversely, negative correlations were found between post-pause duration and accuracy measures, including post-percentage of error-free clauses (r = −0.273, p < 0.01), suggesting that longer pauses were associated with lower grammatical accuracy. These findings collectively underscore the RA software’s potential to foster fluency and accuracy, particularly for students who engaged consistently and extensively with the platform.
H2: Students’ speech fluency performance significantly improves after using the RA software, as indicated by increased word count and pruned speech rate, and reduced long pauses, pause duration, and repetitions.
To address Hypothesis 2, the study first examines whether students’ speech fluency improved following regular engagement with the Reading Assistant (RA) software (see Table 3). Speech fluency is evaluated using multiple indicators, including total word count, pruned speech rate, long pauses, pause duration, and repetitions. Improvements in these areas would suggest that the RA software provided learners with enhanced opportunities for practice, resulting in more fluid and confident oral production.
In Table 3, a paired samples t-test was conducted to compare pre- and post-intervention performance across six speech fluency variables. Notably, there was a significant increase in word count from pre-test (M = 76.55, SD = 46.61) to post-test (M = 91.23, SD = 46.38), t(103) = −10.33, p < 0.001. The mean difference of −14.68 clearly indicates that participants produced a higher number of words after the intervention. In a similar vein, a statistically significant increase in pruned speech rate was observed, rising from pre-test (M = 64.23, SD = 34.08) to post-test (M = 71.48, SD = 33.07), t(103) = −3.98, p < 0.001. This mean difference of −7.25 points toward enhanced speech fluency as a result of the intervention.

Furthermore, there was a marked reduction in the number of long pauses, decreasing significantly from pre-test (M = 1.21, SD = 1.28) to post-test (M = 0.34, SD = 0.51), t(103) = 7.78, p < 0.001. The mean difference of 0.87 reflects a notable decrease in hesitation during speech. In addition, the duration of pauses significantly declined from pre-test (M = 10.49, SD = 15.87) to post-test (M = 3.09, SD = 5.30), t(103) = 5.75, p < 0.001, with a mean difference of 7.40. This reduction suggests a smoother and more fluent delivery following the intervention. Equally important, participants demonstrated a significant decrease in repetition frequency, dropping from pre-test (M = 3.85, SD = 5.62) to post-test (M = 1.82, SD = 2.17), t(103) = 4.80, p < 0.001. The mean difference of 2.03 reinforces the effectiveness of the intervention in minimizing speech disfluencies.

However, it is worth noting that the change in filled pause frequency was not statistically significant. Although there was a slight decrease from pre-test (M = 2.32, SD = 2.24) to post-test (M = 1.97, SD = 2.30), t(103) = 1.27, p = 0.207, the mean difference of 0.35 did not reach the threshold for statistical significance.
Taken together, these findings demonstrate that the intervention led to statistically significant improvements in key aspects of speaking fluency, including word count, pruned speech rate, long pauses, pause duration, and repetitions. While the reduction in filled pauses was not significant, the overall pattern of results suggests a meaningful and positive impact on the participants’ spoken performance.
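For readers who wish to reproduce this kind of analysis, a minimal sketch of the paired-samples t-test follows, using word count as the example variable; the file and column names are hypothetical.

```python
# Hedged sketch of the paired-samples t-test used across Tables 3-5,
# illustrated with word count. File and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("speaking_tests.csv")  # hypothetical dataset, one row per student
t, p = stats.ttest_rel(df["pre_word_count"], df["post_word_count"])
# With n = 104 participants, degrees of freedom = 103; the study reports
# t(103) = -10.33, p < 0.001 for word count.
print(f"t({len(df) - 1}) = {t:.2f}, p = {p:.4f}")
```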
H3: Students’ speech accuracy will improve following the use of the RA software, as evidenced by an increase in the percentage of error-free clauses and a decrease in the number of errors per 100 words.
To investigate the effects on EFL undergraduate students’ speech accuracy following their engagement with the RA software, a paired-sample t-test was conducted to compare two key indicators: the percentage of error-free clauses and the number of errors per 100 words between the pre-speaking and post-speaking tests. The results are summarized in Table 4.
A statistically significant increase was found in the percentage of error-free clauses, rising from a pre-test mean of 8.83% (SD = 12.93) to a post-test mean of 45.20% (SD = 20.91), t(103) = −16.32, p < 0.001. The mean difference of −36.37 percentage points indicates a substantial improvement in students’ ability to produce grammatically accurate clauses during spoken performance. This improvement suggests that the intervention successfully enhanced learners’ control over syntax and grammar structures in spontaneous speech. Correspondingly, a significant reduction was observed in the number of errors per 100 words, which decreased from a pre-test mean of 16.14 (SD = 9.11) to 7.85 (SD = 3.99) in the post-test, t(103) = 10.00, p < 0.001. The mean difference of 8.28 demonstrates a considerable decline in error frequency, further confirming the positive impact of the instructional intervention on learners’ spoken accuracy.
These results provide strong empirical evidence that the RA software had a positive impact on students’ grammatical accuracy in spoken English. The substantial improvements observed in both accuracy indicators suggest that, following the intervention, learners demonstrated enhanced capacity to produce more accurate and fluent speech with markedly fewer grammatical errors.
H4: Students’ engagement with the RA software is positively associated with the quality of narrative structure in terms of logical sequencing, coherence and flow, content accuracy, descriptive detail, and relevance to pictures.
To assess the impact of the instructional intervention on students’ speech content quality, a paired-sample t-test was conducted to compare pre- and post-test scores across five evaluative dimensions and the total score: Logical Sequencing, Story Coherence and Flow, Content Accuracy, Descriptive Detail, Relevance to Pictures, and Total Score (see Table 5).
The analysis revealed a significant improvement in logical sequencing, with mean scores increasing from 2.63 (SD = 0.61) in the pre-test to 3.16 (SD = 0.63) in the post-test, t(103) = −7.60, p < 0.001. The mean difference of −0.54 suggests enhanced organization and structure in students’ speech production, indicating that students were better able to sequence their ideas logically. Building on this, scores for story coherence and flow also improved significantly, rising from a pre-test mean of 2.44 (SD = 0.67) to 3.16 (SD = 0.63) post-intervention, t(103) = −9.72, p < 0.001. This result indicates a substantial enhancement in students’ ability to present their ideas in a clear, connected, and fluid manner—essential elements for coherent spoken narratives. In addition, mean scores for content accuracy increased from 2.63 (SD = 0.61) to 3.16 (SD = 0.63), with a significant mean difference of −0.54, t(103) = −7.60, p < 0.001. This finding reflects greater factual correctness and stronger alignment with the core story elements after the intervention.
Similarly, the dimension of descriptive detail showed a marked improvement. Scores rose from a pre-test mean of 2.17 (SD = 0.61) to 2.85 (SD = 0.50), t(103) = −9.40, p < 0.001. The mean difference of −0.67 highlights a notable improvement in the richness and specificity of students’ spoken content. Moreover, students demonstrated an enhanced ability to relate their speech to visual prompts, as evidenced by an increase in relevance-to-picture scores from 2.64 (SD = 0.64) to 3.17 (SD = 0.63), t(103) = −7.32, p < 0.001. The significant mean difference of −0.53 suggests that students were better able to integrate visual stimuli meaningfully into their spoken descriptions. Finally, the total content score rose significantly from 12.51 (SD = 2.88) to 15.45 (SD = 2.84), t(103) = −9.16, p < 0.001. The mean difference of −2.94 demonstrates a broad and meaningful enhancement in the overall quality of students’ speech content, confirming the positive impact of the instructional intervention across all measured dimensions.
Discussion
The findings of this study revealed a strong association between students’ level of engagement with the Reading Assistant (RA) software and their improvement in fluency, grammatical accuracy, and narrative quality. Learners who interacted more consistently with the software, measured by activity duration and task completion, showed greater gains in fluency and produced more accurate, fluent speech with fewer disfluency markers. These results align with previous research emphasizing the importance of engagement and individualized learning paths in technology-enhanced instruction (Derakhshan et al., 2016) and underscore how digital tools like RA software can be effectively implemented across diverse institutional settings, as evidenced by favorable outcomes in both a multilingual, international private university context and a traditional Thai public university environment. Using performance-based indicators and rubric-guided analysis of transcribed speech (Kitjaroonchai and Maywald, 2024; Wilang et al., 2025), the study demonstrates that sustained RA engagement is associated with meaningful improvements in learners’ production and organization of spoken language.
The findings strongly support Hypothesis 2, indicating that students’ speech fluency significantly improved following sustained engagement with RA software, as evidenced by increased word count and pruned speech rate (PSR) alongside reductions in long pauses, total pause duration, and repetitions. These gains are consistent with prior work showing that technology-assisted environments offering repeated oral reading and immediate feedback accelerate utterance fluency by supporting prosodic timing and reducing hesitancy markers (Handley and Wang, 2023; Li, 2020). In RA’s design, real-time feedback on mispronunciations and oral reading performance, and the delivery of linguistically coherent texts, provide high-frequency, feedback-rich practice that fosters the automatization of sublexical decoding and chunking—mechanisms linked to improved temporal fluency and smoother delivery (Scientific Learning, 2021; Celce-Murcia, 2001). The observed decrease in long pauses and total pause duration suggests lowered planning burden and increased speech continuity, aligning with evidence that scaffolded practice and individualized pathways enhance fluency outcomes in EFL learners (Derakhshan et al., 2016; Kitjaroonchai and Maywald, 2024). Moreover, the rise in speech output (word count) and PSR mirrors findings in Thai contexts where RA use has been tied to measurable advances in oral performance and reading-speaking transfer (Kitjaroonchai and Maywald, 2024; Wilang et al., 2025). Pedagogically, these improvements resonate with the broader transition in EFL from accuracy-dominant approaches to communicative, fluency-oriented instruction, where repeated, meaningful practice and negotiated input/output are central (Richards and Rodgers, 2001; Goh, 2007). Together, the pattern of change across multiple fluency indicators and the convergence with prior evidence highlight RA software as a viable, scalable tool for building speech rate, reducing disfluency, and enhancing overall utterance fluency in university EFL settings (Li, 2020; Handley and Wang, 2023).
The results support Hypothesis 3: students’ speech accuracy improved significantly following RA use, as reflected in a marked increase in the percentage of error-free clauses (EFC) and a substantial reduction in errors per 100 words (EPW). These twin shifts indicate enhanced control over morphosyntax under real-time speaking conditions—an outcome that coheres with evidence that repeated oral reading plus immediate, item-level feedback accelerates the noticing and stabilization of grammatical forms in production (Li, 2020; Derakhshan et al., 2016). Mechanistically, RA’s design features (automated detection of mispronunciations and miscues, instant corrective prompts, and exposure to linguistically coherent input) create high-frequency, feedback-rich practice episodes that promote form-focused attention without sacrificing communicative flow, which in turn supports more accurate clause formation (Scientific Learning, 2021). Our accuracy gains align with research showing that improvements in utterance fluency (e.g., smoother timing and reduced disfluency) can free attentional resources for grammatical encoding, thereby increasing EFC and lowering EPW during spontaneous speech (Handley and Wang, 2023; Kitjaroonchai and Maywald, 2024). They also resonate with Thai-context studies linking RA engagement to measurable advances in oral performance and reading-speaking transfer, suggesting that iterative exposure to well-modeled text supports more stable grammar in output (Kitjaroonchai and Maywald, 2024; Wilang et al., 2025). Importantly, these findings speak to long-standing challenges in Thai higher education, where accuracy-dominated, GTM-influenced traditions may constrain opportunities for oral practice; RA’s structured, individualized pathways appear to mitigate this gap by coupling self-paced rehearsal with targeted feedback, yielding more accurate spontaneous speech (Oradee, 2012). The convergent improvement across EFC and EPW, the mechanism-consistent nature of RA’s feedback loop, and corroborating results from related EFL contexts make a compelling case for RA as a practical, scalable tool to complement classroom instruction and systematically raise spoken grammatical accuracy in university EFL settings (Li, 2020; Kitjaroonchai and Maywald, 2024).
The findings provide compelling support for Hypothesis 4, which states that students’ engagement with RA software is positively associated with higher-quality narrative structure, evidenced by gains in logical sequencing, coherence/flow, content accuracy, descriptive detail, and relevance to picture prompts. Mechanistically, RA’s repeated oral reading with immediate, item-level feedback and exposure to linguistically coherent texts can reinforce story grammar (setting, initiating event, sequence, outcome) and discourse markers, enabling learners to plan, link, and elaborate events more effectively in spoken narratives (Scientific Learning, 2021; Celce-Murcia, 2001). These outcomes align with research showing that digital scaffolds and storytelling tasks strengthen discourse competence by modeling plot organization and thematic progression, thereby improving coherence and topical maintenance (Choo et al., 2020; Dosi and Douka, 2021). Improvements in descriptive detail and content accuracy are also consistent with studies indicating that technology-mediated storytelling supports vocabulary access and elaboration of detail, helping learners integrate visual stimuli with semantically appropriate language (Ngoi et al., 2024; Oroujlou and Haghjou, 2012). In addition, prior work links oral narrative skill to broader literacy benefits—suggesting that repeated exposure to structured texts and retelling tasks can transfer to organizational control and coherence in oral output (Kirby et al., 2021; Huang et al., 2022). Within Thai EFL contexts, the pattern of improvements converges with evidence that RA-supported practice facilitates measurable advances in oral performance and reading–speaking transfer, indicating discourse-level gains beyond fluency and accuracy (Kitjaroonchai and Maywald, 2024; Wilang et al., 2025). The convergent gains across rubric dimensions and corroborating literature on digital storytelling and RA-mediated practice make a strong case that sustained RA engagement can systematically enhance the discourse architecture of learners’ spoken narratives in university EFL settings (Dosi and Douka, 2021; Ngoi et al., 2024).
While this paper did not include a separate qualitative thematic analysis of learner perceptions, qualitative features of spoken performance were captured through rubric-based analysis of transcribed narratives. This analytic choice allowed narrative quality to be examined systematically alongside fluency and grammatical accuracy. The observed improvements may be attributed to repeated exposure to linguistically rich input and scaffolded reading–speaking transfer facilitated by the RA software, which may have supported lexical retrieval, syntactic patterning, and discourse planning during speaking tasks.
This finding extends existing research on the effectiveness of Reading Assistant (RA) use in the Thai EFL context (e.g., Kitjaroonchai et al., 2024; Kitjaroonchai and Maywald, 2024; Wilang et al., 2025) by examining students’ narrative quality outcomes across two universities. While the present study does not include a direct comparative analysis between institutions, such comparisons represent a promising direction for future research.
Nevertheless, several limitations warrant consideration. The sample was drawn from two institutional contexts, which may still limit the generalizability of the findings. Additionally, although narrative quality was assessed rigorously, learners’ affective experiences (e.g., motivation or autonomy) were not explored qualitatively and should be addressed in future research. Potential confounding variables, such as prior digital literacy or individual differences in language aptitude, may also have influenced the outcomes. Future studies employing longitudinal designs and richer qualitative data sources would help clarify the mechanisms underlying technology-mediated speaking development. Further, the use of RA software across different educational levels, institutional contexts, and learner populations, as well as its interaction with other AI-supported tools, can be examined. Incorporating learner interviews, reflective journals, or classroom observations would also provide richer insights into affective and motivational dimensions that complement performance-based outcomes.
Pedagogical implications
The findings of this study offer several important pedagogical implications for English language teaching in EFL contexts, particularly within Thai higher education. The observed improvements in students’ speaking fluency, grammatical accuracy, and narrative structure quality suggest that embedding Reading Assistant (RA) software within reading courses can meaningfully support the development of oral production skills. This integration appears to help bridge the persistent gap between reading comprehension and spoken performance, an area in which many EFL learners experience difficulty.
A particularly salient point emerging from the findings concerns the role of individualized feedback tailored to learners’ specific abilities, as reflected in the performance data generated by the RA software. The system’s capacity to track learners’ reading speed, pronunciation accuracy, error patterns, and repeated exposure to lexical and syntactic structures allows feedback to be aligned with individual developmental needs rather than uniformly prescribed across the class. For example, learners demonstrating lower fluency or persistent grammatical inaccuracies can be guided toward additional practice opportunities and targeted feedback, while more proficient learners may benefit from increased task complexity or extended output demands. Such data-informed feedback supports differentiated instruction by enabling learners to progress at their own pace and to focus on areas requiring the greatest attention, thereby fostering more efficient and personalized learning trajectories.
From a curricular perspective, the findings indicate that reading courses may serve as effective entry points for integrating AI-driven tools that support multiple language skills. Embedding RA software into existing reading curricula allows students to engage in extended, self-paced practice beyond class hours, thereby increasing exposure to comprehensible input and opportunities for output. For teachers, this suggests a shift toward a blended instructional model in which in-class activities focus on interaction and meaning negotiation, while RA-supported tasks reinforce fluency and accuracy through autonomous practice.
At the institutional level, the findings underscore the role of administrators and curriculum planners in facilitating the pedagogically sound adoption of educational technologies. Given ongoing challenges in English speaking proficiency among Thai university students, there is a growing need for innovative and engaging instructional approaches. Prior research suggests that technology-enhanced learning environments can support learner motivation and engagement. In this regard, RA software may contribute to increased learner confidence, reduced speaking anxiety, and greater self-directed learning. Institutional support, in terms of budget allocation, digital infrastructure, and professional development, remains essential to ensure that such tools are implemented equitably and effectively.
Directions for future research
To better understand mechanisms, longitudinal designs with intermediate checkpoints should examine how fluency and accuracy co-develop and how narrative quality evolves with different prompt types. Mixed methods studies that pair rubric outcomes with interviews and learner analytics can illuminate motivation, autonomy, and strategy use. Experimental work comparing RA to alternative speaking-focused tools (e.g., shadowing apps, dialogic practice platforms) would clarify relative efficacy. Finally, cross-institutional replication with varied cohorts (e.g., vocational colleges, secondary schools) and controlled access conditions can test scalability and equity considerations.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
Ethics statement
The studies involving humans were approved by Human Research Ethics Committee, Suranaree University of Technology. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.
Author contributions
JW: Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Supervision, Visualization, Writing – original draft, Writing – review & editing. NK: Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Validation, Visualization, Writing – original draft, Writing – review & editing. SS: Conceptualization, Funding acquisition, Investigation, Validation, Writing – original draft, Writing – review & editing.
Funding
The author(s) declared that financial support was received for this work and/or its publication. This work was supported by the Suranaree University of Technology (SUT), Thailand Science Research and Innovation (TSRI), and National Science, Research and Innovation Fund (NSRF), NRIIS number 204261.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that generative AI was used in the creation of this manuscript, specifically for language editing.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Akramy, S. A., Habibzada, S. A., and Hashemi, A. (2022). Afghan EFL teachers’ perceptions towards Grammar-Translation Method (GTM). Cogent Educ. 9:2127503. doi: 10.1080/2331186X.2022.2127503
Bakken, R. K., Næss, K.-A. B., Lemons, C. J., and Hjetland, H. N. (2021). A systematic review and meta-analysis of reading and writing interventions for students with disorders of intellectual development. Educ. Sci. 11:638. doi: 10.3390/educsci11100638
Bui, G., and Huang, Z. (2018). L2 fluency as influenced by content familiarity and planning: Performance, measurement, and pedagogy. Lang. Teach. Res. 22, 94–114. doi: 10.1177/1362168816656650
Celce-Murcia, M. (2001). Teaching English as a Second or Foreign Language, 3rd Edn. Boston, MA: Heinle & Heinle.
Choo, Y. B., Abdullah, T., and Nawi, A. M. (2020). Digital storytelling vs. oral storytelling: An analysis of the art of telling stories now and then. Univ. J. Educ. Res. 8, 46–50. doi: 10.13189/ujer.2020.081907
Derakhshan, A., Khalili, A. N., and Beheshti, F. (2016). Developing EFL learners’ speaking ability, accuracy, and fluency. Engl. Lang. Lit. Stud. 6, 177–186. doi: 10.5539/ells.v6n2p177
Dinçer, A., Yesilyurt, S., and Göksu, A. (2012). Promoting speaking accuracy and fluency in foreign language classroom: a closer look at English speaking classrooms. Erzincan Üniv. Eğitim Fakültesi Derg. 14, 97–108. doi: 10.17556/JEF.84390
Dinda, D., Noni, N., Munir, M., and Tahir, M. (2025). The effect of digital reading platforms on EFL students’ reading comprehension: a quasi-experimental study. Klasikal 7, 272–282. doi: 10.52208/klasikal.v7i1.1287
Dosi, I., and Douka, G. (2021). Effects of language proficiency and contextual factors on second language learners’ written narratives: a corpus-based study. Int. J. Res. Stud. Educ. 10, 1–18. doi: 10.5861/ijrse.2021.5076
Goh, C. C. M. (2007). Teaching Speaking in the Language Classroom. Singapore: SEAMEO Regional Language Centre.
Habók, A., Magyar, A., and Molnár, G. (2025). Developing reading strategies through technology-based intervention. Read. Psychol. 1–23. doi: 10.1080/02702711.2025.2551564
Handley, Z. L., and Wang, H. (2023). What do the measures of utterance fluency employed in automatic speech evaluation (ASE) tell us about oral proficiency? Lang. Assess. Q. 21, 3–32. doi: 10.1080/15434303.2023.2283839
Huang, B. H., Bedore, L. M., Ramírez, R., and Wicha, N. (2022). Contributions of oral narrative skills to English reading in Spanish-English Latino/a dual language learners. J. Speech Lang. Hear. Res. 65, 653–671. doi: 10.1044/2021_JSLHR-21-00105
Inthanon, W., and Wised, S. (2024). Tailoring education: A comprehensive review of personalized learning approaches based on individual strengths, needs, skills, and interests. J. Educ. Learn. Rev. 1, 35–46. doi: 10.60027/jelr.2024.7
Kirby, M., Spencer, T., and Chen, Y. (2021). Oral narrative instruction improves kindergarten writing. Read. Writ. Q. 37, 574–591. doi: 10.1080/10573569.2021.1879696
Kitjaroonchai, N., and Maywald, S. (2024). The effects of reading assistant software on the speech fluency and accuracy of EFL University students. J. Engl. Teach. 10, 183–197. doi: 10.33541/jet.v10i2.5763
Kitjaroonchai, N., Sanitchai, P., and Phutikettrkit, C. (2024). Investigating the relationship between the use of reading assistant software and reading comprehension skills: a case among Thai EFL university students. J. Engl. Teach. 10, 306–319. doi: 10.33541/jet.v10i3.6113
Li, J. (2020). An empirical study on reading aloud and learning English by the use of the reading assistant SRS. Int. J. Emerg. Technol. Learn. 15, 103–117. doi: 10.3991/ijet.v15i21.18193
Ngoi, S., Tan, K., Alias, J., and Mat, N. (2024). Digital storytelling to improve English narrative writing skills. Int. J. Acad. Res. Bus. Soc. Sci. 14, 546–560. doi: 10.6007/ijarbss/v14-i4/21249
Oradee, T. (2012). Developing speaking skills using three communicative activities (discussion, problem-solving, and role-playing). Int. J. Soc. Sci. Hum. 2, 533–535. doi: 10.7763/IJSSH.2012.V2.164
Oroujlou, N., and Haghjou, S. (2012). The impact of narrative storyline complexity on EFL learners’ oral performance. Int. J. Linguist. 4, 73–86. doi: 10.5296/ijl.v4i2.1343
Ostiz-Blanco, M., Bernacer, J., Garcia-Arbizu, I., Diaz-Sanchez, P., Rello, L., Lallier, M., et al. (2021). Improving reading through videogames and digital apps: a systematic review. Front. Psychol. 12:652948. doi: 10.3389/fpsyg.2021.652948
Richards, J. C., and Rodgers, T. S. (2001). Approaches and Methods in Language Teaching, 2nd Edn. Cambridge: Cambridge University Press.
Scientific Learning (2021). An Online Guided Reading Tool for Improving Vocabulary, Fluency, Comprehension, and Prosody. Oakland, CA: Scientific Learning Corporation.
Siengyen, C., and Wasanasomsithi, P. (2024). Development of a computerized dynamic reading assessment program to measure English reading comprehension of Thai EFL undergraduate students. rEFLections 31, 1216–1248. doi: 10.61508/refl.v31i3.277414
Silor, A. C., and Silor, F. S. C. (2025). Boosting reading comprehension through AI-based learning tools. Int. J. Learn. Teach. Educ. Res. 24, 61–79. doi: 10.26803/ijlter.24.9.4
Sunghyo, C., Hyeon, J., Sungbok, S., and Md Naimul, H. (2025). Reading.help: supporting EFL readers with proactive and on-demand explanation of English grammar and semantics. arXiv [Preprint]. doi: 10.48550/arXiv.2505.14031
Szécsi, G. (2021). Self, community, narrative in the information age. Empedocles Eur. J. Philos. Commun. 12, 167–181. doi: 10.1386/ejpc_00035_1
Tuan, N. H., and Mai, T. N. (2015). Factors affecting students’ speaking performance at Le Thanh Hien High School. Asian J. Educ. Res. 3, 8–23.
Keywords: EFL learners, fluency, grammar accuracy, narrative quality, reading assistant software, Thailand
Citation: Wilang JD, Kitjaroonchai N and Seepho S (2026) Reading assistant software and its impact on speaking fluency, grammar accuracy, and narrative quality among EFL learners. Front. Educ. 11:1754473. doi: 10.3389/feduc.2026.1754473
Received: 26 November 2025; Revised: 17 January 2026; Accepted: 19 January 2026;
Published: 06 February 2026.
Edited by: Yasir Riady, Indonesia Open University, Indonesia
Reviewed by: Nur Hanifah Insani, Universitas Negeri Semarang, Indonesia; Leonard Mselle, The University of Dodoma, Tanzania
Copyright © 2026 Wilang, Kitjaroonchai and Seepho. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Jeffrey Dawala Wilang, wilang@g.sut.ac.th