- 1 University of Nebraska at Kearney, Kearney, NE, United States
- 2 Harvard University, Cambridge, MA, United States
- 3 Durham University, Durham, United Kingdom
Students who are at risk of academic underachievement demonstrate academic growth comparable to that of their typically developing peers during the school year. However, research has consistently shown that these students experience significant learning loss during the summer months, losing an average of 3–4 months of academic progress and knowledge. While substantial funding has been allocated to support summer instruction, there remains a lack of research specifically examining the impact of summer interventions for struggling readers at the secondary level. This synthesis systematically reviewed the effects of summer interventions with a reading component on the reading outcomes of students with, or at risk of, reading difficulties in grades 6–12. Analysis of 13 studies revealed mixed results regarding the effectiveness of such interventions. Notably, teacher-led summer programs tended to be more effective than student-directed, home-based models that provided books without direct instruction. These findings highlight the critical need for more rigorous and targeted research on summer interventions for secondary students with or at risk of reading difficulties.
Introduction
Reading ability at the completion of high school, especially reading comprehension proficiency, is one of the strongest predictors of post-school economic and career success (Hodge et al., 2021). However, despite legislative efforts to improve accountability and reading scores for struggling students, such as the United States' No Child Left Behind Act (2001) and Every Student Succeeds Act, 20 U.S.C. § 6301 (2015), a large proportion of upper elementary and secondary school students still fail to read at or near grade level. Some of these students have failed to respond to prior remediation efforts provided by the school or are late-emerging struggling readers (Catts et al., 2012). For instance, national data indicate that roughly 70% of 8th grade students in the United States score below grade-level standards in reading, including 91% below proficiency for students with disabilities (National Center for Education Statistics, 2024). This reflects a 3% decrease in students performing at or above proficiency compared to 2019, and around a 5% drop from levels observed between 2013 and 2017. Despite increased attention to reading instruction and support, eighth-grade students’ scores on the NAEP exam have remained relatively unchanged since 1992. These data are concerning given that failure to read at or near grade level is associated with up to a 20-fold greater likelihood of dropping out of school (Russell and Drake Shiffler, 2019; U.S. Department of Education, 2005) and can significantly affect career and job readiness (Hart, 2005).
Previous research suggests that one source of underperformance in reading is the lack of reading growth that occurs when schools are not in session over the summer, particularly for students who are at risk of or identified with reading difficulties (Kim and Quinn, 2013; Lauer et al., 2006). Some research has found that school-year academic gains do not significantly differ across at-risk categories such as socioeconomic status, race, reading ability, and gender (Alexander et al., 2001; Cooper et al., 1996; Kuhfeld et al., 2020), indicating that while students with reading difficulties make progress at a rate comparable to their peers during the school year, their lower starting baseline prevents them from closing the achievement gap. However, typically developing students and students identified as at risk based on socioeconomic status (Alexander et al., 2007; Kim and Quinn, 2013) or on reading below grade-level proficiency on their end-of-year assessment (Contesse et al., 2021) show dissimilar growth over the summer months, amounting to as much as a 3-month difference over the calendar year (Cooper et al., 1996). Entwisle et al. (2001) reported that children from higher-SES homes gained up to 15 points on their standardized reading score over the summer months, while children from lower-SES homes lost 4 points on the same assessment. Cooper et al. (1996) synthesized the effects of summer vacation on achievement scores for K–12 students and found that lower-income students demonstrated small losses (d = −0.21) while middle-income students made minor gains (d = 0.06). Kuhfeld and Tarasawa (2020) reported that some at-risk populations in grades 6–8 can lose up to 39% of their school-year reading gains during the summer months, while students in high school can lose 25–50%. Grade level has also been shown to be related to students’ reading progress during the summer months, with summer learning loss increasing as students get older. Students in third grade lose roughly 20% of their prior-year school gains in reading, and this increases to an average learning loss of roughly 33% during the summers between fifth and eighth grade, based on Measures of Academic Progress (MAP) data (Lewis and Kuhfeld, 2022).
It is important to note that the term “at-risk” is often used to refer to a large population of students with a wide range of differing abilities and demographic characteristics. The primary factor leading to a student being identified as at-risk is any characteristic that potentially contributes to the student not experiencing academic success and becoming a potential dropout (Donnelly, 1987). Regardless of how a student is identified as at-risk, the potential academic futures of these students are similar. They often fail to experience success in school and fall behind their peers, which leads to school becoming a negative experience. This in turn leads to low self-esteem, academic struggles (i.e., special education, credit deficiencies, and lack of school support), and adverse post-school events (e.g., dropout, pregnancy, homelessness, and unemployment) (Misanko, 2024).
Faucet theory
Differences in academic outcomes observed over the summer based on socioeconomic status are explained by the Faucet Theory (Alexander et al., 2001). This theory suggests that during the school year, students from diverse backgrounds generally have equal access to academic resources and instruction. However, when school is not in session, access to these resources is “turned off” like a faucet, particularly for students from lower socioeconomic backgrounds. As a result, these students often experience greater learning losses during the summer, especially in areas like reading comprehension and word recognition (Cooper et al., 1996). Research has shown that while students from low socioeconomic backgrounds make academic gains comparable to their more advantaged peers during the school year, they experience significantly greater losses over the summer (Alexander et al., 2007).
Due to the risk of summer loss, U.S. federal and state education policies strongly encourage and fund school districts to provide summertime programs and supports to students, especially in schools serving underserved student populations (Every Student Succeeds Act, 20 U.S.C. § 6301, 2015). Over the last 50 years, summer school instruction has increased in popularity. Yet it remains unclear just how effective summer reading interventions are for students who are at risk of or identified with reading difficulties, particularly at the secondary level. This synthesis of research seeks to expand knowledge of the effects of summer reading programs on secondary students identified with or at risk of reading difficulties. The researchers began with a review of previous syntheses and meta-analyses that have examined the effects of summer interventions on the reading performance of at-risk students.
It must be noted that a growing body of research questions the commonly accepted evidence that at-risk students experience the most summer learning loss (von Hippel et al., 2018; Quinn and Polikoff, 2017; Workman et al., 2023). For instance, von Hippel and Hamrock (2019) found that much of the early research used to establish summer reading loss suffered from testing and scaling procedural flaws. In some cases, the fall assessment was scaled differently, pitched at a higher grade level, or asked harder questions than the spring comparison test; in other cases, a completely different test was administered. This confounds test difficulty and format changes with any actual summer learning loss, making it hard to determine whether score differences reflect learning loss or simply differences in the tests themselves. To date, however, this newer research on the validity of summer loss has focused primarily on the elementary grades. For the purposes of this paper, we will continue to rely on the established history of expected summer loss by at-risk students in the secondary grades.
Previous reviews of summer school interventions
Three prior meta-analyses have examined the effects of interventions conducted during the summer months (Cooper et al., 2000; Lauer et al., 2006; Kim and Quinn, 2013). Cooper et al. (2000) examined the effects of academic interventions administered over the summer months on K–12 students and reported a weighted average effect size of 0.26. Results showed that students in grades K–3 and in the later secondary grades (9–12) benefited the most, as did students from middle-class homes compared with students from both lower- and higher-SES homes. Cooper and colleagues reported no statistical difference in academic achievement based on gender, grade level, race, or any other “at-risk” characteristic as defined in the current synthesis. Lauer et al. (2006) reported an overall effect size of 0.05 across 14 summer reading programs for grades K–12. Nine of the studies included students in grades 6–8, with an overall effect size of 0.09; two interventions focused on grades 9–12 and had an average effect size of 0.25. This meta-analysis also found that interventions using one-to-one teaching time and a combination of group structures reported significantly larger effect sizes than those using only large-group structures or small groups of 10 or fewer students, even though these latter two formats were more commonly observed in summer instruction.
Lauer and colleagues also emphasized the importance of a well-defined and structured reading curriculum and found that the specific timing of the summer program (when during the summer the intervention was provided) had no influence on student outcomes. They recommended that policy makers consider factors such as duration, cost, and implementation challenges when evaluating summer programs, and encouraged future researchers to include both published and unpublished work in their analyses. Although Lauer et al. (2006) examined variables that might moderate student reading effects (SES status, duration, grade level, etc.), the study broadly addressed out-of-school-time interventions and did not run separate analyses on summer programs beyond what was already stated. Kim and Quinn (2013) focused more narrowly on summer reading interventions for low-SES students, analyzing 35 studies from kindergarten through 8th grade. Most of the studies (77%) focused exclusively on students in grades K–5. The authors reported an overall effect size of 0.10 for total reading achievement, including significantly positive mean effects on reading comprehension (d = 0.23) and on fluency and decoding (d = 0.24). The authors reported that, unlike the results of the Cooper et al. (2000) analysis, their results suggest that summer reading interventions are more effective for at-risk (low-income) students than for their middle- or high-income peers.
Study purpose
Since the last synthesis examining the reading effects of summer interventions for students in the secondary grades was published in 2013 and only extended through 8th grade, the purpose of the present systematic review was to conduct an updated synthesis of the effects of interventions implemented during the summer on reading outcomes for secondary students identified as at risk. Researchers in this study extended the search to include intervention studies published on or after 1965, the year the Elementary and Secondary Education Act (ESEA) was passed (United States, 1965); the ESEA mandated supplemental programs and led to an increase in summer interventions, so extending the search window allowed us to catch potential studies the prior synthesis excluded, particularly those targeting students with disabilities. To date, there has been no prior synthesis focused specifically on secondary-grade students with or at risk of reading difficulties, for whom reading at or near grade level is critical for post-school success. Students in the secondary grades are expected to decode fluently and comprehend increasingly complex text (Alvermann, 2002); however, many lack the advanced decoding, vocabulary, and comprehension skills necessary for academic success. This highlights the need for a more in-depth review of effective summer interventions for at-risk students, particularly because summer is a period when these learners are especially vulnerable to losing prior academic gains, further widening existing achievement gaps.
As Reed et al. (2019) highlight, many prior reviews of summer interventions suffer from methodological limitations, particularly the lack of rigorous comparison groups. Without a control or comparison condition, Reed et al. (2019) point out, it is difficult to determine whether observed results were due to the intervention itself or to other factors such as participant motivation or unknown external supports. The authors note that volunteer-based samples and inconsistent implementation practices further compromise the validity and generalizability of reported findings. Therefore, to ensure meaningful and interpretable results, we included in this review only studies with a clearly defined comparison condition, which better allows for causal inferences about intervention results. Our primary research question was: What are the overall effects of summer interventions on the reading outcomes of students identified with or at risk of reading difficulties in grades 6 through 12?
Methods
Data collection
To identify all potential studies, a systematic search of the available literature was conducted. The first step was a computer search using the Education Source, PsycINFO, and ERIC online databases. The search focused on interventions implemented over the summer months that included a reading component and targeted students identified with or at risk of reading difficulties. Only articles published between 1965 and March 2025 were included. To identify all articles related to summer instruction, the primary key terms used were “summer” or “break” or “vacation.” Secondary search terms included “reading” or “comprehension” or “literacy” and “intervention” or “extended school” or “camp” or “treatment” or “recovery.” Summer instruction is referred to by many names depending on the type of summer intervention, so we determined it was best to keep the search terms broad to capture all potential studies. This synthesis also sought to better understand what types of summer interventions are offered to students in these grade levels and what the stated aims of those programs were. While the participants had to be identified as having or being at risk of reading difficulties, we did not seek to limit the possible inclusion of any intervention that was identified as being a “summer program.” This led to the inclusion of some articles that may not fit the typical description of a summer reading intervention or program but that still included some form of support toward a reading skill or were targeted toward reading, such as problem solving and access to text.
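Although the exact database syntax was not reported as a single string, one plausible rendering of how these primary and secondary terms combine into a Boolean query is shown below; the grouping of the term sets with AND is an assumption on our part rather than a reported search string.

```
("summer" OR "break" OR "vacation")
AND ("reading" OR "comprehension" OR "literacy")
AND ("intervention" OR "extended school" OR "camp" OR "treatment" OR "recovery")
```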
To be included, articles needed to meet a determined set of criteria. Participants had to be identified as at risk of or having reading difficulties. Articles had to be published in English and had to have taken place in a North American school system. Studies had to include an intervention that occurred during the summer months when school was not in session and had to be published on or after 1965. Studies’ research designs had to include data for treatment and control conditions. The intervention had to target reading skills (either as the primary focus or as a secondary element) and had to report some type of reading outcome, proximal or standardized. Participants also had to have completed at least 6th grade prior to the treatment. For studies that included grades earlier than 6th grade, data had to be disaggregated by individual grade level. Studies were excluded if they did not identify the target population as with or at risk of reading difficulties. Studies were also excluded if they did not include a control condition. The control condition could be the traditional no treatment often observed during summer months or a different intervention than that received by the treatment group. Study design was not used to exclude any article so long as it contained a control condition (see Figure 1, PICOC scheme).
The initial search yielded 1,909 articles. A two-step, systematic process was conducted to screen articles for study inclusion. First, a review of titles and abstracts was conducted for all articles, separating them into three categories: yes, no, and maybe. A random sample of 100 abstracts was assigned to a double coder to screen for fidelity purposes; screeners had 100% agreement on the category into which abstracts were placed. Studies identified as maybe or yes then received a full-text review and were either included or excluded. After identifying all included articles through the computer search, a hand search was conducted. Journals searched included Exceptional Children, Journal of Learning Disabilities, The Journal of Special Education, Journal of Educational Psychology, Learning Disability Quarterly, Learning Disabilities Research & Practice, Reading Research Quarterly, Remedial and Special Education, and Scientific Studies of Reading. No additional articles were found through this method. A reference list search of the articles identified as meeting inclusion criteria, as well as of the original Lauer synthesis, was conducted to identify other potential articles; two additional studies were found through this reference list check. One additional study that was part of a larger efficacy study was located through a referral.
During the abstract review, studies were excluded from the next step in the screening process (i.e., full-text review) if they were found not to meet the inclusion criteria. Duplicates were also removed. The review of the abstracts left 249 articles for consideration. The full texts of the remaining 249 articles were read and checked against the inclusion criteria. After eliminating those that did not fit the inclusion criteria, we were left with 10 articles for inclusion, plus the three additional articles identified through the reference list search and referral (see Figure 2, PRISMA schema). All 13 articles that met inclusion criteria were included in this synthesis. It should be noted that only three of the articles were published in peer-reviewed journals, leaving the others subject to less examination and critical review and opening the possibility of methodological errors and reporting bias. A number of secondary studies used in the Cooper et al. (2000) analysis failed to meet our inclusion criteria and were not included in our review. The included articles also frequently lacked sufficient intervention details to conduct certain desired descriptive analyses, such as whether the month of implementation or the type of instruction provided impacted student performance.
Coding procedures
Articles were coded using a code sheet developed by The Meadows Center for Preventing Educational Risk (Vaughn et al., 2014). Information gathered included participant characteristics such as at-risk identification, grade, and gender. Study characteristics, design, and descriptions of treatment and comparison groups were also collected. Findings were reported on the code sheet by recording scores from each reported reading outcome measure.
Each article included in the synthesis was independently coded by two graduate students to ensure accuracy and consistency. Before coding the main set of articles, the coders first practiced on two separate studies that were not part of the final review. This step was taken to establish interrater reliability, which exceeded 95%, confirming that both coders were applying the coding framework consistently. Once reliability was established, all articles included in the synthesis were double coded, again maintaining an interrater reliability rate above 95%. Reliability was calculated by dividing the number of agreements by the total number of agreements and disagreements, then multiplying by 100 to obtain a percentage. In cases where the coders initially disagreed, they discussed the discrepancies together until they reached a consensus, ensuring consistency and accuracy in the final coding decisions.
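Expressed as a formula, the agreement calculation described above is simply: interrater reliability (%) = [agreements / (agreements + disagreements)] × 100, so, for example, 97 agreements and 3 disagreements would yield 97% reliability.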
Effect size calculation
Effect sizes were calculated based on the statistical information provided in the included studies. Effect sizes were calculated based solely on the reported reading outcomes from each study, rather than on the overall intervention results when programs addressed multiple subjects or topics. Because all included articles included treatment and control groups, Cohen’s d (Cohen, 1992) was calculated by dividing the difference between the treatment and control group posttest means by the pooled standard deviation. All eligible effect sizes in each study that provided means and standard deviations or other relevant statistics, such as F-test scores, were considered in calculating the weighted mean effect size. When studies reported multiple effect sizes from the same sample, the analysis accounted for the statistical dependencies using the random-effects robust standard error estimation technique developed by Hedges et al. (2010). This analysis considers the correlation between effect sizes from the same sample and allows for clustered data by correcting the study standard errors. The robust standard error technique requires an estimate of the mean correlation (ρ) between all pairs of effect sizes within a cluster in order to calculate the between-study sampling variance estimate, τ². In all analyses, we estimated τ² with ρ = 0.80. Because this review focused on studies in grades 6–12, it was hypothesized that the research body was reporting a distribution of effect sizes with significant between-studies variance, as opposed to a group of studies attempting to estimate one true effect size (Lipsey and Wilson, 2001). Thus, a random-effects model was used for the current study. Robust variance estimation analysis was conducted in R using the robumeta package (Fisher and Tipton, 2015).
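To make the computations above concrete: the standardized mean difference takes the form d = (M_treatment − M_control) / SD_pooled, where SD_pooled is the posttest standard deviation pooled across the two groups. The sketch below illustrates, in outline only, how a correlated-effects robust variance estimation model of the kind described above can be fit with the robumeta package in R; it is not the authors’ actual analysis script, and the data frame and column names (es_dat, study_id, d, var_d) are hypothetical placeholders.

```r
# Minimal sketch (not the authors' script) of correlated-effects robust
# variance estimation (Hedges et al., 2010) using the robumeta package.
library(robumeta)

# es_dat: one row per reading-outcome effect size, with
#   study_id = study (cluster) identifier
#   d        = Cohen's d for the reading outcome
#   var_d    = sampling variance of d
rve_fit <- robu(formula = d ~ 1,        # intercept-only model: weighted mean effect size
                data = es_dat,
                studynum = study_id,    # effect sizes are clustered within studies
                var.eff.size = var_d,
                rho = 0.80,             # assumed within-study correlation, as in the text
                small = TRUE)           # small-sample correction for a small number of clusters

print(rve_fit)                          # weighted mean d, robust SE, and 95% CI
```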
Results
A total of 13 studies either provided an overall effect size or sufficient data to calculate an effect size (see Table 1). The studies included in this corpus represent a fairly heterogeneous group of participants and program types. Because of this, and the overall limited number of articles, a meta-analysis of moderating variables was not possible. We begin by providing a broad overview of the program characteristics in the included studies, followed by a discussion of the range of reported effect sizes in relation to specific study and participant features. A weighted overall effect size using Cohen’s d is reported across all included articles, with average effect sizes and ranges used for the remaining sections. Overall, the effect was moderate and significant, d = 0.22, p = 0.02, 95% CI [0.04, 0.39], across a total of 6,917 participants.
Study features
Study design and years published
This synthesis examined interventions that included treatment and comparison conditions or multiple treatment conditions. All included studies utilized a group design. Most of the included interventions utilized a no-treatment or business-as-usual comparison group (Brown, 2011; Ellers, 2009; Haymon, 2009; Hurwitz, 2022; Opalinski, 2006; Perkins, 2017; Rembert et al., 1986; Somers et al., 2015; Waiksnis, 2014). Four of the studies, however, compared treatment to an alternative treatment (Bottorff, 2010; Glascock, 1999; Ruffu, 2012; Sipe, 1986). Bottorff (2010) compared the effects of a six-week summer program to the prior year’s four-week program to identify whether the longer intervention had a greater impact on student outcomes. In Glascock (1999), a small sample of students already enrolled in a summer program was provided additional Problem-Solving Training (PST) that took place twice a week. Ruffu (2012) took students already enrolled in a summer school program and gave them an additional daily 20 min of a computer-delivered repeated reading program. In Sipe (1986), treatment students participated in daily academic instruction along with 1 month of work experience, while control students received only the work experience. Two studies, Perkins (2017) and Waiksnis (2014), were home-delivered interventions that did not include any school-provided instruction. Two articles had students participate across multiple summers: in Brown (2011), participants received summer intervention for two consecutive summers, while those in Rembert et al. (1986) participated in summer school programs for either two or three consecutive summers.
Interventions included in this synthesis were published between 1986 and 2022. Two interventions met inclusion criteria during the 1980s (Rembert et al., 1986; Sipe, 1986), and just one during the 1990s (Glascock, 1999). From 2000 to 2009, three interventions conducted with secondary students met all inclusion criteria (Ellers, 2009; Haymon, 2009; Opalinski, 2006). The remaining seven interventions were published between 2010 and 2022 (Bottorff, 2010; Brown, 2011; Hurwitz, 2022; Perkins, 2017; Ruffu, 2012; Somers et al., 2015; Waiksnis, 2014).
Participant characteristics
The 13 included studies had a total of 6,917 participants (roughly 52% male, 48% female). All students were identified as at risk in the source articles. At-risk identification included students who were identified as: struggling (Bottorff, 2010; Hurwitz, 2022; Somers et al., 2015; Waiksnis, 2014), scoring five points above or below the proficiency cut score on a state assessment (Brown, 2011), low achieving and struggling (Ellers, 2009), economically disadvantaged (Glascock, 1999; Perkins, 2017; Sipe, 1986; Waiksnis, 2014), in danger of retention (Haymon, 2009; Opalinski, 2006), English Learners (EL) (Ruffu, 2012), minority students (Hurwitz, 2022; Rembert et al., 1986), having low motivation (Rembert et al., 1986), educationally deficient (1–4 years behind) (Sipe, 1986), and reluctant readers (Waiksnis, 2014). Most studies focused on students in the middle grades, 6–8 (Bottorff, 2010; Brown, 2011; Ellers, 2009; Haymon, 2009; Hurwitz, 2022; Perkins, 2017; Opalinski, 2006; Somers et al., 2015; Waiksnis, 2014). Four focused on students in grades 9–11 (Glascock, 1999; Ruffu, 2012; Rembert et al., 1986; Sipe, 1986).
Intervention characteristics
We first present study features by group size and how long the intervention was implemented. Then, we provide a detailed description of the staff who implemented the summer intervention.
Sample size. The group size of the interventions examined varied greatly depending on district size and available funding. Ruffu (2012) had the smallest number of participants (n = 20) but one of the largest reported effect sizes, d = 1.55. Five studies had between 50 and 99 participants (Bottorff, 2010; Glascock, 1999; Haymon, 2009; Hurwitz, 2022; Waiksnis, 2014), with an overall effect size of d = 0.31. Two studies had between 100 and 199 participants (Ellers, 2009; Rembert et al., 1986), with an average effect size of 0.31; one had between 200 and 299 (Sipe, 1986; d = 0.51), and two had between 400 and 499 (Opalinski, 2006; Perkins, 2017; average d = −0.22). Somers et al. (2015) had 919 participants and an effect size of 0.04. Brown (2011) had the largest sample size, with 4,167 participants, and a moderate overall effect size on students’ reading performance of d = 0.16; however, most of these participants were in the control condition (n = 3,705).
Duration of interventions. The total number of sessions delivered varied greatly across the interventions. The two home-based interventions (Perkins, 2017; Waiksnis, 2014) did not have any instructional sessions; instead, students were provided with books to read at home, based on student interest and teacher-selected summer reading books at the student’s Lexile level, and were then assessed on a reading skill at the end of the summer. Ellers (2009) and Sipe (1986) did not provide data on the total number of sessions delivered. Students in Glascock (1999) received the fewest reported sessions at 10, while participants in Ruffu (2012) received 12 total sessions. Brown (2011) and Rembert et al. (1986) both had between 15 and 20 sessions. Hurwitz (2022) did not report the total number of sessions participants received but did report that the intervention ran daily for 4 weeks. Two studies delivered between 20 and 30 sessions (Bottorff, 2010; Opalinski, 2006), while Haymon (2009) and Somers et al. (2015) both provided 30 total sessions to participants.
Dosage of treatment. Like duration of implementation, dosage of treatment (measured by how long participants received instruction) varied widely across articles. Ellers (2009) did not provide any data on how many hours of instruction were provided to participants. Students in Ruffu (2012) received just 4 h of instruction, while participants in Glascock (1999) received just over eight and a half hours of intervention. The next fewest hours of instruction were provided to students in Rembert et al. (1986), who received 13 total hours. Hurwitz (2022) did not provide the maximum number of hours of instruction students received but reported that students participated in the intervention for an average of 11 and a half hours. Participants in Brown (2011), Bottorff (2010), and Somers et al. (2015) all received roughly 30–31 total hours of instruction. Opalinski (2006) delivered 46 h of instruction, and Haymon (2009) provided 60 h of intervention. Sipe (1986) provided the most intervention time at 90 total hours; however, that total covered both reading and math instruction and was not disaggregated by subject.
Instruction implementers. Eight of the included studies required students to receive language arts instruction delivered by either a certified teacher or other school staff, such as a school counselor or administrator (Bottorff, 2010; Brown, 2011; Ellers, 2009; Haymon, 2009; Hurwitz, 2022; Opalinski, 2006; Sipe, 1986; Somers et al., 2015). Problem-solving training in Glascock (1999) was provided by staff who either had a graduate degree in counseling, were current graduate counseling students, had a graduate degree in special education, or had prior work experience with at-risk adolescents. Rembert et al. (1986) described the staff who implemented the intervention as “trained camp counselors” but did not provide any additional details. Interventions delivered by a trained individual, whether a certified teacher, professional school staff, researcher, or trained camp staff, reported effect sizes ranging from d = −0.23 to 0.83.
Students in Ruffu (2012) received instruction provided through a computer program overseen by the researcher, with an effect of d = 1.55. The remaining two studies, Perkins (2017) and Waiksnis (2014), were home-delivered interventions in which students did not receive any direct language arts instruction or participate in any teacher-led instructional activities. In these studies, students were given books and worksheets to complete at home over the course of the summer, while students in the control condition did not receive anything. Effects of the home-delivered interventions on students’ reading outcomes across the two studies ranged from d = −0.80 to 0.11. The following section presents general average effect sizes on reported reading outcomes across some of the specific study characteristics summarized above.
Intervention outcomes
Teacher-led single-component interventions
Two teacher-led interventions utilized a single-component design (interventions that use only one instructional strategy or element) and had an average effect size of d = 0.80 (Glascock, 1999; Ruffu, 2012). Both articles were included in the synthesis because the steps learned were practiced on reading skills during treatment and a reading assessment was given at pre- and posttesting. In Ruffu (2012), students spent 20 min a day, 4 days a week, for 4 weeks using a repeated reading with speech recognition computer program designed to improve students’ reading fluency. The Dragon NaturallySpeaking® (DNS) computer program was used; it incorporates a sequence of internal and external events to aid in developing students’ fluency skills. The sequence is as follows: (a) visual reception of text; (b) student retrieval from semantic memory; (c) activation of the phonological processor; (d) articulation of a word, phrase, or sentence; (e) immediate visual feedback on the articulation through speech-to-text; (f) comparison and evaluation of the speech-to-text output against the standard; and (g) recoding of poor productions to more closely match the given standard. Daily sessions included: (a) dictation of part of the previous day’s passage; (b) listening to and silently reading a new passage; (c) dictating and correcting individual sentences from the new passage in the speech recognition environment; (d) dictating the new passage as a whole, without making corrections; and (e) listening to their own voice-recorded dictation. Ruffu (2012) reported the largest effect size of all articles, d = 1.55. However, the only reading skill assessed was reading fluency, which tends to show larger effect sizes when targeted compared to other reading skills (Kim and Quinn, 2013; Wanzek et al., 2010).
Another single-component intervention was Glascock (1999), in which students received Problem-Solving Training (PST) (D'Zurilla and Nezu, 2010) in addition to the academic summer program in which they were already participating. PST was given twice a week for 50 min and lasted for 10 sessions. Training consisted of helping students identify potential problems, academic and otherwise, as well as possible solutions, through five steps: (1) problem orientation, (2) problem definition, (3) generation of alternative solutions, (4) decision making, and (5) solution implementation. PST uses both active and reflective learning activities in which students apply the five-step process to case studies representing real-life problems. Participants were also encouraged to use the problem-solving techniques in their own lives and in the rest of their summer program classes. Glascock (1999) reported a mean effect size of d = 0.05 on reading outcomes. In both Ruffu (2012) and Glascock (1999), students in the comparison condition remained in the regular summer school classroom during this time, but information on the main summer school instruction or curriculum was not provided.
Teacher-led multi-component interventions
Nine teacher-led interventions utilized a multi-component design (interventions that combine two or more instructional strategies or elements delivered together), with an overall average effect size of d = 0.27. The majority of these interventions included instructional time focused on language arts and math (Bottorff, 2010; Brown, 2011; Ellers, 2009; Haymon, 2009; Opalinski, 2006; Rembert et al., 1986; Sipe, 1986; Somers et al., 2015). Hurwitz (2022) was a reading summer program in which students received daily instruction utilizing the Newsela online platform to support teacher-led instruction; Newsela allowed the classroom teacher to customize daily reading passages and activities based on students’ reading levels and interests. A few of the programs provided full-day instruction that also included subjects such as writing (Brown, 2011; Ellers, 2009; Haymon, 2009; Opalinski, 2006; Rembert et al., 1986; Somers et al., 2015), science (Rembert et al., 1986), and even art (Bottorff, 2010; Somers et al., 2015). Two interventions also provided activities such as seminars, community engagement, and field trips once a week (Rembert et al., 1986; Somers et al., 2015). All students, treatment and control, in Sipe (1986) received 1 month of paid summer work.
Student-led interventions
Waiksnis (2014) and Perkins (2017) conducted student-led, home-based interventions. The average effect size for student-led interventions was d = −0.17, the only negative overall effect size across study characteristics. This was in large part driven by the Perkins (2017) study, which reported an overall average effect size of −0.33 across all participants. In both studies, students received books to take home during the summer months. In Waiksnis (2014) (d = 0.62), students were given 10 books that they selected based on their own interests; two additional books were selected by the student’s teacher from the summer reading list. There were no additional required assignments or projects. Students in Perkins (2017) received 6–8 books, one every 10 days. Books were assigned based on student interest and independent reading level, with the teacher using the Lexile framework for reading to match books to students. In addition to receiving books, students also received 6–7 postcards from their teacher with literacy suggestions on how to better interact with the text.
Features of the interventions and student outcomes
Intervention duration
Interventions that were 4 weeks or less in duration reported a range of d = 0.07–1.55, with an average effect size of 0.44 across four studies (Glascock, 1999; Hurwitz, 2022; Rembert et al., 1986; Ruffu, 2012). This was the largest average effect across any duration. However, much of this average was driven by the Ruffu (2012) intervention, which reported only on a measure of fluency and had an unusually large effect size of 1.55. If this study is removed, interventions of 4 weeks or less drop to an average of d = 0.22, which would make them the second-largest average effect on student reading outcomes. Total reading instruction time in this group ranged from 4 h up to 20 h, and the total number of sessions ranged from 10 to 20. Frequency of treatment each week also varied, with students in Glascock (1999) receiving instruction just twice a week, while participants in Hurwitz (2022), Rembert et al. (1986), and Ruffu (2012) met daily.
Four interventions had a duration of between 6 and 8 weeks, with a range of d = −0.12–0.56 and an average of d = 0.27 across the four studies (Bottorff, 2010; Haymon, 2009; Opalinski, 2006; Sipe, 1986). This group had the most variance in total treatment time. Total hours students participated in treatment were 29 (Bottorff, 2010), 46 (Opalinski, 2006), 60 (Haymon, 2009), and 90 (Sipe, 1986). There was less variance in the number of treatment sessions provided: Opalinski (2006) had 23 sessions and Bottorff (2010) had 29 sessions, closely followed by Haymon (2009) with 30 sessions. Sipe (1986) did not provide information on total sessions delivered. In all of these studies, sessions were delivered daily.
Interventions that were 5 weeks in duration had a range of d = −0.23 to 0.83 and reported the smallest gains, with an average of d = 0.14 (Brown, 2011; Ellers, 2009; Somers et al., 2015). Ellers (2009) did not provide any descriptive information on the frequency of the intervention, how long each session lasted, or how many sessions participants received. The other two articles (Brown, 2011; Somers et al., 2015) had similar total hours of treatment, with 33 h 15 min and 30 h, respectively. However, students received just 19 daily sessions in Brown (2011), while students in Somers et al. (2015) received 30 daily sessions.
The final two interventions (Perkins, 2017; Waiksnis, 2014) were conducted over the course of the entire summer and reported an overall negative effect size.
Student grade level
Treatment effects for students in middle school (grades 6–8) were nearly identical to those for students at the high school level (grades 9–12) (d = 0.38 vs. d = 0.39). Neither of these was considered statistically significantly different from its comparison condition.
Five studies disaggregated data at the 6–8 grade level. Bottorff (2010) and Hurwitz (2022) also focused on students in middle school but did not disaggregate the data by grade level. Effect sizes for students in the 6th grade ranged from d = −0.13 to 1.49. At the 7th grade level, visual inspection reveals an overall negative directionality for participating students compared to students in control conditions, with effect sizes ranging from d = −0.80 to 0.15. Treatment students in 8th grade demonstrated effects ranging from d = −0.46 to 0.83 across two studies.
At the high school level, Sipe (1986) worked exclusively with 9th graders and reported outcomes for two groups, Hispanic and non-Hispanic participants. Hispanic treatment students had an effect size of d = 0.46, while non-Hispanic students had an effect size of d = 0.56. Glascock (1999) also worked with students in the 9th grade and reported mean pre-post-test changes for treatment and control students on the SAT-9: students in treatment averaged a decrease of 11.02 points, while those in the control condition averaged a decrease of 8.92 points on the SAT-9 reading measure. Just one study, Rembert et al. (1986), reported effect sizes for students in the 10th grade (d = 0.43). Ruffu (2012) reported only that participants were in grades 9–11, with an overall effect size of d = 1.55; this intervention measured students’ words-per-minute reading rates and was just a small part of a larger summer school program.
Follow-up effects
Five studies included analysis of a follow-up measure (Brown, 2011; Ellers, 2009; Haymon, 2009; Opalinski, 2006; Perkins, 2017). These follow-up measures occurred from a few months after treatment completion up to 2 years later. In Ellers (2009), students’ scores on the Language Arts Alaska Standards Based Assessment (SBA) were tracked for the next three testing periods. Treatment had a minimal positive effect of d = 0.02 the year the summer school treatment was provided, an effect size of d = −0.20 two years after treatment, and d = 0.02 three years after treatment, compared to the control condition; none of these effects were statistically significant. Reading scores on the Missouri Assessment Program® one year after students completed the summer intervention were analyzed in Haymon (2009) to identify whether there were any long-term effects of the summer intervention. Treatment students had a mean score 2.20 points higher than their control condition peers, an effect size of d = 0.28, which was not statistically significant. Students’ scores on the end-of-year reading assessments in Brown (2011) were also analyzed and yielded effect sizes ranging from d = 0.158 to 0.198. In the Opalinski (2006) study, students were given the CAT6/TerraNova® assessment in April of the year they received treatment and again the following school year. In April of the year the intervention was given, treatment had an effect size of d = 0.15, p > 0.05. When the CAT6/TerraNova was given again in April the following school year, treatment had an effect of d = 0.42 on treatment students’ performance between test 1 and test 2, which was statistically significant. However, when results were compared between treatment and control conditions, effects were not statistically significant (d = 0.15). The last study that administered a follow-up measure was the home-based intervention of Perkins (2017); treatment at follow-up had an effect size range of d = −0.48 to 0.05.
Discussion
While the original goal of this synthesis was to address questions related to the effects of summer interventions based on program characteristics (such as program intensity, instructional practices, and timing and length of implementation), we were limited in the questions we were able to address by the number and quality of studies identified. However, valuable information was still gained through this synthesis that can more broadly support the continued use of summer programs for at-risk readers in the secondary grades. Results from this synthesis suggest that students with or at risk of reading difficulties can improve on reading outcomes through participation in summer instruction. The Lauer et al. (2006) and Kim and Quinn (2013) syntheses reported that summer school programs had overall effect sizes of d = 0.05 and d = 0.10, respectively. The results of this synthesis show a wide range of reported effect sizes, between −0.80 and 1.55, with an overall ES of d = 0.22, p = 0.02. Overall, they highlight that summer interventions can lead to improved reading outcomes for students at risk of reading difficulties, but grade level may play a moderating role in their effectiveness. These findings are closer to those reported in Cooper et al. (2000), which had an overall ES of d = 0.26. However, due to the heterogeneous demographics of participants and the limited number of included articles in this review, results should be interpreted with caution.
The Lauer et al. (2006) synthesis found an effect size of d = 0.09 across nine interventions conducted with students in grades 6–8, which was significantly smaller than the effect of d = 0.25 reported for the two interventions with high school students. The present synthesis, however, found that students in the middle and high school grades performed nearly identically on reading outcome measures (d = 0.37 and d = 0.39). Like the moderator analysis conducted in the Lauer et al. (2006) synthesis, the findings of this synthesis suggest that interventions with longer duration do not automatically lead to significantly improved outcomes compared to interventions conducted for fewer hours and across fewer days. These findings suggest that total time of instruction plays a larger role in improving outcomes than duration of instruction does.
This synthesis also demonstrates findings consistent with the Cooper et al. (1996) meta-analysis, which reported that, except for students in 4th grade, reading achievement declined at greater rates with each passing grade. While their analysis stopped with students in 8th grade, this synthesis suggests the opposite trend may also be true, with students demonstrating an ability to make notable reading gains over the summer months as grade level increased. When examining effect sizes at the individual grade level, all grades showed average positive gains, with the exception of 7th grade. Notably, students in the middle and high school grades demonstrated similar levels of summer reading improvement. The studies did not offer any explanation for the minimal performance observed among 7th graders.
Implications for practice
The field can draw some possible conclusions about the Faucet Theory and summer loss for older struggling readers based on the findings of this synthesis. The Faucet Theory is based on the idea that summer loss is largely explained by educational resource access being available only to some students. However, it was the interventions that provided not only academic resources but also some type of actual academic instruction that had the largest impact on student outcomes. Of the 13 included articles, two were home-based interventions (Waiksnis, 2014; Perkins, 2017) that did little more than provide books to the students. If the Faucet Theory were true, we would expect to see these students outgaining their control condition peers on reading measures given at posttesting, or at the least showing minimized summer losses compared to the control condition. However, control condition peers outperformed treatment students on almost every reading measure given at post-assessment across grade levels in both studies. Similar home-based interventions conducted in younger grades have shown positive effects in minimizing summer loss, d = 0.12 (Kim and Quinn, 2013), and additional research is needed to better understand the lack of effect these home-based interventions have on secondary students. A possible explanation is that summer school interventions routinely group together students with a variety of “at-risk” identifying characteristics. Since a major justification of summer programs is to provide academic materials and supports to students who may not have access to them, students whose risk factors extend beyond lack of access to materials (i.e., single-parent household, adverse home/community life, EL status, etc.) may not experience success with a single-component approach to summer support. A more individually sensitive consideration of useful supports may help lead to greater effect sizes and student advancement.
Limitations
This synthesis has limitations that should be noted. Like Lauer et al. (2006), this synthesis was limited in the number of articles, specifically at the 9–12 grade level. Whereas the Lauer review included just two studies with students at these grade levels, this synthesis included four. Given the limited number of included articles focused on students in these grade levels, results should be considered with caution. Although students in these grade levels showed an average moderate effect size, until additional studies are conducted with students in these grades, results should not be generalized across settings and participants.
The quality of the included articles may limit the reliability of the effects found. Only three of the articles were published in peer-reviewed journals, leaving the others subject to less examination and critical review and opening the possibility of methodological errors and reporting bias. The lack of peer review may be one reason for the limited treatment and program details and findings observed in most of the articles. Standards for high-quality research, such as reporting information about attrition and baseline equivalence, were commonly omitted from the source articles. Articles published in peer-reviewed journals should undergo such rigorous examination, and therefore most of the included studies in this synthesis would not meet What Works Clearinghouse (2022) standards. Study quality and type of publication were also both noted as moderating effects in the Lauer et al. (2006) and Cooper et al. (2000) syntheses.
The findings of this synthesis are also limited by the lack of detail about summer program components in many of the included articles. Details on how the students were taught (e.g., explicit vs. non-explicit instruction, programs and instruction delivered, or even group size) were missing in most of the articles. This lack of information is a significant barrier to understanding which intervention components are most effective for mitigating summer loss for at-risk secondary readers.
The lack of follow-up measures in most of the included articles also limits this synthesis’s ability to address the long-term effects of summer programs, with just Haymon (2009) and Brown (2011) reporting such scores. If all progress demonstrated by the intervention is lost in the weeks of no instruction following treatment, valuable time and resources have been wasted. Immediate gains should not be confused with, or generalized to, desired long-term outcomes. Due to the absence of follow-up measures, it is not possible through this synthesis to confirm or reject the expected effects of the Faucet Theory on summer loss.
Implications and future research
This synthesis still provides some valuable implications for secondary grade educators and administrators, as well as future researchers. First, studies suggest that longer intervention duration was not associated with increased student gains; rather, total time of instruction may have been more relevant to student outcomes. A similar observation was reported in the Lauer et al. (2006) review. When determining the length of summer school programs, schools must consider the total time of instruction students are to receive in addition to the length of implementation. Second, schools can have confidence that an effective way to minimize summer loss for at-risk populations is summer school. At-risk populations have been shown to lack access to resources and materials while not in school for extended periods of time. Summer school can temporarily turn the faucet of information and access back on, allowing these students to reinforce previously acquired information. There is potential for schools to reduce the need for additional supports during the school year by implementing effective summer school interventions, saving time, money, and effort. Summer instruction can also eliminate the need for teachers to spend the first month(s) of the new school year re-teaching information that students had previously acquired and then lost over the summer. If teachers and students can immediately move into new academic material, every student has the potential to gain up to two additional months of new instruction. As noted previously, it is critical that schools provide teacher-delivered instruction as part of the summer curriculum and not just provide access to educational resources.
This synthesis also points to the need for future research. The lack of studies of effective summer interventions for students in grades 6–12 must be noted, and this need is even greater when the quality of the existing research is considered. This is a population of students at increased risk of dropping out of school, and quality research into a potential tool to help combat this is invaluable to the field.
Conclusion
Overall, this synthesis underscores the need for additional research into summer interventions for at-risk students in the secondary grades. Only 13 studies since 1986 have either focused exclusively on, or disaggregated data sufficiently to permit evaluation of, summer interventions with a control condition for students in grades 6–12. Historically, it was understood that all struggling students, especially those from poorer families, were at greater risk of summer learning loss than students from higher socioeconomic classes. However, recent studies question whether all at-risk readers suffer summer learning loss (Downey et al., 2004; Herrera et al., 2011; Kuhfeld, 2019). Given these recent studies, further research is needed at the secondary grade level to identify whether these newer findings can also be observed in the higher grades.
Author contributions
JDi: Writing – review & editing, Writing – original draft. PC: Writing – review & editing. JDa: Writing – review & editing. ED: Writing – review & editing. AC: Writing – review & editing.
Funding
The author(s) declare that no financial support was received for the research and/or publication of this article.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The authors declare that no Gen AI was used in the creation of this manuscript.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Alexander, K. L., Entwisle, D. R., and Olson, L. S. (2001). Schools, achievement, and inequality: a seasonal perspective. Educ. Eval. Policy Anal. 23, 171–191. doi: 10.3102/01623737023002171
Alexander, K. L., Entwisle, D. R., and Olson, L. S. (2007). Summer learning and its implications: insights from the beginning school study. New Dir. Youth Dev. 2007, 11–32. doi: 10.1002/yd.210
Alvermann, D. E. (2002). Effective literacy instruction for adolescents. J. Lit. Res. 34, 189–208. doi: 10.1207/s15548430jlr3402_4
Bottorff, A. K. (2010). Evaluating summer school programs and the effect on student achievement: the correlation between Stanford-10 standardized test scores and two different summer programs. Dissertations 536. doi: 10.33915/etd.9358
Brown, S. P. (2011). An evaluation of an extended learning opportunity in mathematics and reading for targeted at-risk middle school students (Order No. 3432390). Available from ProQuest Dissertations & Theses Global; Social Science Premium Collection. (822245142). Available online at: https://unk.idm.oclc.org/login?url=https://www.proquest.com/dissertations-theses/evaluation-extended-learning-opportunity/docview/822245142/se-2
Catts, H. W., Compton, D., Tomblin, J. B., and Bridges, M. S. (2012). Prevalence and nature of late-emerging poor readers. J. Educ. Psychol. 104:166. doi: 10.1037/a0025323
What Works Clearinghouse (2022). WWC procedures and standards handbook (version 3.0). Washington, DC: US Department of Education, Institute of Education Sciences.
Cohen, J. (1992). Statistical power analysis. Curr. Dir. Psychol. Sci. 1, 98–101. doi: 10.1111/1467-8721.ep10768783
Contesse, V. A., Campese, T., Kaplan, R., Mullen, D. A., Pico, D. L., Gage, N. A., et al. (2021). The effects of an intensive summer literacy intervention on reader development. Read. Writ. Q. 37, 221–239. doi: 10.1080/10573569.2020.1765441
Cooper, H., Charlton, K., Valentine, J. C., Muhlenbruck, L., and Borman, G. D. (2000). Making the most of summer school: a meta-analytic and narrative review. Monogr. Soc. Res. Child Dev. 65, i–127. doi: 10.2307/1170523
Cooper, H., Nye, B., Charlton, K., Lindsay, J., and Greathouse, S. (1996). The effects of summer vacation on achievement test scores: a narrative and meta-analytic review. Rev. Educ. Res. 66, 227–268. doi: 10.3102/00346543066003227
Downey, D. B., Von Hippel, P. T., and Broh, B. A. (2004). Are schools the great equalizer? Cognitive inequality during the summer months and the school year. Am. Sociol. Rev. 69, 613–635. doi: 10.1177/000312240406900501
D'Zurilla, T. J., and Nezu, A. M. (2010). “Problem-solving therapy” in Handbook of cognitive-behavioral therapies. ed. K. S. Dobson. 3rd ed (Guilford Press), 197–225.
Ellers, S. L. (2009). The effects of a standards-based middle-level summer school program as an intervention to increase academic achievement as measured by standards-based assessments (Order No. 3380480). Available from ProQuest Dissertations & Theses Global; Publicly Available Content Database. (305162836). Available online at: https://unk.idm.oclc.org/login?url=https://www.proquest.com/dissertations-theses/effects-standards-based-middle-level-summer/docview/305162836/se-2
Entwisle, D. R., Alexander, K. L., and Olson, L. S. (2001). Keep the faucet flowing: summer learning and home environment. Am. Educ. 25, 10–15.
Every Student Succeeds Act, 20 U.S.C. § 6301 (2015). Available online at: https://www.congress.gov/114/plaws/publ95/PLAW-114publ95.pdf
Fisher, Z., and Tipton, E. (2015). robumeta: an R-package for robust variance estimation in meta-analysis. arXiv preprint arXiv:1503.02220. Available online at: https://doi.org/10.48550/arXiv.1503.02220
Glascock, P. C. (1999). The effects of problem-solving training on self-perception of problem-solving skills, locus of control, and academic competency for at-risk adolescents : Arkansas State University.
Hart, P. (2005). Rising to the challenge: Are high school graduates prepared for college and work? Washington, DC: Achieve Inc.
Haymon, G. D. (2009). The impact of summer school on student academic achievement (Order No. 3354741). Available from ProQuest Dissertations & Theses Global; Publicly Available Content Database. (305082727). Available online at: https://unk.idm.oclc.org/login?url=https://www.proquest.com/dissertations-theses/impact-summer-school-on-student-academic/docview/305082727/se-2
Hedges, L. V., Tipton, E., and Johnson, M. C. (2010). Robust variance estimation in meta-regression with dependent effect size estimates. Res. Synth. Methods 1, 39–65. doi: 10.1002/jrsm.5
Herrera, C., Linden, L. L., Arbreton, A. J., and Grossman, J. B. (2011). Testing the impact of Higher Achievement’s year-round out-of-school-time program on academic outcomes : Public/Private Ventures.
Hodge, L., Little, A., and Weldon, M. (2021). GCSE attainment and lifetime earnings : UK Department for Education.
Kim, J. S., and Quinn, D. M. (2013). The effects of summer reading on low-income children’s literacy achievement from kindergarten to grade 8: a meta-analysis of classroom and home interventions. Rev. Educ. Res. 83, 386–431. doi: 10.3102/0034654313483906
Kuhfeld, M. (2019). Surprising new evidence on summer learning loss. Phi Delta Kappan 101, 25–29. doi: 10.1177/0031721719871560
Kuhfeld, M., Soland, J., Tarasawa, B., Johnson, A., Ruzek, E., and Liu, J. (2020). Projecting the potential impact of COVID-19 school closures on academic achievement. Educ. Res. 49, 549–565. doi: 10.3102/0013189x20965918
Kuhfeld, M., and Tarasawa, B. (2020). The COVID-19 slide: what summer learning loss can tell us about the potential impact of school closures on student academic achievement. NWEA: Brief.
Lauer, P. A., Akiba, M., Wilkerson, S. B., Apthorp, H. S., Snow, D., and Martin-Glenn, M. L. (2006). Out-of-school-time programs: a meta-analysis of effects for at-risk students. Rev. Educ. Res. 76, 275–313. doi: 10.3102/00346543076002275
Lewis, K., and Kuhfeld, M. (2022). Progress toward pandemic recovery: Continued signs of rebounding achievement at the start of the 2022–23 school year. Brief : Center for School and Student Progress at NWEA.
Misanko, T. L. (2024). The perceived influence of a school counseling small group intervention on first-time at-risk high school freshmen: a comparative case study (Doctoral dissertation): Southern Wesleyan University.
National Center for Education Statistics (2024). The nation's report card: 2024 mathematics and reading assessments : U.S. Department of Education, Institute of Education Sciences.
Opalinski, G. B. (2006). The effects of a middle school summer school program on the achievement of NCLB identified subgroups (Order No. 3224110). Available from ProQuest Dissertations & Theses Global. (305243000).
Quinn, D., and Polikoff, M. (2017). Summer learning loss: what is it, and what can we do about it? Washington, DC: Brookings Institution.
Reed, D. K., Aloe, A. M., Reeger, A. J., and Folsom, J. S. (2019). Defining summer gain among elementary students with or at risk for reading disabilities. Except. Child. 85, 413–431. doi: 10.1177/0014402918819426
Rembert, W. I., Calvert, S. L., and Watson, J. A. (1986). Effects of an academic summer camp experience on black students' high school scholastic performance and subsequent college attendance decisions. Coll. Stud. J. 20, 374–384.
Ruffu, R. (2012). Developing oral reading fluency among Hispanic high school English-language learners: an intervention using speech recognition software : University of North Texas.
Russell, J., and Drake Shiffler, M. (2019). How does a metalinguistic phonological intervention impact the reading achievement and language of African American boys? Read. Writ. Q. 35, 4–18. doi: 10.1080/10573569.2018.1535774
Sipe, C. L. (1986). Summer training and education program (STEP): the experience of Hispanic participants in the summer of 1985.
Somers, M. A., Welbeck, R., Grossman, J. B., and Gooden, S. (2015). An analysis of the effects of an academic summer program for middle school students. New York: MDRC, March.
U.S. Department of Education (2005). The nation’s report card: an introduction to the national assessment of educational progress (NAEP). Jessup, MD: ED Pubs.
United States (1965). Elementary and secondary education act of 1965: H. R. 2362, 89th Cong., 1st sess., Public law 89–10. Reports, bills, debate and act. Washington: U.S. Govt. Print. Off.
Vaughn, S., Elbaum, B. E., Wanzek, J., Scammacca, N., and Walker, M. A. (2014). Code sheet and guide for education-related intervention study syntheses : Meadows Center for Preventing Educational Risk.
von Hippel, P. T., and Hamrock, C. (2019). Do test score gaps grow before, during, or between the school years? Measurement artifacts and what we can know in spite of them. Sociol. Sci. 6, 43–80. doi: 10.2139/ssrn.2745527
von Hippel, P. T., Workman, J., and Downey, D. (2018). Inequality in reading and math skills forms mainly before kindergarten: a replication, and partial correction, of “are schools the great equalizer?”. Sociol. Educ. 91, 323–357. doi: 10.1177/0038040718801760
Waiksnis, M. (2014). An evaluation of a summer reading program at a Public Middle School in a Southeastern State : Gardner-Webb University.
Wanzek, J., Wexler, J., Vaughn, S., Swanson, E., Edmonds, M. A., and Kim, A. (2010). Reading interventions for struggling readers in the upper elementary grades: a synthesis of 20 years of research. Read. Writ. 23, 889–912. doi: 10.1007/s11145-009-9179-5
Keywords: summer loss, summer school, reading difficulties, interventions, secondary grades
Citation: Dille JT, Capin P, Daniel J, Dille EA and Cahill AS (2025) A synthesis of the effects of summer interventions on secondary students with or at-risk of reading difficulties. Front. Educ. 10:1612484. doi: 10.3389/feduc.2025.1612484
Edited by:
Hung Jen Kuo, Michigan State University, United States
Reviewed by:
Lisa B. Hurwitz, Newsela, United States
Anna Zamkowska, Casimir Pulaski Radom University, Poland
Copyright © 2025 Dille, Capin, Daniel, Dille and Cahill. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Jordan T. Dille, dillej@unk.edu
†ORCID: Jordan T. Dille, orcid.org/0000-0002-5110-8973
Phil Capin, orcid.org/0000-0003-4955-9879
Johny Daniel, orcid.org/0000-0002-5057-9933
Elena A. Dille, orcid.org/0009-0007-1339-1038
Alice S. Cahill, orcid.org/0000-0002-5404-6675