POLICY BRIEF article

Front. Educ., 18 January 2023

Sec. Higher Education

Volume 8 - 2023 | https://doi.org/10.3389/feduc.2023.1002934

Evaluation of access and participation plans: Understanding what works

  • 1. College of Health and Life Sciences, Aston University, Birmingham, United Kingdom

  • 2. Strategic Planning Office, University of Wolverhampton, Wolverhampton, United Kingdom

  • 3. Government Relations and Policy, Aston University, Birmingham, United Kingdom

  • 4. Directorate of Student Engagement Evaluation and Research, Sheffield Hallam University, Sheffield, United Kingdom


Abstract

We present an analysis of two current policy options to improve evaluation of access and participation work: independent external evaluation vs. in-house evaluation. Evaluation of access and participation work needs to be well conducted, objective and widely disseminated, regardless of the outcome. Independent external evaluation is likely to provide objectivity and the right skills, but providing effective and timely feedback may be prohibitively expensive. Without support, in-house practitioner teams risk lacking both objectivity and the necessary skills. Neither external nor in-house evaluation is likely to solve issues of publication bias; use of open science principles could help. Working with academics and other experts internal to the institution could provide the skills to work well under the open science framework. Working as a sector to avoid duplication of effort is likely to get us further, faster.

Introduction

Inequality of educational opportunity can have a long-term impact on later life chances (e.g., James et al., 2008; Education Policy Institute, 2018). In the UK, successive governments have attempted to address such inequalities through various agendas to improve social mobility – for example through the establishment of the Social Mobility and Child Poverty Commission in 2010 (now called the Social Mobility Commission), and latterly through the ‘levelling up’ programme, with its particular focus on skills development as a pathway toward securing rewarding employment (e.g., HM Government, 2022). Participation in Higher Education (HE) has often played a role in such agendas, with widening participation programmes being employed to encourage those who might otherwise not have considered HE to do so and to support the raising of attainment in schools. In England, university outreach teams – often working in collaboration with schools, colleges, employers and third sector organisations – have driven such initiatives under requirements and regulations set out from 2018 by the Office for Students (the English HE regulator) and from 2006 to 2018 by the Office for Fair Access (OFFA). Resource allocations to these initiatives are large and – in the main – funded from tuition-fee income, so the stakes are high; the UK Government anticipated spend on widening participation by the English HE sector in 2020–2021 to reach around £860 m (Secretary of State for Education, 2018). However, given the resources allocated and the recent drives for improvement, knowledge of what interventions work remains remarkably sparse (see, e.g., Skilbeck, 2000; Gorard and Smith, 2006; Gorard et al., 2006, 2012; Younger et al., 2019; Robinson and Salvestrini, 2020; Austen et al., 2021).

Programme interventions to widen access to HE are typically delivered longitudinally, over at least one academic year and often more, some beginning at primary school age – although shorter interventions such as campus visits and taster classes are also offered. Such programmes usually comprise information, advice and guidance, application support, subject taster sessions, and campus visits; some interventions include residential summer schools and mentoring by current undergraduates. Successful evaluation of a programme is embedded from the design stage and commonly rests on a comprehensive ‘theory of change’ (see, e.g., Barkat, 2019; Dent et al., 2022). The ‘theory of change’ is a model which hypothesises how and why any given intervention should work, mapping the expected outputs of the programme of activities and the outcomes that can be measured to evaluate success. For example, the outputs of a programme of activities might be self-reports of increased knowledge and confidence in the ability to apply to HE, whereas the outcomes could be receiving an offer or eventual enrolment. Additionally, implementation and process evaluation should be carried out to understand how well the delivery of the intervention has gone and to help determine which parts of the programme have contributed toward its overall success (or lack thereof). This allows improvements to be made, often rapidly.
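
As a purely illustrative sketch, the mapping that a theory of change makes explicit can be written down as a simple data structure; the activity, output and outcome labels below are hypothetical examples echoing the one in the text, not taken from any published programme.

# Hypothetical theory of change for an outreach programme (illustrative only).
theory_of_change = {
    "activities": ["subject taster sessions", "campus visits", "undergraduate mentoring"],
    "outputs": [
        "self-reported increase in knowledge of HE",
        "self-reported confidence in the ability to apply",
    ],
    "outcomes": ["receives an offer", "enrols in HE"],
    "assumptions": ["greater knowledge and confidence translate into applications"],
}

# A process evaluation checks that each activity was delivered as planned; an
# outcome evaluation tests whether the outputs and outcomes moved as hypothesised.
for stage in ("activities", "outputs", "outcomes"):
    print(f"{stage}: {', '.join(theory_of_change[stage])}")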

Although policy makers were initially more interested in tracking and monitoring spend (e.g., Office for Fair Access [OFFA], 2004), improving evaluation of access and participation work has been on the English policy agenda for some time. As early as 2008, the Higher Education Funding Council for England (the body responsible for oversight of English Higher Education prior to the creation of the Office for Students) outlined that its ‘Aimhigher’ partnerships (outreach consortia) needed to evaluate their own work (Higher Education Funding Council, 2008). Following on from Professor Sir Les Ebdon, Director of Fair Access at the Office for Fair Access (OFFA), Professor Chris Millward – as Director of Fair Access and Participation at the Office for Students (OfS) from 2018 to 2021 – continued efforts to improve evaluation of access and participation work and encouraged practitioners to evaluate rigorously and objectively (Office for Students, 2018). Higher Education providers and collaborative programmes, such as UniConnect, were strongly encouraged to produce theories of change, and tools and resources were produced to support practitioners. These included, for example, a financial support evaluation toolkit, an evaluation self-assessment tool and – in 2019 – the creation of a ‘what works’ centre (now known as TASO: Centre for Transforming Access and Student Outcomes). The approach taken was therefore to upskill HE provider teams and work together as a sector to better understand what works, i.e., an ‘in-house’ approach.

The new Director of Fair Access and Participation, John Blake, appointed in November 2021, came in with strong intentions to further improve evaluation of access and participation work, observing that despite 20 years or more of this work, we have nowhere near 20 years’ worth of evidence about what works. Critically, Blake said “But we expect the projects committed to in access and participation plans to be evaluated, for those evaluations to be independent, and for them to be published” (Office for Students, 2022a). It is assumed here that independent evaluation means evaluation by a third party not directly employed by the education provider (i.e., an external approach), although details of how this might work have not thus far been provided. John Blake (TASO International Conference, 2022) did acknowledge that this “needs thought about doing it correctly, so that we do not end up incurring vast additional cost” as well as being keen to avoid the appearance of “institutions marking their own homework.” This has raised questions over the previous direction taken by many universities and colleges of upskilling, employing evaluation specialists, setting up specialist in-house units, and partnering with TASO to improve evaluation. It has also elicited concerns that the change of direction may be premature, the previous policy not having been given time to work for projects whose outcome data take longer than a year to collect. For example, university enrolment data from HESA are typically not released until 15–18 months after a student begins their course, internal student retention data would be available no sooner than 12 months after enrolment, and final attainment data in terms of degree classification could take 5 years or more. Below we consider the advantages and disadvantages of each policy option and propose a possible alternative way forward.

Policy options and implications

During the period of access regulation to date, the setting of a clear regulatory direction has been continually hampered by an unresolved ambiguity over the espoused purpose of this evaluation. Regulatory guidance has emphasised the need both for value for money / return on investment assessments (particularly following the 2008 financial crash and the imposition of an austerity regime) and for the identification and sharing of best practice. This dual approach is typified by Professor Sir Les Ebdon’s suggestion that there was an increased need for evidence and evaluation to ‘improve understanding of what works best, share best practice across the sector and demonstrate to Government the value of investment in this area’ (Office for Fair Access [OFFA], 2013). Yoking these two objectives together obscured a fundamental distinction between ‘black box’ evaluation approaches (quasi-scientific and trial-based designs) intended to identify the ‘effects of causes’ (Dawid, 2007) and produce robust evidence to support decision-making (e.g., about value for money), and theory-driven evaluation focused on exploring the ‘causes of effects’ and understanding how and why change happened, the better to support practice development (Dawid, 2007; TASO, 2022). The different approaches necessarily invoke different methodologies and philosophical commitments.

Irrespective of the purpose of evaluation, two main policy options for improving sector-wide evaluation and knowledge about what works have thus far been espoused; these can be divided into an ‘internal’ vs. an ‘external’ approach. The first is to upskill ‘in-house’ practitioners and provide sector-wide support: the road Les Ebdon and Chris Millward pioneered. The second is independently generated and published evidence: the future envisioned by John Blake. On either approach, good evaluation needs to be conducted by people with the appropriate skills for the methodology used, to be objective, and – to avoid duplication of effort – to be widely disseminated, either in academic journals or through sector bodies such as TASO.

Skills

Arguably, many practitioners lack the opportunity to develop the level of research skills necessary to produce an evaluation of publishable quality (Crawford et al., 2017; Harrison et al., 2018). There can also be ambiguity over whose responsibility evaluation is, alongside the ubiquitous pressure on available time; many HE-based evaluators have roles split between delivery and evaluation. Upskilling all members of a team to a proficient level – certainly if an academic type of publication is required – would take time, although a formal report made available in a repository would be attainable for most, and perhaps more accessible for the sector. By contrast, external evaluators could be selected on the basis of high proficiency in the particular methodology used for each individual project. However, as noted above, evaluation should take place at many different stages of an intervention, and good evaluation would usually be embedded within the design and development of the intervention itself. For many methodologies, particularly those based on a theory of change approach, external evaluators would also need a sophisticated understanding of the delivery practice. This means that an external evaluator would have to be involved from as early as the design stage of the intervention (identifying suitable control groups, for example), throughout the intervention, and at the end. This may prove challenging for a completely independent consultant, or prohibitively expensive for their employing institution. There are also clear advantages to practitioners being involved in the evaluation design and process, in order to further their understanding and practice and to draw on their professional experience to inform evaluation design.

Objectivity

As practitioners tend to be responsible for the development and delivery of interventions, it has been argued that they may not be best positioned to provide an objective and independent evaluation (Gorard and Smith, 2006; Loughlin, 2008; John Blake: TASO International Conference, 2022). Practitioners will have spent significant time and resource designing and delivering the intervention and may therefore be seen as having a vested interest (for additional challenges faced by practitioner evaluation, see also Harrison and Waller, 2017a,b). At the same time, being closer to the practice, they are better able to draw on experience and observation to construct a theory of change (see, e.g., Austen, 2021). Conversely, independent evaluation has the advantage of separating the evaluation from those heavily invested in its success. However, independence via ‘outsourcing’ is no guarantee of quality or objectivity; where collaborations are long term, external consultants may also be under pressure to produce results which reflect positively on the intervention, particularly if they perceive that success may govern whether they are awarded their next contract (see Morris and Jacobs, 2000; Markiewicz, 2008). A lack of familiarity with the complexity of delivery may also encourage the use of ‘cookie cutter’ evaluation approaches or insufficiently nuanced conclusions (see, for example, Nutt, 1980; Pringle, 1998). Involving stakeholders in the evaluation process may also prove more challenging for external evaluators. Both options therefore have flaws.

Dissemination

Sharing good practice – what works – makes perfect sense. Sharing what does not work also makes sense, to save others from repeating unsuccessful interventions. Dissemination invariably furthers progress, at least if it is assumed that good practice can be generalised across a range of different contexts. Whatever the reason for dissemination, the most frequent methods of academic dissemination are publication in journals and presentation at conferences, whereas practitioners may be more likely to use informal networks and memberships. Whether dissemination is more likely when evaluation is conducted externally, or by consultants who may have moved on to the next project, is unclear. Writing for peer-reviewed journals is a time-consuming and skilled process, and is likely to be avoided by anyone other than academics. For other evaluators, the time costs are likely to outweigh the benefits. Unfortunately, whether evaluation is conducted internally or by external collaborators, interventions shown to work are far more likely to be widely disseminated than those that do not (the well-known ‘file-drawer problem’, Rosenthal, 1979), and there is no mechanism proposed to remedy this in either approach.

In summary, neither approach adequately resolves the issues of objectivity or dissemination, however skilfully the evaluation is conducted. We therefore propose some alternative recommendations for consideration and discussion below.

Actionable recommendations

Adoption of an open science approach

Independence is neither a necessary nor a sufficient condition for objectivity and would not necessarily improve dissemination. Instead, Open Science principles could provide a means of ensuring objectivity and transparency at both the research and publication stages. Firstly, registering the principal activities that are going to be evaluated and their expected completion dates (perhaps in the HE provider’s access and participation plan1 and then merged into a central repository) would enable the sector to see what types of activities are being evaluated, avoid excessive duplication of effort, and provide opportunities for collaboration and the expansion of studies across different partners. Secondly, pre-registering a trial protocol on a centralised public database managed by a suitable organisation (e.g., TASO), with expected completion dates, would allow scrutiny of the proposed evaluation to ensure quality and objectivity (and prevent hypothesising after the results are known, or ‘HARKing’). Finally, the results of the evaluation should be summarised on the same central registry as the trial protocol. Those researchers who want to disseminate their results in academic journals would be free to do so – perhaps even as a registered report – by submitting their trial protocol to a suitable journal prior to the evaluation taking place. A central registry of proposed evaluations and their eventual outputs provides some mitigation against the risk that activities judged unsuccessful languish as hard-to-locate brief reports on a university server.
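
As a purely illustrative sketch of the kind of record such a central registry might hold (the field names, the example attributes and the overdue check are our assumptions, not a specification from TASO or the OfS), a minimal entry could look like this:

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class RegistryEntry:
    """Illustrative record for one pre-registered evaluation on a central registry."""
    provider: str                    # HE provider (or partnership) running the evaluation
    activity: str                    # intervention named in the access and participation plan
    protocol_url: str                # link to the pre-registered trial protocol
    primary_outcome: str             # outcome specified before data collection begins
    expected_completion: date        # date by which a results summary should be posted
    results_summary: Optional[str] = None  # completed regardless of whether results are positive

    def is_overdue(self, today: date) -> bool:
        """Flag entries whose results have not been posted by the expected date,
        one simple mitigation against the 'file-drawer problem'."""
        return self.results_summary is None and today > self.expected_completion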

Partnership working

Professional services staff delivering activities can sometimes be left isolated, without the resource and expertise to conduct robust causal evaluations, but – as discussed above – external evaluators may not always be able to provide thorough and timely support. Support from appropriate academic departments or central directorates within institutions could provide an effective and efficient compromise. Where those trained in research and evaluation lead on evaluation in collaboration with practitioners, this could support a much more robust and objective approach. Evaluation experts would have less of a vested interest in the intervention (removing aspects of bias) and more interest in establishing what does and does not work in improving student outcomes. They could be encouraged to disseminate this work widely at conferences and in peer-reviewed journals in collaboration with their practitioner partners. Although the process of academic publication involves peer review, and therefore cannot be viewed as ‘marking your own homework’, it is not infallible, is subject to publication bias, and can be slow; we address this by recommending that the sector additionally follow Open Science principles, as above.

Working together as a sector

As well as being objective, research needs to be generalisable and replicable. For more efficient progress we need to be wary of excessive duplication and consider the benefits of working together as a sector to answer the bigger questions. In most cases, some general guidance for the sector would be more helpful than a – potentially expensive or wasteful – trial and error approach by several institutions simultaneously. Practitioners tend to spread their efforts thinly across many evaluations, whilst a more focused and rigorous evaluation could occur if the burden was divided across several providers. To a large extent, the Centre for Transforming Access and Student Outcomes in Higher Education (TASO) has started the sector on this journey already, albeit on a relatively small scale, by identifying the important questions and working with a number of different providers to answer them. This would also potentially address the generalisability challenges, by building in opportunities to test interventions across a range of contexts. Challenges surrounding the public sharing of data due to GDPR concerns can be overcome by universities and colleges using higher education access trackers (e.g., AimHigher, HEAT) to record their activities and associated participants, allowing researchers from these tracking services to conduct large scale evaluations. This type of approach could also serve to avoid potentially under-powered studies (e.g., those with insufficient sample sizes to detect effects even when they are present). It would be beneficial to have a central body overseeing sector efforts and ensuring quality.
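
To illustrate what under-powering means in practice, the sketch below computes the sample size needed to detect a small effect. The assumed effect size (Cohen's d = 0.2), significance level and target power are illustrative choices only, not figures drawn from any of the studies or programmes discussed.

# Minimal power calculation for a two-arm comparison (treated vs. comparison group),
# using statsmodels' standard t-test power analysis.
from math import ceil

from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.2,  # small effect, assumed here to be typical of educational interventions
    alpha=0.05,       # two-sided significance level
    power=0.8,        # 80% chance of detecting the effect if it is real
)
print(ceil(n_per_group))  # roughly 394 participants per arm, around 800 in total

# A single provider's cohort for one intervention may fall well short of this,
# whereas pooling participants across several providers (e.g., via a tracker
# such as HEAT) makes such sample sizes far more attainable.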

Another aspect for consideration is how ‘evidence’ is defined and disseminated. At its simplest level, a ‘what works’ approach tends to imply a binary outcome: either an intervention works, or it does not. This closes off the possibility of identifying partial successes or fragmentary learning. Realist evaluation, for example, is founded on the identification and assessment of configurations of contexts, the mechanisms causing change and the outcomes that result (Pawson et al., 1997). This complexity opens the possibility of learning more about the conditions and approaches required to deliver successful outcomes, allows for a more nuanced definition of what ‘working’ means, and gives a more detailed understanding of the conditions that might be required if a particular aspect of an intervention is to be transferred to other contexts. Although realist evaluation is often undertaken by ‘external’ evaluators, the building of programme theories relies on internal practitioner expertise; for a discussion of this in the context of organisational interventions, see Nielsen and Miraglia (2017).

Conclusion

Without appropriate resource and support, practitioner-only evaluation may not deliver the rigour and objectivity required to move fully forward. Independent evaluation seems unlikely to overcome objectivity issues, if indeed they exist; the perceived problems with current approaches to evaluation in higher education have not been clearly articulated, rather solutions have simply been offered. However, an opportunity exists to reframe the notion of independence: to focus on developing criticality and challenge both within and beyond organisations, and to support all stakeholders to be active critical thinkers, which is perhaps the real gap that needs to be addressed. Quality should be assessed using notions of criticality (objectivity), additionality (contribution to knowledge), timeliness (informing decision making) and materiality (relevance and importance), rather than independence (Picciotto, 2013). Working together as a sector – in partnership with academics and other experts, as outlined in Austen (2022) – and, most importantly, following open science principles could provide the key to improving sector knowledge of what works faster. We have a timely opportunity to develop a new system, with new Access and Participation Plans required of English HE providers for 2024.

Funding

This work was supported by an award to EM from Aston University’s Teaching Research Fund.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Statements

Author contributions

EM coordinated the paper and initial draft. RJS, MH, LW, LA, and JC wrote particular sections and provided information, suggestions, or comments.

Conflict of interest

Subsequent to the submission of this paper, RJS began working for TASO. TASO had no input into, and is not associated with, the views or recommendations expressed in this paper.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Footnotes

1. This aspect has been included in a recent Office for Students (2022b) publication, with the suggestion that HE providers should bolster their access and participation targets with an ‘intervention strategy’, which includes details of when evaluation outcomes are to be published.



Keywords

evaluation, policy, access and participation, what works, widening access and participation

Citation

Moores E, Summers RJ, Horton M, Woodfield L, Austen L and Crockford J (2023) Evaluation of access and participation plans: Understanding what works. Front. Educ. 8:1002934. doi: 10.3389/feduc.2023.1002934

Received

25 July 2022

Accepted

04 January 2023

Published

18 January 2023

Volume

8 - 2023

Edited by

Chris Millward, University of Birmingham, United Kingdom

Reviewed by

Kos Saccone, Central Queensland University, Australia; Angela Gayton, University of Glasgow, United Kingdom


Copyright

*Correspondence: Elisabeth Moores

This article was submitted to Higher Education, a section of the journal Frontiers in Education
