1. Introduction
The need for policy interventions to raise higher education access and participation rates among people who face socio-economic disadvantages is widely acknowledged in the academic literature. Existing evidence from England, the context of our study, suggests that the socio-economic gap in higher education participation can be explained to a great extent by differences in prior attainment during secondary school, rather than by barriers arising at the point of entry to higher education [1]. At the same time, when people from less advantaged socio-economic backgrounds do participate in higher education, they are less likely than their similarly achieving but more advantaged peers to make optimal decisions about their choice of institution or course [2]. Both the higher education application process and the decision-making around it are recognised as complex [3,4]. Such evidence highlights the need for policy interventions that can effectively support the attainment-raising efforts of schools and colleges, potentially simplify processes and the information space around them, and provide gentle nudges to individuals to use such information as well as possible, whichever post-compulsory education pathway they choose [5,6].
In England, the recent period has seen the implementation of a variety of outreach and widening participation programmes and interventions in higher education. These interventions have attracted substantial governmental, institutional, and third-sector funding. With outreach and widening participation programming growing and diversifying, causal evidence of impact is increasingly important for improving funders’ decisions regarding which programmes to prioritise, providers’ decisions about the format and content of such programmes, and young people’s decisions about which programmes to attend. This evidence, however, is often lacking [7] or faces the challenge of so-called ‘black box’ programmes. There is some evidence [7] that these ‘black box’ programmes, often encompassing a variety of interventions, are effective, especially when combining, for instance, the simplification of application processes with individual support [8], but disentangling their constituent parts is often difficult. Further evidence [5,9,10] suggests that particular types of interventions, specifically nudging programmes which provide participants with relevant information and seek to spark certain behaviours, are effective at improving both application rates and take-up of places in higher education. Even as very recent evidence finds small and inconsistent effects from nudge interventions [11], the provision of information is an established form of higher education outreach in the English context, and there is some causal evidence [12] that, alongside further support, the provision of accurate information increases application rates, particularly to selective universities.
It is therefore essential to understand the impact of discrete, stand-alone interventions on higher education applications and access, especially if they are delivered in an otherwise busy intervention space. In this paper, we present results from an experimental study evaluating the effect of a behavioural nudging information-provision intervention on higher education application rates when deployed within a larger programme of higher education outreach in the East of England region.
1.1. Aims of the Paper
We report results from two randomised control trials (RCTs) designed to evaluate the effect of a light-touch information-providing (via text message) nudge intervention embedded within a larger programme of widening participation activities. The aim of the specific nudge intervention was to increase the probability of students applying to higher education. The trials, which used the same experimental design in both iterations, involved a total of 935 students in the final year of compulsory-age education, enrolled in 58 schools and colleges in the East of England region during the academic years 2017–2018 and 2018–2019.
The programme within which this intervention was embedded is the ongoing Take Your Place programme (hereafter, the programme), undertaken since 2017 by the Network for East Anglian Collaborative Outreach (neaco). The programme targets students living in socio-economically deprived geographic areas and delivers outreach activities in schools and further education colleges with high proportions of such students. The aim of the programme is to improve access to broadly defined higher education by helping students explore their options and academic potential. We return to a fuller description of the programme after discussing the relevant evidence background.
1.2. Evidence Background
The paper makes several contributions to the emerging literature that employs field experiments to examine nudging as a potential high-benefit and low-cost approach to improving educational outcomes. Although nudging interventions have become increasingly popular in the field of behavioural economics, there is mixed evidence on their overall effectiveness [11]. Field experiments that provide information or reminders to students about the college application process and financial aid availability and eligibility, without the accompanying offer of professional assistance, typically have not led to higher rates of college access or success [13,14,15]. Using US data, Phillips and Reber [16] found no improvement in low-income students’ higher education enrolment rates when they were provided with the information and support that higher-income students typically have. Similarly, Carrell and Sacerdote [17] found that providing students with information on the benefits of attending college had no impact on their attendance and persistence. More recently, Avery et al. [18] found null and negative effects of text message-based outreach on improving US students’ college choices and outcomes at a national level, in contrast to positive and significant effects identified from the same intervention when delivered in specific school districts in Texas. In the German context, a separate study [19] found that information nudges increased higher education enrolment for students from a non-academic family background while decreasing (at least in the short term) the enrolment intentions of students from academic backgrounds.
Overall, studies on nudging interventions in education appear to provide mixed evidence of the effectiveness of such approaches for higher education access and participation outcomes. A recent and comprehensive review of the nudging literature in education [6] suggests that nudging interventions can have broad and long-term effects on overall student outcomes but are not effective in all contexts or for all students. A key conclusion of that review is the importance of understanding the behavioural mechanisms that potentially result in application and access, and how closely interventions designed to affect those mechanisms actually match them. Furthermore, the broader evidence also suggests the importance of clarifying the conditions under which such interventions can facilitate behaviour change [20], including in terms of the wider context in which they are delivered.
Our focus on the effectiveness of a nudging intervention (delivered through text messages) on increasing university application rates, when the intervention is embedded within a wider range of widening participation activities, is a non-trivial contribution to this literature. Producing such evidence is important for informing education policy design and for understanding individual decision making. We suggest that by varying one aspect of a programme’s provision, it is possible to capture the effectiveness and efficacy of specific programme components. This may contribute to the development of evidence-based practices for widening participation and outreach practitioners. In addition, we verify the robustness of our empirical findings by implementing the trial in two consecutive academic years and finding consistent results.
We further contribute to the above debates by providing evidence from a context in which a lack of information, advice, and guidance may be a major driver of socio-economic inequalities in higher education applications, both in terms of participation and in terms of access to selective institutions [2,21]. This is also relevant because credit constraints [22] and geographic isolation [23] have previously been identified as similarly important drivers of inequalities in higher education access among students from low socio-economic backgrounds in England. Such factors may be mitigated by an income-contingent loan system that covers the entirety of students’ tuition fees, and separately by increasing patterns of localisation, whereby students travel relatively short distances to reach a higher education institution [2]. Against this backdrop, generating evidence on the effectiveness of an outreach intervention with a clear mechanism, one that may be delivered straightforwardly and at relatively low cost, is important.
Finally, we present evidence on the differential impact of the intervention by pre-determined observable student characteristics, speaking to previous findings which suggested that such interventions can be most effective for particular sub-groups of students.
2. The Intervention
The nudging intervention we explore in this paper was delivered as part of one of the several regional partnerships under Uni Connect, a large-scale government-supported initiative. We provide context on Uni Connect and the relevant evidence about its effectiveness below, before describing the specific regional Uni Connect partnership and its outreach programme. We then provide comprehensive information about the nudging intervention.
2.1. The National Context in England
Despite the increasing number of young people accessing higher education, young people facing socio-economic deprivation are still less likely to progress to higher education in England [24]. The factors associated with lower progression centre primarily on attainment at school [1], but also include being the first in the family to potentially attend higher education (an aspect associated with relatively less available knowledge of the higher education system) [25] and the economic circumstances of the household [26]. In addition, changing labour market conditions [27] and perceptions of their individual potential experiences in higher education [28] also contribute to changing intentions in relation to higher education applications [29].
To tackle these enduring inequalities, a large range of widening participation, fair access, and outreach programmes have been implemented in England. A relatively recent national programme is the government-funded Uni Connect initiative, which seeks to increase higher education participation across all types of higher education provision (from university to vocational routes) by taking a place-based approach and working regionally with universities, further education colleges, and schools to deliver tailored programmes of outreach activity. Recent evidence [30,31] suggests that the impact of Uni Connect mirrors its complex nature. This large-scale evaluation work [30,31] takes in the full national programme and finds that both the range of interventions delivered and their relative impact vary by geography. It also finds that the overall impact of the programme is either negligible or has not been possible to causally attribute to the existence of the initiative. These findings must be considered against a backdrop of recent disruptions and negative impacts from the pandemic and the associated public health crisis. While a further national-level evaluation is still underway, existing findings already suggest a need to disaggregate the constituent parts of Uni Connect activity, much as our present study does. Similarly, the current shift in direction for the next several years of Uni Connect towards attainment-raising interventions means that the earlier stages of the programme offered a unique opportunity to explore an intervention aimed at increasing application rates, rather than any other aspect of the higher education access process. Our study capitalises on this opportunity.
2.2. The Network for East Anglian Collaborative Outreach
The Network for East Anglian Collaborative Outreach (neaco) is one of the 29 regional partnerships under Uni Connect. The partnership has operated since 2017 in the East of England region with support from all universities and further education colleges in the region, and delivers Take Your Place, its flagship outreach programme, in areas where the higher education participation of young people is low, and in particular much lower than expected given average attainment at age 16 and socio-economic composition. Students from these areas are classified as “target students” and represent the specific group whose progression to higher education is the key focus of this programme. A total of 106 schools and eight further education colleges were involved in the programme for the period under investigation in this paper.
The programme is distinctive in that the overall approach adopted in the delivery of activities is based on a progressive framework that seeks repeated interactions with students. This is a key feature of the wider, national, government-supported Uni Connect programme that for the past five years has dominated the higher education outreach landscape in England, alongside higher education provider- and third sector-driven activity. In participating schools and colleges, this translates into Take Your Place being delivered in a way that varies in each school or college, adapting to the needs of each educational setting, their environment, and the available resources.
There are two central foci for the outreach activity delivered by Take Your Place. The first prioritises the development of students’ understanding and preparedness, focusing on the specific requirements, means, and option choices through which students can realise their aspirations for transitions between the key stages of the English educational system and into higher education. The second strand centres on passion and ambition, enabling students to explore, identify, and articulate their passions and aspirations, and giving positive incentives for choosing post-16 and post-18 pathways. The activities delivered by the so-called Higher Education Champions (HECs), outreach specialists usually embedded in schools, and by their college-based counterparts range from information sessions and university campus visits to summer schools and community engagement opportunities.
At the time of the delivery of the first iteration of the behavioural nudging intervention explored in this paper (2017–2018) and the first trial, the delivery of Take Your Place was in its relatively early stages. By 2018–2019, the time of the second iteration of the intervention and the second associated trial, Take Your Place was far more established, both in terms of the scope and the range of activities being delivered. As a recent report for the programme illustrates [32], there continues to be substantial variation in the range of activities that different schools and colleges engage in as part of Take Your Place, with levels of individual engagement with the programme monitored by the programme team and the target of separate analysis elsewhere. This is an important point as it relates to the potential of the nudging intervention to effect change in an increasingly busy intervention space, an issue we return to in discussing our design of the trial and the implications of our findings.
In addition to the in-school and in-college outreach activities provided by Take Your Place, in its first two years the programme also included a light-touch information-provision element.
This light-touch behavioural nudging intervention is the focus of the randomised control trials reported in this paper and is described below.
2.3. The Behavioural Nudging Intervention
In addition to the progressive and sustained provision detailed above, the Take Your Place programme included a light-touch information-provision component. The objective of this behavioural nudging intervention was aligned with the programme’s overarching aim: to improve the higher education application rates of participating students. The intervention aimed to do this through the provision of easily understandable information that students could act upon. A secondary aim of delivering this intervention was to enable the exploration of the effectiveness of this type of information-provision nudging intervention.
The intervention was delivered in two consecutive school years (2017–2018 and 2018–2019), with only minor differences between the two years, all relating to the accuracy of the information provided via text messages to individual participants: the specific dates and deadlines, and the links to any online material shared with students, were updated. Otherwise, the intervention was materially the same.
The content of the information related principally to the process of applying to higher education through the Universities and Colleges Admissions Service (UCAS). UCAS is the centralised national admissions system, covering all universities and a number of other higher education providers. Individuals wanting to apply to university (or to the other available types of higher education providers) make one single application through UCAS, to up to five separate degree courses in each year. The intervention studied here provided participants with information about preparatory steps (e.g., drafting a personal statement, identifying appropriate degree courses), practical issues (e.g., navigating the UCAS website and application portal, finding the required information and deadlines), and places where students could go to find more information about any of the above aspects (such as signposting to teachers and Take Your Place staff, or providing links to relevant information web pages or videos hosted online).
A total of up to 14 text messages were sent to participants in the intervention. However, to recognise that participants may have applied to higher education prior to the deadline, and to avoid sending them irrelevant information, two text messages inviting a response were also sent, containing a simple yes/no question regarding whether the individual student had already applied to higher education. For all those responding positively, the text messages stopped, and the individual participant’s outcome was recorded as having applied to higher education.
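To make the delivery logic concrete, the sketch below illustrates the stop-on-reply mechanism described above; the positions of the reply checks and the helper functions for sending texts and reading replies are illustrative assumptions, not details of the actual delivery system.

```python
from typing import Callable, List, Optional, Set

def run_message_schedule(
    messages: List[str],                 # the scheduled nudge texts (up to 14)
    reply_checks: Set[int],              # positions of the two yes/no questions
    send: Callable[[str], None],         # assumed SMS-delivery helper
    has_applied: Callable[[], Optional[bool]],  # assumed yes/no reply reader
) -> bool:
    """Deliver the schedule, stopping once a participant reports having
    applied; a True return is recorded as a positive application outcome."""
    for position, message in enumerate(messages, start=1):
        send(message)
        if position in reply_checks and has_applied():
            return True  # messages stop; outcome recorded as applied
    return False  # outcome later collected via self-report and staff data
```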
Appendix A contains all the text messages that were sent to participants, excluding any links which are no longer available.
The timing of the delivery of the intervention was important, as it needed to align with the application window and relevant deadlines. It was administered starting the last week of October of each respective school year (2017–2018 for the first trial; 2018–2019 for the second trial) and ended in mid-January of the same school year (in the next calendar year), immediately after the application deadline, which regularly falls in the middle of January each year. As Appendix A indicates, the last text message was sent after the passing of the application deadline, signposting students to relevant information about the options available to them if they still wanted to apply to higher education for the relevant year.
Importantly, the text messages were personalised with the names of the individual participants, using a direct address (“Hi, [student name]!”). This followed evidence that personalisation matters in the provision of information in higher education outreach [12], and sought to create rapport with participants, which was hypothesised to increase the likelihood of action following receipt of the text messages.
A large team contributed to the development of the intervention, including staff from the neaco partnership and its institutional partners. The lead researcher was also involved in the set-up of the intervention through the provision of evidence in relation to various decisions (e.g., around personalisation).
3. Trial Design
To estimate the causal impact of the above nudging intervention on higher education application outcomes, two randomised control trials were implemented, one in each school year in which the intervention was delivered (2017–2018 and 2018–2019). Each trial underwent the ethical approval process at the Faculty of Education, University of Cambridge. The first of the two trials was jointly undertaken with researchers from the Behavioural Insights Team (BIT) and was registered by them (trial number 2017136). The latter team undertook a separate analysis of data pertaining to the first trial, were only briefly involved in the second trial, and did not undertake the full analysis of data as reported in this paper. We acknowledge their contributions to the first trial and thank them for their insights.
While the two trials were undertaken independently of each other, the testing of the intervention (materially the same across the two implementation years) allows us to pool the data across the two trials. This has implications for the power calculations (reported below), but we also explore the potentially different impacts of the intervention in each respective trial cohort in our later analysis. This is particularly relevant given the embedding of this intervention in the wider Take Your Place programme, which was at different stages of development in the two intervention years.
3.1. Outcome Measure
The outcome measure of interest is whether students applied to higher education via UCAS. This outcome measure was coded as binary, taking the value 1 if students had applied and the value 0 if they had not. The outcome measure was collected through two procedures: first, from students’ self-reported responses on whether they had completed their application before the relevant deadline of the respective academic year; and second, with the help of on-the-ground staff, who obtained this information directly from the participants’ schools and further education colleges. While there is evidence that student responses to this type of question are highly predictive of actual application [29], the addition of the staff-provided data meant that the outcome measure could be collected for a high proportion of initial trial participants, contributing to the very low attrition outlined later in this paper.
3.2. Trial Hypotheses
Each of the two trials operated under the same overall research hypothesis, according to which the text-based information-provision nudging intervention may encourage students in their final year of secondary education to apply to higher education via the standard UCAS route. We used a two-tailed test of the null hypothesis that the rate of application to higher education for students randomly allocated to receive the intervention was no different from that for students randomly allocated to the control condition.
3.3. Trial Characteristics
Both trials were individually randomised, balanced, two-arm (intervention and control) trials, run under an intention-to-treat approach. The intention-to-treat approach means that all participants randomly allocated to each of the trial conditions remained in that respective condition for the purpose of analysis (barring any missing data), regardless of their (unknowable) level of engagement with the intervention: that is, students randomly assigned to the intervention condition were analysed as part of that condition even if they did not engage with any of the text messages. It was impossible to monitor engagement with, and immediate actions as a result of, the text messages because the participants’ school and home lives were not monitored as part of these trials. They may have engaged in the behaviours suggested by the text messages immediately after receiving them, at a later point, or not at all; or they may have sought information or advice from their school or college. While clearly a limitation, this is consistent with the commonly used intention-to-treat approach (analysing data according to the initial allocation) and means we minimise the risk of over-stating our results.
We now outline the full experimental set-up and procedure. This applies to both trials.
3.3.1. Participant Recruitment
In the period under consideration for this study, the Take Your Place programme administered two large-scale surveys to students in schools and colleges participating in the wider programme. A separate section in each of these surveys invited final year students (those eligible to apply to higher education) to take part in the randomised control trial.
Detailed but simple information was provided to students as part of this recruitment process. Students were asked for fully informed consent to participate in a trial, with different students reached in the two consecutive years of the trials’ implementation.
The information provided to students during this recruitment process included the trial aims, the randomisation procedure (explained as a 50–50 chance of receiving the text messages if taking part in the trial), and information about what the intervention would entail. The participants who consented to taking part were then invited to provide their phone numbers for the purposes of the text messages delivery.
The inclusion/exclusion criterion for the presentation of the recruitment information related to the participants’ self-reported likelihood of applying to higher education. As part of the development of the text messages, it was decided that students expressing a very low likelihood of applying to higher education would not benefit from the text messages.
The students’ likelihood of applying to higher education was gauged during the survey with a stand-alone 6-point Likert response scale question asking them to rate the likelihood of application at age 18/19 (the relevant age for the vast majority of students in the participating schools and further education colleges). The students were also asked if they had already applied to higher education. Based on the above questions, two formal exclusion criteria were used: first, students who expressed that they had no intention to apply to higher education were excluded from the sample eligible to take part in the trial; second, all students who indicated that they had already applied to higher education were also excluded.
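For illustration only, the two exclusion rules amount to the following filter, under hypothetical column names in which likelihood is the 1–6 Likert response (1 indicating no intention to apply) and already_applied records the prior-application question:

```python
import pandas as pd

# Hypothetical survey extract for four final-year respondents.
survey = pd.DataFrame({
    "student_id": [101, 102, 103, 104],
    "likelihood": [1, 3, 6, 4],            # 1-6 Likert intention item
    "already_applied": [False, False, True, False],
})

# Rule 1: drop students expressing no intention to apply (lowest response).
# Rule 2: drop students who report having already applied.
eligible = survey[(survey["likelihood"] > 1) & ~survey["already_applied"]]
print(eligible["student_id"].tolist())  # -> [102, 104]
```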
For the first trial, the survey was undertaken between September and early November 2017. A total of just over 21,300 students responded, with just over 4000 final year students invited to take part in the trial. A total of 531 students signed up.
For the second trial, the survey was undertaken between September and late October 2018. A similar number of total respondents was reached, and a total of 439 students signed up to the second trial.
There are two potential implications of this recruitment process. First, the external validity of the trials may be relatively low as participating schools (in the overall Take Your Place programme, and therefore also in the trials) were selected based on specific characteristics of the areas wherein the students lived. The second implication is that we are only able to estimate the impact of digital nudging among students who were willing to receive text messages, with findings not necessarily generalisable to the wider population of Take Your Place students. While this latter issue is important, it is also unavoidable from the perspective of ethical conduct of research and of trials, with prospective participants only recruited into the trial on the basis of full informed consent. To address this concern, we explored responses to a series of relevant learner survey questions (the same survey used for recruitment purposes) including self-reported knowledge of (higher) education options, knowledge of specific education or employment options, and knowledge of where to seek information about such topics, comparing responses between trial participants and trial non-participants in the relevant year group. While this full analysis is beyond the scope of this paper and will be reported elsewhere, we found no statistically significant differences between these two groups on the above variables. This suggests that the self-selected trial participants were not, at least for these observed variables, meaningfully different to the non-participants. We return to issues of external validity when we discuss the results of the trials in relation to the intervention set-up as part of the wider outreach programme.
3.3.2. Randomisation Procedure
Randomisation occurred after the participants had signed up to each respective trial as per the procedure above, and it was carried out at the individual level. Randomisation was stratified by target student status (students living in areas where the rate of higher education progression was lower than expected given average attainment at age 16) and by student self-reported gender. This was done to ensure that any differences in the likelihood of applying to higher education by these two characteristics would not bias the trial.
Randomisation was carried out in statistical software (Stata) using a random number generator with a randomly chosen seed number, and it saw 50% of participants allocated to the intervention condition and 50% of participants allocated to the control condition, separately for each respective trial.
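The sketch below illustrates the equivalent stratified allocation logic in Python (the allocation itself was performed in Stata, as noted above); the column names and seed value are illustrative.

```python
import numpy as np
import pandas as pd

def stratified_allocate(participants: pd.DataFrame, seed: int) -> pd.DataFrame:
    """Randomly allocate 50% of participants to the intervention arm
    within each stratum defined by target status and gender."""
    rng = np.random.default_rng(seed)
    df = participants.copy()
    df["draw"] = rng.random(len(df))  # one uniform draw per participant
    # Rank the draws within each stratum; the lower half of each stratum
    # is allocated to the intervention condition.
    rank = df.groupby(["target", "gender"])["draw"].rank(method="first")
    size = df.groupby(["target", "gender"])["draw"].transform("size")
    df["intervention"] = (rank <= size / 2).astype(int)
    return df.drop(columns="draw")
```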
This randomisation approach generated an intervention and a control condition in each trial. While we were not able to monitor participant compliance with allocation, the distribution of text messages was carefully monitored, and no contamination errors at the distribution point were noted. It remains possible, though not highly probable, that individuals in the intervention condition may have shared text messages, or the information therein, with control group counterparts. However, as outlined above, the intervention was designed so that the text messages would build upon each other and follow a progressive and time-specific pattern. Therefore, unless participants in the intervention condition had ‘leaked’ all the messages and information to participants in the control condition, the latter could not have engaged with the intervention as designed.
3.3.3. Attrition after Randomisation
A total of 970 eligible participants were recruited into the two trials. Data on the outcome measure (outlined above) were not available for a small number of these participants (3%), with 940 of the 970 (97%) participants across both trials presenting full data for analysis.
For the first trial, 515 participants of the 531 initially recruited were retained in the analysis. Attrition was similar for the control and intervention arms of this trial, at 3% each. For the second trial, 425 participants were retained in the analysis from the recruited total of 439, again with a balanced attrition per arm, at 3% each.
While attrition is always a concern in trials, due to its implications for the internal validity of the analysis, at 3% the attrition rate for these trials is very low [33]. As such, we did not carry out any imputation checks; however, we did carry out a robustness test, as detailed in the Results section later in this paper.
3.3.4. Balance at Baseline
We examine whether our randomisation created balanced groups at baseline according to the observable characteristics of students.
Table 1 below presents the descriptive statistics of the originally randomised sample and the magnitude of the differences between the intervention and control groups (column 3 in Table 1) for the pooled data and across both trials. The observed differences between the two groups are nearly equal to 0. We do not provide tests of statistical significance related to these mean comparisons because to do so would violate the logic of randomisation.
We then move on to empirically examine the balance across the intervention and control groups for the analytical sample (after attrition, as outlined above). In Table 2, we show balance across the intervention and control groups for the sample with non-missing outcome data.
For the sample of students for whom we have outcome data and non-missing information on all other covariates, we observe no imbalance between trial arms across gender and target status, suggesting that the randomisation was balanced on these observable characteristics. This applies both to the initial baseline and to the post-attrition analytical sample.
3.3.5. Power Calculations
As part of the set-up of the trials, power analyses were conducted to judge the feasibility of detecting an effect of the intervention, considering the likely response rate from the students. Given the lack of directly relevant evidence regarding the effect of such an intervention on university application rates at the time of the development of the trials, we calculated the sample sizes using a theorised minimum detectable effect size of 0.2. We assumed a conventional 80% statistical power (i.e., at least an 80% chance of detecting the main effect), and we also assumed that we could explain approximately 50% of the variance in the main outcome with the baseline variables we included, namely demographic characteristics (including ‘target’ student status and gender). The power calculation test to be run is two-tailed: although the hypotheses are directional, it is important to statistically test for the eventuality of a negative effect. These parameters resulted in a required sample of 395 participants, half in the control group and half in the intervention group. Were we not to meet the sample size requirements, a sample of 300 would yield a minimum detectable effect size of 0.229, and a sample of 200 one of 0.282, holding all other assumptions constant. All power calculations were performed in PowerUp! [34].
At the recruitment stage, keeping all other parameters the same as above, the achieved sample yielded a minimum detectable effect size of 0.172 for the first trial and 0.190 for the second trial. When pooled, the minimum detectable effect size was 0.127, which is very good for education trials in England, many of which are (under-)powered for a 0.2 effect size [33].
At the analysis stage, we re-calculated the minimum detectable effect sizes. We used the same parameters as above, but instead of estimating the proportion explained variance from the covariates, we obtained this from a simple analysis, which put it at 13%. Together with the slight reduction in sample size, the at-analysis minimum detectable effect size was 0.231 for the first trial, 0.254 for the second trial, and 0.171 for the pooled sample.
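For transparency, these minimum detectable effect sizes can be approximately reproduced with the standard formula for an individually randomised, balanced two-arm trial, MDES = M × √((1 − R²)/(P(1 − P)n)), with allocation share P = 0.5 and a multiplier M of roughly 2.8 for 80% power at a two-tailed 5% significance level. The short script below is our own sketch, not the PowerUp! tool itself, and the multiplier is an approximation.

```python
from math import sqrt

def mdes(n: int, r_squared: float, p: float = 0.5, m: float = 2.8) -> float:
    """Approximate minimum detectable effect size for a balanced,
    individually randomised two-arm trial (m ~ 2.8 corresponds to 80%
    power with a two-tailed 5% significance level)."""
    return m * sqrt((1 - r_squared) / (p * (1 - p) * n))

# At recruitment, with the assumed R^2 of 0.50:
print(round(mdes(531, 0.50), 3), round(mdes(439, 0.50), 3),
      round(mdes(970, 0.50), 3))   # ~0.172, ~0.189, ~0.127
# At analysis, with the estimated R^2 of 0.13:
print(round(mdes(515, 0.13), 3), round(mdes(425, 0.13), 3),
      round(mdes(940, 0.13), 3))   # ~0.230, ~0.253, ~0.170
```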
3.4. Analytical Strategy
To obtain a causal effect of the information-provision nudging intervention on student outcomes, we compared post-intervention higher education applications by trial condition (intervention status) using the following OLS regression model for student $i$, in institution $s$, at year $t$:

$$Y_{ist} = \alpha + \beta T_{ist} + \gamma A_{ist} + \delta' X_{ist} + \mu_s + \tau_t + \varepsilon_{ist}$$

where:
$Y_{ist}$ is a post-intervention binary measure of higher education application;
$A_{ist}$ is a pre-intervention self-reported measure of intentions to apply to higher education;
$T_{ist}$ is a binary variable indicating whether the student was in the intervention or control group (0 = control; 1 = intervention);
$X_{ist}$ is a vector of individual characteristics (the stratification factors) at baseline;
$\mu_s$ are the institution fixed effects;
$\tau_t$ is a dummy indicator of academic year (2017–2018 or 2018–2019);
$\varepsilon_{ist}$ is a robust standard error clustered at the institution level.
It is important to note that participation in the trials was limited to students who expressed at least a mild intention to apply to higher education, that is, when $A_{ist}$ takes a value above the lowest point of its 6-point scale. To account for the fact that the wider outreach programme is an institution-level intervention and there is a clustering of students within institutions, we include institution fixed effects and cluster all reported standard errors at the institution level. The coefficient of interest is $\beta$, which shows the impact of the individual-level random assignment to the nudging intervention on the probability of having applied to higher education before the deadline.
4. Results
First, we present the descriptive results for the outcome measure and the baseline measure of interest.
Table 3 shows the rate of higher education applications for the intervention and control conditions for both trials, separately and pooled. In terms of the outcome measure of applications to higher education, and pooled across the two trials, 60% of the participants in the intervention group applied, compared to 59% of the control group. Additionally, for the pooled sample, the baseline intentions to apply (captured on a 6-point scale and used to recruit participants in the trials, with only those with at least a slight intention to apply to higher education being eligible) were also very similar across the two arms.
We observed a very similar pattern when looking at the disaggregated data for the two trials, with the proportions of students applying to higher education in each of the respective intervention and control conditions across the two trials being very similar to each other.
In relation to the baseline intentions to apply (also in Table 3), these were fairly high across the board and balanced across the intervention and control conditions. This mirrors evidence about the national sample of students engaged in Uni Connect, with only 11% of the learners included in the national evaluator’s analysis of the relevant stage of the programme (by 2019) reporting that they were unlikely to apply to higher education [35].
We then applied the analytical strategy outlined above. The results of its application to the pooled trial data indicate a very small and statistically non-significant effect of the nudging intervention on the higher education applications of students within the schools and colleges participating in the wider outreach programme under consideration here.
Table 4 presents the estimates of the impact of the intervention on higher education applications. These results refer to the pooled sample of students participating in the two trials, presented in a sequential manner. In the first column (1), we show the raw effect of the nudging intervention. In column two (2), we add controls for individual-level characteristics. Finally, we add school fixed effects for the results presented in column three (3).
This third column represents the analysis as specified above and offers the main trial results.
Table 5 and Table 6 illustrate the results of the same analysis separately for the two trials. The estimated intervention effect is positive, yet very small and statistically insignificant, with an almost identical figure across all specifications (pooled, and separately for both trials, as seen in Table 4, Table 5 and Table 6). Target student status remains a statistically non-significant explanatory variable for higher education applications across all specifications of both the separate and the pooled analysis. For the first trial and for the pooled model, gender is statistically significantly (and positively) associated with higher education applications (holding intervention and target student status constant), but only in the analytical specification without institution fixed effects (column (2) in Table 4 and Table 5 below). This is likely a result of school/college-based variation in the overall rate of higher education application by gender, something that the institution fixed effects capture (column (3)).
The outcome of the trials is therefore clear and consistent, showing no statistically significant effect of the text messaging intervention on higher education applications.
4.1. Robustness Checks
We undertake two robustness checks to investigate how sensitive our estimates are to different specifications. First, we consider whether selective attrition between the treated and the control group students may bias our results, despite the fact that we observe very little variation in the overall rate of attrition by trial arm across the two trials. In this first robustness check, we tested whether our results were similar when we replaced missing observations by assuming that all students with missing data in the intervention group applied to higher education, and that all students with missing data in the control group did not. By running this analysis, we were able to examine whether, had we managed to collect data for the full randomised sample and under the most optimistic assumptions about these missing data, we might have observed a significant effect of the intervention. The results are reported in Table 7 below, indicating that even under our most optimistic assumptions about missing data, we do not see an effect of the intervention on higher education applications.
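Continuing with the hypothetical column names from the estimation sketch above, this bounding exercise amounts to a best-case fill of the missing outcomes before re-estimating the main model; the variable names here are again illustrative.

```python
# randomised is assumed to hold the full randomised sample, including the
# ~3% of participants with missing outcome data.
bounded = randomised.copy()
missing = bounded["applied"].isna()
# Most optimistic assumption: missing intervention students applied,
# missing control students did not.
bounded.loc[missing & (bounded["intervention"] == 1), "applied"] = 1
bounded.loc[missing & (bounded["intervention"] == 0), "applied"] = 0
# Re-estimate the main specification on the bounded sample:
# results_bounded = fit_main_model(bounded)
```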
For the second robustness check, we repeated our main estimations using conditional logistic regression to account for the dichotomous nature of our dependent variable (instead of the linear probability model used in the main analytical specification above). Table 8 presents the marginal effects from this analysis. Inevitably, the conditional logistic regression in column three (3), that is, when school fixed effects are included in the estimation, results in a reduction in the sample size due to dropped observations where no variation in higher education applications was observed within schools. Even with that caveat, which further supports our choice of the OLS specification, we find no difference from the main results generated by our main analysis above when using this specification.
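A sketch of this check with statsmodels’ conditional logit follows, again under the hypothetical columns used above; schools without within-school variation in the outcome contribute nothing to the conditional likelihood, which is the source of the dropped observations noted above.

```python
from statsmodels.discrete.conditional_models import ConditionalLogit

# Keep only schools in which the outcome varies; the remaining schools
# would be uninformative for the conditional likelihood.
varies = df.groupby("school")["applied"].transform("nunique") > 1
sub = df[varies]
fit = ConditionalLogit(
    sub["applied"],
    sub[["intervention", "intent", "target", "female"]],
    groups=sub["school"],
).fit()
print(fit.summary())  # marginal effects can then be derived from this fit
```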
As a result, the main findings of the trials remain unchanged, both under the alternative analytical specification and when testing the best-case attrition scenario in the first robustness check. This increases the confidence in our results.
4.2. Effect Heterogeneity
Finally, we explore whether the effect of the intervention may have differed by either of the two stratification factors, gender and target student status. In Table 9, we show the results from the application of the main analytical strategy to disaggregated samples: target and non-target students, and, respectively, girls and boys, both across the pooled data.
The results above suggest no evidence of heterogeneous effects of the intervention on higher education application rates across the two groupings (target/non-target student status, gender) under consideration in our experimental study. Taken together, these results therefore underline the robustness of our analysis and its precisely estimated null results.
5. Discussion and Conclusions
In this paper, we have reported the results from two randomised control trials testing the effects of a light-touch behavioural nudging information-provision intervention on higher education applications in the English context. Given the existing evidence [13] on the use of behavioural nudging in the context of providing relevant educational information to (prospective) students, we hypothesised that the intervention, designed to work alongside the Universities and Colleges Admissions Service (UCAS) higher education application process in England, may encourage students who had at baseline expressed at least a mild intention to apply to higher education to realise this intention and apply to higher education.
We implemented two randomised control trials of the same intervention in consecutive school years, using individual-level random allocation to one of two experimental conditions in each trial: an intervention condition, receiving the intervention, and a control condition, not receiving the intervention.
The intervention was delivered as part of a wider programme of outreach and widening participation in the East of England region, which saw schools and further education colleges deliver, via staff employed by the programme, a wide-ranging set of outreach activities. From an ethical perspective, this means that students in the control group were not unfairly treated in relation to their opportunities to participate in potentially impactful outreach activities. From the perspective of the trials we have implemented, however, it means that we were in fact estimating what amounts to an additional effect of the nudging intervention. In that sense, the randomisation procedure, resulting as outlined above in balanced samples, may in principle have also ensured a balanced distribution of potential participation in these in-school activities; however, the business-as-usual condition of both experimental arms may include a substantial amount of outreach intervention. While this represents a clear limitation of the trial, it also reflects the only possible real-world scenario for the delivery and testing of an outreach intervention: the English policy and activity landscape around outreach that we have outlined above means that many schools and colleges routinely host many and diverse outreach and widening participation activities. Testing the nudging intervention in this context is a way to increase translational validity, even if it may work to minimise the effect size of the intervention. Further research and evaluation around the Take Your Place programme will explore how variation by school/college, as well as by individual student, shapes later higher education outcomes for programme participants, and will look to understand the changes to self-reported knowledge, expectations, and intentions around higher education that may have occurred due to participation (to varying degrees) in Take Your Place.
As such, our main trial result of a positive, very small, but statistically non-significant effect (essentially a null result) is not necessarily surprising. This finding was robust both to the statistical specification and to tests for the impact of the (very small) trial attrition, and it was consistent across each of the two trials separately as well as for the pooled data. The trial protocol was robustly implemented, attrition was low, and the statistical power of the trials was good compared to other educational trials, which offers confidence that the null result is indeed a valid picture of the impact of this intervention, as delivered in the context of the wider outreach programme.
Although embedding the nudging intervention within an existing widening participation programme allowed for robust data collection, a high response rate, and a low attrition rate, nesting the intervention within the larger programme may also explain the lack of significant results.
This finding is particularly relevant given prior evidence [7] around so-called ‘black box’ interventions, where a variety of potential mechanisms for change may be at play at any one time, making it difficult to disentangle them. In that sense, our experimental study provides specific robust evidence regarding the impact (or rather, lack thereof) of a particular aspect of the wider outreach programme being delivered in the East of England region.
Moreover, our findings align with recent evidence that challenges the hypothesis that nudging may result in large effects [11] and offer further support to suggestions [36] that intensive guidance might be needed to change higher education application and enrolment behaviours. Such intensive guidance is precisely what the wider programme, the focus of a larger-scale quasi-experimental evaluation currently underway, may have provided to some of the students participating in these two trials, potentially minimising the likely effect of the nudging intervention.
We are unable to provide definitive evidence regarding the interplay between this intervention and the wider programme in terms of their potential impact on higher education applications. However, the fact that each trial concluded with the same result, while being run at different stages of the wider programme (in 2017–2018, its first full year of implementation and therefore an incipient stage; in 2018–2019, when it was already embedded), may suggest that the level of other activity happening in the participating schools and colleges was not, overall, a factor affecting the potential effectiveness of the intervention. Our future research relating to Take Your Place will be able to explore this variation by school and college in greater depth.
We acknowledge two further limitations of our study, particularly in relation to issues of internal and external validity. First, the intention-to-treat approach to both trials meant that we did not consider whether students had actually read, engaged with, and acted upon the information provided via the text messages they had received. We were also not able to measure any ‘leakage’ or contamination from the intervention to the control group. It is possible that students in the intervention group may have communicated with those in the control group (thereby minimising any intervention effect we may have been able to detect); however, this would have also meant that the recipients of the intervention had given the information at least some minimal thought and that the information would have prompted action, potentially cancelling out these two aspects. Future trials could make use of existing technology to measure actual levels of engagement (e.g., link clicks) with the information provided by the intervention. Future trials could also explore alternative forms of communication for this type of intervention, with social media currently being a powerful vehicle for communicating relevant information amongst young people.
From an external validity perspective, the recruitment into the trials of students who had expressed a non-negative (at least ‘slightly likely’) intention to apply to higher education means that the results are not immediately generalisable to the wider population of higher education-ready students in England. This is a common challenge of trials in education [33], but one that future trials may address by using the rich administrative data available in England alongside national outreach and widening participation programmes, which may offer the opportunity to generate representative samples and therefore more readily generalisable evidence.
The above limitations notwithstanding, the evidence we have generated with our experimental study is relevant for local policy-making purposes, including within the wider outreach programme within which the intervention was initially embedded. The null results informed the decision of the implementation team leading the outreach programme not to continue the intervention’s deployment and to focus instead on intensive in-school outreach activity.
We also contend that this understanding of the intervention’s effects is useful for wider policy-making purposes, especially in a context of limited resources but continuing efforts to improve equity and fairness in higher education applications, access, and participation.