ACADEMIC PROBATION, STUDENT PERFORMANCE, AND STRATEGIC COURSE-TAKING
Concerns about low graduation rates, especially among underrepresented minority students, have spurred new interest in evaluating existing and implementing new institutional policies and programs intended to foster success. (1) Academic probation is a nearly universally used policy at colleges and universities and can potentially have large impacts on marginal students. Academic probation policies typically pair a performance threshold with a stipulation that persistent failure to achieve this standard can result in suspension or expulsion. Although probation policies target low-performing students, most universities maintain standards that are sufficiently high that nearly 25% of U.S. undergraduates will be placed on academic probation at some point during their tenure (Damashek 2003).
Probation policies are often controversial in part because of concerns that they are too punitive or possibly hinder success. Reflecting this uncertainty, institutions vary both in terms of performance thresholds and severity of prescribed punishment, suggesting there is little consensus on the optimal set of probation standards. If, for example, probation provides proper impetus for students to succeed, then adopting stricter standards may be beneficial. By contrast, if probation policies primarily discourage student effort and performance, they may reduce persistence and the likelihood of degree completion, especially among underrepresented groups who are often at greater risk of being assigned to probation.
We use administrative data on several recently enrolled cohorts at a large, urban U.S. public university to study how being placed on academic probation at the end of the first term affects student outcomes and behaviors. As in past work (Fletcher and Tokmouline 2010; Lindo, Sanders, and Oreopoulos 2010), we use a regression discontinuity (RD) design and find that probation causes students to increase their grade point average (GPA) in the subsequent term. Our key contribution is providing a richer understanding of these GPA improvements by more comprehensively investigating the effect of probation on strategic course-taking behaviors. (2) In theory, students placed on academic probation could satisfy the performance requirements by increasing effort, but it is also possible that students adjust their course-taking strategies to increase the likelihood of improving their GPA. This strategic behavior may include enrolling in fewer or easier classes, changing majors, or more readily withdrawing from classes in which they are performing poorly. (3) Although our paper is among the first studies to emphasize the potential importance of these behaviors in the context of academic probation, these results are in line with a literature finding that students respond strategically to GPA requirements for merit scholarship receipt by reducing their workload and/or choosing less challenging courses of study (Cornwell, Lee, and Mustard 2005; Sjoquist and Winters 2015).
Our main finding is that probation causes students to engage in a variety of strategic behaviors that help to increase GPA without increased effort. This strategic behavior, however, does not appear for all groups. In particular, underrepresented minorities--black and Hispanic students--show relatively little evidence of strategic behavior, whereas probation causes non-minorities to attempt fewer credits and fewer higher-level courses and substantially increases their probability of withdrawing from a course. (4,5) Course withdrawal is a particularly important dimension of strategic behavior because it allows students to avoid very low grades that would substantially drag down their GPA.
Though the sharpest heterogeneity is across minority status, we also find some evidence of heterogeneity according to ACT scores with high cognitive students--those with ACT scores above the median--being more likely to reduce credit hours attempted than low cognitive students with ACT scores below the median. We find little evidence of heterogeneity in course-taking behavior across genders, though these results are imprecisely estimated.
Lindo, Sanders, and Oreopoulos (2010) and Fletcher and Tokmouline (2010) both find that students increase their GPA as a result of being placed on academic probation. Lindo, Sanders, and Oreopoulos (2010) also find that some students voluntarily leave the university as a result of being placed on probation, while Fletcher and Tokmouline (2010) find no evidence of this voluntary attrition. Though our contribution focuses on strategic behaviors, we also estimate the effect of probation on GPA improvement and voluntary dropout, so we are able to compare our results with these studies and assess whether past results generalize to a new context. Our results for GPA improvement are similar to both Lindo, Sanders, and Oreopoulos (2010) and Fletcher and Tokmouline (2010), whereas we find no effect of probation on voluntary attrition in the following term, similar to Fletcher and Tokmouline (2010). Our results might match the Fletcher and Tokmouline (2010) results more closely than the Lindo, Sanders, and Oreopoulos (2010) results because Fletcher and Tokmouline (2010) study public universities in the United States, as we do, whereas Lindo, Sanders, and Oreopoulos (2010) study a Canadian school that structures academic probation policies differently. (6)
Although our primary focus is on the immediate effects of placement on academic probation in the subsequent term, we present limited evidence on academic probation's impact on longer-run educational outcomes. Assignment to probation reduces third-term enrollment overall, with a particularly large effect on underrepresented minorities. Probation has little impact on 4- and 6-year graduation rates for any group, suggesting that students who drop out or are expelled as a result of academic probation may have eventually dropped out anyway. However, as discussed in Section V.C, there are identification issues associated with studying long-run outcomes (in both our work and in the broader literature) using this design that require us to interpret these findings with added caution.
We posit that underrepresented minorities may be less likely to engage in strategic course-taking behavior because they may be less aware of institutional policies and supports. Though we cannot examine differences in institutional knowledge using our data, past work shows that minority students have much lower levels of academic integration, are less likely to visit academic counselors for academic planning, and are less likely to talk with faculty about academic matters (Nunez and Cuccaro-Alamin 1998; Terenzini et al. 2011). Furthermore, first-generation students, many of whom are black and Hispanic, experience more difficulty navigating the bureaucratic aspects of academic life due to limited exposure to people with first-hand knowledge about college (Engle and Tinto 2008; Nunez and Cuccaro-Alamin 1998; Richardson and Skinner 1992; Terenzini et al. 1996, 2001; York-Anderson and Bowman 1991). Despite showing little evidence of engaging in strategic behavior, underrepresented minorities still manage to make significant GPA improvements after being placed on probation, although these gains appear to be smaller than those made by non-minorities. These results suggest that underrepresented minority students may have forgone additional GPA improvements by not utilizing the policies and supports that non-minorities avail themselves of, and/or that minority students may have increased effort by a larger amount than other students in order to increase their GPA. (7) The inability of underrepresented minority students to rely on strategic course-taking behavior, and the potential need to increase their effort and time devoted to studies in the second term, may have the unintended consequence of discouraging them and leading them to drop out in the third term.
Academic probation is used at most U.S. colleges and universities. While the exact policy structure varies across institutions, most combine performance standards such as minimum term and cumulative GPA requirements with penalties such as temporary suspension or, in some cases, expulsion. The ostensible goal of such policies is to encourage struggling students to improve effort and invest more in learning-related behaviors. To this end, in addition to imposing performance standards, universities often provide extra academic advising and other informational assistance.
Academic probation policies at the university we study are based on GPA. Students are assigned letter grades A through F for completed courses taken for a grade which correspond to the 4.0 grading scale typically found at U.S. institutions of higher education. No pluses or minuses are assigned. A student is placed on academic probation at the end of any term in which the student earns less than a 2.0 GPA. Individual colleges within the university can set their GPA cutoffs higher than this university minimum. (8)
Once placed on academic probation, students are required to attain at least a 2.0 in the subsequent semester and raise their cumulative GPA to above 2.0 to be removed from probation. Students placed on academic probation are also required to have an additional meeting with the academic advising office in the following term. If the student cannot meet the performance thresholds, the prescribed penalty is usually a 1-year suspension following which the student can petition for readmission to the university. Students at risk of suspension can petition the university for an exception if they can prove mitigating circumstances such as personal or family hardship. If the student's petition is successful, their probationary period is extended, providing them more time to rehabilitate their GPA.
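To make the assignment rule concrete, the GPA calculation and probation trigger described above can be sketched as follows. The letter-grade point values and the 2.0 cutoff come from the text; the function names and the course representation are illustrative assumptions of ours, not the university's actual system.

```python
# Illustrative sketch of the probation rule described in the text.
# Grade points (A-F, no pluses or minuses) and the 2.0 cutoff are from
# the paper; function names and data layout are our own assumptions.

GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}
PROBATION_CUTOFF = 2.0  # individual colleges may set a higher cutoff

def term_gpa(courses):
    """courses: list of (letter_grade, credit_hours) for graded courses."""
    credits = sum(hours for _, hours in courses)
    points = sum(GRADE_POINTS[grade] * hours for grade, hours in courses)
    return points / credits if credits else None

def on_probation(courses, cutoff=PROBATION_CUTOFF):
    gpa = term_gpa(courses)
    return gpa is not None and gpa < cutoff

# A mixed schedule of 2-, 3-, and 4-credit courses: the credit-weighted
# GPA need not land on a round number.
schedule = [("C", 3), ("D", 4), ("C", 3), ("B", 2)]
print(round(term_gpa(schedule), 3))  # 1.833 -> below 2.0: on probation
```

Removal from probation would additionally require both a term GPA of at least 2.0 and a cumulative GPA above 2.0 in the following semester, per the rules above; that second condition is not modeled in this sketch.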
Similar to previous studies, we focus on the impact of being placed on academic probation at the end of the first semester despite similar policies applying in all terms. This restriction allows for a clear analysis of the impact of probation on outcomes because at the end of the first term none of the students in the control group (those with GPAs above the cutoff) will have ever been placed on probation. Such clear comparisons may not be possible in subsequent terms. In the next section, we discuss the administrative dataset used in the analysis and its key features that inform our empirical research strategy.
The sample is drawn from nine cohorts of freshmen undergraduates who entered the university during the fall semester between 2004 and 2013. (9) The university is a large, public institution that features a diverse student population of about 17,000 undergraduates distributed across 12 colleges. The student body largely comes from urban and suburban locales. Many are immigrants themselves or the children of immigrants. A significant number are also first-generation college students.
The longitudinal administrative data used in the analysis are made available by the university. These data consist of student characteristics typically collected during the admissions process and at enrollment including student race, ethnicity, gender, high school attended, high-school GPA, and ACT scores. In addition, we observe detailed semester-to-semester course-taking behavior and performance of the students, including enrollment status, college, number of credits taken and earned, and grades earned.
Table 1 presents basic summary statistics for the sample. Panel A shows that the students in our sample exhibit substantial racial and ethnic diversity; a little less than a third of the sample is non-Hispanic white while 22% is Hispanic and 9% is black. Reflective of national trends showing that women are increasingly more likely to attend college, 55% of the students are female. The overall mean ACT composite score among those in the sample is 23.78, but the standard deviation of these composite scores is nearly 4 points, indicating that there is substantial variance across students in the level of entering skill.
Panel B of Table 1 shows summary statistics of students' enrollment and performance characteristics during their undergraduate experience.
Nineteen percent of the students had a GPA below the cutoff for probation at the end of their first term and were, therefore, placed on probation. Notably, nearly 40% of the sample fell beneath the probation cutoff at some point during their undergraduate enrollment at the university. Roughly 70% of students were enrolled in the College of Liberal Arts and Sciences. Thirty-nine percent of enrolled freshmen graduated within 4 years while 56% of enrollees were able to finish their degree in 6 years.
In Table A1 (Appendix), we use Integrated Postsecondary Education Data System (IPEDS) information to compare the demographic profile of the university we study with similar urban, public institutions as well as with all non-profit, doctoral-granting research universities in the United States. (10) An important feature of our university is that it is a non-elite public institution; the students that enroll there are more likely to approximate the background and preparation of the typical student in the United States. The university we study is also larger and nominally more expensive relative to similar institutions. Likewise, our university is much more racially and ethnically diverse than similar urban, public institutions. Only 38% of the student population is white in contrast to the 58% in the peer universities. Hispanics and Asians make up a much larger share of the student population than at the typical institution whereas black enrollment is roughly similar. Aside from its demographic makeup, the university we study is fairly similar to its counterparts on a number of key metrics related to the economic status and academic preparedness of its student body. Our university matches its peer group average with similar fractions of students receiving a federal student loan and receiving any financial aid. Academic preparedness, as measured by ACT scores, is similar as well: the 25th percentile ACT composite score is 21 relative to a peer average of 20; the 75th percentile ACT composite score is 26, slightly higher than the 25 that peer institutions average. Finally, relative to its peers, the university we study has a slightly higher retention rate and somewhat higher 4- and 6-year graduation rates.
Finally, the university we study is broadly similar to all non-profit doctoral-granting universities nationwide, as presented in Column (3). Nationally, the average total enrollment and undergraduate enrollment are lower, with doctoral-granting universities enrolling roughly 17,000 students overall and about 12,000 undergraduates. Relative to these institutions, the university we study is more racially/ethnically diverse; however, our university has a somewhat lower percentage of black students enrolled, at 8% relative to a national average of about 11%. The 25th and 75th percentile ACT composite scores and the retention rate are similar. However, the 4- and 6-year graduation rates at the university we study are lower than the nationwide rates of about 41% and 61%, respectively.
IV. EMPIRICAL MODEL AND STRATEGY
Establishing the impact of academic probation on student outcomes is difficult because of unobserved differences between students placed on probation and students who are not. Specifically, OLS estimation of an equation such as:
(1) outcome_i = α + γ probation_i + x_i β + ε_i
is likely confounded by non-random selection among students assigned to probation. We use an RD design that exploits the predetermined GPA threshold that defines assignment to academic probation. We find that there is a sharp jump in the probability of being placed on probation at the cutoff--the discontinuity is close to but not exactly one because in rare cases, students are placed on probation for reasons other than their GPA. (11) Assuming students cannot manipulate their GPA within a local area around the threshold, falling on either side of the threshold generates quasi-experimental variation in assignment to probation. Thus, RD provides a causal estimate of the impact of academic probation for students whose first-term GPAs fall within a local neighborhood of the cutoff.
Our RD strategy estimates variations of the following local-linear regression equation:
(2) outcome_i = α + γ T_i + β gpa_i + π (gpa_i × T_i) + u_i.
Here, outcome_i denotes student outcomes such as performance or course-taking behavior in the subsequent term; gpa_i denotes student GPA normed with respect to the student's college's academic probation cutoff; T_i is an indicator for earning a GPA below the relevant probation cutoff; and u_i denotes a mean-zero, random error term. Following the recommendation of Imbens and Lemieux (2008), we estimate Equation (2) using a local-linear regression with rectangular kernel weights. Our main results are estimated using the data-driven optimal bandwidths prescribed in Calonico, Cattaneo, and Titiunik (2014a, 2014b, 2015), henceforth referred to as the CCT bandwidth. Additional results provided in the appendix tables demonstrate robustness to varying the bandwidth.
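With rectangular kernel weights, the local-linear estimator in Equation (2) reduces to ordinary least squares on the observations within the bandwidth, with the treatment effect read off the coefficient on the below-cutoff indicator. A minimal sketch follows; the fixed bandwidth and the simulated data are illustrative stand-ins (the paper uses the data-driven CCT bandwidth), and all variable names are ours.

```python
import numpy as np

def rd_estimate(gpa, outcome, bandwidth=0.5):
    """Local-linear RD with rectangular kernel weights, as in Equation (2).

    gpa is the running variable, already normed so the probation cutoff
    sits at zero; T indicates falling below the cutoff. The fixed
    bandwidth is a placeholder for the data-driven CCT choice.
    """
    keep = np.abs(gpa) <= bandwidth          # rectangular kernel: 0/1 weights
    d, y = gpa[keep], outcome[keep]
    T = (d < 0).astype(float)                # below-cutoff indicator
    X = np.column_stack([np.ones_like(d), T, d, d * T])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1]                           # gamma: the jump at the cutoff

# Simulated example with a known jump of 0.2 for those below the cutoff.
rng = np.random.default_rng(0)
gpa = rng.uniform(-1, 1, 5000)
y = 0.5 + 0.3 * gpa + 0.2 * (gpa < 0) + rng.normal(0, 0.1, 5000)
print(round(rd_estimate(gpa, y), 2))  # close to the true jump of 0.2
```

Clustering standard errors on the running variable, as the paper does following Lee and Card (2008), would require an additional variance calculation not shown in this sketch.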
The coefficient of interest is γ, which we interpret as the impact of being assigned to academic probation at the end of the first term of enrollment. Identification of γ comes from the discontinuous jump in the probability of being assigned to academic probation across the threshold. γ is a reduced-form effect on our outcomes of interest that encompasses all features of academic probation: imposition of the performance threshold, internalization of the penalty, and increased academic counseling and monitoring.
We also estimate Equation (2) separately for different subgroups of students to study heterogeneity in the effects of probation by gender, race/ethnicity, and level of cognitive skills. As discussed in the introduction, we might expect the impact of probation to differ by student race because minority students are less likely to be aware of potentially beneficial institutional systems and are less likely to engage with academic advisors. Therefore, we study the effect of probation for underrepresented minorities--Hispanic and black students--separately from that of non-underrepresented minorities including white and Asian students. (12) We follow Lindo, Sanders, and Oreopoulos (2010) in studying heterogeneity by gender and cognitive skills, and define students with above median ACT scores as high cognitive students. High cognitive students might find it easier to improve their GPA in the subsequent term than low cognitive students, and, consequently, may have a reduced need to rely on strategic course-taking behaviors to meet the GPA threshold imposed compared to low cognitive students. Lindo, Sanders, and Oreopoulos (2010) find that probation leads to dropout among men but not women while Ost (2010) finds that women are more responsive than men to the grades they receive in the majors that they choose. Standard errors are clustered on the running variable (Lee and Card 2008), though results are similar with other standard error choices.
A. Validity of the RD Design
As in any RD study, the primary threat to identification is precise manipulation of the running variable. In our context, the estimated effect of probation may be biased if students are able to finely manipulate their GPA to avoid probation. We believe that this type of manipulation is unlikely here since the typical undergraduate student takes four to five classes per semester and grades typically depend heavily on final exam performance. Furthermore, since probation is just a warning and does not entail any direct consequences, students do not have a strong incentive to avoid it. That said, since the probation threshold is publicly known and students might be able to allocate effort or appeal to professors in order to avoid probation, it is theoretically possible for there to be sorting around the threshold.
If students are in fact able to finely manipulate their GPA, this will be observable in the density of GPAs. Assuming that particularly motivated students push their GPA to be slightly above the threshold, one would expect to see a valley just below the cutoff and a heap just above the cutoff. Observing such a pattern might suggest that those just above the cutoff are unobservedly more motivated than those just below the cutoff since those below the cutoff were close but did not exert the extra effort to avoid probation.
The top panel of Figure 1 shows a histogram of first-term GPA, normalized to the academic probation cutoff. This figure clearly shows heaps at the whole numbers, with particularly large heaps corresponding to failing all of one's classes or getting all A's. Figure 1 also shows that there are students represented throughout the 0 to 4 range in a relatively continuous fashion. This apparent continuity in GPA emerges because students typically take four to five courses that vary in the number of credits assigned. Though there are relatively few possible GPAs for a student taking four 3-credit courses, GPA is fairly continuous for a student taking a mixture of 1-, 2-, 3-, 4-, and 5-credit courses. The bottom panel of Figure 1 presents the same distribution of first-term GPA excluding students earning whole-number GPAs. After excluding these whole-number GPAs, it is reassuring that no valley just below the cutoff is apparent, nor is there excess density just above the cutoff. (13)
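The intuition behind this density check can be illustrated by comparing the mass just below and just above the cutoff after dropping the whole-number heaps. This is a deliberately simplified stand-in for a formal density test, not the paper's exact procedure, and the simulated data are our own.

```python
import numpy as np

def density_check(gpa, cutoff=2.0, width=0.1):
    """Crude manipulation check: count observations in narrow bins just
    below and just above the cutoff, after dropping exact whole-number
    GPAs (the mechanical heaps described in the text). A pronounced
    below/above asymmetry would suggest sorting around the threshold."""
    g = gpa[~np.isclose(gpa % 1, 0)]   # drop the whole-number heaps
    below = int(np.sum((g >= cutoff - width) & (g < cutoff)))
    above = int(np.sum((g >= cutoff) & (g < cutoff + width)))
    return below, above

# Smoothly distributed simulated GPAs: no sorting, no heaps.
rng = np.random.default_rng(1)
sim = rng.uniform(0.0, 4.0, 20000)
b, a = density_check(sim)
print(b, a)  # roughly equal counts: no valley below or heap above the cutoff
```

A formal implementation would instead estimate the density on each side of the cutoff and test the discontinuity (a McCrary-style test), but the binned comparison conveys the same idea.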
To further investigate the possibility that students finely manipulate GPA, we test whether predetermined covariates vary smoothly through the cutoff. To implement this analysis, we estimate Equation (2) but we use predetermined student characteristics as the dependent variable. Table 2 presents the results from this regression and Figure 2 shows the analogous figures. If there were substantial jumps across the cutoff for a large number of characteristics then we would worry about the validity of the research design as individuals with these characteristics might be adept at manipulating their grades in order to avoid being assigned to academic probation.
Table 2 shows that for 10 of the 11 predetermined characteristics tested, there are no statistically significant differences across the probation threshold. At the 10% level, there is a significant difference by gender: women are significantly more likely to be below the probation GPA cutoff in three of the five bandwidth specifications shown. While this is potentially concerning, we show that our estimates are very similar when we add controls for all of these predetermined characteristics, including gender, and we also report results separately for men and women. Importantly, we find that strong predictors of college performance, such as ACT scores, are not significantly discontinuous across the probation threshold in Table 2, and the results in Figure 2 match these estimates. That said, some of the point estimates are moderate in magnitude, and it is possible that small imbalances across many covariates could combine to generate large imbalances in potential outcomes, so we explore this issue further. We use the predetermined covariates to predict each of our outcome variables and then test for discontinuities in these predicted outcomes. Since our identification strategy requires the assumption that the determinants of outcomes are smooth across the cutoff in the absence of placement on probation, testing for discontinuities in each predicted outcome is a more relevant falsification test for our analysis than testing for discontinuities in the predetermined characteristics themselves, which may or may not be related to the outcome determinants. The results in Table 3 demonstrate that there is no evidence of a discontinuity in any of the predicted outcomes using the CCT bandwidth, which is the preferred bandwidth for our main results. We find significant discontinuities in predicted enrollment in the third term in two of the five bandwidth specifications tested, which could happen by chance (Figure 3). Overall, it is reassuring that there are no significant discontinuities in any bandwidth specification for the short-term outcomes that are the primary focus of this paper, including enrollment in the second term, change in GPA from first to second term, and the strategic course-taking behaviors.
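The predicted-outcome falsification test has two steps: fit the outcome on the predetermined covariates, then run the RD on the fitted values, where a jump near zero supports the design. A sketch under simulated data follows; the fixed bandwidth (rather than the CCT bandwidth) and all variable names are illustrative assumptions.

```python
import numpy as np

def rd_jump(run, y, bandwidth=0.5):
    """Local-linear RD jump at a cutoff of zero, rectangular kernel."""
    keep = np.abs(run) <= bandwidth
    d, yy = run[keep], y[keep]
    T = (d < 0).astype(float)
    X = np.column_stack([np.ones_like(d), T, d, d * T])
    coef, *_ = np.linalg.lstsq(X, yy, rcond=None)
    return coef[1]

def predicted_outcome_jump(covariates, outcome, gpa):
    # Step 1: predict the outcome from predetermined covariates via OLS.
    Z = np.column_stack([np.ones(len(outcome)), covariates])
    b, *_ = np.linalg.lstsq(Z, outcome, rcond=None)
    # Step 2: test for a discontinuity in the *predicted* outcome.
    return rd_jump(gpa, Z @ b)

# Simulated data: covariates drawn independently of the running variable,
# so the predicted outcome should be smooth through the cutoff.
rng = np.random.default_rng(3)
gpa = rng.uniform(-1, 1, 20000)               # normed running variable
cov = rng.normal(0, 1, (20000, 3))
y = cov @ np.array([1.0, -0.5, 0.3]) + rng.normal(0, 1, 20000)
print(round(predicted_outcome_jump(cov, y, gpa), 2))  # near zero: design passes
```

If the covariates themselves jumped at the cutoff, the predicted outcome would inherit that jump, which is exactly what this test is designed to detect.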
Although we observe no evidence of manipulation, it is worth considering the likely direction of bias if students could successfully manipulate GPA. Since those just above the cutoff would likely be unobservedly more motivated than those just below the cutoff in the presence of manipulation, we expect that any GPA manipulation would bias our estimates towards finding that academic probation hurts student performance.
B. Potential Bias from Heaping
In our context, even if students do not manipulate their GPA, the discreteness of course grades (A through F with no pluses or minuses) combined with the within-student correlation in assigned grades will mechanically lead to heaping at whole numbers. Since the probation threshold is a whole number, we expect to observe a heap right at the cutoff. This mechanical heaping has the potential to bias the RD estimates of probation impact if the heaping is non-random, that is, if the students with exactly a 2.0 are unobservedly different than students with a GPA close to but not exactly equal to 2.0. One reason that students at exactly a 2.0 may be different than students whose GPA is slightly less than 2.0 is that students who take fewer courses are mechanically more likely to obtain exactly a 2.0 GPA than students who take more courses. (14) Since students who take more courses are also likely stronger students, one might worry that students with a 1.99 GPA may have different characteristics compared to those with exactly a 2.0 GPA. Importantly, so long as there is no manipulation, as the evidence in Section IV.A suggests, students with a 1.99 GPA are still expected to be unobservedly similar to those with a 2.01 GPA.
We follow Barreca et al. (2015) to test for non-random heaping in our context by regressing each of the predetermined characteristics against the running variable and an indicator for whether the student has exactly a 2.0 GPA. The results from this exercise in Table 4 indicate that the heaping is non-random since most of the predetermined characteristics deviate significantly at the heap from the trend predicted by the surrounding data. Consistent with our hypothesis, the students who have an exact 2.0 GPA have attempted significantly fewer credits and have significantly weaker incoming ACT scores. Barreca et al. (2015) show that in the presence of such non-random heaping, keeping the observations at the heap leads to biased estimates, dropping the observations at the heap leads to unbiased estimates, and controlling flexibly for data at the heap can, but does not necessarily, eliminate bias. (15) Following their recommendation, we use the "donut" RD strategy, also employed by Fletcher and Tokmouline (2010), which drops the observations at the heap, that is, the students with exactly a 2.0 GPA in the first term, in our main results. (16) While dropping the observations with an exact 2.0 GPA can limit the generalizability of our results, it is necessary to obtain unbiased estimates, and in doing so, this donut approach is "consistent with the usual motivation for RD designs. Specifically, researchers will focus on what might be considered a relatively narrow sample in order to be more confident that they can identify unbiased estimates" (Barreca et al. 2015). We also test robustness of our results to including the heap with a separate indicator variable in appendix tables.
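The heap test and the donut restriction can be sketched as follows. The regression of a predetermined characteristic on the running variable plus an exact-2.0 indicator follows the logic described above; the bandwidth, simulated magnitudes, and variable names are illustrative assumptions rather than the paper's exact specification.

```python
import numpy as np

def heap_test(gpa, covariate, cutoff=2.0, bandwidth=0.5):
    """Barreca et al. (2015)-style check: does a predetermined covariate
    deviate at the heap (exactly 2.0) from the linear trend implied by
    the surrounding data? Returns the coefficient on the heap indicator."""
    d = gpa - cutoff
    keep = np.abs(d) <= bandwidth
    d, c = d[keep], covariate[keep]
    heap = np.isclose(d, 0).astype(float)    # exactly at the 2.0 heap
    X = np.column_stack([np.ones_like(d), d, heap])
    coef, *_ = np.linalg.lstsq(X, c, rcond=None)
    return coef[2]

def donut_sample(gpa, cutoff=2.0):
    """Donut RD: keep only observations away from the exact-cutoff heap."""
    return ~np.isclose(gpa, cutoff)

# Simulated example: students at exactly 2.0 have ACT scores one point
# below what the local trend predicts (non-random heaping).
rng = np.random.default_rng(2)
gpa = np.round(rng.uniform(1.5, 2.5, 4000), 1)
act = 23 + 2 * gpa - 1.0 * np.isclose(gpa, 2.0) + rng.normal(0, 2, 4000)
print(round(heap_test(gpa, act), 1))  # close to the built-in gap of -1.0
```

Estimating the main RD on `gpa[donut_sample(gpa)]` then discards the contaminated heap, which is the strategy the main results adopt.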
A. Short-Term Impacts of Assignment to Probation
We begin by examining whether students placed on probation at the end of their first term remain enrolled in the second term and investigate how student performance changes in the subsequent term. Since being placed on probation after the first term comes only with a warning to raise performance in the second term, any attrition between the first and second terms is strictly voluntary and reflective of students getting discouraged from returning to school. (17) Table 5 presents estimates of the effect of probation on second term enrollment and change in GPA between the first and second term for the overall sample and split by subgroup using the optimal CCT bandwidth. Tables S1 and S2 (Supporting information) show that these results are robust across a variety of bandwidths while Tables S12 and S13 show robustness to using the Barreca et al. (2015) indicator method instead of the donut RD, and to controlling for covariates. Figure 4 presents the results in Table 5 graphically.
The first panel of Table 5 shows that placement on probation does not have a statistically significant effect on second term enrollment of students in the overall sample or in any of the subsamples. This result is in contrast to Lindo, Sanders, and Oreopoulos (2010) who find that placement on probation at the end of the first year increases the probability of dropout between the first and second years for the full sample and particularly so for students with high high-school grades and men. The second panel of Table 5 shows that students placed on probation significantly improve their GPA by 0.2 additional grade points in the second term relative to students who narrowly escaped being placed on probation at the end of the first term.
This estimate is similar to that found in past work (Lindo, Sanders, and Oreopoulos 2010; Fletcher and Tokmouline 2010). Although the point estimates vary somewhat across demographic groups and lack statistical significance for men and those with high cognitive skills, Figure 4 shows visual evidence of discontinuities in GPA improvement in the second term for most groups of students. Table S2 shows that with the exception of the estimates for men, the point estimates are generally similar across bandwidth choices, and Table S12 shows that these results are robust to using the Barreca et al. (2015) indicator method and to controlling for covariates. Interestingly, our finding that women achieve larger GPA improvements than men is consistent with the findings of Lindo, Sanders, and Oreopoulos (2010).
B. Strategic Course-Taking Behavior
While the GPA improvements documented in the previous section may reflect increased effort, it is also possible that students are able to improve their grades in other ways. To explore the mechanisms behind the GPA improvement, this section analyzes the impact of probation assignment on different course-taking behaviors. We study the effect of probation on credit hours attempted, the relative difficulty of courses taken, and the probability of withdrawing from a course to receive a "W" on the transcript instead of the actual grade earned in that course. Table 6 presents the estimates of strategic student responses to being assigned to academic probation with different columns showing results for the overall sample and the various subgroups. Tables S13-S19 show robustness of these results to different bandwidths while Tables S3b-S8b show robustness to using the Barreca et al. (2015) indicator method and to adding covariate controls. In the interest of brevity, our subsequent discussion of the remaining results will only highlight those that are found to be robust in these Supporting information tables, and we will point out any exceptions.
Panel A of Table 6 reports the effect of probation on attempting 15 or more credits in the second term while Panel B reports the effect on the total number of credits attempted in the second term. Studying the effect of probation on attempting 15 or more credits is important because although students are still considered full-time when attempting 12-14 credits, this is considered a light load since one must pass at least 15 credits each semester in order to graduate in 4 years. (18) For both measures of credits attempted, we find that probation leads to a statistically significant reduction in credits attempted in the second term for the full sample of students and for non-minority students and high cognitive students. Figures 5 and 6 show that, while there are visually discernible discontinuities in credits attempted for most groups, non-minority and underrepresented minority students exhibit starkly different behavior. While non-minority students placed on probation are nine percentage points less likely to attempt 15 or more credits, the estimate for underrepresented minorities is close to zero and not significant.
The estimates in Panel C show that although probation does not significantly affect the probability of withdrawing from a course in the full sample, it leads to significant increases in the probability of withdrawing for women (3.5 percentage points) and non-minorities (6.4 percentage points). Figure 7 mirrors these results, showing that the clearest and largest discontinuity in withdrawing appears for non-minority students, with minority students showing no effect on the probability of withdrawing.
The estimates in Panel D report the effect of probation on enrollment in 200-level courses in the second term. At the university we study, 100-level courses are introductory, whereas 200-level courses are more advanced and require 100-level courses as prerequisites. The estimates for all groups are negative, suggesting that students placed on probation shy away from enrolling in the more demanding 200-level courses, but they are statistically significant only for underrepresented minorities and for non-minority students. Figure 8 shows that the significant effect found for underrepresented minorities is driven by the two dots closest to the cutoff, and Table S6 confirms that this effect is very sensitive to bandwidth choice. In contrast, although the visual evidence of a discontinuity for non-minorities is not dramatic, Table S6 shows that the point estimate of the effect on taking a 200-level course for non-minorities is quite robust across different bandwidths.
In Panel E, we analyze the effect of probation on the probability of taking a math or science course. Since math and science courses have harsher grading standards than other disciplines, are widely perceived by students as hard subjects (Sabot and Wakeman-Linn 1991), and require more study time (Arcidiacono, Aucejo, and Spenner 2012), it is possible that students avoid these subjects after being placed on probation. This could be important in the context of recent efforts to increase STEM enrollments, but Panel E shows no evidence for any significant effect on the probability of taking a math or science course.
In Panel F, we explore the possibility that students avoid harshly graded disciplines using a broader measure based on the average grades in the disciplines in which a student takes courses in the second term. Education and Music courses, for example, have higher average grades than courses in Statistics, Chemistry, and Economics (Figure 9). We calculate this measure solely from the disciplines in which a student attempts courses; the student's actual course performance does not enter into the calculation of her average discipline grade. The estimates in Panel F are mostly positive and Figure 10 shows some visual evidence of a discontinuity, but none of the estimates are statistically significant.
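The discipline-average grade measure just described can be sketched in a toy example. The records and variable names below are purely illustrative, not the paper's actual data or code:

```python
from collections import defaultdict

# Hypothetical second-term records: (student_id, discipline, grade_points).
records = [
    (1, "Music", 3.7), (1, "Economics", 2.0),
    (2, "Chemistry", 1.3), (2, "Statistics", 2.3),
]

# Step 1: average grade in each discipline, computed across students.
sums = defaultdict(float)
counts = defaultdict(int)
for _, disc, pts in records:
    sums[disc] += pts
    counts[disc] += 1
disc_avg = {d: sums[d] / counts[d] for d in sums}

# Step 2: a student's measure is the mean of the discipline averages for
# the disciplines she enrolls in; her own course grades do not enter
# the measure directly.
by_student = defaultdict(list)
for sid, disc, _ in records:
    by_student[sid].append(disc_avg[disc])
avg_discipline_grade = {sid: sum(a) / len(a) for sid, a in by_student.items()}
```

With real data, the discipline averages would be computed over the full student body, so that any one student's own performance has no meaningful influence on her measure.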
Taken together, the results in Table 6 show that students respond to being placed on probation by changing their course-taking behavior in the second term in multiple strategic ways that improve their chances of success: taking fewer credits, withdrawing from courses more often, and enrolling in 200-level courses less often. It is important to note that we count credits as attempted even if a student eventually withdraws, so the lighter course load is not a mechanical result of course withdrawal. As mentioned in Section IV, the estimated effects of probation capture the combined effects of the imposition of a performance threshold, the internalization of the penalty, and increased academic counseling and monitoring. The different course-taking behavior we find among students placed on probation may reflect students trying to raise their second-term GPA in order to satisfy the performance threshold imposed on them and get off academic probation. The differences in course-taking behavior may, however, also result from students placed on probation internalizing the signal about their weak academic performance and adjusting their course load difficulty to better match their downward-revised expectations of their own ability. (19) Our findings are consistent with both interpretations, and we lack the data to empirically distinguish between the two mechanisms. If changes in course-taking spurred by updated expectations about one's own ability dominate the former mechanism, we might expect students placed on probation at the end of the first term to permanently change their course-taking through the remaining terms.
If placement on probation does not lead to a significant change in student expectations about own ability--at least close to the probation cutoff that we examine--it is possible that the changes in course-taking behavior are more transient, with students engaging in them only in the second term, when they need to satisfy the GPA threshold imposed on them.
Most notably, we find consistent evidence of marked heterogeneity in these course-taking responses by student race. (20) For every strategic course-taking behavior for which we find significant effects, we find evidence of a significant and sizable response by non-minority students but no evidence of a response by underrepresented minorities. (21,22) In light of these differences in strategic course-taking behavior, it is interesting that both groups of students significantly improved their GPA in the second term relative to the first term. Although the coefficients for the effect of probation on the change in GPA are not statistically significantly different for minority and non-minority students, the point estimate for minority students of 0.18 is 20% smaller than the point estimate for non-minority students of 0.23, and it is theoretically possible that minority students might have experienced larger GPA gains had they made some of the course-taking changes that non-minority students undertook in the second term.
Since each of the strategic course-taking behaviors we examine can be undertaken separately, their cumulative effect is potentially sizable. To determine the fraction of the GPA improvement that can be attributed to these documented strategic behaviors, it is necessary to determine the impact of each of these factors on GPA improvement, which is complicated by endogeneity. For example, students who enroll in fewer credits generally perform worse than students who enroll in more credits, but this likely reflects unobserved heterogeneity in the types of students who enroll in fewer credits rather than a positive causal effect of attempting more credits on GPA. Similarly, average grades tend to be better in 200-level courses than in 100-level courses, but this difference may be driven by attrition from the university of low-performing students, who are less likely to take 200-level courses. Although we are unable to fully address these issues, we make some adjustments, described below, to provide suggestive evidence on how much of the GPA improvement the strategic behavior we document can explain.
First, to quantify the impact of withdrawing from courses on GPA improvement, we calculate the GPA that each student would have had if she had received an F rather than withdrawing from the course. We then re-estimate the impact of probation using this constructed GPA and take this estimate as the impact of probation if students did not withdraw from any courses. While most low-performing students who withdraw from a course are failing at the time of the withdrawal, it is possible that some of these students would not actually have failed had they not withdrawn. As such, using a counterfactual of an F may overstate the GPA benefits of withdrawing. On the other hand, withdrawing from a course allows students to devote more time to their other courses and we do not account for this benefit. We also estimate the impact of probation using the constructed GPA while assuming that the counterfactual grade in a course a student withdrew from would have been a D. This is likely to understate the benefits of withdrawal since many students on probation would prefer a D to a W because a D still results in college credit and there is no refund for a W.
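The counterfactual GPA described above amounts to recomputing the term GPA with withdrawn courses recoded to a hypothetical grade. A minimal sketch (illustrative function and transcript, not the paper's actual code):

```python
def counterfactual_gpa(courses, w_grade=0.0):
    """Term GPA if each withdrawn course ('W') had instead earned w_grade
    points (0.0 mimics the F counterfactual, 1.0 mimics the D counterfactual).
    courses: list of (credit_hours, grade) where grade is points or 'W'."""
    total_points = 0.0
    total_credits = 0.0
    for credits, grade in courses:
        points = w_grade if grade == "W" else grade
        total_points += points * credits
        total_credits += credits
    return total_points / total_credits

# A hypothetical transcript: a B, a C, and one withdrawn 3-credit course.
term = [(3, 3.0), (3, 2.0), (3, "W")]
actual = (3 * 3.0 + 3 * 2.0) / 6        # W excluded from the GPA: 2.5
with_f = counterfactual_gpa(term)        # W recoded as F: 15/9, about 1.67
with_d = counterfactual_gpa(term, 1.0)   # W recoded as D: 18/9 = 2.0
```

The gap between the actual GPA and the recoded GPA illustrates why the F counterfactual is an upper bound on the GPA benefit of withdrawing and the D counterfactual a more conservative one.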
We find that if non-minority students failed instead of withdrawing from their courses, academic probation would only increase their GPA by approximately 0.174 points, which is quite similar to the 0.181 GPA improvement experienced by underrepresented minorities who are not found to withdraw strategically in response to being placed on probation. Since the actual increase in GPA for non-minority students is 0.228, up to 24% of their GPA improvement can be attributed to withdrawals. If we instead assume that students who withdraw would have earned a D, we find that 15% of the GPA improvement can be attributed to students strategically withdrawing from courses in the second term.
To estimate the impact of credit hours and course level on GPA improvement, we run a regression of GPA improvement on credit hours attempted, an indicator for taking 200- or higher-level courses, academic standing indicators, and individual fixed effects. (23) While the individual fixed effects control for any time-invariant determinants of selection into different credits or courses, time-varying student-specific shocks still have the potential to bias these estimates. Therefore, we acknowledge that the estimates from this regression are unlikely to be credibly identified. With this caveat in mind, the coefficients imply that each credit hour attempted reduces GPA by 0.024, and attempting 200- or higher-level courses leads to a 0.124 lower GPA. As shown in Table 6, non-minority students placed on probation attempt 0.185 fewer credits and are 5.7 percentage points less likely to take a 200-level course in the second term. Combining these estimates of strategic behavior with the estimated impact of these behaviors suggests that the decreased credits and lower probability of taking 200-level courses together could explain 0.012, or 5%, of the GPA improvement for non-minority students. (24)
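The back-of-the-envelope shares in this and the preceding paragraphs follow directly from the reported point estimates; the arithmetic can be reproduced as follows (all numbers are estimates quoted in the text):

```python
# Non-minority students: actual probation effect on second-term GPA, and the
# effect when withdrawn courses are recoded as an F (estimates from the text).
actual_effect = 0.228
effect_without_withdrawals = 0.174
withdrawal_share = (actual_effect - effect_without_withdrawals) / actual_effect
# roughly 0.24: up to 24% of the improvement attributable to withdrawals

# Suggestive fixed-effects coefficients: GPA change per credit hour attempted,
# and for attempting a 200-or-higher-level course.
gpa_per_credit = -0.024
gpa_200_level = -0.124

# Probation-induced changes for non-minority students (Table 6).
delta_credits = -0.185    # fewer credits attempted
delta_200_prob = -0.057   # lower probability of taking a 200-level course

explained = gpa_per_credit * delta_credits + gpa_200_level * delta_200_prob
# about 0.012 GPA points
credit_course_share = explained / actual_effect
# roughly 0.05, i.e., 5% of the 0.228 improvement
```

These calculations are only as credible as the underlying regression coefficients, which, as noted above, are unlikely to be cleanly identified.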
While the above calculations rely on several strong assumptions, these results provide suggestive evidence that a meaningful fraction of the GPA improvement for non-minority students might be attributable to strategic behaviors rather than increased effort. For these students, withdrawals play a large part, explaining between 15% and 24% of the GPA improvement, while the changes in the other strategic behaviors are estimated to account for 5% of the total improvement. Therefore, up to 30% of the GPA improvement for non-minority students could be attributable to strategic behavior rather than a result of increased effort in the term following their placement on probation.
C. Longer-Term Impacts of Assignment to Probation
In Table 7, we present estimates of the impact of academic probation on longer-term outcomes: enrollment in the third term and indicators for graduation within 4 and 6 years. Several important caveats apply to interpreting these effects. First, we consider the impact of being placed on academic probation based on performance in the first term, but students not placed on probation after the first term might be placed on probation in later terms. As a consequence, our estimates of long-run effects may be affected by contamination of the control group. The estimated impact of academic probation on longer-term outcomes may also be biased by differential attrition.
Panel A of Table 7 shows that probation significantly decreases the likelihood of enrolling in the third term in the full sample and for women, underrepresented minorities, and those with low cognitive ability. While the estimates for men, non-minorities, and the high cognitive group are negative, they lack statistical significance and are considerably smaller than the estimates for the relatively disadvantaged groups. Assignment to probation at the end of the first term significantly reduces third-term enrollment by 7.4 percentage points for underrepresented minorities, which is 11.2% of their mean third-term enrollment rate of 66%, while it is associated with a statistically insignificant 2.1 percentage point reduction for non-minorities, which is 3% of their mean third-term enrollment rate of 70%. Figure 11 reinforces these results, showing clearer, larger discontinuities in third-term enrollment for each of the relatively disadvantaged groups.
Panels B and C show that, unlike enrollment in the third term, probation does not have a statistically significant effect on the probability of graduating in 4 or 6 years for almost all groups. The only significant graduation effect we find is a reduction in the probability of 4-year graduation for the low cognitive group, but Figure 12 shows that this effect is noisy, and Table S10 shows it is not robust across different bandwidths. The point estimates of the effect of probation on 4-year and 6-year graduation for underrepresented minorities are -0.022 and -0.07, respectively, which could be indicative of a negative but imprecisely estimated effect. In contrast, the analogous point estimates for non-minorities are -0.004 and 0.012, respectively, which are close to zero. Overall, the analysis suggests that there is no strong evidence that being placed on probation has broad long-run consequences for college graduation (Figure 13). (25)
VI. DISCUSSION AND CONCLUSION
Using an RD design, we present evidence on student strategic responses to being assigned to academic probation at a large, urban public U.S. university. We find that underrepresented minorities show little evidence of strategic behavior while non-minority students engage in strategic behavior on several dimensions.
It is important to note that the causal evidence on probation presented here applies to students on the margin of being assigned to probation; our study does not speak to probation's impact on students far from the relevant cutoff. Furthermore, probation policies may have broader effects not captured here, particularly if students are forward looking. While our empirical design assumes that students do not finely manipulate their GPA, it is possible that students across the board maintain high levels of effort to avoid probation.
While our study does not assess the total effect of academic probation policies, most universities already have a probation policy in place and are not considering eliminating it. Instead, several universities are currently debating whether to alter their academic probation cutoffs. Our study provides important evidence on the effect of the policy for students near the cutoff. This is the relevant policy parameter for schools considering changing the GPA threshold for being placed on academic probation. If all universities used the same threshold, this would be prima facie evidence that there was a consensus regarding the optimal performance standard. However, there is substantial variation across and within universities and it is common for universities to alter their policies. (26)
Outside of its implications for institutional policy, our study provides additional insight into how low-performing college students respond to the performance incentives imposed by the university. Though all students on probation have an incentive to attempt fewer credits and to withdraw from courses when performing poorly, only a subset of students appears to engage in this behavior. In conversations with academic advisors at the university, we were told that many students do not take advantage of the free support services provided to them, whether because of disengagement or lack of awareness. The lack of response among underrepresented minorities on this margin is consistent with qualitative work on academic probation finding that many minority students are unaware of university policies and supportive resources (Sadler 2010).
It is interesting that underrepresented minority students are able to significantly improve their GPA without relying on these strategic behaviors, although their gains appear smaller than those made by non-minority students. Because minority students do not rely on strategic course choices, they may have had to increase their effort and time spent studying in order to improve their grades. It is theoretically unclear how such increased effort may affect students' future success. It is possible that the increased effort translates into better learning, setting these students up for improved academic performance and graduation rates in future terms. However, the increased effort that minority students may have to put in can come with a cost. We know from past work that first-generation and minority students are more likely than non-first-generation, non-minority students to have job and family responsibilities outside of school (e.g., Stebleton and Soria 2013). Minority students' lack of reliance on helpful university policies and procedures can therefore substantially increase their workload and stress, potentially leading to feelings of inadequacy and eventual dropout. Our findings thus suggest another reason that university retention efforts that track students' academic performance alone may not be sufficient: academic performance should be considered alongside measures of course load, course difficulty, and related course-taking behavior.
Finally, our investigation of strategic behavior has implications beyond the literature on academic probation. Studies that use college GPA as the dependent variable should keep in mind that GPA improvements need not correspond to increased learning or effort. While ideally researchers would adjust GPA to reflect course difficulty, course load, and other considerations, in practice, this adjustment is difficult because the data required may not be available and there is no compelling causal evidence on the impact of these factors on GPA. As an alternative, researchers might investigate the extent to which students strategically alter their course-taking behavior in response to a particular treatment.
TABLE A1. Comparison of University with National Peers

                                   Urban Public U   Similar Universities   Nationwide
Total enrollment                        29,049            16,429             17,078
Undergraduate enrollment                17,575            13,751             12,436
In-state tuition/fees                   13,410             7,528             19,239
Fraction white                             38%               58%                58%
Fraction black                              8%                6%                11%
Fraction Hispanic                          21%               10%                 9%
Fraction Asian                             18%                4%                 7%
Fraction with any financial aid            83%               82%                86%
Fraction Pell grant                        52%               40%                34%
Fraction federal loan                      49%               49%                57%
25th percentile ACT composite               21                20                 21
75th percentile ACT composite               26                25                 26
Full-time retention                        81%               78%                81%
4-year graduation                          30%               19%                41%
6-year graduation                          57%               47%                61%
N                                            1                52                342

Notes: Data drawn from National Center for Education Statistics IPEDS. The table compares basic characteristics between 2013 and 2015 for the urban, public university we study with similar urban, public institutions enrolling more than 10,000 students nationwide. Column 3 compares our university to all non-profit doctoral-granting research universities. Racial/ethnic percentages do not sum to one because definitions of race/ethnicity allow for multiple groups.
CCT: Calonico, Cattaneo, and Titiunik
GPA: Grade Point Average
IPEDS: Integrated Postsecondary Education Data System
RD: Regression Discontinuity
Altonji, J., T. Elder, and C. Taber. "Selection on Observed and Unobserved Variables: Assessing the Effectiveness of Catholic Schools." Journal of Political Economy, 113(1), 2005, 151-84.
Arcidiacono, P., E. Aucejo, and K. Spenner. "What Happens after Enrollment? An Analysis of the Time Path of Racial Differences in GPA and Major Choice." IZA Journal of Labor Economics, 1(5), 2012.
Barreca, A. I., M. Guldi, J. M. Lindo, and G. R. Waddell. "Heaping-Induced Bias in Regression Discontinuity Designs." Economic Inquiry, 54(1), 2015, 268-93.
Calonico, S., M. D. Cattaneo, and R. Titiunik. "Robust Data-Driven Inference in the Regression-Discontinuity Design." Stata Journal, 14(4), 2014a, 909-46.
--. "Robust Nonparametric Confidence Intervals for Regression-Discontinuity Designs." Econometrica, 82(6), 2014b, 2295-326.
--. "Optimal Data-Driven Regression Discontinuity Plots." Journal of the American Statistical Association, 110(512), 2015, 1753-69.
Cornwell, C. M., K. H. Lee, and D. B. Mustard. "Student Responses to Merit Scholarship Retention Rules." Journal of Human Resources, 40(4), 2005, 895-917.
Damashek, R. Support Programs for Students on Academic Probation. Washington, DC: U.S. Department of Education, 2003.
Engle, J., and V. Tinto. "Moving beyond Access: College Success for Low-Income, First-Generation Students." Washington, DC: Pell Institute for the Study of Opportunity in Higher Education, 2008.
Fletcher, J. and M. Tokmouline. "The Effects of Academic Probation on College Success: Lending Students a Hand or Kicking Them While They Are Down?" Unpublished working paper, 2010.
Imbens, G. W., and T. Lemieux. "Regression Discontinuity Designs: A Guide to Practice." Journal of Econometrics, 142(2), 2008, 615-35.
Klempin, S. Redefining Full-Time in College: Evidence on 15-Credit Strategies. New York: Columbia University, Teachers College, Community College Research Center, 2014.
Lee, D. S., and D. E. Card. "Regression Discontinuity Inference with Specification Error." Journal of Econometrics, 142(2), 2008, 655-74.
Lindo, J. M., N. J. Sanders, and P. Oreopoulos. "Ability, Gender, and Performance Standards: Evidence from Academic Probation." American Economic Journal: Applied Economics, 2(2), 2010, 95-117.
Nunez, A., and S. Cuccaro-Alamin. First-Generation Students: Undergraduates Whose Parents Never Enrolled in Postsecondary Education. Washington, DC: National Center for Education Statistics, 1998.
Ost, B. "The Role of Peers and Gender in Determining Major Persistence in the Sciences." Economics of Education Review, 29(6), 2010, 923-34.
Oster, E. "Unobservable Selection and Coefficient Stability: Theory and Evidence." Journal of Business & Economic Statistics, 2016.
Richardson, R. C., and E. F. Skinner. "Helping First-Generation Minority Students Achieve Degrees," in First Generation College Students: Confronting the Cultural Issues, Vol. 1992, edited by L. S. Swerling and H. B. London. San Francisco: Jossey-Bass Publishers, 1992, 29-43.
Sabot, R., and J. Wakeman-Linn. "Grade Inflation and Course Choice." Journal of Economic Perspectives, 5(1), 1991, 159-70.
Sadler, C. "Academic Probation: How Students Navigate and Make Sense of Their Experiences." Doctoral dissertation, University of Wisconsin, Stevens Point, 2010.
Sjoquist, D. L., and J. V. Winters. "The Effect of Georgia's HOPE Scholarship on College Major: A Focus on STEM." IZA Discussion Paper No. 8875, Institute for the Study of Labor, 2015.
Stebleton, M., and K. Soria. "Breaking Down Barriers: Academic Obstacles of First-Generation Students at Research Universities." Unpublished working paper, 2013.
Stinebrickner, T., and R. Stinebrickner. "Learning about Academic Ability and the College Dropout Decision." Journal of Labor Economics, 30(4), 2012, 707-48.
Terenzini, P. T., L. Springer, P. M. Yaeger, and A. Nora. "First Generation College Students: Characteristics, Experiences, and Cognitive Development." Research in Higher Education, 37(1), 1996, 1-22.
York-Anderson, D. C., and S. L. Bowman. "Assessing the College Knowledge of First-Generation and Second-Generation College Students." Journal of College Student Development, 32(2), 1991, 116-22.
Additional supporting information may be found online in the Supporting Information section at the end of the article.
Table S1. Effect of Probation on Second-Term Enrollment--BW Robustness
Table S2. Effect of Probation on Change in GPA from First to Second Term--BW Robustness
Table S3. Effect of Probation on Attempting 15 or More Credits--BW Robustness
Table S4. Effect of Probation on Credits Attempted--BW Robustness
Table S5. Effect of Probation on Withdrawal in Second Term--BW Robustness
Table S6. Effect of Probation on Taking a 200-Level Course--BW Robustness
Table S8. Effect of Probation on Average Grades in Courses' Disciplines--BW Robustness
Table S9. Effect of Probation on Enrollment in Third Term--BW Robustness
Table S10. Effect of Probation on 4-Year Graduation--BW Robustness
Table S11. Effect of Probation on 6-Year Graduation--BW Robustness
Table S12. Effect of Probation on Enrollment in Second Term
Table S13. Effect of Probation on Change in GPA from First to Second Term
Table S14. Effect of Probation on Attempting 15 or More Credits
Table S15. Effect of Probation on Credits Attempted
Table S16. Effect of Probation on Withdrawal in Second Term
Table S17. Effect of Probation on Taking a 200-Level Course
Table S18. Effect of Probation on Taking a Math/Science Course in Second Term
Table S19. Effect of Probation on Average Grades in Courses' Disciplines
Table S20. Effect of Probation on Enrollment in Third Term
Table S21. Effect of Probation on 4-Year Graduation
Table S22. Effect of Probation on 6-Year Graduation
MARCUS D. CASEY, JEFFREY CLINE, BEN OST and JAVAERIA A. QURESHI *
* We thank Robert Kaestner, Carrie Ost, Steve Rivkin, Bradley Hardy, seminar participants at the University of Illinois at Chicago, Northwestern University, the Association for Education Finance and Policy Conference, Chicago Education Research Presentation Group, and three anonymous referees for helpful comments and discussion. We gratefully acknowledge the UIC Office of Social Science Research Award for financial support. The authors are responsible for any errors.
Casey: Assistant Professor, Department of Economics, University of Illinois at Chicago, Chicago, IL. Phone 312-413-2100, Fax 312-996-3344, E-mail firstname.lastname@example.org
Cline: Graduate Student, Department of Economics, University of Illinois at Chicago, Chicago, IL. Phone 312-996-2683, Fax 312-996-3344, E-mail email@example.com
Ost: Assistant Professor, Department of Economics, University of Illinois at Chicago, Chicago, IL. Phone 617-233-3304, Fax 312-996-3344, E-mail firstname.lastname@example.org
Qureshi: Assistant Professor, Department of Economics, University of Illinois at Chicago, Chicago, IL. Phone 312-355-3216, Fax 312-996-3344, E-mail email@example.com
(1.) "Success" in these contexts typically refers to increasing grades, persistence in college, and ultimately graduation within 6 years of entry.
(2.) Lindo, Sanders, and Oreopoulos (2010) note in footnote 31 of their paper that in results not shown they explore whether students attempt fewer credits or courses with higher average grades. They find that there is a decrease in credits attempted, but note that this might reflect mid-year attrition from the university rather than reflecting strategically reducing course loads (they measure credits attempted at the year, rather than term level). Their paper does not explore other strategic behaviors such as course withdrawal or enrollment in higher level courses nor does it explore heterogeneity in strategic behavior.
(3.) It is plausible that taking fewer or easier courses and withdrawing strategically impose a lower cost on students, in that they allow students to focus on fewer courses in order to improve their performance, compared with the alternative of improving the GPA within a more demanding, full course load. However, these strategic course-taking behaviors, which increase the probability of an improved GPA in the second term, may come at the cost of delaying graduation, because students engaging in them will have completed fewer credits at the end of the second term.
(4.) For the sake of brevity, we refer to non-underrepresented minorities, that is, white and Asian students, as non-minorities throughout this draft.
(5.) Course withdrawal leads to a "W" on the transcript instead of an actual grade, and the course no longer contributes to GPA calculations. Students are permitted to withdraw from a total of four courses in their undergraduate program.
(6.) In the Lindo, Sanders, and Oreopoulos (2010) university, students are placed on probation based on their performance in the entire first year of study rather than performance in the first term (semester).
(7.) It is also possible that these students engage in other strategic behaviors that we cannot measure.
(8.) All colleges except the Education and Honors Colleges use the university-wide 2.0 threshold. The Education College requires a minimum GPA of 2.5 and the Honors College a minimum of 3.4. The running variable used in the RD analysis is normalized with respect to the relevant cutoff.
(9.) Our sample excludes transfer, mid-year entrants, and part-time students.
(10.) Column (1) includes statistics for the university we study, Column (2) for urban, public institutions, and Column (3) for non-profit, doctoral granting research universities in the United States.
(11.) Since the first stage is not identical to 1, to obtain the IV estimates of the impact of probation, one should scale up our estimates by 1/0.98. For the outcomes that follow, we just show the reduced form estimates and leave off this minor rescaling.
(12.) For the sake of simplicity, we refer to non-underrepresented minority students as non-minority students throughout this paper.
(13.) We also performed formal tests of sorting at the boundary and we are unable to reject smoothness of the density at the cutoff once the exact 2.0 GPAs are removed.
(14.) In the extreme, a student who only attempts one course will either have a GPA of exactly 2.0 or her GPA will be well above or below the probation threshold.
(15.) Including an indicator variable for the heap is a common estimation strategy to deal with heaps but this practice fully eliminates the bias from non-random heaping only if the sole difference between the heaped and non-heaped data is their means (Barreca et al. 2015).
(16.) Because we include students with GPAs between 2.01 and 2.09, the figures throughout the paper still include a dot that appears to land on 2.0. This dot excludes students with a GPA of exactly 2.0.
(17.) Attrition in terms after the second term could be voluntary or could reflect dismissals due to failing to meet the probation performance standards.
(18.) See Klempin (2014) for a discussion of the importance of attempting at least 15 credits each semester. Academic advisors pay close attention to each student's "credit momentum," defined as taking at least 15 credits per semester.
(19.) Stinebrickner and Stinebrickner (2012) find that students entering college are considerably over-optimistic about their ability, and that they learn about and update their expectations of their own ability as grades are revealed. They report that student learning about ability explains a significant fraction of dropout.
(20.) The point estimates for high cognitive students in Table 6 are suggestive of a meaningful response in their strategic behaviors as well but most of these coefficients lack statistical significance and the contrast of these effects between high cognitive and low cognitive students is not as stark as it is for minorities and non-minorities. Similarly, although we test for strategic behavior separately for men and women, we do not find as consistent or meaningful a difference in the behavior by gender as we do by race.
(21.) The only exception to this is that we find that probation significantly decreases the probability of taking a 200-level course for underrepresented minorities in Table 6, but Table S6 and Figure 8 show that this is not a robust effect.
(22.) Tables S13--S17 show that the significant probation effects on the probability of attempting 15 or more credits, the number of credits attempted, the probability of withdrawal, and the probability of taking a 200-level course for non-minority students are all robust to controlling for the predetermined covariates. It is reassuring that these results survive controlling for these important variables, since this suggests that they are not driven by non-random sorting on these observables around the cutoff. Following an approach similar to Altonji, Elder, and Taber (2005), Oster (2016) formalizes this intuition and provides a test of how severe selection on unobservables would have to be to explain the estimated results. Though the Oster (2016) approach relies on strong assumptions, we implement the proposed test with her recommended maximum R², and find that selection on unobservables would have to be between 20 and 80 times stronger than the observable selection to drive the course-taking estimates found for non-minority students.
(23.) We restrict the analysis to the first 2 years of enrollment since these relationships may be different for upperclassmen and the assumption of fixed student ability becomes less plausible when considering a longer time period.
(24.) The number 0.012 comes from the calculation 0.024 × 0.185 + 0.124 × 0.057.
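The decomposition in this footnote can be checked directly; the four numbers below are the ones quoted in the text:

```python
# Verify the decomposition reported in footnote 24:
# 0.024 * 0.185 + 0.124 * 0.057 ≈ 0.012
total = 0.024 * 0.185 + 0.124 * 0.057
print(round(total, 3))  # → 0.012
```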
(25.) The graduation results use smaller samples than the earlier analysis because some cohorts are too recent to have 4- or 6-year graduation defined.
(26.) In data collected by the authors on the academic probation policies for 38 public Ohio colleges and universities, less than one quarter share a common probation policy and approximately half of the schools use different probation cutoffs depending on cumulative credits earned. Several Ohio universities either lowered or raised probation standards in the past year. More broadly, Internet searches quickly reveal dozens of universities that are currently debating altering the cutoff for academic probation.
Caption: FIGURE 1 Histogram of Normalized First-Term GPA
Caption: FIGURE 2 Discontinuities in Observables
Caption: FIGURE 3 Discontinuities in Predicted Outcomes
Caption: FIGURE 4 Effect of Probation on Enrollment and Change in GPA in Second Term
Caption: FIGURE 5 Effect of Probation on Probability of Attempting 15 or More Credits
Caption: FIGURE 6 Impact of Probation on Credits Attempted in the Second Term
Caption: FIGURE 7 Impact of Probation on Probability of Withdrawal
Caption: FIGURE 8 Effect of Probation on Probability of Taking 200-Level Course
Caption: FIGURE 9 Effect of Probation on Probability of Taking Math/Science Course
Caption: FIGURE 10 Average Grades in Course's Discipline
Caption: FIGURE 11 Effect of Probation on Third Term Enrollment
Caption: FIGURE 12 Effect of Probation on 4-Year Graduation
Caption: FIGURE 13 Effect of Probation on 6-Year Graduation
TABLE 1 Descriptive Statistics

Entries are means with standard deviations in parentheses. N = 29,571.

Panel A: Background characteristics
  Female 0.55 (0.5); Black 0.09 (0.29); Hispanic 0.22 (0.42); Asian 0.25 (0.44); White 0.32 (0.42); Other 0.06 (0.24); Foreign 0.12 (0.32)
  ACT composite 23.78 (3.71); ACT English 23.92 (4.49); ACT math 24.21 (4.52); ACT reading 23.68 (4.94); ACT science 23.46 (3.75)

Panel B: Undergraduate characteristics
  First term GPA 2.72 (0.96); Second term GPA 2.67 (0.92); Placed on probation at end of first term 0.19 (0.39); Ever placed on probation 0.39 (0.49); Second term enrollment 0.9 (0.3); Credits attempted in second term 14.67 (1.73); Liberal arts and sciences 0.69 (0.46); Graduate in 4 years 0.39 (0.44); Graduate in 5 years 0.53 (0.48); Graduate in 6 years 0.56 (0.48)

Note: Sample includes all first-time freshmen entering the university in the 2004 through 2013 academic years.

TABLE 2 Validity of the RD Design--Balance on Covariates

Each cell reports the estimate on the below-cutoff indicator, the clustered standard error in brackets, and the number of observations. Columns: (1) BW = CCT (outcome-specific optimal bandwidth); (2) BW = IK; (3) BW = +/-0.8; (4) BW = +/-0.6; (5) BW = +/-0.4.

Female: (1) 0.143* [0.077], N = 3,270; (2) 0.08* [0.048], N = 10,022; (3) 0.079* [0.047], N = 10,164; (4) 0.059 [0.057], N = 7,205; (5) 0.079 [0.071], N = 4,294
Black: (1) 0.02 [0.035], N = 6,712; (2) 0.023 [0.030], N = 9,277; (3) 0.023 [0.028], N = 10,164; (4) 0.02 [0.033], N = 7,205; (5) 0.041 [0.044], N = 4,294
Hispanic: (1) 0.067 [0.042], N = 6,276; (2) 0.056 [0.037], N = 7,518; (3) 0.039 [0.034], N = 10,164; (4) 0.053 [0.039], N = 7,205; (5) 0.057 [0.047], N = 4,294
Asian: (1) -0.053 [0.036], N = 6,712; (2) -0.041 [0.034], N = 7,205; (3) -0.019 [0.030], N = 10,164; (4) -0.041 [0.034], N = 7,205; (5) -0.031 [0.049], N = 4,294
ACT composite: (1) -1.148 [0.748], N = 2,757; (2) -0.112 [0.529], N = 7,185; (3) -0.101 [0.162], N = 10,157; (4) -0.101 [0.195], N = 7,203; (5) -0.671 [0.265], N = 4,293
ACT English: (1) -1.108 [0.748], N = 3,118; (2) -0.273 [0.581], N = 4,664; (3) 0.184 [0.398], N = 10,152; (4) 0.107 [0.490], N = 7,198; (5) -0.561 [0.611], N = 4,290
ACT math: (1) -1.450 [1.071], N = 2,894; (2) -0.420 [0.743], N = 8,703; (3) -0.370 [0.665], N = 10,152; (4) -0.301 [0.783], N = 7,198; (5) -0.986 [0.959], N = 4,290
ACT reading: (1) -0.260 [0.550], N = 4,001; (2) 0.0822 [0.378], N = 9,190; (3) 0.0560 [0.363], N = 9,916; (4) 0.0387 [0.446], N = 7,031; (5) -0.315 [0.525], N = 4,174
ACT science: (1) -1.051 [0.669], N = 2,815; (2) -0.164 [0.526], N = 7,574; (3) -0.237 [0.427], N = 9,917; (4) -0.239 [0.515], N = 7,032; (5) -0.866 [0.613], N = 4,175
Credits attempted: (1) 0.0397 [0.531], N = 2,441; (2) -0.156 [0.364], N = 5,544; (3) 0.0335 [0.242], N = 10,164; (4) -0.0342 [0.310], N = 7,205; (5) 0.0312 [0.395], N = 4,294
Liberal arts and science: (1) 0.069 [0.108]; (2) 0.064 [0.0691]; (3) 0.061 [0.068]; (4) 0.057 [0.085]; (5) 0.058 [0.109]
Observations: (1) 3,908; (2) 10,022; (3) 10,164; (4) 7,205; (5) 4,294

Notes: This table presents estimates on the below-cutoff indicator using the listed predetermined covariates as the dependent variable. Column (1) uses observations within the optimal CCT bandwidth, which differs across outcomes. Column (2) uses the IK optimal bandwidth. Columns (3), (4), and (5) use fixed bandwidths of 0.8, 0.6, and 0.4 grade points around the cutoff, respectively. All standard errors are clustered on the running variable. * p < .1.

TABLE 3 Validity of the RD Design--Predicted Outcomes

Each cell reports the estimate on the below-cutoff indicator, the clustered standard error in brackets, and the number of observations. Columns as in Table 2.

Enrollment in second term: (1) -0.012 [0.009], N = 5,056; (2) -0.005 [0.008], N = 6,853; (3) -0.004 [0.0063], N = 9,916; (4) -0.006 [0.007], N = 7,031; (5) -0.01 [0.010], N = 4,174
Change in GPA: (1) -0.010 [0.023], N = 2,376; (2) -0.006 [0.015], N = 6,107; (3) 0.001 [0.011], N = 9,916; (4) -0.002 [0.0141], N = 7,031; (5) 0.002 [0.017], N = 4,174
Predicted credits attempted second term: (1) -0.057 [0.119], N = 4,002; (2) -0.043 [0.088], N = 6,858; (3) -0.009 [0.072], N = 9,916; (4) -0.049 [0.086], N = 7,031; (5) -0.047 [0.113], N = 4,174
Probability of withdrawal: (1) 0.007 [0.005], N = 3,572; (2) 0.002 [0.003], N = 8,504; (3) 0.00217 [0.003], N = 9,916; (4) 0.00404 [0.00345], N = 7,031; (5) 0.005 [0.005], N = 4,174
Probability of more than 14 credits: (1) -0.019 [0.027], N = 3,573; (2) -0.02 [0.020], N = 6,107; (3) -0.005 [0.016], N = 9,916; (4) -0.0137 [0.0193], N = 7,031; (5) -0.01 [0.0251], N = 4,174
Probability of 200-level course: (1) -0.061 [0.054], N = 3,176; (2) -0.017 [0.0422], N = 6,375; (3) -0.005 [0.033], N = 9,916; (4) -0.0108 [0.0391], N = 7,031; (5) -0.023 [0.051], N = 4,174
Probability of math/science course: (1) -0.011 [0.023], N = 4,868; (2) -0.004 [0.018], N = 9,312; (3) -0.004 [0.017], N = 9,916; (4) -0.00604 [0.0197], N = 7,031; (5) -0.004 [0.024], N = 4,174
Course discipline's average grades: (1) 0.0129 [0.015], N = 4,174; (2) 0.0161 [0.015], N = 4,002; (3) 0.0004 [0.010], N = 9,916; (4) 0.00251 [0.0121], N = 7,031; (5) 0.013 [0.015], N = 4,174
Predicted enrollment in third term: (1) -0.031 [0.020], N = 6,858; (2) -0.026* [0.015], N = 11,968; (3) -0.026 [0.017], N = 9,916; (4) -0.033* [0.0200], N = 7,031; (5) -0.02 [0.030], N = 4,174
4-year graduation: (1) -0.033 [0.028], N = 4,868; (2) -0.014 [0.022], N = 7,014; (3) -0.006 [0.0195], N = 9,916; (4) -0.013 [0.0224], N = 7,031; (5) -0.023 [0.030], N = 4,174
6-year graduation: (1) -0.017 [0.023], N = 5,401; (2) -0.013 [0.024], N = 4,427; (3) -0.007 [0.017], N = 9,916; (4) -0.0114 [0.0193], N = 7,031; (5) -0.018 [0.025], N = 4,174

Notes: This table presents estimates on the below-cutoff indicator from local-linear specifications that use predicted outcomes as the dependent variable. Column (1) uses observations within the optimal CCT bandwidth; the number of observations therefore changes with the outcome. Column (2) uses the IK optimal bandwidth. Columns (3), (4), and (5) use observations within fixed bandwidths of 0.8, 0.6, and 0.4 grade points of the cutoff, respectively. All standard errors are clustered on the running variable. * p < .1.

TABLE 4 Evidence on Heaping

Each cell reports the estimate with the standard error in brackets. Columns: (1) BW = +/-0.8; (2) BW = +/-0.6; (3) BW = +/-0.4.

Female: (1) -0.05*** [0.016]; (2) -0.05*** [0.017]; (3) -0.05*** [0.017]
Black: (1) 0.02** [0.011]; (2) 0.02** [0.011]; (3) 0.02** [0.012]
Hispanic: (1) 0.05*** [0.014]; (2) 0.05*** [0.015]; (3) 0.06*** [0.015]
Asian: (1) -0.015 [0.014]; (2) -0.016 [0.014]; (3) -0.023 [0.015]
Resident: (1) -0.01 [0.010]; (2) -0.01 [0.010]; (3) -0.01 [0.010]
Credits attempted: (1) -0.398*** [0.050]; (2) -0.393*** [0.051]; (3) -0.442*** [0.052]
ACT math: (1) -0.69*** [0.138]; (2) -0.68*** [0.141]; (3) -0.81*** [0.148]
ACT reading: (1) -0.92*** [0.154]; (2) -0.93*** [0.156]; (3) -1.01*** [0.162]
ACT science: (1) -0.39*** [0.114]; (2) -0.39*** [0.116]; (3) -0.46*** [0.121]
ACT English: (1) -0.95*** [0.134]; (2) -0.96*** [0.137]; (3) -1.07*** [0.141]
LAS: (1) -0.06*** [0.015]; (2) -0.05*** [0.015]; (3) -0.06*** [0.016]
Observations: (1) 11,216; (2) 8,257; (3) 5,346

Notes: This table presents estimates testing for significant differences in the covariates on and around the locations of heaps in the running variable, as proposed by Barreca et al. (2015). Columns (1), (2), and (3) use fixed bandwidths of 0.8, 0.6, and 0.4 grade points around the cutoff, respectively. ** p < .05; *** p < .01.

TABLE 5 Effect of Probation on Second Term Enrollment and GPA

Each cell reports the estimate with the standard error in brackets, the number of observations, and the CCT optimal bandwidth. All specifications include controls.

Second term enrollment:
  Overall -0.010 [0.021], N = 6,551, BW = 0.548; Women -0.036 [0.024], N = 3,942, BW = 0.6140; Men 0.01 [0.029], N = 3,056, BW = 0.5494; Underrepresented minority -0.04 [0.029], N = 2,640, BW = 0.5991; Non-minority -0.012 [0.020], N = 5,816, BW = 0.7353; High cognitive -0.001 [0.026], N = 3,680, BW = 0.7436; Low cognitive -0.03 [0.023], N = 3,720, BW = 0.5261

Change in GPA from first to second term:
  Overall 0.203*** [0.061], N = 10,578, BW = 0.9153; Women 0.237** [0.068], N = 3,248, BW = 0.6001; Men 0.123 [0.115], N = 2,792, BW = 0.5488; Underrepresented minority 0.181** [0.089], N = 3,272, BW = 0.754; Non-minority 0.228*** [0.065], N = 6,323, BW = 0.8611; High cognitive 0.136 [0.090], N = 2,933, BW = 0.6615; Low cognitive 0.223*** [0.072], N = 5,388, BW = 0.7707

Note: RD estimates using the donut RD method and the optimal CCT bandwidths discussed in the main text. ** p < .05; *** p < .01.

TABLE 6 Effect of Probation on Strategic Course-Taking Behavior in Second Term

Cell format as in Table 5; all specifications include controls.

Panel A: Attempt 15 or more credits
  Overall -0.059** [0.028], N = 7,914, BW = 0.6914; Women -0.034 [0.035], N = 4,689, BW = 0.7576; Men -0.063 [0.045], N = 3,438, BW = 0.6597; Underrepresented minority 0.003 [0.054], N = 2,117, BW = 0.5312; Non-minority -0.088** [0.036], N = 4,899, BW = 0.6705; High cognitive -0.111** [0.044], N = 3,289, BW = 0.7242; Low cognitive 0.01 [0.043], N = 3,594, BW = 0.5554

Panel B: Credits attempted
  Overall -0.164** [0.078], N = 8,405, BW = 0.8097; Women -0.10 [0.094], N = 4,542, BW = 0.8204; Men -0.185 [0.132], N = 4,523, BW = 0.8453; Underrepresented minority -0.085 [0.128], N = 3,317, BW = 0.7835; Non-minority -0.185* [0.110], N = 5,655, BW = 0.7832; High cognitive -0.308* [0.148], N = 3,578, BW = 0.7760; Low cognitive -0.061 [0.103], N = 5,393, BW = 0.7843

Panel C: Withdrawal
  Overall 0.028 [0.019], N = 8,972, BW = 0.7788; Women 0.035* [0.021], N = 5,369, BW = 0.8719; Men 0.013 [0.030], N = 3,749, BW = 0.6976; Underrepresented minority -0.008 [0.036], N = 2,682, BW = 0.6449; Non-minority 0.064** [0.024], N = 4,375, BW = 0.6243; High cognitive 0.046 [0.037], N = 2,378, BW = 0.5525; Low cognitive 0.036 [0.027], N = 4,758, BW = 0.6921

Panel D: 200-level course
  Overall -0.033 [0.026], N = 8,299, BW = 0.7204; Women -0.011 [0.029], N = 3,725, BW = 0.6349; Men -0.055 [0.039], N = 4,748, BW = 0.8668; Underrepresented minority -0.108** [0.051], N = 1,318, BW = 0.3649; Non-minority -0.057* [0.029], N = 5,932, BW = 0.8149; High cognitive -0.049 [0.035], N = 4,370, BW = 0.9169; Low cognitive -0.032 [0.030], N = 4,713, BW = 0.6869

Panel E: Probability of math/science course
  Overall -0.034 [0.027], N = 5,560, BW = 0.5427; Women 0.025 [0.036], N = 2,982, BW = 0.5307; Men -0.13 [0.034], N = 2,979, BW = 0.6265; Underrepresented minority 0.075 [0.051], N = 1,388, BW = 0.4190; Non-minority 0.004 [0.021], N = 6,078, BW = 0.9131; High cognitive -0.018 [0.029], N = 3,984, BW = 0.8546; Low cognitive 0.034 [0.035], N = 3,376, BW = 0.5138

Panel F: Average grade in course's discipline
  Overall 0.013 [0.013], N = 5,059, BW = 0.6463; Women 0.012 [0.015], N = 2,926, BW = 0.6228; Men 0.015 [0.019], N = 1,954, BW = 0.6482; Underrepresented minority 0.013 [0.019], N = 1,703, BW = 0.5952; Non-minority 0.013 [0.014], N = 3,763, BW = 0.7501; High cognitive 0.033 [0.021], N = 1,842, BW = 0.6786; Low cognitive -0.000 [0.014], N = 3,324, BW = 0.6452

Note: RD estimates using the donut RD method and the optimal CCT bandwidths discussed in the main text. * p < .1; ** p < .05.

TABLE 7 Effect of Probation on Longer-Term Outcomes

Cell format as in Table 5; all specifications include controls.

Panel A: Enrollment in third term
  Overall -0.047* [0.025], N = 10,220, BW = 0.8136; Women -0.058* [0.034], N = 5,201, BW = 0.7796; Men -0.029 [0.032], N = 5,266, BW = 0.8978; Underrepresented minority -0.074** [0.037], N = 3,640, BW = 0.299; Non-minority -0.021 [0.023], N = 7,108, BW = 0.888; High cognitive -0.036 [0.034], N = 4,773, BW = 0.9276; Low cognitive -0.067** [0.031], N = 5,632, BW = 0.7458

Panel B: 4-year graduation
  Overall -0.033 [0.028], N = 8,442, BW = 1.0193; Women -0.032 [0.032], N = 3,771, BW = 0.992; Men -0.021 [0.031], N = 3,874, BW = 1.0455; Underrepresented minority -0.022 [0.036], N = 2,225, BW = 0.8686; Non-minority -0.004 [0.033], N = 4,142, BW = 0.8611; High cognitive 0.015 [0.038], N = 3,509, BW = 1.0962; Low cognitive -0.07** [0.028], N = 5,084, BW = 1.0143

Panel C: 6-year graduation
  Overall -0.014 [0.036], N = 5,569, BW = 1.050; Women -0.002 [0.059], N = 1,615, BW = 0.631; Men -0.056 [0.073], N = 1,213, BW = 0.5797; Underrepresented minority -0.07 [0.053], N = 1,433, BW = 0.8667; Non-minority 0.012 [0.042], N = 3,055, BW = 0.9921; High cognitive 0.046 [0.059], N = 1,698, BW = 0.922; Low cognitive -0.043 [0.044], N = 2,820, BW = 0.9506

Note: RD estimates using the donut RD method and the optimal CCT bandwidths discussed in the main text. * p < .1; ** p < .05.
Authors: Casey, Marcus D.; Cline, Jeffrey; Ost, Ben; Qureshi, Javaeria A.
Date: July 1, 2018