
The effect of high school JROTC on student achievement, educational attainment, and enlistment.

1. Introduction

The Junior Reserve Officers' Training Corps (JROTC) is a program for high school students funded jointly by local school districts and the federal government. The national program was established in 1916, although individual programs date as far back as 1874. The program remained small until a major expansion was launched in the mid-1990s. By 2007 the JROTC program enrolled roughly 525,000 students in 3400 high schools. (1)

The JROTC program is of interest to policy makers for several reasons. First, although it has elements of a military preparation program, its goals are multidimensional and include improving academic outcomes for high school students. (2) Understanding the program's effects on student achievement is especially important given the heavy enrollment of at-risk students: Nearly 40% of the high schools that offer JROTC are located in inner-city areas, and about one-half of enrollees are minorities. Another indicator of program diversity is that about 40% of enrollees are female. In addition, the program accepts many students who could not qualify for military enlistment (Coumbe, Kotakis, and Gammell 2008). These enrollment patterns in part explain why most students who participate in JROTC do not enter the military (about 70% in our data). (3) From a social cost-benefit perspective, it is important to understand the full array of program impacts, including effects on both cognitive and noncognitive outcomes as well as on employment in the military.

A broader public policy issue revolves around the value of offering JROTC in public schools, which has been the subject of ongoing debate. Proponents argue that the program improves both the cognitive and noncognitive skills of participants. President George H. W. Bush described JROTC as a "... program that boosts high school completion rates, reduces drug use, and raises self-esteem..." (cited in Corbett and Coumbe 2001, p. 40). JROTC includes elements that are similar to better-known high school initiatives, such as career academies and school-to-work (STW) programs. Some JROTC activities are comparable with other extracurricular high school activities, such as band, team sports, or school clubs, and many participants engage in community service after school and on weekends. Critics often oppose JROTC in public schools for philosophical reasons but also on the grounds that there is no evidence that the program improves academic achievement (Lutz and Bartlett 1995).

The opponents are correct about the lack of research on JROTC. To date no analysis has attempted to assess the wider effects of the program using nationally representative data. This study evaluates the impacts of JROTC on a broad list of student outcomes, including in-school and post-school performance, based on data from High School and Beyond (HSB) and the National Educational Longitudinal Study (NELS). We obtain estimates using a variety of techniques to control for the selection of individuals and schools into the program. Overall, we find mixed program effects on educational outcomes. While JROTC participants in general have poorer academic performance, the program appears to reduce dropout rates and improve graduation rates for black participants and self-esteem scores for female participants. We also find strong enlistment effects across all participants.

The article is structured as follows. Section 2 describes the basic features of JROTC, section 3 reviews prior literature on the effects of JROTC and related educational initiatives, section 4 presents the data, and section 5 discusses the estimation and selection issues. Section 6 discusses the baseline ordinary least squares (OLS) and probit models, section 7 presents school fixed effects estimates, and section 8 discusses the two-stage matching estimates. Section 9 concludes.

2. The Program (4)

Prior to 1964 JROTC was a relatively small program operated by the Army. The ROTC Vitalization Act of 1964 expanded the program and extended it to all four military services. The goals of the program at its inception were to instill citizenship values and disseminate information about military careers. After the introduction of the All Volunteer Force in 1973, the program incorporated recruiting incentives by offering advanced military pay grades to those who completed at least 2 years of JROTC. Currently, Congress has capped the number of JROTC units, and several hundred high schools are on a waiting list for a new unit (Coumbe, Kotakis, and Gammell 2008). An incentive for participation of local schools is that the federal government pays a portion of the program costs. In 2005, federal spending on JROTC was about $239 million (USDOD 2008). According to estimates, federal subsidies cover about 40% of program costs (Denver Public Schools 1996). Based on these figures, total JROTC spending at all levels would be about $730 million annually. The multidimensional nature of the program, which combines elective credit with extracurricular activities, helps supplement the standard classroom and extracurricular offerings of local schools. The operation of each unit varies largely by school, military branch, and JROTC instructor. This induces a certain degree of randomness in the way that students enter the program.

Currently, little is known about the program at the national level and even less is known about the careers of participants who do not join the military. Program administrators do not track the careers or educational outcomes of participants. The only participants that can be tracked through administrative data sources are those who report JROTC participation upon enlistment (about 30% of all students who ever participate in the program). However, analyzing the effect of the program on all participants is important for a comprehensive program assessment. In addition, program effects on non-enlistees have implications for local youth labor markets, especially in inner cities and in the South, where the majority of JROTC units are located.

3. Related Initiatives

Educational initiatives with some elements comparable to JROTC are those that assist the transition from school to employment. These include career academies and federally funded STW programs--reforms that stress both academic and vocational curricula and that establish formal links with employers. Evidence on the effects of these educational initiatives has been mixed. Attending career academies had no effect on high school completion or postsecondary education, but male attendees experienced higher earnings than non-attendees eight years after high school (Kemple 2008). Similarly, Neumark and Rothstein (2003) find that only two of six STW activities (school enterprises and job shadowing) increased college enrollment, whereas one activity (Tech Prep) reduced college enrollment. With respect to labor market outcomes, co-op education, school enterprises, and internships increased employment, especially for at-risk males (Neumark and Rothstein 2005).

Indirect evidence on the effect of JROTC on academic performance can be gleaned from a pilot program created in 1992 by the Departments of Education and Defense that combined career academies with required JROTC participation. (5) Elliott, Hanser, and Gilroy (2002) find few differences between students in JROTC Partnership Academies and those attending magnet schools or other career academies. However, JROTC Partnership Academy students had better attendance, grades, and graduation rates than students in a general academic track and not in a magnet school or other career academy. Also, outcomes were better for JROTC Partnership Academy students than for students in "regular" JROTC. Finally, regular JROTC students performed on par with general track students not in a magnet school or in a career academy, suggesting that the career academy component, rather than the JROTC component, accounted for most of the success of the combined JROTC Partnership Academies.

Other analyses of the regular JROTC program have consisted primarily of case studies. One such study compared student outcomes in inner-city schools in El Paso, Texas, and Chicago, Illinois (Taylor 1999). In Chicago, JROTC students performed no better in terms of attendance, grades, or graduation rates, whereas in El Paso they had fewer disciplinary problems and better attendance but lower test scores and college attendance. The analyses in these case studies do not attempt to estimate causal effects or consider the selection of students and schools into the program.

4. The Data

The HSB and NELS surveys are the only data sets that provide national-level participation rates for JROTC. (6) In addition, these surveys span two decades, allowing for an assessment of changes in program impacts over time. This is important in the case of JROTC because the program expanded substantially in the 1990s.

From the HSB survey we use the sophomore component, which follows a representative sample of 10th graders from 1980 to 1992. About 14,825 sophomores randomly drawn from the original sample were re-interviewed in 1982, 1984, 1986, and 1992. The NELS surveyed a sample of eighth graders in the spring of 1988 from 1052 schools. We focus on the subsample of 12,144 students who were re-interviewed in 1990, 1992, 1994, and 2000. Both surveys contain detailed information on student backgrounds, academic performance, and post-high school experiences. In addition, both surveys collected information from school officials on school characteristics such as enrollment, educational and special programs, dropout rates, and racial composition. We merged data from both surveys with information from the High School Transcript Survey to obtain information on courses completed and academic achievement. Transcript data were available for 13,024 HSB sophomores and 10,310 NELS students. Our empirical analysis restricts the samples to students in public schools, resulting in samples of 10,270 and 8634 for HSB and NELS, respectively.

We use the base year in each survey (1980 for HSB and 1988 for NELS) to observe the characteristics of students and their families. We use transcripts to identify JROTC participants as those who completed at least one JROTC class. To evaluate academic performance, we focus on standardized test scores, graduation rates, and dropout rates. (7) Both surveys administered cognitive tests in reading, math, civics, and science. (8) We also look at noncognitive outcomes, such as disciplinary problems (suspensions, expulsions, trouble with the law) and self-esteem (measured along the Rosenberg Self-Esteem Scale). (9) To evaluate postsecondary outcomes, we focus on college enrollment, degree completion, military enlistment, and civilian employment. Both surveys follow and test high school dropouts and also obtain transcripts for those who attended high school for at least one term. (10)

Table 1 presents weighted summary statistics for both survey samples. The data show that JROTC enrollees are more likely to be minority males from lower-income families with less-educated parents and are more likely to live in single-parent households. They also attend predominantly urban schools with high minority enrollments located in the South. These attributes suggest that JROTC enrollees are more likely to be at-risk students. (11) It also appears that participation is correlated with having a parent in the Armed Forces.

The composition of participants displays some change over time. In HSB a higher percentage of participants are black (39%), whereas in NELS Hispanics and blacks are similarly represented (24% and 27%, respectively). Consistent with rising education levels nationwide, NELS students appear to have better-educated parents than HSB students. However, the parents of JROTC students appear to have less education than the parents of other students. Apart from changes due to overall demographic trends, the data suggest that the program expansion in the 1990s affected the size of the program but not the composition of participants.

JROTC members have significantly lower test scores and higher dropout rates than non-participants. While JROTC students have lower college enrollment rates than their peers, their military enlistment rates are much higher. This, however, may not be entirely due to the program because in 10th grade JROTC participants report a preference for a military career at a rate several times that of non-participants.

5. Estimation and Selection Issues

Let an outcome y for student i in school j be determined in the following general form:

y_ij = m_0(X_i, S_j) + δ(X_i, S_j) JROTC_i + u_ij, (1)

where u_ij = a_i + s_j + e_ij, and m_0(·) and δ(·) are two general functions of the observable individual and school characteristics. X_i includes individual characteristics, and S_j includes school characteristics. This model assumes that y_ij depends on a function of observable characteristics for non-participants, the program effect on students with characteristics (X_i, S_j), and an error term. The error is composed of an individual fixed effect a_i, a school fixed effect s_j, and a random component e_ij.

Estimating this model via OLS assumes that m_0(·) is linear in the parameters and is correctly specified, that X_i and S_j are exogenous, and that program participation is uncorrelated with the composite error. Bias in OLS estimates could come from both observables and unobservables. If the treatment and control groups have different background characteristics, OLS would extrapolate too much because it uses nonparticipant outcomes to approximate participant outcomes under no treatment. In particular, OLS would obtain treatment effects via

y_ij = β X_i + γ S_j + δ JROTC_i + a_i + s_j + e_ij, (2)

where δ represents the program effect, β represents the effect of individual characteristics, and γ represents the effect of school-level characteristics on student outcomes.
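As a concrete illustration, the baseline specification in Equation 2 can be sketched on simulated data. All variable names and parameter values below are illustrative stand-ins, not drawn from the HSB/NELS files:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated stand-ins for X_i (individual) and S_j (school) controls
ability = rng.normal(size=n)              # base-year test score proxy
minority_share = rng.uniform(0, 1, n)     # school minority enrollment share
jrotc = rng.binomial(1, 0.15, n)          # JROTC participation dummy

# Outcome generated with a known program effect delta = 2.0
y = 1.0 + 0.5 * ability - 0.3 * minority_share + 2.0 * jrotc + rng.normal(size=n)

# Equation 2 as an OLS regression of y on a constant, X_i, S_j, and JROTC_i
X = np.column_stack([np.ones(n), ability, minority_share, jrotc])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
delta_hat = coef[3]                       # estimated program effect
print(round(delta_hat, 2))
```

For a binary outcome the same right-hand side would be fit as a probit; the mechanics of including the individual controls, school controls, and the JROTC dummy are identical.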

OLS is also prone to bias from unobservables, in our case the school and individual fixed effects. Although JROTC students appear more at-risk in their observable characteristics (suggesting a negative bias), they may be more motivated than the average at-risk student by virtue of their enrolling in a program that aims to improve career outcomes (suggesting an upward bias). Similarly, JROTC schools may be located in inner cities and enroll many at-risk students, but they may be more proactive than similar schools that do not apply for federally-funded programs. A priori, it is not clear which effect dominates. Our analysis aims to recover the program effect on the average JROTC student enrolled in an average JROTC school.

One approach to mitigate bias from unobservables is to include a rich set of control variables in OLS regressions. We assume that a_i is composed mostly of components related to ability, motivation, and a taste for the military. We include base-year test scores to proxy for ability, a dummy variable for whether the student repeated a grade in elementary school as a proxy for motivation, and a dummy variable for whether the student indicated interest in a military career as a proxy for military taste.

In addition to the individual fixed effect, another source of bias is s_j, which includes unobservable school characteristics that may be correlated both with the school's decision to offer JROTC (and therefore student participation) and with academic outcomes. We believe that school heterogeneity is more problematic in this setting because student participation depends overwhelmingly on whether the school offers JROTC and because little is known about how schools obtain a unit. (12) To mitigate this source of bias in OLS regressions, we include several school controls that are correlated with the presence of JROTC in the school: student body demographics (percentage minority enrollment), location (urban, suburban, or rural), the percentage of graduating students who join the military, and the percentage of graduating students who enroll in college. One problem, however, is that these variables may be endogenous because they are contemporaneous with JROTC participation. The baseline OLS and probit models ignore this source of endogeneity.

Because the school characteristics included in s_j vary by school but not by individual, we can remove the bias from the nonrandom selection of schools by obtaining school fixed effects estimates. This strategy nets out all observed and unobserved school attributes that may be correlated with both student outcomes and the presence of JROTC. For the continuous outcomes (cognitive test scores and self-esteem scores), the school fixed effects method estimates the program effect as follows:

y_ij = β X_i + δ JROTC_i + η_ij, (3)

where η_ij = a_i + (e_ij − (1/n_j) Σ_i e_ij), the sum runs over the sampled students in school j, and n_j is the number of sampled students in school j.

The fixed effects transformation cancels both observable (S_j) and unobservable (s_j) school characteristics. Equation 3 is estimated via OLS with pooled observations from all schools.
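A minimal sketch of the within-school transformation behind Equation 3 on simulated data. The unobserved school effect s_j is drawn explicitly here so that demeaning can be seen to cancel it; all names and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_schools, per_school = 40, 50
school = np.repeat(np.arange(n_schools), per_school)
n = n_schools * per_school

jrotc = rng.binomial(1, 0.2, n).astype(float)    # participation dummy
s_j = rng.normal(scale=2.0, size=n_schools)      # unobserved school effect

# Outcome with true program effect delta = 1.5 plus the school effect s_j
y = 1.5 * jrotc + s_j[school] + rng.normal(size=n)

def demean(v, g):
    """Subtract the school mean from each observation (the within transform)."""
    means = np.bincount(g, weights=v) / np.bincount(g)
    return v - means[g]

# Demeaning cancels s_j (and any school-level S_j); OLS on the demeaned
# data is the school fixed effects estimator of delta
y_w, d_w = demean(y, school), demean(jrotc, school)
delta_fe = (d_w @ y_w) / (d_w @ d_w)
print(round(delta_fe, 2))
```

Because both variables are demeaned within school, the no-intercept slope above is numerically identical to including a full set of school dummies in the regression.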

For the binary outcomes, there is no equivalent way to difference out the unobserved s_j. In addition, estimating s_j alongside β and δ leads to inconsistent estimates of the parameters (the incidental parameters problem). However, the conditional logit model allows for estimation of program effects without any assumptions on the relationship between s_j and the right-side variables (the extreme being the assumption of independence, as in simple OLS estimation). Absorbing the individual fixed effect a_i into the random error component (ζ_ij = a_i + e_ij) and focusing only on the s_j component, we assume that

y_ij = 1{β X_i + γ S_j + δ JROTC_i + s_j + ζ_ij > 0}, i = 1, ..., N, j = 1, ..., J.

If we further assume that ζ_ij follows a logistic distribution conditional on X_i, S_j, and s_j, then

P(y_ij = 1 | X_i, S_j, JROTC_i, s_j) = Λ(β X_i + γ S_j + δ JROTC_i + s_j) (4)

can be estimated via conditional maximum likelihood. This approach eliminates both the observed and unobserved school-specific effects, S_j and s_j.

Both linear and nonlinear fixed effects methods require that JROTC participation vary within schools. However, fixed effects logit further requires that the binary outcomes also vary within schools. For example, fixed effects logit does not draw any information from schools where all sampled students graduate or where no sampled student enlists. This further addresses the bias due to s_j because school unobservables could produce a never-enlist or always-graduate outcome. Both the linear and nonlinear fixed effects estimates effectively reduce the sample to schools that host a JROTC unit because very few students participate in JROTC in schools that do not offer the program. Because a school hosts only one JROTC unit (from one of the four services) and employs one or two instructors, school fixed effects estimates also do not depend on service-specific program variation or, for the most part, on instructor heterogeneity. Compared with baseline estimates, school fixed effects estimates are not contaminated by bias from the possible correlation of JROTC_i with s_j. However, the school fixed effects method still assumes that the model is correctly specified and may yield biased estimates if there is little overlap in the support of X_i for participants and non-participants.
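The sample restriction implied by fixed effects logit can be made concrete with a short sketch on simulated enlistment data (names and rates are illustrative): schools whose sampled students all share the same outcome are dropped before estimation because they carry no within-school information.

```python
import numpy as np

rng = np.random.default_rng(2)
n_schools, per_school = 30, 8
school = np.repeat(np.arange(n_schools), per_school)

# Simulated binary outcome (e.g., enlistment); with a 10% base rate some
# small school samples will contain no enlistees at all
enlist = rng.binomial(1, 0.1, n_schools * per_school)

# A school contributes to fixed effects logit only if its outcome varies
# internally: all-zero or all-one schools are uninformative
totals = np.bincount(school, weights=enlist, minlength=n_schools)
sizes = np.bincount(school, minlength=n_schools)
informative = (totals > 0) & (totals < sizes)

keep = informative[school]    # mask of students in informative schools
print(int(informative.sum()), "of", n_schools, "schools identify the effect")
```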

To relax some of these assumptions and also deal with the two levels of selection of both students and schools, we also apply nonparametric techniques. We follow a two-stage process to select the relevant treatment and control groups. In the first stage we identify schools that are similar to JROTC schools based on observed characteristics, while in the second stage we further restrict the control group to students within "JROTC-like" schools who are similar to JROTC students in their observable characteristics. If y_1 denotes a particular outcome for JROTC students, y_0 denotes the same outcome for non-JROTC students, w is a dummy that equals 1 if the student participates in JROTC, and v is a dummy indicating whether the school has a JROTC unit, the average treatment effect on the treated (ATT) is estimated as

ATT = E(y_1 − y_0 | w = 1, v = 1). (5)

The key assumption is that, conditional on both school and student observables, the distribution of the outcomes for participants is no different from the distribution of outcomes for non-participants (the conditional independence assumption) (Rosenbaum and Rubin 1983). Because nonparametric estimation requires no functional form assumptions and because the propensity of participating in the program can be estimated flexibly, we reduce the bias that could stem from assuming a linear and correctly specified function. More importantly, matching mitigates the bias stemming from the potentially nonoverlapping support of the observable characteristics X_i and S_j for participants and non-participants by choosing and reweighting observations within the common support region. (13)
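A stripped-down sketch of the matching logic behind Equation 5, using a single covariate and one-to-one nearest-neighbor matching with replacement (the article's actual procedure matches exactly on many school and student characteristics; everything below is simulated and illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000

x = rng.normal(size=n)                    # a single observed covariate
p = 1 / (1 + np.exp(-(x - 1)))            # participation more likely at high x
w = rng.binomial(1, p)                    # treatment (JROTC) indicator
y = 2.0 * w + x + rng.normal(size=n)      # true effect on the treated = 2.0

treated = np.where(w == 1)[0]
controls = np.where(w == 0)[0]

# For each treated student, take the control with the closest x (one-to-one
# nearest-neighbor matching with replacement) and average the differences
gaps = np.abs(x[treated][:, None] - x[controls][None, :])
matches = controls[gaps.argmin(axis=1)]
att = np.mean(y[treated] - y[matches])    # sample analogue of Equation 5
print(round(att, 2))
```

Because treated students sit disproportionately at high x, the raw treated-control mean difference would overstate the effect; matching on x removes that gap before differencing.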

Heckman, Ichimura, and Todd (1997; 1998); Heckman et al. (1998); and Diaz and Handa (2006) find that matching performs well in the presence of a rich set of control variables. Our two-stage sample selection method is similar in spirit to the sample restrictions made by Diaz and Handa when estimating the effect of a poverty program. They use experimental data to evaluate their matching outcomes and find that restricting the sample to eligible households substantially reduces bias compared with when all households are used in estimations. In a similar vein, by first matching schools, we restrict attention to schools that are likely to host JROTC and compare students within these schools.

A common shortcoming of all these solutions is that they assume to varying degrees that selection of students into JROTC depends on observables. In our case, this amounts to assuming that among students with similar observed at-risk characteristics, cognitive ability, and military tastes, JROTC participation is random. For the two-stage matching technique, we extend this assumption to schools by assuming that JROTC units are randomly distributed among schools with similar student body demographics, academic programs, and recruiting environments. Because selection of students into the program varies largely across schools and regions, it appears reasonable, given the richness of our data, to view participation as random after conditioning upon at-risk characteristics and military tastes. (14) Sensitivity checks indicated that this assumption is satisfied. (15)

6. Baseline Models

Table 2 presents simple OLS and probit estimates of program effects without addressing the aforementioned sources of bias. The results from the HSB sample are presented in column 1, while results from NELS are presented in column 2. All models include demographics (age, gender, and race or ethnicity), whether the student aspires to a military career, whether the student repeated a grade in elementary school, covariates for parents' income, education, and family structure (both natural parents, single mother, single father, other family structure), and school-level variables (region, suburban or rural location, whether the school program is vocational or academic, percentage minority enrollment, percentage of students enlisting, attending college, or dropping out). All post-high school outcomes include base-year test scores as a proxy for ability.

Although simple comparisons of means suggest that the academic performance of JROTC students is worse than that of their peers, Table 2 indicates that the performance of JROTC students is, for the most part, similar to that of non-participants. Test score differences persist when not controlling for base-year test scores (results not shown) but disappear after adding base-year test scores. Self-esteem scores appear slightly higher for JROTC participants in the HSB sample. The incidence of disciplinary problems also appears similar across treatment and comparison groups. JROTC students are much less likely to enroll in a two- or four-year college. To avoid the possibility that this finding is driven by their (likely contemporaneous) higher enlistment rates, we also examine the likelihood of obtaining a degree 8 to 10 years after high school. The estimates indicate that JROTC students are less likely to obtain a postsecondary degree.

The specifications above include all relevant controls. In results not shown, when we successively added the different categories of controls (demographics, family, and school-level variables), race and family background had the strongest effects on the program estimates. The effect of the school variables is also important, especially the school's minority composition and dropout rate. For the most part, JROTC participants exhibit poorer performance in regressions that include smaller subsets of controls, which suggests that disadvantaged family backgrounds and low-quality schools account for JROTC participants' seemingly weak achievement.

Military enlistment presents an exception to this pattern. In Table 2, which includes all controls, JROTC participants are 75-150% more likely to enlist. This effect increased by 9% when adding test scores as a proxy for ability, suggesting that lower-ability students may not meet minimum enlistment criteria, such as Armed Forces Qualification Test scores and minimum educational attainment. Similarly, when the indicator for disciplinary problems is added as a control, the enlistment effect again increases slightly, suggesting that behavioral problems also disqualify some who desire to enlist.

The enlistment effect also may be prone to taste endogeneity. Students may join JROTC because they intend to enlist and want either to take advantage of the advanced pay grade incentive or to experience the military without incurring a service obligation. The program may simply identify those who are more likely to enlist, rather than induce participants to enlist. This argument is supported by the finding that the enlistment effect is 17% higher when the military aspirations variable is omitted. To further explore this hypothesis, we also estimate the enlistment probability while controlling for whether a parent is serving in the Armed Forces, but we find that this has little effect on predicted enlistments. Finally, we include variables for state-level youth enlistment rates, which reduces the enlistment effect slightly (about 6% in NELS data). Overall, the enlistment effect appears to be robust to these concerns, although its magnitude is sensitive to the set of controls included in the regressions.

Because geographic placement of JROTC programs targets disadvantaged schools and at-risk students, we also analyze whether JROTC effects differ among minority groups. The results of models with race-JROTC interactions are summarized in Table 3, panel A. One of the most salient differences is that black JROTC members have lower dropout rates than white participants (by 9-24 percentage points) and higher graduation rates (by 11-17 percentage points). Interestingly, the enlistment probability is similar across racial groups. (16)

Does JROTC improve educational outcomes of minorities? NELS data indicate that among blacks, JROTC enrollees are less likely to drop out, are more likely to graduate, and are more likely to enlist. These positive program effects may be understated if black JROTC students are more at-risk than black non-participants, a hypothesis that is supported by a higher incidence of disciplinary problems among black JROTC students. When controlling for base-year disciplinary problems, however, their disciplinary issues in 12th grade are no different from other students.

We also investigate program effects by gender. For most outcomes (not shown), female participants appear similar to male participants. However, HSB data indicate that female participants have higher self-esteem scores than both their male counterparts and female nonparticipants. Although JROTC females are less likely to enlist compared with male participants, they are more likely to enlist than non-participant females (panel B of Table 3).

7. School Fixed Effects

As discussed in section 5, school fixed effects estimation reduces the sample to JROTC high schools. This method deals with the bias introduced from omitting relevant school controls (observable and unobservable) that are correlated with both academic outcomes and program participation. Table 4 presents the school fixed effects estimates. For continuous outcomes, the program effects are obtained via fixed effects OLS. For binary outcomes, average treatment effects are estimated using fixed effects logit, and Table 4 presents odds ratios from these estimations. With respect to test scores and disciplinary outcomes, fixed effects estimates appear similar to the baseline results. However, in the NELS sample, JROTC participants display a modest (1 point) test score gain after controlling for eighth grade test scores.

When investigating postsecondary outcomes, the school fixed effects estimates amplify the negative academic outcomes of JROTC participants, consistent with the hypothesis that JROTC schools are more proactive in seeking programs to assist their students. Omitting adequate school-level controls thus may have introduced an upward bias in the baseline estimates. HSB data indicate that JROTC students are more likely to drop out. Further, in both HSB and NELS, JROTC students are less likely to pursue postsecondary education. The fixed effects estimates also suggest that the enlistment effect may be even larger than previously estimated: JROTC participants are 2-5 times more likely to enlist than non-participants within the same school. If within a school JROTC attracts students interested in the military, the comparison group may include people who would most likely never enlist. A priori, the alternative could also be true--the presence of a JROTC unit in the school may create spillover effects that stimulate schoolwide enlistments. To investigate this issue, we compare the percentage of students expressing interest in a military career in the treatment and control groups. The gap in military aspirations between JROTC participants and non-participants is larger when calculated across all schools rather than within JROTC schools. This suggests that the larger enlistment effect obtained from fixed effects may be explained by program administrators selecting schools with higher enlistment propensities.

Compared with baseline estimates, school fixed effects estimation deals with bias induced by school-level unobservables. However, if within a school JROTC students are relatively more at-risk, their academic outcomes will appear poorer. The school fixed effects method does not address this possible lack of overlap in the support of X_i. In addition, the school fixed effects estimates are obtained parametrically and may be prone to misspecification bias.

8. Two-Stage Matching Estimates

JROTC effects derived from OLS and probit depend on the assumption that the outcomes of the control group approximate the outcomes JROTC students would have had if they had not enrolled in the program. Summary statistics suggest that this assumption is tenuous: The difference in averages for the two groups exceeds a quarter of a standard deviation for most variables. Although school fixed effects estimation reduces the gap in observable characteristics between control and treatment groups, it does so indirectly by limiting attention to JROTC schools. Within JROTC schools, however, the support of observable individual characteristics X_i may differ between control and treated students. Therefore, our next set of estimations includes only students from the control group who are similar to JROTC students in their background attributes.

We restrict the sample in two stages: First, we select schools that are similar to JROTC schools in terms of student body characteristics; second, within these schools we identify students who are similar to JROTC participants based on their background characteristics. This two-step method excludes from the control group students who are similar to JROTC students but are enrolled in schools that are unlikely to offer JROTC. Because schools that offer JROTC have varying degrees of at-risk characteristics, we also exclude schools that are unique in these respects (and, therefore, unlike non-JROTC schools). Because student academic outcomes also depend on school inputs, restricting the sample this way yields more comparable treatment and control groups.

To match schools we use the following variables: school program (vocational, academic, other); location (suburban, rural, urban); region; school size and its square; minority composition (percentage black, percentage Hispanic, percentage other race); percentage of graduating class attending college, dropping out, or enlisting; and percentage of student body classified as disadvantaged or in a reduced price or free lunch program. We perform an exact one-to-one match of the schools based on these characteristics and keep in the sample only JROTC and non-JROTC schools with similar observables. We believe that these controls contain the most important factors that explain why schools apply for JROTC and why program administrators accept them.

We then focus on the students who attend the matched schools. For this, we first perform a one-to-one exact match of students in the reduced school sample based on the following characteristics: individual and family characteristics, base-year test scores, whether the student ever repeated a grade in elementary school, taste for the military, and the aforementioned school characteristics. (17) Because this two-stage selection method and the exact matching in each stage reduce the sample size substantially, which may affect efficiency, we also match JROTC students to non-participants via stratification matching. This method weights the contribution of each student in the control group based on their similarity to the treated students. We remove students and schools that lie in either extreme of the quality distributions (that is, we focus on individuals and schools that fall in the common support area). As a result, this method estimates average treatment effects for students in average JROTC schools.
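The two-stage selection can be sketched in a few lines. This is an illustrative toy version of exact matching, not the paper's implementation: the data are invented, only two match fields are used per stage, and field names ("urban", "pct_minority_bin", "mil_taste", ...) are assumptions rather than HSB or NELS variables.

```python
# Stage 1 matches schools on observables; stage 2 matches students within
# the surviving schools. Exact matching keeps a control unit only if some
# treated unit has identical values on every match field.

schools = [
    {"id": 1, "jrotc": True,  "urban": True,  "pct_minority_bin": "high"},
    {"id": 2, "jrotc": False, "urban": True,  "pct_minority_bin": "high"},
    {"id": 3, "jrotc": False, "urban": False, "pct_minority_bin": "low"},
]
students = [
    {"school": 1, "jrotc": True,  "black": True,  "mil_taste": True},
    {"school": 2, "jrotc": False, "black": True,  "mil_taste": True},
    {"school": 2, "jrotc": False, "black": False, "mil_taste": False},
    {"school": 3, "jrotc": False, "black": True,  "mil_taste": True},
]

def key(d, fields):
    """Exact-match key: the tuple of a record's values on the match fields."""
    return tuple(d[f] for f in fields)

# Stage 1: keep only schools whose observables exactly match some JROTC school.
school_fields = ("urban", "pct_minority_bin")
jrotc_school_keys = {key(s, school_fields) for s in schools if s["jrotc"]}
matched_schools = {s["id"] for s in schools
                   if key(s, school_fields) in jrotc_school_keys}

# Stage 2: within matched schools, keep only control students whose background
# exactly matches some JROTC participant.
pool = [st for st in students if st["school"] in matched_schools]
student_fields = ("black", "mil_taste")
treated_keys = {key(st, student_fields) for st in pool if st["jrotc"]}
control = [st for st in pool
           if not st["jrotc"] and key(st, student_fields) in treated_keys]

print(sorted(matched_schools))  # [1, 2] -- school 3 is dropped at stage 1
print(len(control))             # 1 -- one comparable control student remains
```

Note that the at-risk student in school 3 is excluded even though her background matches a participant, because her school is unlike any JROTC school; this is precisely the exclusion the two-stage design is meant to achieve.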

Table 5 presents the two-stage matching estimates. The HSB data indicate that in-school performance of JROTC students is poorer than that of their peers. In particular, JROTC students are more likely to drop out, less likely to complete high school, and less likely to pursue or obtain postsecondary degrees. NELS data do not reveal any differences in dropout and graduation rates, but we find lower rates of postsecondary enrollment and degree attainment. Although the baseline estimates revealed no significant differences in graduation and dropout rates, the two-stage matching estimates indicate poorer academic outcomes for JROTC participants, a finding that reinforces the school fixed effects results.

Both samples indicate positive enlistment effects. In HSB, JROTC students are 113-150% more likely to enlist. In NELS, JROTC students are 214-271% more likely to enlist than comparable non-participants. The enlistment effect obtained via two-stage matching is more comparable to the simple probit estimate than to the fixed effects logit estimate. In fact, the fixed effects logit estimate is twice as large as the matching estimate. This suggests that the enlistment effect calculated within schools may overstate the effect calculated across schools, perhaps because some students in non-JROTC schools who enlist would have joined JROTC had the program been offered by the school. However, within schools that offer the program, those who do not join JROTC will most likely never enlist.

To investigate the quality of the match, Table 1A in the Appendix summarizes background attributes of students and schools pre- and post-matching. With both data sets, the matching technique appears to balance the observed characteristics of the control and treatment groups, most notably military aspirations, race, gender, and parents' education. Before matching, only 2-3% of the control group has an interest in a military career, compared with the 15-18% of the JROTC students. After matching, both control and treatment groups show a similar military propensity. Matching schools on their background characteristics also reduces differences between JROTC and non-JROTC schools. Most notably, while in NELS school minority enrollments are only 29% in non-JROTC schools versus 50% in JROTC schools, after matching both types of schools have similar minority enrollments.
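The balance check summarized in Table 1A amounts to comparing treated-control gaps in covariate means before and after matching. The sketch below uses invented counts chosen to echo the military-aspiration rates quoted above (roughly 15% of participants versus 2-3% of the unmatched control pool); it is illustrative only, not the paper's data.

```python
# Compare the treated-control gap in a binary covariate (military aspirations)
# in the unmatched and matched samples. A successful match drives the gap
# toward zero.

def gap(sample):
    """Difference in covariate means between treated and control students."""
    t = [s["mil"] for s in sample if s["jrotc"]]
    c = [s["mil"] for s in sample if not s["jrotc"]]
    return sum(t) / len(t) - sum(c) / len(c)

treated = [{"jrotc": True, "mil": 1}] * 3 + [{"jrotc": True, "mil": 0}] * 17

# Before matching: the full control pool, with a 2% military-aspiration rate.
pre = treated + [{"jrotc": False, "mil": 1}] * 2 \
              + [{"jrotc": False, "mil": 0}] * 98

# After matching: only controls that resemble participants are retained,
# so the control aspiration rate rises to the treated group's 15%.
post = treated + [{"jrotc": False, "mil": 1}] * 3 \
               + [{"jrotc": False, "mil": 0}] * 17

print(gap(pre), gap(post))  # the gap shrinks toward zero after matching
```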

Compared with the baseline estimates, the two-stage matching reduces bias stemming from the different background characteristics [X.sub.i] of the JROTC students and the characteristics [S.sub.j] of the schools they attend. The advantage of the two-stage matching over school fixed effects is that it reduces the sample to average JROTC schools, thus avoiding the possibility of a few unique schools driving our results. In addition, the school fixed effects method yields unbiased estimates only if all selection is based on school-level attributes, which may be a strong assumption. Overall, controlling for the various sources of bias reveals negative academic outcomes and stronger enlistment effects for JROTC. This suggests that the program may have a sorting effect that channels students with pre-existing military propensity toward military careers and away from academic pursuits.

9. Conclusions

This study estimates the impact of high school JROTC on students' academic achievement, postsecondary education, and military enlistment. Baseline estimates suggest that JROTC students have higher enlistment rates and lower postsecondary enrollment rates than their peers. School fixed effects and two-stage matching techniques generally confirm these results but also indicate that JROTC students from the HSB sample are less likely to complete high school. We find noteworthy differences in program effects for blacks and for females. Black JROTC students (about one-third of all participants) have lower dropout rates and higher graduation rates than both white JROTC participants and black non-participants. In addition, females in JROTC (about 40% of participants) display higher self-esteem scores than both female non-participants and male enrollees. The importance of this result is highlighted by recent research that links noncognitive skills, including self-esteem, with labor market success (Heckman, Stixrud, and Urzua 2006).

While the composition of participants does not change drastically from one survey to the next, program effects do differ. The more recent NELS data reveal no negative program effects on dropout or graduation rates. This could be due to two main factors. First, NELS contains true pre-participation controls because the survey starts in eighth grade. In contrast, HSB starts in sophomore year, so program effects are obtained from changes in performance between the 10th and 12th grades, rather than throughout high school. In addition, JROTC students in NELS may be more positively selected than in HSB. In the 1990s the military offered enhanced educational benefits to new recruits, which may have resulted in higher-ability recruits and, potentially, higher-ability JROTC participants. Finally, in the NELS period, the military rejected more applicants lacking the minimum education requirements for recruitment compared with the HSB years.

The limited academic effects of JROTC are not unexpected, since the program is more vocational and extracurricular than academic in nature. Any academic gains from JROTC are likely to be indirect, derived from gains in noncognitive skills and behavioral changes. However, the absence of program effects on academic outcomes could also reflect inadequate controls for the at-risk status of JROTC students. Notably, the estimation strategies aimed at balancing the control and treatment groups on observable characteristics amplified both the negative program effects on academic outcomes and the positive effects on enlistments. One possible explanation is that enlisting in the military and pursuing postsecondary education are mutually exclusive decisions, and JROTC affects student sorting into these two career paths. Evidence from NELS supports this claim, because in this sample we observe less severe effects of JROTC on academic outcomes. Changes in recruiting policies and enhanced educational benefits offered by the military may have blurred the line between these two career choices. Further support for the sorting hypothesis comes from research indicating that JROTC enlistees perform better in the military than other recruits, in terms of both job match (attrition) and performance (promotions) (Pema and Mehay 2009). Research on STW programs has shown that some components (Tech Prep, for example) have a similar sorting effect, resulting in better employment outcomes but poorer academic outcomes. The marginal enlistment effect of JROTC, however, far exceeds the employment effects of STW programs (Neumark and Rothstein 2006).

The study provides a starting point for research on a high school initiative that, despite its size and recent growth, to date has escaped analytical scrutiny. It is important to note that this study estimates treatment effects for average JROTC students in average JROTC schools. The estimation techniques eliminate from the sample the tails of both the school and student distributions. The effects of the program may be very different for marginal students and schools. Further research will be needed to confirm the effects obtained here, to expand the set of potential outcomes, and to weigh potential economic gains against program costs. Future policy decisions on the JROTC program, including its size, funding, and choice of high schools for locating new units, must await further evidence of program effectiveness and cost.
Appendix
Table 1A. Characteristics of JROTC Participants and Schools
in the Matched Samples

                               High School and Beyond

                          Original Sample   Matched Sample

                           Non-              Non-
                          JROTC   JROTC     JROTC   JROTC

Panel A: Characteristics of students

  Plan to enlist
    by age 30              0.03    0.18      0.15    0.17
  Female                   0.50    0.37      0.32    0.37
  Black                    0.12    0.39      0.32    0.36
  Hispanic                 0.22    0.22      0.20    0.23
  Parents completed
    high school            0.27    0.27      0.20    0.23
  Parents college
    graduates              0.24    0.15      0.19    0.13
  Single mother
    family                 0.16    0.24      0.22    0.24
  Bottom income
    category               0.07    0.10      0.05    0.09

Panel B: Characteristics of schools

  Urban                    0.13    0.45      0.37    0.47
  South                    0.23    0.55      0.55    0.57
  % minority
    students               0.20    0.44      0.35    0.45
  % students in
    reduced-price lunch    0.16    0.25      0.23    0.25
  % students
    who drop
    out                    0.09    0.13      0.12    0.14
  % students
    who go to
    college                0.44    0.43      0.45    0.44
  % students
    who enlist             0.04    0.05      0.05    0.05

                          National Educational Longitudinal Study

                          Original Sample Matched Sample

                           Non-            Non-
                          JROTC   JROTC   JROTC   JROTC

Panel A: Characteristics of students

  Plan to enlist
    by age 30              0.02    0.15    0.16    0.16
  Female                   0.53    0.44    0.47    0.46
  Black                    0.09    0.25    0.25    0.25
  Hispanic                 0.13    0.26    0.15    0.18
  Parents completed
    high school            0.22    0.21    0.24    0.22
  Parents college
    graduates              0.25    0.15    0.17    0.13
  Single mother
    family                 0.14    0.26    0.28    0.28
  Bottom income
    category               0.07    0.21    0.11    0.14

Panel B: Characteristics of schools

  Urban                    0.28    0.49    0.41    0.48
  South                    0.32    0.62    0.57    0.64
  % minority
    students               0.29    0.50    0.47    0.49
  % students in
    reduced-price lunch    0.22    0.31    0.25    0.28
  % students
    who drop
    out                    0.08    0.12    0.15    0.14
  % students
    who go to
    college                0.62    0.59    0.60    0.58
  % students
    who enlist             0.05    0.06    0.06    0.05

Panel A presents summary statistics for the entire HSB or NELS
sample before matching and after exact (one-to-one) matches of
JROTC with non-JROTC students. The summary statistics for schools
come from school-level data and are not directly comparable with
the summary statistics presented in Table 1. The school
characteristics in Table 1 come from student-level data and
may not be representative of the schools.

Received May 2008; accepted December 2008.

References


Ai, Chunrong, and Edward C. Norton. 2003. Interaction terms in logit and probit models. Economics Letters 80:123-29.

Altonji, Joseph G., Todd E. Elder, and Christopher R. Taber. 2005. Selection on observed and unobserved variables: Assessing the effectiveness of Catholic schools. Journal of Political Economy 113:151-84.

Bailey, Sandra S., Gary W. Hodak, Daniel J. Sheppard, and John E. Hassen. 1992. Benefits analysis of the Naval Junior Reserve Officers Training Corps, Technical Report 92-015. Orlando, FL: Naval Training Systems Center.

Corbett, John W., and Arthur T. Coumbe. 2001. JROTC: Recent trends and developments. Military Review 81:40-8.

Coumbe, Arthur T., and Lee S. Harford. 1996. U.S. Army Cadet Command: The ten year history. Fort Monroe, VA: U.S. Army Cadet Command.

Coumbe, Arthur T., Paul N. Kotakis, and W. Anne Gammell. 2008. History of the U.S. Army Cadet Command: Second ten years, 1996-2006. Fort Monroe, VA: U.S. Army Cadet Command.

Crawford, Alice, Gail Thomas, and Armando Estrada. 2004. Best practices at Junior Reserve Officers Training Corps units. Monterey, CA: Naval Postgraduate School.

Denver Public Schools. 1996. Junior Reserve Officers Training Corps (JROTC) program evaluation. Denver, CO: Denver Public Schools.

Diaz, Juan Jose, and Sudhanshu Handa. 2006. An assessment of propensity score matching as a nonexperimental impact estimator: Evidence from Mexico's PROGRESA program. Journal of Human Resources 41:319-45.

Dynarski, Mark, and Phillip Gleason. 1998. What have we learned from evaluations of federal dropout prevention programs?. Princeton, NJ: Mathematica Inc.

Elliott, Marc N., Lawrence M. Hanser, and Curtis L. Gilroy. 2002. Evidence of positive student outcomes in JROTC-career academies. Santa Monica, CA: Rand Corporation.

Evans, William N., and Robert M. Schwab. 1995. Finishing high school and starting college: Do Catholic schools make a difference? Quarterly Journal of Economics 110:941-74.

Glover, Carlos. 2002. Army JROTC. Unpublished briefing. Ft. Monroe, VA: U.S. Army Cadet Command.

Hanser, Lawrence M., and Abby E. Robyn. 2000. Implementing high school JROTC career academies. Santa Monica, CA: Rand Corporation.

Heckman, James J., Hidehiko Ichimura, Jeffrey Smith, and Petra Todd. 1998. Characterizing selection bias using experimental data. Econometrica 66:1017-98.

Heckman, James J., Hidehiko Ichimura, and Petra Todd. 1997. Matching as an econometric evaluation estimator: Evidence from evaluating a job training programme. Review of Economic Studies 64:605-54.

Heckman, James J., Hidehiko Ichimura, and Petra Todd. 1998. Matching as an econometric evaluation estimator. Review of Economic Studies 65:261-94.

Heckman, James J., Jora Stixrud, and Sergio Urzua. 2006. The effects of cognitive and noncognitive abilities on labor market outcomes and social behavior. NBER Working Paper No. 12006.

Kemple, J. J. 2008. Career academies: Long-term impacts on labor market outcomes, educational attainment, and transitions to adulthood. New York: Manpower Demonstration Research Corporation.

Lutz, Catherine, and Lesley Bartlett. 1995. Making soldiers in the public schools. Philadelphia, PA: American Friends Service Committee.

National Center for Education Statistics. 2006. The adult lives of at-risk students. Washington, DC: U.S. Department of Education.

Neumark, David, and Donna Rothstein. 2003. School-to-career programs and transitions to employment and higher education. NBER Working Paper No. 10060.

Neumark, David, and Donna Rothstein. 2005. Do school-to-work programs help the "forgotten half"? NBER Working Paper No. 11636.

Neumark, David, and Donna Rothstein. 2006. School-to-career programs and transitions to employment and higher education. Economics of Education Review 25:374-93.

Pema, Elda, and Stephen Mehay. 2009. What does occupation-related vocational education do? Evidence from an internal labor market. Paper presented at the European Association of Labour Economists Conference, Tallinn, Estonia.

Rosenbaum, Paul R., and Donald B. Rubin. 1983. The central role of the propensity score in observational studies of causal effects. Biometrika 70:41-55.

Taylor, William J. 1999. Junior Reserve Officers' Training Corps: Contributions to America's communities: Final report of the CSIS political-military studies project on the JROTC. Washington, DC: Center for Strategic and International Studies.

Thomas-Lester, Avis. 2005. Recruitment pressures draw scrutiny to JROTC. Washington Post. September 19, B1.

USDOD (Department of Defense). 2008. Department of Defense Budget Fiscal Year 2009. Military personnel programs and operation and maintenance programs. Washington, DC: U.S. Department of Defense.

Elda Pema * and Stephen Mehay ([dagger])

* Graduate School of Business and Public Policy, 555 Dyer Road, Naval Postgraduate School, Monterey, CA 93943, U.S.A.; E-mail, corresponding author.

([dagger]) Graduate School of Business and Public Policy, 555 Dyer Road, Naval Postgraduate School, Monterey, CA 93943, U.S.A.; E-mail

We thank the editor (Christopher Bollinger), David Neumark, Barry Hirsch, Alice Crawford, Linda Bailey, Dahlia Remler, Yu-Chu Shen, seminar participants at Baruch College/City University of New York and at the 2005 and 2008 Western Economic Association conferences, and three anonymous referees for helpful comments. Yee-Ling Ang and Janet Days provided research assistance.

(1) Program data are taken from Taylor (1999); Crawford, Thomas, and Estrada (2004); Thomas-Lester (2005); and Coumbe, Kotakis, and Gammell (2008).

(2) For example, the Army's mission statement for JROTC is to "provide the motivation and skills to remain drug free, to graduate from high school, and to become successful citizens" (Glover 2002, slide no. 5).

(3) JROTC participants do not incur any obligation to enter the military. Although surveys indicate that 30% of JROTC graduates intend to enlist (see Bailey et al. 1992; Taylor 1999), only a fraction of those actually enlist, and only a fraction of those who ever participate in JROTC complete the full four-year program.

(4) In this section we describe the basic features of JROTC that are relevant to our analysis. An in-depth program description is available in Coumbe, Kotakis, and Gammell (2008). Also, Crawford, Thomas, and Estrada (2004) provide a comprehensive program overview based on surveys and interviews with students, parents, instructors, and school administrators.

(5) The program, called "A Federal-Local Partnership for Serving At-Risk Youth," attempted to combine the strengths of JROTC in terms of discipline, leadership, and extracurricular activities with the career academy focus on work-based learning (Hanser and Robyn 2000).

(6) The National Longitudinal Survey of Youth 1997 (NLSY97) does not clearly identify JROTC students in high school.

(7) Our definition of dropouts follows that of the National Center for Education Statistics (NCES) and includes students who leave school regardless of whether they later return. We treat GED recipients as nongraduates.

(8) For HSB, we follow Evans and Schwab (1995) and focus on the sum of the "formula" score on the vocabulary, reading, and the first part of the mathematics test. For NELS, we combine test scores in reading and math.

(9) Our indicators of in-school performance are selected to reflect the published goals of both JROTC and local school district administrators (Bailey et al. 1992; Denver Public Schools 1996; Taylor 1999; Glover 2002).

(10) Many dropouts still have missing values for various important variables. This problem is more severe with NELS data. To avoid bias from omitting dropouts, we impute values of time-invariant variables using responses to identical questions in later follow-ups. In addition, we include dummies for missing observations for important controls. We do not impute values of dependent variables or test scores.

(11) For definitions of "at-risk" students, see Dynarski and Gleason (1998) and the National Center for Education Statistics (2006).

(12) Some of the criteria used by administrators to assign a JROTC unit include the demographic composition of the school and the area, recruiting potential, the geographic distribution of JROTC units, cost-effectiveness, and also the time that the school has spent on a waiting list for a unit (Coumbe and Harford 1996).

(13) The two-stage matching differs from a simple matching of students based on their individual and school characteristics because it excludes students who may be at risk, but who attend schools unlikely to host the program. It further excludes schools that are "too likely" to offer JROTC based on their characteristics. Matching students on both their individual and their schools' characteristics ensures that average school characteristics of sampled students are balanced. Given that we have school surveys, there is no need to rely on estimates of school characteristics based on averages from sampled students.

(14) We also applied instrumental variable (IV) methods to obtain program effects. We considered the following potential instruments for educational outcomes: the statewide enlistment rate, whether one parent is in the military, whether the student participated in extracurricular activities in eighth grade (NELS only), whether the school offered JROTC (HSB only), number of extracurricular clubs available at the high school (a proxy for choice of extracurricular activities), the student's height-to-weight ratio (a proxy for physical fitness), and percentage of minority enrollments in the school. For the enlistment outcome, we omitted from the list IVs clearly correlated with a predisposition for the military. Overall, the IV estimates produced inflated coefficients and standard errors (five times the size of the baseline estimates).

(15) Because NELS provides test scores for 1988, 1990, and 1992, we estimated program effects on test scores via (individual) fixed effects. The results were similar to the ones presented in the paper. We also followed Altonji, Elder, and Taber (2005) and estimated bounds on our JROTC effects by assuming that the selection on unobservables is no greater than the selection on observables. We estimated bivariate probit models of binary outcomes and program participation, subject to the aforementioned condition. With HSB data, the bounds obtained were too wide and included zero. In the NELS data the correlation in the disturbances from the participation and outcome equations was insignificant. We interpret this result to mean that, conditional on our covariates, individual heterogeneity is uncorrelated with participation.

(16) One possible explanation for these differences by race is that JROTC attracts blacks of higher average ability than white enrollees. However, when comparing test scores among JROTC members in different race or ethnic groups, white participants have higher test scores, even after controlling for family and school characteristics. In addition, all outcomes in Table 3 already control for base-year test scores to avoid this problem. If we further assume that JROTC attracts equally at-risk youth across race groups, then the comparison of JROTC students with each other most likely provides accurate causal program effects.

(17) These variables are only used to categorize students in a way that facilitates good matches. The method does not presume to explain program participation.
Table 1. Weighted Summary Statistics


                                High School and Beyond

                              Non-participants        Participants
Variable                             Mean (SE)           Mean (SE)
  Female                          0.50 (0.006)        0.36 (0.042)
  Black                           0.10 (0.004)        0.39 (0.043)
  Hispanic                        0.12 (0.004)        0.12 (0.020)
  Plan to enlist by age 30        0.03 (0.002)        0.19 (0.038)
  Urban                           0.21 (0.005)        0.55 (0.045)
  Public                          0.90 (0.003)        0.99 (0.004)
  South                           0.31 (0.006)        0.56 (0.045)
  % minority students            22.17 (0.359)       51.63 (2.759)
  % college-goers                46.79 (0.263)       46.31 (1.950)
  % students who enlist           3.67 (0.045)        4.10 (0.316)
  Parents completed high
    school                        0.28 (0.006)        0.28 (0.039)
  Parents college graduates       0.23 (0.005)        0.14 (0.025)
  Single mother family            0.14 (0.004)        0.19 (0.030)
  Bottom income category          0.05 (0.002)        0.08 (0.019)
  Top income category             0.08 (0.003)        0.03 (0.012)
  Parent in military              0.02 (0.002)        0.11 (0.043)
  Test scores:
    8th grade                             n/a                 n/a
    10th grade                   26.50 (0.202)       19.90 (1.422)
    12th grade                   30.02 (0.235)       21.25 (1.494)
  Disciplinary problems:
    10th grade                    0.26 (0.006)        0.39 (0.044)
    12th grade                    0.20 (0.005)        0.26 (0.046)
  Self-esteem score:
    8th grade                              n/a                 n/a
    10th grade                    0.02 (0.009)       -0.05 (0.071)
    12th grade                   -0.01 (0.010)       -0.09 (0.061)
  Dropout rate                    0.15 (0.005)        0.25 (0.042)
  Graduation rate                 0.83 (0.006)        0.66 (0.045)
  Attend postsecondary            0.45 (0.006)        0.22 (0.030)
  Postsecondary degree            0.32 (0.006)        0.10 (0.021)
  Enlist                          0.08 (0.004)        0.24 (0.041)
  Work                            0.86 (0.005)        0.83 (0.037)
Population represented            3,252,524           93,044


                      National Educational Longitudinal Study

                              Non-participants        Participants
Variable                             Mean (SE)           Mean (SE)
  Female                          0.53 (0.011)        0.43 (0.059)
  Black                           0.12 (0.007)        0.27 (0.050)
  Hispanic                        0.10 (0.005)        0.24 (0.064)
  Plan to enlist by age 30        0.02 (0.002)        0.20 (0.068)
  Urban                           0.28 (0.009)        0.49 (0.063)
  Public                          0.90 (0.006)        0.98 (0.009)
  South                           0.34 (0.010)        0.65 (0.055)
  % minority students            26.58 (0.632)       52.80 (4.070)
  % college-goers                65.87 (0.373)       56.45 (2.704)
  % students who enlist           4.41 (0.081)        5.35 (0.294)
  Parents completed high
    school                        0.22 (0.009)        0.19 (0.037)
  Parents college graduates       0.27 (0.009)        0.13 (0.043)
  Single mother family            0.15 (0.009)        0.29 (0.054)
  Bottom income category          0.07 (0.005)        0.18 (0.041)
  Top income category             0.22 (0.009)        0.10 (0.045)
  Parent in military              0.02 (0.002)        0.11 (0.057)
  Test scores:
    8th grade                    51.26 (0.203)       47.03 (1.232)
    10th grade                   51.01 (0.195)       46.74 (1.108)
    12th grade                   51.27 (0.200)       47.27 (1.426)
  Disciplinary problems:
    10th grade                    0.17 (0.007)        0.22 (0.041)
    12th grade                    0.17 (0.009)        0.30 (0.071)
  Self-esteem score:
    8th grade                     0.01 (0.013)       -0.02 (0.059)
    10th grade                    0.02 (0.016)        0.08 (0.087)
    12th grade                    0.00 (0.016)        0.14 (0.082)
  Dropout rate                    0.13 (0.006)        0.18 (0.042)
  Graduation rate                 0.86 (0.007)        0.74 (0.054)
  Attend postsecondary            0.66 (0.010)        0.54 (0.060)
  Postsecondary degree            0.38 (0.009)        0.31 (0.064)
  Enlist                          0.06 (0.005)        0.22 (0.040)
  Work                            0.87 (0.006)        0.84 (0.038)
Population represented            1,997,424           53,817

The samples include all students in HSB and NELS that matched with
the High School Transcript Studies. Means and standard errors were
estimated using panel transcript weights. School characteristics are
obtained from student-level data and represent the characteristics
of the schools attended by students in the sample. They are not
representative of the population of U.S. schools. Test scores include
math and reading scores. Disciplinary problems include expulsions,
suspensions, or trouble with the law. Self-esteem scores represent
standardized scores on the Rosenberg scale. Dropouts include students
who leave school, regardless of whether they later return. Graduation
from high school is observed two years after senior year and includes
only those who obtain a traditional high school diploma. Postsecondary
enrollments and degrees refer to two- and four-year colleges.
Employment status is observed in 1992 (HSB) and 2000 (NELS) and
includes those who report currently working for pay. This variable
takes a value of zero for those who are unemployed or out of the labor force.

Table 2. Baseline OLS and Probit Estimates of JROTC Effects

Outcome                                   HSB          NELS

12th grade, controlling for base year
  Test scores                              -0.84         0.60
                                           (0.72)       (0.39)
  Self-esteem scores                        0.11 *       0.07
                                           (0.06)       (0.05)
  Any disciplinary problems                 0.07         0.11
                                           (0.11)       (0.11)
                                           [0.02]       [0.02]
Post- high school outcomes:
  Drop out of high school                   0.17        -0.1
                                           (0.14)       (0.16)
                                           [0.03]       [0.004]
  Graduate from high school                -0.19        -0.003
                                           (0.14)       (0.16)
                                          [-0.04]      [-0.001]
  Attend postsecondary education           -0.45 ***    -0.33 **
                                           (0.11)       (0.12)
                                          [-0.17]      [-0.12]
  Obtain a postsecondary degree by 1992    -0.43 ***    -0.19
    (HSB) or 2000 (NELS)                   (0.17)       (0.13)
                                          [-0.12]      [-0.07]
  Enlist in the military                    0.41 ***     0.88 ***
                                           (0.12)       (0.11)
                                           [0.06]       [0.17]
  Employed in the civilian sector          -0.20        -0.12
                                           (0.14)       (0.11)
                                          [-0.04]      [-0.02]

See notes to Table 1 for variable definitions. All regressions
control for demographics, family and school characteristics, and
military career aspirations. Postsecondary outcomes include
base-year test scores. Regressions also include separate
categories for missing controls. The sample size in regressions
using HSB data varies from 5989 to 6908, of which 130-160 are
JROTC participants. For regressions using NELS data, sample sizes
vary from 6578 to 8314, of which 147-190 are JROTC participants.
The variation in sample sizes is primarily due to missing values
for the dependent variables. Standard errors appear in parentheses
and are robust to within-school serial correlation. Binary response
models are estimated via probit; marginal effects appear in
brackets.

* Significant at the 10% level.

** Significant at the 5% level.

*** Significant at the 1% level.
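The bracketed marginal effects in Table 2 are the discrete change in the predicted probability from switching the JROTC indicator on, averaged over the sample. A minimal sketch of that calculation (the baseline index values here are simulated and purely illustrative; only the 0.41 coefficient is taken from the HSB enlistment row of Table 2):

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF, Phi(z)."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical probit index values x'b for a sample of students
# (NOT the HSB microdata; chosen only to illustrate the conversion).
rng = np.random.default_rng(3)
xb = rng.normal(-1.6, 0.5, 1000)
beta_jrotc = 0.41  # HSB probit coefficient on enlistment from Table 2

# Average marginal effect of a binary regressor: the mean discrete
# change in Phi when the JROTC dummy flips from 0 to 1.
ame = np.mean([norm_cdf(b + beta_jrotc) - norm_cdf(b) for b in xb])
print(round(ame, 3))
```

With a low baseline enlistment probability, a probit coefficient of 0.41 translates into a small probability change, which is why the bracketed effects are an order of magnitude smaller than the coefficients.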

Table 3. Baseline OLS and Probit Estimates of the Effect of JROTC on
Blacks and Females

Panel A                          Survey   Drop Out           Graduate          Attend Postsecondary   Enlist

  JROTC x BLACK                  HSB      -0.13 *   (0.07)    0.06      (0.08)  -0.01     (0.08)      -0.04     (0.07)
                                 NELS     -0.08 **  (0.04)    0.10 **   (0.05)  -0.01     (0.08)      -0.07     (0.08)
  Black participants vs. white   HSB      -0.24 *** (0.07)    0.17 **   (0.08)   0.11     (0.07)      -0.01     (0.07)
    participants                 NELS     -0.09 *   (0.05)    0.11 **   (0.05)   0.01     (0.08)      -0.04     (0.09)
  Black participants vs. black   HSB      -0.04     (0.05)   -0.05      (0.05)  -0.17 **  (0.05)       0.07     (0.06)
    non-participants             NELS     -0.06 **  (0.03)    0.05      (0.04)  -0.01     (0.08)       0.19 *** (0.07)

Panel B                          Survey   Self-esteem        Enlist

  JROTC x FEMALE                 HSB       0.20 *    (0.12)  -0.11 **   (0.05)
                                 NELS      0.11      (0.09)  -0.11      (0.07)
  Female vs. male JROTC          HSB       0.20 *    (0.11)  -0.24 ***  (0.05)
    participants                 NELS      0.04      (0.09)  -0.19 ***  (0.07)
  Female participants vs. female HSB       0.22 **   (0.10)   0.01      (0.03)
    non-participants             NELS      0.15 ***  (0.06)   0.14 ***  (0.04)

See notes to Table 2 for model specifications. All interactions are
estimated via linear probability models with standard errors robust
to both heteroskedasticity and within-school serial correlation. This
is done to avoid problems in estimating interaction effects via probit
or logit (see Ai and Norton 2003). The differences in outcomes between
participant and nonparticipant blacks, and between black and nonblack
participants are obtained by including separate categories for each
minority-JROTC combination. The number of black JROTC participants in
HSB models ranges between 69 and 75, and for NELS the number ranges
between 43 and 53. The number of JROTC females in HSB models ranges
between 57 and 63, and between 79 and 91 in NELS models.

* Significant at the 10% level.

** Significant at the 5% level.

*** Significant at the 1% level.
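The estimation strategy described in the note — a linear probability model for the interaction term, with standard errors robust to heteroskedasticity and within-school clustering — can be sketched in a few lines. This is a minimal illustration on simulated data (not the authors' code or the restricted HSB/NELS microdata; all variable names and data-generating values are hypothetical):

```python
import numpy as np

def lpm_cluster(y, X, cluster):
    """OLS (linear probability model) with cluster-robust standard errors."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    # Cluster-robust "sandwich" meat: sum over clusters g of (X_g'u_g)(X_g'u_g)'
    meat = np.zeros((k, k))
    for g in np.unique(cluster):
        s = X[cluster == g].T @ resid[cluster == g]
        meat += np.outer(s, s)
    V = XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(V))

# Simulated student-level data with a school cluster identifier
rng = np.random.default_rng(0)
n = 2000
school = rng.integers(0, 100, n)
jrotc = rng.binomial(1, 0.05, n)
black = rng.binomial(1, 0.3, n)
latent = -0.10 * jrotc - 0.05 * black + 0.10 * jrotc * black
y = (rng.random(n) < 0.15 + latent).astype(float)

# Columns: constant, JROTC, BLACK, JROTC x BLACK interaction
X = np.column_stack([np.ones(n), jrotc, black, jrotc * black])
beta, se = lpm_cluster(y, X, school)
print(beta, se)
```

Because the model is linear, the coefficient on the interaction column is itself the interaction effect, which is the point of avoiding probit or logit here (Ai and Norton 2003).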

Table 4. School Fixed Effects Estimates

Outcome                                 HSB              NELS
12th grade, controlling for base year
  Test scores                           -0.90             0.99 **
                                        (0.72)           (0.49)
  Self-esteem score                      0.10 *           0.04
                                        (0.06)           (0.06)
  Any disciplinary problems              0.02             0.38
                                        (0.22)           (0.25)
                                        [1.02]           [1.15]
Post-high school outcomes:
  Drop out of high school                0.44 *          -0.25
                                        (0.26)           (0.31)
                                        [1.55]           [0.78]
  Graduate from high school             -0.26            -0.06
                                        (0.24)           (0.29)
                                        [0.71]           [1.07]
  Attend post-secondary education       -0.65 ***        -0.72 ***
                                        (0.23)           (0.21)
                                        [0.52]           [0.49]
  Obtain a postsecondary degree by      -0.48            -0.38
    1992 (HSB) or 2000 (NELS)           (0.30)           (0.24)
                                        [0.62]           [0.68]
  Enlist in the military                 1.08 ***         1.86 ***
                                        (0.27)           (0.28)
                                        [2.94]           [6.43]
  Employed in the civilian sector       -0.47 *          -0.28
                                        (0.27)           (0.26)
                                        [0.62]           [0.76]

See notes to Tables 1 and 2 for variable definitions and model
specifications. Continuous outcomes are estimated via OLS, whereas
binary outcomes are estimated via fixed effects logit. The fixed
effects logit models are identified from within-school variation in
JROTC participation and the outcome of interest. Standard errors
appear in parentheses and are robust to within-school correlation.
Odds ratios appear in brackets. Sample sizes in column 1 vary from
5638 to 7730, of which between 136 and 180 are JROTC participants. In
column 2 the sample sizes vary from 3935 to 7222, of which between 105
and 161 are JROTC students.

* Significant at the 10% level.

** Significant at the 5% level.

*** Significant at the 1% level.
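For the continuous outcomes, school fixed effects amount to comparing JROTC participants with non-participants inside the same school, which can be implemented by within-school demeaning. A minimal sketch on simulated data (hypothetical variable names and effect sizes, not the authors' estimates):

```python
import numpy as np

def within_demean(v, group):
    """Subtract the group (school) mean from each observation."""
    out = v.astype(float)
    for g in np.unique(group):
        m = group == g
        out[m] -= out[m].mean()
    return out

rng = np.random.default_rng(1)
n = 1500
school = rng.integers(0, 60, n)
school_quality = rng.normal(0, 1, 60)[school]   # unobserved school effect
# Participation correlated with school type -> naive OLS would be biased
jrotc = rng.binomial(1, 0.05 + 0.02 * (school_quality < 0), n)
y = 50 + 2.0 * jrotc + 5.0 * school_quality + rng.normal(0, 3, n)

# Demeaning sweeps out any school-level confounder, observed or not
y_t = within_demean(y, school)
d_t = within_demean(jrotc, school)
beta_fe = (d_t @ y_t) / (d_t @ d_t)
print(beta_fe)  # should be near the true within-school effect of 2.0
```

The fixed effects logit used for the binary outcomes follows the same identification logic: only schools with within-school variation in both JROTC participation and the outcome contribute to the estimate.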

Table 5. Two-Stage Matching Estimates

                                      High School and Beyond

                                 One-to-One Nearest   Stratification
Outcome                          Neighbor Matching    Matching

Test scores                      -1.16     (3.46)     -2.89     (2.64)
Any disciplinary problems        -0.07     (0.06)      0.001    (0.04)
Self-esteem score                 0.22 *   (0.15)      0.16     (0.13)
Drop out of high school           0.16 *   (0.08)      0.10 **  (0.06)
Graduate from high school        -0.23 **  (0.08)     -0.12 **  (0.05)
Attend postsecondary education   -0.09     (0.08)     -0.09 **  (0.05)
Obtain a postsecondary degree
  by 1992 (HSB) or 2000 (NELS)   -0.08     (0.09)     -0.14 *** (0.04)
Enlist in the military            0.09 *   (0.06)      0.12 *** (0.05)
Employed                          0.02     (0.07)     -0.05     (0.05)

                                  National Education Longitudinal Study (NELS)

                                 One-to-One Nearest   Stratification
Outcome                          Neighbor Matching    Matching

Test scores                      -0.84     (1.58)     -0.40    (1.01)
Any disciplinary problems        -0.02     (0.07)     0.005    (0.05)
Self-esteem score                -0.16     (0.12)     0.09     (0.09)
Drop out of high school          -0.02     (0.05)    -0.03     (0.03)
Graduate from high school         0.0001   (0.06)    -0.01     (0.03)
Attend postsecondary education   -0.13 **  (0.07)    -0.10 **  (0.05)
Obtain a postsecondary degree
  by 1992 (HSB) or 2000 (NELS)   -0.07     (0.07)    -0.08 *   (0.05)
Enlist in the military            0.19 **  (0.06)     0.15 **  (0.04)
Employed                         -0.02     (0.06)    -0.03     (0.04)

The sample of students is selected based on demographics, family and
school background, and base-year test scores. The sample of schools is
selected based on school size, urbanicity, region, minority
composition of the student body, percentage of students who enlist,
drop out, or go to college, percentage of students who are classified
as disadvantaged, and school program (vocational, academic, etc.). The
number of matched JROTC students varies between 69 and 96 in both
surveys. One-to-one matching restricts the sample of comparable
non-JROTC students to the same range. When employing stratification
matching, the number of control students varies between 828 and 1132
for HSB and between 3500 and 3930 for NELS. Standard errors appear in
parentheses. They are obtained via bootstrapping and control for
clustering at the school level.

* Significant at the 10% level.

** Significant at the 5% level.

*** Significant at the 1% level.
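The one-to-one nearest-neighbor column can be sketched as matching each JROTC student to the non-JROTC student with the closest propensity score and averaging the outcome gaps. A minimal numpy illustration on simulated data (the logit fit, variable names, and data-generating values are hypothetical; this is not the authors' two-stage procedure, which also matches at the school level and bootstraps clustered standard errors):

```python
import numpy as np

def estimate_pscore(X, d, steps=2000, lr=0.1):
    """Logit propensity score fit via simple gradient ascent (no libraries)."""
    Xb = np.column_stack([np.ones(len(d)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (d - p) / len(d)
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def nn_match_att(y, d, ps):
    """ATT via one-to-one nearest-neighbor matching on the propensity
    score, with replacement: each treated unit gets its closest control."""
    treated = np.flatnonzero(d == 1)
    controls = np.flatnonzero(d == 0)
    gaps = []
    for i in treated:
        j = controls[np.argmin(np.abs(ps[controls] - ps[i]))]
        gaps.append(y[i] - y[j])
    return float(np.mean(gaps))

rng = np.random.default_rng(2)
n = 3000
x = rng.normal(0, 1, n)                                   # observed covariate
d = (rng.random(n) < 1 / (1 + np.exp(-(x - 2)))).astype(int)  # selection on x
y = 1.0 * d + 2.0 * x + rng.normal(0, 1, n)               # true effect = 1.0

ps = estimate_pscore(x.reshape(-1, 1), d)
att = nn_match_att(y, d, ps)
print(round(att, 2))
```

Matching on the score rather than regressing on it avoids imposing a functional form on the outcome equation; stratification matching instead averages treated-control gaps within propensity-score blocks, using many controls per treated student, which is why its control samples in the note are so much larger.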
COPYRIGHT 2009 Southern Economic Association
Author: Pema, Elda; Mehay, Stephen
Publication: Southern Economic Journal
Date: Oct 1, 2009