Comparing employment-related outcomes of the vocational rehabilitation program using longitudinal earnings.
First is the general "selection-bias" problem that arises from examining program impacts on participants in a non-experimental setting. VR lacks a control or comparison group to gauge what would have occurred to participants if they were not in the program. In the absence of such groups, estimates of the impacts of the VR service intervention may be inaccurate.
Second, even with an appropriate comparison group, the reported employment outcomes would be viewed with skepticism. Historically, the database collected on program participants has had a narrow administrative focus due to the time-limited nature of service provision. This has led to undue emphasis being placed on employment status at program termination as a measure of VR agency accomplishment. For instance, typical agency production standards are based on the percentage of "successful closures" in a fiscal year. Unfortunately, research on other employment and training programs does not indicate any correlation between immediate post-program placement and long-run success (Gay and Borus, 1980). Further, there are numerous studies in which the impact of program participation has been found to occur several years after the program's conclusion (Ashenfelter, 1978; Hu, Lee, and Stromsdorfer, 1971). Each of these shortcomings hints at a possible solution: collect a "longitudinal" earnings profile for all VR referrals and apply it to a systematic VR program evaluation. This longer horizon of earnings enables researchers to study several important aspects of a disabled person's employment status. For instance, the increase in a person's likelihood of returning to work (the re-employment probability) can be assessed. The duration of the VR client's first post-program work spell, one possible measure of the sustainability of the VR treatment, can also be determined. Finally, one can contrast the post-program level of earnings between the program participants and an appropriate comparison group.
At the federal level, the RSA-SSA Data Link was implemented in the 1970's to address many of these issues. This cooperative venture merged the Rehabilitation Service Administration's VR client records with the earnings records maintained by the Social Security Administration. This data set contained calendar year earnings from 1972 through 1983 for clients closed in fiscal year 1975. While 9 years of post-closure earnings data are available, such a long-term analysis obviously cannot gauge more recent program impacts. Thus, the full ramifications of the 1974 mandate of the Rehabilitation Act to serve a more severely disabled population may not have been captured by this data set. Efforts to construct earnings data links for more recent cohorts, such as for 1980 closures, have yet to come to fruition.
Fortunately, the necessary employment information can be obtained for individual clients at the state level at relatively low cost. The process, referred to as an earnings crossmatch, entails merging quarterly earnings records reported to the state unemployment insurance agency with data routinely collected by the VR agency. Such a procedure has recently been undertaken for clients of the Virginia Department of Rehabilitative Services (DRS). The results of this process are detailed below. First is an analysis of the appropriate comparison group to determine VR program impacts. This is followed by a discussion of the limitations in the earnings profile currently collected by VR agencies. The earnings crossmatch procedure is then described. Examples of the employment outcomes generated via such a process are then detailed and possible uses by VR stakeholders are discussed.
Choice of Comparison Group
The optimal methodology for assessing training effects is a pure experimental design in which participants are randomly assigned to treatment and control groups. However, as a recent review of design options of VR impact evaluation (BPA, 1988) points out, this approach is probably infeasible. In addition to the overwhelming practical difficulty of implementing random assignment, there are also obvious ethical and institutional issues of randomly denying the services of a public program to otherwise eligible clients (Noble, 1988).
The next best approach is a "comparison" group methodology. The employment experiences of a treatment group (those receiving VR services) are tracked for several years prior to and following program participation. Similar earnings data for a comparison group (a nontreatment group with otherwise similar attributes) are tracked over the same period. The net effect of VR services is then the difference in pre- and post-program earnings paths for people in the treatment and comparison groups, controlling for the influence of other factors. Thus, any differences in the change in earnings between the treatment and comparison groups are due to program enrollment, as long as the limited number of changing factors are accounted for.
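The net-effect computation just described amounts to a simple difference-in-differences calculation. The sketch below illustrates the arithmetic; all earnings figures are hypothetical and are not taken from the study.

```python
# A minimal sketch of the difference-in-differences logic described above.
# All figures are illustrative, not drawn from the VR data.

def net_program_effect(pre_treat, post_treat, pre_comp, post_comp):
    """Net effect = earnings change for the treatment group
    minus earnings change for the comparison group."""
    return (post_treat - pre_treat) - (post_comp - pre_comp)

# Hypothetical average annual earnings for the two groups:
effect = net_program_effect(pre_treat=2500, post_treat=4250,
                            pre_comp=2500, post_comp=2600)
print(effect)  # 1650
```

Note that this simple subtraction is only the starting point; the regression models discussed later add controls for the "other factors" the paragraph mentions.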
The choice of a comparison group is guided in important respects by institutional aspects of the VR program. A disabled person passes through a variety of programmatic and self-screening devices before becoming a VR participant. An individual successfully completing the program has made several discrete choices: to apply for services, to accept the prescribed treatment, and to follow through with the implementation of the individualized written rehabilitation program (IWRP). This process introduces significant selectivity bias, since the multistage participation decision, as well as the subsequent earnings levels of a client, can be systematically influenced by unobserved factors such as motivation. Minimizing selectivity bias is a major criterion in deciding upon a valid comparison group (Moffitt, 1987).
For conventional manpower programs it is common to draw a comparison group from data sources such as the Current Population Survey (CPS) (e.g., Bassi, 1983; Dickinson, Johnson and West, 1986). The main problem with the CPS data for evaluation of VR is that there are likely to be significant differences between VR applicants and a random sample of the CPS population. Therefore, the earnings paths for a random sample of CPS people who have not sought VR services represent biased estimates of how VR clients would have fared in the absence of training. A comparison group based on the CPS is also flawed due to insufficient information about the presence of a disability for people in the survey. This is critical since disability is a major factor determining a client's pre-program earnings profile. Different types of disabilities can influence pre-program earnings, the service regimen provided by the VR agency, as well as client vocational outcomes. For these reasons the CPS is an inappropriate base from which to draw a comparison group for VR clients.(2)
The alternative to such surveys is to identify an "internal" comparison group (i.e., persons having had some degree of exposure to the program). In the case of VR there are three different levels of internal comparison groups: people who applied for but were not accepted for VR services (Status 08); people not successfully rehabilitated after implementation of the IWRP (Status 28); and those who are accepted and agree to receive services, but leave the program prior to implementation of their IWRP (Status 30).
The first plausible internal comparison group contains people declared ineligible for VR services. One of the few studies of VR impacts (Nowak, 1983) to ever incorporate a comparison group used this cohort in estimating program impacts. However, this group is likely to differ in both observable and unobservable characteristics vis-a-vis the treatment group. The problem is that acceptance involves both elements of self-selection and programmatic screening. Moreover, the programmatic screens can imply informal consideration by administrators known as "creaming," or even "scraping." The latter may be especially true in VR since the program was mandated to serve a more severely disabled population in 1973. These measured and unmeasured differences introduce biases that may preclude this group from further consideration as a proxy for what the earnings of VR participants would have been in the absence of treatment (Barnow, 1987).
Several studies of VR (RSA, 1982; Abt & Associates, 1974) have used the not rehabilitated cohort as a comparison group. Since this group passed through the multiple screens to become a program participant, it would seem that selection bias would not be an issue. However, the Status 28 cohort is emphatically not an appropriate comparison group. A proper evaluation of the impacts of the VR program should rather include them as part of the treatment group. The fact that such persons are unable to obtain immediate employment in no way diminishes the magnitude of the services provided to them. Moreover, many do eventually secure gainful employment. The evaluative issue is to isolate the effect of VR participation on obtaining such employment.
The criterion of minimizing pre-enrollment differences suggests the Status 30 cohort as the comparison group. While the Status 30 client is essentially a dropout, this group has conceptual appeal for several reasons.(3) First, such clients had the motivation to apply and subsequently met the agency's eligibility criteria. Second, it is likely that both service-receiving clients and dropouts experienced similar declines in their earnings potential, which led them to apply to VR. However, because Status 30 clients are terminated prior to implementation of their IWRP, the only service provided is usually a diagnostic evaluation. This treatment is unlikely to have a significant effect on a client's future earnings stream.
Of course, the fact that Status 30 closures leave the program at an early stage suggests unobservable attributes within this group which may introduce bias. It may well be that some individuals in this group have less perseverance. The unwillingness to follow through on a service plan may also manifest itself in an inability to maintain a thorough job search. Alternatively, some in this group may feel stigmatized by participation in a public training program (Burtless, 1985). This attribute may enable such individuals to secure gainful employment on their own initiative. The concern is that some elements of unobservable difference between the treatment and comparison groups which are correlated with future earnings probably persist. On balance, however, the relatively small measured and unmeasured differences between the Status 30 cohort and the treatment group make this cohort the most viable proxy for a true control group.
Limitations of Current VR Data Collection
Under current data collection, a client's earnings profile contains only two discrete observations of earnings-acceptance into and termination from the program. Further, existing administrative tracking procedures only enable closure earnings to be reported for that fraction of clients successfully completing the IWRP. Such data deficiencies cast serious doubt on the reliability of previous studies of VR earnings impacts. Models used by economists to gauge program impacts require a more complete picture of a client's earnings profile for three reasons.
First, the earnings figure reported at acceptance may not reflect the actual pre-VR earnings experience of a client due to the onset of the disabling condition. Depending on the nature of the disability, there may be a systematic decline in client earnings immediately prior to seeking assistance.(4) Although this decline is understandable given that people are more apt to turn to training programs when faced with employment difficulties, it is unlikely that earnings reported at this time capture a trainee's true pre-program earnings potential. In such instances, the actual long-run earnings path may be understated. If so, these earnings do not represent how the client would fare in the absence of treatment and, therefore, are poor baseline earnings for assessing net training effects. Furthermore, VR may be an extreme case of pre-program dip. As the recent BPA (1988) report indicates, it is common for clients to report zero earnings in the week prior to application to the program.
A second, even more significant problem exists with the closure earnings reported by rehabilitated clients. Although this earnings figure is accurate for the client's 60 days of employment, it is tenuous to impute a post-program earnings path from a single, very short-run employment spell (Worrall, 1978). Indeed, it has been hypothesized that there is a negative relationship between immediate post-program placement and a training program's long-run effects on earnings (Gay and Borus, 1980). There is at least anecdotal evidence that some VR counselors may engage in selective "creaming" of those applicants with the greatest employment potential to obtain "quick closures" and add to their performance ratings. Such clients may have readily obtained employment in the absence of VR. As a result, these successful closures (Status 26) may have received little substantive benefit from the program.
Finally, VR performance analysis using this single point of earnings assumes that rehabilitated persons will remain at the same job for the duration of their employment life. Current data collection methods make no allowance for unemployment or job turnover. It is conceivable, once again, that the short-term successful placement is not indicative of true program impact. Suppose the client is pressured into taking the first job that is offered. If there is a latent job mismatch or the employee becomes disenchanted and leaves, the quality of this successful closure is surely diminished.
This suggests a more problematic deficiency following from the fact that a significant fraction of VR referrals are not classified as rehabilitated. Obviously no closure earnings are available to VR administrators for these cohorts (Statuses 08, 28 and 30). Nonparticipants do not retire from the labor force merely because they did not enroll in VR. It surely would be of some interest to those interested in eligibility determination to examine what happened to people not accepted for services.
Moreover, while it is true that Status 28 clients have no earnings within the limited time perspective of the agency purview, there is evidence indicating that many of these clients closed not rehabilitated do ultimately get jobs (Dean & Dolan, 1988). Rather than taking the first available employment opportunity, such clients could conceivably have engaged in a lengthy job search in the hopes of maximizing their employment potential. The restrictive 60-day time window for programmatic success requires VR agencies to classify these people as not rehabilitated. Ironically enough, state agencies cannot take credit for clients who may have fared the best from the receipt of VR services.
In sum, there are glaring deficiencies in the clients' earnings profiles compiled by VR agencies. Most of the problems discussed above follow from the generic problem of not having a longitudinal data set of client earnings. While efforts have been made to obtain such information on a federal level via the RSA-SSA Datalink (Greenblum, 1978) or through IRS tax records (Walter, Welsh and Serve, 1988), such earnings files have not been amenable to analysis of individual client earnings. A more feasible method of collecting client earnings records is the use of state employment commission records. An earnings crossmatch with the Virginia Employment Commission (VEC) which links quarterly earnings records to former rehabilitation clients has been established. As will be seen below, such earnings crossmatches are extremely powerful in their applications and also almost costless to implement.
Several years of pre- and post-program earnings for some 9,500 accepted clients closed during 1982 were tracked and compared. Coverage includes those receiving significant services (Statuses 26 and 28) as well as those for whom services were not initiated (Status 30)(5). Specifically, the VEC successfully tracked earnings for 78 percent of the accepted clients in at least one quarter for the interval 1977 through 1985. Possible explanations for a client not being tracked are: 1) the client had dropped out of the labor force; 2) the client engaged in employment that was not covered by the VEC; and 3) the client may have moved out of the state and thus was no longer under VEC jurisdiction.
With a valid comparison group and longitudinal earnings records, it is now possible to examine employment outcomes that may be attributable to VR service provision. Certainly, one of the primary objectives of VR is to return people to the labor force for sustained periods. While a person receiving VR services may not reenter the labor force with a higher hourly wage, one measure of success should be an increase in the percentage of time worked on an annual basis. Accordingly, the first employment measure that longitudinal employment records allow for is the change in VR clients' labor force participation rates. Conceptually, one can think of this as a client's reemployment probability. From the VEC-DRS earnings crossmatch, the dropout cohort reported quarterly earnings in only about one-third of the quarters both prior to and after some contact with VR. In other words, the typical client closed Status 30 worked in only 5 of 16 quarters in the 4-year periods both preceding and following VR exposure. While successfully rehabilitated clients reported the same percentage of quarters worked prior to VR application, their post-VR performance was markedly better, with roughly 50 percent of quarters worked. This converts to labor force participation averaging 8 of 16 quarters during the post-program years for which earnings were measured. Concomitantly, this represents a 50 percent increase in the percentage of quarters employed for Status 26's when compared to their Status 30 counterparts.
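The percentage-of-quarters-worked measure above can be sketched directly from a client's quarterly earnings record. The data below are entirely hypothetical; a quarter counts as "worked" if any earnings were reported to the employment commission.

```python
# Illustrative sketch: share of quarters with reported earnings,
# computed from a (hypothetical) longitudinal quarterly earnings record.

def share_quarters_worked(quarterly_earnings):
    """Fraction of quarters in which any positive earnings were reported."""
    worked = sum(1 for q in quarterly_earnings if q > 0)
    return worked / len(quarterly_earnings)

# 16 hypothetical post-program quarters; this client worked in 8 of them,
# matching the roughly 50 percent figure reported for the Status 26 cohort.
post = [0, 0, 1200, 1300, 0, 1400, 1450, 0,
        0, 1500, 1550, 0, 1600, 0, 0, 1700]
print(share_quarters_worked(post))  # 0.5
```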
Factors other than VR program participation may account for some of this observed difference. Moreover, this may not be a sustained impact if all of the labor force participation occurred in the immediate post-program quarters for the successfully rehabilitated cohort. The point remains, however, that the richness of the longitudinal earnings records allows such issues to be addressed. Such data enable the appropriate models to account for other confounding factors and to examine the trends in quarterly labor force participation rates.
A second employment measure provided with aligned longitudinal earnings data is the duration of a job spell following VR exposure. That is, given that a person was ultimately employed, what was the length of time that the person remained at this job? This measure of the client's length of tenure with the first post-program employer indicates the suitability of the job match process. It can also be construed as a measure of the sustainability of the VR service regimen and, to some extent, an indicator of job satisfaction.
Such a measure of employment sustainability can be used in several different ways. As one illustration, reported earnings from the VEC crossmatch indicate that the median duration of the first post-treatment job spell for a client with a physical disability was more than 6 quarters (about 20 months). Thus, half of the work spells lasted at least 6 quarters. For the Status 30 group the median employment duration was only slightly more than 4 quarters (about 13 months). By way of comparison, the successfully rehabilitated cohort experienced approximately a 50 percent increase in the duration of the first post-VR job spell.
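The median-duration measure can be sketched by counting consecutive worked quarters in each client's post-program record. The clients and their quarterly histories below are hypothetical, used only to show the computation.

```python
# Hedged sketch: first post-program job spell measured as the first run of
# consecutive quarters with employment (hypothetical data).
from statistics import median

def first_spell_length(quarters):
    """Length of the first run of consecutive worked quarters (1 = worked)."""
    length, started = 0, False
    for worked in quarters:
        if worked:
            started = True
            length += 1
        elif started:
            break
    return length

# Hypothetical clients' post-program quarters (1 = employed, 0 = not):
clients = [
    [0, 1, 1, 1, 1, 1, 1, 0],   # spell of 6 quarters
    [1, 1, 1, 1, 0, 0, 1, 1],   # spell of 4 quarters
    [0, 0, 1, 1, 1, 1, 1, 1],   # spell of 6 quarters
]
print(median(first_spell_length(c) for c in clients))  # 6
```

A real implementation would also need an employer identifier to distinguish staying at the same job from moving between jobs in adjacent quarters; the crossmatch records described here are assumed to support that distinction.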
Another way to examine this measure of employment duration is to examine the percentage of these two groups that are still working at the same job at discrete intervals after their termination from VR. While roughly one-fourth of the successfully rehabilitated clients were still working after 4 years with the same employer, only 15 percent of the program dropouts still held the same job for this interval.
Note that this outcome measure reflects job duration after obtaining employment. The earnings crossmatch file can also be used to determine the duration of an unemployment spell. By definition, a successfully terminated VR client is employed upon closure. Other closure groups (those not eligible, not completing their service regimen, or not rehabilitated) will most likely be unemployed when the VR agency administratively terminates them. Since earnings crossmatches can track post-VR employment regardless of closure status, it is now possible to examine the interval between a group's exposure to VR and any subsequent commencement of employment.
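This unemployment-duration measure is the mirror image of the job-spell measure: instead of counting worked quarters, one counts the quarters that elapse before the first reported earnings. The sketch below uses hypothetical quarterly records.

```python
# Illustrative sketch (hypothetical data): the lag between program exit and
# the first quarter with reported earnings, for clients who leave VR unemployed.

def quarters_until_first_job(post_exit_earnings):
    """Index of the first post-exit quarter with positive earnings,
    or None if no employment is ever observed in the record."""
    for i, earned in enumerate(post_exit_earnings):
        if earned > 0:
            return i
    return None

print(quarters_until_first_job([0, 0, 0, 1800, 1900]))  # 3
print(quarters_until_first_job([0, 0, 0, 0]))           # None
```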
While these employment measures shed some much needed new light on VR performance, the longitudinal earnings files can also be used to enhance traditional VR productivity indicators. As noted previously, the conventional assessment has been to examine the change in earnings. The question arises as to which period is appropriate as the base for measuring earnings gains. It is now feasible to examine several years of both pre- and post-program earnings. As noted in the accompanying graph, the annual earnings for the Status 30 and 26 cohorts were lower in the year immediately preceding referral to VR than 2 years prior. This pre-program dip may be due to the onset of the disabling condition. Further note the similarity of earnings for the period 2 years before program referral. The fact that both cohorts earned about $2,500 during this period is particularly encouraging in light of the comparison group issue discussed above.
An even more interesting story emerges when the post-program earnings are examined. While the earnings for the Status 30's increased in each of the years following termination from VR, they hovered around the same earnings reported for 2 years prior to VR application. The post-program earnings for the Status 26 cohorts also increased in each of the three periods of reported earnings. The most prominent feature, of course, is the increase from pre-program levels. The graph reveals that the after-VR earnings jumped to an average of between $4,000 and $4,500 for those successfully rehabilitated during the 3-year period following treatment. An obvious question emerges: How much of this $1,500 increase is due to VR participation and how much is due to other factors, such as the changing economy or changing characteristics of the clientele receiving services? The answer will vary depending on the model assumptions and particular econometric techniques used by analysts to measure earnings enhancements. The salient point is that with a valid comparison group and longitudinal earnings such questions can now be addressed with a greater degree of confidence.
Evaluation of the effects of the VR program is hampered by the lack of a sufficient earnings stream for participants and a benchmark with which to contrast any impacts. A state-based longitudinal earnings file which tracks all people exposed to the program overcomes both of these shortcomings. [Figure: Annual Earnings of VDRS Clients for 3 Years Pre-referral and Post-closure.] The resulting employment profile allows for examination of labor force participation and duration of employment spells as well as the more typical earnings gains.
Application of the earnings crossmatch can be of direct practical value to policy makers, legislators and VR administrators in program advocacy, budget justification and internal resource management. There are two specific uses for such analyses. First, inferences can be drawn which contrast public VR performance with alternative delivery systems. That is, how do outcomes of the VR process stack up against other federal remedial manpower training initiatives, such as the Job Training Partnership Act? Second, these studies can assist in improving the efficacy of VR service provision. Specifically, VR administrators could use these augmented program evaluations to assess different service strategies as well as examine the impact of expanded service provision for people with differing disabilities.
The process for implementing routine earnings crossmatches is at hand. For relatively modest costs, VR agencies can have access to a data enhancement that changes the scope of program evaluation. These longitudinal earnings records will allow for the introduction of econometric techniques that have been applied to other remedial manpower training programs. It appears that, for the first time, analysts can utilize scientifically defensible methods for assessing the various employment impacts of VR services.
1. Formal benefit-cost analysis of the program began with the seminal work of Conley (1969) and continued through the 1970's with efforts by Bellante (1972) and Worrall (1978). This literature has been updated recently through two major research efforts by Berkowitz (1988) and Berkeley Planning Associates (October, 1988).
2. While the recent BPA report identified disabled people from the Survey of Income and Program Participation (SIPP) as a potential comparison group, evaluations have yet to be undertaken with this new data set.
3. Such comparison groups were popularized by Kiefer (1978) in his seminal MDTA evaluations of the 1970's.
4. This manpower training phenomenon was first observed by Ashenfelter (1978) and has been documented by Kiefer (1979), Bassi (1983) and Lalonde (1986).
5. We also have earnings files for some 8,000 clients closed in Status 08 during this year. While this cohort may represent another comparison group, analysis of their earnings are not considered in this paper.
1. Abt Associates, Inc. (1974). Cost-benefit analysis. [Chapter 3]. The Program Services and Support System of the Rehabilitation Services Administration: Final Report. Cambridge, MA.
2. Ashenfelter, Orley. (1978). Estimating the effect of training programs on earnings. Review of Economics and Statistics, 60, 47-57.
3. Barnow, B. (1987). The impact of CETA programs on earnings. Journal of Human Resources, 22 (2), 157-193.
4. Bassi, L. J. (1983). The effect of CETA on the post-program earnings of participants. The Journal of Human Resources, 18 (4), 539-556.
5. Bellante, D. M. (1972). A multivariate analysis of a vocational rehabilitation program. The Journal of Human Resources, 7, 226-241.
6. Berkeley Planning Associates. (1988). Review of design options and recommendations for the impact evaluation of the federal-state vocational rehabilitation program. U.S. Department of Education.
7. Berkowitz, M. et al. (1988). Analysis of costs and benefits in rehabilitation. Philadelphia: Temple University Press.
8. Burtless, G. (1985). Are targeted wage subsidies harmful? Evidence from a wage voucher experiment. Industrial and Labor Relations Review, 39, 105-114.
9. Conley, R. W. (1969). A benefit-cost analysis of the vocational rehabilitation program. Journal of Human Resources, 4, 226-252.
10. Dean, D., & Dolan, R. (1986). Towards an improved methodology for estimating benefits of the vocational rehabilitation program. Rehabilitation Counseling Bulletin, 30 (2), 110-115.
11. Dickinson, K., Johnson, T., & West, R. (1986). An analysis of the impact of CETA programs on participants' earnings. The Journal of Human Resources, 21, 64-91.
12. Gay, R., & Borus, M. (1980). Validating performance indicators for employment and training programs. Journal of Human Resources, 15 (1), 29-48.
13. Greenblum, J. (1977). Effect of vocational rehabilitation on employment and earnings of the disabled: State variations. Social Security Bulletin, 40, 3-16.
14. Hu, T., & Lee, M. and Stromsdorfer, E. (1971). Economic returns to vocational and comprehensive high school graduates. Journal of Human Resources, 6, 25-50.
15. Kiefer, N. M. (1978). Federally subsidized occupational training and the employment and earnings of male trainees. Journal of Econometrics, 8, 111-125.
16. Lalonde, R. J. (1986). Evaluating the econometric evaluations of training programs with experimental data. American Economic Review, 76 (4), 604-620.
17. Moffitt, R. (1987). Symposium on the econometric evaluation of manpower training programs. Journal of Human Resources, 22 (2), 149-156.
18. Noble, J. H. (1988). Technical and practical feasibility of using experimental designs to evaluate vocational rehabilitation services. Conference on Issues Arising from the BPA Recommended Approach for Evaluating the Federal-state Vocational Rehabilitation Program.
19. Nowak, L. (1983). A cost-effectiveness evaluation of the federal/state vocational rehabilitation program using a comparison group. The American Economist, 27, 23-29.
20. Rehabilitation Services Administration. (1982). Annual report to the President and the Congress, FY 1982. U.S. Dept. of Education.
21. Rehabilitation Services Administration. (1989). The long-term impact of vocational rehabilitation: RSA-SSA Data Link Analysis. (Information Memo RSA-IM-89-38).
22. Walter, G., Welsh, W., & Serve, M. (1988). Providing a college education to deaf students: Why it pays. American Rehabilitation, 14 (2), 16-20.
23. Worrall, J. D. (1978). A benefit-cost analysis of the vocational rehabilitation program. Journal of Human Resources, 13 (2), 285-298.
Title Annotation: Vocational Rehabilitation and Competitive Employment
Author: Dean, David H.
Date: March 22, 1991