Assessing Client Satisfaction in Vocational Rehabilitation Program Evaluation: A Review of Instrumentation.
The Rehabilitation Act Amendments of 1992 identified the need for state programs to address the serious problem of "patterns of inequitable treatment of minorities (that) have been documented in all major junctures of the vocational rehabilitation process" (Sec. 102, 3). Ongoing program evaluation is required, since annual reports describing progress toward remediation of this problem must be submitted to Congress. Additionally, the Amendments mandate that consumer satisfaction surveys be conducted to identify other areas of vocational rehabilitation that need to be expanded, improved, or modified.
More than 20 years ago Reagles, Wright, and Butler (1970) reviewed the literature on instrumentation to measure client satisfaction and found a "dearth of studies." A subsequent review conducted by the present authors identified a number of measures of client, consumer, and patient satisfaction that have been developed over the past 20 years in psychology, medicine, and marketing. The purpose of the present paper is to review four of the instruments identified that would appear to hold promise in the evaluation of client satisfaction in vocational rehabilitation programs: the Scale of Client Satisfaction (Reagles, Wright, & Butler, 1970); the Client Satisfaction Questionnaire (CSQ; Larsen, Attkisson, Hargreaves, & Nguyen, 1979); the Evaluation Ranking Scale (ERS; Pascoe & Attkisson, 1983); and the Patient Satisfaction Questionnaire (PSQ; Ware, Snyder, & Wright, 1976a, 1976b). The Scale of Client Satisfaction was developed specifically for use in vocational rehabilitation programs while the others were developed for use in other settings. Each of the instruments is described in terms of format and development, as well as documentation regarding reliability and validity. Finally, a summary of the critical issues surrounding measurement of client satisfaction is presented, along with suggestions for further research.
Reviews of Scales
Scale of Client Satisfaction
Description and development. The Scale of Client Satisfaction was developed at the Regional Rehabilitation Research Institute, University of Wisconsin-Madison, as a part of the Wood County Project to evaluate the impact of a five-year expanded program of vocational rehabilitation (Wright, Reagles, & Butler, 1970). Prior to the Wood County Project, the measurement of client satisfaction was frequently excluded from the overall evaluation of the outcome of rehabilitation programs (Reagles, Wright, & Thomas, 1972).
The Scale of Client Satisfaction consists of 14 items. Some items feature multiple-choice responses while others use a yes/no format. Among the aspects of satisfaction addressed are the convenience of appointments (e.g., time and location), helpfulness of the counselor, length of time from first vocational rehabilitation contact to first appointment with the counselor, frequency of visits with the counselor, and degree of satisfaction with the amount of time spent with the counselor (Reagles et al., 1972).
The scale is a self-report measure recommended for administration about six months after case closure. Scoring weights for all possible responses to each of the 14 items were empirically determined using the method of reciprocal averages (RAVE), so as to maximize the internal consistency of total scores (Reagles et al., 1972). The weights corresponding to responses for each item are summed over the 14 items to yield a total score, with higher scores representing greater satisfaction with rehabilitation services.
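The method of reciprocal averages iteratively re-estimates response-option weights so that total scores are maximally internally consistent. As a minimal sketch of that iteration (the response matrix here is hypothetical and far smaller than the actual 14-item scale, and the empirically derived weights of the original instrument are not reproduced), each option's weight is repeatedly reset to the mean total score of the respondents who chose it:

```python
import numpy as np

# Hypothetical responses: rows = respondents, columns = items;
# each entry is the index of the response option chosen (0-based).
responses = np.array([
    [0, 1, 0],
    [1, 1, 1],
    [0, 0, 1],
    [1, 0, 0],
])
n_options = responses.max(axis=0) + 1

# Start with arbitrary weights (here, the option index itself).
weights = [np.arange(k, dtype=float) for k in n_options]

for _ in range(50):  # iterate until the weights stabilize
    # Total score for each respondent under the current weights.
    totals = sum(w[responses[:, j]] for j, w in enumerate(weights))
    new_weights = []
    for j, k in enumerate(n_options):
        # Re-estimate each option's weight as the mean total score
        # of the respondents who chose that option.
        w = np.array([totals[responses[:, j] == opt].mean()
                      for opt in range(k)])
        # Standardize within the item to keep the metric from drifting.
        w = (w - w.mean()) / w.std()
        new_weights.append(w)
    weights = new_weights

# Final total scores under the converged weights; higher = more satisfied.
scores = sum(w[responses[:, j]] for j, w in enumerate(weights))
```

With this toy data the procedure converges quickly, and the respondent who chose the second option on every item receives the highest score.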
Reliability and validity. The Scale of Client Satisfaction was administered to 483 clients who received services from the Wisconsin Division of Vocational Rehabilitation and whose casefiles were closed as "rehabilitated." A Hoyt reliability coefficient of .83 was obtained, reflecting internal consistency of total scores. This was interpreted to mean that: "(a) the development of a scale for the purpose of measuring client satisfaction, conceptualized as a single variable, is feasible; (b) the 14 items are indeed measuring a single underlying variable...client satisfaction; (c) the items are scaleable; (d) the Hoyt reliability coefficient (.83) is sufficiently high for the purposes of this study; and (e) the inter-item correlation coefficients ranged from .09 to .67 with the majority between .22 and .31" (Reagles et al., 1972, p. 19).
No test-retest reliability estimates were reported. In support of content validity, the developers indicated that instrument content is appropriate, based on the opinions of "individuals knowledgeable in the dimensions of client satisfaction" (Harrison, Garnett, & Watson, 1981, p. 198).
Evaluation. The Scale of Client Satisfaction was specifically designed to measure satisfaction with vocational rehabilitation services. However, this instrument provides only a single score indicating global satisfaction, rather than separate scores indicating satisfaction with different aspects of service. Thus, the single score is not helpful in identifying strengths and limitations of the program. Furthermore, Reagles et al. (1972) observed that the items "tend to emphasize the clients' responses to direct interaction with their counselors. Therefore, the items may not fully represent the concept of client satisfaction" (p. 21).
Reagles and Crystal (1975) developed the Client Satisfaction Scale as a revision of the Scale of Client Satisfaction. Whereas the earlier scale used response categories that were mostly dichotomous (e.g. "yes" or "no"), the revised version uses a Likert-type format with several response categories.
For example, the first item on the original scale simply requests a "yes" or "no" response to the question, "Were the time and place of the appointments convenient for you?" In contrast, the first item on the revised version requires respondents to evaluate the convenience of appointment times and locations by choosing one of five responses ("always", "quite often", "undecided", "once in a while", "not at all") (Reagles & Crystal, 1975). Reagles and Crystal maintained that the revised scale is more comprehensive and has greater clarity and discrimination potential than the earlier scale.
Reagles and Crystal (1975) administered the Client Satisfaction Scale to 90 former clients who had received services from one district office of Wisconsin DVR and whose casefiles were closed as rehabilitated. Overall, the distribution of scores was similar to the distribution of scores obtained by the clients who completed the earlier version of the Client Satisfaction Scale. However, the distribution obtained using the new version was slightly skewed, with a greater proportion of scores at the higher or "more satisfied" end of the distribution (Reagles & Crystal, 1975).
Another major limitation of the Scale of Client Satisfaction is that the standardization sample was drawn entirely from rural areas in Wisconsin. Other than the 2% of the sample who were American Indians, the sample was composed entirely of Caucasians. More recent studies have utilized the Client Satisfaction Scale with individuals from other ethnic groups. Smart (1993), for example, used the Client Satisfaction Scale with a sample of Mexican-Americans in exploring the relationships between acculturation, acceptance of disability, and client satisfaction. Smart translated the Client Satisfaction Scale into Spanish so that both English and Spanish versions were available for use by the participants. However, new norms still need to be developed that are more representative of diverse rehabilitation client populations throughout the United States to enhance the generalizability of this instrument. Other potential limitations include the vulnerability to response bias in which clients respond in "socially desirable" ways (Cook & Cooper, 1979); failure to include clients closed as not rehabilitated in satisfaction studies (Reagles, 1979); and conducting one-time-only studies (Cook & Cooper, 1979).
Despite its limitations, the Scale of Client Satisfaction appears to be a useful tool for program evaluation. Problems with item format have been addressed through revisions, and the scale has been translated into Spanish. The test items are straightforward (although reading level is not indicated). Scoring appears to be simple, and it seems that the scale could be easily modified for clients with sensory or motor impairments. "Perhaps its greatest use -- since a single score of satisfaction is obtained -- would be to determine the correlates of satisfaction, to identify which interventive counselor functions are most related to clients' expressed satisfaction" (Reagles et al., 1972, p. 22).
Client Satisfaction Questionnaire
Description and development. The Client Satisfaction Questionnaire (CSQ) was developed by Larsen, Attkisson, Hargreaves, and Nguyen (1979) as a standardized measure of client satisfaction that could be used in a wide variety of settings. Based on a review of the literature, Larsen et al. identified nine program dimensions along which clients might vary in their satisfaction: physical surroundings; support staff; kind/type of service; treatment staff; quality of service; amount, length, or quantity of service; outcome of service; general satisfaction; and procedures. Nine items were written for each dimension; these were reviewed by experts and reduced to 31 items, which comprised the original version of the CSQ. After field testing in community mental health centers, an 8-item form (CSQ-8) and two parallel 18-item forms (CSQ-18A and CSQ-18B) were developed. Although originally conceptualized in terms of nine dimensions, the CSQ provides a single, global score representing general satisfaction. Items are rated on four-point Likert-type scales that do not allow for neutral responses; total scores are computed by summing across items, with higher scores reflecting greater satisfaction. The CSQ is available in English, Spanish, and Dutch.
Reliability and validity. Numerous studies have documented the high internal consistency of the CSQ (Wilkin, Hallam, & Doggett, 1992). The 8-item version has typically been found to produce higher item-total correlations and inter-item correlations than the 18-item versions, but coefficient alphas for all forms have been consistently above .90 (Attkisson & Zwick, 1982). In addition, Levois, Nguyen, and Attkisson (1981) obtained an alternate-forms reliability estimate of .82 between the two parallel forms (CSQ-18A and CSQ-18B) and found no significant differences between mean scores on the two forms, using a sample of 92 clients in a mental health day treatment program.
Modified versions of the CSQ-8 have been developed for specific settings. Greenfield (1983) reported coefficient alphas of .83 and .88 for a CSQ-3 and a CSQ-4, respectively, which were used as part of a larger questionnaire of client satisfaction in a university counseling center. Daly and Flynn (1985) modified the CSQ-8 for use in an inpatient rehabilitation unit; after eliminating one item that was found to correlate poorly with the others, a coefficient alpha of .78 was found for the resulting CSQ-7.
Several studies comparing the CSQ with other measures of client satisfaction and outcome have provided evidence supporting validity (Wilkin, Hallam, & Doggett, 1992). In a one-month follow-up study of 45 urban community mental health clients, the CSQ was found to correlate with outcome measures in the following ways: (a) clients who reported low satisfaction on the CSQ attended fewer sessions and were more likely to drop out of therapy, whereas those who reported high satisfaction attended more sessions and were more likely to remain in therapy; and (b) client and therapist global ratings of improvement correlated with reported satisfaction (Attkisson & Zwick, 1982). In a five-year study of clients served in a university counseling service (N = 1267), Greenfield (1983) found that CSQ satisfaction scores were lowest for students who attended only one session, increasing as students attended up to 16-20 sessions and dropping off after more than 20 sessions; in addition, differences in scores were found according to gender, student level, major, and the type of problem for which services were sought.
In regard to the appropriateness of the CSQ for use in different settings, Wilkin et al. (1992) suggested that the content of the instrument was only appropriate for the original target group, clients of community mental health programs. They also suggested that the CSQ was only appropriate for measuring satisfaction with a "specific episode of care for a particular problem, rather than a continuum of care for a variety of problems over a considerable period of time" (p. 251). In regard to use with individuals from different cultures, the CSQ has been used with a sample of mental health clients in the Netherlands (Brey, 1983) and with Hispanic clients from Texas Family Service Centers (Roberts, Attkisson, & Mendias, 1984). Finally, Wilkin et al. have noted that the CSQ has produced skewed distributions, with scores tending to be high, reducing the ability to detect small differences between samples or programs.
Evaluation. The CSQ has potential for use within rehabilitation settings as a measure of client satisfaction. For example, Daly and Flynn (1985) modified the CSQ for use in inpatient and outpatient rehabilitation hospital settings. The development of the CSQ in community mental health centers suggests its usefulness in settings serving individuals with long-term mental illness.
As a measure of global client satisfaction, the CSQ has a large amount of supporting evidence. However, it is limited by an overriding problem of highly skewed distributions of scores. Until researchers better understand the nature of the highly positive responses (which all instruments reviewed here have typically produced), results can only be interpreted in relative terms. Reports of "high" or "average" satisfaction in absolute terms have little meaning without comparison to a reference group.
A trend that holds promise is the use of the CSQ-8 as a general core of items to which situation-specific items may be added (Daly & Flynn, 1985; Greenfield, 1983). Further research is needed to determine if valid comparisons can be made across settings if contextual items are added to the CSQ-8. Further research is also needed to assess the effect of using alternative methods of administration of the CSQ, such as interview, computer, or braille formats.
Finally, nearly all of the literature on the CSQ has involved the developers either as authors or editors. While they deserve recognition for their contribution, other independent investigations are needed to provide a broader base of empirical support.
Evaluation Ranking Scale
Description and development. The Evaluation Ranking Scale (ERS) was developed by Pascoe and Attkisson (1983) as a measure of patient satisfaction to be used in evaluating the delivery of health care services within medical practices. The developers needed a measure that was "capable of registering satisfaction with specific aspects of health care delivery; resistant to response biases and other artifacts; and brief and easily administered" (Wilkin, Hallam, & Doggett, 1992, p. 244). Additionally, Pascoe and Attkisson observed that most attempts to measure patient satisfaction only produced positive responses. This would account for the fact that "the information carried by dimensions of health care is not displayed in the results of most individual satisfaction studies, i.e., dimensionality and relative dissatisfaction are being masked by methodological problems" (Pascoe & Attkisson, 1983, p. 337).
Unlike many other measures of patient or client satisfaction, the ERS focuses on the assessment of specific program dimensions rather than global satisfaction. Key characteristics of health services were identified in developing the scale, including accessibility, availability, physical environment, informational resources, interpersonal quality of patient-staff exchanges, technical skill of providers, service relevance, and the outcome or effectiveness of services (Pascoe & Attkisson, 1983). These characteristics were then reviewed with patients, administrators, and public health providers and were translated into six dimensions with corresponding descriptors: clinic location and appointments; clinic building, offices, and waiting time; clinic assistants and helpers; nurses and doctors; health services offered; and service results (Wilkin et al., 1992).
Each dimension is presented to respondents on separate 3 x 5 cards. Patients are asked to rate the cards using a two-stage process. The first stage requires them to "sort the cards in order of their importance in judging a service, regardless of the patient's positive or negative feelings about the particular service being evaluated" (Wilkin et al., 1992, p. 244). The second stage involves asking patients to rate absolute and relative quality by placing the cards along a 0 to 100 point scale (0 representing the worst possible health center and 100 representing the best possible health center). Patients are advised that they can place their cards anywhere along the continuum to represent how they feel about each of the six dimensions, and that the cards can overlap or be spaced as far apart as they wish.
Satisfaction ratings are weighted by multiplying each item by its reversed rank from the first stage. For example, if an item is ranked number 1 (most important), it is weighted 6. If that item is rated 95 in the second stage, the rating is multiplied by the weight of 6 for a satisfaction index of 570. The weighted scores are then divided by the sum of the weights to obtain a mean score, allowing weighted scores to be calculated for each item and for the scale as a whole (Wilkin et al., 1992).
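The arithmetic described above can be sketched as follows (Python; the dimension labels paraphrase the six ERS descriptors, and this one patient's ranks and ratings are invented for illustration):

```python
# Hypothetical ERS responses for one patient: each of the six dimension
# cards gets an importance rank (1 = most important) and a 0-100 rating.
ranks = {"location/appointments": 1, "building/offices/waiting": 4,
         "assistants/helpers": 5, "nurses/doctors": 2,
         "services offered": 3, "service results": 6}
ratings = {"location/appointments": 95, "building/offices/waiting": 60,
           "assistants/helpers": 70, "nurses/doctors": 85,
           "services offered": 80, "service results": 75}

n = len(ranks)  # six dimensions
# Reverse each rank so that rank 1 (most important) carries weight 6.
weights = {dim: n + 1 - r for dim, r in ranks.items()}

# Weighted satisfaction index per dimension (e.g., rank 1, rating 95 -> 570).
weighted = {dim: weights[dim] * ratings[dim] for dim in ranks}

# Overall score: sum of weighted indices divided by the sum of the weights.
overall = sum(weighted.values()) / sum(weights.values())
```

Under this weighting, a low rating on a dimension the patient ranks as most important pulls the overall score down far more than the same rating on a dimension ranked least important.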
The scale takes about ten minutes to administer. The ERS must be administered by a rater since administration and scoring are fairly complex tasks. Furthermore, the rater must undergo specialized training in order to avoid introducing rater bias.
Reliability and validity. The only evidence of reliability has been reported in the comparison of rankings obtained from two different groups of patients at the same urban health clinic. Spearman rank order correlation coefficients of .89 (p=.02) for the rankings section of the ERS and .71 (p=.06) for the ratings section of the ERS were obtained (Attkisson, Roberts, & Pascoe, 1983). The developers of the instrument interpreted the coefficients as indicating that patients in the two groups ranked the scale dimensions in virtually the same order and rated the dimensions in a very similar manner. Test-retest reliability coefficients were not provided, and evidence of internal consistency is also lacking.
Wilkin et al. (1992) stated that test development procedures were designed to ensure content validity, but that the combinations of item descriptions are sometimes unusual; for example, "clinic location" is included with "ease of obtaining appointments." The main evidence of validity is derived from comparing the ERS with global measures of satisfaction such as the CSQ. Compared with an 8-item version of the CSQ, the ERS produced lower overall measures of satisfaction because of its greater ability to discriminate between different components of health care (Pascoe & Attkisson, 1983). However, when compared with the 18-item CSQ, both instruments produced similar results (Pascoe, Attkisson, & Roberts, 1983).
Evaluation. The ERS is particularly beneficial for identifying specific program areas with which clients are satisfied or dissatisfied. An instrument similar in design to the ERS could be administered to vocational rehabilitation clients to provide more detailed information about the different components (e.g., counseling and guidance, training, job development and placement) of the vocational rehabilitation program than would a more global measure of client satisfaction. ERS results usually reveal less overall satisfaction, but identify the relative importance of different dimensions, and specify how positive or negative each dimension is thought to be by the patient.
The ERS has also been described as having the capability to detect subgroups of patients who are dissatisfied with specific program features (Pascoe & Attkisson, 1983). Since one of the intents of the Rehabilitation Act Amendments of 1992 is to be more inclusive of "unserved" and "underserved" populations in all phases of the rehabilitation process, utilizing an adapted version of the ERS could prove effective at evaluating progress toward meeting this objective.
Despite its advantages, the ERS is not without limitations. According to Wilkin et al. (1992), it is a costly instrument to administer, and the psychometric properties are questionable due to the limited evidence regarding reliability and validity. It would be difficult to modify the ERS for clients with sensory or motor impairments since respondents are required to sort cards and place them along a continuum on a chart. The scale may also be somewhat confusing since the first stage asks respondents to judge the importance of a service whereas the second stage asks them to rate the quality of a service. Reading level is not reported, but it does not appear that this instrument could be modified for individuals unable to read since respondents must be able to identify what is written on every card as they sort and arrange them along the continuum.
The ERS is a relatively complex instrument compared to pencil and paper checklists and requires specialized training to administer and score. Since it must be administered individually, it is more cumbersome and time consuming than techniques that involve simply interviewing clients by telephone or mailing them written questionnaires.
Finally, since the ERS was designed to be used in medical office settings, extensive changes to the items would be necessary before it could be utilized as a vocational rehabilitation program evaluation tool. A normative sample would have to be established and reliability and validity data would have to be collected. However, the ERS provides a unique approach to measuring client satisfaction and could possibly be utilized in a revised format to collect more specific information about those aspects of the vocational rehabilitation program which clients like and dislike.
Patient Satisfaction Questionnaire
Development and description. The Patient Satisfaction Questionnaire (PSQ) was developed by John Ware and colleagues at Southern Illinois University in 1976. Numerous subsequent studies have been conducted by Ware and associates at the Rand Corporation in California.
The goals of the initial project were "to develop a short, self-administered satisfaction survey that would be applicable in general population studies and would yield reliable and valid measures of concepts that had both theoretical and practical importance to the planning, administration, and evaluation of health services delivery programs" (Ware, Snyder, Wright, & Davies, 1983, p. 247). The developers viewed patient satisfaction as a multidimensional concept, with satisfaction ratings reflecting both the reality of the care provided and the patient's personal preferences and expectations.
From an initial pool of 2300 items drawn from the literature, a population survey, and professional experience, several versions of the PSQ have been developed, researched, and modified (Wilkin et al., 1992). The 68-item PSQ-II has been used most frequently in validation studies. The more recent 51-item PSQ-III, developed for use in outcome studies, was intended to more directly assess medical care experiences rather than only general attitudes about care.
Like other versions, the PSQ-III requires the patient to indicate how strongly he or she agrees or disagrees with a series of statements using a five-point scale. Items are unweighted and belong to one of seven subscales: general satisfaction, technical quality, interpersonal aspects, communication, financial aspects, time spent with doctor, and access/availability/convenience. A short form of the PSQ-III contains only 18 items in four subscales and allows for reference to a particular doctor (Wilkin et al., 1992).
Reliability and validity. Field tests of the PSQ-II were conducted with four samples (N=323-040) from general population household surveys and a survey of patients at a family practice center (Ware et al., 1983). Estimates of test-retest reliability over six-week intervals were obtained for two of the samples. Of the 17 subscales administered twice, 82 percent exceeded a product-moment correlation of .50. Individual items were less reliable, with only 25 percent exceeding a correlation of .50.
Estimates of internal consistency, using coefficient alpha based on single administrations of the PSQ-II to each of the four samples, ranged from .23 to .94, with estimates for 94 percent of the subscales exceeding .50 (Ware et al., 1983). According to Wilkin et al. (1992), initial field tests of the PSQ-III have shown improvement in the internal consistency of the subscales.
Factor analyses have provided supporting evidence for construct validity in that subscales have tended to assess the hypothesized dimensions of patient satisfaction. This evidence, however, is weakened by high inter-item and inter-scale correlations which suggest that the dimensions may be measuring similar rather than different components (Wilkin et al., 1992).
After describing a series of validation studies on the PSQ-II, Ware et al. (1983) concluded: "1) Patient satisfaction with medical care is a multidimensional concept, with dimensions that correspond to the major characteristics of providers and services; 2) The realities of care are reflected in patients' satisfaction ratings; 3) The influence of patients' expectations, preferences for specific features of care, and other hypothetical constructs on patient satisfaction remain to be determined" (p.262).
In a comparative study of the PSQ, CSQ, and ERS, Pascoe, Attkisson, and Roberts (1983) found that the PSQ produced a greater range of scores, resulting in less skewed distributions. This positive finding was tempered, however, by the observation that an average of 10-24 of the 68 items were scored "uncertain" and that 86 percent of the respondents (N=99) had at least one pair of items marked inconsistently. They further found that scores on the PSQ were not related to a measure of global service satisfaction and concluded that the PSQ was assessing general attitudes about health care rather than personal experiences with service. Roberts and Attkisson (1983) reported that over one-third of the variance in satisfaction as measured by the PSQ was attributable to aspects of life satisfaction and general well-being, rather than service satisfaction.
Ware and Davies (1983) found that the PSQ-II predicted changes in medical care providers and disenrollments from prepaid health plans. They also found that it correlated with whether patients intended to seek care from their physician, the emergency room, or to treat themselves. In addition, they found that even relatively small differences in satisfaction ratings (small effect sizes) were able to predict intentions and behavior.
Evaluation. The developers of the PSQ have been commended for their attempts to establish a theoretical basis for the definition of patient satisfaction and for their extensive research on development and validation (Wilkin et al., 1992). This research has produced a large set of data that increases opportunities for normative comparisons. Despite efforts to make the PSQ a more direct measure, it continues to be a general, indirect measure of satisfaction, which limits its usefulness in program evaluation (Wilkin et al., 1992). Nearly all research on the PSQ has been conducted by its developers at the Rand Corporation, except for that done by developers of the CSQ.
Rehabilitation counselors may find the PSQ-II (without modification) useful as a tool to assess client satisfaction with their medical care and/or with the medical consultations purchased by the agency. Some research has suggested that if patients are not satisfied with care, they may avoid seeking it, which could lead to decreased preventive care and increased need for crisis management (Ware & Davies, 1983). An assessment of a client's general attitudes toward health care could be a useful tool in counseling individuals who are not managing their disabilities well.
More research is needed on the 18-item version of the PSQ to develop its ability to measure context-specific dimensions of satisfaction as well as general attitudes. For many items, it appears that "counselor" could be substituted for "doctor," but follow-up validation studies would be necessary. The PSQ is a model for item development and validation as well as for the process of attempting to ground the instrument in a theoretical framework.
Summary
The importance of measuring client satisfaction has increased in recent years as a result of federal legislation. Current methods for assessing satisfaction do not permit meaningful comparison across client populations and programs. The work of the developers of the instruments reviewed here moves closer to that goal, but many problems and questions remain. To begin with, there exists no clear theoretical framework for defining client satisfaction (Lehman & Zastowny, 1983), although Ware et al. (1983) made a good attempt when developing the PSQ. In addition, instruments vary in format, terminology, and purpose. Most have been found to measure a unidimensional construct of global satisfaction, although some (e.g., the ERS and PSQ) have attempted to tap multidimensional components of client satisfaction.
A multidimensional approach is more useful in program evaluation since it gives information about specific strengths and limitations of a program. Research suggests that measures of client satisfaction must take "context" into consideration in order to be valid and useful (Peterson & Wilson, 1992). Several program evaluators have attempted to do this by modifying established instruments and adding setting-specific questions (Daly & Flynn, 1985; Greenfield, 1983). Whether it is possible to meaningfully compare client satisfaction ratings from instruments that combine context-specific and general items is yet to be determined. These reviewers suggest that context is a variable that does affect satisfaction ratings and must be taken into consideration when developing instruments.
Other variables mentioned in the literature that may affect ratings are the following: social desirability (Sabourin et al., 1989); mood, attempts to minimize decision regret, number of choices available, timing of administration, demographics (Peterson & Wilson, 1992); and anonymity (Levois et al., 1981; Soelling & Newell, 1983). Very little research has been conducted on these variables, especially within rehabilitation counseling, and results have frequently been inconsistent.
Perhaps the most critical factor to consider when assessing client satisfaction is the highly skewed distributions that all instruments produce. Because of these consistently high ratings, one must be careful not to over-interpret results. General descriptions such as "high" and "average" satisfaction have little meaning in absolute terms; rather, ratings should be conceptualized in relative terms. Due to the extensive research with available instruments, a meta-analysis of normative data has been possible (Lehman & Zastowny, 1983), and more such studies are needed.
From their review of customer satisfaction measurement, Peterson and Wilson (1992) concluded that substantial methodological problems are created by and potentially contribute to skewed distributions. They pointed out that correlations between satisfaction ratings and other variables may be weakened because of skewness and accompanying range restrictions. They recommended using statistical tests that compare distributions rather than central tendency.
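One distribution-level comparison of the kind Peterson and Wilson recommend is the two-sample Kolmogorov-Smirnov statistic, which compares entire empirical distributions rather than means. The following sketch (Python; the simulated top-heavy rating data stand in for real satisfaction scores from two hypothetical programs) computes that statistic directly:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated satisfaction ratings from two programs on a 1-5 scale,
# both piling up near the top (the skewness discussed above).
a = np.clip(np.round(rng.normal(4.4, 0.6, 200)), 1, 5)
b = np.clip(np.round(rng.normal(4.1, 0.8, 200)), 1, 5)

# Two-sample Kolmogorov-Smirnov statistic: the largest vertical gap
# between the two empirical cumulative distribution functions.
grid = np.union1d(a, b)
ecdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
ecdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
ks_stat = np.abs(ecdf_a - ecdf_b).max()
```

Because the statistic is sensitive to differences anywhere in the two distributions, it can register a real program difference even when ceiling effects compress the difference in means toward zero.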
These reviewers believe that, with modification, all four instruments (Scale of Client Satisfaction, CSQ, ERS, PSQ) have potential for use in vocational rehabilitation. Research will be needed to explore questions of reliability and validity as the instruments are administered to clients with varying types of functional limitations.
Attkisson, C.C. & Zwick, R. (1982). The Client Satisfaction Questionnaire: Psychometric properties and correlations with service utilization and psychotherapy outcome. Evaluation and Program Planning, 5, 233-237.
Attkisson, C.C., Roberts, R.E., & Pascoe, G.C. (1983). The Evaluation Ranking Scale: Clarification of methodological and procedural issues. Evaluation and Program Planning, 6, 349-358.
Brey, H. (1983). A cross-national validation of the client satisfaction questionnaire: The Dutch experience. Evaluation and Program Planning, 6, 395-400.
Cook, D.W. (1977). Guidelines for conducting client satisfaction studies. Journal of Applied Rehabilitation Counseling, 8, 107-114.
Cook, D.W. & Cooper, P.G. (1979). Rehabilitation program evaluation. In B. Bolton (Ed.), Rehabilitation counseling research (pp. 193-211). Baltimore: University Park Press.
Daly, R. & Flynn, R.J. (1985). A brief consumer satisfaction scale for use in in-patient rehabilitation programs. International Journal of Rehabilitation Research, 8, 335-338.
Fink, A. (1993). Evaluation fundamentals: Guiding health programs, research, and policy. Newbury Park, CA: Sage.
Glass, R.M. (1989). Program evaluation. In B. England, R.M. Glass & C.H. Patterson (Eds.), Quality rehabilitation: Results oriented care (pp. 19-37). Chicago: American Hospital Publishing.
Greenfield, T.K. (1983). The role of client satisfaction in evaluating university counseling services. Evaluation and Program Planning, 6, 315-328.
Harrison, D.K., Garnett, J.M., & Watson, A.L. (1981). Michigan studies in rehabilitation: Client assessment measures in rehabilitation. Ann Arbor: University of Michigan Rehabilitation Research Institute.
Larsen, D.L., Attkisson, C.C., Hargreaves, W.A., & Nguyen, T.D. (1979). Assessment of client/patient satisfaction: Development of a general scale. Evaluation and Program Planning, 2, 197-207.
Lehman, A.F. & Zastowny, T.R. (1983). Patient satisfaction with mental health services: A meta-analysis to establish norms. Evaluation and Program Planning, 6, 265-274.
Levois, M., Nguyen, T.D., & Attkisson, C.C. (1981). Artifact in client satisfaction assessment: Experience in community mental health settings. Evaluation and Program Planning, 4, 139-150.
Pascoe, G.C. & Attkisson, C.C. (1983). The Evaluation Ranking Scale: A new methodology for assessing satisfaction. Evaluation and Program Planning, 6, 335-347.
Pascoe, G.C., Attkisson, C.C., & Roberts, R.E. (1983). Comparison of indirect and direct approaches to measuring patient satisfaction. Evaluation and Program Planning, 6, 359-371.
Peterson, R.A. & Wilson, W.R. (1992). Measuring customer satisfaction: Fact and artifact. Journal of the Academy of Marketing Science, 20(1), 61-71.
Reagles, K.W. (1979). A handbook for follow-up studies in the human services. New York: ICD Rehabilitation and Research Center.
Reagles, K.W. & Crystal, R. (1975). Study of vocational assessment: Final report in Wisconsin Studies in Vocational Rehabilitation. Madison: University of Wisconsin Regional Rehabilitation Research Institute.
Reagles, K.W., Wright, G.N., & Butler, A.J. (1976). Correlates of client satisfaction in an expanded vocational rehabilitation program (Monograph XII, Series 2), Wisconsin Studies in Vocational Rehabilitation. Madison: University of Wisconsin Regional Rehabilitation Research Institute.
Reagles, K.W., Wright, G.N., & Thomas, K.R. (1972). Development of a scale of client satisfaction for clients receiving vocational rehabilitation counseling services. Rehabilitation Research and Practice Review, 2(2), 15-22.
Roberts, R.E., Attkisson, C.C., & Mendias, R.M. (1984). Assessing the Client Satisfaction Questionnaire in English and Spanish. Hispanic Journal of Behavioral Sciences, 6, 385-395.
Sabourin, S., Laferriere, N., Sicuro, F. & Coallier, J.C. (1989). Social desirability, psychological distress, and consumer satisfaction with mental health treatment. Journal of Counseling Psychology, 36, 352-356.
Smart, J.F. (1993). Level of acculturation of Mexican-Americans with disabilities and acceptance of disability. Rehabilitation Counseling Bulletin, 36, 199-211.
Soelling, M.E. & Newell, T.G. (1983). Effects of anonymity and experimenter demand on client satisfaction with mental health services. Evaluation and Program Planning, 6, 329-333.
Walls, R.T. & Tseng, M.S. (1987). Measurements of client outcomes in rehabilitation. In B. Bolton (Ed.), Handbook of measurement and evaluation in rehabilitation (pp. 183-201). Baltimore: Paul H. Brookes Publishing Co.
Ware, J.E. & Davies, A.R. (1983). Behavioral consequences of consumer dissatisfaction with medical care. Evaluation and Program Planning, 6, 291-298.
Ware, J.E., Snyder, M.K. & Wright, W.R. (1976a). Development and validation of scales to measure patient satisfaction with health care services: Volume 1 of a final report part A: Review of literature, overview of methods, and results regarding construction of scales. (NTIS No. PB 288-329). Springfield, VA: National Technical Information Service.
Ware, J.E., Snyder, M.K. & Wright, W.R. (1976b). Development and validation of scales to measure patient satisfaction with health care services: Volume 1 of a final report part B: Results regarding scales constructed from the patient satisfaction questionnaire and measures of other health care perceptions. (NTIS No. PB 288-330). Springfield, VA: National Technical Information Service.
Ware, J.E., Snyder, M.K., Wright, W.R., & Davies, A.R. (1983). Defining and measuring patient satisfaction with medical care. Evaluation and Program Planning, 6, 247-264.
Wilkin, D., Hallam, L., & Doggett, D. (1992). Measures of need and outcome for primary health care. Oxford: Oxford University Press.
Wright, G.N., Reagles, K.W., & Butler, A.J. (1970). An expanded program of vocational rehabilitation: Methodology and description of client populations. Wisconsin Studies in Vocational Rehabilitation, 2, XI.
Received: October 1994
Revision: February 1995
Acceptance: March 1995
Lynn C. Koch, Department of Rehabilitation Psychology and Special Education 432 N. Murray Street, Madison, Wisconsin 53706
Author: Mary Ann Merz
Publication: The Journal of Rehabilitation
Date: October 1, 1995