
Problems and benefits associated with consumer satisfaction evaluation at independent living centers.

Because of the strong emphasis on consumer involvement, consumer satisfaction evaluation (CSE) is often referenced as a means to obtain independent living center (ILC) evaluation information. This form of evaluation is often used without recognition of its inherent problems. The purpose of this paper is to review the state of the art of consumer satisfaction evaluation for independent living centers. First, CSE dimensions are discussed; problems and strengths are then reviewed from the literature. Finally, a discussion of future solutions and improvements is presented. Although there are some major problems associated with CSE, it is a popular, economical evaluation concept that is valued in independent living rehabilitation. In two recent national surveys of independent living centers, CSE emerged as a high-priority component for program evaluation (Jones, Petty, Boles, & Mathews, 1986; Budde, Petty, & Nelson, 1984). In addition, the staff at the Research and Training Center on Independent Living have observed a number of ILCs that use consumer satisfaction in part or in whole to evaluate their centers. Nevertheless, many centers and agencies may not be aware of the advantages and problems associated with CSE or of its current state of technical development.

Dimensions of CSE

Service providers, administrators, and researchers define CSE in various ways. One way is to identify common practices across various dimensions. Lebow (1983a) focuses on both the narrow and broad boundaries of CSE. His narrow definition includes the extent to which services gratify consumers' wants - service adequacy, availability, accessibility, process, cost, and so on. The broad definition includes items that correlate with measures such as goal attainment, premature termination, and return for additional service (Lebow, 1983a, p. 73).

While some variations are reported in evaluation approaches, the most common method is a "discrepancy approach," where consumers' expectations are matched to their perceptions of services provided (Pascoe, 1983). This approach is usually a self-report method requiring evaluation of various dimensions of a service. Common service dimensions have been defined by Pascoe and Atkisson (1983) as accessibility, availability, physical environment, information resources, interpersonal quality of patient-staff exchanges, technical skill of providers, service relevance, and outcome or effectiveness of services. These dimensions and many of their subcategories cut across most services.

However, various types of services might require unique subcomponents. A specific subcomponent for ILCs, for example, might include assessing the level of satisfaction with community options. ILCs spend a great deal of time advocating for and developing options for independent living, e.g., accessible housing, accessible transportation, funds for personal care attendants, legal rights. Results illustrating satisfaction with such options could be used to target needed options and determine commitment levels to develop those options. Another potential use is evaluating satisfaction with the decision-making process. Since consumers are the decision makers and must take responsibility for their own lives (Budde & Bachelder, 1988), they should select the preferred options and services and use them to meet their independent living goals. Evaluation results could then be used to determine satisfaction with consumer decision making and to validate or modify the type and level of assistance ILC staff provide. Dimensions for consumer involvement in overall service delivery decisions might also be included. Evaluation results could validate or lead to improvements in the process of consumer involvement at the ILC policy level.

Although there are variations, the typical CSE approach combines discrepancy and self-report methods. Instruments include items from several broad areas, and are sometimes designed to evaluate subcomponents of those areas - particularly those concerned with unique features. While CSE is predominantly used for group evaluation, it has potential for individual evaluation. Finally, evaluation or correlation of outcomes and satisfaction should be included in CSE to determine whether there is any relationship between satisfaction and enabling consumers to meet their goals.

Problems with CSE

Because CSE instruments have not been developed or tested for ILCs, ILCs and state agencies typically develop their own CSE instruments. This practice is not unique to the rehabilitation field; it also occurs in the mental health field (Westbrook & Oliver, 1981; Linn, 1975; Stamps & Finkelstein, 1981) and in educational counseling fields (Greenfield, 1983). As a result, psychometric practices and experimental designs are often ignored, and instruments are developed and used without understanding of their inherent problems.

Consumer bias

The literature contains numerous reports of consumer bias in which consumers rate satisfaction high (e.g., DeShane, Brown, & Johnson, 1979; Denner & Halprin, 1974; Frank, 1974; Gilligan & Wilderman, 1977; Goyne & Ladoux, 1973; Henchy & McDonald, 1973; Linn, 1975; Pascoe, 1983; Lebow, 1983a, b). In the above studies, a predominant positive bias was reported, and Linn (1975) stated this was true regardless of the evaluation method used, the components evaluated, or the population studied. Scheirer (1978) reports that services are evaluated on the positive side even without evidence of progress toward intended program goals.

There are several reasons why consumers rate satisfaction high. Jones (1964) suggests that attempts are made to please the social service agency. Denner and Halprin (1974) suggest that when consumers use the same instrument to evaluate both their progress and the social service, they want to make a good impression. If the consumer's progress was unsatisfactory, he or she would rate satisfaction high to show that poor progress was not the provider's fault. Also, if questions are answered negatively, consumers fear they might lose their services. They might also respond positively because of the Hawthorne Effect (Roethlisberger & Dickson, 1939). Pascoe (1983) identified gratitude to staff, personal program rewards, and experimentation bias as other factors.

While positive bias might be dismissed by service providers anxious to report high service satisfaction, the bias problem is nonetheless real. Lebow (1983a) reports that satisfaction ranges from 72% to 83% for various types of mental health services. He also reports that less than 10% of the population reports dissatisfaction.

Validity

If the validity of in-house instruments is not established, there is no way to determine how accurately items measure what they are supposed to measure. A standardized instrument with established external validity would be useful to measure common components across centers. However, internal validity of items that are unique to a center would need to be established for that center. ILC evaluators might use instruments with established external validity and add items unique to a particular center. Then, psychometric practices would need to be used to test the validity of all items selected for a particular ILC.

Locating a valid CSE instrument for ILCs could be a difficult task, since few have been reported. An alternative would be to use instruments from another field, e.g., health care or mental health. The Client Satisfaction Questionnaire 31 (CSQ-31), for example, developed by Larsen, Atkisson, Hargreaves, and Nguyen (1979) for the mental health field, has an established level of construct validity. Items were validated by California mental health professionals and county mental health board members. Some of the CSQ-31 items are applicable to ILCs, for example: "In general, how satisfied are you with the comfort and attractiveness of our facility?" and "Have the services you received helped you deal more effectively with your problems?" However, items unique to independent living might be added, such as: "How satisfied are you that you made decisions independently?" and "How adequate was the list of housing options?"

One concern about the validity of the CSQ-31 involves the subjects used in the validity tests. Consumers did not validate items. It would seem more consistent to talk about consumer satisfaction when consumers are involved in validity tests.

There are several other validity issues. First, terminology within items can have a variety of meanings, e.g., how difficult was it, how satisfied are you, was the problem resolved, did you progress? Second, response categories might be limiting. Arbitrary cutoffs between satisfied and dissatisfied responses have been criticized by Locker and Dunt (1978) and Hulka and Zyzanski (1982); a sufficient range of response options should be used for each question - a 4-point scale at minimum. Third, the wording of items can produce acquiescent responses. Ware (1978) found that 40% to 60% of respondents agree with item emphasis regardless of content: favorably worded items increased scores, and unfavorable wording decreased scores.

Although the issue of validity is important, addressing it can also be costly. Most ILCs lack the resources and personnel to establish validity for their instruments. Moreover, instruments with established validity in other fields might not transfer to independent living. In either case, a test of validity is needed, but it could prove too costly for one ILC.

Reliability

Like validity, when the reliability of an instrument is not tested, evaluation data can be questioned. Again, there are few instruments whose reliability has been tested, and none has been reported for independent living. Instruments from health care and mental health generally report reliability of about .50. With the few instruments achieving reliabilities at the .90 level, procedural problems have been reported (Ware, 1978; Counte, 1979; Mangelsdorff, 1979). If instruments from other fields were found to contain valid items, their reliability would be suitable for group use (.50 or higher), but they would not be applicable for individual use (.90 or higher) (Pascoe, 1983).
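Testing reliability is a concrete statistical exercise. As a minimal sketch - the ratings, respondent counts, and function below are invented for illustration, not taken from the article - internal consistency can be estimated with Cronbach's alpha and checked against the group-use (.50) and individual-use (.90) thresholds cited above:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Estimate internal-consistency reliability.

    scores: 2-D array, rows = respondents, columns = instrument items.
    """
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point satisfaction ratings from 8 respondents on 4 items.
ratings = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
    [5, 4, 4, 5],
    [2, 2, 3, 2],
])

alpha = cronbach_alpha(ratings)
# Thresholds cited in the text (Pascoe, 1983):
# >= .50 may suffice for group-level use; >= .90 is needed for individual use.
print(f"alpha = {alpha:.2f}  (group use >= .50, individual use >= .90)")
```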

If ILCs are to use valid and reliable CSE instruments, the only solution is to develop an instrument specifically for ILCs. The next step would then be to establish validity and test reliability.

Problems with CSE Administration

Even if a valid, reliable CSE instrument were developed, ILC staff and administrators would need to resolve problems concerning administration. One problem involves surveying the population. Some ILCs survey all consumers all of the time. It might be more economical to survey a random sample for a specific period of time. This could be done at intervals over the life of the ILC. For example, a random sample of 100 consumers from 5 randomly selected months might be used for the first study. The same study could be repeated every 2 1/2 years.
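A minimal sketch of that two-stage draw follows; the roster, the count of consumers per month, and the ID format are hypothetical assumptions, not from the article:

```python
import random

# Hypothetical roster: for each month of a one-year window, the IDs of
# consumers who had contact with the ILC that month.
roster = {month: [f"consumer-{month:02d}-{i:03d}" for i in range(60)]
          for month in range(1, 13)}

rng = random.Random(42)  # fixed seed so the draw can be reproduced

# Stage 1: randomly select 5 of the 12 months, as the text suggests.
sampled_months = rng.sample(sorted(roster), 5)

# Stage 2: pool consumers from those months and draw 100 without replacement.
pool = [c for m in sampled_months for c in roster[m]]
sample = rng.sample(pool, 100)

print("months:", sampled_months)
print("sample size:", len(sample), "e.g.", sample[:3])
```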

There are two additional problems with random sampling. First, the random sample must be representative. It should draw from the composite population, including those at various stages of service delivery and those who have prematurely terminated. Consumers who have prematurely terminated are difficult to locate and unlikely to answer surveys, so retrieving their data requires additional effort and cost. If the data cannot be retrieved or the process is too costly, the sample will become biased to the extent of the missing data. Survey results would then be suspect, and their potential to validate or modify service would be questionable. The same problem exists when data are collected long after service completion: the longer the time period, the more difficult the data will be to retrieve (Denner & Halprin, 1974; Larsen et al., 1979). The method used also affects the amount of data that will be collected. Lebow (1983b) reports that satisfaction surveys generally yield response rates of 38% for mail surveys, 42% for phone inquiries, 67% for in-person, at-home interviews, 64% for a combination of these techniques, and 85% for interviews or questionnaires presented at the treatment facility.

The second sampling problem concerns sampling consumers who (1) are at different stages of service delivery and (2) receive different services. Consumers have different needs at intake than they do when receiving or completing services. At intake, their needs might include transportation to the ILC, understanding about or attention to their problem, or orientation to the intake process. At the service delivery stage, consumers might be concerned about their opportunities to make decisions, their relationship with the provider, or their progress. At the post-service stage, consumers might be concerned about the final outcome, their overall view of the program, or follow-along relationships. Mann (1973) has proposed a model that uses different criteria to assess different stages. Moreover, ILC consumers do not always use all services offered. Not all require housing assistance, independent living skills training, equipment repair, transportation, and so on. In fact, some services are provided through different methods, e.g., providing information, referral, direct service, service assistance, peer counseling, intervention/advocacy. If an ILC wants to determine overall satisfaction levels, general questions will suffice. However, valuable information to validate or modify specific services and service methods will be lost.
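Returning to Lebow's (1983b) response rates above, a short sketch of the planning arithmetic they imply; the target of 100 completed surveys is a hypothetical figure, not from the article:

```python
import math

# Response rates for satisfaction surveys reported by Lebow (1983b).
response_rates = {
    "mail": 0.38,
    "phone": 0.42,
    "in-person, at-home": 0.67,
    "combination": 0.64,
    "at treatment facility": 0.85,
}

target_completes = 100  # hypothetical goal: 100 completed surveys

# Expected number of consumers to contact per method, ignoring the
# nonresponse bias the text warns about (dropouts who never reply).
for method, rate in response_rates.items():
    invited = math.ceil(target_completes / rate)
    print(f"{method:>22}: contact about {invited} consumers")
```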

Strengths of CSE

Although there are problems that need to and can be overcome, there are good reasons to improve and use CSE. The primary reason is that CSE lends itself to "consumer control," a key concept in independent living rehabilitation (Budde & Bachelder, 1987; Budde & Bachelder, 1988; DeJong, 1979; Frieden, 1978; Stoddard, 1978). Consumer control requires individuals with disabilities who receive services from ILCs (consumers) to select their own independent living goals, to select the services that will be used to attain those goals, and to take responsibility for attaining them. Consumer control also involves evaluating ILC services and impact and providing feedback or advocacy to ensure that ILC services meet consumer needs. There are also other practical and legal reasons for using CSE.

Practical reasons

Larsen, Atkisson, Hargreaves, and Nguyen (1979) argue that consumer satisfaction evaluation and feedback is one of the few mechanisms to improve services to meet consumer needs. Levkoff and DeShane (1979), Krause and Howard (1976), and Strupp and Hadley (1977) report that since service providers apply their own values, the center's perception of process and outcome often differs from that of the consumer. Service providers and even independent raters distort ratings according to their own values. Thus, the most accurate perception of service delivery is the consumer's. Consumers know their needs and goals best, and they are in a position to determine how well those needs and goals have been met. Similarly, they can convey their perceptions of the processes (services and approaches) used. Studies show that consumers are sensitive to the quality of staff's verbal and nonverbal behavior (Korsch, Gozzi, & Francis, 1968; Stiles, Putnam, James, & Wolf, 1979; Stiles, Putnam, Wolf, & James, 1979; Wilson & McNamara, 1982). Even if consumers' ratings differ from staff perceptions and other evaluation results, they represent "a potentially valid projection" (Waskow & Parloff, 1975). In fact, Larsen et al. (1979) state that if the client's perception is not taken into account, the entire center evaluation system is incomplete.

There are also broader practical reasons for using CSE. Social programs and centers are valued (to a degree) by consumers and the general public because of their popularity - not hard evaluation data. Accordingly, it is important to determine how consumers and the larger community perceive ILCs. The process of identifying the level of satisfaction or "social validity" can be critical for program survival (Wolf, 1978). If the public or consumers are not in favor of a center's program, information might be exchanged to answer concerns, or programs might be modified to meet them.

CSE can also be used for quality control, or to ensure that service provision meets established standards (Braukmann, Fixsen, Phillips, Phillips, & Wolf, 1975; Wurmser, 1979; Pandiani, Kessler, Gordon, & Domkot, 1982). The process of conducting CSE denotes provider accountability and establishes a contingency for quality services. Results of CSE can be used to identify weaknesses, make program changes, or modify staff behavior. Most important, they can be used in individual performance appraisal.

Legal implications

With the growing emphasis on consumerism in the 1970s, consumer input has become more common in government. For example, CSE has been a legal requirement in mental health legislation (P.L. 94-63), and it has led to both widespread use of CSE (Lebow, 1983b) and a good deal of research to improve the methodology.

While the amendments to the Rehabilitation Act of 1973 (P.L. 95-602) do not call for CSE, they do call for "substantial consumer involvement." One way to facilitate consumer involvement is to let consumers use CSE results to help make improvements. Problem-solving sessions to discuss areas of low satisfaction can be set up, and recommendations for ILC improvement can be made at that time. In some cases, consumers might even monitor implementation of improvements that an ILC has agreed to make.

Improved criteria for decision making

It has been noted that consumers' values and criteria for judgment can differ from those of service providers. However, values and criteria can be altered through information exchange. Levkoff and DeShane (1979) suggest that consumers might not have all the information needed to judge service delivery or provider competence. They state that provision of service delivery information is an important aspect of service provider competence, and they express the view that increasing the level of consumer understanding could bring consumer values and criteria for provider judgments in line with the providers' (p. 63). The converse of this argument is also true: if consumers are able to increase service provider understanding about consumer problems, providers' values and criteria for judgment might be brought into line with consumers'. A mutual exchange of information could put consumers and service providers in a better position to make judgments. The best evaluation results would be obtained when consumer-provider relationships interact in a dialectic process (Galanter, 1976).

Prediction of consumer behaviors

There is evidence that satisfied and dissatisfied consumers behave differently. Prominent differences in ratings have been found between individuals who terminate services prematurely (dropouts) and those who complete services (see literature reviews by Baekeland & Lundwall, 1975, and Garfield, 1978). Nguyen et al. (1983) found that low satisfaction correlated with a high percentage of missed appointments. Although additional studies are needed to establish correlations between levels of satisfaction and specific behaviors, the predictive value of CSE should not be overlooked. Satisfaction levels might be used to predict level of cooperation with staff, time and cost of services, frequency of utilization of services for new or continuing problems, level of program endorsement, and so on.
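As a minimal sketch of the kind of satisfaction-behavior correlation such studies report - the data below are invented, patterned only on the direction of the Nguyen et al. (1983) finding:

```python
import numpy as np

# Hypothetical data: satisfaction score (1-5 scale) and number of missed
# appointments for 10 consumers. Values are illustrative, not from the study.
satisfaction = np.array([5, 4, 5, 3, 2, 4, 1, 2, 3, 5])
missed_appts = np.array([0, 1, 0, 2, 4, 1, 5, 3, 2, 0])

# Pearson correlation; a negative r means lower satisfaction goes with
# more missed appointments, the direction Nguyen et al. report.
r = np.corrcoef(satisfaction, missed_appts)[0, 1]
print(f"Pearson r = {r:.2f}")
```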

Cost

The low cost of CSE makes it an attractive form of evaluation. Instruments are inexpensive, simple to administer, and simple to score. Pandiani, Kessler, Gordon, and Domkot (1982) state that the cost of a center survey is less than $100, once an instrument is constructed. Costs remain low unless intensive follow-ups are conducted for low response rates, or intensive interviews are conducted to acquire additional information. Of course, this assumes that a valid and reliable instrument is available.

Center relations

ILCs can use CSE to demonstrate that consumer input is sought and valued. They can also show how consumer input is used to improve center services. In fact, several methods can be used to make consumer involvement a prominent feature of any ILC. One is to have boards of directors establish a policy requiring CSE and feedback. Another is to establish a client bill of rights (Rosenthal, 1976). The bill of rights could specify that consumers not be viewed as mere recipients of service but as bona fide consumers who have needs and demand quality services; that consumers have the right and responsibility to evaluate service delivery; and that information resulting from CSE is welcomed and considered valuable.

Conclusion

While there are good reasons for using CSE, the problems associated with it raise serious questions. What is the answer to this dilemma? First, results of CSE (particularly from in-house instruments) should not be misused. Administrators are often skeptical about CSE, data are suspect, and results are infrequently used (Lebow, 1983c). Lebow continues: "Save for those instances where these data have been used for self-congratulation or to ward off any further requests for data from a funding body, these data have served little functional purpose in organizational decision-making" (p. 243).

If evaluation results are used to show inflated service quality and quantity, or are not used at all, there is little justification for using CSE. This does not mean that CSE should be abandoned. Rather, the technology should be improved so that it overcomes its inherent problems while maintaining its practical characteristics.

Some pilot research has been conducted to overcome the primary problem of consumer bias. Anonymity practices alone reduce consumer bias: Soelling and Newell (1983) reported significantly lower scores for the test group in which anonymity was practiced. Item emphases that produce negative and positive ratings could be balanced throughout an instrument. Pascoe and Atkisson (1983) developed a ranking and rating instrument, the Evaluation Ranking Scale (ERS), to reduce consumer bias. They found that it lowered mean scores significantly, was more discriminating, and obtained more specific information about program components. In a follow-up study, they found scores significantly lower, more randomly distributed, and more sensitive in identifying patients who were relatively satisfied (p. 357). Finally, a preliminary attempt has been made to establish norms, so that error factors can be subtracted from obtained scores to determine true scores (Lehman & Zastowny, 1983).
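In classical test theory terms, the adjustment such norms would support can be written as follows; this is an interpretive gloss on the Lehman and Zastowny idea, not an equation from the article:

\[
X = T + E, \qquad \hat{T} = X - \bar{E}
\]

where \(X\) is the obtained satisfaction score, \(T\) the true score, \(E\) the error component, and \(\bar{E}\) a normative estimate of the error (e.g., the average positive-bias inflation observed across comparable programs).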

The ERS appears to be the most promising instrument that ILCs could use for CSE. However, several refinements would improve it. Versions might be developed for the various time periods consumers are in programs. Items unique to ILCs might be added. Reliability and validity for independent living centers should be tested. Additional studies correlating satisfaction with general outcomes and independent living outcomes, like those identified by Budde, Petty, and Nelson (1984), could determine the ERS's predictive utility.

Finally, administrative procedures could be developed for sampling, individual testing, anonymity, and probing. The probing procedures used by Fawcett, Seekins, Whang, Muiu, and Suarez de Balcazar (1982) for their Consumer Concerns Report employ a similar ranking and rating system. Their approach also increased consumer involvement in problem analysis and solution development.

CSE is an innovative approach for program and individual evaluation. Its technology is in the formative stage. While instrument use is popular, the use of results is questionable. As CSE technology improves, use of instruments and results will undoubtedly increase. The potential advantages of the technology cannot be dismissed: it represents a low-cost measure of consumer perspective that could be used to validate or improve programs, predict consumer outcomes, improve public relations, and even facilitate center survival.