
Institutional Effectiveness in Two-Year Colleges: The Southern Region of the United States.

Using institutional effectiveness criteria established by the Southern Association of Colleges and Schools, the authors surveyed administrators and faculty at institutions granting associate's degrees in the southern United States to elicit (a) the extent to which effectiveness components were implemented, (b) the importance placed on those components by institutional leaders, and (c) discrepancies between reported implementation and perceived importance. The data were analyzed to determine if differences existed between perceived levels of implementation and importance based on institutional size and respondents' characteristics (including professional classification, gender, ethnicity, age, and years of employment). Reported implementation did not vary according to institutional size or respondents' characteristics, but one-way analysis of variance tests revealed significant relationships between the perceived importance of some components and respondents' length of employment, professional classification, and institutional size.

With today's strong impetus for accountability at all levels of organizations, particularly where tax monies are spent, the movement toward assessment, accountability, and effectiveness has gained momentum. Educational institutions constitute one arena where the concepts of assessment, accountability, and effectiveness have been implemented. The momentum of these concepts has strengthened partially due to strong critical judgments about educational quality.

Hodgkinson (1986) discussed several concerns that face higher education including the following: (a) During the time that scores on SAT tests taken by high school seniors were declining, scores on the GRE test taken by college seniors fell as much and sometimes more; (b) faculty members who focus on teaching are often denied tenure because of more rigorous assessment in other areas of scholarly pursuit; and (c) the costs of higher education have risen faster than any other comparable expenditure.

In fact, the general public has grown increasingly distrustful of institutions of higher education. Derek Bok, president emeritus of Harvard University, has said, "The public has finally come to suspect quite strongly that our institutions [of higher education] are not making the education of students a top priority" (Mooney, 1992, p. A17).

Additionally, a concern among the public regarding higher education has been its inability or unwillingness to document effectiveness so as to justify resource allocation. Peters (1994) indicated that "accountability chills further a regulatory climate that already threatens higher education with extinction" (p. 17). "This continuing inability to document effectiveness and improvement hurts higher education's cause, especially in hard times; legislators are increasingly reluctant to support higher education spending without such evidence" (Ewell, 1991, p. 15). With regard to effectiveness in community colleges, Baker (1992) has asserted that
 the old status quo paradigm existed in an era of growth and expansion
 characterized by increasing enrollments, adequate retention rates,
 legislative delegation of authority, stable literacy rates, a traditional
 student population, and program development based on demand. However, the
 new paradigm will be played out in a period of turbulence, scarce
 resources, declining enrollments, soaring attrition rates, increasing
 illiteracy, student diversity, and shrinking program offerings.... Colleges
 will be doing more with less. Furthermore, political forces will demand
 more accountability while seeking to increase control of the bureaucracy.
 (p. ix)


Accreditation

Enhancing the quality of education is the basic premise of institutional accreditation, a voluntary and unique process of American higher education. Accreditation is a concept and process principally concerned with improving educational quality and assuring the public that institutions meet established standards set by regional accreditation agencies (Southern Association of Colleges and Schools [SACS], 1992).

Regional accreditation serves an important function in improving the quality of American higher education. "The time has come for regional accreditation to assume a more active, visible role. Higher education, pressed to demonstrate its commitment to improving the quality of undergraduate education, needs its regionals as never before" (March, 1991, p. 4). Basically, regional accreditation agencies serve as higher education's quality control units, and in the age of accountability, the movement toward quality continues to gain momentum, placing regional accreditation agencies in a leading role in the pursuit of quality in higher education.

But where do accreditation agencies "fit" with governmental entities in the United States? Some would argue that the elections of 1994 sent a profound message from the public to the national government: the public desires government that is smaller, more local, and more accountable (Dill, Massy, Williams, & Cook, 1996). With a smaller federal government, accreditation standards and the overall assurance of academic quality become much more a focal point for faculty and administrators in individual institutions of higher education. The accrediting agency's role would then shift to auditing the quality assurance procedures that exist within the institution itself (Dill, Massy, Williams, & Cook, 1996).

Even though the question of which individual "agency" will ultimately assure high-quality academic programs within institutions of higher education is a vital topic in accreditation, the overall focus of accreditation is still the assessment of outcomes. "Perhaps the key word for American Higher Education for the twenty-first century is assessment--the 'A' word" (Winsor, Curtis, & Stephens, 1997, p. 170). This leads to the core of accreditation, which is improving educational quality throughout the region and assuring the public that institutions meet established regional standards (SACS, 1995). Furthermore, Lewis and Smith (1994) noted that accreditation focuses on such factors as student achievement, faculty degrees, facilities, and physical resources (in other words, the inputs of the institution).

Additionally, McIntosh (1996) indicated that assessment should be geared to provide feedback on actual student learning. He referred to the National Science Education Standards as providing educators with a guideline that stresses more than traditional assessment areas; the focus should also be on assessing actual learning outcomes (McIntosh, 1996).

Institutional Effectiveness

Since 1989, institutional effectiveness has been adopted by the Southern Association of Colleges and Schools and the Western Association of Senior Colleges as a component of the accreditation process (Ewell, 1992). The Southern Association of Colleges and Schools (SACS) (1992) divided institutional effectiveness into two primary areas: (a) planning and evaluation and (b) institutional research. Even though SACS does not prescribe a set of procedures for planning and evaluating programs and policies, it does require each institution to define its goals, formulate its mission, develop procedures for evaluating goal achievement, use the evaluation results to improve institutional effectiveness, and ensure that this overall process involves broad-based participation by administrators, faculty, staff, and students (SACS, 1992).

Ewell (1985) has said that institutions of postsecondary education achieve excellence by producing demonstrable changes that are consistent with (a) institutional objectives, (b) student educational growth, and (c) the expressed needs of society. The expectation of meeting societal needs, in particular, is fundamental to Level I institutions of higher education.
 Community and technical colleges are often on the front lines of change in
 American postsecondary education. They must interpret and respond variously
 to challenges involved in preparing the nation's work force for the demands
 of a rapidly changing world, both socially and technologically. They must
 also provide much of the population with access to four-year colleges and
 professional preparation. (Grossman & Duncan, 1989, p. 1)


Thus, community and technical colleges face numerous demands from many different publics. Furthermore, community and technical colleges must better define and position themselves in their respective communities (Grossman & Duncan, 1989). In other words, community colleges must be effective in what they do, and they must document what they do as well as what they say they do. Successful resolution of these issues will yield enhanced institutional effectiveness in Level I institutions.

The Study

The study described in this report used institutional effectiveness guidelines and criteria as set forth by the Southern Association of Colleges and Schools. For the purpose of this study, institutional effectiveness is defined as a process through which a community, junior, or technical college identifies, assesses, and improves educational outcomes. The components of institutional effectiveness consist of institutional purpose, educational goals, program evaluation, planning, institutional research, and organizational involvement.

The researchers surveyed higher education institutions granting associate's degrees in the Southern region of the United States to determine the extent to which the components of institutional effectiveness were implemented, the degree of importance placed on the institutional effectiveness components by institutional leaders, and any discrepancies between reported implementation and perceived importance of the institutional effectiveness components.

The surveyed higher education institutions included Level I institutions accredited by the Southern Association of Colleges and Schools, which include community, junior, and technical colleges in the Southern United States: Alabama, Florida, Georgia, Kentucky, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, Texas, and Virginia. For comparison, Level II is composed of baccalaureate degree-granting institutions, Level III includes comprehensive institutions, and Level IV encompasses doctoral-granting universities (Gentemann & Rogers, 1987).

In reviewing the literature on institutional effectiveness, the primary concepts that emerged included the broad topics of organizational effectiveness, accountability, institutional effectiveness, mission, purpose, goals, power, planning, resource allocation, strategic planning, academic planning, evaluation, and institutional research. For the purpose of analysis, and consistent with this conceptual framework, the survey instrument questions were arranged in the conceptual groupings of (a) institutional purpose, (b) educational goals, (c) program evaluation, (d) planning, (e) institutional research, and (f) organizational involvement.

Research Questions and Grounding

The overriding research questions for the present study were as follows:

1. What current practices of institutional effectiveness, as defined in this study, are being implemented by Level I higher education institutions in the Southern region of the United States?

2. What is the general level of importance placed on institutional effectiveness by professionals working in Level I higher education institutions in the Southern region of the United States?

In his discussion of organizational effectiveness and organizational structure, Mintzberg (1979) stated that there are five basic parts to any organization: (a) strategic apex, (b) middle line, (c) operating core, (d) support staff, and (e) technostructure. Additionally, Mintzberg (1979) described five structural configurations of organizations in his classification schema: (a) simple structure, (b) machine bureaucracy, (c) professional bureaucracy, (d) adhocracy, and (e) divisionalized form. In general terms, Mintzberg (1979) argued that the larger the organization, the more formal its lines of authority and the greater its reliance on planning and control processes to coordinate tasks; that is, larger organizations exhibit greater formalization. Therefore, this study was designed to determine whether reported implementation of institutional effectiveness components varied according to institutional size and whether respondents' perceived importance of institutional effectiveness components varied according to institutional size.

Expanding on Mintzberg's classifications, educational institutions fall under the professional bureaucracy, a configuration common in universities and school systems where reliance on the skill and knowledge base of the operating professionals is crucial (Mintzberg, 1979). Community, junior, and technical colleges would therefore be classified as professional bureaucracies.

According to Mintzberg (1979), professional bureaucracies are decentralized in that a great deal of power lies with the professionals of the operating core, whose work is central to the organization. The present study identified the strategic apex as the chief executive officers (presidents), the middle line as mid-level administrators (deans and directors), and the operating core as the faculty; therefore, considerable power lies with the faculty.

If Mintzberg's theory holds, faculty, when compared with administrators, would place a higher degree of importance on broad organizational involvement. This study was designed to determine if reported implementation of institutional effectiveness components and their perceived importance varied according to respondents' professional classification, as well as respondents' gender, ethnicity, age, number of years employed by the institution, number of years employed in the current profession, and number of years employed in the current position. Differences also could arise between the reported implementation of effectiveness components and the perceived importance of those components by institutional members. If such differences exist, improvement in the current implementation of effectiveness components is needed.

Methodology

The study employed a survey of a stratified random sample of community, junior, and technical colleges in the Southern region of the United States. The stratification resulted from controlling for institutional size in the sample. Through interviews with leading community, junior, and technical college academicians, three categories were created based on institutional size: large (more than 3,000 full-time equivalents, or FTEs), medium (1,500 to 3,000 FTEs), and small (less than 1,500 FTEs). The 1992 Membership Directory of the American Association of Community and Junior Colleges served as the sampling frame. Thus, the target population was community, junior, and technical colleges in the Southern region of the United States. Randomness was accomplished by providing an equal chance of selection for all community, junior, and technical colleges in the SACS accreditation area.
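
To illustrate the stratification procedure, the following Python sketch groups a hypothetical list of institutions into the three size strata used in the study and draws a simple random sample from each stratum; the institution names, FTE counts, and sample size per stratum are illustrative assumptions, not the study's actual sampling frame or quotas.

 import random

 # Hypothetical sampling frame of (institution name, full-time equivalent
 # enrollment); the study's actual frame was the 1992 AACJC Membership Directory.
 sampling_frame = [
     ("College A", 900), ("College B", 2200), ("College C", 4100),
     ("College D", 1300), ("College E", 2800), ("College F", 3600),
 ]

 def size_category(fte):
     # Size strata used in the study
     if fte < 1500:
         return "small"
     elif fte <= 3000:
         return "medium"
     return "large"

 # Group institutions into strata, then draw a simple random sample from each
 # stratum so that every institution has an equal chance of selection.
 strata = {"small": [], "medium": [], "large": []}
 for name, fte in sampling_frame:
     strata[size_category(fte)].append(name)

 random.seed(0)  # reproducible illustration only
 sample = {stratum: random.sample(colleges, k=min(2, len(colleges)))
           for stratum, colleges in strata.items()}
 print(sample)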

Independent variables included institutional size, professional classification of respondents, gender, ethnicity, age, number of years employed by the respective institution, number of years employed in the current profession, and number of years employed in the current position. Dependent variables, drawn from responses to the survey instrument, were perceptions of the institutional effectiveness components, namely institutional purpose, educational goals, program evaluation, planning, institutional research, and organizational involvement. The survey instrument contained 40 items serving as institutional effectiveness indicators, and each item was rated on two scales: extent of implementation and degree of importance.

An initial letter was addressed to the president or chancellor of 45 selected Level I higher education institutions requesting a commitment to the study in the form of a returned letter of commitment. After follow-up letters were sent, 28 presidents indicated a commitment to the project. Packets of five surveys were sent to the presidents of each of the 28 institutions in December 1993. The president was instructed, via a cover letter on one of the instruments, to complete that instrument, to distribute one instrument to a dean or director, and to distribute three instruments to faculty members, with accompanying cover letters. Following completion, each respondent was instructed to seal the instrument in the attached envelope and return it to the president. After all five instruments were completed, the president returned them in an envelope provided for that purpose.

Significant differences between means were determined through a one-way analysis of variance and reported at the p < .05 level. An analysis of variance is "used to determine whether mean scores on one or more factors differ significantly from each other, and whether the various factors interact significantly with each other" (Borg & Gall, 1989, p. 356). Discrepancies between reported implementation and perceived importance of institutional effectiveness were determined by calculating mean scores for both scales and subtracting the mean importance score (should be) from the mean implementation score (is) for each item.
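
The following Python sketch, using hypothetical scores rather than the study's data, shows how a one-way analysis of variance across institutional-size groups and an implementation-minus-importance discrepancy score could be computed; scipy and numpy are assumed to be available.

 import numpy as np
 from scipy.stats import f_oneway

 # Hypothetical importance scores for one component area, grouped by
 # institutional size (not the study's data).
 small = np.array([48, 52, 50, 47, 53])
 medium = np.array([56, 58, 55, 57, 54])
 large = np.array([49, 51, 48, 50, 52])

 # One-way analysis of variance across the three size groups
 f_stat, p_value = f_oneway(small, medium, large)
 significant = p_value < .05  # significance criterion used in the study

 # Discrepancy for a single instrument item: mean implementation ("is")
 # minus mean importance ("should be"); a negative value indicates that
 # reported practice lags perceived importance.
 implementation = np.array([5, 6, 5, 4, 6])
 importance = np.array([7, 6, 7, 6, 7])
 discrepancy = implementation.mean() - importance.mean()

 print(f"F = {f_stat:.3f}, p = {p_value:.4f}, significant: {significant}")
 print(f"discrepancy = {discrepancy:.2f}")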

Limitations

Some internal threats to validity exist due to the nature of a mailed questionnaire. Respondents self-select by answering and returning the survey instrument, which could introduce bias in a variable or set of variables.

Additionally, threats to external validity may lessen the study's generalizability. However, because the sample comprised Level I higher education institutions in the Southern region of the United States, where the accrediting agency (the Southern Association of Colleges and Schools) mandates institutional effectiveness efforts, the findings can reasonably be generalized to other Level I institutions in the Southern region as well as to other Level I institutions whose accreditation criteria include institutional effectiveness components similar to those that SACS identifies.

Results and Conclusions

Of the 28 committed institutions, 23 returned the survey instruments by the deadline of January 15, 1994. A follow-up letter was sent that netted two packets of instruments for a total return of 26 packets, or 92.86% of the 28 committed institutions; five respondents from each institution completed surveys, yielding 130 respondents. A demographic breakdown of respondents is provided in Table 1.

Table 1 Demographic Frequencies of Respondents
Gender
 Male 56.8
 Female 43.2

Ethnicity
 Black 8.1
 White 89.4
 Hispanic 1.6
 Native American 0.8

Age
 30 - 39 11.2
 40 - 49 40.8
 50 - 59 40.0
 60 + 8.0

Professional Classification
 Faculty 53.3
 Administrators 46.7

Number of Years Employed by Institution (M = 12.17)
 1 - 6 35.2
 7 - 17 32.0
 18 - 28 32.8

Number of Years in Current Profession (M = 20.59)
 1 - 3 31.2
 4 - 9 37.6
 10 - 29 31.2

Number of Years in Current Position (M = 8.04)
 1 - 3 28.0
 4 - 9 41.6
 10 - 29 30.4

Institutional Size
 Small (< 1,500 FTE) 26.4
 Medium (1,500 - 3,000 FTE) 41.6
 Large (> 3,000 FTE) 32.0


Note. Values represent percentages of respondents.

The overwhelming majority of respondents were White and between 40 and 59 years of age. Males accounted for 56.8% of the respondents, whereas females accounted for 43.2%. Faculty made up 53.3% of the respondents, and administrators accounted for 46.7%. The mean number of years employed by the institution was 12.17; the mean number of years employed in the current profession was 20.59; and the mean number of years employed in the current position was 8.04.

Under the implementation (is) area of the survey instrument, the top 4 responses fell in the component area of institutional purpose, whereas 7 of the following 8 responses fell in either the program evaluation or the planning component area. Three of the bottom 5 responses were in the organizational involvement component area.

In the importance (should be) area, the top 4 responses fell in either institutional purpose or educational goals, whereas 13 of the next 14 responses occurred in the component areas of program evaluation, planning, and institutional research. The lowest scores in the importance area were in either institutional purpose or organizational involvement.

Although respondents indicated that areas of institutional purpose are being implemented effectively, they also indicated that improvement could occur because mean importance scores were higher than mean implementation scores; however, 2 of the 4 instrument items in the institutional purpose component received bottom-level importance (should be) rankings. Consequently, respondents believe that other areas of improvement carry higher priority than institutional purpose.

The instrument items were categorized into the component areas of institutional purpose (4 items), educational goals (6 items), program evaluation (9 items), planning (8 items), institutional research (5 items), and organizational involvement (8 items). Further, each component area was divided into implementation (is) and importance (should be) areas. Table 2 presents these findings.
Table 2
Component Areas of Institutional Effectiveness

                            Number of   Implementation        Importance
Component Area                Items       M       SD          M       SD

Institutional Purpose           4        24.61    2.92       26.76    1.77
Educational Goals               6        31.88    5.80       38.31    3.38
Program Evaluation              9        48.17    7.36       56.35    4.68
Planning                        8        43.07    7.72       51.58    3.82
Institutional Research          5        23.48    5.73       31.39    2.74
Organizational Involvement      8        41.97    7.33       50.13    3.28


Note. Means are based on a scale of 1 to 7 where 1 represents strongly disagree and 7 represents strongly agree for each item response. The component area mean represents the total of means for each instrument item in the area. A list of instrument items by area is provided in the Appendix.
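
As a concrete illustration of the scoring described in the note above, the following Python sketch computes a component-area score as the total of per-item means on the 1-to-7 scale; the respondent ratings shown are hypothetical.

 import numpy as np

 # Hypothetical implementation ("is") ratings: rows are respondents, columns
 # are the four institutional purpose items, each rated on the 1-to-7 scale.
 is_ratings = np.array([
     [6, 6, 7, 6],
     [5, 6, 6, 7],
     [7, 6, 6, 6],
 ])

 item_means = is_ratings.mean(axis=0)  # mean rating for each instrument item
 component_m = item_means.sum()        # total of item means (the "M" in Table 2)
 print(item_means, round(float(component_m), 2))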

One finding stands out across all component areas: Mean implementation scores were less than mean importance scores. Further, the implementation standard deviations were larger than the importance standard deviations. This suggests that although individuals perceived certain indicators of institutional effectiveness as being implemented, the reported implementation was lower than the perceived importance of those indicators, indicating a need for improvement. Consequently, these responses indicate that respondents perceive that improvement in the institutional effectiveness component areas could and should occur.

Reported implementation of institutional effectiveness components did not vary according to institutional size, gender, ethnicity, age, number of years employed by the institution, number of years employed in the current profession, or number of years employed in the current position.

Perceived importance of institutional effectiveness components did not vary according to gender, ethnicity, age, number of years employed in the current profession, or number of years employed in the current position. The number of years employed by the institution, however, did influence respondents' perceptions of the organizational involvement component. One-way analysis of variance tests revealed that faculty and administrators who had been employed by the institution between one and six years expressed significantly higher agreement with the perceived importance of the institutional effectiveness component of organizational involvement, as seen in Table 3.

Table 3 Summary of Analysis of Variance on Importance Responses by Number of Years Employed by Institution
Component Area df SS MS F

Organizational Involvement
 Between groups 2 64.597 32.299 3.102(*)
 Within groups 120 1249.321 10.411
 Total 122 1313.919


(*) p < .05

Additionally, institutional size was a significant factor in respondents' perceptions of educational goals as seen in Table 4. Respondents from medium-size institutions (1,500-3,000 FTEs) had a significantly higher agreement with the perceived importance of the institutional effectiveness component of educational goals. Likewise, respondents from medium-size institutions (1,500-3,000 FTEs) had a significantly higher agreement with the perceived importance of the institutional effectiveness component of program evaluation, while respondents from large institutions (> 3,000 FTEs) had a significantly lower agreement. Finally, respondents from medium-size institutions (1,500-3,000 FTEs) had a significantly higher agreement with the perceived importance of the institutional effectiveness component of institutional research, whereas respondents from large institutions (> 3,000 FTEs) had a significantly lower agreement.

Table 4 Summary of Analysis of Variance on Importance Responses by Institutional Size
Component Area df SS MS F

Educational Goals
 Between groups 2 77.363 38.681 3.533(*)
 Within groups 116 1270.133 10.949
 Total 118 1347.496

Program Evaluation
 Between groups 2 155.808 77.904 3.723(*)
 Within groups 119 2490.036 20.925
 Total 121 2645.844

Institutional Research
 Between groups 2 47.575 23.787 3.301(*)
 Within groups 119 857.540 7.206
 Total 121 905.155


(*) p < .05

Reported implementation of institutional effectiveness components did not vary according to professional classification based on one-way analysis of variance tests. Thus, it was concluded that the reported implementation of institutional effectiveness did not differ significantly between faculty and administrators. Nevertheless, perceived importance of institutional effectiveness components did vary according to professional classification. As seen in Table 5, administrators had significantly higher agreement than faculty with the perceived importance of the institutional effectiveness component of program evaluation. Certainly, faculty view evaluation as an important criterion in their programs; however, from the "big picture" standpoint, administrators seem to view program evaluation as more important. This could reflect the administrator's role in a community college: an administrator may not want to be kept informed of every detail in every program but may want to be kept informed of overall educational program evaluation from a college-wide standpoint.

Table 5 Summary of Analysis of Variance on Importance Responses by Professional Classification
Component Area df SS MS F

Program Evaluation
 Between groups 1 116.497 116.497 5.470(*)
 Within groups 117 2501.490 21.380
 Total 118 2618.437


(*) p < .05

Implications

For effectiveness to occur in institutions of higher education, effectiveness, as a process, should become a key part of the institution's operating procedures. As indicated by the respondents, mean implementation scores were less than mean importance scores in all component areas of institutional effectiveness, indicating a perceived need for improvement in actual implementation. If effectiveness efforts occur only in preparation for an accreditation visit, the efforts will be concentrated in a brief period and will occur only sporadically the rest of the time. If institutional effectiveness efforts instead become regular processes in how the institution functions, they will be more successful and more proactive in addressing and resolving long-term issues than sporadic efforts designed to accomplish reaccreditation.

Additionally, the importance placed on organizational involvement by faculty and administrators employed by their institutions between one and six years lends credence to the view that broad-based participation in institutional effectiveness is desired. If institutional effectiveness efforts become internalized in each institution as an ongoing process, rather than efforts undertaken at a single point in time, broad-based participation by faculty and administrators would probably be enhanced.

Recommendations

The following recommendations are drawn from the aforementioned conclusions of this institutional effectiveness study:

1. Faculty and administrators of Level I higher education institutions should focus on improvements in the overall component areas of program evaluation, institutional research, and organizational involvement, both in actual implementation and in perceived importance.

2. Effort should be devoted to increasing involvement of organizational members who have been a part of the institution for six years or more.

3. Faculty and administrators in small and large institutions (< 1,500 FTEs and > 3,000 FTEs, respectively) should increase their commitment to educational goals, program evaluation, and institutional research.

4. Efforts should be made for greater faculty commitment to component areas of program evaluation.

Suggestions for Further Research

Institutional effectiveness efforts will undoubtedly continue due to the growing public mandate for accountability. Further research could take the form of the following:

1. In-depth institutional effectiveness studies of individual institutions to gauge and enhance effectiveness

2. Institutional effectiveness studies of other Level I higher education institutions that are accredited by agencies other than the Southern Association of Colleges and Schools

3. Institutional effectiveness studies of other higher education institutions including comprehensive colleges, regional universities, and research universities

4. Creation of institutional effectiveness models to better address individual effectiveness of institutions

5. Implementation of institutional effectiveness models in institutions of higher education

An opportunity exists to involve all stakeholders in institutional effectiveness efforts. For higher education to continue to function effectively in the future, institutional effectiveness practices must become continuous and be continually enhanced. The ultimate goal is to better the educational product and process for those who receive and participate in education, namely students.

References

American Association of Community and Junior Colleges. (1992). Membership directory. Washington, DC: Author.

Baker, G. A. (1992). Cultural leadership--inside America's community colleges. Washington, DC: Community College Press.

Borg, W. R., & Gall, M. D. (1989). Educational research (5th ed.). New York: Longman.

Dill, D., Massy, W., Williams, P., & Cook, C. (1996). Accreditation & academic quality assurance: Can we get there from here? Change, 28(5), 16-24.

Ewell, P. (1985). Toward the self-regarding institution: Excellence and accountability in postsecondary education. In J. Krakower (Ed.), Assessing organizational effectiveness: Considerations and procedures (p. 14). Boulder, CO: National Center for Higher Education Management Systems. (ERIC Document Reproduction Service No. ED 270 056)

Ewell, P. (1991). Back to the future. Change, 23(6). 12-17.

Ewell, P. (1992). Outcomes assessment, institutional effectiveness, and accreditation: A conceptual exploration. Resource papers for the Council on Postsecondary Accreditation Task Force on Institutional Effectiveness. (ERIC Document Reproduction Service No. ED 343 513)

Ewell, P. (1993). Total quality and academic practice. Change, 25(3), 49-55.

Gentemann, K., & Rogers, B. (1987). The evaluation of institutional effectiveness: The responses of colleges and universities to regional accreditation. Paper presented at the SAIR-SCUP Annual Conference, New Orleans, LA. (ERIC Document Reproduction Service No. ED 290 392)

Grossman, G., & Duncan, M. (1989). Indicators of institutional effectiveness: A process for assessing two-year colleges. Columbus, OH: Center on Education and Training for Employment. (ERIC Document Reproduction Service No. ED 325 193)

Hodgkinson, H. (1986). Reform? Higher Education? Don't be absurd! Phi Delta Kappan, 68(4), 271-274.

Lewis, R.G., & Smith, D.H. (1994). Total quality in higher education. Delray Beach, FL: St. Lucie Press.

March, T. J. (1991). Regional accreditation--editorial. Change, 23(3), 4.

McIntosh, W. J. (1996). Assessment in higher education. Journal of College Science Teaching, 26(1), 52-53.

Mintzberg, H. (1979). The structuring of organizations. Englewood Cliffs, NJ: Prentice-Hall, Inc.

Mooney, C. J. (1992). Bok: To avoid bashing, colleges must take a leadership role on national problems. The Chronicle of Higher Education, 38(31), A17-A18.

Peters, R. (1994). Accountability and the end(s) of higher education. Change, 16-23.

Southern Association of Colleges and Schools (SACS). (1992). Criteria for accreditation. Decatur, GA: The Commission on Colleges of the Southern Association of Colleges and Schools.

Southern Association of Colleges and Schools (SACS). (1995). Criteria for accreditation. Decatur, GA: The Commission on Colleges of the Southern Association of Colleges and Schools.

Winsor, J., Curtis, D., & Stephens, R. (1997). National preferences in business and communication education: A survey update. Journal of the Association of Communication Administration, 3, 170-179.

Appendix

Description of Institutional Effectiveness Questionnaire

Respondents were asked to rate 40 indicators of institutional effectiveness (categorized into six component areas) by circling appropriate responses on two scales. The first scale was used to rate the extent to which each indicator was currently practiced or implemented at each respondent's institution. The second scale was used to rate how important each respondent believed the indicator should be in developing a more effective institution. The rating scale used was as follows:

1 SD = Strongly Disagree

2 D = Disagree

3 MD = Mildly Disagree

4 U = Undecided or unable to respond

5 MA = Mildly Agree

6 A = Agree

7 SA = Strongly Agree

The six component areas and 40 indicators were as follows:

Institutional Purpose Or Mission

1. The institutional purpose (is / should be) clearly written and understood throughout the institution.

2. The institutional purpose (is / should be) fully congruent with community needs and values.

3. The institutional purpose (is / should be) realistic for the present and foreseeable future.

4. Educational programs and services (are / should be) consistent with the institutional purpose.

Educational Goals

5. Desirable outcomes of the institution (are / should be) expressed through explicit educational goals.

6. Expected educational outcomes (are / should be) defined in the form of measurable objectives.

7. Written objectives (are / should be) regularly developed and used in each program and service area.

8. Educational goals and objectives (are / should be) developed through a formal process of needs assessment.

9. Measures of student learning (are / should be) identified and used in developing course and program objectives.

10. Achievement of established educational goals and objectives (is / should be) regularly monitored to determine the effectiveness of the institution.

Program Evaluation

11. Institutional programs and services (are / should be) comprehensively evaluated to determine their outcomes and accomplishments.

12. High priority (is / should be) placed on program evaluation by the administration.

13. An established schedule of program review or evaluation (is / should be) followed.

14. Measures of student learning (are / should be) used in program evaluation.

15. Student satisfaction with programs and services (is / should be) considered in program evaluation.

16. Student follow-up information (is / should be) included in program evaluation.

17. Information gathered from program evaluation (is / should be) clearly reported to appropriate constituencies, e.g., faculty, students, trustees, and community members.

18. Performance evaluation of faculty and administrators (is / should be) linked to the attainment of program goals and objectives.

19. Institutional procedures for planning and evaluation (are / should be) reviewed and modified periodically based on the recommendations of faculty and the administration.

Planning

20. Results of program evaluation (are / should be) used to improve the effectiveness of programs and services.

21. High priority (is / should be) placed on planning by the administration.

22. Established procedures, including formal needs assessment, (are / should be) used in developing new programs.

23. Information derived from the assessment of student learning (is / should be) used in making curricular changes.

24. Short-term or annual planning (is / should be) done in conjunction with a periodically updated long-term or strategic plan.

25. Allocation or reallocation of resources (is / should be) directly linked to established goals and objectives.

26. The institutional planning process (is / should be) contributing to the efficient use of resources, e.g., personnel, facilities, and equipment.

27. The institutional planning process (is / should be) designed to allow for innovation and experimentation to improve program quality.

Institutional Research

28. Relevant information regarding student academic achievement (is / should be) accessible and adequate for sound decision making.

29. Relevant student follow-up information (is / should be) accessible and adequate for sound decision making.

30. Relevant information regarding the cost-benefit of individual programs and services (is / should be) accessible and adequate for sound decision making.

31. Relevant information regarding the external environment of the institution (is / should be) accessible and adequate for sound decision making.

32. The level of resources assigned to institutional research (is / should be) adequate to support effective planning and evaluation.

Organizational Involvement

33. A participatory process (is / should be) used to establish goals and objectives for program and service units in the institution.

34. A participatory process (is / should be) used to evaluate the programs and services of the institution.

35. The administration (is / should be) fully involved in program planning and evaluation activities.

36. The faculty (is / should be) fully involved in program planning and evaluation activities.

37. Students (are / should be) fully involved in program planning and evaluation activities.

38. Trustees and community representatives (are / should be) fully involved in program planning and evaluation activities.

39. There (are / should be) institutional incentives for individuals or groups to undertake improvements in programs and services.

40. The quality of relationships among faculty, students, staff, and the administration (is / should be) a positive factor in the effectiveness of the institution.

Respondents also were asked to complete eight demographic items that elicited gender, ethnicity, age, professional classification, number of years employed at their current institution, number of years in their current profession, and number of years in their current position.

Timothy S. Todd is interim assistant provost and assistant professor of organizational communication in the College of Fine Arts and Communication at Murray State University in Murray, Kentucky (timothy.todd@murraystate.edu).

George A. Baker III is the Joseph D. Moore Distinguished Professor of Community College Leadership at North Carolina State University, Raleigh, North Carolina (baker@poe.coe.ncsu.edu). He directs the National Initiative for Leadership and Institutional Effectiveness (NILIE).
