
An assessment of past and current approaches to quality in higher education.

This paper argues that, despite an increasing uniformity of approach to quality monitoring, there is little analysis of the rationale behind the methods because there is little exploration of what `quality' is in a higher education context. Despite good intentions, quality monitoring has become over-bureaucratic and the potential for significant change has been hampered by a focus on accountability rather than improvement. Furthermore, the accountability focus, despite its onerous and somewhat oppressive burden, is a safe process for higher education because it does not consider the nature of learning or what is learned. By focusing on accountability, the transformative potential of quality monitoring is not fulfilled.

Introduction

`Quality' has evolved from a marginal position to being, alongside funding, the foremost concern in higher education. The evolution of quality has been one from vague concept to articulated procedures. Furthermore, there is considerable conformity of procedures across national boundaries and a tendency towards a dominant model of external scrutiny of quality in higher education.

Approaches to quality in higher education in most countries have started with an assumption that, for various reasons, the quality of higher education needs monitoring. At root, governments around the world are looking for higher education to be more responsive, including:

* making higher education more relevant to social and economic needs;

* widening access to higher education;

* expanding numbers, usually in the face of decreasing unit cost;

* ensuring comparability of provision and procedures, within and between institutions, including international comparisons.

Quality has been used as a tool to ensure some compliance with these concerns. However, the rationale and policy often tend to be worked out after the decision to undertake an audit, assessment or accreditation process has been made. Thus approaches to quality are predominantly about establishing quality monitoring procedures.

The organisation, degree of government control, extent of devolved responsibility and funding of higher education systems vary considerably from one country to the next. However, the rapid changes taking place in higher education are tending to lead to a convergence towards a dominant model for quality. This model is one of delegated accountability. Central to this process is the emphasis placed on quality as a vehicle for delivering policy requirements within available resources.

Delegated accountability

External quality monitoring (EQM) is not restricted to one or two types of higher education system. It can be found in all types of higher education systems, including:

* the `Continental model' of `centralised-autonomy' found in much of western Europe including Italy, France and Austria;

* the `British model' of autonomous institutions also found throughout much of the Commonwealth;

* `market systems' such as the United States of America and the Philippines;

* `semi-market' systems such as Taiwan and Brazil;

* centralised systems such as China;

* newly devolved systems such as those in eastern Europe, the Baltic states and Scandinavia.

The development of most EQM systems has been as a result of a pragmatic response to government mandates, and systems adapt and respond to changing situations. However, within this fluid situation, some common themes emerge, suggesting a convergence to a dominant form of accountable autonomy (Figure 1). The systems that have traditionally espoused a market approach and those that have been influenced by the traditional British system of autonomous institutions supported by the state are finding their autonomy being eroded by government-backed requirements to demonstrate accountability and value for money (Bauer & Kogan, 1995).

[Figure 1 ILLUSTRATION OMITTED]

In New Zealand, for example, with a tradition of strong university autonomy, there is now a requirement for higher education institutions to define objectives that are approved by the Ministry of Education (1991). Similarly, in Australia, financial stringency has been used to legitimate the requirement placed on universities to develop quality assurance procedures to provide accountability for public funds (Baldwin, 1992; National Board, 1992).

Where central control was, or continues to be, exerted over higher education, for example in China, eastern Europe, South America and Scandinavia, there is increasing delegated responsibility for quality, but at the price of being required to be accountable and open to scrutiny. For example, in Romania, university autonomy has become the central principle in the governance of higher education institutions. However, the trade-off for academic autonomy is the acceptance of external evaluation mechanisms. The Accreditation and Recognition of Diplomas Act, which came into force in January 1994, specified the aims of accreditation and academic evaluation, including encouraging institutions to develop their own mission-based performance evaluation mechanisms, `protecting the community from institutions that do not have the capacity to fulfil their public commitments' and providing the community with `information on the capacity and performance' of various institutions. Although the intention is not to use the public financing of universities as an excuse for restricting the administrative autonomy of universities, financial autonomy requires overall public accountability (Ifrim, 1995).

In those countries where a new accountable autonomy is being granted, self-assessment is seen as indicative of the shift to self-governance. In those countries where universities have traditionally been autonomous, self-evaluation is seen as not only politically pragmatic but a necessary vehicle to ensure the institution focuses its attention on quality issues.

External quality monitoring

The tool for ensuring delegated accountability is external quality monitoring (EQM) of institutions. EQM has a dual role. It offers an `impartial' and `objective' system-wide (or even international) mechanism for examining policy, practice and procedure. It also acts as a conduit for information intended to reassure external stakeholders, such as employers, professional bodies and the general public, as to the continued viability of provision. In short, EQM is the operational mechanism through which quality is used to legitimate higher education policy.

EQM is an all-encompassing term that covers a variety of quality-related evaluations undertaken by bodies or individuals external to higher education institutions. It includes the following.

Accreditation and evaluation of institutions

* External evaluation of institutional status, such as the assessment undertaken by the Consejo Nacional de Universidades in Venezuela, evaluates and grants licences to new, experimental higher education institutions and continues to evaluate them until they attain full autonomy (Ayarza, 1993).

* Periodic evaluation of institutional viability, such as the accreditation process in the United States, is a self-regulatory process of recognition by nongovernmental voluntary associations (Petersen, 1995).

* External assessment of institutional provision, such as that undertaken by the Comité National d'Évaluation (CNE) in France, evaluates each institution holistically (Ribier, 1995; Staropoli, 1991) but does not in any way accredit the institution.

Audit of procedures within an institution

* External quality audit of internal quality assurance procedures, such as the academic audits of British institutions formerly undertaken by the Quality Audit Division of the Higher Education Quality Council (1993) and the audits of polytechnic quality procedures conducted in Finland by the Higher Education Evaluation Council (1997). There is no attempt to evaluate the institution as such, only to ensure that the institution has clearly defined internal quality monitoring procedures that ensure effective action.

* The Australian Committee for Quality Assurance in Higher Education added a ranking to the examination of quality assurance portfolios volunteered by universities, which was linked to recommendations about additional incentive funding (Meade, 1993). The three rounds of the Australian approach focused on specific elements, such as teaching, research performance or community interaction.

* In Sweden, the approach to audit undertaken by the National Agency is to focus on the stated improvement agendas of institutions and explore the efficacy of improvement projects (Askling, 1997).

Accreditation of programs of study

* Validation (and periodic review) of programs of study by central awarding bodies such as the procedures previously undertaken by the Council for National Academic Awards in the United Kingdom.

* Accreditation of courses in North America by up to 14 non-governmental voluntary associations that recognise provision in institutions found to meet stated criteria of quality.

* Accreditation and validation of programs of study, such as those undertaken in some countries by professional and regulatory bodies (Harvey & Mason, 1995).

Assessment of teaching quality in subject areas or of programs

* External evaluations of teaching and learning provision at a program or subject level, such as the assessment of subject area provision undertaken by the Quality Assessment Division of the Higher Education Funding Council for England (1994) or the evaluations undertaken by the independent Centre for Quality Assurance and Evaluation of Higher Education in Denmark (Thune, 1993).

Research assessment

* Evaluation and appraisal of research, such as the Research Assessment Exercise conducted by the Funding Councils in Britain (Higher Education Funding Council, 1993), research evaluations undertaken by the Academy of Finland since the early 1980s (Luukkonen & Stahle, 1990) and the recent Lithuanian evaluation of research performance (Mockiene & Vengris, 1995).

Standards monitoring

* The use of external examiners to monitor standards on postgraduate or undergraduate degrees in the United Kingdom, Denmark, Ireland, New Zealand, Malaysia, Brunei, India, Malawi, Hong Kong and in the technikons in South Africa (Silver, 1993; Warren Piper, 1994).

The issues around EQM in this discussion will be illustrated by focusing on teaching and learning, rather than research, although many of the issues are similar.

Methodology

Approaches to quality in higher education have been characterised by a growing uniformity of methodology which incorporates various combinations of three basic elements: self-assessment; peer evaluation; statistical or performance indicators.

Typically the procedure is for the institution or program of study (or subject area) to produce a self-evaluation report. This qualitative self-evaluation is often complemented by statistical data. The report (and the appropriate statistical data) are scrutinised by an external body, which subsequently facilitates a visit of `respected' peers to the institution. The peer-review panel undertake a visit lasting, usually, between one and four days. They attempt to relate the self-assessment document to what they see or, in practice, hear. The panel may have received other appropriate documents in advance of the visit or may have access to other material during the visit. The peers may observe facilities or even, in some cases, the teaching and learning process. In the main, however, the peer-review process involves reading the self-evaluation and engaging in discussion sessions with groups of selected institutional managers, teaching and administrative staff and students.

This approach, or variants of it, is enormously popular and can be found in countries as diverse as the United States of America, Argentina, the United Kingdom, the Netherlands, Australia, South Africa and China. It takes as its starting point the notion of a self-critical academic community. Yet it is this very notion of `self-criticism' that politicians and civil servants are sceptical about: witness their desire for `hard' statistical data. However, there is a reluctance, internationally, to impose a professional inspectorate on higher education or to undertake research to explore whether higher education delivers what is required for a range of stakeholders.

Despite the frequently expressed concerns about intrusion into academic freedom, the undermining of university autonomy and the burden and cost of external monitoring procedures, the self-regulatory approach suits the academy. It is far less threatening than a central, professional inspectorate or an open inquiry into the purpose and effectiveness of higher education. Self-regulation, via self-evaluation and peer review, is imbued with amateurism and a sense of `playing the game'. And it is the conduct of the game rather than the result that is prized so highly in amateurism, provided, of course, the best side wins.

This was why, nationally and internationally, the first annual Australian evaluation was regarded with such dismay. It was a brusque game, with a new set of rules, that did not simply let the best team win. The University of Sydney was placed in the second rank. Other countries do not seem to have learned how effective that initial process was in giving quality a high profile and, instead, tend to play a much `softer' game that reproduces, from the outset, the status quo.

Apart from providing a `safe' context for evaluation, what is so good about the dominant methodology? In the appropriate setting, self-evaluation and peer review can be a significant spur to fundamental self-reflection. If the institution wants to explore its purpose, its areas of effectiveness, its weaknesses and future opportunities, then self-evaluation, followed by a peer-review process that involves open dialogue and helpful feedback, can be an invaluable tool. It can help develop a future strategy for continuous improvement. However, the long-term effectiveness is entirely dependent on the establishment of internal procedures and the development of a culture of continuous improvement. For example, the Europe-wide CRE audits, undertaken on a voluntary basis, have helped most of the participating universities to develop strategic plans. Whether, in the long term, they will result in a process of continuous quality improvement depends on how well the outcomes are communicated and linked with the day-to-day activities of the teaching and research staff.

Where compulsory monitoring uses self-evaluation, peer review and statistical indicators, the efficacy of the methodology is rather more debatable. Where institutional staff see the self-evaluation as part of a judgemental process, especially if it is linked to status rankings or to funding, then there will be a disinclination to be open about weaknesses and a tendency to overstate strengths. A lack of frankness makes dialogue difficult and the self-evaluative process becomes a defensive account rather than an opportunity to explore future development and change. In such circumstances, self-evaluation followed by an inquisitorial peer review encourages retrenchment rather than responsiveness: cloisterism rather than new collegialism (Harvey, 1995).

Peer reviews are not good at finding out what is really going on. In the main, peer-review teams make judgements based on what they are told and tend to look for discrepancies in the story. They rarely have detailed documentation, nor do they observe what goes on on the ground. Even if they have access to appropriate documentation, which allows some form of cross-checking, and they observe facilities and practices first-hand, they tend to see and assimilate only a tiny fragment of the entire institutional operation. Peer reviewers are not trained as investigators -- if they are trained at all. What training they have tends to focus on what they should be looking for but, despite the best will of some training programs, they are not trained in how to identify and interpret what they see. In short, the preconceptions and prejudices of peers are rarely challenged prior to visits, even if, on reflection, they consider that they have learned a lot from the process themselves. Peer review is, in the main, gentle amateurism designed not to rock too many boats. A recent study in Chile, for example, suggested that, even in the newly developing private university sector, peer reports in 90 per cent of cases simply confirmed what the institutions already knew and, furthermore, that the prior experience of peer reviewers tends to influence the outcome of reports (Silva, Reich, & Gallegos, 1997, p. 31).

Statistical data, often euphemistically referred to as `performance indicators', are problematic. It is rarely clear what, or whose, performance they indicate. What, for example, does an increase in the percentage of `good' degree classifications tell us about quality? Does it indicate that student learning performance has improved? Does this mean that the teaching staff have performed better, or are the students learning more despite the teachers? Or does it mean that academic standards have fallen? Similarly, what does the employment rate of graduates within the first six months after graduation tell us about the performance of the institution? Perhaps it says more about the vagaries of the recruitment process and the differential take-up rates between subject specialisms than it does about the performance of the institution. In short, so-called performance indicators are invariably simplistic, convenience measures that bear no relation to any notion of quality. Furthermore, the benefit that might accrue from improving statistical measures to make them into really meaningful performance indicators is outweighed by the cost of doing so (Yorke, 1998).

In some countries, such as the United Kingdom, performance indicators play a minor role, whereas in Australia there are attempts to develop new indicators. In general, however, there seems to be a growing tendency to cast doubt on the value of quantitative indicators of higher education quality. In the United States, where quantitative indicators have dominated quality evaluations, there is a gradual shift towards giving more credence to qualitative assessments based on peer reviews. For example, the Tennessee Higher Education Commission, which has been prescriptive in using quantitative indicators as a basis for allocating up to five per cent of institutional budgets, has, with each of its four iterations of assessment criteria, gradually replaced crude quantitative indicators with qualitative, peer-review evaluations (Banta, 1995).

Quality

A key issue, as has already been suggested, is the lack of thought given to what it is exactly that is meant by `quality' in the context of higher education. There are implicit assumptions and widespread adoption of rhetoric such as `fitness for purpose' and `value for money' but little clear thinking about quality as such, nor how the `politics of quality' impacts on the various stakeholders in higher education.

Elsewhere, the suggestion has been made that quality is used in five ways in higher education debate: `excellence', `perfection' (or consistency), `fitness for purpose', `value for money' and `transformation' (Harvey & Green, 1993). It has further been argued that transformation is a meta-quality concept and that other concepts such as perfection, high standards, fitness for purpose and value for money are possible operationalisations of the transformative process rather than ends in themselves (Harvey, 1994b, p. 51; Harvey & Knight, 1996, pp. 14-15).

The transformative view of quality is rooted in the notion of `qualitative change', a fundamental change from one state to another. In the case of students, it involves transforming not just what they know, but how they think and what they can do. Transformative education is about `adding value' to students by enhancing their attributes, but it is also about empowering them as critical, reflective, transformative, lifelong learners (Astin, 1991; Harvey & Knight, 1996).

This is not a passive transformation. Education is a participative process. Students are not products, customers or consumers -- they are participants. Education is not a service for a customer (much less a product to be consumed) but an ongoing process of transformation of the participant. Parents, teachers and educationalists, from primary schools to universities in a variety of countries, prefer, overall, the transformation view of quality. It is compatible with what they think education is about.

Traditionally quality in higher education has been seen in terms of the `exceptional'. By its very nature, elitist higher education recruited exceptional teachers, researchers and students and provided them with exceptional libraries, laboratories and opportunities to learn from one another. The emphasis was on high quality inputs. The result was `excellent' outcomes: pioneering research, scholarly theses, and exceptional graduates who were attractive to employers simply by dint of being graduates.

More recently, there has been a tendency among national quality monitoring agencies to see higher education as a more diverse system as participation grows. The `mission' of the institution and its location within the higher education panoply are supposedly taken into account. The emphasis is now on `fitness for purpose', although just what purpose, and what constitutes fitness, is rarely clearly identified. Some agencies provide a check-list of areas against which institutions should identify purpose and from which peers might evaluate fitness. In practice, the judgements of fitness, where they occur at all, rarely take into account the mission other than as a general context. Furthermore, the approach for judging fitness is either rigid (especially where quantitative indicators dominate) or prejudicial, where amateurish pre-judgements are uninhibited by adequate training.

More importantly, these kinds of evaluations of fitness for purpose tend to be reductionist, fragmenting the notion of quality rather than exploring the complex interrelationships that ultimately impact on the key stakeholders. They are deliberately disassociated from the politics of quality and are incapable of making any link between the quality monitoring procedures, the resource envelope, the student experience of learning and the range of accomplishments and standards of graduates.

The `politics of quality' refers to the macro and micro agendas that accompany the introduction of quality monitoring procedures. At one level, this can be the use of quality monitoring to legitimate changes in the structure or resourcing of higher education, including providing reassurance to external stakeholders about the `standard' or `quality' or `international comparability' of higher education at a time of rapid change. The politics of quality might also include the role that quality monitoring has in introducing value-for-money practices or redistributing limited resources on the basis of an apparent value-for-money exercise, such as a research assessment exercise where money is concentrated in institutions that provide `excellent' research output.(1) Other political agendas include attempts to reduce the autonomy of higher education institutions and questioning the extent to which they produce `work-ready' graduates.

At a local level, quality assurance can be a tool to unify disparate institutions. For example, in the new polytechnic sector in Finland, Rektors are using the Finnish Higher Education Evaluation Council's pilot quality audits as a way to focus the attention of very diverse component institutions onto the new polytechnic mission and procedures. Similarly, at the institutional level, the politics of quality can extend to levering a more open approach to teaching and learning, feedback from students and action based on a culture of improvement. It can also be used as a smokescreen to cover the issues that arise when student numbers increase rapidly without a commensurate increase in staffing and resources.

It is the politicisation of quality and confusion over what is meant by `quality' that have led to a growing negative view of quality procedures. A decade ago, in the countries that first developed quality monitoring in higher education, the notion of exploring the quality of higher education came as rather a surprise to most institutions, especially well-established, traditional universities of international repute. For them, the idea of monitoring the quality of provision in any way was regarded with a mixture of amusement and alarm. Implicitly, if not explicitly, higher education institutions saw themselves (and were seen by others) as intrinsically quality institutions. Such a notion, of course, was based on the exclusivity of their club and the generous resources that accrued to it. Of course, quality was perceived as `a good thing'. Today, there are many people working in higher education, as teachers, researchers and managers, who are not so sure. Indeed, the author has heard people in countries as diverse as Britain, Denmark, New Zealand, Australia, Hong Kong, Brazil and the United States suggest that, on the contrary, quality is a `bad thing'.

What has happened? How can a fundamental, taken-for-granted presupposition about higher education be cast in such a negative light? Is it that quality monitoring has asked some awkward questions? Has it undermined a taken-for-granted assumption? Has quality monitoring, at least temporarily, disturbed self-complacency? Has it required that higher education institutions and their staff face up to their responsibilities to stakeholders? Has it required that they be more open about their procedures and practices? Maybe all these things have caused some concern and made people feel threatened. However, it is questionable whether these inconveniences alone would have led to the view that quality monitoring is not just a regrettable intrusion but counter-productive.

It is not the awkward questions or the requirement for openness that has undermined faith in the quality monitoring processes. It is the political agendas that accompany them that result in a negative view of quality. It is the structuring of procedures that entrap academics into endorsing the `quality' of a system where they clearly see the quality of provision declining that frustrates them. It is the disengagement of quality from their own primary concerns -- the enhancement of students, the development of their research, the financial management of the institution -- and the structuring of it as a game or exercise in which they fleetingly take part, that annoys or bemuses them. It is the imposition of a top-down model of accountability instead of an exploration of how quality is really improved or how improvement is impeded at the operational level that makes them feel it is a burdensome but pointless process.

In one sense, the introduction of external quality monitoring, despite the added workload of self-evaluations and peer reviews, was a useful exercise in focusing attention on quality issues, not least what institutions are for, how they operate and how they could do things better and in a more responsive way. The problem has been that the process has not tended to result in an improvement focus, nor has it provided practitioners (let alone students or other external stakeholders such as employers) with a feeling of ownership of, and responsibility for, a process of continuous quality improvement to ensure that the institution provides the transformative education and research necessary for the next century.

In short, quality has become linked with control. The term `quality' is used far more frequently, in practice, as a shorthand for the bureaucratic procedures than for the concept of quality itself. It is thus not the quality itself that is regarded as undesirable but the paraphernalia of quality monitoring that is seen as so intrusive. Quality is not so much about what or why but about assurance and assessment. It is about who decides what an appropriate educational experience is, for what purposes and at what cost. None of this should be surprising as behind nearly all external quality monitoring is a political motive designed to ensure two basic things: that higher education is still delivering despite the cut in resources and increase in student numbers; and that higher education is accountable for public money.

The dominant model of delegated accountability works much better as a device for ensuring that higher education is accountable for public money than for ensuring that it is delivering what is required, as there is virtually nothing in the quality procedures in use that tells us whether stakeholders -- students, employers, teaching staff, society as a whole -- are getting what they need or whether `outcome standards' are changing.

Quality and learning

A major problem is the lack of convergence of quality monitoring and innovations in teaching and learning. There is little evidence (Rear, 1994), anywhere in the world, that quality monitoring and innovations in teaching and learning are pulling in the same direction (Figure 2). At the institutional level, quality monitoring procedures and innovation in teaching and learning interface, if at all, through the dissemination of good practice. EQM, in most countries, does not deal with the nature of the learning, partly because it does not examine the nature of `quality'. On the contrary, EQM tends to be conservative, driven by accountability requirements, and tends to inhibit innovation in teaching and learning.

[Figure 2 ILLUSTRATION OMITTED]

A tension has, thus, emerged between `quality-as-accountability procedures' and `quality-as-transformation'. The predominance of the former meaning has led to a `compliance culture', such that emphasis on quality is not, in fact, producing the transformation in students that it has been suggested is essential in a rapidly changing world. As technology, competition and social upheaval transform the world at an accelerating pace, so higher education is increasingly seen as crucial in producing people who can accommodate and initiate change.

It has been suggested that, in practice, rather than having a transformative impact, EQM creates an initial shock reaction that rarely translates into a process of ongoing improvement. It may be effective, in the short run, in `getting quality on the agenda' of institutional management but it fails to ensure an ongoing response at the grass-roots level. Much of the evidence about the impact of EQM is anecdotal, which is not surprising given that it is a relatively new phenomenon. In Spain, for example, `evaluation fever' is seen as having `developed too quickly, too anxiously, making sometimes too much noise, but showing less effectiveness than expected' (Escudero, 1995). In the United States, with a longer history of evaluation, informed commentators have suggested that the impact is only peripheral (Marchese, 1989).

For many commentators, the key positive benefit is the self-evaluation process. Initial research into the impact of external quality monitoring in Norway (Karlsen & Stensaker, 1995) and Finland has suggested that, in a significant number of cases, `the process of assessment alone is of intrinsic value', especially the self-evaluations, which `create an arena for communication' and provide a `legitimate way to openly discuss possible solutions to the present complicated problems' (Saarinen, 1995, p. 232), a point reinforced at an OECD conference (Barblan, 1995; Bell, 1995; Rasmussen, 1995; Rovio-Johansson & Ling, 1995).

The limited research evidence suggests that EQM has provided an initial impetus to change, but that it offers little by way of continuing momentum. In the Netherlands, for example, the Inspectorate is of the view that the institutes pay attention to the quality of education in a more systematic and structural way than they did before a systematic process of EQM was established (Inspectie Hoger Onderwijs, 1992). However, although quality is clearly on the agenda of institutions, it is difficult to find a linear relation between recommendations made by the visitation committees and measures taken by the institutes (Acherman, 1995; Frederiks, Westerheijden & Weusthof, 1993). In a similar vein, the Inspectorate concludes that institutes, in general, still have problems with the formulation and realisation of consistent, well-planned and managed responses to the reports of visitation committees: improvements are scattered and actions have a short-term character.

The Appraisals Process in Ontario appears to offer an example of the positive impact of EQM. Research suggests that the process, overseen by the Ontario Council on Graduate Studies (OCGS), has been effective in maintaining and improving the quality of graduate programs. Improvement can be seen in terms of quantitative, summative indicators, such as completion rates and time to completion, and in terms of improvements in peer evaluations over a seven-year cycle. Whether this has resulted in an institutional culture of continuous improvement of the transformative process in Ontario is less clear.

Recent accounts from nine countries suggest that external quality monitoring:
 has an initial `shock effect' resulting in quality issues being placed on
 internal agendas, of raising the profile of teaching, and increasing
 accountability to stakeholders--principally funders and students. Although,
 in most countries, external quality monitoring is a fairly recent
 phenomenon, there is some suggestion that the predominant
 accountability-based approaches have only an initial impact on quality
 improvement. Alternative approaches may need to be developed to ensure a
 continuous process of enhancement. (Harvey, 1997, p. 3)


For example, at Auckland Institute of Technology, there are tensions between the external accountability requirements and the Institute's commitment to the enhancement of teaching and learning (Horsburgh, 1997). A major plank of the institution's philosophy is to empower staff to find their own means of improvement, to foster innovation and encourage staff to act in a professional way as enhancers of learning (Hinchcliffe, 1993).

At the Hogeschool Holland, EQM has helped to clarify the purpose and focus of internal quality assessment. However, it has resulted in an improvement in self-evaluation and the development of systems of quality assurance rather than enabling effective, continuous improvement of the student learning experience (van Schaik & Kollen, 1995).

In Chile, the existence of external quality monitoring has led to the establishment of permanent quality control or accrediting processes within institutions, some significant curriculum content reforms, improvements in instruments for assessing student learning, and the implementation of pedagogical upgrading programs (Silva, Reich & Gallegos, 1997). What is less clear is whether the process is leading to a change in culture towards one of internally driven quality improvement.
 External evaluation is a procedure that appears to be acceptable in the
 Chilean university system. This implies that a progress towards a `culture
 of evaluation' is occurring in this country. The effects of external
 quality monitoring seem [italics added] to be positive so far ... The
 ultimate impact on the external evaluation procedures in progress will show
 up as the planned or agreed actions or changes are fully implemented and
 properly monitored. Then an adequate `perturbation' can be expected in the
 institution or in the system. (Silva, Reich, & Gallegos, 1997, pp. 33-34)


A similar situation obtains at Monash University in Australia, where there is a sense that the short-lived process of external quality monitoring did focus attention on teaching and learning:
 At Monash, it seems that there have been significant gains in three main areas:


* Course approval procedures have become more rigorous, with greatly increased attention to the need for structure, planning and analysis.

* There is increased awareness of students' perspectives on teaching and learning, and this input has become an essential part of the process of shaping and reshaping programmes in at least some areas of the university.

* There is a perceptible shift in the climate, with a new attention to teaching issues, and an intensification of debate about effective learning.

(Baldwin, 1997)

The first two points illustrate the initial-impact effect of EQM, found in many institutions around the world. Taken-for-granted practices and procedures have had to be confronted and clearly documented. It represents the minimum required shift from an entirely producer-oriented approach to higher education to one that acknowledges the rights of other stakeholders to minimum information and a degree of `service'. This is a laudable outcome and, in an information-driven world, an outcome not before time. Baldwin adds:
 The third effect, a shift in climate, is the least tangible, but probably
 the most important. In the end, specific regulations matter far less than
 the quality of attention given to teaching and learning. One of the great
 frustrations for individuals concerned about the quality of teaching and
 learning in universities has been the awareness that, with all the
 formidable brain power concentrated in these places, very little has been
 turned to the intellectual analysis of teaching and learning. This is not
 to deny that much excellent teaching has gone on, but it has not often
 enough been the subject of reflection and debate. This situation seems to
 be changing for the better. (p.60)


However, she suggests that this may be as much to do with the impact of new technology as with external quality monitoring. Furthermore, some of her colleagues are far from convinced that external quality monitoring represents an overall gain rather than a loss, as the costs of the process include excessive bureaucratisation, a greatly increased administrative workload, a formalism that can stifle creativity and individuality, and an implicit lack of trust in academic staff. Baldwin is optimistic:
 If the elaborate quality assurance mechanisms were necessary as a catalyst
 for change, then many -- particularly those associated with documentation
 -- should in time wither away, or at least become greatly simplified. If
 the principles involved are not internalised, they cannot be effective.
 (p.61)


Herein lies the problem. The effectiveness of external monitoring depends on three things:

* the withering away of the bureaucratic, accountability, conformance process;

* the linking of a lighter-touch external review to well-developed internal procedures for quality improvement;

* the development of an internal quality culture, widely embraced, for which internal procedures are guides and aids to appropriate practice.

As yet there is little evidence of a withering away of external procedures (except in the spectacular case of Australia, although, even here, the residue appears to be convoluted systems within institutions). There is too much vested interest in the self-perpetuation of monitoring bureaucracies to expect a gradual withering away in most countries -- witness the protracted merger of the audit and assessment processes in England. Apart from possibly Sweden, there is little indication of the development of `light touch' external monitoring being linked to internal improvement agendas.

The dominant `delegated accountability' approach to `quality' that emphasises `procedures' has led to a degree of scepticism about quality that is counterproductive in the development of a quality culture within institutions -- even where quality procedures are in place, albeit not referred to in such terms. For example, in some institutions there is a well-established culture of dialogue between teaching staff and students with consequent amendment of course content, teaching style and assessment procedures. Yet this is often overlooked as a quality process because it lacks the formalism of a prescribed procedure.

Conclusion

However, despite these concerns, progress is being made. There is a growing momentum to link external quality monitoring more firmly to internal procedures for quality improvement (Rasmussen, 1995; Rovio-Johansson & Ling, 1995). Given the accountability demands of politicians and civil servants, this shift from accountability to improvement is partial. None the less, there is a growing concern that quality monitoring has to be about improving what is delivered to stakeholders, even where this requires some substantial reconsideration of the higher education raison d'etre.

Accountability still remains a priority in many systems and there is a concern that credibility through accountability has to be established first and then improvement will follow. There have been attempts to argue that improvement and accountability are not incompatible aims. However, there is little empirical research that attempts to show that a methodology that places primary emphasis on accountability can effect real continuous improvement. The alternative approach, to establish a process of continuous improvement from which accountability automatically follows, is rarely attempted. The system in Sweden, however, remains fairly unusual in placing emphasis on an audit of clearly articulated improvement programs. This is a fundamentally simple idea, but one that seems to have eluded the monitoring agencies in many countries.

Real enhancement is internally driven. If enhancement is also intended to develop the transformative ability of students, then quality monitoring needs to adopt a transformative framework, rather than simplified operationalisations such as fitness for purpose. Overall, attempts at enhancement through quality monitoring have been hindered rather than helped by the dominant paradigm of delegated accountability and its amateurish methodology, which tends to reinforce traditional ways of working within institutions and, of course, is directed much more at accountability than enhancement.

Only if external quality monitoring is clearly linked to an internal culture of continuous quality improvement that focuses on identifying stakeholder requirements in an open, responsive manner will it be effective in the long run. Quality monitoring is in need of a `paradigm shift' that turns it from an accountability tool to a fundamental support in the development of a culture of continuous improvement of the transformative process. As we move into the next millennium, higher education needs to produce transformative agents -- critical reflective citizens -- and the external quality monitoring must help, not hinder, that development.

Keywords
accountability
higher education
improvement
monitoring
performance indicators
quality assurance


Note

(1) Such exercises, of course, rarely measure the value of the output against the cost of the research, but assume, implicitly, that well-rated research, in terms of peer review, is `good value'. Such practices also have another political dimension, to ensure that substantial research money is concentrated rather than spread too thinly and that it is awarded to the `correct' institutions, not least to ensure the status quo is retained.

References

Acherman, H. (1995). Meeting quality requirement. Abstract of paper, with additional comments, presented at the Organisation for Economic Co-operation and Development, Programme on Institutional Management in Higher Education Seminar, Paris, 4-6 December 1995.

Askling, B. (1997). Quality monitoring as an institutional enterprise. Quality in Higher Education, 3(1), 17-26.

Astin, A. W. (1991). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education. New York: American Council on Education and Macmillan.

Ayarza Elorza, H. (1993). Quality assurance in Latin America: An overview of university accreditation. Paper presented at the First Biennial Conference and General Conference of the International Network of Quality Assurance Agencies in Higher Education, Montreal, Canada, 24-28 May 1993.

Baldwin, G. (1997). Quality assurance in Australian higher education: The case of Monash University. Quality in Higher Education, 3(1), 51-61.

Baldwin, P.J. (1992). Higher education funding for the 1993-1995 triennium. Canberra: AGPS.

Banta, T. (1995). An assessment of some performance indicators used in funding: Performance funding in Tennessee at age sixteen. Paper presented at the 17th Annual EAIR Forum, `Dynamics in higher education: Traditions challenged by new paradigms', Zurich, Switzerland, 27-30 August 1995.

Barblan, A. (1995). Management for quality: The CRE programme of institutional evaluation: Issues encountered in the pilot phase -- 1994-1995. Paper submitted to the Organisation for Economic Co-operation and Development, Programme on Institutional Management in Higher Education Seminar, Paris, 4-6 December 1995.

Bauer, M. & Kogan, M. (1995). Evaluation systems in the UK and Sweden: Successes and difficulties. Paper for the Conference on `Evaluating universities', AF-Forum, Rome, 26-27 September 1995.

Bell, C. (1995). Preliminary lessons to be drawn from the case studies. Introductory remarks of panel moderator at the Organisation for Economic Co-operation and Development, Programme on Institutional Management in Higher Education Seminar, Paris, 4-6 December 1995.

Escudero, T. (1995). Evaluation fever at the Spanish University: A critical analysis. Paper presented at the 17th Annual EAIR Forum, Dynamics in higher education: Traditions challenged by new paradigms, Zurich, Switzerland, 27-30 August 1995.

Frederiks, M. M. H., Westerheijden, D. F., & Weusthof, P. J. M. (1993). Self-evaluations and visiting committees: Effects on quality assessment in Dutch higher education. Paper presented to the 15th EAIR Forum, University of Turku, 15-18 August 1993.

Harvey, L. (Ed.). (1993). Quality assessment in higher education: Collected papers of the QHE Project. Birmingham: QHE.

Harvey, L. (1994). Continuous quality improvement: A system-wide view of quality in higher education. In P. T. Knight (Ed.), University-wide change, staff and curriculum development (SEDA Paper 83, pp. 47-70). Birmingham: Staff and Educational Development Association.

Harvey, L. (1995). The new collegialism: Improvement with accountability. Tertiary Education and Management, 2(2), 153-160.

Harvey, L. (1997). Editorial. Quality in Higher Education, 3(1), 3-4.

Harvey, L. & Burrows, A. (1992, Summer). Empowering students. New Academic, pp. 1ff.

Harvey, L. & Green, D. (1993). Defining quality. Assessment and Evaluation in Higher Education: An International Journal, 18 (1), 9-34.

Harvey, L. & Knight, P. (1996). Transforming higher education. Buckingham: Open University Press and Society for Research into Higher Education.

Harvey, L. & Mason, S. (1995). The role of professional bodies in higher education quality monitoring. Birmingham: QHE.

Higher Education Evaluation Council (Finland). (1997). Action plan for 1998-1999. (Publications of HEEC, 5:1997). Helsinki: Edita.

Higher Education Funding Council for England. (1994). The quality assessment method from April 1995 (HEFCE Circular, 39/94). Bristol: Author.

Higher Education Funding Council for England, Scottish Higher Education Funding Council, and Higher Education Funding Council for Wales. (1993). A report for the Universities Funding Council on the conduct of the 1992 Research Assessment Exercise. Bristol: Author.

Higher Education Quality Council, Division of Quality Audit. (1993). Notes for guidance of auditors. Birmingham: Author.

Hinchcliffe, J. (1993). Total quality management: A New Zealand perspective. Paper presented at AUSTAFE Conference, May 1993, Canberra.

Horsburgh, M. (1997). External quality monitoring in New Zealand tertiary education. Quality in Higher Education, 3(1), 5-15.

Ifrim, M. (1995). Accreditation and quality assurance in higher education institutions in Romania. QA, 8, 14-19.

Inspectie Hoger Onderwijs. (1992). De bestuurlijke hantering van de resultaten van de externe kwaliteitszorg 1989 in het wetenschappelijk onderwijs (Rapport 1992-8). Zoetermeer: Ministerie van Onderwijs en Wetenschappen.

Karlsen, R. & Stensaker, B. (1995). Between governmental demands and institutional needs: Peer discretion in external evaluations--what is it used for? Paper presented at the 17th Annual EAIR Forum, Dynamics in higher education: Traditions challenged by new paradigms, Zurich, 27-30 August 1995.

Luukkonen, T. & Stahle, B. (1990). Quality evaluations in the management of basic and applied research. Research Policy, 19, 357-68.

Marchese, T. (1989). Summary comments at the FIPSE Conference, Santa Fe, New Mexico, 7 December 1989. In Proceedings: Assessment and Accountability in Higher Education (p. 17). Denver: Education Commission of the States.

Meade, P. (1993). Recent development in quality assurance in Australian higher education: Strategies for professional development. Paper presented at the First Biennial Conference and General Conference of the International Network of Quality Assurance Agencies in Higher Education, Montreal, 24-28 May 1993.

Ministry of Education. (1991). Financial reporting for tertiary institutions. Wellington: Author.

Mockiene, B., & Vengris, S. (1995). Quality assurance in higher education in the Republic of Lithuania: Implications and considerations. In Background papers for the Third Meeting of the International Network of Quality Assurance Agencies in Higher Education, 21-23 May 1995, Utrecht (pp. 204-208). Utrecht: VSNU/Inspectorate of Education.

National Board of Employment, Education and Training, Higher Education Council. (1992). Higher education: Achieving quality. Canberra: AGPS.

Petersen, J. C. (1995). Report proposes accreditation changes in US. QA, 8, 6-7.

Rasmussen, P. (1995). A Danish approach to quality in education: The case of Aalborg University. Paper, with additional comments, presented at the Organisation for Economic Cooperation and Development (OECD), Programme on Institutional Management in Higher Education (IMHE) Seminar, at OECD, Paris, 4-6 December 1995.

Rear, J. (1994). Institutional responses in British higher education. In D. Westerheijden, J. Brennan, & P. Maasen (Eds.), Changing contexts of quality assessment: Recent trends in West European higher education (pp. 75-94). Utrecht: Lemma.

Ribier, R. (1995). The role of governments vis-a-vis the evaluation agencies. In Background papers for the Third Meeting of the International Network of Quality Assurance Agencies in Higher Education, 21-23 May 1995 (pp. 214-215). Utrecht: VSNU/Inspectorate of Education.

Rovio-Johansson, A. & Ling, J. (1995). Comments on the experiences of one university in the CRE programme of institutional evaluation at the Organisation for Economic Co-operation and Development, Programme on Institutional Management in Higher Education Seminar, Paris, 4-6 December 1995.

Saarinen, T. (1995). Systematic higher education assessment and departmental impacts: Translating the effort to meet the need. Quality in Higher Education, 1(3), 223-234.

Silver, H. (1993). External examiners: Changing roles? London: CNAA.

Staropoli, A. (1991). Quality assurance in France. Paper presented to the Hong Kong Council for Academic Accreditation Conference on `Quality assurance in higher education', Hong Kong, 15-17 July.

Thune, C. (1993). The experience with establishing procedures for evaluation and quality assurance of higher education in Denmark. Paper presented at the First Biennial Conference and General Conference of the International Network of Quality Assurance Agencies in Higher Education, Montreal, 24-28 May 1993.

van Schaik, M. & Kollen, E. (1995). Quality management at the Hogeschool Holland: Towards a policy of systematic quality assessment. AG Diemen: Hogeschool Holland.

Warren Piper, D. J. (1994). Are professors professional? The organisation of university examinations. London: Jessica Kingsley.

Yorke, M. (1998). Performance indicators relating to student development: Can they be trusted? Quality in Higher Education, 4(1), 45-61.

Professor Lee Harvey is the Director of the Centre for Research into Quality, University of Central England in Birmingham, 90 Aldridge Road, Perry Barr, Birmingham, United Kingdom.
