The evolution of measurement and evaluation of libraries: a perspective from the Association of Research Libraries.

ABSTRACT

This paper reviews the evolution of measurement and evaluation in libraries from the perspectives of three important figures who have shaped the history of library assessment activities: James Gerould, F. Wilfrid Lancaster, and Duane Webster. Although Lancaster is about a decade older than Webster and almost half a century removed from Gerould, the contributions of the three individuals knit a common fabric in the development of assessment in libraries over the past century. In investigating the interconnections among the three, not only can we gain an understanding of how we arrived at today's world of library evaluation, but we can also glimpse future developments in the field. James Gerould was a library administrator, Lancaster a library educator, and Webster a library association executive. Each brought unique perspectives to the evaluation and measurement of library services. In this article we offer a tribute to Lancaster's accomplishments within the context of the work done in the Association of Research Libraries as it was shaped over the years between Gerould and Webster, from the beginning toward the end of the twentieth century.

INTRODUCTION

Lancaster's pioneering work in the field of measurement and evaluation of libraries may only be fully appreciated when viewed in the context of what preceded and what followed it. An appreciation of this trajectory can assist understanding as developments in library measurement and evaluation unfold in future years. In the following essay we approach the antecedents of the measurement and assessment program conducted by the Association of Research Libraries (ARL) over the last decade, particularly focusing upon Lancaster's role in the evolution of library evaluation and measurement. As is fitting for a Festschrift, we will season our narrative with personal views.

The year 2008 is a milestone for the field of library measurement and evaluation, as the profession celebrates the contributions of key figures, both as individuals and as members of associations. From the one hundredth anniversary of the Gerould Statistics to Duane Webster's retirement as the Association of Research Libraries' executive director, ARL celebrates a century of library evaluation activities. Lancaster's Festschrift is a most fitting volume to be published in the same year.

From their respective positions Gerould, Lancaster, and Webster each brought different, unique, and complementary perspectives to the evaluation and measurement of library services. Recognizing Lancaster's monumental influence in the field as an educator whose work on evaluation and measurement succinctly summarizes hundreds of library evaluation studies, we also acknowledge the importance of two key figures within the ARL evaluation culture: a practicing administrator like Gerould, who used data to demonstrate the value of libraries, and an unfailing advocate like Webster, who promotes libraries through advocacy and outreach.

LANCASTER WITHIN OUR CONTEXT

Lancaster's career and research interests span the entire field of library science, from areas such as organization of knowledge, indexing, and abstracting services to evaluation and measurement of library services. Lancaster is truly a library renaissance man who had an impact on everything he did. He is undoubtedly one of the most prolific authors in the library field, highly cited and continuously publishing. When Martha Kyrillidou was a student of his, she once asked him what motivated him to publish in such a prolific fashion, and he responded that knowledge became obsolete so quickly that you had to endeavor constantly to keep up with developments and to write up results. Above all, Professor Lancaster was an educator whose thoughtful teaching inspired creative thinking and sparked research ideas in all those around him. He bridged his roles of educator and researcher by authoring landmark textbooks that reported the results of his research and served as pedagogical tools in communicating results. Lancaster indelibly stamped the field of library evaluation and measurement with his two landmark textbooks, each published in two editions (Lancaster, 1977, 1988, 1993; Baker & Lancaster, 1991). His synthesis of measurement and evaluation reflects his work and interests in systems analysis applied to the management of libraries in the 1970s. While James Gerould pioneered the collection of input data as represented in the Gerould Statistics, Lancaster's contributions to assessment and evaluation reflect contemporary thinking at a time when libraries increasingly saw themselves as parts of larger systems that needed to be described not only in terms of inputs but also in terms of outputs, processes, outcomes, and impacts.

Overlapping in time and influenced by the systems approach promulgated by Lancaster, Webster, then a program officer for ARL's Office of Management Studies, emphasized in his organizational development work the human management processes operating in research libraries as systems. While Lancaster's contributions to library measurement and evaluation may be viewed as the synthesis of a growing corpus of applied research on evidence-based methods from the perspective of a library educator and researcher, Webster's parallel contribution comes from the perspective of implementing organizational change using the kinds of studies synthesized by Lancaster. From his vantage point in the ARL Office of Management Studies, Webster believed that creating valued experiences and leadership awareness within each library organization was a key element for effectively implementing change. Webster saw an important role for library assessment and was a strong supporter of evidence-based methods and practices (Webster, 2007).

Ultimately, both of these perspectives, the descriptive-analytical one and the human relations-behavioral one, contributed to an increased awareness of libraries as symbolic entities manifesting elements of affect of service, information control, and library as place that generate perceptions and expectations as library users come into contact with them. This kind of awareness has shaped the current authors' perspectives regarding library evaluation and measurement in more recent years, notably with the development of LibQUAL+.

It is particularly fitting within a Festschrift rubric to provide a word about our personal perspectives on library measurement and evaluation as they have been shaped over the years. Martha Kyrillidou entered the field wanting to improve library services--primarily in Greece at that time (Kyrillidou, 1990). Realizing that the best way to effect positive changes in library services was by strengthening libraries' assessment capacity, she studied evaluation and measurement at Kent State and then moved to the University of Illinois at Urbana-Champaign (UIUC) for the opportunity to study under Lancaster and work at the Library Research Center. She was hired by ARL in 1994, having been tutored by library educators such as Lancaster and Linda Smith at UIUC while also having the opportunity to work with Robert Molyneux and Kendon Stubbs on projects like the Association of College and Research Libraries (ACRL) Statistics. Coming to ARL presented the opportunity of working closely with Webster and other library leaders who had a strong interest in library assessment activities from organizational and leadership development perspectives and who wanted to develop "new measures" (Kyrillidou & Crowe, 1998; Kyrillidou, 2002) like LibQUAL+.

Molyneux and Stubbs had worked extensively with the ARL Statistics data (Molyneux, 1986; Stubbs & Molyneux, 1990), and ARL has collaborated with ACRL in permitting the ARL Statistics instrument to be used to collect data from non-ARL libraries through ACRL. In particular, Stubbs had developed the ARL Membership Criteria Index in the mid-1980s (Stubbs, 1986a, 1986b, 1988). He had examined thoroughly the application of the quantitative method of factor analysis to all academic libraries in the United States (Stubbs, 1980, 1981) and applied the same methodology in developing the ARL Membership Criteria Index, which was published annually in the Chronicle of Higher Education until 2005. In his work Stubbs also provided insights on how the ARL Statistics may be used to describe not only ownership but also access (Stubbs, 1993). Stubbs' work built upon Gerould's and has influenced the way we now describe library services (Kyrillidou, 2000, 2002; Weiner, 2005).
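
For readers unfamiliar with the technique, the sketch below illustrates the general idea behind building a single size index from several library statistics variables using the first principal component, which is the core of a factor-analytic approach. It is only a minimal illustration: the variables, figures, and weighting choices are hypothetical and do not reproduce Stubbs's actual procedure or the published ARL Membership Criteria Index.

import numpy as np

# Rows are libraries; columns are volumes held, volumes added, current
# serials, total expenditures, and total staff (all figures hypothetical).
X = np.array([
    [9_500_000, 180_000, 65_000, 42_000_000, 450],
    [3_200_000,  90_000, 30_000, 18_000_000, 210],
    [6_800_000, 140_000, 52_000, 31_000_000, 330],
    [4_700_000, 110_000, 41_000, 23_000_000, 260],
], dtype=float)

# Standardize the logged values so no single large count dominates the index.
Z = np.log(X)
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)

# The first principal component of the correlation matrix supplies the weights.
eigenvalues, eigenvectors = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
weights = eigenvectors[:, np.argmax(eigenvalues)]
if weights.sum() < 0:  # orient the component so larger resources yield larger scores
    weights = -weights

scores = Z @ weights
for name, score in zip(["Library A", "Library B", "Library C", "Library D"], scores):
    print(f"{name}: {score:+.2f}")

The log-and-standardize step is one common way to keep such an index from being driven entirely by the largest raw counts; the resulting scores are meaningful only relative to the group of libraries included in the calculation.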

At the end of the 1990s, Colleen Cook, a longtime library administrator at Texas A&M University Libraries, and Fred Heath, then Dean of Libraries at Texas A&M, grappled with the notion that considering library service from a user perspective was primary in helping library organizations focus on priorities. Building on work done at Texas A&M by Parasuraman, Zeithaml, and Berry in the services marketing field in developing SERVQUAL (Parasuraman, 2002), Heath and Cook brought these perspectives into the arena of research librarianship (Cook, 2001; Cook & Heath, 2001). Through the pioneering work of LibQUAL+, Cook and Heath were able to bring a stronger focus on the library user not only to the Texas A&M libraries but to more than one thousand other libraries through the establishment of the LibQUAL+ suite of services at ARL (Cook, Heath, Kyrillidou, & Webster, 2002). With expert psychometric advice and a strong commitment to the empirical research process from Bruce Thompson (Cook & Thompson, 2001; Cook, Heath, & Thompson, 2001, 2002, 2003; Cook, Heath, Thompson, & Thompson, 2001), qualitative evaluation methods advice from Yvonna Lincoln (Lincoln, Cook, & Kyrillidou, 2004, 2005), and strong leadership support from Webster, as well as the commitment of hundreds of library administrators around the world who saw the focus on the library users as an inescapable evaluation perspective, LibQUAL+ emerged as the twenty-first century version of the Gerould Statistics (Kyrillidou & Heath, 2001, 2004).

Kyrillidou and Cook have worked together on the LibQUAL+ project from their respective positions at ARL and Texas A&M (Thompson, Kyrillidou, & Cook, 2007a, 2007b, 2008; Thompson, Cook, & Kyrillidou, 2005, 2006) and, more recently, in the ARL evaluation program at large since Cook assumed the chair of the ARL Statistics and Measurement Program in 2006, for which Kyrillidou serves as the assigned ARL staff member. In retrospect, key figures such as Gerould, Lancaster, and Webster, and their related disciples and colleagues, have greatly influenced the development of library evaluation activities, and their work guides current thinking as new territory is charted.

ARL STATISTICS AND THE GEROULD STATISTICS

Statistics have been collected and published annually for the members of the Association of Research Libraries since 1961-62, and the data are available through an interactive Web interface. Prior to 1961-62, annual statistics for university libraries were collected by James Gerould, first at the University of Minnesota and later at Princeton University (Stubbs & Molyneux, 1990). These data, covering the years 1907-8 through 1961-62, are now called the Gerould Statistics (Molyneux, 1986). The whole data series from 1908, which is available on the ARL server, (1) represents the oldest and most comprehensive continuing library statistical series in North America.

Gerould was the first full-time librarian at the University of Minnesota and according to the records on this library's website, he
 brought a new energy to the Library and sought to meet the changing
 needs of research and instruction. Throughout the United States,
 faculty, scholars and students were pressing libraries for more
 books and easier access. Gerould increased the acquisition budget
 to $20,000, added specialized journals, and acquired library
 collections from Europe. During his administration, the collection
 grew from 50,000 to 400,000 volumes including Scandinavian holdings
 that formed the nucleus of today's outstanding collection. In 1912,
 Gerould and reference librarian Ina Firkins launched the first
 University lecture series on the use of the library. At the end of
 Gerould's administration in 1920, the Board of Regents approved
 construction of a new library building. Gerould, a primary force in
 organizing the Association of Research Libraries in 1932, is best
 known in the library profession as the founder of the Association
 of Research Libraries statistics, a national compilation of library
 collection statistics. In 1920, Gerould became head of the
 Princeton Library. There he again faced the problems of building
 the collection and plans for a new library. After seventeen years,
 he retired to spend the rest of his life in Williamsburg, Virginia
 where he died in 1951. (2)


ARL libraries are a relatively small subset of libraries in North America, but the member libraries (123) are the largest research libraries in North America, representing 16 Canadian and 107 U.S. research institutions. Of these, 113 are university libraries; the remaining 10 are public, governmental, and nonprofit research libraries. Together the university libraries account for a large portion of academic library resources in terms of assets, budgets, and the number of users they serve. The total library expenditure of ARL libraries in 2004-5 was almost $3.6 billion; of that total, roughly $2.68 billion was spent by the university libraries and more than $900 million by the nonuniversity libraries.

ARL Statistics is a series of annual publications that describe collections, staffing, expenditures, and service activities for the 123 members of the Association of Research Libraries. The academic libraries, which comprise about 92 percent of the membership, include 14 Canadian and 99 U.S. libraries.

ARL Statistics has not remained static over the years (Stubbs, 1980, 1981, 1986a, 1986b, 1988, 1993). It has evolved by incorporating an increasing number of variables in describing library operations. The Gerould Statistics are fully documented by Molyneux, so we will briefly describe only some of the more recent changes in the datafiles. In 1989, ARL Statistics collected 42 data elements; by 2004 the number of data elements, or variables, had increased to 62. In particular, a number of alternate-format collection variables were added in 1992, covering government documents, manuscripts and archives, maps, and graphic, audio, video, and computer files. In 1994, a series of service-specific variables was incorporated, including group presentations, the number of participants in these group presentations, reference transactions, and initial, total, and reserve circulation statistics (Kyrillidou, 2000). More recently, in 2003, a series of variables related to expenditures for electronic resources was incorporated, covering expenditures ranging from electronic serials and monographs to bibliographic utilities. This brief history of the evolution of data elements reflects our evolving understanding of how to describe libraries in more complex and intriguing ways:
 The most recent ARL Statistics 2005-06 describe a familiar picture
 for research libraries in North America. The rising cost of serials
 is outpacing general inflation, the cost of monographs is hovering
 close to inflation, and salaries are increasing moderately more
 quickly than inflation. (3) The numbers of reference and circulation
 transactions have fallen from their levels of 10 years ago, (4) but
 more users participated in instructional services offered by the
 library. (5) Librarians are becoming more involved in the
 instructional process and are increasingly an integral part of the
 teaching and learning infrastructure at their institutions. The
 introduction of digital information and the dramatic changes in the
 nature of content has transformed the way the size of library
 collections is measured. For example, in 2005-06, ARL libraries
 spent 43% of their materials budget on electronic resources--a
 total of $431 million out of $1.1 billion. This measure indicates
 the quantity and complexity that libraries are dealing with, but
 ultimately these figures cannot offer much when it comes to
 describing the quality of research, teaching, and learning at an
 institution. (Kyrillidou, 2008, p. 9)


In a world where descriptive statistics still serve a basic need, we have recently implemented another set of major changes in the ARL Statistics (Kyrillidou & Young, 2008, pp. 9-11) so that they will continue to be relevant in the years to come. There are three major directions in the most recent changes: (a) from serial subscriptions to serial titles, (b) from collections to expenditures, and (c) toward developing new indicators and variables through iterative qualitative work to develop library profiles. In particular, definitions for serials were changed so that serials are counted not as subscriptions but as titles, placing emphasis on the intellectual content of this unit of measurement. The historical ARL membership criteria index that includes collections-related variables is no longer published in the Chronicle of Higher Education. In its place we have calculated a new index, the Expenditures-Focused Index (EFI) (Thompson, 2007). The EFI, though not ideal, is a practical way to describe the size of a library for the near future. Through iterative qualitative work we aim to produce a new set of variables that will describe collections, services, and collaborative relations in richer ways in the coming years. These changes aim to revive the relevance of the ARL Statistics in the twenty-first century.

ARL has a long-standing history in library statistics thanks to Gerould, but it has an increasingly relevant standing in the history of library statistics thanks to the demonstrated leadership of Webster. In 2007, the ARL Executive Director, Duane Webster, announced that he would retire in 2008. Webster has demonstrated a passionate desire for improving library services through leadership training and organizational development efforts. He has shown that evidence-based improvements are critical for library organizations and throughout his service at ARL has strengthened the statistics and measurement capability (Webster, 2007). It has become an operation with global impact and leverage beyond the small yet powerful group of major North American research libraries that comprise the membership of ARL.

With a strong vision for library collaboration, Webster fostered a culture of engagement that has expanded the influence of the ARL Statistics. ARL Statistics has influenced the data collection activities of smaller academic libraries that report annual data to the Association of College and Research Libraries (ACRL), a division of the American Library Association (ALA), using a survey version of the ARL Statistics. In the United States, ARL has been represented in the advisory group for the Academic Library Survey conducted by the National Center for Education Statistics on a biennial basis. Outside the United States, ARL has collaborated with statistics-gathering efforts in Canada through the Canadian Association of Research Libraries (CARL), in the United Kingdom through the Society of College, National and University Libraries (SCONUL), and in Australia through the Council of Australian University Librarians (CAUL). The foresight of James Gerould in establishing the ARL Statistics has resulted in effects and impact unforeseen at that time.

Webster's work has benefited from the systems perspectives that preceded him and from much of the work that Lancaster summarized in his two volumes, If You Want to Evaluate Your Library and The Measurement and Evaluation of Library Services.

LANCASTER'S MONOGRAPHS

Lancaster has the qualities of being not only a great analytical thinker but also a great synthetic thinker and writer. His work on measurement and evaluation is a sound demonstration of both abilities. He published two books, in four editions altogether, in which he synthesized the literature of evaluation and measurement. He viewed his first book, The Measurement and Evaluation of Library Services, more as a textbook for students to study and learn about measurement and evaluation, and his second book, If You Want to Evaluate Your Library, as a monograph with more practical applications for those interested in engaging in evaluation studies. In reality both books complement one another in equally important ways for both the library apprentice and the practitioner.

Lancaster's first book is known as the first definitive review and synthesis of evaluation techniques in libraries. He authored it with the assistance of M. J. Joncich, a librarian at the University of Illinois at Urbana-Champaign with expertise in collection development. The chapter on collection evaluation in particular, which was coauthored with Joncich, organizes the studies reviewed into three sections:

1) quantitative, including size, formulas, and growth rate; 2) qualitative, embracing the impressionistic approach and evaluation against lists or the holdings of other libraries; and 3) use studies (which receive the greatest emphasis), including circulation and in-house use. Descriptions of various methods for analyzing use cover the advantages and disadvantages of a 'collection sample' (in which a portion of the collection's past use is determined) versus a "checkout sample" (studying what is used during a specific time). Jain's "relative use method" (comparing a sample's actual use with expected use) and Trueswell's "last circulation copy" approach are explained. This excellent chapter is especially valuable for its synthesis of prior research. Over eighty previous studies, dating as far back as 1936, are cited, with detailed summaries provided for many. (Nisonger, 1992, p. 6)
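
The relative use comparison mentioned in the passage above lends itself to a short worked example. The sketch below, with hypothetical subject classes and counts, compares each class's share of circulation with its share of the holdings; it is only an illustration of the general idea, not a reconstruction of Jain's or Trueswell's published procedures.

# Hypothetical subject classes with their holdings and annual circulation counts.
holdings = {"History": 40_000, "Chemistry": 15_000, "Literature": 30_000, "Law": 15_000}
circulation = {"History": 9_000, "Chemistry": 6_000, "Literature": 4_500, "Law": 1_500}

total_holdings = sum(holdings.values())
total_circulation = sum(circulation.values())

for subject in holdings:
    expected_share = holdings[subject] / total_holdings      # share of the collection
    actual_share = circulation[subject] / total_circulation  # share of recorded use
    relative_use = actual_share / expected_share              # >1: used more than its size predicts
    print(f"{subject:<12} expected {expected_share:.1%}  actual {actual_share:.1%}  "
          f"relative use {relative_use:.2f}")

Classes whose relative use falls well below 1 would be natural candidates for the kind of closer examination these use studies were designed to support.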

Lancaster's first book on library evaluation received the American Library Association's Ralph Shaw Award in 1978. The book presents chapters in many areas that are still relevant to libraries. Even though at first glance a reader may think that some areas of evaluation are obsolete, as soon as we rethink those areas in terms of today's reality we realize how they have morphed. For example, among his first chapters are "Studies of Catalog Use" and "Evaluation of Collections." Many of the studies Lancaster summarizes give us insight into the management information functions now supported by modern integrated library systems (ILSs). Circulation studies, which have become much easier to conduct nowadays, give library administrators insight into which parts of the collection are used. Studies like the OCLC collection overlap studies across different libraries and organizations are extensions of earlier studies conducted in local systems at a scale that was more feasible at the time.

Services feature prominently in Lancaster's literature review, with a whole chapter on reference services, another on literature searching and information retrieval, and a separate one on document delivery. He devotes a separate section to the evaluation of technical services, another to automated systems, and even a distinct chapter to the role and relevance of standards. His conclusions regarding library standards are still relevant today:
 In general, library standards have a tendency to be guidelines
 rather than true enforceable standards of the type that govern
 engineering and manufacturing operations. Present standards are
 largely based on current practices at existing institutions that,
 in some sense, are considered "good." They emphasize inputs rather
 than outputs (services). Also, the great diversity among libraries
 makes it extremely difficult, and even dangerous, to attempt
 development of precise, quantifiable standards. Consequently,
 library standards as they now exist, while having some value as
 procedural guidelines or in establishing absolute minimal
 requirements for various types of libraries, are too general and
 imprecise to be used in the detailed evaluation of library
 services. Perhaps what is needed is standards by which individual
 institutions can evaluate their own performance in relation to the
 needs of their user population; that is, standards or guidelines
 are needed for conducting the type of evaluation studies discussed
 in this book. (Lancaster, 1977, pp. 296-297)


His "Library Surveys" chapter is worth noting as reflecting the prevailing notions that surveys were limited and subjective in nature as many local surveys were constructed at that time with limited scope. In particular, it is worth noting the emphasis on the prevailing notion of that time that methodological imperatives often dictate the perceived usefulness of surveys over the utility and impact of the data on decision making:
 It is clear that the library survey, if it is to produce results of
 any value, must be carefully designed according to procedures that
 are well-established in social science research. Samples must be
 scientifically derived, and all proposed approaches to the
 gathering of data must be critically examined to determine their
 validity and reliability. Appropriate statistical procedures must
 be applied in the analysis and interpretation of the survey
 results. (Lancaster, 1977, p. 309)


Similar concerns are being expressed by library educators like Bertot even today (Bertot & Jaeger, 2008).

And in the well-known balancing act that Lancaster often achieves when drawing conclusions, he follows this section with a paragraph emphasizing the utility of survey data in decision making:
 A well conducted library survey can produce a considerable number of
 data that are of potential value in the evaluation of library
 services. This is especially true if the survey goes beyond purely
 quantitative data on volumes and types of use, and general
 characteristics of the users, and attempts to assess the degree to
 which the library services meet the needs of the community
 served.... At the very minimum, however, a well-conducted survey
 can provide a useful indication of how satisfied the users are with
 the services provided, and can identify areas of dissatisfaction
 which may require closer examination through more sophisticated
 microevaluative techniques. (Lancaster, 1977, p. 309)


In general, Lancaster's first monograph views library evaluation as important from a microevaluation perspective. Yet in his conclusion he suggests that we can generalize from several microevaluation studies:
 Evaluation must occur at the level of the local institution, and it
 cannot be assumed that the limitations and failures encountered in
 one library also will apply to another, even one with the same
 general characteristics. Partly from the results of evaluations
 that have been conducted in libraries, and partly from common
 sense, it is possible to identify some major factors that are
 likely to influence the success or failure of the most important
 services that libraries offer. An attempt was made to present such
 factors in this book. (Lancaster, 1977, p. 386)


Lancaster's second edition of Measurement and Evaluation of Library Services had Baker as first author (Baker & Lancaster, 1991). Sharon Baker, a library educator at the University of Iowa School of Library and Information Science, shared Lancaster's strong research interest in evaluation and also had a strong focus on public libraries. As a result, the second edition of their book has more examples from the public library environment. In 2002, Baker coauthored the second edition of a book entitled The Responsive Public Library: How to Develop and Market a Winning Collection, in which she brings a wealth of practical experience and research knowledge, effectively presenting and integrating analysis, planning, change, management, and marketing (Baker & Wallace, 2002).

The first edition of Lancaster's second book, If You Want to Evaluate Your Library, which is primarily focused on evaluation, received the American Library Association's G. K. Hall Award in 1989 (Lancaster, 1988). The second edition of this book, significantly expanded from 193 pages to 352, was published in 1993. In particular it is worth noting the new chapters on the evaluation of bibliographic instruction and on continuous quality control--both areas that have been dominant in the library evaluation scene (Lancaster, 1993). Both editions are organized in three major sections: (1) document delivery services, (2) reference services, and (3) other aspects.

It is worth noting how broadly Lancaster defined document delivery services to include aspects of collection evaluation. He included formulae, expert judgments, bibliographic checking, analysis of use, in-house use, evaluation of periodicals, obsolescence, weeding and use of space, catalog use, and shelf availability. The reference services section includes question answering, database searching, and evaluation of bibliographic instruction. Under other aspects he covers issues related to resource sharing, cost-effectiveness considerations, cost-benefit studies, and continuous quality control.

In Exhibit 1 he presents the library as having two essential aspects--the organization and control aspect and the services aspect--with the first being closer to the inputs in the form of information resources and the second being closer to the outputs in the form of the user community. In this exhibit he summarizes in a simple picture the whole basis of the systems approach to evaluation. He asserts that the "inputs have little value in and of themselves--they can only be evaluated in terms of the role they play in achieving desired outputs.... the outputs of the library--i.e., the services provided--are less tangible than the inputs but much more tangible than the outcomes" (Lancaster, 1993, p. 3).

Furthermore, he offers a justification for the kind of evaluation studies libraries have performed using input-type data, not unlike the EFI recently developed by ARL: "Indeed, it is possible to use certain evaluation methods, applied to input, that are intended to simulate an output situation and thus approximate an evaluation of output.... This is a legitimate approach if one can be sure that the external standard fully reflects the needs of the users of this particular collection" (Lancaster, 1993, p. 5).

The systems approach has dominated the development of the ARL Statistics as they have moved from simply collecting input data to incorporating variables related to outputs. Many of the studies described in Lancaster's books were implemented in research libraries, as such libraries often have a great need to justify their budgets. Lancaster's systems approach and the local evaluation studies informed work on organizational development that in some ways was expanding in parallel. At ARL, managing organizational development became a distinct programmatic area under the leadership of Webster.

ORGANIZATION AND STAFFING

ARL has always had a strong interest in improving organizations, and the collection of descriptive statistics served as the basis of a shared understanding or baseline. Much evaluation research has taken place within ARL libraries over the years, as summarized by Hiller and Self (2004). Much work, though, has also been advanced through collaborative work supported by ARL staff in partnership with member leaders. Webster's contributions are especially important within this collaborative framework with ARL member leaders. In the early stages of his career, Webster was involved in pioneering work in organizational development in research libraries. He was involved in a landmark study entitled Organization and Staffing of the Libraries of Columbia University, in which the systems perspectives summarized in the many studies Lancaster reviewed in his monographs were supplemented by perspectives of organizational culture and staffing, advancing the notion of viewing organizations as complex systems that need to be addressed at a variety of levels (Booz, Allen, & Hamilton, 1973).

In the Columbia case study we first see a general description of the organization from the perspectives of inputs and outputs, supplemented with an understanding of the external environment and of the organization's administrative structure. The case study, though, moves beyond simple description, laying out plans for change. It continues with a section on a "Recommended Plan for the Organization" and a section on a "Recommended Plan of Staffing." In the "Recommended Approaches to Management and Professional Activities" we see elements of teamwork in a section on "Group Problem Solving," the notion of a matrix organization in a section on "Multiple Reporting Relationships," and an effective engagement of planning, policy, and budget formulation, supplemented with sections on working relationships and communication and on staff development. The last chapter of this monograph focuses on the implementation approach: (1) acquainting university officials and library staff with change and (2) implementing change in an orderly fashion.

This case study reveals the multifaceted challenges research library organizations have faced in managing their environments and in moving forward in a meaningful fashion so that they deliver services relevant to their communities. Over Webster's career, the case study approach was applied beyond Columbia to many ARL libraries and other institutions. Furthermore, a multifaceted program of planning activities related to collections, preservation, and other functional areas was developed over the years at ARL.

At the core of these studies lies the keen understanding that library organizations are human and political systems that interact constantly with their environment. For these organizations to remain meaningful to their constituencies, they need to manage change by understanding the environment and developing sound plans that garner support from all involved. For the most part, these activities took place at the campus, library, or functional level in the latter part of the twentieth century. Eventually, though, with the advent of technology at the dawn of the twenty-first century, we have reached a state where we can engage in descriptive/analytical work on a much larger scale; for example, LibQUAL+ has been applied to more than one thousand libraries across the globe during its relatively short lifespan since 2000 (Cook, 2001; Cook & Heath, 2001; Cook & Thompson, 2001; Cook, Heath, & Thompson, 2001, 2002, 2003; Cook, Heath, Thompson, & Thompson, 2001; Thompson & Cook, 2002; Thompson, Cook, & Heath, 2003a, 2003b). Much like Lancaster's work, the LibQUAL+-related family of studies has had an international impact on library evaluation activities (Kyrillidou & Persson, 2006; Kyrillidou, 2005; Kyrillidou, Olshen, Heath, Bonnelly, & Cote, 2005). The LibQUAL+ stream of research also emphasizes the importance of organizational engagement in improvement activities (Kyrillidou, 2006b; Hoseth, 2007).

Below we describe the collaborative assessment tools that have been developed since 2000 under the ARL StatsQUAL umbrella (Kyrillidou, 2005/2006). Implementing behavioral/organizational changes in a collaborative fashion across diverse organizational systems is a concept that is constantly evolving through the variety of consortia and collaborations that have flourished in the library world. It is our hope that collaborative assessment leads to collaborative actions and collaborative changes on a scale that has more impact than libraries have ever experienced before. Leadership is a key ingredient in ensuring that the impact of assessment is fully realized (Lakos, 2007; Hiller, Kyrillidou, & Self, 2006, 2007a, 2007b).

NEW MEASURES AND ARL

In the past decade, and particularly under the leadership of Carla Stoffle (University of Arizona) and Brinley Franklin (University of Connecticut), the Statistics and Assessment Committee of ARL has sought to expand the assessment program of the association beyond the input measures in the ARL descriptive statistics to output, and to some extent outcome, measures under the general rubric of a "New Measures" program (Blixrud, 2003; Nitecki & Franklin, 1999). While it is unlikely that the association will discontinue collecting descriptive statistics of some type, ARL has grown its assessment program substantially and will likely continue to do so in the future, building upon the work of Gerould, Lancaster, and Webster. The pressures for greater accountability are increasing in higher education in particular (Franklin, 2007a, 2007b). Following is a description of ARL's current program of offerings for evaluation and some indication of future plans.

StatsQUAL

The StatsQUAL suite of services includes LibQUAL+, DigiQUAL, MINES for Libraries, and ClimateQUAL. Each measures and evaluates a major program or function intrinsic to research libraries today. All of the StatsQUAL services are predicated upon the assumption that there is fundamental value in assessing an individual library over time and in comparison with peer institutions, for benchmarking purposes and for identification of best practices. While this notion is now generally accepted, before the widespread adoption of LibQUAL+ in academic libraries, particularly in North America and the UK, it was widely understood that library assessment could, by its very nature, only be local. The research stream of LibQUAL+, building upon that of SERVQUAL (Parasuraman, 2002), showed that there do indeed seem to be overarching concepts fundamental to the theory of library service quality that are common, to a greater or lesser extent, to many academic and special libraries throughout the world. Beginning with LibQUAL+ and continuing with all of the associated StatsQUAL services is the assumption that there is enhanced value in providing an assessment of a given library in the context of a peer group.

Through StatsQUAL, libraries gain access to a number of resources that are used to assess a library's effectiveness and contributions to teaching, learning, and research (Kyrillidou, 2006a; Town, 2006; Cook, 2006; Thompson, 2006; Franklin, 2006; Plum, 2006). StatsQUAL presents these tools in an interactive framework that integrates and enhances data mining and presentation both within and across institutions. In addition to the suite of instruments, StatsQUAL also offers a growing dataset of survey results with access to data warehouse capabilities. A few words about each of these tools follow.

LibQUAL+

LibQUAL+ is used to solicit, understand, and act upon users' opinions of service quality. The program's centerpiece is a rigorously tested Web-based survey bundled with training that helps libraries assess and improve library services, change organizational culture, and market the library.

LibQUAL+ enables systematic assessment and measurement of library service quality, over time and across institutions (Cook, Heath, Kyrillidou, & Webster, 2002). The LibQUAL+ suite of services has been used in a variety of libraries, including college and university, community college, health science, law, and public--some through various consortia, others as independent participants. The project has also expanded beyond the U.S. and Canada to include participating libraries in Central America, Europe, Asia and Australia.

The LibQUAL+ protocol was developed by ARL in collaboration with Texas A&M University, with support from the U.S. Department of Education Fund for the Improvement of Post-Secondary Education (FIPSE). The growing LibQUAL+ community of participants and its extensive dataset represent a rich resource for improving library services through continuous collection and analysis of data and understanding of trends, implications, and future directions (Heath, Cook, Kyrillidou, & Thompson, 2002; Thompson, Cook, & Kyrillidou, 2005, 2006; Thompson, Kyrillidou, & Cook, 2007a, 2007b, 2008).
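
As a concrete illustration of the kind of data LibQUAL+ yields, the sketch below summarizes the commonly reported gap scores, in which users rate minimum acceptable, desired, and perceived service levels on a nine-point scale for each dimension. The dimension labels follow the affect of service, information control, and library as place grouping mentioned earlier; the numbers and the summary format are hypothetical, not an excerpt from any actual survey results.

# Sketch of LibQUAL+-style gap scores: adequacy = perceived - minimum,
# superiority = perceived - desired. Ratings below are hypothetical means
# on the nine-point scale, grouped by the three dimensions named above.
ratings = {
    "Affect of Service":   {"minimum": 6.2, "desired": 7.8, "perceived": 7.1},
    "Information Control": {"minimum": 6.5, "desired": 8.1, "perceived": 6.9},
    "Library as Place":    {"minimum": 5.8, "desired": 7.4, "perceived": 6.4},
}

for dimension, r in ratings.items():
    adequacy = r["perceived"] - r["minimum"]     # positive: above the minimum expectation
    superiority = r["perceived"] - r["desired"]  # usually negative: distance below the desired level
    print(f"{dimension:<20} adequacy {adequacy:+.1f}   superiority {superiority:+.1f}")

Scores summarized this way make it straightforward to track a library over time and to compare it with peer institutions, which is the benchmarking value the StatsQUAL services emphasize.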

DigiQUAL

DigiQUAL is an online survey for users of digital libraries. The survey--created through collaboration between ARL, Texas A&M University, and the University of Texas--evaluates digital libraries from the user perspective, emphasizing issues related to the reliability and trustworthiness of a website. DigiQUAL adapts the LibQUAL+ protocol for use in the digital library environment.

DigiQUAL was tested as a short online survey containing five questions and a comments box. It systematically collects feedback on the site's service, functionality, and content. Survey questions are randomly drawn from an item bank of more than 180 items that have been developed through extensive qualitative analysis of focus group data and interview scripts with various digital library developers and users. The development of DigiQUAL has been supported by funding from the National Science Foundation's (NSF) National Science Digital Library (NSDL) Program (Cook, Heath, Kyrillidou, Lincoln, Thompson and Webster, 2003; Lincoln, Cook and Kyrillidou, 2004, 2005).

MINES for Libraries

Measuring the Impact of Networked Electronic Services (MINES) is an online transaction-based survey that collects data on the purpose of use of electronic resources and the demographics of users. As libraries implement access to electronic resources through portals, collaborations, and consortium arrangements (Bleiler & Plum, 1999; Plum & Bleiler, 2001), the MINES for Libraries protocol offers a convenient way to collect information from users in an environment where they no longer need to physically enter the library in order to access resources.

MINES for Libraries adapts a long-established methodology to account for the use of information resources in the digital environment. The survey is based on methods developed by Brinley Franklin (University of Connecticut) and Terry Plum (Simmons College) to determine the indirect costs of conducting grant-funded R&D activities, and was adopted as part of ARL's New Measures program (Franklin & Plum, 2002, 2003, 2004, 2006). Canadian libraries have implemented MINES for Libraries[TM] through a contract between ARL and the Ontario Council of University Libraries (OCUL) (Kyrillidou, Olshen, Franklin, & Plum, 2005, 2006). Additional institutions are involved in more extensive campus-wide cost analysis. Continuing efforts are underway to adapt MINES for Libraries to changing technological infrastructures, attempting to create scalable solutions for collecting evaluation data from our virtual users.

ClimateQUAL--Organizational Climate and Diversity Assessment (OCDA)

ClimateQUAL--Organizational Climate and Diversity Assessment (OCDA) is an online survey that is being developed as a joint project of the University of Maryland Libraries, the University of Maryland Industrial/Organizational Psychology Program, and the Association of Research Libraries (Lowry, 2005; Lowry & Hanges, 2008; Hanges, Aiken, & Chen, 2007). The Organizational Climate and Diversity Assessment is a survey tool first administered in the University of Maryland (UM) Libraries in 2000 as a means of collecting information about staff perceptions of how well the Libraries were doing in achieving the principles of diversity. The survey asked questions covering a range of issues including job satisfaction, fair treatment, relationship or task conflict, continuous learning, managerial practices, and ethnic or gender harassment. The survey was repeated in 2004, with the addition of questions focusing on team issues, as a way of understanding whether there had been changes (positive or negative) in the Libraries' climate since 2000 (Baughman, Love, Lowry, & Saponaro, 2007; Williams, 2004).

The UM Libraries have partnered with the UM Industrial/Organizational Psychology Program and the Association of Research Libraries to convert an existing print survey instrument (OCDA) into a Web-based assessment designed to be used by library organizations of any number and type. Phase I of this project involved piloting the existing survey in a Web-based administration among five selected research libraries: Texas A&M University, University of Arizona, University of Connecticut, University of Iowa, and University of Kansas.

Phase II of the project began in January 2008 with ten additional libraries testing the survey based on revisions from Phase I findings and on refining a theory of organizational justice that ties internal climate to the delivery of effective service within a healthy organization. The project partners are engaged in sharing strategies and intervention activities that relate internal climate to improvements in service delivery.

CONCLUSION

Although our understanding of many of the effects and relations among the various assessment tools supported by ARL is still in its nascent stage, the perspective of tying user success to organizational evidence that can be collected both internally and externally, across time and across peer institutions, is a powerful framework. This framework, as it is refined in the years to come, will form the basis for understanding libraries in an environment where books morph into bytes, graphic materials into kilobytes, audio files into megabytes, and video files into petabytes, and where information is transformed into avatars existing in another reality, a second life. Assessment is becoming a critical skill as libraries attempt to have a second life of their own (Wright and White, 2007).

Lancaster often refers to Ranganathan's Five Laws of Library Science--Ranganathan was an Indian librarian who in the 1930s succinctly stated key library values as a series of laws. Lancaster has used Ranganathan's framework to offer perspectives on library evaluation. The fifth of Ranganathan's laws is that the "Library is a Growing Organism." A growing organism cannot be described directly because it changes rapidly. In Ranganathan's era, the library as a growing organism was manifested through the contributions of library users in the form of the books they read and wrote. The symbolic nature of books, and its relation to the size of library collections, has been challenged by the symbolic nature of storing information in a multiplicity of new forms and formats. Our descriptive data and evaluative frameworks--both qualitative and quantitative--are growing in complexity (becoming more organic). Ultimately they will continue to capture reality in indirect and partial ways and, like the approaches of Gerould, Lancaster, and Webster, are reflective of the times as much as of our personal perspectives. Our hope is that, as we learned from people like Gerould, Lancaster, and Webster, others who follow us will gain and learn from our experiences. The value of information increases when it is used throughout one's life cycle. Collaborative, organic, and lifelong are some of the adjectives we can use to describe the latest generation of evaluation tools, their evolutionary development, and the approaches to organizational and personal learning they support.

REFERENCES

Baker, S. L., & Lancaster, F. W. (1991). Measurement and evaluation of library services (2nd ed.). Arlington, VA: Information Resources Press.

Baker, S. L., & Wallace, K. L. (2002). The responsive public library: How to develop and market a winning collection. Englewood, CO: Libraries Unlimited.

Baughman, M. S., Love, J., Lowry, C., & Saponaro, M. (2007). From organizational assessment to organizational change: The University of Maryland Library experience. In Francine DeFranco et al. (Eds.), Proceedings of the Library Assessment Conference: Building effective, sustainable, practical assessment, September 25-27, 2006, Charlottesville, VA. (pp. 319-329). Washington, DC: Association of Research Libraries.

Bertot, J. C., & Jaeger, P. T. (2008). Survey research and libraries: Not necessarily like in the textbooks. Library Quarterly, 78(1), 99-106.

Bleiler, R., & Plum, T. (1999). SPEC Kit 253: Networked information resources. Washington, DC: Association of Research Libraries. Retrieved March 03, 2008, from http://www.arl.org/bm~doc/spec253web.pdf.

Blixrud, J. C. (2003, October/December). Mainstreaming new measures. ARL Bimonthly Report 230/231, 1-8. Retrieved March 3, 2008, from http://www.arl.org/newsltr/230/mainstreaming.html.

Booz, Allen & Hamilton, Inc. (1973). Organization and staffing of the libraries of Columbia University: A case study. Westport, CT: Redgrave Information Resources.

Cook, C. (2001). A mixed-methods approach to the identification and measurement of academic library service quality constructs: LibQUAL+[TM] (PhD diss., Texas A&M University, 2001). Dissertation Abstracts International, 62: 2295A. (University of Microfilms No. AAT3020024 62).

Cook, C. (2006). The importance of the LibQUAL+[R] survey for the Association of Research Libraries and Texas A&M University. In Mersini Moreleli-Cacouris (Ed.), Library assessment conference--Thessaloniki 13-15 June 2005 (pp. 55-74). Washington, DC: Association of Research Libraries. Retrieved March 3, 2008, from http://www.arl.org/bm~doc/lac-greece-2005.pdf.

Cook, C., & Heath, F. (2001). Users' perceptions of library service quality: A LibQUAL+[TM] qualitative study. Library Trends, 49(4), 548-584.

Cook, C., Heath, F., Kyrillidou, M., Lincoln, Y., Thompson, B., & Webster, D. (2003, October). Developing a National Science Digital Library (NSDL) LibQUAL+[TM] protocol: An e-service for assessing the library of the 21st century. A report submitted for the NSDL Evaluation Workshop. Retrieved March 3, 2008, from http://www.libqual.org/documents/admin/NSDL_workshop_web1.pdf.

Cook, C., Heath, F., Kyrillidou, M., & Webster, D. (2002). The forging of consensus: A methodological approach to service quality assessment. In Joan Stein, Martha Kyrillidou, & Denise Davis (Eds.), Proceedings of the 4th Northumbria International Conference on Performance Measurement in Libraries and Information Services: "Meaningful measures for emerging realities" (pp. 93-98). Washington, DC: Association of Research Libraries.

Cook, C., Heath, F., & Thompson, B. (2001). Users' hierarchical perspectives on library service quality: A "LibQUAL+" study. College and Research Libraries, 62, 147-153.

Cook, C., Heath, F., & Thompson, B. (2002). Score norms for improving library service quality: A LibQUAL+[TM] study. portal: Libraries and the Academy, 2(1), 13-26.

Cook, C., Heath, F., & Thompson, B. (2003). "Zones of tolerance" in perceptions of library service quality: A LibQUAL+[TM] study. portal: Libraries and the Academy, 3(1), 113-123.

Cook, C., Heath, F., Thompson, B., & Thompson, R. L. (2001). LibQUAL+[TM]: Service quality assessment in research libraries. IFLA Journal, 4, 264-268.

Cook, C., & Thompson, B. (2001). Psychometric properties of scores from the Web-based LibQUAL+ study of perceptions of library service quality. Library Trends, 49(4), 585-604.

Franklin, B. (2006). Measuring the Impact of Networked Electronic Service (MINES): The North American experience. In Mersini Moreleli-Cacouris (Ed.), Library Assessment Conference--Thessaloniki 13-15 June 2005 (pp. 75-94). Washington, DC: Association of Research Libraries. Retrieved March 3, 2008, from http://www.arl.org/bm~doc/lac-greece-2005.pdf.

Franklin, B. (2007a). The privatization of public university research libraries. portal: Libraries and the Academy, 7, 407-414.

Franklin, B. (2007b). Return on investment. In Francine DeFranco et al. (Eds.), Proceedings of the Library Assessment Conference: Building effective, sustainable, practical assessment, September 25-27, 2006, Charlottesville, VA. (pp. 127-130). Washington, DC: Association of Research Libraries.

Franklin, B., & Plum, T. (2002). Patterns of patron use of networked electronic services at four academic health sciences libraries. Performance Measurement and Metrics, 3(3), 123-133. Retrieved March 3, 2008, from http://www.emeraldinsight.com/1467-8047.htm.

Franklin, B., & Plum, T. (2003). Documenting usage patterns of networked electronic services. ARL Bimonthly Report, 230/231, 20-21. Retrieved March 3, 2008, from http://www.arl.org/newsltr/230/usage.html.

Franklin, B., & Plum, T. (2004). Library usage patterns in the electronic information environment. Information Research: An International Electronic Journal, 9(4), paper 187. Retrieved March 3, 2008, from http://informationr.net/ir/about.html.

Franklin, B., & Plum, T. (2006). Successful Web survey methodologies for Measuring the Impact of Networked Electronic Services (MINES for Libraries). IFLA Journal, 32(1), 28-40. Retrieved March 3, 2008, from http://www.ifla.org/V/iflaj/IFLA-Journal-1-2006.pdf.

Hanges, P. J., Aiken, J., & Chen, X. (2007). Diversity, organizational climate, and organizational culture: The role they play in influencing organizational effectiveness. In Francine DeFranco et al. (Eds.), Proceedings of the Library Assessment Conference: Building effective, sustainable, practical assessment, September 25-27, 2006, Charlottesville, VA. (pp. 359-368). Washington, DC: Association of Research Libraries.

Heath, F., Cook, C., Kyrillidou, M., & Thompson, B. (2002). ARL Index and other validity correlates of LibQUAL+[TM] scores. portal: Libraries and the Academy, 2, 27-42.

Hiller, S., Kyrillidou, M., & Self, J. (2006). Assessment in North American research libraries: A preliminary report card. Performance Measurement and Metrics: The International Journal for Library and Information Services, 7(2), 100-106.

Hiller, S., Kyrillidou, M., & Self, J. (2007a). When the evidence isn't enough: Organizational factors that influence effective and successful library assessment. Presented at the Evidence Based Library and Information Science Conference. Retrieved March 3, 2008, from http://www.libqual.org/documents/admin/HIller2.pdf.

Hiller, S., Kyrillidou, M., & Self, J. (2007b). Keys to effective, sustainable and practical library assessment. In Francine DeFranco et al. (Eds.), Proceedings of the Library Assessment Conference: Building effective, sustainable, practical assessment, September 25-27, 2006, Charlottesville, VA. (pp. 171-176). Washington, DC: Association of Research Libraries.

Hiller, S., & Self, J. (2004). From measurement to management: Using data wisely for planning and decision-making. Library Trends, 53(1), 129-155.

Hoseth, A. E. (2007). We did LibQUAL+[R]--Now what? Practical suggestions for maximizing your survey results. College and Undergraduate Libraries, 4(3), 75-84.

Kyrillidou, M. (1990). User survey of the library of the English Department, Aristotle University, Thessaloniki, Greece. (Master's thesis, Kent State University, 1990).

Kyrillidou, M. (2000). Research library trends: ARL Statistics. Journal of Academic Librarianship, 26(6), 427-436.

Kyrillidou, M. (2002). From input and output measures to quality and outcome measures, or, From the user in the life of the library to the library in the life of the user. Journal of Academic Librarianship, 28(1), 42-46.

Kyrillidou, M. (2005). [Translated citation--original in Greek] The globalization of library assessment and the role of LibQUAL+(tm). From library science to information science: Studies in honor of G. Kakouri (Thessaloniki). Retrieved March 3, 2008, from http://www.libqual.org/documents/admin/libqual_greek2004.pdf.

Kyrillidou, M. (2005/2006). Library assessment as a collaborative enterprise. Resource Sharing & Information Networks, 18(1/2), 73-87.

Kyrillidou, M. (2006a). Library assessment: Why today and not tomorrow? In Mersini Moreleli-Cacouris (Ed.), Library Assessment Conference--Thessaloniki 13-15 June 2005 (pp. 9-27). Washington, DC: Association of Research Libraries. Retrieved March 3, 2008, from http://www.arl.org/bm~doc/lac-greece-2005.pdf.

Kyrillidou, M. (2006b). Service quality: A perceived outcome for libraries. In P. Hernon, R. E. Dungan, & C. Schwartz (Eds.), Revisiting outcomes assessment in higher education (pp. 351-366). Westport, CT: Libraries Unlimited.

Kyrillidou, M. (2008, February). Reshaping ARL statistics to capture the new environment. ARL Bimonthly Report, 256, 9-11.

Kyrillidou, M., & Crowe, W. (1998). In search of new measures. ARL: A Bimonthly Report, no. 197, 8-10.

Kyrillidou, M., & Heath, F. (Eds.). (2001). The new culture of assessment in academic libraries: Measuring library service quality. Library Trends, 49(4).

Kyrillidou, M., & Heath, F. (2004). The starving research library user: Relationships between library institutional characteristics and spring 2002 LibQUAL+[TM] scores. Journal of Library Administration, 40(3/4), 1-11.

Kyrillidou, M., Olshen, T., Franklin, B., & Plum, T. (2005). The story behind the numbers: Measuring the Impact of Networked Electronic Services (MINES) and the assessment of the Ontario Council of University Libraries' Scholars Portal. 6th Northumbria International Conference on Performance Measurement in Libraries and Information Services, Durham, England, Aug. 23. Retrieved March 3, 2008, from http://www.libqual.org/documents/admin/Northumbria_2005MINES_sept20.doc.

Kyrillidou, M., Olshen, T., Franklin, B., & Plum, T. (2006). MINES for Libraries[TM]: Measuring the Impact of Networked Electronic Services and the Ontario Council of University Libraries' Scholar Portal, final report, January 26, 2006. Washington, DC: Association of Research Libraries. Retrieved March 3, 2008, from http://www.libqual.org/documents/admin/FINAL%20REPORT_Jan26mk.pdf.

Kyrillidou, M., Olshen, T., Heath, F., Bonnelly, C., & Cote, J. (2005). La mise en oeuvre inter-culturelle de LibQUAL+MC: Le cas du français [The cross-cultural implementation of LibQUAL+MC: The case of French]. BBF, 50(5), 48-55.

Kyrillidou, M., & Persson, A. C. (2006). The new library user in Sweden: A LibQUAL+[TM] study at Lund University. Performance Measurement and Metrics: The International Journal for Library and Information Services, 7(2), 45-53.

Kyrillidou, M., & Young, M. (2008). ARL Statistics 2005-06. Washington, DC: Association of Research Libraries.

Lakos, A. (2007). Evidence-based library management: The leadership challenge, portal: Libraries and the Academy, 7(4), 431-450.

Lancaster, F. W. (1988). If you want to evaluate your library.... Champaign, IL: University of Illinois Graduate School of Library and Information Science.

Lancaster, F. W. (1993). If you want to evaluate your library.... (2nd ed.). Champaign, IL: University of Illinois Graduate School of Library and Information Science.

Lancaster, F. W., with the assistance of M. J. Joncich. (1977). The measurement and evaluation of library services. Arlington, VA: Information Resources Press.

Lincoln, Y., Cook, C., & Kyrillidou, M. (2004). Evaluating the NSF National Science Digital Library Collections. Paper presented at the Multiple Educational Resources for Learning and Online Technologies (MERLOT) Conference, Costa Mesa, California, August 3-6, 2004. Retrieved March 3, 2008, from http://www.libqual.org/documents/admin/MERLOT%20Paper2_final.pdf.

Lincoln, Y., Cook, C., & Kyrillidou, M. (2005). User perspectives into designs for both physical and digital libraries: New insights on commonalities/similarities and differences from the NDSL digital libraries and LibQUAL+[TM] databases. Presented at the 7th ISKO Conference, Barcelona, Spain, 2005.

Lowry, C. B. (2005). Continuous organizational development--Teamwork, learning, leadership, and measurement, portal: Libraries and the Academy, 5(1), 1-6.

Lowry, C. B., & Hanges, P.J. (2008). What is the healthy organization? Organizational climate and diversity assessment: A research partnership, portal: Libraries and the Academy, 8(1), 1-5.

Molyneux, R. E. (1986). The Gerould Statistics 1907/08-1961/62. Washington, DC: Association of Research Libraries. Retrieved March 3, 2008, from http://fisher.lib.virginia.edu/gerould.

Nisonger, T. E. (1992). Collection evaluation in academic libraries: A literature guide and annotated bibliography. Englewood, CO: Libraries Unlimited.

Nitecki, D., & Franklin, B. (1999). New measures for research libraries. Journal of Academic Librarianship, 25(6), 484-487.

Parasuraman, A. (2002). Foreword. Performance Measurement and Metrics: The International Journal for Library and Information Services, 3(2), 37-39.

Plum, T. (2006). Evaluating the usage of library networked electronic resources. In Mersini Moreleli-Cacouris (Ed.), Library Assessment Conference--Thessaloniki 13-15 June 2005 (pp. 95-116). Washington, DC: Association of Research Libraries. Retrieved March 3, 2008, from http://www.arl.org/bm~doc/lac-greece-2005.pdf.

Plum, T., & Bleiler, R. (2001). SPEC Kit 267: User authentication. Washington, DC: Association of Research Libraries. Retrieved March 3, 2008, from http://www.arl.org/bm~doc/spec267web.pdf.

Stubbs, K. (1980). The ARL Library Index and quantitative relationships in the ARL. Report prepared for the Committee on ARL Statistics, November 1980.

Stubbs, K. (1981). University libraries: Standards and statistics. College and Research Libraries, 42(6), 527-538.

Stubbs, K. (1986a). On the ARL Library Index. In Research libraries: Measurement, management, marketing: Minutes of the 108th Meeting of the Association of Research Libraries, May 1-2, 1986, Minneapolis, Minnesota (pp. 18-20). Washington, DC: Association of Research Libraries.

Stubbs, K. (1986b). Lies, damned lies ... and ARL Statistics? In Research libraries: Measurement, management, marketing: Minutes of the 108th Meeting of the Association of Research Libraries, May 1-2, 1986, Minneapolis, Minnesota (pp. 79-85). Washington, DC: Association of Research Libraries.

Stubbs, K. (1988). Apples and oranges and ARL Statistics. Journal of Academic Librarianship, 14, 231-235.

Stubbs, K. (1993). Access and ARL membership criteria. In Proceedings of the 125th Meeting of the Association of Research Libraries (pp. 117-122).

Stubbs, K. L., & Molyneux, R. E. (1990). Research library statistics 1907-08 through 1987-88. Washington, DC: Association of Research Libraries.

Thompson, B. (2006). Research and practice: Key elements of success for LibQUAL+[R]. In Mersini Moreleli-Cacouris (Ed.), Library Assessment Conference--Thessaloniki 13-15 June 2005 (pp. 41-54). Washington, DC: Association of Research Libraries. Retrieved March 3, 2008, from http://www.arl.org/bm~doc/lac-greece-2005.pdf.

Thompson, B. (2007). Some alternative quantitative library activity descriptions/statistics that supplement the ARL Logarithmic Index. A report submitted to the Association of Research Libraries. Retrieved March 3, 2008, from http://www.arl.org/bm~doc/bruce_3mk.pdf.

Thompson, B., & Cook, C. (2002). Stability of the reliability of LibQUAL+[TM] scores: A "Reliability Generalization" meta-analysis study. Educational and Psychological Measurement, 62, 735-743.

Thompson, B., Cook, C., & Heath, F. (2003a). Two short forms of the LibQUAL+[TM] survey assessing users' perceptions of library service quality. Library Quarterly, 73(4), 453-465.

Thompson, B., Cook, C., & Heath, F. (2003b). Structure of perceptions of service quality in libraries: A LibQUAL+[TM] study. Structural Equation Modeling, 10, 456-464.

Thompson, B., Cook, C., & Kyrillidou, M. (2005). Concurrent validity of LibQUAL+[TM] scores: What do LibQUAL+[TM] scores measure? Journal of Academic Librarianship, 31(6), 517-522.

Thompson, B., Cook, C., & Kyrillidou, M. (2006). Using localized survey items to augment standardized benchmarking measures: A LibQUAL+[TM] study, portal: Libraries and the Academy, 6(2), 219-230.

Thompson, B., Cook, C., & Thompson, R. L. (2002). Reliability and structure of LibQUAL+[TM] scores: Measuring perceived library service quality, portal: Libraries and the Academy, 2(1), 3-12.

Thompson, B., Kyrillidou, M., & Cook, C. (2007a). User library service expectations in health science versus other settings: A LibQUAL+[R] study. Health Information and Libraries Journal, 24 (Supplement 1), 38-45.

Thompson, B., Kyrillidou, M., & Cook, C. (2007b). On-premises library versus Google[TM]-like information gateway usage patterns: A LibQUAL+[R] study, portal: Libraries and the Academy, 7(4), 463-480.

Thompson, B., Kyrillidou, M., & Cook, C. (2008). Library users' service desires: A LibQUAL+[R] study. Library Quarterly, 78(1), 1-18.

Town, S. (2006). Academic library performance, quality and evaluation in the UK and Europe. In Mersini Moreleli-Cacouris (Ed.), Library Assessment Conference--Thessaloniki 13-15 June 2005 (pp. 29-39). Washington, DC: Association of Research Libraries. Retrieved March 3, 2008, from http://www.arl.org/bm~doc/lac-greece-2005.pdf.

Webster, D. (2007). Library assessment: Demonstrating value-added in a time of constrained resources and unique opportunities. In Francine DeFranco et al. (Eds.), Proceedings of the Library Assessment Conference: Building effective, sustainable, practical assessment, September 25-27, 2006, Charlottesville, VA. (pp. 1-4). Washington, DC: Association of Research Libraries.

Weiner, S. (2005). Library quality and impact: Is there a relationship between new measures and traditional measures? Journal of Academic Librarianship, 31(5), 432-437.

Williams, J. (2004). How to know if it's real: Assessing diversity and organizational climate. Paper presented at the 2004 National Diversity in Libraries Conference: Diversity in Libraries: Making It Real.

Wright, S., & White, L. S. (2007). SPEC Kit 303: Library assessment. Washington, DC: Association of Research Libraries.

NOTES

(1.) ARL Machine Readable Data files. Retrieved March 3, 2008, from http://www.arl.org/stats/arlstat/mrstat.html.

(2.) James Thayer Gerould (1872-1951). Retrieved March 3, 2008, from http://www.lib.umn.edu/about/ul-gerould.phtml.

(3.) From 1985-86 through 2005-06, selected annual average percent increases were as follows: 7.5% annual rise in expenditures on serials, 5.3% annual rise in unit cost of serials, 3.1% annual rise in monograph expenditures, and 2.9% annual rise in unit cost of monographs. Over the same period, salary expenditures rose 4.5% annually and the Consumer Price Index rose 3.1% annually.

(4.) The median number of reference transactions in 2005-06 was 67,697, as opposed to 155,336 in 1995-96, based on data received from 79 libraries. The median number of circulation transactions in 2005-06 was 466,403, as opposed to 560,244 in 1995-96, based on data received from 80 libraries.

(5.) The median number of instructional sessions in 2005-06 was 833, as opposed to 719 in 1995-96, based on data received from 84 libraries. The median number of participants in these instruction sessions in 2005-06 was 13,051, as opposed to 8,410 in 1995-96, based on data received from 82 libraries.

Martha Kyrillidou has led the Association of Research Libraries' statistics and measurement activities since 1994. She is responsible for identifying tools for measuring the organizational performance and effectiveness of academic and research libraries and for leading the StatsQUAL program, which includes assessment tools such as LibQUAL+, ClimateQUAL, MINES for Libraries, and DigiQUAL. Previously, Martha worked in the Library Research Center at the Graduate School of Library and Information Science at the University of Illinois at Urbana-Champaign and in the Bureau of Research at the School of Education at Kent State University. Martha has an MLS and an MEd with a specialization in evaluation and measurement from Kent State University and is the 2007 recipient of Kent State University's School of Library and Information Science Alumni of the Year Award. She was awarded a Fulbright Scholarship in 1987-88.

Colleen Cook is dean and director of the Texas A&M University Libraries. Colleen oversaw the administration of the SERVQUAL protocol to the university library community in 1995, 1997, and 1999, which led to her role in developing LibQUAL+. She has published journal articles and book chapters and made numerous presentations in the fields of library science, history, and research methodology. She specializes in qualitative and quantitative methodologies. Colleen currently chairs the ARL Statistics and Assessment Committee and the IFLA Statistics and Evaluation Section Committee.
