
Cost, statistics, measures, and standards for digital reference services: a preliminary view. (Academic Libraries).

ABSTRACT

THIS PAPER REPORTS ON WORK FROM TWO STUDIES IN PROGRESS related to assessing digital library reference services and developing standards that support such services. The paper suggests that two types of standards--utilization and technical--should be considered together in the costing, statistics, and measures for digital reference services. The digital reference community has the opportunity to embed quality standards and assessment data into software and infrastructure by linking utilization and technical standards early in the evolution of digital reference markets. Such an approach would greatly enhance the collection and analysis of a range of cost data related to digital reference service.

1. INTRODUCTION

This paper outlines the current status of standards (both utilization and technical) in digital reference, with special attention given to issues of cost: both the costs incurred by adopting standards and the means of assessing cost in digital reference. The article represents preliminary results of a study to develop methods to assess the quality of digital reference services and ongoing work to develop technical standards in digital reference.

The Information Institute of Syracuse at Syracuse University and the Information Use Management and Policy Institute at Florida State University conducted the first study. This study is developing digital reference measures; testing and refining these measures and quality standards to describe digital reference services; and producing a guidebook that describes how to collect and report data for these measures and standards.

This study began at the October 2000 Virtual Reference Desk (VRD) Conference in Seattle, where the growing digital reference community identified assessment of quality as a top research priority. As patrons demand more services online, and as reference librarians seek to better meet patrons' information needs through the Internet, it has become essential to determine common standards of quality. Library administrators need strong, grounded metrics and commonly understood data to support digital reference services, assess the success of these services, determine resource allocation to services, and determine a means for constant improvement of digital reference within their institutions. Project information about this effort can be found at http://quartz.syr.edu/quality/.

The second source for this article is ongoing work to develop technical standards in digital reference. This work is represented by the development of the Question Interchange Profile (Lankes, 2002) and the newly initiated work of NISO (National Information Standards Organization) Standards Committee AZ (NISO, 2002). This work responds to an increasing call by vendors and technical service staff for clear guidelines to ensure interoperability. Project information about this and related standards efforts can be found at http://www.niso.org/.

While, at first, utilization and technical standards may be seen as separate, this paper argues that both, tightly coupled, are essential for the advancement of digital reference and for truly capturing a holistic picture of cost. While utilization standards may determine formulae and approaches for determining the total cost of digital reference, technical standards both impact this cost (through tool development or software acquisition) and provide a means of distributing and/or recouping these costs. For example, within a consortium, a per-question cost can be set. Properly developed technical standards can "carry" this cost with the question (for example, by providing a field with a dollar figure), greatly easing accounting and enabling the creation of a "question economy" where consortia members can bid on questions or automatically route them to the most cost-effective answer source. These concepts are expanded below.
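For illustration, the sketch below (in Python, using the standard library's xml.etree.ElementTree) shows how a question object might carry a per-question cost as a simple field. The element names are hypothetical and do not represent the actual QuIP or NISO schema; the point is only that a cost figure can travel with the question itself.

```python
import xml.etree.ElementTree as ET

# Hypothetical element names; the actual QuIP/NISO schema may differ.
question = ET.Element("question")
ET.SubElement(question, "text").text = "What is the population of Syracuse, NY?"
ET.SubElement(question, "origin").text = "Example Public Library"
ET.SubElement(question, "cost", currency="USD").text = "12.50"

xml_record = ET.tostring(question, encoding="unicode")
print(xml_record)

# A receiving consortium member can read the cost carried with the question.
parsed = ET.fromstring(xml_record)
print(float(parsed.findtext("cost")))  # 12.5
```

In such a scheme, the accounting travels with the question itself rather than being reconstructed later from local records.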

2. A DIGITAL REFERENCE PRIMER

For the purposes of this paper, digital reference is defined as human-intermediated assistance offered to users through the Internet. Today, libraries are offering a range of human-intermediated reference services over the Internet at an increasing rate. Research by Joe Janes and his colleagues (Janes, 2000) found that 45 percent of academic libraries and 12.8 percent of public libraries offer some type of digital reference service. These services are often ad hoc and experimental. Janes and McClure (1999) found that, for quick factual questions, librarians using only the Web answered a sample of questions as well as did those using only print sources. Many libraries conduct digital reference service in addition to existing obligations with little sense of the scale of such work or its strategic importance to the library.

This paper does not provide a comprehensive review or analysis of digital reference and digital reference services. Gross, McClure, and Lankes (2002) have published elsewhere a detailed analysis of digital reference literature. Despite this and other such reviews, there is limited knowledge about costs, assessment, and standards related to digital reference services. As the studies discussed in this paper are completed, one product will be a manual to assist librarians in assessing digital reference services on a range of criteria and measures (McClure, et al., 2002).

3. DEVELOPING A TYPOLOGY OF STANDARDS IN DIGITAL REFERENCE

The authors divide digital reference standards into two types:

1. Utilization: Those standards that deal with the use and delivery of digital reference services, specifically to determine whether a digital reference service is succeeding. These can include a mix of qualitative and quantitative metrics as well as more abstract statements on best practices or objectives for a service.

2. Technical: The use of hard tools (software, hardware, protocols, and other standards enforced by computers with little or no interpretive room) and soft tools (primarily metadata and organizational schema where aspects of human description are controlled, but still open to interpretation).

These two high-level categories have been further refined in two separate efforts. It should be noted, however, that both of these efforts are ongoing, and these refinements may change.

3.1. REFINING UTILIZATION STANDARDS

The first effort to refine the digital reference typology is the "Assessing Quality in Digital Reference Services" study conducted by the Information Institute of Syracuse at Syracuse University and the Information Use Management and Policy Institute at Florida State University (Lankes, et al., 2001). This study is supported by OCLC, the Digital Library Federation, and a wide range of library organizations (see Table 1).

This study has compiled a preliminary set of metrics, statistics, and standards for assessing digital reference from a review of the literature and a series of site visits (http://quartz.syr.edu/quality/VRDSiteVisitsummary.pdf). These measures were reviewed by the study's advisory committee (made up of the primary sponsors and the sustaining members), and revised. As of this writing the revised measures are being field tested in a variety of library types (federal, academic, and public).

3.1.1. QUALITY STANDARDS

Utilization standards can be first refined into performance measures and quality standards. A quality standard is a specific statement of the desired or expected level of performance that should be provided regarding a service or some aspect of that service. A quality standard can be measured to determine the degree to which that standard is in fact being met (Kasowitz, et al., 2000). A quality standard defines the level of performance that an organization is willing to accept for a particular service or activity. Quality standards are important because they:

* Encourage library staff and administration to discuss and agree upon what constitutes "quality" for a specific service;

* Provide clear guidance as to the expected quality that a particular service or activity should offer;

* Educate staff--and especially new staff--as to the expected quality of service that should be provided;

* Recognize that there may be differing acceptable levels of quality for different aspects of digital reference services; and

* Provide a basis for rewards and demonstrating/reporting accountability.

Quality standards are not performance measures. A performance measure might be "correct answer fill rate" whereas the quality standard might be "the digital reference service will have a correct answer fill rate of 65 percent."
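The distinction can be made concrete with a small example: the performance measure is a computed value, while the quality standard is the target against which that value is judged. The figures in the Python sketch below are hypothetical and purely illustrative.

```python
# Hypothetical figures for illustration only.
questions_received = 200
questions_answered_correctly = 138

# Performance measure: correct answer fill rate.
correct_answer_fill_rate = questions_answered_correctly / questions_received

# Quality standard: the level of performance the service is willing to accept,
# e.g., "a correct answer fill rate of 65 percent."
QUALITY_STANDARD = 0.65

print(f"Correct answer fill rate: {correct_answer_fill_rate:.0%}")
print("Standard met" if correct_answer_fill_rate >= QUALITY_STANDARD else "Standard not met")
```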

The assessment study specifically states that there is no "correct" standard for any specific digital reference service. The correct standard will rather depend on the goals and objectives of the library, the amount of resources that can be committed to reaching a particular standard, local situations affecting digital reference services, and the relative importance of one quality standard versus another. For one library, an awareness level of digital reference services of 30 percent among faculty (for example) may be acceptable; for another, the standard might be 60 percent.

While not specifically spelling out all possible quality standards, the study proposes six Quality Standards that appear to span specific circumstances and domains:

1. Courtesy: The behavior of the library or institution's staff.

2. Accuracy: The "correctness" of answers provided by a digital reference staff.

3. Satisfaction: Users' determination of their success in interacting with the digital reference service.

4. Repeat Users: The percentage of users that reuse a service after first encounters.

5. Awareness: The population user group's knowledge that the service exists.

6. Cost: The cost per digital reference transaction.

It is assumed that each of these standards will have a strong qualitative component. However, to fully define these standards, the study created five types of performance measures that can be used to better determine success in meeting quality standards:

1. Descriptive Statistics and Measures: Statistics and measures to determine the scale and scope of a digital reference service.

2. Log Analysis: Statistics that can be derived from analysis of logs generated by Web and digital reference software packages.

3. User Satisfaction Measures: Statistics and metrics seeking to understand the user view of a digital reference service.

4. Cost: Measures that gauge outlay of financial resources to run an ongoing digital reference effort.

5. Staff Time Expended: Measures to determine staff time dedicated to digital reference.

Each of these classes of measures is then further refined into specific metrics and statistics as seen in Table 2.

Further refinement within these measures is also possible. For example, the assessment study has associated data collection methods to each measure, but such refinement is too specific for the discussion in this paper. Nonetheless, special attention should be given to the cost measures and standards.

3.1.2. COST MEASURES AND STANDARDS

The economics of reference is an area that has long been neglected. Indeed, the economics of information in general has only recently received significant attention (Kingma, 2001). Assigning costs to reference service is a complicated task but one that must be faced in order to realistically assess the true costs of doing business, to make assessments about the most efficient ways to provide services, and to determine how to share the costs of this service in setting up and participating in collaborative service models.

Understanding what it costs to provide reference, the various funding models (and cost-recovery models) under which reference can be provided, and the effect that supporting digital reference has on other library expenditures is important for planning, monitoring, and evaluating these services, as well as for performing cost-benefit analysis and measuring the cost-effectiveness of service.

Determining the cost of a digital reference service involves many of the same complexities as determining the cost of traditional reference. There have been a number of attempts to determine the means of costing reference service, and there have been several estimates of the average cost of reference. These estimates have varied widely due to the assumptions under which costs are identified, defined, and operationalized. Staff and resources are often utilized by more than one service area within the library, and it is difficult to prorate costs for any one area. Some resources are utilized both within the library and externally (as in the case of remote access to databases), so it is difficult to ascribe the cost to any one department.

Some of the most costly resources for the provision of digital reference are subscriptions and licenses to online resources and databases. These resources are also available for use by other departments and by the patron, both within the library and at home. Also, different vendors have been varyingly successful or interested in providing meaningful statistics and data about database use. In many cases it is impossible to determine what percentage of costs can be allocated to the digital reference service (especially when authentication is by IP address only). Staff often perform the duties of traditional and digital reference at the same time, and keeping track of the time allocated to each can be problematic. It is important, however, to attempt to determine costs.
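Where vendors do supply usage statistics, one simple (and admittedly rough) way to prorate a shared license cost is by digital reference's share of recorded use. The Python sketch below uses hypothetical figures only; any real allocation depends on what usage data are actually available.

```python
# Hypothetical annual figures; real allocation depends on the usage data vendors supply.
database_license_cost = 30_000.00    # shared subscription cost
total_database_sessions = 12_000     # all recorded uses: in-library, remote, other departments
digital_reference_sessions = 1_800   # sessions launched from digital reference transactions

# Prorate the shared cost by digital reference's share of recorded use.
share = digital_reference_sessions / total_database_sessions
allocated_cost = database_license_cost * share
print(f"Allocated to digital reference: ${allocated_cost:,.2f} ({share:.0%} of license cost)")
```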

Across all sites visited in the "Assessing Quality in Digital Reference Services" study, cost data were collected only minimally and reported only in general terms. Several sites indicated that they expect to be held more accountable for specific cost data in the future, but are unlikely to collect these data unless required. There is some fear that the findings of cost data might not support the continued provision of the service.

The cost for each digital reference transaction is difficult to determine. Two libraries report that cost for outsourcing digital (chat) reference through Library Systems and Services Inc. (LSSI) runs in the $12.00 to $15.00 range per question. How the cost of this service was computed by LSSI is unknown. Digital reference at these sites is not considered separately from traditional reference for accounting purposes, and even where handled separately the costs are not calculated. The per-question cost for traditional reference services, in fact, is also unknown.

There is a major gap in the literature on digital reference services in the area of economic models and accounting. This may follow largely from the fact that the economic and costing models have not been fully developed in the traditional reference realm. This means that effective measures of cost need to be developed for all types of reference so that each can be assessed and compared in terms of efficiency and benefit.

The literature of traditional reference services offers some approaches to the problem of determining what reference service costs. For instance, the Input/Output Model (Sayre & Thielen, 1989) focuses on measuring inputs and service utilization in small libraries. Functional Cost Analysis (Abels, Kantor, & Saracevic, 1996), a process explored in a variety of reference service environments, seeks to define the various costs of providing a service and then allocates these costs to that service. Hayes (1996) reports on the intricacies of assessing the costs related to the provision of electronic resources in support of reference within the framework of the Library Costing Model (LCM), but does not solve the problem for digital reference services.

Murfin and Bunge (1989) offer four methods for assessing cost effectiveness in academic libraries. They are:

* Method One: Formula for Determining the Full Cost of the Reference Transaction.

* Method Two: A Reference Service Cost Effectiveness Index Based on Success, Helpfulness, Accessibility and Time/Cost.

* Method Three: Cost (time taken) per Successful Question.

* Method Four: A Cost-Benefit Formula (pp. 17-35).

These formulas were tested in academic libraries in a project funded by the Council on Library Resources for research purposes, and were used in the Wisconsin-Ohio Reference Evaluation Program. There may be value in using this work as a starting point for addressing the current issue of how to evaluate digital reference services from a cost standpoint.

Cost issues also exist in the development and practical management of collaborative arrangements for providing digital reference services. As collaboration models form, the question of how to share the costs of providing 24/7 digital reference services, in what will inevitably be a global forum, has already come to light as an issue that will soon need resolution. In this regard, the Library of Congress's Collaborative Digital Reference Service (CDRS) project (http://www.loc.gov/rr/digiref/about.html) will be interesting to watch as it learns how to share the cost of service among its members and finds its place in the information market.

3.1.2.1. OTHER CONSIDERATIONS OF COST IN DIGITAL REFERENCE

While many of the issues of costing in digital reference parallel traditional reference, some factors change. For example, digital reference lends itself to greater and more precise analysis. One of the primary differences between traditional reference and digital reference is the creation of a document trail. That is to say, while in face-to-face reference recording the transaction (including the resources used) is difficult at best, in digital reference an auditable record of the whole reference transaction is available for analysis. Be it a transcript from a real-time session or a collection of e-mails, an organization can precisely identify the number of questions asked, the number of responses given to each question, the nature of those questions and responses (their subject or their depth, for example), and the resources used in those transactions (Web pages pointed to, digital assets transferred, etc.). In many cases the output of a digital reference transaction is a knowledge base or FAQ archive that can be either reused in the reference process or made available to patrons as a new information resource.

3.1.2.2. COUPLING UTILIZATION STANDARDS TO TECHNICAL STANDARDS

It is at this point that the link between utilization and technical standards becomes important. The more the data needed to determine utilization standards can be provided by (or encoded within) technical standards, the easier the task administrators and evaluators will have. For example, if technical standards record the cost of individual reference interactions, then digital reference software can easily report the total cost of service with little or no data gathering on the part of the organization. Similarly, if the technical standards can identify the sources used (in an XML file, or simply by listing URLs), then the evaluator is saved long, tedious hours of trolling through transcripts and/or e-mail records. The point of tightly coupling (1) utilization and technical standards is to have software and systems aid evaluation as part of the reference process. Technical standards allow the opportunity of building assessment into the reference process itself, rather than treating it as a separate, often costly activity.
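Under such a coupling, much of the evaluation reduces to simple aggregation over standard-encoded transaction records. The Python sketch below assumes each record carries a cost figure and a list of sources; the field names are illustrative and do not belong to any actual standard.

```python
from collections import Counter

# Hypothetical transaction records as software might export them under a
# cost-aware interchange standard; field names are illustrative only.
transactions = [
    {"id": "q1", "cost": 12.00, "sources": ["http://example.gov/census", "http://example.org/faq"]},
    {"id": "q2", "cost": 15.00, "sources": ["http://example.gov/census"]},
    {"id": "q3", "cost": 9.50,  "sources": []},
]

total_cost = sum(t["cost"] for t in transactions)
cost_per_question = total_cost / len(transactions)
source_use = Counter(url for t in transactions for url in t["sources"])

print(f"Total cost of service: ${total_cost:.2f}")
print(f"Cost per question:     ${cost_per_question:.2f}")
print("Most-used sources:", source_use.most_common(2))
```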

3.2. REFINING TECHNICAL STANDARDS

This article will not go into great depth on technical standards; a deeper discussion of digital reference standards can be found in other writings. Rather, this article will discuss the methods of coupling utilization and technical standards, as well as the impacts technical standardization may have on libraries. It is sufficient for the reader to understand that current development activities in digital reference standards fall into three types:

* Question Interchange: The means of encoding reference questions and answers into computational formats and transferring questions from one domain (2) to another.

* Profile: Descriptive information about an organization or individual used to establish a digital reference network, which may exist for a single interaction or as a long-standing relationship. Elements of a profile may include contact information, cost of providing answers, capacity (the number of reference questions that can be answered), etc.

* Knowledgebase: The means of encoding questions and answers into a reusable archive.

Of particular interest here are Question Interchange and Profile because they directly relate to the active reference process. Technical standards can encode cost data, institutionalize actions within reference (allowing an audit process to determine which institution did what in the reference process), and track resources used in responding to an inquiry. With these data generated as part of the reference activity (thus minimizing the burden of data collection), software can better report on the full range of resources used, and therefore the true cost of a reference process. Also, by creating an easily packaged format for reference inquiries, a market approach can be brought to bear on the entire reference process (see "Towards a Question Economy" below).

3.2.1. POTENTIAL IMPACTS OF TECHNICAL STANDARDS ON THE COST OF DIGITAL REFERENCE

One hope of most standards efforts is to minimize cost. By creating clear technical requirements and ensuring interoperability in software, it is hoped that market forces will drive vendors to lower prices, or at least to maximize the features and functionality delivered for a given software cost. The concept is that a library can shop among a range of competing software vendors, selecting based on local needs without sacrificing interoperability with other libraries and partners. This is the model in today's OPAC market. Wide-scale adoption of the MARC standard means that libraries are assured that catalog information can be used in any system; it is simply a matter of features and cost. A vendor, understanding that its competition can handle all the basic functions and standards, must differentiate itself on either cost or features.

This is, of course, the long-term view. The digital reference software market is still in its infancy. It currently consists of real-time vendors (e.g., LSSI), freeware (such as AOL Instant Messenger), e-mail solutions, and home-grown solutions (i.e., software created by libraries). Since this software market has developed in the absence of technical standards, any introduction and adoption of standards will force new costs in software development and in migrating internal data representations to a new standard. In some cases this cost may be minimal (if an application already stores digital reference data in a structured database, it may be as simple as renaming fields or creating new output mappings), but in others it may be quite substantial (for example, migrating from low-cost or free e-mail options to systems created specifically for digital reference). While current technical standards are being crafted with the diversity of technical sophistication in mind, a minimal threshold will need to be established (most likely in the form of transferring XML files back and forth).
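In the simplest case, migration may amount to little more than an output mapping from local field names onto the names a standard expects. The sketch below is purely illustrative; both sets of field names are hypothetical.

```python
# A local application's internal record (hypothetical field names).
local_record = {
    "q_text": "What are the map room's opening hours?",
    "patron_email": "user@example.edu",
    "status": "answered",
}

# An output mapping from local field names to hypothetical standard element names.
FIELD_MAP = {
    "q_text": "question-text",
    "patron_email": "requester-contact",
    "status": "question-status",
}

standard_record = {FIELD_MAP[key]: value for key, value in local_record.items()}
print(standard_record)
```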

3.3. TOWARDS A QUESTION ECONOMY

There are larger implications in the creation of a standard way of encoding and distributing questions. In essence these technical standards create an object. That object has certain attributes (e.g., a metadata representation) that can be separated from the original software/system/process that created it. This object-oriented approach allows the creation of a question/answer marketplace in which question objects could be exchanged and bid upon.

For example, an organization could outsource a question, paying some fee to a third-party "answering organization." This third-party organization could subsist solely by answering questions, without a direct user interface (as in the LSSI example mentioned previously). Organizations could use the technical standards as a foundation for cooperative support and reference services (such as the Library of Congress's CDRS). Originating services (those that receive questions from patrons) could include minimum requirements for answering questions and a maximum amount they are willing to pay for each answer. Third-party answering agencies could "bid" on the question, allowing a sort of supply-and-demand economy to develop. This bidding could be either automated or human-controlled. Money need not be the only resource exchanged; a barter economy (e.g., "I'll answer one of yours if you answer one of mine") could develop. Such a system of either resource swapping or fee exchange is essential to the development of cooperative reference services.
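How an originating service might select among bids is sketched below, assuming it states a maximum price and a maximum turnaround time for each question. All agencies, prices, and requirements are hypothetical.

```python
# Hypothetical bids from third-party answering agencies on one question object.
bids = [
    {"agency": "Agency A", "price": 14.00, "turnaround_hours": 24},
    {"agency": "Agency B", "price": 11.50, "turnaround_hours": 48},
    {"agency": "Agency C", "price": 9.00,  "turnaround_hours": 72},
]

# The originating service's requirements for this question.
max_price = 15.00
max_turnaround_hours = 48

# Keep only bids that meet the requirements, then take the cheapest.
qualifying = [b for b in bids if b["price"] <= max_price and b["turnaround_hours"] <= max_turnaround_hours]
winner = min(qualifying, key=lambda b: b["price"]) if qualifying else None
print(winner)  # Agency B wins on price among bids meeting the requirements
```

The same selection logic could just as easily weigh bartered answers or electronic IOUs instead of dollar amounts.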

In today's public and research libraries there is a debate over how to support digital reference efforts. How does a public library in New York get reimbursed when it answers a question from California? What is the library's incentive to offer such services? This becomes particularly problematic when it is nearly impossible to determine a question's point of origin. With the use of technical standards, electronic IOUs or actual dollars can provide an incentive to these libraries not only to answer the occasional question, but to seek out questions.

3.4. THE FULL DIGITAL REFERENCE STANDARDS TYPOLOGY AND CONCLUSION

Table 3 offers a preliminary digital reference standards typology.

This typology can serve as a starting point for further refinement and development. The point of this article and exercise is not to close the book on digital reference standards, but rather to promote a more holistic approach to developing standards. All too often technical standards are formed with little concern for assessment, and utilization standards (or measures, or best practices) either ignore the underlying technical standards (often because they are already in place) or do not seek to inform technical standards development. This is very evident in the development of the Web and the HyperText Transfer Protocol (HTTP). Web analysis and assessments would be greatly aided if more user information were passed between computers for logging purposes. One could imagine, for example, being able to determine the number of repeat users rather than making statistical assumptions about repeat use from IP addresses, or determining the length of time users spend searching in databases. Instead, log analysis is forced into uncomfortable statistical guessing, and Web applications must often resort to work-arounds like cookies and login screens. What may have been a desire for technical ease, or even privacy, has instead led to a plethora of incomplete solutions that often threaten both technical ease and privacy.

The digital reference community has the opportunity to embed quality standards and assessment data into software and infrastructure. By linking technical and utilization standards early in the evolution of digital reference markets (software markets, question markets), libraries can advance the field (through technology) and prove they are advancing it at the same time (through utilization standards). Moreover, the resulting improvement in collecting a range of cost data will help libraries better plan for and deploy digital reference services.
Table 1. Members of the Quality Study.

Sustaining Members

* Multnomah County Library (the first public library to join the study)

* The Library of Congress

* Strozier Library, Florida State University

* Cleveland Public Library

* Pennsylvania Office of Commonwealth Libraries, Bureau of Library Development

* State Library of Florida, Division of Library and Information Services

* Reference and User Services Association

Contributing Members

* McKeldin Library, University of Maryland

* Mid York Library System

* Bristol University, University Library

* Liverpool John Moores University

* University Library, Syracuse University

* Library of Michigan

Table 2. Utilization Standards by Class.

Descriptive:
* Number of digital reference questions received
* Number of digital reference responses
* Number of digital reference answers
* Total reference activity
* Percentage of digital reference questions to total reference questions
* Digital reference correct answer fill rate
* Digital reference completion rate
* Number of unanswered digital reference questions
* Type of digital reference questions received
* Total number of referrals
* Saturation rate
* Sources used per question
* Repeat users (return rate)

Log:
* Number of digital reference sessions
* Usage of digital reference service by day of the week
* Usage of digital reference service by time of day
* User's browser
* User's platform

User:
* Awareness of service
* Accessibility of service
* Expectations for service
* Other sources user tried
* Reasons for use
* Reasons for non-use
* Satisfaction with staff
* Delivery mode satisfaction
* Impact of service on user
* Additional services that need to be offered
* User demographic data

Cost:
* Cost of digital reference service
* Cost of digital reference service as a percent of total reference budget
* Cost of digital reference service as a percent of total library or organizational budget

Staff:
* Percent of staff time spent overseeing technology
* Percent of staff time spent assisting users with technology

Table 3. Preliminary Typology of Digital Reference Standards.

Utilization
* Quality Standards: Courtesy; Accuracy; Satisfaction; Repeat Users; Awareness; Cost
* Performance Measures: Descriptive; Log; User; Cost; Staff (see Table 2 for further refinements)

Technical (not refined within the scope of this article)
* Question Interchange
* Profile
* Knowledgebase


NOTES

(1.) Coupling refers to the consideration of one type of standard or system by another. Coupling is actually a continuum from tightly coupled to loosely coupled. Tightly coupled systems (standards) are ones with a great deal of knowledge about each other, allowing for a large degree of interaction and customization. Loosely coupled systems are often unaware of each other, and allow only minimal interoperability. Z39.50 is a tightly coupled protocol, for example, versus the wide-open nature of Web searches that utilize no underlying structures (such as MARC).

(2.) A domain is a deliberately broad term that can be used to describe a single organization, a consortium, industry, or some other differentiation. So a question may be sent from a library to another library, or from the library world to the business world.

REFERENCES

Abels, E. G., Kantor, P. B., & Saracevic, T. (1996). Studying the cost and value of library and information services: Applying functional cost analysis to the library in transition. Journal of the American Society for Information Science, 47(3), 217-227.

Bertot, J. C., McClure, C. R., & Ryan, J. (2001). Statistics and performance measures for public library networked services. Chicago: American Library Association.

Gross, M., McClure, C. R., & Lankes, R. D. (2002). Assessing quality in digital reference services: An overview of the key literature in digital reference. In Lankes, R. D., McClure, C. R., Gross, M., & Pomerantz, J. (Eds.), Implementing Digital Reference Services: Setting Standards and Making it Real. New York: Neal Schuman.

Hayes, R. M. (1996). Cost of electronic reference resources and LCM: The library costing model. Journal of the American Society for Information Science, 47(3), 228-234.

Janes, J. (2000). Current research in digital reference. VRD Proceedings. Retrieved June 3, 2002, from http://www.vrd.org/conferences/VRD2000/proceedings/janes-intro.html.

Janes, J., & McClure, C. R. (1999). The web as a reference tool: Comparisons with traditional sources. Public Libraries, 38(January-February), 30-39.

Kasowitz, A., Bennett, B. A., & Lankes, R. D. (2000). Quality standards for digital reference consortia. Reference & User Services Quarterly, 39(4), 355-363.

Kingma, B. R. (2001). The economics of information: A guide to economic and cost-benefit analysis for information professionals, 2nd edition. Littleton, CO: Libraries Unlimited.

Lankes, R. D., McClure, C. R., & Gross, M. (2001a). Assessing quality in digital reference services. Syracuse, NY: Information Institute of Syracuse at Syracuse University and the Information Use Management and Policy Institute at Florida State University. Retrieved from http://quartz.syr.edu/quality/.

Lankes, R. D. (2001b). Emerging standards for digital reference: The Question Interchange Profile. In Lankes, R. D., McClure, C. R., Gross, M., & Pomerantz, J. (Eds.), Implementing digital reference services: Setting standards and making it real. New York: Neal Schuman.

McClure, C. R., & Bertot, J. C. (2001). Evaluating networked information services: Techniques, policies and issues. Medford, NJ: Information Today.

McClure, C. R., Lankes, R. D., Gross, M., & Choltco-Devlin, B. (2002). Statistics, measures, and quality standards for assessing digital reference library services: Guidelines and procedures. Syracuse, NY: Information Institute (in press).

Murfin, M., & Bunge, C. (1989). A cost effectiveness formula for reference service in academic libraries. Washington, DC: Council on Library Resources.

NISO (2002). NISO Workshop on Networked Reference Services. [Online] http://www.niso.org/news/events_workshops/netref.html.

Sayre, E., & Thielen, L. (1989). Cost accounting: A model for the small public library. The Bottom Line, 3, 15-19.

Shim, W., McClure, C. R., Bertot, J. C., Dagli, A., & Leahy, E. (2001). Measures and statistics for research library networked services: Procedures and issues. ARL E-Metrics Phase II Report. Washington, DC: Association of Research Libraries.

White, M. (2001). Digital reference services: Framework for analysis and evaluation. Library and Information Science Research, 23(3), 211-231.

R. DAVID LANKES is Director of the Information Institute of Syracuse and Assistant Professor in Syracuse University's School of Information Studies. Lankes' research is in education information and digital reference services. He has authored, coauthored or edited four books, and written numerous book chapters and journal articles about the Internet and digital reference. He also was a visiting scholar to Harvard's Graduate School of Education. Additional information about Lankes can be found on his homepage at http://www.askeric.org/~rdlankes.

MELISSA GROSS is Assistant Professor at Florida State University. Her area of specialty is information seeking behavior and the major focus of her research is on imposed and shared information seeking. She has a special interest in children as a user group. In this area she has published several articles and coauthored HIV/AIDS Information for Children: A Guide to Issues and Resources with Virginia Walter, published by H. W. Wilson Company.

CHARLES R. MCCLURE is Francis Eppes Professor of Information Studies at the School of Information Studies, Florida State University. He also serves as the Director of the Information Use Management and Policy Institute at Florida State University. He was the Coprincipal Investigator with Wonsik Jeff Shim and John Carlo Bertot on a project funded by selected members of the Association of Research Libraries in 2001-2002 to develop statistics and performance measures for academic research libraries. More recently he was the coauthor with John Carlo Bertot of Evaluating Networked Information Services: Techniques, Policy, and Issues (Information Today, 2002). He is also the coeditor with R. David Lankes and Melissa Gross of Implementing Digital Reference Services: Setting Standards and Making it Real (Neal Schuman, 2002). Additional information about McClure can be found on his homepage at http://slis-two.lis.fsu.edu/~cmcclure/.

R. David Lankes, 621 Skytop Road, Syracuse, NY 13244

Melissa Gross, 1112 Ivanhoe Road, Tallahassee, FL 32312

Charles R. McClure, Francis Eppes Professor and Director, Information Use Management and Policy Institute, School of Information Studies, Louis Shores Building, Rm. 226, Florida State University, Tallahassee, Florida 32306-2100
