Meta-analysis in library and information science: method, history, and recommendations for reporting research.

ABSTRACT

Meta-analysis is a method for summarizing statistical findings across multiple research studies and a useful tool for assessing the level of agreement or disagreement surrounding a given research question. The ability to perform meta-analysis depends on the consistency of measures and the amount of data shared in published research. Guidelines establishing minimum standards for reporting research may improve the quality of writing in published studies. Inconsistent reporting of findings across studies, failure to provide enough detail on method and instrumentation to facilitate replication, and the multiplicity of operational definitions or measures for the same concept all pose difficulties for any form of research synthesis. This article presents a methodological explanation of meta-analysis, a literature review describing the application of meta-analysis in library and information science, and guidelines for reporting quantitative research that would enable subsequent researchers to perform meta-analysis.

INTRODUCTION

Every scholarly journal provides highly precise guidelines to its authors regarding the length of articles, the formatting of manuscripts, and the style of citations and footnotes. While authors may meet these guidelines with varying degrees of success, at least all parties involved in the scientific communication process recognize that a standard has been established. Curiously, few scholarly journals provide any guidelines regarding standards for the reporting of research in terms of the descriptive elements of a dataset that should be shared, the statistics that should be presented for a given method of analysis, and whether or not a copy of the instrument should be included. One reason for this omission in the field of library and information science (LIS) may be the variety of disciplinary and methodological approaches used by researchers. To impose rules for the reporting of research might curtail the creative freedom of authors in presenting their work. However, this rich variety of quantitative and qualitative methods and disciplinary orientations argues all the more for such guidelines to be established. For example, whereas physics or economics may have more rigid rules for publishing research that are well understood by researchers in those disciplines, LIS encompasses a much broader array of research methods whose expectations are harder to articulate explicitly. How does a researcher specializing in information retrieval, working with a database of 10,000 records and hundreds of queries, know how to evaluate a piece of research on information behavior based on twenty in-depth interviews? How does a researcher studying information services who reviews thousands of virtual reference transactions understand the validity of a philosophical investigation in classification theory? Such confusion may grow worse when LIS researchers examine the work of their colleagues in computer science, management, law, health informatics, or technical communications, whose research questions may be similar to our own.

A guide to the minimum standards for reporting research may serve to help nonspecialists (as well as students) better understand what to expect when reading about a study employing a method with which they are unfamiliar. A second and perhaps more important benefit might be to improve the quality of writing in published research. Does the article provide enough detail so that the study could be replicated? Does the article then provide enough data so that results from a subsequent study could be compared to findings from the original study? Without replication, research in LIS advances haltingly, and validation of findings is difficult to achieve. The development of commonly accepted definitions and indicators for important concepts proceeds slowly. How do we measure information anxiety, collection strength, or user satisfaction? In the absence of a predominant method of observation, researchers often develop their own operational definitions for each new study. Even when discussing relatively concrete concepts, such as the number of volumes in a collection, different sources use different measures (compare the Association of Research Libraries [ARL] statistics to the counting guidelines issued by various state libraries), and members of the ARL debate what it means to "own" volumes placed in a regional repository (ARL Committee on Statistics, 1997).

Inconsistent reporting of findings across studies, failure to provide enough detail on method and instrumentation to facilitate replication, and the multiplicity of operational definitions or measures for the same concept all pose difficulties for any form of meta-analysis. Meta-analysis is a form of research synthesis, and the terms are used interchangeably in fields that rely heavily on quantitative methods. Meta-analysis is a body of techniques that enables researchers to draw conclusions based on the findings of previous studies and present them in a useful and compact fashion (Matt & Cook, 1994; Hunter & Schmidt, 1990). The benefit of meta-analysis is that it enables researchers to obtain a greater understanding of the nature of the association between outcome and independent variables by comparing values of effect size gathered from a large body of research. The ability to summarize findings across multiple situations and discover consistent trends (or, in some cases, inconsistent trends) is a critical component of scientific research.

The lack of common definitions and research replication may be explained by two factors. In terms of the number of researchers, the number of Ph.D. graduates, and the amount of available research funding, LIS is clearly a much "smaller" field than the sciences and other social sciences. Also, the field has a growing number of new scholars, as many graduate schools expanded their doctoral programs from 1995 to 2005 in response to a growing awareness of the looming shortage of new faculty. Original research and the introduction of new methods enable younger faculty to build a stronger case for tenure (ironically, the author's own interest in meta-analysis is just such an example of this behavior). Nonetheless, maturity of a research area cannot be achieved without consensus building among scholars, repetition of studies or experiments to validate findings, and research articles or books that represent what Boyer (1990) defines as the scholarship of synthesis. Meta-analysis is a useful methodology for assessing the level of agreement or disagreement surrounding a given research question, and the growth in the number of meta-analytic studies in the literature is itself an indicator of increasing maturity in a given research area.

This article begins with a brief methodological explanation of meta-analysis and refers the reader to further sources for information on how to perform this type of study. This is followed by a literature review explaining the application of meta-analysis in library and information science or closely related fields. In conclusion, the author presents a set of guidelines for reporting quantitative research that would enable subsequent researchers to perform meta-analysis (and also increase the likelihood of having one's own research included in such subsequent study).

META-ANALYSIS: NUTS AND BOLTS

Bivariate analysis involves examination of the extent to which one variable may have an influence on another variable, often described as the ability of one variable to predict (but not necessarily cause) the value of the other. Correlation and cross-tabulation are two common forms of bivariate analysis. Effect size is a measure of how much change in the dependent variable can be predicted by the independent variable. A correlation coefficient is a common form of estimating effect size. The overall process is relatively straightforward and easy to understand. In summary, meta-analysis is a method of testing whether findings from multiple studies involving bivariate analysis are homogeneous or heterogeneous, or in other words, do they agree or disagree in terms of the direction of association and effect size? If the findings are homogeneous, proponents of meta-analysis then argue that it is possible to calculate a truer estimate of the effect size utilizing the data from two or more studies. The meta-analyst is not averaging the findings but rather treating data from multiple studies as if they were all part of a single study. Given enough descriptive statistics in the published report, such estimates can be calculated without requiring access to the actual dataset.
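To make these terms concrete, the following minimal sketch (in Python, with invented data not drawn from any study cited here) estimates the effect size of a bivariate relationship with Pearson's r and reports its significance level:

  import numpy as np
  from scipy import stats

  # Hypothetical observations: library budget (millions) and reference accuracy rate.
  budget = np.array([1.2, 0.8, 2.5, 3.1, 1.9, 0.5, 2.2, 2.8, 1.1, 3.4, 0.9, 2.0])
  accuracy = np.array([0.55, 0.48, 0.70, 0.74, 0.62, 0.40, 0.66, 0.71, 0.52, 0.78, 0.50, 0.64])

  # The correlation coefficient serves as the effect size estimate.
  r, p = stats.pearsonr(budget, accuracy)
  print(f"r = {r:.3f}, p = {p:.4f}, n = {len(budget)}")

A meta-analyst who collected such (r, n, p) triples from several comparable studies could then test whether the findings are homogeneous before combining them.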

This last part of the process is where opponents question the validity of the method, suggesting that data can only be properly interpreted within the context of how the observations were initially gathered (Hunter & Schmidt, 1990). However, such arguments supply the means of their own refutation by defining the conditions under which meta-analysis can be considered valid. If subject populations are given the same tests or interventions using identical measures under similar conditions, then one may logically accept that multiple tests will yield a truer representation of a bivariate relationship, just as drawing multiple samples of cards with numbers on them from a hat will yield a truer estimate of the mean of all the numbers in the hat. Therefore, the selection of variables and effect size estimates when planning a meta-analysis is vital in that it will limit the number of studies that can be included.

Rosenthal (1991) outlines a large number of effect size estimates that can be used in meta-analysis. Unfortunately, a number of these estimates are dependent on the scale of the variables in question. Even variables originally based on the same operational definition are sometimes rescaled for the purpose of a given study. To overcome this difficulty, Gene V. Glass (as cited in Hedges & Olkin, 1985) proposed using scale-free estimates of effect size. Popular scale-free estimates include Cohen's d and Glass's delta, but these measures are specifically designed for use in experimental or comparison studies where at least two groups of subjects are involved. Many studies in LIS are descriptive in nature and do not involve the use of control groups.
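The two scale-free estimates named above can be sketched in a few lines, assuming a hypothetical two-group comparison study (all values invented for illustration). Cohen's d divides the mean difference by a pooled standard deviation, while Glass's delta divides it by the control group's standard deviation alone:

  import numpy as np

  treatment = np.array([72, 85, 78, 90, 81, 77, 88, 83])
  control = np.array([65, 70, 62, 74, 68, 71, 66, 69])

  n1, n2 = len(treatment), len(control)
  s1, s2 = treatment.std(ddof=1), control.std(ddof=1)

  # Pooled standard deviation weights each group's variance by its degrees of freedom.
  pooled_sd = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
  cohens_d = (treatment.mean() - control.mean()) / pooled_sd
  glass_delta = (treatment.mean() - control.mean()) / s2  # control-group SD only

  print(f"Cohen's d = {cohens_d:.2f}, Glass's delta = {glass_delta:.2f}")

Because both estimates express the mean difference in standard deviation units, they can be compared across studies that used different measurement scales.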

Effect size estimates that are not scale-free (for example, correlation coefficients) are susceptible to bias. Small sample sizes will cause wide variability in estimates across studies. Also, range restriction in the indicators for the dependent or independent variable may reduce the value of the estimate. For example, a correlation coefficient based on a measure using a four-point Likert scale is likely to be lower than one obtained from a measure using a seven-point scale, because the coarser scale compresses the range of observed values. The best way to avoid criticism when using such estimates is to compare only variables that have been measured on the same scale across studies.
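This attenuation can be illustrated with a quick simulation (an invented example, not data from any study discussed here): two continuous variables are generated with a known correlation of 0.5, and one is then coarsened onto scales with different numbers of points.

  import numpy as np

  rng = np.random.default_rng(0)
  n = 100_000
  x = rng.standard_normal(n)
  y = 0.5 * x + np.sqrt(0.75) * rng.standard_normal(n)  # population r = 0.5

  def coarsen(values, points):
      # Equal-width bins across the observed range yield a points-point scale.
      edges = np.linspace(values.min(), values.max(), points + 1)
      return np.digitize(values, edges[1:-1])

  for points in (7, 4):
      r = np.corrcoef(x, coarsen(y, points))[0, 1]
      print(f"{points}-point scale: observed r = {r:.3f}")
  # The coarser four-point scale yields a smaller observed r than the seven-point scale.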

Such practice may severely limit the number of studies one may include in meta-analysis. For example, Saxton (1997) encountered a number of problems when looking for repeated measures in evaluation studies of reference service performance.
 Out of fifty-nine studies, forty-two use reference accuracy as an
 outcome variable, but of those only twenty measure accuracy on the
 same scale.... Out of those twenty studies, only five reported the
 correlation coefficients between reference accuracy and a multitude
 of independent variables ... [Of these], three studies sample fewer
 than twenty subjects. (p. 274)


The situation did not improve when examining independent variables. Saxton goes on to explain that he identified thirty-eight concepts operationalized in the form of 162 different measures. Of those 162 variables, only ten were repeated in more than one study.

Alternatively, the amount of error resulting from comparing variables of different scales may prove to be small, and each meta-analyst will have to assess the extent of this possible threat to validity. When introducing a method still relatively new to the discipline, researchers are encouraged to adopt a conservative approach until the method is more broadly accepted.

Saxton (1997) described the process for comparing and recalculating effect size estimates across studies as requiring three steps. First, the researcher must test the homogeneity (similarity) of significance levels across studies. If the significance levels for the findings in each respective study are not homogeneous, then the findings from the samples are contradictory. It is then inappropriate to combine the findings since they do not indicate consistent conclusions. Next, the researcher must test for the homogeneity of effect size estimates across studies to determine if it is appropriate to derive a new estimate from them. For example, if for a given pair of variables one study indicates a strong association and another study indicates a weak association, the researcher cannot simply "split the difference" and declare that the combined findings indicate a moderate association. Neither study suggested that the association was moderate; the samples exhibited conflicting characteristics (Hedges & Olkin, 1985). Finally, once homogeneity has been established, the researcher can calculate a new effect size estimate and an associated significance value. Studies that employ larger sample sizes are weighted so as to give them greater emphasis in the calculations (Matt & Cook, 1994).
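One common way to implement these steps for correlation coefficients is the Fisher z approach associated with Rosenthal (1991). The sketch below uses hypothetical (r, n) pairs, not values from any study cited here, and collapses the homogeneity checks into the standard Q statistic:

  import numpy as np
  from scipy import stats

  # Hypothetical correlation coefficients and sample sizes from five studies.
  studies = [(0.42, 120), (0.35, 85), (0.48, 200), (0.39, 60), (0.44, 150)]
  r = np.array([s[0] for s in studies])
  n = np.array([s[1] for s in studies])

  z = np.arctanh(r)   # Fisher z transform of each effect size
  w = n - 3           # inverse-variance weights, since Var(z) = 1 / (n - 3)
  z_bar = np.sum(w * z) / np.sum(w)

  # Steps 1-2: homogeneity test. Q follows a chi-square distribution with
  # k - 1 degrees of freedom when the studies share a common effect size.
  Q = np.sum(w * (z - z_bar) ** 2)
  p_homogeneity = stats.chi2.sf(Q, df=len(studies) - 1)

  # Step 3: combined estimate and its significance, valid only if homogeneity holds.
  se = 1 / np.sqrt(np.sum(w))
  p_combined = 2 * stats.norm.sf(abs(z_bar / se))
  r_combined = np.tanh(z_bar)

  print(f"Q = {Q:.2f} (p = {p_homogeneity:.3f}); "
        f"combined r = {r_combined:.3f} (p = {p_combined:.2e})")

Note how the weights n - 3 give larger studies greater emphasis in the combined estimate, exactly as described above.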

Meta-analytic techniques are controversial because they are susceptible to numerous threats to validity. First, publication bias, as discussed earlier, is one danger encountered by the researcher. Frequently, studies that do not yield significant findings are not reported. Second, range restriction limits the ability to compare results across studies. Third, failure on the part of investigators to note the number of missing cases for each variable contributes to error in meta-analysis since both significance levels and effect size estimates are strongly influenced by the number of subjects being examined. Fourth, lack of reliability in measurement and coding always threatens to invalidate the conclusions for all analyses. Researchers performing meta-analyses must apply strict quality control by excluding any studies that fail to meet methodological standards or appear to sample imprecisely (Matt & Cook, 1994).

Many different sources provide a wealth of technical detail on how to design a meta-analysis and perform the necessary calculations. Within LIS literature, Ankem (2005) offers perhaps the most sophisticated discussion of meta-analysis. She provides an overview of the three dominant methodological approaches to meta-analysis: the Hedges and Olkin approach, which employs scale-free estimates of effect size; the Rosenthal and Rubin approach, which recommends transformation of effect size estimates to standard scores; and the approach of Hunter and colleagues, which attempts to correct for various sources of error in individual studies. This is followed by an illustrative example of a meta-analytic study of factors affecting information needs of cancer patients. An earlier study by Saxton (1997) provides a narrower, simpler example utilizing the Rosenthal and Rubin approach in a meta-analysis of studies of reference service quality. Both Ankem and Saxton cite Rosenthal's (1991) handbook, Meta-analytic Procedures for Social Research, as a useful and relatively accessible technical source for providing guidance on which calculations to use and addressing methodological concerns.

LITERATURE REVIEW

A search in Library and Information Science Abstracts (LISA) reveals that not only is the methodology rarely applied, but that the term itself, meta-analysis, rarely appears. Conducting a search for the terms meta-analysis or metaanalysis in any field yielded references to only 51 journal articles, and a search for the phrase research synthesis yielded only 1 article. Of these 52 articles, only 21 appear in LIS-oriented journals, while the other references are meta-analytic studies in the disciplines of communication, education, or human-computer interaction. While these studies all involve information and technology and may be of interest to LIS researchers, this review will focus on studies that appear in the LIS literature.

Meta-analysis has a long history in medicine, and health science librarians are perhaps the LIS professionals most familiar with the technique. Schell and Rathe (1992) offer the earliest, though brief, mention of the term meta-analysis in LISA, describing the method as a "quantitative procedure for combining results of clinical trials" (p. 219); they further note the important role that librarians will play in helping researchers conduct extensive literature reviews as this method gains in popularity. Over the past ten years, this theme has been echoed by many others discussing the challenges for medical researchers faced with large retrieval sets, the difficulties encountered in conducting exhaustive searches for the purpose of meta-analysis, and the ability of librarians to assist researchers (McKibbon & Dilks, 1993; Smith, Smith, Stullenbarger, & Foote, 1994; Mead & Richards, 1995; Smith, 1996; Timpka, Westergren, Hallberg, & Forsum, 1997; Johnson, McKinin, Sievert, & Reid, 1997; Yamazaki, 1998; Royle & Waugh, 2004; Demiris et al., 2004).

Interest in the method as a means to investigating research problems in LIS began to grow in the early 1990s. Trahan (1993) discussed the feasibility of meta-analysis in LIS and attempted to inform researchers about the potential of this methodology. Harsanyi (1993) suggested that studies of collaborative authorship would be a good topic for meta-analysis because of the complex relationship between collaboration and productivity.

The first published meta-analysis performed by an LIS researcher appeared in 1996. Salang (1996) used Glass's techniques in studying the relationship between user needs and options for retrieving information. However, the study was not published in a widely read journal and is not frequently cited.

The following year, Saxton (1997) performed a meta-analysis of reference service evaluation studies. The primary research question was to determine what factors predicted levels of accuracy in answering questions. Out of fifty-nine studies taking place over a thirty-year period from 1965 to 1995, only seven were eligible for inclusion in the meta-analysis because they reported sufficient descriptive data and used the same measures. Findings indicated that factors such as collection growth, library budget, and hours of operation consistently exhibited a positive moderate association with response accuracy. However, the greater value of this study was to provide a step-by-step demonstration of how to conduct a meta-analysis and discussion of methodological concerns such as publication bias, quality standards, requisite sample size of studies, the need for replication of previous studies, and the need for greater uniformity in reporting research.

To model the desirable practice he was advocating, Saxton (1997) provided sufficient statistical data to enable later researchers to include his work in future analysis. This action was clearly validated four years after publication when a doctoral student, Rafael Merens, at the University of Havana, Cuba, re-analyzed Saxton's work for his dissertation. Merens examined the same seven studies using a different meta-analytic approach to optimize the value of studies with small samples, resulting in alternative estimates of combined effect size (Merens & Morales, 2004).

Hwang and Lin (1999) reported the results of a meta-analysis examining the effect of information load (defined in terms of both information diversity and repetitiveness) on decision quality of managers as reported in bankruptcy prediction experiments. The meta-analysis compared findings from thirty-one experiments reported in eighteen studies but excluded several studies "that did not report requisite data" (p. 215). In conclusion, the researchers noted the success of meta-analysis in clarifying inconsistencies in the research record: "This meta-analysis has found clear evidence of the detrimental effect of information load on decision quality. Results showed that decision quality suffers with an increase in either the diversity or repetitiveness of an information cue set. The findings help to reconcile the inconsistent evidence reported in the bankruptcy prediction literature" (p. 216). Their article ends with a discussion of the implications for both information suppliers and information retrieval.

Wantland et al. (2004) published a complex, large-scale meta-analysis concerning how the medium of an intervention (Web-based vs. non-Web-based) influences the behavior change of an individual with a chronic condition. This study may be the first attempt in the medical library literature to apply meta-analysis to an information research problem rather than a clinical research problem. In preparation, the research team conducted an extensive systematic review of the literature (see McKibbon's article in this issue for more information on systematic reviews). Each study was rigorously reviewed for its suitability for inclusion in the meta-analysis.
 The compliance to standards for the studies is based on five
 criteria: (1) study design; (2) selection and specification of the
 study sample; (3) specification of the illness/condition; (4)
 reproducibility of the study; and (5) outcomes specification and the
 measurement instruments used/validity and reliability of
 documentation of instruments. The sum of the variables result in a
 total score ranging from 0 to 18 ... Only studies with a quality
 documentation score of 12 or greater were retained for the
 meta-analysis. (Wantland et al., 2004, p. 3)


The study used a scale-free estimate of effect size, Hedges' d, to assess the impact of intervention medium on user behavior. The findings demonstrated that Web-based interventions were consistently more effective than other interventions, although the actual effect size varied widely and was not homogeneous across studies.
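The Hedges estimator adjusts the standardized mean difference for small-sample bias. A minimal sketch using the standard correction factor follows; the function name and values are illustrative, not taken from Wantland et al. (2004):

  # Small-sample correction turning Cohen's d into Hedges' (approximately
  # unbiased) estimator: J = 1 - 3 / (4 * df - 1), with df = n1 + n2 - 2.
  def hedges_g(cohens_d: float, n1: int, n2: int) -> float:
      df = n1 + n2 - 2
      correction = 1 - 3 / (4 * df - 1)
      return correction * cohens_d

  print(hedges_g(0.60, n1=25, n2=25))  # slightly below 0.60 for small samples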

Ankem (2005) presents a more thorough, detailed discussion of methodology in her meta-analysis of factors affecting information needs among patients. After discussing the merits of three different statistical approaches to meta-analysis, she notes that the procedure is rarely used in LIS: "The reasons for the lack of use of meta-analysis in LIS may be attributed to the difficulty in accumulating results involving variables related to the same research problem across studies and the lack of appropriately measured variables related to the same research problem across studies so that the results can be combined meaningfully" (p. 165). The results of the meta-analysis based on four studies indicated that the age of individuals has a negative association with their need for information, possibly suggesting that older individuals are more susceptible to information overload than younger individuals, or may intentionally avoid seeking information about their medical condition. One particular strength of her study is the use of studies conducted in fields other than LIS to investigate questions about information behavior. This example suggests that meta-analysis may be a useful vehicle to expand disciplinary knowledge in LIS by building on the research enterprise of "larger" fields (those with more researchers and more grant funding).

On occasion, researchers have used the term meta-analysis when only referring to the idea of aggregating findings across studies rather than actually performing the statistical analyses conventionally associated with the term. Haug (1997) reported on a study that utilized what he described as a meta-analytic procedure. The purpose of the study was to examine physicians' preferences for using different types of information sources to answer questions in their clinical practice. Unfortunately, he encountered the same difficulties in finding suitable studies to consider.
 Comparative analysis of the twelve selected studies was limited by
 their dissimilar research questions, research instruments, and
 reportorial formats ... Unfortunately, the published findings of the
 research described in this paper do not permit rigorous statistical
 meta-analysis. Conventional meta-analysis marshals evidence for or
 against relations among variables common to several studies by
 combining results of significance tests or statistics which measure
 strength of relationship. The twelve investigations analyzed in this
 study neither share a common hypothesis nor test for relations among
 a common set of variables. (p. 225)


Haug settled for aggregating rankings of physicians' preferences since he did not find any study that tested bivariate relationships. While Haug was conscientious in his use of the term, others have been less careful. Olson and Schlegl (2001) describe their investigation of critiques of subject access standards in the classification literature as a "meta-analysis," although the only quantitative evidence they present is the percentage of topics appearing in ninety-three articles.

Despite these individual efforts, meta-analysis has largely been underutilized in LIS. Hjørland (2001) wrote a letter to the Journal of the American Society for Information Science and Technology lamenting that meta-analysis was being neglected by information scientists and arguing that meta-analysis was a valuable research method and also "an expansion of the professions [sic] possibility in relation to what should be our core competence: document searching/information retrieval" (p. 1193). However, as the above review has repeatedly demonstrated, issues of consistency, replication, and adequate reporting must be resolved before meta-analysis can be more widely applied.

RECOMMENDATIONS FOR REPORTING RESEARCH

The ability to conduct a meta-analysis depends upon the consistency with which earlier studies report findings. As discussed at the beginning of this article, it is ironic that stringent rules govern the style of citations and a complex code administers the creation of bibliographic records, yet no commonly recognized standards exist for reporting the results of research in LIS. Saxton (1997) proposed a set of five minimum standards for reporting quantitative research studies that use Pearson's correlation coefficient for bivariate analysis. In response to Ankem's (2005) criticism of this narrow approach to meta-analysis, these standards are amended here as follows to accommodate a broader range of statistics:

1. Include the operational definition of every variable mentioned in the article. In some cases, such as survey research, the simplest way to do this may be to include a copy of the instrument (to save space in the journal, some items such as demographic questions may be omitted, and the instrument may be reformatted).

2. For every variable mentioned in the article, list the mean, minimum, maximum, and standard deviation. These statistics can easily be summarized in a short table in an appendix to the article (a brief sketch of such a table follows this list).

3. List the number of responses for each variable. If the variable has missing cases, list the total number of subjects available for that variable. This data could also be included in the aforementioned table.

4. When describing bivariate relationships, include the precise level of significance (the exact value of p) associated with a given effect size statistic (for example, Pearson's r) rather than a truncated threshold (for example, p < .05). This enables the meta-analyst to calculate more accurately the significance level associated with the newly derived effect size based on multiple studies. The significance threshold is an arbitrary level based on the degree of confidence the researcher seeks in a given study and may vary across studies using the same measures and methods.

5. When bivariate relationships are found to be nonsignificant, list the precise value of p rather than simply noting that the results were not significant. Significance is closely related to sample size, and meta-analysis effectively enlarges the sample by combining findings from multiple studies.

6. Explicitly describe the population and the unit of analysis for each variable within the population (for example, in a study of reference service, Saxton [2002] gathered observations at the library, librarian, and service transaction levels). Findings across studies cannot be compared if they use different units of analysis. Applying group-level observations to individuals is known as the ecological fallacy, and applying individual-level observations to groups is the reductionist fallacy (Schutt, 2004). Such errors introduce intraclass correlation, which masks the true effect size between two variables by confounding group-level and individual-level relationships.
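As a brief sketch of how items 2 through 5 might be satisfied in practice, consider survey responses held in a pandas DataFrame (the column names and data below are hypothetical):

  import numpy as np
  import pandas as pd
  from scipy import stats

  rng = np.random.default_rng(1)
  df = pd.DataFrame({
      "satisfaction": rng.integers(1, 8, size=200).astype(float),  # seven-point scale
      "wait_minutes": rng.exponential(5.0, size=200),
  })
  df.loc[rng.choice(200, size=12, replace=False), "wait_minutes"] = np.nan  # missing cases

  # Items 2-3: n (count excludes missing cases), mean, min, max, and standard
  # deviation for every variable, in a form suitable for an appendix table.
  table = df.agg(["count", "mean", "min", "max", "std"]).T
  print(table.round(2))

  # Items 4-5: report the exact p value alongside the effect size, whether or
  # not the result is statistically significant.
  complete = df.dropna()
  r, p = stats.pearsonr(complete["satisfaction"], complete["wait_minutes"])
  print(f"r = {r:.3f}, n = {len(complete)}, p = {p:.4f}")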

Of course, the primary objective for researchers is to explain the phenomena they are observing and what those phenomena mean in terms of expanding disciplinary knowledge and improving teaching and practice. Few researchers set out with the goal of making meta-analysis easier to perform. However, scientific research is a cumulative process in which advances are made through multiple investigations over time. Investigators who follow the above guidelines will encourage that process and potentially increase the impact of their own work, as exemplified in the relationship between Saxton (1997) and Merens and Morales (2004).

While consistent reporting is the first issue to overcome, the second problem is the lack of consistency in measuring concepts over time. Investigators have not been using the same operational definitions, whether through oversight (lack of awareness of previous studies) or intention (a belief that previous studies used poor measures). Until some consensus is reached on which definitions and indicators are best for the significant concepts in a given problem area, repetition of tests across multiple studies will rarely occur. In terms of quantitative research, this will retard the maturation of the discipline by preventing the accumulation of large datasets and by keeping new researchers from building upon the foundation laid by experienced researchers. It may also discourage new researchers from pursuing quantitative methods as a possible means of investigating the questions that interest them.

As a final thought, the Internet has made it easier to support meta-analysis than at any other time, as scholars no longer view the refereed journal article as the sole means for disseminating information about their research. While journal editors review papers with an eye to cutting "extraneous" material to conserve pages, the World Wide Web makes it possible to share tables of variables, statistics, copies of instruments, and any other information that would be of use to colleagues investigating the same research questions. In some cases, individual researchers may now provide their actual dataset to others (subject to regulations governing the privacy of human subjects). However, scholars also have good reasons to restrict access to their data, primarily to retain control of how the data are used and how findings are interpreted and presented. Likewise, releasing instruments to the public before conducting reliability testing or cross-validation of the different variables may only result in the repeated use of poor measures. Reporting research findings according to the recommendations given above offers a middle road between total access and total restriction: controlled sharing that enables researchers to receive peer feedback, facilitate meta-analysis, and promote research synthesis while maintaining ownership and control of their creative work.

REFERENCES

Ankem, K. (2005). Approaches to meta-analysis: A guide for LIS researchers. Library & Information Science Research, 27(2), 164-176.

ARL Committee on Statistics (1997). Interim guidelines for counting materials stored in library storage centers. Retrieved October 15, 2005, from http://www.arl.org/stats/arlstat/storage.html.

Boyer, E. L. (1990). Scholarship reconsidered: Priorities of the professoriate. Princeton, NJ: Council of Learned Societies.

Demiris, G., Folk, L. C., Mitchell, J. A., Moxley, D. E., Patrick, T. B., & Tao, D. (2004). Evidence-based retrieval in evidence-based medicine. Journal of the Medical Library Association, 92(2), 196-199.

Harsanyi, M. A. (1993). Multiple authors, multiple problems--bibliometrics and the study of scholarly collaboration: A literature review. Library & Information Science Research, 15(4), 325-354.

Haug, J. D. (1997). Physicians' preferences for information sources: A meta-analytic study. Bulletin of the Medical Library Association, 85(3), 223-232.

Hedges, L., & Olkin, I. (1985). Statistical methods for meta-analysis. Orlando, FL: Academic Press.

Hjørland, B. (2001). Why is meta analysis neglected by information scientists? [Letter to the editor]. Journal of the American Society for Information Science & Technology, 52(13), 1193-1194.

Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: Correcting error and bias in research findings. Newbury Park, CA: Sage Publications.

Hwang, M. I., & Lin, J. W. (1999). Information dimension, information overload and decision quality. Journal of Information Science, 25(3), 213-218.

Johnson, E. D., McKinin, E. J., Sievert, M., & Reid, J. C. (1997). An analysis of objective year book citations: Implications for MEDLINE searchers. Bulletin of the Medical Library Association, 85(4), 378-384.

Matt, G. E., & Cook, T. D. (1994). Threats to the validity of research synthesis. In H. Cooper & L. V. Hedges (Eds.), The handbook of research synthesis (pp. 503-520). New York: Russell Sage Foundation.

McKibbon, K. A., & Dilks, C. W. (1993). Panning for applied clinical research gold. Online, 17(4), 105-108.

Mead, T. L., & Richards, D. T. (1995). Librarian participation in meta-analysis projects. Bulletin of the Medical Library Association, 83(4), 461-464.

Merens, R. A., & Morales, M. M. (2004). Los metanálisis: Aproximaciones útiles para su comprensión [Meta-analyses: Useful approaches to their understanding]. La Colaboración Cochrane en Cuba, Parte VII. Retrieved November 1, 2005, from http://eprints.rclis.org/archive/00002680.

Olson, H. A., & Schlegl, R. (2001). Standardization, objectivity, and user focus: A meta-analysis of subject access critiques. Cataloging & Classification Quarterly, 32(2), 61-80.

Rosenthal, R. (1991). Meta-analytic procedures for social research. Beverly Hills, CA: Sage Publications.

Royle, P., & Waugh, N. (2004). Should systematic reviews include searches for published errata? Health Information & Libraries Journal, 21(1), 14-20.

Salang, M. M. C. (1996). A meta-analysis of studies on user information needs and their relationship to information retrieval. Journal of Philippine Librarianship, 18(2), 36-56.

Saxton, M. L. (1997). Reference service evaluation and meta-analysis: Findings and methodological issues. Library Quarterly, 67(3), 267-289.

Saxton, M. L. (2002). Understanding reference transactions: Transforming an art into a science. San Diego, CA: Academic Press.

Schell, C. L., & Rathe, R. J. (1992). Meta-analysis: A tool for medical and scientific discoveries. Bulletin of the Medical Library Association, 80(3), 219-222.

Schutt, R. K. (2004). Investigating the social world: The process and practice of research. Thousand Oaks, CA: Pine Forge Press.

Smith, J. T. (1996). Meta-analysis: The librarian as a member of an interdisciplinary team. Library Trends, 45(2), 265-279.

Smith, J. T., Smith, M. C., Stullenbarger, E., & Foote, A. (1994). Integrative review and meta-analysis: An application. Medical Reference Services Quarterly, 13(1), 57-72.

Timpka, T., Westergren, V., Hallberg, N., & Forsum, U. (1997). Study of situation-dependent clinical cognition: A meta-analysis and preliminary method. Methods of Information in Medicine, 36(1), 44-50.

Trahan, E. (1993). Applying meta-analysis to library and information science research. Library Quarterly, 63(1), 73-91.

Wantland, D. J., Portilla, C. J., Holzemer, W. L., Slaughter, R., & McGhee, E. M. (2004). The effectiveness of Web-based vs. non-Web-based interventions: A meta-analysis of behavioral change outcomes. Journal of Medical Internet Research, 6(4). Retrieved May 30, 2006, from http://www.jmir.org/2004/4/e40/.

Yamazaki, S. (1998). MEDLINE searching for evidence-based medicine. Journal of the Japan Medical Library Association, 45(4), 402-405.

Matthew L. Saxton is an assistant professor at the Information School of the University of Washington. His primary research interests are question-answering behavior, intermediation, and the evaluation of information services. His book Understanding Reference Transactions explains the use of hierarchical linear modeling to investigate the factors that contribute to success in responding to reference questions in public libraries. He received the 1997 Methodology Award from the Association for Library and Information Science Education for his application of meta-analysis to evaluation studies of reference service. He earned his Ph.D. from the University of California, Los Angeles.