The map of science collaboratory
Keywords: collaboratories; meta-analysis; research synthesis; unity of science; map of science
Now that we have completed mapping the human genome, we should begin to map its corresponding social unit--the set of instructions that generates and constitutes our knowledge. Mapping our knowledge requires a similarly large-scale collaborative effort, but one greater and more significant.
Why should we undertake such a task? Why isn't generating knowledge, as we currently do through disciplinary, and more recently through interdisciplinary teams, enough? Why is mapping, that is, taking stock and systematically charting our knowledge, something worth doing? Why undertake such a formidable task unless there's a tremendous pay-off or a real need?
Mapping our knowledge has a pressing urgency. The problems that we face in the twenty-first century, from population growth to regional conflicts, to economic inequities, to environmental degradation and global warming, require a better-informed public policy. We need, as we head into this century, to get beyond the seeming inconclusiveness of scientific research. We cannot afford to wait a decade or two, as we have on the issue of global warming, before taking public action simply because there are dissenting scientific voices.
Public policy notoriously makes little use of relevant scientific findings. Part of this has to do with the narrow reach of most scientific research, and the messy holistic domain of public policy. But certainly, part of the reason that public policy can ignore scientific findings until the bitter end is that on any one issue research yields varying or conflicting findings.
We also need to accelerate our knowledge acquisition in these areas. The map of science project can help us do this, by showing clearly where adequate consensus has been reached, and where more research is needed. Beyond taking stock of existing knowledge, the mapping of science should be a logical and generative task, linking the relationships that result from disciplinary research, and offering the promise of revealing linkages between different disciplines.
There is a powerful reason for undertaking this task, intrinsic to science itself. So far our focus in the pursuit of science has been on the research project. And this was the right focus for us. It has yielded great insights and tremendous advances in technology. And until recently, that is all we could do. The unit of scientific inquiry is, surely, the experiment, or the research project. But the unit of knowledge is not the finding(s) of individual experiments but the findings from a series of experiments over time dealing with the same or similar phenomena. The true unit of knowledge is a meta-analysis (or research synthesis) of such experiments or projects. A meta-analysis provides the empirical support and the error measures for a hypothesized relationship. Meta-analytic techniques are relatively recent innovations that have been developed to synthesize the results of experimental and statistical research studies. In meta-analysis, individual research studies become the subjects for statistical analysis. In this way, the individual results of a set of research studies are analyzed to yield the cumulative effect of their aggregation, and moderator and mediator variables (Glass, 1976; Light and Pillemer, 1984; Cooper, 1984; Hedges and Olkin, 1985; Rosenthal, 1984; Wolf, 1986; Cooper and Hedges, 1994).
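The core aggregation step can be made concrete. The following is a minimal sketch of the most common pooling method, fixed-effect inverse-variance weighting, applied to hypothetical effect sizes; the numbers are invented for illustration and the function name is my own.

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance weighted pooling of study effect sizes.

    Each study's effect is weighted by the inverse of its variance,
    so more precise studies contribute more to the pooled estimate.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled effect
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% confidence interval
    return pooled, se, ci

# Hypothetical standardized mean differences from five studies
effects = [0.30, 0.45, 0.10, 0.52, 0.25]
variances = [0.04, 0.09, 0.02, 0.16, 0.05]

pooled, se, ci = fixed_effect_meta(effects, variances)
print(f"pooled effect = {pooled:.3f}, SE = {se:.3f}, "
      f"95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

The pooled estimate and its confidence interval are exactly the kind of hypothesized relationship, with error measures attached, that the map of science would treat as its unit. Real meta-analyses would go further, testing heterogeneity and modeling moderator variables.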
The mapping of individual research findings, since these seem to be growing exponentially, might be a practically impossible task, which explains why no one has proposed this. But hypothesized relationships are crucial to science. They are either the building-blocks of a theoretical science, or the essence of a descriptive one. Relationships and their relative strength, gauged through meta-analyses, would be the building-blocks for the map of science. Since meta-analyses basically chunk individual research projects, and thereby reduce the complexity by orders of magnitude, using meta-analyses, instead of individual research findings, as the units of the map of science makes such a mapping more feasible, but still a formidable task.
Computers and the Internet have opened up the possibility of creating such a map of knowledge. I see this project as the twenty-first-century contribution to our attempts as a species to record and make accessible cumulative knowledge. Libraries have been the chief means for collecting and making accessible written records, including scientific knowledge. The Museum and Library at Alexandria were said to contain hundreds of thousands of volumes. But the Library as an institution makes no attempt to synthesize knowledge. At best library innovations have focused on making retrieval of information easier by categorizing subject matter. Although encyclopedias seem to date back to second-century Rome, it was Diderot and d'Alembert's Encyclopedia project that sought to synthesize, record and make accessible the knowledge of the known world in a set of volumes. Encyclopedias are a major contribution to knowledge, in that entries attempt to provide an expert consensus, albeit introductory, to a subject. They, however, are not focused on relationships/hypotheses. The map of science would be a truly twenty-first-century innovation, the work of a collaboratory, in which various universities and research centers around the world would be linked electronically, sharing databases of scientific research, conducting meta-analyses of such research, forging consensus on hypotheses, and determining future research needs. Part of the work of this collaboratory would be to develop further the techniques for meta-analysis or synthesis in areas of research where these techniques are under-developed, for example, in political science, sociology or history, where the lone case study is still a standard. These case studies often contain richly textured insights on relationships. What methods can we develop to synthesize findings across single case study research? What standards can we develop for future case study research?
If meta-analyses are taken to be the units of science, and since meta-analyses can be expressed as scientific abstracts, is the concept of mapping science more than a rhetorical device? Yes, because I believe that the semantic content and the relative strength of relations can be visually portrayed and mapped, and that such a mapping will generate meaningful patterns and linkages (although the mappings are likely to be three-dimensional).
WHY THIS PROJECT REQUIRES A COLLABORATORY
The concept of the collaboratory was developed to facilitate collaborative research among researchers without regard to geography. Collaboratories are `integrated, tool-oriented computing and communication systems' to support scientific collaboration among researchers in various universities or research institutions (Cerf et al., 1993). Collaboratory research could be within a discipline or multidisciplinary. The objective of a collaboratory is to make data, instruments or analysis securely accessible to various laboratories, and thus quicken the pace of collaboration and research, and speed up the time from discovery to application (Kouzes et al., 1996). The National Science Foundation (NSF) funded half a dozen projects to explore collaborative technology beginning in 1990, such as the Worm Community System (Schatz, 1990-1994, $1.3 million, NSF Award #9015407), which supports researchers studying the C. elegans nematode, or the development of a unified data management and interaction substrate for enabling distributed computer collaboratories (Parashar, 2000-2004, $260,000, NSF Award #9984357). (1) Where data is constantly generated, as in the genome project, or where instruments involved are rare and prohibitively expensive, such as those shared in the Upper Atmospheric Research Collaboratory (UARC) or in its current phase, the Space Physics and Aeronomy Research Collaboratory (SPARC) (Atkins, 1998-2001, $2.4 million, NSF Award #9873025), or the Collaboratory for Microscopic Digital Anatomy (Ellisman, 1994-2000, $2.27 million, NSF Award #9318180), collaboratories provide a unique vehicle for research. Where these conditions do not hold, whatever happens in a collaboratory can occur through more traditional venues. For the map of science, however, collaboratories are essential. No one university can carry out the research to accomplish this project; indeed, eventually all research institutions should be involved.
Initially, the collaboratories will need to experiment with archiving and making accessible across platforms the scientific research that will provide the raw material for the meta-analyses. Standard protocols will have to be devised for archiving, as well as for conducting meta-analyses, which will require a consensus of experts.
Collaboratories are driven by needs for shared data, instruments or analysis, and often generate innovations in computer hardware and software. The map of science will generate and make accessible shared data and analysis, and will likely require innovations in computer hardware and software. I conceive it also as having a distinctive task, to generate innovative meta-analytic tools. I see the collaboratory as the major way to speed up the generation and testing of research syntheses. As such, it will play a vital role in the confirmation of scientific theories and the generation of knowledge-rich models. These collaboratories would need to be long term because the tasks of maintaining and updating meta-analyses as new research is generated are ongoing, and also because of the initial capital investments in technology and skilled personnel.
Visualization will also be an important part of the work of the collaboratory. The power of visualizations in portraying the results of meta-analysis is wonderfully captured in, for example, the Cochrane Collaboration's logo, which portrays the results of an actual meta-analysis. (2) But the visualization I have in mind for the map of science involves a logical and notational challenge; i.e., how do we represent hypotheses that are semantically connected to a plurality of other relations, within and outside disciplines in ways that make the logical and causal interactions as transparent as they can be? Traditional logics are not adequate to the task, but the work in conceptual graphs may be promising in this respect (Sowa, 1984; Ellis et al., 1995a, 1995b). The map of science requires multidimensional diagrammatic logic (3) to convey temporality and the multiplicity of scientific layers. Having already popularized negative and positive feedback processes, and championed more holistic approaches, systems thinking can also be enlisted in developing this logic.
The map of science will involve a wide array of collaborative activities, including peer-to-peer collaboration, where individual researchers can collaborate on a meta-analysis without geographical/institutional barriers, or in original research that responds to gaps in knowledge or reliability of research (Kouzes et al., 1996; Ross-Flanigan, 1998). Mentor-student collaborations will also be part of the collaboratory's function, where senior researchers will collaborate with junior researchers or students on meta-analysis or primary research. Students will be a valuable resource in the Map of Science Collaboratory; they could be trained to maintain the archives and to perform various meta-analytic tasks under supervision. This could take place within a center or across centers as communications technologies improve. The collaboratory could also support interdisciplinary research where researchers use meta-analyses to develop models or theories. Collaboratories can also include producer-consumer collaboration, where results of meta-analyses are used as inputs to public policy/planning. Producer-consumer collaboration is a distinctive objective of the Map of Science Collaboratory, and thus active engagement of public policy/planning researchers is envisaged from the start of this effort. This collaboration is meant to fundamentally link public policy and planning to scientific research.
The collaboratory will require shared facilities for work within each institution, including powerful supercomputers for archiving and processing vast amounts of information. Facilities that support electronic collaboration among centers will also be required. These include audio/video conferencing, including network-based software, Internet2 Qbone; (4) shared computer displays; shared electronic notebooks; online instruments such as analytic and modeling tools for multiple users; shared whiteboards; WWW browser synchronization (Kouzes et al., 1996; Schur et al., 1998).
The distinctive foci of this collaboratory (synthesizing knowledge, innovating techniques for doing so, and linking with the policy sciences) are likely to result in more successful collaborations than current collaboratories that aim at producing primary research. In disciplinary research, individual or even inter-team competition is the rule, and issues of research ownership and authoring have not been fully addressed (Ross-Flanigan, 1998; Kiernan, 1999). These issues delay wider implementation of collaborations in primary research. Synthesizing relies on existing research and will be conducted using agreed-upon techniques. It is likely, although this itself is a topic for research, that collaboration on this essential task of science will not be fraught with the same distrust as the work of other collaboratories.
Most recently, a new project funded by NSF, the Science of Collaboratories, is monitoring and studying the social and technical factors associated with the success and failure of collaboratories. (5) The project aims to develop a collaboratory knowledge base, as well as principles and frameworks for developing successful ones. Guidelines for the use of the technology of collaboratories as well as technological assessment, including assessment of emerging testbeds, are also objectives of the project. The findings from this new research project, focused on the collaboratory medium itself, could help guide the Map of Science Collaboratory.
In sum, the map of science project is the ideal project for a collaboratory. The project can only be conducted through collaboratories, and the very task of synthesis is intrinsically collaborative.
MORE ON META-ANALYSIS OR RESEARCH SYNTHESIS
Although Karl Pearson and others at the turn of the twentieth century can be credited with the invention of statistical methods for synthesizing results of different experiments, meta-analysis can be said to have a modern birthdate, (6) April 1976, when G. V. Glass gave his landmark address to the American Educational Research Association, `Primary, Secondary, and Meta-analysis of Research'. Glass set out a process for synthesizing research studies that used statistical methods, including the use of probabilities and effect sizes, for aggregating results. Even earlier, in 1972, the British epidemiologist Archie Cochrane had made a strong argument for systematic reviews of randomized controlled trials in medicine as a basis for informed public health policy and decision-making. Since then, various researchers (Rosenthal, 1984; Peto, 1987; Cooper and Hedges, 1994; Wolf, 1986; Lau et al., 1992; Hunter and Schmidt, 1990; Rubin, 1990) have contributed to the development of meta-analysis as an active and productive sub-field in statistics, and as a vital scientific methodology. Meta-analysis has had critics and criticisms over the years (Eysenck, 1978; Shapiro, 1994; Feinstein, 1995; Lelorier et al., 1997). Like most methods, it is not problem-free. Several biases and sources of error have been identified and debated in the literature, most prominent of which is publication bias. (7) However, Walker (1999), after recognizing the current problems with meta-analysis, answered his own question as to whether meta-analysis is really needed: `Yes. Because there is no serious alternative for taming medical publication ...'. Despite the problems it shares with most methods, meta-analysis has become a well-established and accepted methodology.
(8) In addition to the handbooks cited above, various books and journal articles expand meta-analytic methods (the database Current Index to Statistics contained 325 journal articles, 19 books and 41 proceedings under the entry for meta-analysis in July 2000), and address theoretical and practical problems. Various computer software packages have been developed to aid in the conduct of meta-analysis, such as ABMA (Lawrence Erlbaum), DSTAT (Lawrence Erlbaum), EPIMETA (EPO at USCDC) and MetaView (Cochrane Collaboration).
The actual practice of meta-analysis has mushroomed since the late 1970s. Beginning with no publications in scientific journals in the late 1970s, currently there are hundreds of entries annually, and thousands in the databanks. See the Appendix for a summary of current entries in major databanks under meta-analysis. (9) Since there is much duplication of entries among the set of databanks, it is not appropriate to total up the entries in all the databanks. But the number of entries in the various databanks provides a good indicator of the influence of meta-analysis in the different fields. By far, the databanks of the health sciences have the greatest number of entries, with the biological sciences and psychology coming second. Educational research comes in third, not surprisingly, since Glass delivered his 1976 paper as the incoming president of the American Educational Research Association. Publications in sociology, economics and ecology are beginning to increase rapidly. The journal Ecology published a 1999 issue devoted to the use and adaptation of methods of meta-analysis in ecology.
Various branches of the federal government have employed meta-analysis. Until 1996, (10) a unit in USGAO, the Program Evaluation and Methodology Division, specialized in meta-analyses, and conducted a number of influential meta-analyses, including one on the effects of the WIC program (Chelimsky, Committee on Agriculture, Nutrition, and Forestry, US Senate, unpublished, 1984). The Agency for Healthcare Research and Quality (AHRQ) funds a dozen Evidence-Based Practice Centers (EPCs) affiliated with medical schools and research centers to conduct analyses of existing literature and provide evidence reports. It bases its advisory notices on the findings of these meta-analyses. AHRQ has also funded innovative tools in meta-analysis, such as the Real-time Meta-Analysis System (RTMAS), funded in 1992 for $1.2 million to conduct, update and tailor meta-analyses to patient characteristics. (11) NSF has funded a number of meta-analyses and research on meta-analysis methods since 1989. A search of the NSF awards databank on 8 July 2000 yielded 22 awards under meta-analysis for a total amount of about $2.2 million. This modest amount has to be interpreted in the light of the funding of meta-analyses by other branches of government, especially NIH.
The Cochrane Collaboration
Since the health sciences conduct randomized, controlled trials (RCTs), which are ideally suited for statistical analysis, meta-analysis has had its greatest effect on them. A major reason that meta-analysis has gained such a strong foothold in the health sciences is surely the exemplary work of an international collaboratory, the Cochrane Collaboration (CC). The Cochrane Collaboration was inspired by Archie Cochrane's influential book and work, which argued that in order for people to make informed decisions, they need to have access to `reliable reviews of the available evidence' (Cochrane, 1972). The idea for a center to generate and provide such reviews was initially supported by the UK's National Health Service, Research and Development Programme. The Center was founded in 1992. Within a year, it had become an international collaboration.
The major function of the CC is to generate and update systematic reviews of all relevant RCTs. In order to carry out this task, it also develops and maintains an international register of completed and ongoing RCTs. The CC, operating as a non-profit organization, carries out its work through a set of international centers, three of them in the United States (the New England, San Francisco, and San Antonio centers), which help to establish collaborative review groups, coordinate the international register of completed and ongoing RCTs, develop materials for and conduct training of review groups, and conduct further methodological research. At the heart of the CC are the review and methods groups. These groups are made up of volunteers, including clinicians, researchers, statisticians and consumers. There are currently close to 50 groups that take responsibility for health problems or areas (which include, for example, Drugs and Alcohol, Heart, Eyes and Vision, Infectious Diseases), develop a work plan for conducting the reviews, and then proceed to conduct them. In its short life-span, the CC has managed to operationalize and keep true to its nine principles of collaboration, which include minimizing bias, keeping up to date, improving the quality of reviews and ensuring access. It has so far generated 795 reviews and 738 protocols of reviews in process, and maintains and provides access to 263,775 RCTs. (12) It operates 16 methods groups organized around topics that include Statistical Methods, Information Retrieval, Reporting Bias, and Training and Support. These as well as other material are available through its electronic library, which is widely accessible. (13)
The meta-analyses conducted by the collaboration are models for the health sciences. The higher quality of the reviews produced by the CC was confirmed inadvertently by a 1997 study (Egger et al.) which tested for bias in meta-analyses by measuring the asymmetry of funnel plots generated by the comparison of the results of meta-analyses to paired, single large trials. The study compared two types of meta-analyses--those generated by the CC and those published in medical journals--to paired single, large trials, the gold standard of experimental medicine. While 38% of the journal meta-analyses included in the study showed evidence of significant asymmetry or bias, only 13% of the Cochrane reviews displayed such asymmetry. This corroborates the potential superiority of the work of collaboratories in developing research syntheses. A collaboratory such as the CC has built-in mechanisms for learning. Ongoing substantive review and methods groups, center assistance through research on methods, and systematic, uniform, high-quality training ensure higher-quality reviews.
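The asymmetry test used in the Egger et al. study can be sketched in a few lines. The standard version regresses each study's standard normal deviate (effect divided by its standard error) on its precision (one over the standard error); an intercept far from zero signals funnel-plot asymmetry. The code below is a minimal sketch of that regression with invented study data chosen to show the classic small-study pattern; a real application would also compute a significance test for the intercept.

```python
def egger_intercept(effects, ses):
    """Egger's regression test, simplified: regress each study's
    standard normal deviate (effect / SE) on its precision (1 / SE).
    An intercept far from zero suggests funnel-plot asymmetry,
    often interpreted as possible publication bias."""
    y = [e / s for e, s in zip(effects, ses)]  # standardized effects
    x = [1.0 / s for s in ses]                 # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx  # intercept of the regression line

# Hypothetical studies: small studies (large SE) report larger effects,
# the classic asymmetric-funnel pattern.
effects = [0.50, 0.45, 0.30, 0.22, 0.20]
ses = [0.30, 0.25, 0.15, 0.08, 0.05]
print(f"Egger intercept = {egger_intercept(effects, ses):.2f}")
```

With symmetric data the intercept hovers near zero; here the inflated small-study effects pull it well above zero, flagging the asymmetry the test is designed to detect.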
A most promising development is the establishment of the Campbell Group, the social sciences and policy sibling collaboratory of the Cochrane Group. This new international collaboration began officially in 2000 and focuses on studies of the effectiveness of social interventions in the areas of education, social welfare and criminal justice. The methods groups already established include a quasi-experimental design (14) group, an implementation processes group, as well as a statistics group. Its major tasks are to maintain, prepare and disseminate systematic reviews of the effects of social and behavioral interventions in various areas of public policy, as well as to develop methods and protocols for systematic reviews. Since this group is focused on social interventions where randomized controlled trials are not the norm, its methodological research and protocols for conducting research syntheses on quasi-experimental designs are likely to break new ground. (15)
META-ANALYSIS IN THE MAP OF SCIENCE
Meta-analysis has become an established part of the statistical tool-box, although its future is wide open, and prominent researchers have identified future challenges. Glass, for example, points to the need for meta-analysis to go beyond main effects, and specify `special relationships between moderator variable, mediator variable, and outcomes that account for different results of studies of the same subject' (Hunt, 1997). Rubin envisions that in the future meta-analysis will go beyond the confines of the studies it synthesizes to extrapolate a model `treatment-effect response surface' (Wachter and Straf, 1990), but the idea has not been developed further. The map of science will ensure that methods in research synthesis evolve and improve.
However the methodology develops in the future, what is clear today is that if meta-analyses are to be optimally generative, they need to be linked to each other and to existing theory. Take the CC, which is an admirable institution. The standards for conducting the meta-analysis are excellent. The meta-analyses serve as models for conducting such studies. They trace the implications of the review for practice and research. But they stand on their own, piecemeal. They lack systematic linkages among themselves, and, most important, they lack connections with theory or larger hypotheses. As such, meta-analyses could be helpful for guiding specific policy, but not for shedding light on larger slices of science. In addition, the search capabilities of the CC are basic. As of 2000, there was much room for improvement; e.g., there was no electronic linkage with externally generated meta-analyses. Even the example of the CC makes clear the need for unified national or international archives with improved connectivity, both theoretical and electronic.
Just as in primary research, meta-analyses are subject to bias and other errors. They also require evaluation. The five-step meta-analytic process that Glass (1976) and Cooper (1984) characterize ends with publication. But meta-analyses are too important to rely on some individual(s) having a similar interest and time to review a meta-analysis adequately. Although Cochrane reviews are subject to more internal substantive and methodological criticism than meta-analyses published in journals, it is worth noting that the CC recognizes this problem. `After a review has been printed, opportunities for published criticism are usually limited to the few letters that editors can accept for publication, which are often unhelpfully brief and non-specific. It is also frustrating that there is no straight forward way in which the authors of printed reviews can amend their reports after taking account of valid criticisms' (Cochrane Collaboration, 1999). To address this issue, the CC plans to incorporate valid criticisms in updates of a review. To me, however, although a move in the right direction, this is only a half-measure. Meta-analyses require expert evaluation. Indeed, external evaluation should be built into the meta-analytic or research synthesis process. Even if the meta-analysts follow the best procedures, a meta-analysis is not a mechanical or fully formalized set of techniques; it involves multiple judgment points, and so warrants an evaluation conducted by a group that includes methodological and substantive area experts. The evaluation itself should be part of the meta-analytic report. If scientific agreement is urgently needed in certain areas because of public health, security or investment concerns, the extra analytic burden is warranted.
If the evaluation of social policy and programs is now an established practice, how can we fail to evaluate a research synthesis that could form the basis for future social investments and policy? (16).
Meta-analyses have been primarily employed in a retrospective way to establish what we know in an area, given existing research. As such, they lack the excitement of the discovery process in primary research. But in the map of science collaboratory, meta-analysis would also be used prospectively to identify areas for new research, to consolidate and confirm theories, and to develop knowledge-rich models. This is the exception today. (17) In particular, in MSC, meta-analysis would be employed within research programs. Today, the results of meta-analysis in many cases suggest higher-order hypotheses, but these are not followed up. For example, the findings of the thin-slices-of-behavior meta-analysis conducted by Ambady and Rosenthal (1992), which indicate the accuracy of observers' judgments on the basis of non-verbal video clips of 5 minutes or less, should lead to higher-order hypotheses that explain this curious phenomenon. This can best occur if meta-analyses are systematically mapped within appropriate theoretical support.
To date, meta-analysis constitutes a powerful suite of statistical tools. Unfortunately, as with much of statistics, these tools are readily mastered and widely used piecemeal, but most users cannot situate them within an adequate philosophy/theory of knowledge and inquiry. This charge may not seem too significant today, since modeling seems to be short-circuiting theory. With the new modeling tools gleaned from AI, such as neural networks, genetic algorithms and fuzzy logic, and with computer capacity growing so rapidly, models may become more important than theory in many scientific fields. But even if models were to supplant theory, the need for meta-analysis would not lessen. Most models today, even if calibrated to data, even when they are information-rich, are knowledge-poor. (18) That is, the relations that they posit are often based on the expert opinions of the members of the team who develop the model, and prone to all the subjectivity involved. Models not based on meta-analyses, however quantitatively sophisticated they are, are prone to errors and biases comparable to the errors and biases of lone case studies in the social sciences.
Of course, the quality of knowledge synthesis that currently goes into models falls into a wide continuum from poor to good. The models developed by the Global Change consortium represent a tremendous amount of knowledge synthesis accomplished by one of the largest-scale collaborative efforts we have witnessed in the past 20 years, although they fall short of the best standards for research synthesis. Indeed, the first articles employing meta-analytic techniques in global change journals are just beginning to be published (Peterson et al., 1999). But even in this well-funded and organized collaboration, some of the modules of the models, e.g., human factors, are knowledge-poor. This is the case, not due to any failing of the social scientists involved in the collaboration, but because the knowledge in their fields is under-developed, i.e., has not been systematically synthesized. Large-scale models are as weak as their weakest links. Weak links are generated in two ways: either there is a genuine lack of knowledge, which calls for new research; or, although abundant research in an area exists, there are dissenting views. The latter type of weak link responds to meta-analysis. Meta-analysis can also pinpoint the areas where new research would be most fruitful.
Meta-analytic tools are not one-size-fits-all. In order to introduce meta-analytic tools into a new field of science, even in fields that use quantitative methods, e.g., ecology, meta-analytic tools need to be adapted to the particular problems, theories and approaches of the field. Exemplars need to be produced to show how the methods can be effective and reliable in synthesizing research in the field. This requires a collaboration between methodologists and experts in the field, such as occurred recently in the journal Ecology. The map of science project would take up this task of adapting existing tools and of generating exemplar meta-analyses in new fields. But in mapping science, surely the greatest challenge to meta-analysis will be to innovate methodologies for synthesizing the qualitative research of many of the social sciences, and to gain wide acceptance for these methodologies.
THE MAP OF SCIENCE CENTERS
Modeled on the concept of collaboratories, and benefiting from research on meta-analysis/research synthesis, and in particular from the experience of the Cochrane Group, I envision the university centers in the Map of Science as having the following functions and objectives:
* Maintain an Internet archive or registry of empirical research (19) in specific areas. This would include a review of research projects, as well as grading or determining the quality of the research. This in many cases would involve retrieving basic research data not included in published accounts, since, given space constraints, most published articles can only include findings, and a description of the methods. With a few notable exceptions, such as the genome project or the CC, we lack searchable databases of scientific research or of meta-analyses.
* Serve as centers for research and training in meta-analysis. Each university center could develop and update meta-analyses in substantive areas of research where the university has research strengths.
* Provide and maintain Internet access for linked university centers to their registers of individual research reports, meta-analyses, methodological reports and research area assessments. Centers could also provide free Internet access to the public to meta-analyses and research area assessments.
* Ensure front-end linkages between science and public policy, laying the groundwork for evidence-based public policy.
* Develop knowledge-rich models. Models are currently based not on meta-analyses of research findings but on selective expert knowledge. These centers could go beyond current practice in meta-analysis and develop model-driven meta-analyses.
* Facilitate consensus in disciplinary research areas by bringing both substantive and methodological experts together through national and international computer-assisted workshops to review meta-analyses.
* Centers could also take responsibility for conducting exploratory integrative workshops to identify linkages between disciplines that could lead to the identification of inter-disciplinary research programs.
* Serve as centers of innovation in distributed computer hardware and software to support the project.
* Develop methods to visualize scientific findings and their linkages.
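As a rough sketch of what a single entry in the registry described in the first bullet might carry, consider the following record structure. The field names and grading scheme are entirely hypothetical; the point is that the registry preserves what published articles often omit, including raw effect data, explicit quality grades and unpublished null results:

```python
from dataclasses import dataclass, field

@dataclass
class StudyRecord:
    """One entry in a hypothetical research registry: the data a later
    synthesis needs but published accounts often omit."""
    study_id: str
    field_area: str         # e.g. "ecology", "education"
    effect_size: float      # reported or retrieved effect estimate
    std_error: float
    quality_grade: str      # e.g. "A" (randomized) to "C" (weak design)
    published: bool = True  # False preserves unpublished null results
    keywords: list = field(default_factory=list)

registry = [
    StudyRecord("S-001", "ecology", 0.42, 0.11, "A"),
    StudyRecord("S-002", "ecology", 0.05, 0.20, "B", published=False),
]

# Because unpublished null results are registered too, a synthesis drawn
# from the registry is less exposed to publication bias:
eligible = [r for r in registry if r.quality_grade in ("A", "B")]
print(len(eligible), "eligible studies")
```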
How can such a project be implemented? Ideally, the Map of Science Collaboratory would be implemented through government funding of university centers, through institutions such as the National Science Foundation in the United States. The collaboratory could begin with initial funding for 6-12 university centers in the United States or throughout the world. The work of each of the university centers would be interdisciplinary, involving methodologists and substantive area researchers. For a university such as the University of Washington, for example, potential fields to link collaboratively could be the various environmental fields, public policy and planning, public health, the social sciences and statistics. Other universities could choose other mixes, and some overlap among centers should not be discouraged. Since hardware and software would be developed to ensure that researchers at different centers could access shared databases and collaborate on meta-analyses, initial specializations are more a matter of convenience and interest than of fixed domains. If government funding is not forthcoming, I believe that the idea of a worldwide collaboratory concentrating on synthesizing our knowledge will be incrementally implemented through voluntary collaborative efforts, such as the Cochrane and Campbell Groups. It is a powerful idea that will generate its own following. And incrementalism would be fine if we had `world enough, and time', as Marvell put it. But, as I have argued, our need as a species to determine the state of our knowledge and the effects of our interventions is urgent. Public support for the Map of Science Collaboratory could deliver an evidence-based public policy, and a new scientific era.
A NEW SCIENTIFIC ERA
Research synthesis will not eliminate dissent or error, nor discourage innovation. It cannot displace primary scientific inquiry. But it can uniquely reinvigorate the status of science as an objective, consensual enterprise that accumulates knowledge, whose end-product is not just technology but more informed public policy.
How can we expect scientists even within narrow fields to come to wide consensus on anything, if there are no adequate or reliable vehicles for presenting actual evidence? And if consensus is not evident in the scientific community, how can we expect policy-makers or the public to agree on the implications of scientific research?
Without methods for research synthesis, individual scientific opinions within a field have wide leeway to diverge, because there is no shared, reliable evidence that can lead to consensus. Without these tools, we rely on the individual scientist to figure out what counts as reliable knowledge based on the evidence available to her or him. Outside an individual's own research, this evidence comes in the form of articles and traditional literature reviews in scientific journals and conferences. (20) Before the twentieth century, science and society could rely on individual scientists' synthetic capacity, in the face of a manageable fund of scientific research and publications, to reach consensus more readily. Basically, all scientists within a field, if diligent, could keep abreast of available new research. Of course, the problem then was availability. But the community of science within a defined area could reasonably expect that all its members shared a common fund of experience.
Today, increasingly, individual researchers or individual members of research teams, given the individuality of experience and the exponential growth in research and publications, can only sample the existing literature on a subject. The fact that individual researchers can no longer come to grips with all the research occurring in their fields, and must rely on a subjective sample of it, increases the potential for divergent opinions.
Without the use of synthetic tools and vehicles, consensus within a scientific community can still emerge, but it does so mechanically, as an outcome, a resultant of conflicting individual or group views. By aggregating individual research, synthetic tools and vehicles provide a pattern of the population, not a convenience sample, of individual research. The aggregation itself reduces the attention, memory and analytic tasks required of each individual scientist to keep abreast of his or her field. The synthesis generates a new product, a pattern of evidence, which in some cases can be a literal pattern, such as that contained in the Cochrane logo. It is this pattern of evidence that scientists, policy-makers and the public can all experience. These patterns are the evidentiary common experience out of which a deliberate scientific consensus can arise. The strains that the exponential growth in research over the last century has placed on individual analytic capacities are a real threat to the very notion of the community of science, because they erode the formative common ground for a community. Synthetic tools, by creating new evidence patterns that can become a fund of common experience, facilitate both consensus and community in science.
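The pattern of evidence invoked above, the kind a cumulative forest plot such as the one stylized in the Cochrane logo makes visible at a glance, can be sketched as a cumulative fixed-effect meta-analysis in the spirit of Lau et al. (1992). The five trials below are invented for illustration:

```python
import math

def cumulative_pool(effects, ses):
    """Cumulative fixed-effect meta-analysis: re-pool the evidence after
    each new study, returning (pooled effect, pooled SE) at every step."""
    results = []
    sum_w = sum_we = 0.0
    for e, se in zip(effects, ses):
        w = 1.0 / se ** 2          # inverse-variance weight
        sum_w += w
        sum_we += w * e
        results.append((sum_we / sum_w, math.sqrt(1.0 / sum_w)))
    return results

# Five hypothetical trials in order of publication:
effects = [0.6, 0.2, 0.5, 0.3, 0.4]
ses = [0.40, 0.35, 0.30, 0.25, 0.20]

for i, (est, se) in enumerate(cumulative_pool(effects, ses), start=1):
    print(f"after study {i}: estimate {est:+.2f}, "
          f"95% CI half-width {1.96 * se:.2f}")
```

The confidence interval narrows monotonically as studies accumulate; it is this tightening band, not any single study, that constitutes the shared evidence pattern.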
The creation and widespread use of synthetic tools in science will facilitate greater convergence in scientific communities. As a corollary, the synthetic turn that the Map of Science has the potential to usher in will propel science into a new era, shortening the time-lags between paradigms (Kuhn, 1970) by reducing the time that advocates of a position hold on to it. (21) The closer linkage the Map of Science Collaboratory will forge between science and public policy should also facilitate societal consensus.
E. O. Wilson heralds the coming unity of science in Consilience (1998). The map of science collaboratory is the way to operationalize the quest for the unity of knowledge. Wilson makes a strong case for emerging connections among previously disparate disciplines. Connections are easy to make; we have an innate capacity for thinking metaphorically. But connections are only as good as the work they connect, and this work must go beyond the exploratory or selective, or it will remain inconclusive and subject to challenge. The connections that will outline the unity of science must be based on sound research syntheses. We will probably always rely on brilliant minds, like Wilson or Diamond (1997), to do exploratory syntheses, but progress in science requires more than this. It requires a committed collaboratory that takes seriously the role of synthesis in science and of community in the community of inquiry. Only thus can we hope to achieve conscious, deliberate progress in science, and to provide the knowledge-base for an informed public policy.
(1) A search of NSF awards abstracts database conducted on 9 July 2000 for the keyword `collaboratories' produced 26 awards for a total of about $29 million. These included grants for improved technology, as well as grants for establishing collaboratories.
(2) See the Cochrane Brochure at http://www.cochrane.org/cochrane/cc-broch.htm#LOGO.
(3) See the work of Barwise and Etchemendy (1998) for some initial work in this area.
(4) Qbone is an Internet2 initiative to develop and deploy an interdomain testbed for new IP network services. See their homepage at http://qbone.internet2.edu/.
(5) Researchers at the University of Michigan and at Howard University are undertaking this project. Their web-page is found at http://scienceofcollaboratories.org/.
(6) But several other researchers were coming up with similar ideas and methods at about the same time. For some of this history, see Olkin (1992); Wachter and Straf (1990); Hunt (1997).
(7) Publication bias refers to the tendency of journals and authors not to publish articles on research with no significant findings. Since the reliability of research synthesis rests on including all the effects, including non-significant ones, this bias is a vital threat to the method. It has, however, received quite a bit of attention. See Sterne et al. (2001) for a recent article on this topic and guidelines for addressing this bias in meta-analysis.
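One widely used check for this bias, regressing standardized effects on precision (Egger et al., 1997), can be sketched as follows. The implementation is a simplified illustration, not the published procedure in full, and the data are invented:

```python
def egger_intercept(effects, ses):
    """Simplified Egger-style asymmetry check: regress the standardized
    effect (effect/SE) on precision (1/SE) by ordinary least squares.
    An intercept far from zero suggests funnel-plot asymmetry, one
    possible sign of publication bias."""
    xs = [1.0 / se for se in ses]                 # precision
    ys = [e / se for e, se in zip(effects, ses)]  # standardized effect
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx

ses = [0.1, 0.2, 0.3, 0.4]
unbiased = [0.3 for _ in ses]              # same true effect in every study
biased = [0.3 + 0.5 * se for se in ses]    # small (noisy) studies inflated

print("intercept, unbiased sample:", egger_intercept(unbiased, ses))
print("intercept, biased sample:", egger_intercept(biased, ses))
```

In the unbiased sample the intercept is essentially zero; in the biased sample, where smaller studies report inflated effects, it is clearly positive.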
(8) A good indicator of this is the recent publication in the Lancet of the QUOROM guidelines for preparing meta-analyses (Moher et al., 1999).
(9) Note that this may be an underestimate since some researchers prefer the term research synthesis, which may not be picked up consistently as a meta-analysis item.
(10) PEMD was disbanded in 1996 due to general cuts in the federal government.
(12) The Cochrane Library also develops and maintains a dataset of abstracts of meta-analyses conducted externally (1634) that meet a set of quality criteria, and bibliographic entries for 799 other externally conducted meta-analyses that meet minimum criteria. It also maintains a database of abstracts of health technology assessments containing 1769 abstracts, and an economic evaluation database that contains 4735 economic evaluations. All of this information was reported on the Cochrane Library web-page in early July of 2000.
(13) Url: http://www.update-software.com/Cochrane/default.HTM [accessed August 9, 2002].
(14) The major feature that differentiates quasi-experimental designs from experimental designs is the lack of random assignment in the former.
(15) The Campbell Group inherited and is augmenting the Cochrane Group's registry on randomized and possibly randomized trials in education, social work and welfare, and criminal justice originally called Social, Psychological, Educational, and Criminological Trials Register (SPECTR), and now known as C2-SPECTR. Access to C2-SPECTR is provided free to the public via the internet. The Campbell Group's major web-page in the United States is http://campbell.gse.upenn.edu/. Information on the methods groups can be accessed via http://www.missouri.edu/%7Ec2method/.
(16) The meta-analyses could be electronically published as drafts, general comments solicited, and a group of substantive and methodological experts can be convened to review the draft, evaluate for various biases, and provide input on the greater theoretical or research linkages. The meta-analytic team would then have an opportunity to change its report and/or respond to the evaluation. The final electronic version would incorporate the report of the evaluation team, and any significant changes.
(17) See Cook et al. (1992) for several examples of how researchers can use meta-analysis as an explanatory tool, especially Becker's model-driven research synthesis on the forces affecting male and female performance in school science. See also Becker (2001) for a recent call for model-driven syntheses.
(18) I refer here to a commonsense distinction between information and knowledge. By information I mean more or less interpreted data that have been put into context so that patterns can be discerned; by knowledge I mean information that has been: (a) situated within a theoretical context, i.e., related to other known relationships or theories; (b) tested against competing claims in some way to establish its credibility; and (c) made usable.
(19) Olkin (1990) made a similar recommendation to the Committee on National Statistics of the National Research Council in 1986 for a scientific repository for reporting all studies, whether significant, non-significant or indecisive.
(20) Of course, there are other factors that influence the divergence or consensus of opinion such as common education and training, and current research funding patterns, but publications of research are an important determining factor.
(21) Cooper (1990) examined how interested parties evaluated literature reviews on desegregation and black achievement. The results were mixed: while reviewers' views converged on quantitative estimates of effect, their characterizations of the effect remained divergent. As Cooper put it, `Thus, while a great deal of agreement might be reached on the observation that an eight-ounce glass contains four ounces of water, there can still be much disagreement about whether the glass is half empty or half full' (Cooper, 1990, p. 87). However, as Cooper points out, the desegregation panelists that he studied had a great deal of prior knowledge and tenacious beliefs, which tends to reduce the possibility of change in basic beliefs.
APPENDIX: META-ANALYSES ENTRIES IN MAJOR DATABASES IN HEALTH, BIOLOGICAL, SOCIAL SCIENCES AND ENGINEERING, 6-7 JULY 2000. For each database: total entries, entries by time period (periods varied by database), and median year of publication, where reported.
* ABI/Inform Global (business): 432 (1998-00: 85; 1986-97: 318; <1986: 29)
* Agricola: 96 (1992-00: 76; <1992: 20); median year 1995
* Anthropological Literature: 3
* Arts and Humanities Citation Index: 18
* ASFA (Aquatic Sciences and Fisheries Abstracts): 36; median year 1997
* ASCE (Civil Engineering): 114
* BIOSIS (life sciences journals): 1993-00: 3666; 1991-92: 399
* Current Contents: 3199 (1997-00, week 27)
* Current Index to Statistics: 385
* Dissertation Abstracts: 825 (1990-00: 536; 1989-89: 281)
* EconLit: 67 (1990-00: 57; <1990: 10); median year 1995
* ERIC: 1413 (1992-00: 575; 1982-91: 773; 1966-81: 65)
* Geobase: 105 (1990-00: 102; <1990: 3)
* Health Star: 6364
* IEEE Xplore: 18
* INSPEC (engineering): 134 (1998-00: 47; 1990-97: 64; <1990: 24)
* PAIS Int'l: 7
* MedLine via Pub-Med: 6836
* Science Expanded: 3157 articles (2000/06: 202; 1999: 526; 1998: 477; 1997: 463; 1996: 333; 1995: 280; 1994: 221; 1993: 171; 1992: 174; 1991: 123; 1990: 11; 1989: 57; 1988: 27; 1987: 0; 1986: 0; 1985: 5)
* Social Science Citation Index: 2112 articles (2000/06: 130; 1999: 273; 1998: 282; 1997: 277; 1996: 263; 1995: 224; 1994: 199; 1993: 162; 1992: 176; 1991: 120; 1990: 6)
* Sociological Abstracts: 216 (1986-00: 197; <86: 19); median year 1993
* Social Work Abstracts: 52; median year 1991
Ambady N, Rosenthal R. 1992. Thin slices of behavior as predictors of interpersonal consequences: a meta-analysis. Psychological Bulletin 2: 265-274.
Barwise J, Etchemendy J. 1998. Computers, visualization, and the nature of reasoning. In The Digital Phoenix: How Computers Are Changing Philosophy, Bynum TW, Moor JH (eds). Blackwell: London; 93-116.
Becker BJ. 2001. Examining theoretical models through research synthesis: the benefits of model-driven meta-analysis. Evaluation and the Health Professions 24(2): 190-217.
Cochrane AL. 1972. Effectiveness and Efficiency: Random Reflections on Health Services. Nuffield Provincial Hospitals Trust: London (reprinted in 1989 in association with the BMJ).
Cochrane Collaboration. 1999. Cochrane Brochure. http://www.cochrane.org [accessed 5 December 1999].
Cook TD, et al. 1992. Meta-Analysis for Explanation: A Casebook. Russell Sage Foundation: New York.
Cooper HM. 1984. The Integrative Research Review: A Systematic Approach. Sage: Beverly Hills, CA.
Cooper HM. 1990. On the social psychology of using research reviews. In The Future of Meta-Analysis, Wachter KW, Straf ML (eds). Russell Sage Foundation: New York; 75-88.
Cooper HM, Hedges LV (eds). 1994. The Handbook of Research Synthesis. Russell Sage Foundation: New York.
Diamond J. 1997. Guns, Germs, and Steel. Norton: New York.
Egger M, Smith GD, Schneider M, Minder C. 1997. Bias in meta-analysis detected by a simple graphical test. British Medical Journal 315(7109): 629-634.
Ellis G, Levinson RA, Rich W, Sowa JF (eds). 1995a. Conceptual graphs: structure-based knowledge representation. In Proceedings of the Third International Conference on Conceptual Structures, ICCS'95, 14-18 August, University of California, Santa Cruz.
Ellis G, Levinson RA, Rich W, Sowa JF (eds). 1995b. Conceptual Structures: Applications, Implementation, and Theory. Lecture Notes in Artificial Intelligence 954. Springer: Berlin.
Eysenck HJ. 1978. An exercise in mega-silliness. American Psychologist 33: 517.
Feinstein AR. 1995. Meta-analysis: statistical alchemy for the 21st century. Journal of Clinical Epidemiology 48: 71-79.
Glass GV. 1976. Primary, secondary, and meta-analysis of research. Educational Researcher 10: 3-8.
Hedges LV, Olkin I. 1985. Statistical Methods for Meta-analysis. Academic Press: New York.
Hunt M. 1997. How Science Takes Stock: The Story of Meta-analysis. Russell Sage Foundation: New York.
Hunter JE, Schmidt FL. 1990. Methods of Meta-analysis. Sage: Newbury Park, CA.
Kiernan V. 1999. Internet-based `collaboratories' help scientists work together. Chronicle of Higher Education 45(27): A22-A23.
Kouzes RT, Myers JD, Wulf WA. 1996. Collaboratories: doing science on the Internet. IEEE Computer 29(8): 40-46.
Kuhn TS. 1970. The Structure of Scientific Revolutions (2nd edn). University of Chicago Press: Chicago.
Lau J, Antman EM, Jimenez-Silva J, Kupelnick B, Mosteller F, Chalmers TC. 1992. Cumulative meta-analysis of therapeutic trials for myocardial infarction. New England Journal of Medicine 327: 248-254.
LeLorier J, Gregoire G, Benhaddad A, Lapierre J, Derderian F. 1997. Discrepancies between meta-analysis and subsequent large randomized, controlled trials. New England Journal of Medicine 337(8): 536-542.
Light RJ, Pillemer DB. 1984. Summing Up: The Science of Reviewing Research. Harvard University Press: Cambridge, MA.
Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF. 1999. Improving the quality of reports of meta-analyses of randomized controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses. Lancet 354(9193): 1896-1900.
National Research Council. 1993. National Collaboratories: Applying Information Technologies for Scientific Research. National Academy Press: Washington, DC.
Olkin I. 1990. History and goals. In The Future of Meta-analysis, Wachter KW, Straf ML (eds). Russell Sage Foundation: New York; 3-10.
Peterson AG, et al. 1999. A photosynthesis--leaf nitrogen relationship at ambient and elevated atmospheric carbon dioxide: a meta-analysis. Global Change Biology 5(3): 331-346.
Peto R. 1987. Why do we need systematic overviews of randomized trials? Statistics in Medicine 6: 233-240.
Rosenthal R. 1984. Meta-analytic Procedures for Social Research. Sage: Beverly Hills, CA.
Ross-Flanigan N. 1998. The virtues (and vices) of virtual colleagues. Technology Review 101(2): 52-59.
Rubin D. 1990. A new perspective. In The Future of Meta-analysis, Wachter KW, Straf ML (eds). Russell Sage Foundation: New York.
Schur A, et al. 1998. Collaborative suites for experiment-oriented scientific research. ACM Interactions 3: 40-47.
Shapiro S. 1994. Meta-analysis/schmeta-analysis. American Journal of Epidemiology 140: 771-778.
Sowa JF. 1984. Conceptual Structures: Information Processing in Mind and Machines. Addison-Wesley: Reading, MA.
Sterne JAC, Egger M, Smith GD. 2001. Investigating and dealing with publication and other biases in meta-analysis. British Medical Journal 323(7304): 101-105.
Wachter KW, Straf ML (eds). 1990. The Future of Meta-analysis. Russell Sage Foundation: New York.
Walker A. 1999. Meta-style and expert review. Lancet 354(9193): 1834-1835.
Wilson EO. 1998. Consilience: The Unity of Knowledge. Vintage Books: New York.
Wolf F. 1986. Meta-analysis. Sage: Beverly Hills, CA.
Correspondence to: Hilda Blanco, Department of Urban Design and Planning, College of Architecture and Urban Planning, Box 355740, University of Washington, Seattle, WA 98195-5740, USA. E-mail: email@example.com
Publication: Systems Research and Behavioral Science
Date: Sep 1, 2002