
What's in a Metric? Data Sources of Key Research Impact Tools.

Researchers, funders, publishers, and other entities are under continued pressure to demonstrate the relevance of their activities. Many demonstrations of relevance center on research impact metrics. Traditionally well-ensconced in the ivory tower, research impact is known as the element that makes or breaks academic promotion and tenure dossiers. Metric-based demonstrations of impact are now applied to an increasingly broad variety of research outputs. "Impact" is measured in numerous ways, but citation-based metrics, or bibliometrics, remain the linchpin for demonstrating impact even in these new venues and circumstances. This naturally raises the question of how well our bibliometric tools demonstrate the impact and value of research activity.

There are four primary tools that offer cross-disciplinary and in-depth citation tracking at a level that facilitates research impact analysis: Web of Science (Clarivate Analytics), Scopus (Elsevier), Google Scholar (Google), and relative newcomer Dimensions (Digital Science). Each of these tools has its own slightly different suite of metrics, but all provide citation counts, which are the fundamental building blocks of bibliometric indicators.

Citation counts are pulled from the dataset compiled by the database provider, so the actual count varies depending on which tool is used. The datasets overlap only partially from tool to tool; each one also has its own unique content. The citation count for an article found in Web of Science will, in all likelihood, be different from the count for the same article in Scopus, Google Scholar, or Dimensions. This can raise questions among researchers and others looking to demonstrate the maximized impact of their research output. To better understand the differences in citation counts from tool to tool, we need to look at the content selection approaches, philosophies, and processes of the four big players in the bibliometrics arena.
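To make the point concrete, a citation count can be pulled for the same article from more than one openly available dataset and the numbers compared side by side. The minimal Python sketch below does this using two open sources, the Crossref REST API and the OpenCitations COCI API; the DOI is a placeholder, the endpoint paths reflect those services' public documentation as I understand it, and neither source is one of the four tools discussed in this column. The point is simply that each dataset yields its own number, and the same divergence appears when the subscription tools are queried.

```python
import json
import urllib.request

# Placeholder DOI for illustration; substitute any DOI of interest.
DOI = "10.1000/xyz123"

def fetch_json(url):
    """Retrieve a URL and parse the JSON response."""
    with urllib.request.urlopen(url, timeout=30) as response:
        return json.loads(response.read().decode("utf-8"))

def crossref_count(doi):
    """Citation count recorded in Crossref's dataset for this DOI."""
    data = fetch_json(f"https://api.crossref.org/works/{doi}")
    return data["message"]["is-referenced-by-count"]

def opencitations_count(doi):
    """Citation count recorded in the OpenCitations COCI index for this DOI."""
    data = fetch_json(f"https://opencitations.net/index/coci/api/v1/citation-count/{doi}")
    return int(data[0]["count"]) if data else 0

if __name__ == "__main__":
    # The two numbers will often differ, because each count reflects only
    # the citing documents present in that provider's dataset.
    print("Crossref:      ", crossref_count(DOI))
    print("OpenCitations: ", opencitations_count(DOI))
```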

First, a bit of background. Web of Science has the longest pedigree of the four, having emanated from the Institute for Scientific Information (ISI), which was created by the venerated Eugene Garfield in the 1960s. The initial print-format citation index focused on the so-called hard sciences. Today, Clarivate Analytics, which has owned the tool since 2017, touts Web of Science as "the world's most trusted publisher-independent global citation database" (clarivate.com/webofsciencegroup/solutions/web-of-science). For decades, the ISI indexes, and then Web of Science, were the sole purveyors of reliable bibliometric data. Then came Scopus.

Scopus was created by Elsevier and released in 2004. Scopus documentation emphasizes its global focus, comprehensiveness, and quality control (elsevier.com/solutions/scopus/why-choose-scopus). Google Scholar, also released in 2004, has a promotional focus on the technology behind the tool and how its automated approach benefits the researcher, according to Steven Levy's profile in Wired of Google Scholar founder Anurag Acharya (wired.com/2014/10/the-gentleman-who-made-scholar). Finally, the newest tool in our bibliometric kit is Dimensions, which also focuses on its unique technological approach: comprehensive coverage using linked data of not just peer-reviewed journal articles (PRJAs), but other outputs related to the research lifecycle (dimensions.ai).

Both Web of Science and Scopus are proprietary and come with what are generally perceived as hefty price tags, although public data on the subscription rates is not available. Google Scholar, like other Google tools, is publicly accessible for free. Dimensions has a hybrid model in which some content is freely available to users who set up login credentials, and some is proprietary and available through a subscription.

BROAD DIFFERENCES IN COVERAGE

As noted, each of the four major citation databases contains both overlapping and unique content. Knowing how content is identified, selected, and evaluated for inclusion in a given database is particularly relevant to understanding differences in citation counts and other features of each tool. Not surprisingly, each database has a relatively distinct focus.

According to its website, Web of Science's Core Collection contains 1.5 billion cited references dating back to 1900, within 74.8 million records. It covers more than 21,000 unique journals in 254 disciplines (clarivate.com/webofsciencegroup/solutions/web-of-science-core-collection). It should be noted that Web of Science also has a Book Citation Index, a Data Citation Index, and an Emerging Sources Citation Index, which also provide meaningful information for impact metrics. Scopus contains 1.4 billion cited references going back to 1970, within more than 70 million records. It covers 22,800 titles, more than 150,000 books, and 8 million conference papers, as well as articles in press, patents, and other materials that are scholarly but not PRJAs (elsevier.com/solutions/scopus/how-scopus-works/content).

Google Scholar, because it has an algorithmic- and article-based collection technique, does not keep statistics on how much or what type of content is available. Google's documentation states: "Google Scholar includes journal and conference papers, theses and dissertations, academic books, preprints, abstracts, technical reports and other scholarly literature from all broad areas of research. You'll find works from a wide variety of academic publishers, professional societies and university repositories, as well as scholarly articles available anywhere across the web. Google Scholar also includes court opinions and patents" (scholar.google.com/intl/en/scholar/help.html#coverage).

Last but not least, Dimensions "harvests" data from reliable, but openly available, indexes and databases. By July 2019, Dimensions claimed 1.3 billion references and connections between records, 102 million publications, and full-text indexing of 69 million documents. Its content comes from CrossRef, PubMed, open citation data, and other publicly available resources. The content includes, of course, PRJAs, as well as related outputs such as datasets, supporting grants, patents, policy documents, and clinical trials. Thus, while the surface counts of publications, records, and references make the databases seem competitively similar, the content within the four tools varies considerably.
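As a rough illustration of what harvesting from an open source looks like, the Python sketch below pulls a handful of publication records from PubMed through NCBI's public E-utilities (esearch to find record IDs, then esummary to retrieve basic metadata). The query term and result limit are arbitrary placeholders, and this is only a schematic of the harvest-from-open-sources approach, not Dimensions' actual ingestion pipeline.

```python
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def fetch_json(url):
    """Retrieve a URL and parse the JSON response."""
    with urllib.request.urlopen(url, timeout=30) as response:
        return json.loads(response.read().decode("utf-8"))

def harvest_pubmed(term, limit=5):
    """Return basic metadata for the first few PubMed records matching a query."""
    # Step 1: esearch finds PubMed IDs matching the query term.
    search_url = f"{EUTILS}/esearch.fcgi?" + urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "retmax": limit, "retmode": "json"}
    )
    ids = fetch_json(search_url)["esearchresult"]["idlist"]
    if not ids:
        return []

    # Step 2: esummary returns summary metadata (title, journal, etc.) per ID.
    summary_url = f"{EUTILS}/esummary.fcgi?" + urllib.parse.urlencode(
        {"db": "pubmed", "id": ",".join(ids), "retmode": "json"}
    )
    result = fetch_json(summary_url)["result"]
    return [
        {"pmid": uid, "title": result[uid]["title"], "source": result[uid]["source"]}
        for uid in ids
    ]

if __name__ == "__main__":
    # Placeholder query; a real harvester would page through results and
    # normalize records before linking them to grants, patents, and so on.
    for record in harvest_pubmed("bibliometrics"):
        print(record["pmid"], "-", record["title"])
```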

SELECTION AND EVALUATION PROCESSES

The types of content available in Web of Science, Scopus, Google Scholar, and Dimensions give strong clues about their respective approaches to quantifying research impact. The processes by which potential content is identified, evaluated, and eventually selected likewise vary from tool to tool, and understanding them provides even greater insight. Naturally, because each of the database companies is essentially a for-profit, proprietary organization, the "secret sauce" by which content is added is described somewhat vaguely by all.

Both Scopus and Web of Science take applications from journal editors and user suggestions into account when adding new content. Because Google Scholar uses an algorithm to identify content, the philosophy seems to be that the tool will pick up as much content as possible that the crawler can identify as a scholarly resource. Dimensions takes an inclusive approach in that its content comes directly from other open sources without an internal evaluation process; the inference is that the sources from which content is drawn are already making quality assessments. However, the Dimensions documentation indicates concern about the need for filtering predatory journals from the mix and further allows the user to "whitelist" and "blacklist" content retrieved in results, stating: "People no longer expect or desire that a search engine should filter content based on the preferences of a vendor" (dimensions.ai/resources/a-guide-to-the-dimensions-data-approach).

Web of Science uses a panel of subject matter experts who are Web of Science employees to evaluate prospective content. Web of Science describes this internal panel as publisher-neutral and not influenced by outside interests, emphasizing that it does not use any automated evaluative processes, such as algorithmic applications. Scopus states that it has made an effort to create an international, globally focused advisory board that includes a variety of journal editors, librarians, bibliometrics experts, and so forth. This advisory board works with content evaluators working under "subject chairs," who appear to do the boots-on-the-ground review and evaluation (elsevier.com/__data/assets/pdf_file/0004/95116/general_introduction_csab.pdf). As is to be expected, neither Google Scholar nor Dimensions retains subject experts for journal-level or article-level review, given their automated approaches to content inclusion.

Web of Science and Scopus both evaluate content largely at the journal level. Each identifies basic minimum standards to ensure a journal title is stable and enduring. Web of Science states that it uses 28 quality criteria to ensure new content meets standards of rigor and four impact criteria, largely citation-based, to ensure that the new content is influential within its subject domain (clarivate.com/webofsciencegroup/wp-content/uploads/sites/2/2019/08/WS369553747_Fact-Sheet_Core-Collection_V4_Updated1.pdf). Scopus touts its STEP (Scopus Title Evaluation Platform) system as a tool for efficiently automating some of the more rote evaluation processes. Although Scopus identifies a focus on increasing global and international coverage, it nonetheless requires English-language abstracts and full text using the Roman alphabet (elsevier.com/solutions/scopus/how-scopus-works/content/content-policy-and-selection).

Over the years, both Web of Science and Scopus have become increasingly transparent about the selection and inclusion processes they present to journal editors trying to get their publications indexed in the tools. Less clear are the internal analyses that Scopus and Web of Science conduct to identify impactful journals and proactively reach out to possible targets for inclusion. Google Scholar and Dimensions do not have evaluators; thus, it is incumbent upon the researcher to decide what content to use and what to disregard.

STRENGTHS AND LIMITATIONS

Clearly, there are some trade-offs here. Web of Science and Scopus seem to be running the most directly competitive approaches to providing bibliometric data. Both are aggressively taking action to increase coverage, particularly in the social sciences. Both emphasize quality over quantity but differ in their approaches to, and definitions of, what constitutes quality. Dimensions' strength is in bringing disparate data together on a variety of research outputs, normalizing and linking the connections between them. There is more of a focus on quantity with Dimensions than with Web of Science or Scopus, but the Dimensions approach is to feed in data from sources already vetted for quality, such as PubMed.

Dimensions has also stated that it will add other indexes as they are suggested, so there is an implied evaluative process at the index level, as opposed to the journal-level evaluation processes of Web of Science and Scopus. While not evaluative per se, Google Scholar identifies a given web object as scholarly if it has the right metadata to be picked up by its crawler (scholar.google.com/intl/en/scholar/inclusion.html). In fact, a 2010 article pointed to the risk of a proliferation of "citation spam" in Google Scholar due to this lack of quality evaluation and the ease of creating the proper metadata in the architecture of a spoof webpage ("Academic Search Engine Spam and Google Scholar's Resilience Against It," Joeran Beel and Bela Gipp, Journal of Electronic Publishing, vol. 13, no. 3, December 2010; dx.doi.org/10.3998/3336451.0013.305).
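To give a sense of how little is involved, the Python sketch below fetches a landing page and lists any citation_* meta tags it finds (citation_title, citation_author, citation_publication_date, and so on), which are the kind of tags the inclusion guidelines cited above describe for Google Scholar's crawler. The URL is a placeholder, and the parsing here is only an approximation of how one might inspect a page, not Google's actual crawler logic.

```python
from html.parser import HTMLParser
import urllib.request

class CitationMetaParser(HTMLParser):
    """Collect <meta name="citation_*" content="..."> tags from an HTML page."""

    def __init__(self):
        super().__init__()
        self.tags = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name = attrs.get("name") or ""
        if name.startswith("citation_"):
            # Some tags (e.g., citation_author) can legitimately repeat.
            self.tags.setdefault(name, []).append(attrs.get("content") or "")

def citation_metadata(url):
    """Fetch a landing page and return any citation_* metadata found on it."""
    with urllib.request.urlopen(url, timeout=30) as response:
        html = response.read().decode("utf-8", errors="replace")
    parser = CitationMetaParser()
    parser.feed(html)
    return parser.tags

if __name__ == "__main__":
    # Placeholder URL; substitute the landing page of any article of interest.
    found = citation_metadata("https://example.org/article-landing-page")
    for name, values in found.items():
        print(name, "=", "; ".join(values))
```

Because anyone can publish a page carrying the same handful of tags, the barrier to entry is metadata hygiene rather than editorial review, which is precisely the exposure Beel and Gipp described.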

All of this is to say that none of the four major citation tools has the data sources to provide a complete picture of research impact. Using these resources is more a matter of understanding which database is the right tool for the job at hand. Understanding how content comes to be included in the databases is only the beginning of understanding which tool best suits a given task. There are times when it may be appropriate to use thoroughly evaluated content as a mark of reliability and enduring impact. There may be other times when non-English-language materials provide a more complete picture of impact. The discipline in question may not rely on PRJAs as the primary vehicle for scholarly communication.

In this column, I have not even broached the topic of other features present in these tools: the unique indicators offered, analytics options, researcher profiles, ancillary tools such as reference management, peer-review documentation, or even the robustness of the record metadata in each tool. Some of these aspects will be discussed in future columns. There is much discussion about the appropriate and responsible use of research impact metrics and indicators in academic evaluation and elsewhere. Understanding what content is feeding those metrics is the first step to responsible use.

Elaine M. Lasda

University at Albany, SUNY

Elaine M. Lasda (elasda@albany.edu) is associate librarian for research impact and social welfare, University at Albany, SUNY.

Comments? Email the editor-in-chief (marydee@xmission.com).
