
The missing link (resolver).

When is the last time you checked in on your link resolver? If you're like most of us, the answer is probably "not at all recently." Link resolvers are essentially "set it and forget it" pieces of our technology infrastructure. They're also critical pieces in the web scale discovery process: chances are, the vast majority of content that your users discover using your discovery system is delivered via your OpenURL link resolver. But OpenURL has been around for more than 10 years, and no one is really talking about it one way or the other, so what's the point in writing about it?

There is, unfortunately, a fly in the ointment. OpenURL linking works great when it works. The problem is, these links have alarmingly high rates of failure--anywhere between 3% and 30%. That's disturbing. To put that failure percentage in context, imagine if, back in the good old print-only days, that proportion of the book call numbers in your catalog had been incorrect. I doubt we would have found it acceptable. Of course, books were, and are, sometimes impossible to find on the shelves because they're lost, stolen, or misshelved. However, the link resolver failure rate doesn't result from any of those factors. Rather, it's all about bad metadata.

Why are there so many bad links? Usually, the metadata is incomplete, incorrect, or correct but mismatched. Think about the well-known issue of authority control: authors' names can be, and often are, expressed with several variations. What happens when the metadata in the OpenURL link doesn't exactly match what's in the database of library holdings? Often, failure. There have been attempts to improve upon the original OpenURL standard to address these issues, but after all this time, dynamic OpenURL linking remains sufficiently unreliable that major discovery and serials vendors such as ProQuest and EBSCO have developed alternatives to it.
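To make the mismatch concrete, here is a toy sketch--not any vendor's actual matching logic--showing how an exact comparison fails on a trivial name variation while even a crude normalization succeeds:

```python
# Toy illustration of why raw metadata matching fails: the same author
# expressed two ways defeats an exact string comparison.
def normalize(name: str) -> str:
    """Crude normalization: lowercase, strip punctuation, sort the words."""
    cleaned = "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace())
    return " ".join(sorted(cleaned.split()))

incoming = "Morville, Peter"   # form carried in the OpenURL metadata
holdings = "Peter Morville"    # form stored in the holdings knowledge base

print(incoming == holdings)                        # exact match fails
print(normalize(incoming) == normalize(holdings))  # normalized match succeeds
```

Real knowledge bases face far messier variation than this (initials, diacritics, transliterations), which is part of why failure rates stay stubbornly high.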


OpenURL is not a technology; it's a standard. In libraries, it exists primarily to connect users in a virtual space where the library's full text does not live--say, in Google Scholar--to a place where the full text does live--say, in a library-subscribed ejournal. The standard specifies a number of elements an OpenURL link should have: a base URL (the location of an institution's link resolver) and a query string into which is embedded bibliographic data, including title, journal, author, volume, number, and page numbers. When a user clicks an OpenURL link, the metadata transferred along with the link is matched against a database of the library's subscribed full-text holdings. If there's a match, the user is passed through to the full text (often with an intermediary step where the user needs to authenticate). A sample OpenURL link, theoretically representing Peter Morville's new book, Intertwingled: Information Changes Everything, would look like this:
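As a sketch of the format (the resolver hostname here is hypothetical, and institutions vary in which Z39.88 key-value pairs they expect), such a link is simply a base URL plus an encoded query string of bibliographic metadata:

```python
from urllib.parse import urlencode

# Hypothetical resolver base URL -- each institution has its own.
BASE = "https://resolver.example.edu/openurl"

# OpenURL 1.0 (Z39.88-2004) key-value pairs describing a book citation.
params = {
    "url_ver": "Z39.88-2004",
    "ctx_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:book",
    "rft.genre": "book",
    "rft.btitle": "Intertwingled: Information Changes Everything",
    "rft.au": "Morville, Peter",
    "rft.date": "2014",
}

link = BASE + "?" + urlencode(params)
print(link)
```

If any of those `rft.` values is wrong, missing, or expressed differently than the holdings database expects, the match described above fails and the user hits a dead end.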

Once you've invested a great deal of time and money in your discovery layer, you expect it to work. In most cases, implicit in the discovery experience is delivery, at least of online items. This doesn't happen when OpenURL fails. How important is this linking? OCLC's 2009 report, "Online Catalogs: What Users and Librarians Want," indicates that the No. 1 request of users is more links to online full-text content. In the executive summary, the OCLC report states, "The end user's experience of the delivery of wanted items is as important, if not more important, than his or her discovery experience." So, pretty important.

Why aren't we more upset about bad links? Hard to say. A few thoughts: If a discovery system has, say, 300 million items, and we assume a failure rate on the high end of 30%, that's 90 million bad links. As an absolute number, that's a lot, but spread out across the hundreds, thousands, or tens of thousands of searches conducted daily, the failures play out fairly randomly.

Bonnie Imler from Penn State University-Altoona has done some excellent research on student use of link resolvers (Imler, B., & M. Eichelberger, 2011. "Do They 'Get It'? Student Usage of SFX Citation Linking Software," College & Research Libraries, 72(5):454-463). Her research has brought to light a number of issues, but the one most relevant to link resolver failure is this: When users encounter a link failure, they blame themselves. They assume they've done something wrong, so they hit the back button or re-run the search. Thus, in many cases, bad links aren't reported as bad links, since the searcher doesn't recognize the problem for what it is: link failure.

On the library end, we know there's massive complexity when it comes to the universe of electronic holdings. Journals change publishers and providers, dates of coverage change, aggregator databases add and drop titles, titles change platforms, and other serials nightmares occur. I think on some level we're surprised and delighted that linking works at all.


Although I've been talking about OpenURL primarily in the context of the linking that takes place from within a discovery platform, your OpenURL resolver plays another important role: being the primary interface to content discovered outside of the library on the network at large, such as content discovered via Google Scholar.

We know that discovery is increasingly happening out on the network at large and not in library search tools. Google Scholar, for example, allows institutions to upload their holdings information on the back end and users to identify their affiliation on the front end, enabling them to link through to full text they have access to. That's fine if everything is working as it should, which, as we've seen, is often not the case. Nonetheless, the infrastructure is in place to allow users to access full text via their institutional affiliation despite the discovery happening outside the confines of the library. There's a fly in this ointment as well. That fly is your link resolver landing page. What's wrong with it? Probably everything.


Across the board, these resolver landing pages are usability nightmares. Noncontextual, confusing, lacking hierarchy, lacking clear calls to action, lacking explanation--the list goes on and on. That's one issue. Another is the multiple-copy issue. Many libraries have access to the same full-text objects via multiple providers, which is great. The problem is that resolver pages usually present all of these links to users, so there may be two or three links to the same article. From an end-user perspective, this creates confusion. Additionally, there are cases where some providers do not allow linking down to the article level, but only to the journal level. These links are presented as well. In that case, users need to intuit that, yes, the full text of this article is available to them, and, yes, they just searched for it, but they need to go into another interface, that of the journal or aggregator itself, and either re-run the search or browse to the correct volume, issue, and pages. What could possibly go wrong?

Think this picture is already bleak? It gets worse. Assuming that a user makes it as far as the full-text provider's interface, it's generally clear as mud how to get from what is often a citation and abstract to the actual full text, whether it is in HTML, PDF, or another format. Again, Bonnie Imler has done research in this area, and the results aren't pretty (Imler, B., & M. Eichelberger, 2014. "Commercial Database Design vs. Library Terminology Comprehension: Why Do Students Print Abstracts Instead of Full-Text Articles?" College & Research Libraries, 75(3):284-297; crl.acrl.org/content/75/3/284.full.pdf+html). This is fodder for a completely separate column, but suffice it to say that when users are unsuccessful in finding the often small, sometimes confusingly labeled, generally unintuitively located PDF or HTML links or buttons, we have an ecosystem that unfortunately seems designed to make users fail.

For many scenarios, then, these link resolver landing pages are in a sense the library's homepage, and they're a mess. In most systems, these pages are templated in such a way that making substantive changes is difficult or impossible.


Given the degree of unreliability in OpenURL links, vendors have taken it upon themselves to create indexes of direct links that are more reliable, thus, in most cases, bypassing OpenURL. Disclaimer: The specifics that follow are correct as of the time of writing and based on information available on the open web, as well as information from publicly available presentations, papers, and blogs. While the numbers might change, the general picture of each vendor's offering is correct. It should also be noted that, in both cases, EBSCO and ProQuest are still using OpenURL linking; these new products are, at least for now, supplementing, rather than replacing, OpenURL linking.

EBSCO's direct linking technology is called SmartLinks; ProQuest's is called IEDL: Index Enhanced Direct Linking. As of this writing, ExLibris' linking platform is still 100% OpenURL-based, as is OCLC's. IEDL links in ProQuest's 360 Link product cover more than 500 million articles, journals, ebooks, and chapters from across more than 370 providers and cover content in more than 4,000 databases. SmartLinks are available for 23 million articles and direct users to content only on the EBSCO platform.

ProQuest's IEDL grew out of the index used for its Summon discovery service, but has now been integrated into the linking product, 360 Link, so you needn't have Summon to have access to this linking.

EBSCO's SmartLinks provide just a single link, to content on the EBSCO platform. When desired content is available from more than one resource or platform, 360 Link dynamically serves up multiple links for multiple providers. Users are automatically taken to the article on the platform prioritized by the library (if the library has made this customization), but they also get links to other resources where the same content is available. I understand there is some benefit to serving up multiple links to the same content, but from an end-user perspective, the extra choices may not be helpful. I doubt many users care where the full text comes from, so, depending on your perspective, EBSCO's approach may or may not be better than ProQuest's.
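To illustrate the prioritization idea in the abstract, here is a hedged sketch of how a library-configured preference order might pick one primary link and demote the rest to alternates. The provider names, URLs, and function are all hypothetical, not any vendor's actual code:

```python
# Sketch of library-configured provider prioritization: given several links
# to the same article, surface the highest-priority platform's link first
# and keep the rest as alternates. All names and URLs are illustrative.
PRIORITY = ["Publisher Site", "JSTOR", "Aggregator A"]  # library's preference order

def pick_primary(links):
    """links: list of (provider, url) tuples. Returns (primary, alternates)."""
    ranked = sorted(
        links,
        key=lambda pl: PRIORITY.index(pl[0]) if pl[0] in PRIORITY else len(PRIORITY),
    )
    return ranked[0], ranked[1:]

links = [
    ("Aggregator A", "https://agg-a.example.com/article/123"),
    ("Publisher Site", "https://pub.example.com/doi/10.1000/xyz"),
]
primary, alternates = pick_primary(links)
print(primary[0])  # Publisher Site
```

Whether the alternates should be shown at all is exactly the EBSCO-versus-ProQuest design question: one primary link is simpler for users, while the full ranked list preserves choice.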


The point here is that linking is a critical, but underdeveloped, piece of the research process. While much attention, resources, and energy have been given to the discovery piece of the process, precious little attention has been given to the delivery piece, which is just as important.

One can argue that this move away from a standards-based OpenURL approach is a step in the wrong direction, but it's pretty clear that the current user experience for linking is dismal. Close to 15 years in, OpenURL is still not as reliable as it ought to be. So the question is, do we convene another standards meeting, debate, propose, vote, implement, and hope that 5 years from now OpenURL has improved? I'd argue that we don't have that kind of time, and that it's incumbent on us to think of users first, standards second. I don't hate standards; I just like happy users more.


Remember that discovery platforms are resolver-agnostic, despite what your salesperson might say, so just because you have ExLibris' Primo or ProQuest's Summon does not mean you have to use its resolver as well.

Those who do not yet have a discovery system should pay as much attention to the linking piece of the process as to the discovery piece when investigating these systems. Test linking ruthlessly, talk to other customers, and usability-test the resolver landing page as you would the search interface. In short, don't assume a "set it and forget it" mentality when it comes to the actual delivery of full text.

For those of us who already have discovery systems in place, what should we do? Test link resolving. Usability-test the linking process. Ask questions, and consider that changing your linking infrastructure can be done; unlike changing the public-facing discovery interface, it's mostly back-end work, so it can be minimally disruptive to end users. Heck, it might even create more of those happy users we're so fond of.

Jeff Wisniewski

University of Pittsburgh

Jeff Wisniewski is web services librarian, University Library System, University of Pittsburgh.

Comments? Email the editor-in-chief.
COPYRIGHT 2015 Information Today, Inc.
No portion of this article can be reproduced without the express written permission from the copyright holder.

Article Details
Title Annotation: control-shift
Author: Wisniewski, Jeff
Publication: Online Searcher
Date: Jan 1, 2015