
Tryouts, try-nots, and rip-offs.

The internet offers a bevy of services. But as two articles from the January issues of Computers in Libraries and Searcher (along with one from the January/February issue of ONLINE) point out, nearly every web-based product can be improved. Some, especially those touted as free, are not as good as those for which a fee is required; and some for which a fee is charged are actually in the public domain.

Using Users? You'd Beta Believe It!

In the beginning of "Building a Web-Based Laboratory so Users Can Experiment With New Services" (CIL, pp. 12-14, 44-46), authors Jason Battles and Jody Combs posit that, while skilled at starting and implementing new user products and services, librarians are not as adept at taking the time to find out what patrons really want. Battles, then systems librarian at Vanderbilt University, and Combs, director of the digital library at Vanderbilt's Heard Library, worked together from 2005 to 2007 to create and introduce the Test Pilot project to the university's staff and students as a means to better research, select, and promote the library's services.

Borrowing from the concept used in Google Labs, which lets remote users test-market new services (with the understanding that the services may not be fully developed or fully functional yet), Combs proposed the idea of building a sister site for the library's website that would serve as a library lab. He asked Battles to lead the design effort. The Test Pilot project had six major goals, including showcasing projects under development and under consideration, providing a venue for beta testing and service refinement, recruiting members for focus groups, and soliciting end-user feedback. A primary aim of Battles' team was to make sure the site could be easily updated to accommodate frequently changing projects, applications, and services. Also paramount was making sure the user feedback portion of the screen could store comments and information about the user and the application in a user-friendly way. (To see what technology was chosen and why, check out the article, which will be more helpful than my trying to explain it all.)


From start to finish, including the pre-launch period when Test Pilot was presented to various library groups and committees, the process took 5 months. A year later, Battles moved on to the University of Alabama (UA) as head of its web services department, where he continues to create and test web-based applications, including the implementation of UA Libraries' Web Laboratory, a task made easier by reusing the framework from Test Pilot. Both schools share enhancements developed for their individual lab projects. And the results so far at Vanderbilt? After 16 months, the Test Pilot site has had modest success and received positive comments from campus and remote users. However, marketing the service continues to be a major problem. The challenge for both universities is to embed their lab services where users are located rather than expecting users to seek out the services.

Net Lost, Info Gained?

Arno Rueser maintains that many end users and budget holders make the false assumption that the best (often the only-needed) information is available on the free web, a belief he takes to task in "When InterNET Is InterNOT" (ONLINE, pp. 32-36). Rueser shares six basic aspects of internet bias and a few general search queries that can be used to show how a skilled librarian using other information-seeking approaches can easily outperform the typical internet searcher who has no idea how inept he or she really is.


Rueser's first point is that the internet is not international in scope, as widely assumed. He notes that according to Internet World Stats, internet penetration, while high in North America and Australia/Oceania (69% and 54%, respectively), drops precipitously in other zones and only averages 18% worldwide. These numbers have remained relatively constant for the past 5 years, which calls into question how a researcher can find any and all information on the internet when clearly many countries have limited connections at best.

Next, he contests the notion that the internet is easy to use, noting that there is a big leap between searching for and actually finding reliable, up-to-date, relevant information. Research has found that the majority of users rate themselves as expert searchers, even though few ever read search engine help pages, know how to pare down a search from thousands of results to a more manageable number, or know about (let alone know how much wealth can be uncovered in) the hidden web.

Many searchers are under the delusion that searching, the internet, and Google equate to the same thing. Whatever is needed is on the internet, and all of it can be found using Google, which eliminates the need to use other search engines or understand other search functionality. Similarly, most users believe that the internet (read Google) consists of vast and unequalled numbers of documents. Yet, the internet pales in comparison to LexisNexis and other premium content providers, which offer validated, structured, well-organized information minus the dead links, porn, and other useless material.

Rueser rounds off his internet disclaimers by shooting down the ideas that the internet is objective and anonymous. In fact, advertising heavily influences the ranking order of an average search engine. Worse yet, well-known engines in some sections of the globe only make some information available, enabling governments to censor content. As for anonymity, one of the statistics Rueser cites is that all queries a user enters on Yahoo!, AOL, and MSN are saved by the U.S. Department of Justice to track child pornography activity.

In the last portion of the article, Rueser illustrates how information pros, with the skill and knowledge of non-internet sources, such as premium content providers, conventional library holdings, and that old-fashioned tool known as the telephone, can easily outperform the average searcher. In fact, info pros can and do find superior results when looking for content ranging from full-text newspaper articles on a specific topic to chemical weapons production within a particular country. These findings can arm librarians with more than enough ammunition to fight off the erroneous mind-set of end users and budget holders alike.

Getting Wronged by Copyright

If you hear the phrase "copyright infringement," what probably comes to mind is people who intentionally use or repackage material that is under copyright without obtaining permission. However, Sidebar columnist Carol Ebbinghouse reports on a different ruse in "'Copyfraud' and Public Domain Works" (Searcher, pp. 40-54): sites that charge for material that is no longer under copyright. Ebbinghouse begins with a scenario in which a PDF of the Federalist Papers from 1877 carries a copyright notice on some sites but is also loaded on Project Gutenberg, the original source for public domain works. Clearly, something shady is going on.

From here, Ebbinghouse, a law librarian for the California Second District Court of Appeal, addresses several issues, first and foremost how sites get away with charging for public domain works (and no, it isn't legal), a tactic known as copyfraud. A major problem is that while the laws governing copyright protection are explicit, those governing material falling under public domain are not. Ebbinghouse looks at reasons why people and corporations place copyright symbols on public domain items and then centers on four basic questions pertaining to the complexities that often make copyright law confusing: What is in the public domain and what isn't? How can you tell whether something is out of copyright? Can an archive charge for access to its collection or require users to subscribe? How can copyfraud be challenged? She also provides a concise explanation of what constitutes orphan works and recommends several sites, in addition to Project Gutenberg, for verifying public domain materials.

Ebbinghouse devotes the remainder of her column to listing resources that help researchers check the validity of copyright claims and find material available free of charge. Advocacy organizations pertaining to public domain and open access issues include the Digital Library Federation, Library Copyright Alliance, and Scholarly Publishing and Academic Resources Coalition (SPARC). Ebbinghouse provides nearly 50 sites in and outside North America that offer digital and ebook collections. These sites include Alexa Internet, The English Server (EServer), Google Books, The Internet Public Library, Million Book Project, Read Print, WikiSource, and World eBook Fair. URLs and brief descriptions of all these sites can be found within the article.

Thanks to Ebbinghouse's efforts, readers should be able to avoid copyfraud and find free materials through public domain sites.


With any luck, by the time you read this, Henrietta, my 9-year-old and usually sweet-tempered feline, will have stopped hissing and glaring at the 8-month-old, inquisitive, high-octane male cat that joined our household during the Thanksgiving holiday. The little guy, Dugan, figured out quickly that Queen Henrietta did not appreciate being rushed at and that using her litter box was a definite "faux paw." Then, just when I thought the pair were moving past the hissy-fit stage (actually as I was writing this column), I heard some screeching that had nothing to do with tires. But, given some time and a little catnip (possibly for all three of us), I'm hoping they'll become tolerant roommates, if not bosom buddies. Until then, I better stock up on Band-Aids.

Lauree Padgett is Information Today, Inc.'s conference program manager.
COPYRIGHT 2008 Information Today, Inc.

Article Details
Title Annotation: In Other Words
Author: Padgett, Lauree
Publication: Information Today
Geographic Code: 1USA
Date: Jan 1, 2008
