
Expanding library horizons through use of the Internet: growth of the Internet, past and future.

While familiarity with the Internet has moved from the bailiwick of the expert cybernaut to that of the general public, many computer users--those whose Internet experience rests solely on e-mail functions--still miss out on critical and fascinating applications offered by the communications age, much as users whose computing experience rests solely on word processing miss out on the benefits of modern graphics or database programs. To the uninitiated, the Internet has become a runaway train. With rapid technological expansion, self-education has become increasingly difficult, because the body of knowledge with which novices must acquaint themselves grows at an accelerating rate.

This article is intended to give the non-Internet population a grounding in the traditional and "older" choices of the Net, as well as an introduction to the World Wide Web and the "Killer Applications" it has spawned. Generally, the article is directed to people who have system decision-making authority but not a clear idea of the capabilities of the Internet's systems, nor of what, precisely, will be needed. Basic technological awareness and computer literacy are assumed.

Traditional Net Facilities

Put simply, the Internet is a network of networks of computers. Two or more computers in close proximity form a local area network (LAN); LANs are then joined into much larger wide area networks (WANs), and so forth, such that the Internet is actually a worldwide WAN of other WANs, LANs, and individual systems. Originally born of a U.S. Defense Department project and later sponsored by the National Science Foundation (NSF), the Internet has come to provide research and data access to virtually everyone with a computer and basic telecommunications hardware and software.

Because the Internet began as a research tool--and, to a large extent, has remained thus--the features and facilities most widely used on the Net have been research-specific, allowing for the searching of computer files by users of other, remote computers or allowing for the transfer of files between Net-connected computers, for example. To librarians, these traditional capabilities are meaningful and critical, creating opportunities for exploring research facilities not directly proximate to the searching patron, and permitting immediate transfer or copying of information that would take days or weeks to obtain through processed and mailed paperwork.

The traditional facilities of the Internet (telnet, ftp, and e-mail) were--and are--not especially well suited to non-technical librarians or patrons. As technology has advanced, Internet tools have become easier to use, providing concomitantly easier access for librarians and patrons alike; even so, there are still excellent reasons to master and use the traditional Net capabilities.

Telnet

Terminal emulation is probably the most important use of the telnet protocol, which gives the user the ability to access any system connected to the network. Telnet is a simple protocol for passing terminal-specific codes as data between client (local) and server (remote) nodes on the network. Because the telnet protocol standardizes the way various systems communicate, it is used to support both terminal-to-terminal and application-to-application traffic between systems. A subset of the telnet protocol formed the basis for the file transfer protocol and mail transfer protocol.
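
As a minimal sketch of the kind of raw client/server exchange that telnet standardizes, the following Python fragment opens a connection to a hypothetical telnet-accessible catalog and reads its login banner; a full telnet client would also negotiate terminal options.

```python
# A minimal sketch of the raw client/server connection that telnet
# standardizes: open a TCP socket to a (hypothetical) telnet-accessible
# catalog on the standard telnet port and read its login banner.  A real
# telnet client would also negotiate terminal options.
import socket

HOST = "opac.example.edu"                     # hypothetical telnet-accessible OPAC
PORT = 23                                     # standard telnet port

with socket.create_connection((HOST, PORT), timeout=10) as conn:
    banner = conn.recv(1024)                  # read whatever the server sends first
    print(banner.decode("ascii", errors="replace"))
```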

Even though there is growing excitement about the World Wide Web and other comparatively advanced capabilities, it should be noted that many libraries currently on the Net do not have gopher or Web servers--that is to say, they cannot be visited by users traversing the Web--but, rather, provide access only through telnet connections. In other words, it is quite likely that information needed by a researcher using the Net would be stored on a system that isn't on the Web but that does allow telnet access for remote users.

After all, upgrading a site often requires significant resources, including time and money spent in reconfiguring the system, as well as new hardware in some cases. Plus, the Web (or gophers, for that matter) may not allow for using applications at remote sites. For example, if remote patrons (or libraries) wanted to access a library's OPAC system, those patrons would have to telnet to the library; they wouldn't find the OPAC on the Web. (Of course, that situation will be changing now that Web-compliant OPACs are being developed by library software vendors.)

Still, the bottom line is that the Web and telnet clients are complementary, and most Web clients even support telnet agents for those connections that require telnet only.

File Transfer Protocol (ftp)

For occasions when there is a need to "bring home" entire files of information found on a remote system, network engineers very early on developed a process known as file transfer protocol (ftp). If an investigator on a particular computer wanted a permanent copy of a data file available at a remote site, he or she would use ftp to bring a copy of the file back to the home site.

Created to facilitate transfers of all types of files, from software applications to simple text, ftp is capable of operating in two different modes. The standard (ASCII) mode allows for the transfer of text files, such as letters, reports, or other textual documents, while binary mode allows for the transfer of all other types, such as executables, images, and packed data files.
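
As a rough illustration of the two modes, the following Python sketch uses the standard ftplib module to retrieve one file each way; the host, login, and file names are hypothetical.

```python
# A sketch of the two ftp transfer modes using Python's standard ftplib.
# The host, login, and file names are hypothetical.
from ftplib import FTP

with FTP("ftp.example.edu") as ftp:
    ftp.login()                               # anonymous login
    # ASCII (text) mode: line-oriented retrieval of a textual document
    with open("report.txt", "w") as out:
        ftp.retrlines("RETR pub/report.txt", lambda line: out.write(line + "\n"))
    # Binary mode: byte-for-byte retrieval of an image or executable
    with open("scan.gif", "wb") as out:
        ftp.retrbinary("RETR pub/scan.gif", out.write)
```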

One of the more attractive features of ftp is that the user interface implementation was made fairly uniform across all types of systems. To a user at a PC, a UNIX machine, a VMS system, or an IBM mainframe, ftp looks very much the same, and its commands are very similar. Moreover, while a Macintosh and an IBM-compatible PC normally store even simple text differently, ftp's ASCII mode automatically effects a translation between the systems' text conventions.

Overall, ftp fulfills its obligation exceedingly well and provides users with an important way to transfer information and files throughout the Internet. But, due to the complexity of the process and the resources (hard drive space) necessary for successful use of ftp, it is likely that ftp will be used almost solely by technically adept library staff.

And, while ftp is needed to get information at certain sites with gopher and Web servers, ftp is simply much more likely to be needed and used at research libraries than at public ones. Again, most Web clients today support an ftp agent for those circumstances in which it is required.

E-Mail

The best known and--until the advent of gopher and Web protocols--the most-used of all Internet applications, e-mail has become a highly popular method of communication between users at remote sites. Internet mail was made possible through the utilization of simple mail transfer protocol (SMTP), which standardized the transmission of messages between machines of dissimilar type.
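
The exchange that SMTP standardizes can be sketched, for illustration only, with Python's standard smtplib; the mail host and addresses below are hypothetical.

```python
# A sketch of the command/response exchange that SMTP standardizes, using
# Python's standard smtplib.  The mail host and addresses are hypothetical.
import smtplib

message = (
    "From: librarian@example.edu\r\n"
    "To: patron@example.org\r\n"
    "Subject: Interlibrary loan update\r\n"
    "\r\n"
    "Your requested volume has arrived.\r\n"
)

with smtplib.SMTP("mail.example.edu", 25) as smtp:
    smtp.ehlo()                               # identify the sending host
    smtp.mail("librarian@example.edu")        # SMTP MAIL FROM command
    smtp.rcpt("patron@example.org")           # SMTP RCPT TO command
    smtp.data(message)                        # SMTP DATA command carries the message
```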

End-user mail application programs, or "user agents," provide the interface that enables a user to prepare and send mail, to identify recipients, and to view or save incoming messages. Varying from the crude and minimal to the sophisticated and highly capable, user agents are at times able to handle even embedded multimedia objects through the use of the MIME (multipurpose Internet mail extensions) standard. In other words, through the use of MIME-compliant mail agents, graphic and sound objects can be included in transmissions of electronic mail. It is even possible to embed executable programs within MIME-compliant messages.
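
As a rough sketch of what a MIME-compliant user agent assembles behind the scenes, the following Python fragment builds a message with an embedded image using the standard email library; the addresses and image file are hypothetical.

```python
# A sketch of what a MIME-compliant user agent assembles: a message with
# an embedded image, built with Python's standard email library.  The
# addresses and image file are hypothetical; sending would use smtplib.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "librarian@example.edu"
msg["To"] = "patron@example.org"
msg["Subject"] = "Scanned map from the local history collection"
msg.set_content("The scan you requested is attached.")

with open("map.gif", "rb") as f:              # hypothetical image file
    msg.add_attachment(f.read(), maintype="image", subtype="gif",
                       filename="map.gif")

print(msg.as_string()[:400])                  # shows the MIME headers and boundaries
```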

Taken together, telnet, ftp, and e-mail provide meaningful and extensive access to the depth of information available on the Internet. These abilities--to access remote facilities, to obtain copies of relevant information located at near or distant sites, and to trade communications--offer widespread access to resources for serious researchers. But their use relies heavily on a level of relative competence and on a knowledge of where on the Internet critical information is stored.

As internetworked systems began to open up to the general public, that same non-technical public began to see telnet, ftp, and e-mail as inadequate, too forbidding, and too difficult. These older applications needed to be replaced or enhanced by some new method of resource-finding accessible to those with limited computer expertise. In the early 1990s, gopher seemed to hold this promise. With the advent of gopher (and later the World Wide Web), the ease of finding information caught up to the ease of manipulating information. It is due to the development of these and other "killer" applications that the enormous Internet has made its vast resources readily available to the general public.

Gopher

The first successful application to gain wide acceptance among the Internet's general public, gopher was born from research at the University of Minnesota. It soon became recognized as a technological breakthrough in basic Internet protocol, although its design was based on the ftp model with the addition of a refined user interface. Gopher clients expressed directories as descriptive menus and allowed users to display or retrieve (and view the hidden information behind) menu items, without ever needing to know the Internet address of the desired resource or the method (telnet, ftp) used to retrieve it.

When a gopher client is started, it simply "asks" the remote gopher server for a menu of information files it can offer the user. The server "answers" the request by sending the client a menu of files--as well as a hidden packet of information for each menu item presented. The hidden information contains all the required technical information to enable the client to send a properly formatted request for information represented by the menu items. For instance, the hidden item information would include the address of the server with the item information, the directory path on which the information can be found, the name of the file containing the information, the type of information contained in the file, and the identification of the port with which to contact the remote server.
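
This exchange can be sketched with a few lines of Python using raw sockets; the gopher host below is hypothetical, and the tab-separated fields of each returned menu line carry the "hidden" information described above.

```python
# A sketch of the gopher exchange described above, using raw sockets.
# The host is hypothetical.  The client sends a selector (empty for the
# root menu); each menu line returned carries the "hidden" information
# as tab-separated fields: display string, selector, host, and port.
import socket

HOST, PORT = "gopher.example.edu", 70         # 70 is the standard gopher port

with socket.create_connection((HOST, PORT), timeout=10) as conn:
    conn.sendall(b"\r\n")                     # an empty selector requests the root menu
    data = b""
    while chunk := conn.recv(4096):
        data += chunk

for line in data.decode("latin-1").splitlines():
    if line == ".":                           # a lone period ends the listing
        break
    item_type, fields = line[:1], line[1:].split("\t")
    if len(fields) >= 4:
        display, selector, host, port = fields[:4]
        print(item_type, display, "->", host, port, selector)
```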

Client/Server Implications

In many respects, gopher helped to popularize the client/server computing model, which has been touted by academics and computer specialists as an efficient model for the deployment of expensive computing resources. The efficiencies arise because the client (the computer on the receiving end, where a user is requesting information) performs some of the information-processing functions rather than being merely a "dumb" terminal, as in traditional host-based computing. This reduces the workload on the server (the computer that "serves" the requested information). The result is that the computer on the serving end requires less powerful (and less expensive) equipment, while the computer on the receiving end--typically a PC--more fully utilizes the capacity of today's more powerful desktop computers.

From the standpoint of this client/server model, it is important to note that once the server has handed the client the "requested" menu, there is no reason to maintain contact or connection between the server and client. The connection is dropped because the hidden information transferred to the client for each menu item contains all the data necessary for the client to independently make a new connection and issue another unique request to any gopher system in the world--to "ask" for the information behind any menu item. Meanwhile, the server is free to answer connections from other gopher clients. Gopher also implemented the concept of bookmarks, which enable the user to save the location and hidden information of frequently visited or particularly interesting sites for simple and immediate return without having to renegotiate a search for the desired gopher menu.

World Wide Web (WWW, Web)

While gopher successfully accomplished Internet searches for information through the use of an intuitive, menu-oriented tool, it was limited to serving strictly textual information. Recognizing the need to access considerably broader types of information (such as images and sounds) and realizing the desire to have menu items appear with a juxtaposed description of each item, researchers continued the search for a "Holy Grail" or "Killer Application" for the Internet, which would enable serving and viewing of data in all formats.

The quest led to the rediscovery of an "old" technology: the http (Web) protocol and the httpd server software developed at CERN in Switzerland. This rediscovery was made possible in part through the work being done at one of the supercomputing centers at the University of Illinois (NCSA) with its development of the Mosaic Web client, or browser.

On the surface, the World Wide Web appears to be only a variation on the gopher scheme with less conventional methods of implementing menus. However, the Web is a considerably more flexible "hypertext"-based model allowing cross-references between related resources. Web browsers (clients) also allow multimedia information to be readily displayed for the user, often with the aid of helper programs. And, unlike gopher, the Web can act as a read/write resource, because it can collect information from the user in addition to delivering information to the user.

When the librarian or patron selects information to be displayed (called a "page" on the Web), the Web client issues a standard uniform resource locator (URL) request for the page of interest. Web browsers can typically issue URL requests asking for information in one of several formats: hypertext format (http://www.anywhere.edu), telnet format (telnet://anywhere.edu), ftp format (ftp://anywhere.edu), file format (file:///c|/), and even gopher format (gopher://gophersite.anywhere.edu), among others. A current description of specifications for all types of URLs can be found on the Internet at http://www.ncsa.uiuc.edu/General/Internet/WWW/HTMLPrimer.html.
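
As an illustration of how a browser decomposes such addresses, the following Python sketch applies the standard urllib.parse module to the hypothetical hosts used above.

```python
# A sketch of how a browser decomposes the URL formats listed above into
# scheme, host, and path, using Python's standard urllib.parse.  The
# hosts are the hypothetical examples from the text.
from urllib.parse import urlparse

examples = [
    "http://www.anywhere.edu/index.html",
    "telnet://anywhere.edu",
    "ftp://anywhere.edu/pub/readme.txt",
    "gopher://gophersite.anywhere.edu",
]

for url in examples:
    parts = urlparse(url)
    print(f"{parts.scheme:8} host={parts.netloc:30} path={parts.path or '/'}")
```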

The development of Web clients--such as the University of Illinois' Mosaic, Cornell's Cello, the University of Kansas' Lynx, and the current crop of commercial browsers such as Netscape's Navigator, Spry's Air Mosaic, and Navisoft's InternetWorks--has made possible the Web's remarkable success. The cornerstone of all of these Web browsers is their capability of presenting users with multimedia pages of information based predominantly on hypertext markup language (HTML) files.

An HTML file is an extremely simple ASCII (text) file with embedded text codes or "tags." These tags are part of a standard collection of defined HTML tags that are specifically interpreted by the browser on the receiving end as commands to present information in predefined formats. Such items as titles, headers, lists, images, and links to other documents, for example, all have unique tags that are embedded in the HTML text page. Embedded references (special HTML tags with an anchor and a URL reference) to other pages, documents, or multimedia images are called "links." When a user encounters a link while browsing an HTML document, the link will be clearly highlighted in some manner to signify to the user that the link is "selectable." The user may elect to pursue the link or may ignore the link and continue to browse.
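
To illustrate how a browser recognizes tags and links, the following Python sketch feeds a tiny, hypothetical HTML page to the standard html.parser module and reports each anchor tag's URL reference.

```python
# A sketch of how a browser recognizes HTML tags and links: a tiny,
# hypothetical HTML page is fed to Python's standard html.parser, and
# every anchor tag's URL reference is reported.
from html.parser import HTMLParser

PAGE = """
<html><head><title>Library Home Page</title></head>
<body>
<h1>Welcome</h1>
<ul>
  <li><a href="http://www.anywhere.edu/opac.html">Search the catalog</a></li>
  <li><a href="gopher://gophersite.anywhere.edu">Campus gopher</a></li>
</ul>
<img src="logo.gif" alt="Library logo">
</body></html>
"""

class LinkLister(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag == "a":                        # an anchor tag marks a link
            for name, value in attrs:
                if name == "href":
                    print("link ->", value)

LinkLister().feed(PAGE)
```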

In some respects, URLs are similar to the hidden information packets presented by gopher; both methods possess complete and unique references to enable any browser to locate information anywhere on the Internet. But the Web is distinct from gopher in a number of ways; chief among them is the ability of Web links to write information as well as to read it. The most visible example of this was made possible by the development of the HTML 2 standard, through which a Web server could present the user with a "fill-out form" and gather (write to the server) any user-supplied information. This feature enables Web publishers to present registration forms, order forms, survey forms, etc., and receive user input electronically over the Internet.
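
What happens when a user submits such a form can be sketched from the client side in Python; the URL and field names below are hypothetical, and the browser's job is simply to encode the field values and write them to the server.

```python
# A sketch of what happens when a fill-out form is submitted: the browser
# encodes the field values and writes them to the server in a POST
# request.  The URL and field names below are hypothetical.
from urllib.parse import urlencode
from urllib.request import Request, urlopen

fields = {"name": "A. Patron", "card_number": "000123", "request": "renew"}
body = urlencode(fields).encode("ascii")

req = Request("http://www.anywhere.edu/cgi-bin/renew",
              data=body,                      # supplying data makes this a POST
              headers={"Content-Type": "application/x-www-form-urlencoded"})

with urlopen(req, timeout=10) as response:
    print(response.status, response.read(200))
```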

Web Security

Since a Web server is capable of capturing user input over Internet connections through capabilities like fill-out forms, many organizations have become keenly interested in doing business over the Internet. The problem that immediately crops up when certain business transactions--like banking or credit card purchases--are considered is that of securing the information transferred. While this has been a serious challenge for Web software authors, several companies are now offering secure Web servers (s-httpd) and Web browsers to ensure the security of business transactions. For example, using the public-key encryption technology licensed from RSA Data Security, Inc., Netscape Communications Corp. is now providing secure links between their Web browsers and their secure servers as a vehicle to build Internet commerce. Using this technology, users will be able to send coded order forms and credit card information over the Net. The Web server will be able to authenticate the identity of the user and conduct the transactions in complete security. For libraries, fines could be collected via patron credit cards; vendor payment transactions, community service business transactions, and more could be accomplished electronically with complete security.
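
The client side of such a secure transaction can be sketched in Python; the URL and payment fields below are hypothetical, and the standard ssl library handles the encryption and the verification of the server's certificate.

```python
# A sketch of the client side of a secure Web transaction: the connection
# is encrypted, and the server's certificate is verified against trusted
# authorities before any sensitive form data is sent.  The URL and fields
# are hypothetical.
import ssl
from urllib.parse import urlencode
from urllib.request import urlopen

context = ssl.create_default_context()        # verifies the server's certificate
payment = urlencode({"card_number": "0000-0000-0000-0000",
                     "amount": "2.50"}).encode("ascii")

with urlopen("https://www.anywhere.edu/cgi-bin/pay-fine",
             data=payment, context=context, timeout=10) as response:
    print(response.status)
```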

A Wide Open Future on the Internet

In the information, research, and archival business, libraries should be carefully schooled in the scope and potential of Internet applications, as the growth of Net capabilities provides libraries with an exponentially expanding array of powerful tools. In fact, as new and improved capabilities arise, libraries should expect more intuitive and easy-to-use applications. Vast improvements in gopher technology are expected, as are powerful Web updates, including the HTML 3 standard and its additions to the HTML language.

Browsers such as Netscape are still evolving, and California-based Sun Microsystems has released HotJava, a browser with the ability to "learn" on its own how to display information for which there is no preconfigured helper program. Also on the horizon is Arena, an HTML 3-compliant browser developed by the designers of the original Web at CERN.

In all, the future promises an interesting ride for those in the business of providing--or seeking--information electronically. For libraries, this recent progress offers the perfect chance to take advantage of the cutting edge in global network technologies. For patrons, access to Internet applications will, quite literally, throw open the doors to the world.

Jim English (english@gaylord.com) is manager of "CyberOdyssey," the Internet Engineering Consulting Services division of Gaylord Information Systems (GIS) in Syracuse, New York. He was named 1994 Central New York Computer Professional of the Year. Josh Margulies (margulies@gaylord.com) is marketing communications specialist for GIS and is the author/editor of works appearing in New York's Newsday and various magazines.

RELATED ARTICLE: Internet Search Engines

Gopher servers point users to other gophers; the eventual aim is to "track down" information, but gopher works best when the user has at least some idea of where the information resides. Enter the Internet search engines. If using gopher is like wandering through library stacks, then using Archie, VERONICA, and JUGHEAD is like searching through a card catalog. Archie, VERONICA, and JUGHEAD were developed to catalog the resources of the Internet. Archie keeps an updated list of files available for ftp transfer; Archie servers are often found on gopher servers as gopher items and can be used to search ftp/Archie databases by keyword to find files containing specific information.

VERONICA performs similarly, but not identically. Like Archie, VERONICA maintains a database, but whereas Archie's database contains a list of ftp-available files, VERONICA maintains a list of gopher items. A search of VERONICA's database, then, provides not just files, but directories as well, or even lists of remote login opportunities. Most impressive, VERONICA can create a menu of gopher-accessible desired items, so that, through a VERONICA search, a user can create a personalized "table of contents."
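
Mechanically, a keyword search of this kind is itself just a gopher request: the search item's selector string, a tab character, and the search words. The following Python sketch illustrates the idea against a hypothetical index server.

```python
# A sketch of a keyword search against a gopher index server, the same
# mechanism a VERONICA-style search uses: the request is the search
# item's selector string, a tab character, and the search words.  The
# host and selector here are hypothetical.
import socket

HOST, PORT = "veronica.example.edu", 70
SELECTOR = ""                                 # hypothetical search-item selector
QUERY = "library automation"

with socket.create_connection((HOST, PORT), timeout=10) as conn:
    conn.sendall(f"{SELECTOR}\t{QUERY}\r\n".encode("ascii"))
    results = b""
    while chunk := conn.recv(4096):
        results += chunk

print(results.decode("latin-1")[:500])        # the reply is an ordinary gopher menu
```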

JUGHEAD mimics VERONICA in function but not in scope; JUGHEAD maintains an internal database of an organization's own resources, as opposed to VERONICA's archiving of the whole Internet gopherspace. So, one could think of JUGHEAD as a scaled-down version of VERONICA.

Since Archie, VERONICA, and JUGHEAD proved to be such valuable tools for searching ftp archives and gopher sites, it was inevitable that similar searching tools would be developed for the World Wide Web, which has undergone exponential growth since the introduction of the Mosaic Web browser a few years ago. However, developing search engines for all of the World Wide Web's pages has proven to be a much more complex problem than building the predecessor searching tools for ftp or gopher, because there is no comprehensive registry of sites (as there is with gopher) or any particular, standard way of documenting the contents of a site (as there is with ftp/Archie). Thus, what has resulted for searching the Web is a number of services to which Web users can connect, each with an independently developed search engine and, therefore, a different approach to searching.

At this time, the authors of almost all of the services available would admit that their searching tool is not comprehensive, but there are some that are very good and have the potential to become comprehensive. While it is not within the scope of this article to present a complete list of all Web search engines, here are a few of the more popular ones:

* World Wide Web Worm: http://www.cs.colorado.edu/home/mcbryan/wwww.html

* Web Crawler: http://www.webcrawler.com/

* Internet Meta-Index (actually a collection of search engines): http://www.ncsa.uiuc.edu/SDG/Software/Mosaic/Demo/metaindex.html

* Galaxy: http://galaxy.einet.net/search.html

* Netscape Search: http://home.netscape.com/home/internet-search.html

It is worth noting that, at the 1994 International W3 Conference in Geneva, the World Wide Web Worm (WWWW) received the award for "Best Navigational Aid for 1994." A complete list of nominees can be found on the State University of New York at Buffalo system at http://wings.buffalo.edu/contest/awards/navigate.html. The WWWW is described as a robot that indexes titles, URLs, and reference links--sounds as though it has good information science skills!
